
trusted-ai

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Python
5478
13 hours ago
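ART's core idea is the evasion attack: perturbing an input just enough to flip a model's prediction. The following is a minimal NumPy sketch of the Fast Gradient Sign Method, one of the attacks the library implements, applied to a hand-rolled logistic-regression model. It does not use ART's own API; the model and values are illustrative.

```python
import numpy as np

def fgsm_linear(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    Returns x_adv = x + eps * sign(dL/dx), stepping in the direction
    that increases the loss. For sigmoid output p = s(w.x + b) with
    cross-entropy loss, the input gradient is (p - y) * w.
    """
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))
    grad = (p - y) * w                 # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad)

# Toy model: classifies by the sign of the first feature.
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.3, 1.0])               # clean point, predicted class 1 (p > 0.5)
x_adv = fgsm_linear(x, y=1.0, w=w, b=b, eps=0.5)
p_adv = 1.0 / (1.0 + np.exp(-(np.dot(w, x_adv) + b)))   # prediction flips below 0.5
```

A small L-infinity perturbation (here 0.5 on one feature) is enough to push the point across the decision boundary, which is exactly the failure mode ART's red-team attacks probe and its blue-team defenses mitigate.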

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

Python
2636
8 months ago
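One of the fairness metrics this toolkit covers is statistical parity difference: the gap in favorable-outcome rates between unprivileged and privileged groups. A minimal, library-independent sketch (the data is made up for illustration):

```python
def statistical_parity_difference(y_pred, group):
    """P(favorable | unprivileged) - P(favorable | privileged).

    y_pred: 0/1 predictions (1 = favorable outcome)
    group:  0/1 flags (1 = privileged group)
    Zero means both groups receive the favorable outcome at the same
    rate; negative values disadvantage the unprivileged group.
    """
    priv = [y for y, g in zip(y_pred, group) if g == 1]
    unpriv = [y for y, g in zip(y_pred, group) if g == 0]
    rate = lambda ys: sum(ys) / len(ys)
    return rate(unpriv) - rate(priv)

preds = [1, 1, 0, 1, 0, 0, 0, 1]   # favorable rate: 0.75 privileged, 0.25 unprivileged
grp   = [1, 1, 1, 1, 0, 0, 0, 0]
spd = statistical_parity_difference(preds, grp)   # -0.5: strong disparity
```

Mitigation algorithms in such a toolkit then reweight or transform the data (or adjust the model) to drive metrics like this toward zero.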

Interpretability and explainability of data and machine learning models

Python
1719
6 months ago
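A common model-agnostic explainability technique in toolkits like this is permutation importance: shuffle one feature's column and measure how much accuracy drops. This is a self-contained sketch with a toy model, not the toolkit's own API:

```python
import random

def permutation_importance(predict, X, y, feature, n_repeats=10, seed=0):
    """Average accuracy drop when one feature's column is shuffled.

    predict: callable mapping a list of rows to predicted labels
    X, y:    dataset rows and true labels
    feature: index of the column to permute
    Larger drops mean the model relies more on that feature.
    """
    rng = random.Random(seed)
    acc = lambda yp: sum(a == b for a, b in zip(yp, y)) / len(y)
    base = acc(predict(X))
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
        drops.append(base - acc(predict(Xp)))
    return sum(drops) / n_repeats

# Toy model that only looks at feature 0, so feature 1 should score zero.
model = lambda rows: [1 if r[0] > 0 else 0 for r in rows]
X = [[1, 5], [-1, 5], [2, -3], [-2, -3]]
y = [1, 0, 1, 0]
imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
```

The result is a per-feature attribution that needs no access to model internals, which is why it works for black-box explanations.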

Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions.

Python
265
3 months ago
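The simplest way to quantify uncertainty in a classifier's prediction is the entropy of its predictive distribution, one of the quantities toolkits in this space report. A library-independent sketch:

```python
import math

def predictive_entropy(probs):
    """Entropy of a predictive distribution, in nats.

    Zero when all probability mass sits on one class (the model is
    certain); maximal (log of the class count) when the distribution
    is uniform (the model is maximally unsure).
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = predictive_entropy([0.99, 0.01])   # near 0
uncertain = predictive_entropy([0.5, 0.5])     # log(2), the binary maximum
```

Downstream, such scores let a system defer low-confidence predictions to a human instead of acting on them, which is the "communicate and use" part of the toolkit's pitch.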

Paddle with Decentralized Trust based on Xuperchain

Go
89
1 year ago

Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks

Python
43
4 years ago

A self-hosted, privacy-focused RAG (Retrieval-Augmented Generation) interface for intelligent document interaction. Turn any document into a knowledge base you can chat with.

Python
5
4 months ago
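The "retrieval" half of a RAG pipeline ranks document chunks by similarity to the user's question and feeds the best matches to a language model. A minimal bag-of-words sketch of that ranking step (a real system would use learned embeddings; the documents here are illustrative):

```python
import math
from collections import Counter

def retrieve(query, docs, k=1):
    """Rank documents by cosine similarity of term-count vectors."""
    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(query)
    return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

docs = [
    "the cat sat on the mat",
    "stock prices rose sharply today",
    "a cat and a dog played outside",
]
top = retrieve("where is the cat", docs, k=1)
```

The retrieved chunks are then placed in the model's prompt, so answers stay grounded in the user's own documents rather than the model's training data.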

Hands-on workshop material for evaluating the performance, fairness, and robustness of models

4
6 years ago

Security protocols for estimating the adversarial robustness of machine learning models on both tabular and image datasets. The package implements a set of evasion attacks based on metaheuristic optimization algorithms and complex cost functions, giving reliable results for tabular problems.

Jupyter Notebook
3
6 months ago
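Metaheuristic evasion attacks need no gradients: they repeatedly propose perturbations within a budget and query the model until the decision flips, which suits tabular models exposed only through predictions. A toy random-search sketch of that idea (not this package's API; model and budget are illustrative):

```python
import random

def random_search_evasion(predict, x, budget, step, steps=200, seed=0):
    """Query-only evasion via random search, a simple metaheuristic.

    Proposes random perturbations clamped to an L-infinity budget
    around the original point x and returns the first candidate that
    flips the classifier's decision, or None if the budget runs out.
    """
    rng = random.Random(seed)
    target = 1 - predict(x)            # we want the opposite label
    for _ in range(steps):
        cand = [
            min(xi + budget, max(xi - budget, xi + rng.uniform(-step, step)))
            for xi in x
        ]
        if predict(cand) == target:
            return cand
    return None

# Toy tabular model: class 1 iff the feature sum exceeds 1.
model = lambda row: 1 if sum(row) > 1.0 else 0
x = [0.6, 0.6]                          # classified 1 (sum = 1.2)
adv = random_search_evasion(model, x, budget=0.3, step=0.3)
```

Real attacks of this family replace blind random proposals with smarter search (genetic algorithms, particle swarms) and cost functions that also keep the perturbed row plausible for the tabular domain.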