trusted-ai

Adversarial Robustness Toolbox (ART): a Python library for machine learning security, covering evasion, poisoning, extraction, and inference attacks, for red and blue teams.

Python · 5568 stars · updated 4 days ago
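To make "evasion" concrete: the classic example is the Fast Gradient Sign Method (FGSM), one of the attacks ART implements. Below is a minimal stdlib-only sketch of the idea against a hand-rolled logistic-regression model — illustrative only, not ART's API.

```python
import math

def predict(w, b, x):
    """Probability of class 1 under a logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, b, x, y, eps):
    """FGSM evasion: step each feature by eps in the direction that
    increases the loss. For logistic loss, d(loss)/dx_i = (p - y) * w_i."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * ((g > 0) - (g < 0)) for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1                    # correctly classified as class 1
x_adv = fgsm(w, b, x, y, eps=0.8)
print(predict(w, b, x))                 # confident on the clean input
print(predict(w, b, x_adv))             # confidence collapses after the attack
```

With ART the same experiment wraps a trained classifier in an estimator and calls an attack's `generate` method; the toolbox adds many attacks, defenses, and framework integrations on top of this basic gradient-sign idea.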

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

Python · 2662 stars · updated 10 months ago
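A representative dataset fairness metric of the kind such toolkits report is the disparate impact ratio. Here is a minimal stdlib sketch of that one metric — the function name and data are illustrative, not the toolkit's API.

```python
def disparate_impact(labels, groups, favorable=1, privileged=1):
    """P(favorable | unprivileged) / P(favorable | privileged).
    Values near 1.0 suggest parity; the common '80% rule' flags < 0.8."""
    def rate(is_privileged):
        outcomes = [y for y, g in zip(labels, groups)
                    if (g == privileged) == is_privileged]
        return sum(y == favorable for y in outcomes) / len(outcomes)
    return rate(False) / rate(True)

# Hypothetical toy data: 1 = favorable outcome, group 1 = privileged.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(round(disparate_impact(labels, groups), 3))   # 0.333, well below 0.8
```

A full toolkit pairs many such metrics with explanations and with mitigation algorithms (reweighing, adversarial debiasing, post-processing) applied before, during, or after training.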

Interpretability and explainability of data and machine learning models

Python · 1738 stars · updated 7 months ago

Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions.

Python · 266 stars · updated 17 days ago
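One simple way to estimate the uncertainty such a toolkit communicates is ensemble disagreement: run several models and report the spread of their predictions alongside the mean. A minimal stdlib sketch of that idea — not UQ360's API; the models here are hypothetical.

```python
import statistics

def ensemble_prediction(models, x):
    """Mean prediction plus a simple uncertainty estimate (sample stdev)."""
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.stdev(preds)

# Hypothetical ensemble: three linear models with slightly different weights,
# standing in for models trained on resampled data.
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]
mean, spread = ensemble_prediction(models, 2.0)
print(mean, spread)   # prediction near 2.0, with the stdev as its error bar
```

UQ360 goes further: calibrated intervals, uncertainty metrics, and guidance on communicating the numbers to end users, but the estimate/communicate/use loop starts from spreads like this.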

Paddle with Decentralized Trust based on Xuperchain

Go · 88 stars · updated 1 year ago

Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks

Python · 44 stars · updated 4 years ago

A self-hosted, privacy-focused RAG (Retrieval-Augmented Generation) interface for intelligent document interaction. Turn any document into a knowledge base you can chat with.

Python · 5 stars · updated 5 months ago

Hands-on workshop material for evaluating the performance, fairness, and robustness of models.

4 stars · updated 6 years ago

Security protocols for estimating the adversarial robustness of machine learning models on both tabular and image datasets. The package implements a set of evasion attacks based on metaheuristic optimization algorithms and complex cost functions, giving reliable results on tabular problems.

Jupyter Notebook · 3 stars · updated 8 months ago