adversarial-attacks
TOTALLY HARMLESS LIBERATION PROMPTS FOR GOOD LIL AI'S!
Adversary Emulation Framework
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
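As a quick illustration of the evasion side, a minimal sketch of ART's documented FastGradientMethod interface; the model, shapes, and data below are placeholders:

```python
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder model: any trained torch.nn.Module classifier works here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Wrap the model in an ART estimator so attacks can query it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft FGSM adversarial examples from a (placeholder) batch;
# ART attacks consume and return numpy arrays.
x = torch.rand(8, 1, 28, 28).numpy()
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)
```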
Data augmentation for NLP
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
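A minimal sketch of running one of TextAttack's built-in attack recipes; the checkpoint and dataset names are illustrative, and the flow follows the project's documented quickstart:

```python
import transformers
from textattack import Attacker
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Illustrative victim: any sequence-classification checkpoint works.
name = "textattack/bert-base-uncased-SST-2"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler recipe and attack validation examples.
attack = TextFoolerJin2019.build(wrapper)
dataset = HuggingFaceDataset("glue", "sst2", split="validation")
Attacker(attack, dataset).attack_dataset()
```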
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
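A minimal sketch of the Foolbox 3 attack interface with the PyTorch backend; the model and batch are placeholders:

```python
import torch
import torch.nn as nn
import foolbox as fb

# Placeholder classifier; substitute any trained model with logit outputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

# Placeholder batch of images in [0, 1] and integer labels.
images = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))

# Run L-inf PGD at a single perturbation budget; is_adv flags successes.
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=8 / 255)
```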
A unified evaluation framework for large language models
PyTorch implementation of adversarial attacks [torchattacks]
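A minimal sketch of the torchattacks interface, following its documented PGD usage; the model and data are placeholders:

```python
import torch
import torch.nn as nn
import torchattacks

# Placeholder classifier; substitute any trained torch.nn.Module.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
images = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))

# L-inf PGD: eps is the perturbation budget, alpha the step size.
atk = torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10)
adv_images = atk(images, labels)
```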
A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
Must-read Papers on Textual Adversarial Attack and Defense
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox also provides a command-line tool that generates adversarial examples with zero coding.
A Toolbox for Adversarial Robustness Research
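A minimal sketch of AdverTorch's LinfPGDAttack, mirroring the parameters in the project README; the model and batch are placeholders:

```python
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Placeholder classifier and batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))

# Untargeted L-inf PGD with random initialization.
adversary = LinfPGDAttack(
    model, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3, nb_iter=40, eps_iter=0.01,
    rand_init=True, clip_min=0.0, clip_max=1.0, targeted=False,
)
adv_x = adversary.perturb(x, y)
```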
A PyTorch adversarial library for attack and defense methods on images and graphs
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes how label feedback can be incorporated with ensemble and tree-based detectors. Includes adversarial attacks with a Graph Convolutional Network.
A curated list of adversarial attacks and defenses papers on graph-structured data.
An Open-Source Package for Textual Adversarial Attack.
This repository is a compilation of APT simulations targeting many vital sectors, both private and governmental. The simulations include custom-written tools, C2 servers, backdoors, exploitation techniques, stagers, bootloaders, and many other tools and TTPs that attackers may have used in actual attacks.
Code accompanying "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
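The attack ensemble from this paper is released as AutoAttack, which exposes a small interface; a minimal sketch with a placeholder model and batch:

```python
import torch
import torch.nn as nn
from autoattack import AutoAttack

# Placeholder classifier and a small evaluation batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x_test = torch.rand(16, 3, 32, 32)
y_test = torch.randint(0, 10, (16,))

# Run the standard ensemble of four parameter-free attacks at eps = 8/255.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=16)
```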
Raising the Cost of Malicious AI-Powered Image Editing
A Harder ImageNet Test Set (CVPR 2021)