Repository navigation
ai-red-team
- Website
- Wikipedia
🐢 Open-Source Evaluation & Testing library for LLM Agents
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
An offensive security toolset for Microsoft 365 focused on Microsoft Copilot, Copilot Studio and Power Platform
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
AI Red Teaming Range
AspGoat is an intentionally vulnerable ASP.NET Core application for learning and practicing web application security.
🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.
This is my prompts for Lakera's Gandalf challenges
🛡️ Automate security scans for JavaScript/Node.js vulnerabilities in GitHub repos, analyze package usage, and generate pull requests with fixes.
An Offensive Security Blog
Hackaprompt v1.0 AIRT Agents