ai-security
This repository is primarily maintained by Omar Santos (@santosomar) and includes thousands of resources related to ethical hacking, bug bounties, digital forensics and incident response (DFIR), artificial intelligence security, vulnerability research, exploit development, reverse engineering, and more.
🐢 Open-Source Evaluation & Testing library for LLM Agents
Enterprise-grade AI coding assistant supporting private, offline deployment; compatible with third-party and locally hosted large language models; includes an enterprise management panel and code-security features.
A curated list of useful resources that cover Offensive AI.
A list of backdoor learning resources
ToolHive makes deploying MCP servers easy, secure and fun
A security scanner for custom LLM applications
Reconmap is a collaboration-first security operations platform for infosec teams and MSSPs, enabling end-to-end engagement management, from reconnaissance through execution and reporting. With built-in command automation, output parsing, and AI-assisted summaries, it delivers faster, more structured, and higher-quality security assessments.
A security scanner for your LLM agentic workflows
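Scanners like the ones above typically layer many detectors; as a hypothetical illustration (not any listed project's API), one of the simplest building blocks is a heuristic check that flags common prompt-injection phrases in text flowing through an agentic workflow:

```python
import re

# Hypothetical example patterns; real scanners use far richer detection
# (classifiers, canary tokens, policy engines), not just keyword matching.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
]

def flag_prompt_injection(text: str) -> list[str]:
    """Return the injection patterns that match `text` (case-insensitive)."""
    return [p for p in INJECTION_PATTERNS if re.search(p, text, re.IGNORECASE)]
```

A hit would then be surfaced as a finding rather than silently passed on to the model.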
A deliberately vulnerable banking application designed for practicing security testing of web apps, APIs, and AI-integrated apps, as well as secure code reviews. Features common vulnerabilities found in real-world applications, making it an ideal platform for security professionals, developers, and enthusiasts to learn pentesting and secure coding practices.
MCP for Security: A collection of Model Context Protocol servers for popular security tools like SQLMap, FFUF, NMAP, Masscan and more. Integrate security testing and penetration testing into AI workflows.
All-in-one offensive security toolbox with AI agent and MCP architecture. Integrates tools like Nmap, Metasploit, FFUF, SQLMap. Enables pentesting, bug bounty hunting, threat hunting, and reporting. RAG-based responses with local knowledge base support.
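A core pattern behind exposing CLI security tools like Nmap to an AI agent (whether via MCP or otherwise) is mapping a structured, validated request onto a fixed argument list rather than letting the model emit raw shell strings. A minimal sketch under that assumption, not any listed project's actual API:

```python
import re
from dataclasses import dataclass

# Hypothetical request schema; an MCP tool server would derive this from
# its declared tool parameters.
@dataclass
class ScanRequest:
    target: str
    ports: str = "1-1024"

# Only allow hostname/IP-shaped targets, blocking shell metacharacters.
ALLOWED_TARGET = re.compile(r"^[A-Za-z0-9_.:-]+$")

def build_nmap_argv(req: ScanRequest) -> list[str]:
    """Validate the request and return an argv list safe for subprocess.run."""
    if not ALLOWED_TARGET.match(req.target):
        raise ValueError(f"rejected target: {req.target!r}")
    return ["nmap", "-p", req.ports, req.target]
```

Building an argv list (never a shell string) means an injected target like `example.com; rm -rf /` is rejected up front instead of reaching a shell.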
RuLES: a benchmark for evaluating rule-following in language models
Toolkits to create a human-in-the-loop approval layer to monitor and guide AI agent workflows in real time.
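The essence of such an approval layer can be sketched as a gate that intercepts an agent's proposed action and only executes it once an approver says yes; this is a minimal hypothetical illustration, not the toolkit's actual API:

```python
from typing import Any, Callable

def gated_execute(action: Callable[[], Any],
                  description: str,
                  approve: Callable[[str], bool]) -> tuple[str, Any]:
    """Run `action` only if `approve(description)` returns True.

    `approve` stands in for a human reviewer (e.g. a CLI prompt or web UI);
    returns ("executed", result) or ("blocked", None).
    """
    if approve(description):
        return ("executed", action())
    return ("blocked", None)
```

In practice the approver callback would be asynchronous and auditable, but the control flow is the same: the agent proposes, a human disposes.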
A curated list of academic events on AI Security & Privacy
Build Secure and Compliant AI agents and MCP Servers. YC W23
Framework for testing vulnerabilities of large language models (LLMs).
[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models