Repository navigation
llm-security
- Website
- Wikipedia
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
the LLM vulnerability scanner
🐢 Open-Source Evaluation & Testing library for LLM Agents
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
The Security Toolkit for LLM Interactions
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
A secure low code honeypot framework, leveraging AI for System Virtualization.
OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)
An easy-to-use Python framework to generate adversarial jailbreak prompts.
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
A security scanner for your LLM agentic workflows
Papers and resources related to the security and privacy of LLMs 🤖
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
This repository provides a benchmark for prompt Injection attacks and defenses
🏴☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially execute offline remote code execution without running any actual code on the victim's machine or thwart LLM-based fraud/moderation systems.
Toolkits to create a human-in-the-loop approval layer to monitor and guide AI agents workflow in real-time.
AI-driven Threat modeling-as-a-Code (TaaC-AI)
The fastest Trust Layer for AI Agents
Framework for testing vulnerabilities of large language models (LLM).