Repository navigation

#

llm-security

pathwaycom/llm-app

Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.

Jupyter Notebook
23857
8 天前

[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).

Jupyter Notebook
3075
4 个月前
msoedov/agentic_security
Python
1296
4 天前

An easy-to-use Python framework to generate adversarial jailbreak prompts.

Python
624
23 天前

A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.

Jupyter Notebook
517
17 天前

Papers and resources related to the security and privacy of LLMs 🤖

Python
494
5 个月前

⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs

Python
377
1 年前

This repository provides a benchmark for prompt Injection attacks and defenses

Python
183
2 天前
Svelte
163
5 个月前

Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially execute offline remote code execution without running any actual code on the victim's machine or thwart LLM-based fraud/moderation systems.

Python
157
15 天前

The fastest Trust Layer for AI Agents

Python
129
1 个月前

Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily

Python
115
9 个月前

Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️

Python
115
9 个月前