Repository navigation

#

llm-security

pathwaycom/llm-app

Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.

Jupyter Notebook
29922
21 天前

[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).

Jupyter Notebook
3312
8 个月前
msoedov/agentic_security
Python
1619
6 天前

OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)

TeX
863
21 分钟前

An easy-to-use Python framework to generate adversarial jailbreak prompts.

Python
701
5 个月前

A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.

Jupyter Notebook
697
1 个月前

Papers and resources related to the security and privacy of LLMs 🤖

Python
527
2 个月前

⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs

Python
407
2 年前

This repository provides a benchmark for prompt Injection attacks and defenses

Python
261
1 个月前

Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially execute offline remote code execution without running any actual code on the victim's machine or thwart LLM-based fraud/moderation systems.

Python
182
4 个月前
Svelte
179
9 个月前