aisecurity
CodeGate: security, workspaces, and muxing for AI applications, coding assistants, and agentic frameworks.
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
CTF challenges designed and implemented within machine learning applications.
A CLI application for interacting with authenticated Jupyter instances.
A powerful LLM query framework with YAML prompt templates, built for automation.
🤯 AI Security EXPOSED! Live demos showing hidden risks of 🤖 agentic AI flows: 💉 prompt injection, ☣️ data poisoning. A recording of the session is available.
A curated list of Large Language Model (LLM) watermarking resources.
This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms." ASSET achieves state-of-the-art reliability in detecting poisoned samples in end-to-end supervised learning, self-supervised learning, and transfer learning.
Securing LLMs against the 2024 OWASP Top 10 vulnerabilities for Large Language Model applications.
This repo contains reference implementations, tutorials, samples, and documentation for working with Bosch AIShield.
LLM Security Project with Llama Guard
JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model
A Jailbroken GenAI Model Can Cause Real Harm: GenAI-powered Applications are Vulnerable to PromptWares
Zero Trust AI 360
A safe and reliable platform for AI tool navigation and resource management.
CyberBrain_Model is an AI project for fine-tuning the model `DeepSeek-R1-Distill-Qwen-14B` specifically for cybersecurity tasks.
FIMjector is an exploit for OpenAI GPT models based on Fill-In-the-Middle (FIM) tokens.
This repository demonstrates a variety of **MCP Poisoning Attacks** affecting real-world AI agent workflows.
AiShields is an open-source sanitizer for AI data inputs and outputs.
An intentionally vulnerable AI chatbot to learn and practice AI Security.