llm-eval
Test your prompts, agents, and RAG pipelines. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare the performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command-line and CI/CD integration.
AI Observability & Evaluation
🐢 Open-Source Evaluation & Testing for AI & LLM systems
ETL, Analytics, Versioning for Unstructured Data
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications. We provide grades for 20+ preconfigured checks (covering language, code, and embedding use cases), perform root-cause analysis on failure cases, and give insights on how to resolve them.
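As a rough sketch of how preconfigured checks like these are typically run from the Python SDK, the snippet below follows the style of UpTrain's documented examples; the exact class and check names should be treated as assumptions rather than a verified API reference.

```python
import os
from uptrain import EvalLLM, Evals  # names follow UpTrain's docs; treat as assumptions

data = [{
    "question": "Which team won the 2023 Cricket World Cup?",
    "context": "Australia beat India in the 2023 Cricket World Cup final in Ahmedabad.",
    "response": "Australia won the 2023 Cricket World Cup.",
}]

eval_llm = EvalLLM(openai_api_key=os.environ["OPENAI_API_KEY"])

# Run a few of the preconfigured checks; results come back as graded scores
# with explanations that can feed root-cause analysis.
results = eval_llm.evaluate(
    data=data,
    checks=[Evals.CONTEXT_RELEVANCE, Evals.FACTUAL_ACCURACY, Evals.RESPONSE_RELEVANCE],
)
print(results)
```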
Python SDK for running evaluations on LLM-generated responses
Generate ideal question-answer pairs for testing RAG
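The general recipe behind tools like this is to prompt an LLM to produce question-answer pairs grounded in each document chunk, then use those pairs as test cases for the RAG pipeline. The sketch below is illustrative only; the prompt wording, model choice, and naive JSON parsing are assumptions, not this project's API.

```python
import json
from openai import OpenAI  # any chat-completion client would do here

client = OpenAI()

def generate_qa_pairs(chunk: str, n: int = 3) -> list[dict]:
    """Ask an LLM for n question-answer pairs grounded strictly in `chunk`."""
    prompt = (
        f"Based only on the passage below, write {n} question-answer pairs "
        'as a JSON list of {"question": ..., "answer": ...} objects.\n\n'
        f"Passage:\n{chunk}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parse; a real tool would validate the output and retry on failure.
    return json.loads(resp.choices[0].message.content)

# Each generated pair becomes a test case: feed `question` to the RAG pipeline
# and compare its answer against the reference `answer`.
```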
A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs.
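Multi-aspect, interpretable assessment usually means asking a judge model to score each aspect separately and justify the score. The following is a minimal sketch of that pattern, not this tool's actual interface; the aspect list and judge model are assumptions.

```python
from openai import OpenAI

client = OpenAI()
ASPECTS = ["fluency", "coherence", "factual consistency", "relevance"]  # illustrative

def judge(source: str, output: str) -> dict:
    """Score `output` on each aspect (1-5) with a one-sentence rationale."""
    verdicts = {}
    for aspect in ASPECTS:
        resp = client.chat.completions.create(
            model="gpt-4o",  # judge model is an assumption
            messages=[{
                "role": "user",
                "content": (
                    f"Rate the {aspect} of the answer on a 1-5 scale and explain "
                    f"your rating in one sentence.\n\nSource:\n{source}\n\nAnswer:\n{output}"
                ),
            }],
        )
        verdicts[aspect] = resp.choices[0].message.content
    return verdicts
```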
Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23)
A benchmark comparing Russian-language ChatGPT alternatives: Saiga, YandexGPT, Gigachat
🎯 A free LLM evaluation toolkit for assessing factual accuracy, context understanding, tone, and more, so you can see how well your LLM applications actually perform.
Develop reliable AI apps
An open source library for asynchronous querying of LLM endpoints
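The core idea of asynchronous querying is to fire many requests at once and gather the results instead of waiting on each call in turn. Since the library itself is not named here, the sketch below uses the OpenAI async client as a stand-in; the actual library's API will differ.

```python
import asyncio
from openai import AsyncOpenAI  # stand-in client, not the library described above

client = AsyncOpenAI()

async def ask(prompt: str) -> str:
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def main(prompts: list[str]) -> list[str]:
    # Issue all requests concurrently; total latency is roughly one round trip.
    return await asyncio.gather(*(ask(p) for p in prompts))

answers = asyncio.run(main(["What is RAG?", "Define perplexity."]))
```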
An open-source project for comparing two LLMs head to head on a given prompt. This repository covers the backend, which lets LLM APIs be plugged in and consumed by the front end.
Realign is a testing and simulation framework for AI applications.
Code for "Prediction-Powered Ranking of Large Language Models", NeurIPS 2024.
Create an evaluation framework for your LLM-based app. Incorporate it into your test suite. Lay the foundation for monitoring.
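Folding evals into a test suite typically looks like ordinary pytest tests that call the application and assert on a scored result. The sketch below assumes a hypothetical `answer()` entry point and uses a trivial keyword check in place of a real scorer.

```python
import pytest

from my_app import answer  # hypothetical entry point of the LLM-based app

CASES = [
    ("What is the capital of France?", "paris"),
    ("Who wrote Hamlet?", "shakespeare"),
]

@pytest.mark.parametrize("question,expected", CASES)
def test_answer_contains_expected_fact(question, expected):
    # A containment check keeps the example simple; a real framework would
    # plug in an LLM judge or similarity scorer and track scores over time.
    assert expected in answer(question).lower()
```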
The prompt engineering, prompt management, and prompt evaluation tool for Python
The prompt engineering, prompt management, and prompt evaluation tool for TypeScript, JavaScript, and NodeJS.