openai-compatible-api
Dockerized FastAPI wrapper for the Kokoro-82M text-to-speech model, with CPU ONNX and NVIDIA GPU PyTorch support and auto-stitching of long inputs.
Turns GitHub Copilot into an OpenAI/Anthropic-compatible API server. Usable with Claude Code!
LM inference server implementation based on *.cpp.
ThunderAI is a Thunderbird add-on that uses the capabilities of ChatGPT, Gemini, Anthropic, or Ollama to enhance email management.
A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends.
Visual inference exploration & experimentation playground
Deploy open-source LLMs on AWS in minutes — with OpenAI-compatible APIs and a powerful CLI/SDK toolkit.
A high-performance API server providing OpenAI-compatible endpoints for MLX models. Built in Python on the FastAPI framework, it offers an efficient, scalable, and user-friendly way to run MLX-based vision and language models locally.
An OpenAI-compatible API server for duck.ai.
An OpenAI-compatible API that integrates LLM, embedding, and reranker models.
Demonstrates configuring Spring AI to work with OpenRouter, giving access to multiple LLMs through a single OpenAI-compatible interface. Potentially helpful when evaluating the performance and quality of model responses.
An adapter to add OpenAI compatibility to your Hono app using the Vercel AI SDK
A robust Node.js proxy server that automatically rotates API keys for Gemini and OpenAI APIs when rate limits (429 errors) are encountered. Built with zero dependencies and comprehensive logging.
C# client for KoboldCpp.
The C# client for LM Studio.
An OpenAI-compatible chatbot API template following the OpenAPI standard.
A FastAPI-powered REST API offering a comprehensive suite of natural language processing services using machine learning models with PyTorch and Transformers, packaged in a Docker container to run efficiently.
Tool-based LLM integration with Zenodo via the Model Context Protocol (MCP)
Config files for my GitHub profile.
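What ties most of these projects together is the OpenAI chat-completions wire format: a POST to /v1/chat/completions with a JSON body of messages, answered with a JSON completion object. As a rough illustration only (not the code of any project listed above), here is a minimal stub server for that endpoint using just the Python standard library; the echoed reply and the "stub" model name are placeholders:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        request = json.loads(self.rfile.read(length))
        # Echo the last user message back, wrapped in the OpenAI response shape.
        reply = {
            "object": "chat.completion",
            "model": request.get("model", "stub"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant",
                            "content": request["messages"][-1]["content"]},
                "finish_reason": "stop",
            }],
        }
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def query(port, payload):
    """POST a chat-completions payload to the local stub and decode the reply."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), ChatHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
out = query(port, {"model": "stub",
                   "messages": [{"role": "user", "content": "hello"}]})
server.shutdown()
print(out["choices"][0]["message"]["content"])  # hello
```

Because every server above speaks this same shape, any OpenAI SDK client can be pointed at one of them simply by overriding the base URL (e.g. `base_url="http://localhost:8000/v1"` in the official Python client).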