ai-inference
The easiest way to serve AI apps and models - build model inference APIs, job queues, LLM apps, multi-model pipelines, and more!
Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application
Workflow-based, multi-platform AI deployment tool
oneAPI Data Analytics Library (oneDAL)
The easiest way to use Machine Learning. Mix and match underlying ML libraries and data set sources. Generate new datasets or modify existing ones with ease.
Cross-platform C++ SDK and model hub for easy AI inference
Client library to interact with various APIs used within Philips in a simple and uniform way
llama.cpp + ROCm + llama-swap
Local LLM Inference Library
Customized version of Google's tflite-micro
Enterprise evolution of nano-vLLM, currently in development. Built on @GeeeekExplorer's foundation.
No more Hugging Face cost leaks.
A powerful, fast, scalable full-stack boilerplate for AI inference using Node.js, Python, Redis, and Docker
Unity TTS plugin: Piper neural synthesis + OpenJTalk Japanese + Unity AI Inference Engine. Windows/Mac/Linux/Android ready. High-quality voices for games & apps.
Arbitrary Numbers
🌱 Intelligent IoT greenhouse fan controller using AI/ML for automated climate control. Features ESP32 + DHT22 sensors, real-time Firebase integration, Flutter mobile app with TensorFlow Lite on-device inference, and Wokwi simulation. Complete full-stack solution demonstrating IoT + AI integration.
UniUi uses AI to allow you to talk directly to your system.
Professional nano-vLLM Enterprise enhances the original nano-vLLM, transforming it into a robust, production-ready LLM engine. 🚀✨
Citadel AI OS – Enterprise AI Runtime Environment for Inference, Agents, and Business Operations