offline-llm
Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free.
Chat offline with open-source LLMs like deepseek-r1, nemotron, qwen, llama, and more, all through a simple R package powered by Shiny and Ollama. 🚀
A tool for concealing writing style using an LLM.
Claude Deep Research config for Claude Code.
Obrew Studio - Server: A self-hostable machine learning engine. Build agents and schedule workflows private to you.
Attempts to summarize text from `stdin` to `stdout`, using a large language model that runs locally and offline.
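A minimal sketch of that stdin-to-stdout pattern, assuming a local Ollama server on its default port; `/api/generate` is Ollama's real endpoint, but the model name is an illustrative assumption, not necessarily what this repo uses:

```python
import json
import sys
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "llama3.2"  # assumption: any locally pulled model works here

text = sys.stdin.read()
payload = json.dumps({
    "model": MODEL,
    "prompt": "Summarize the following text concisely:\n\n" + text,
    "stream": False,  # request one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Saved as `summarize.py`, it composes like any other filter: `cat notes.txt | python summarize.py`.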
Offline AI assistant plugin for Obsidian using encrypted local LLM models.
A private, free, offline-first chat application powered by open-source AI models like DeepSeek, Llama, and Mistral, served through Ollama.
Local-first Copilot-style assistant powered by screen, mic, and clipboard input — fully offline, works with any LLM or OCR engine. Press a key, get results. No cloud, no lock-in.
A lightweight local LLM chat with a web UI and a C-based server that runs any LLM chat executable as a child process and communicates with it via pipes.
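The parent/child pipe pattern that description refers to can be sketched in a few lines; the actual server is written in C, so this Python version only illustrates the mechanism, and the child executable path is a placeholder:

```python
import subprocess

# Spawn a line-oriented chat executable as a child process and talk to it
# over pipes; "./llm-chat-binary" is a hypothetical path.
child = subprocess.Popen(
    ["./llm-chat-binary"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
    bufsize=1,  # line-buffered, so each write/read maps to one chat line
)

child.stdin.write("Hello, who are you?\n")  # prompt goes down the stdin pipe
child.stdin.flush()
print(child.stdout.readline().rstrip())     # reply comes back on the stdout pipe
child.stdin.close()
child.wait()
```

Decoupling the web UI from the model this way means any chat binary that reads stdin and writes stdout can be swapped in without changing the server.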
Lightweight offline AI assistant for Windows 11 with voice and GUI support. Built with HuggingFace, Tkinter, and DirectML for fast local inference.
A containerized, offline-capable LLM API powered by Ollama. Automatically pulls models and serves them via a REST API. Perfect for homelab, personal AI assistants, and portable deployments.
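The pull-then-serve flow such a container exposes can be exercised with two calls against the Ollama REST API; a sketch, assuming the container maps Ollama's default port 11434 and using an illustrative model name:

```python
import json
import urllib.request

BASE = "http://localhost:11434"  # assumption: container maps Ollama's default port

def post(path, body):
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Pull a model through the API (a no-op if it is already in the container's cache).
with post("/api/pull", {"model": "llama3.2", "stream": False}) as r:
    print(json.loads(r.read())["status"])  # "success" once the pull completes

# Query the same model through the same REST API.
with post("/api/generate",
          {"model": "llama3.2", "prompt": "Say hello.", "stream": False}) as r:
    print(json.loads(r.read())["response"])
```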
Optimize your voice AI experience with Faster-Local-Voice-AI: low-latency speech-to-text and text-to-speech on Ubuntu, all offline and fully configurable. 🚀💻