offline-llm

khoj-ai/khoj

Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free.

Python
30751
Updated 1 day ago

Chat offline with open-source LLMs like deepseek-r1, nemotron, qwen, llama and more all through a simple R package powered by Shiny and Ollama. 🚀

R
21
Updated 5 months ago

A tool for concealing writing style using LLM

Python
16
Updated 1 year ago

Obrew Studio - Server: A self-hostable machine learning engine. Build agents and schedule workflows private to you.

Python
11
Updated 4 months ago

Summarize text from `stdin` to `stdout` using a large language model, run locally and offline

Rust
9
Updated 2 years ago
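
The filter pattern in the entry above (text in on `stdin`, summary out on `stdout`) is easy to reproduce against any local model server. The sketch below is in Python rather than Rust, and assumes an Ollama instance on its default port with a pulled model named `llama3.2`; neither detail comes from the repository itself.

```python
#!/usr/bin/env python3
"""Minimal stdin -> summary -> stdout filter backed by a local Ollama server.

Illustrative sketch only, not the Rust tool's implementation; assumes Ollama
is running on its default port and the named model has been pulled.
"""
import json
import sys
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "llama3.2"  # assumption: any locally pulled model name works here


def summarize(text: str) -> str:
    payload = json.dumps({
        "model": MODEL,
        "prompt": f"Summarize the following text concisely:\n\n{text}",
        "stream": False,  # ask for a single JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


if __name__ == "__main__":
    sys.stdout.write(summarize(sys.stdin.read()).strip() + "\n")
```

It composes like any other Unix filter, e.g. `cat notes.txt | python summarize.py > summary.txt`.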

Offline AI assistant plugin for Obsidian using encrypted local LLM models.

Python
4
Updated 3 months ago

A private, free, offline-first chat application powered by Open Source AI models like DeepSeek, Llama, Mistral, etc. through Ollama.

JavaScript
3
Updated 7 months ago

Local-first Copilot-style assistant powered by screen, mic, and clipboard input — fully offline, works with any LLM or OCR engine. Press a key, get results. No cloud, no lock-in.

C#
3
Updated 7 days ago
Python
2
Updated 1 year ago

A lightweight local LLM chat with a web UI and a C‑based server that runs any LLM chat executable as a child and communicates via pipes

C
1
Updated 19 hours ago
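
The parent/child architecture described in the entry above (a server that spawns an LLM chat executable and talks to it over pipes) can be sketched in a few lines. The example below uses Python rather than C for brevity; the child command `./llm-chat` and its line-based prompt/response protocol are hypothetical placeholders, not the project's actual interface.

```python
"""Sketch of the parent/child pipe pattern: spawn a chat executable and talk
to it over its stdin/stdout. Python stands in for the project's C server; the
command and line protocol below are placeholders."""
import subprocess

# Spawn any LLM chat CLI as a child process with both ends piped.
child = subprocess.Popen(
    ["./llm-chat", "--model", "model.gguf"],  # placeholder command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
    bufsize=1,  # line-buffered I/O
)


def ask(prompt: str) -> str:
    """Write one prompt line to the child, read one response line back."""
    child.stdin.write(prompt + "\n")
    child.stdin.flush()
    return child.stdout.readline().strip()


print(ask("Hello, who are you?"))
child.terminate()
```

The point here is only the pipe plumbing: the parent owns the child's `stdin`/`stdout` and relays messages between it and the web UI.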

🍳 Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free

Python
1
Updated 7 months ago

Lightweight offline AI assistant for Windows 11 with voice and GUI support. Built with HuggingFace, Tkinter, and DirectML for fast local inference.

Python
0
Updated 1 month ago
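
As a rough illustration of the local-inference half of the entry above, the sketch below loads a small Hugging Face model through the `transformers` pipeline API. The model name is an arbitrary assumption, and the DirectML device handoff (via the `torch-directml` package) is only indicated in a comment rather than exercised.

```python
"""Minimal local text-generation sketch with a Hugging Face pipeline.
The model choice is an assumption, not the project's configuration."""
from transformers import pipeline

# Assumption: a small instruction-tuned model that fits in local memory.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

# For GPU acceleration on Windows via DirectML, a torch-directml device would
# presumably be passed here (e.g. device=torch_directml.device()); the default
# CPU device also works, just more slowly.
reply = generator(
    "You are a helpful offline assistant. User: hello!",
    max_new_tokens=64,
)[0]["generated_text"]
print(reply)
```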

A containerized, offline-capable LLM API powered by Ollama. Automatically pulls models and serves them via a REST API. Perfect for homelab, personal AI assistants, and portable deployments.

Python
0
Updated 1 month ago
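
Once a container like the one above is running, clients can talk to it over Ollama's standard REST API. The sketch below assumes the default host and port (`localhost:11434`) and uses the stock `/api/tags` and `/api/chat` endpoints; the project's own wrapper API, if any, may differ.

```python
"""Sketch of a client querying a containerized Ollama instance.
Host, port, and model selection are assumptions (11434 is Ollama's default)."""
import json
import urllib.request

BASE = "http://localhost:11434"


def list_models() -> list[str]:
    """Ask the Ollama REST API which models the container has pulled."""
    with urllib.request.urlopen(f"{BASE}/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]


def chat(model: str, prompt: str) -> str:
    """Single-turn chat via Ollama's /api/chat endpoint."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE}/api/chat", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]


if __name__ == "__main__":
    models = list_models()
    print("available:", models)
    print(chat(models[0], "Say hello in one sentence."))
```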

Optimize your voice AI experience with Faster-Local-Voice-AI. Achieve low-latency STT and TTS on Ubuntu, all offline and fully configurable. 🚀💻

Python
0
Updated 1 month ago

A lightweight local LLM chat with a web UI and a C‑based server that runs any LLM chat executable as a child and communicates via pipes

C
0
Updated 3 months ago