mixtral-8x7b
TeleChat: 🤖️ an AI chat Telegram bot with web search, powered by GPT-3.5/4/4 Turbo/4o, DALL·E 3, Groq, Gemini 1.5 Pro/Flash, and the official Claude 2.1/3/3.5 API, written in Python and deployable on Zeabur, fly.io, and Replit.
Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B)
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"
[ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP.
A free OpenAI-compatible API for interacting with models like GPT-4o, Claude 3 Haiku, Mixtral 8x7B, and Llama 3 70B through DuckDuckGo's AI Chat.
An innovative Python project that integrates AI-driven agents for Agile software development, leveraging advanced language models and collaborative task automation.
An unofficial C#/.NET SDK for accessing the Mistral AI API
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
DuckDuckGo AI to OpenAI API
A project showing how to use Spring AI with OpenAI to chat with the documents in a library. Documents are stored in a normal/vector database; the AI creates embeddings from the documents, which are stored in the vector database. At query time the vector database is searched for the nearest document, and that document is used by the AI to generate the answer (see the sketch after this list).
📤 Email Classification and Automatic Re-routing with the power of LLMs and Distributed Task Queues. 🏆 Winner at Barclays Hack-O-Hire 2024!
The DelphiMistralAI wrapper brings Mistral's text, vision, and audio models and agentic Conversations to Delphi, with chat, embeddings, Codestral code generation, fine-tuning, batching, moderation, async/await helpers, and live request monitoring.
A lightweight Python API wrapper and CLI for Groq’s offering of language models using their ultra fast LPU Inference Engine.
LLM prompt augmentation with RAG, integrating external custom data from a variety of sources to allow chatting with those documents.
Notes on the Mistral AI model
Chat with your PDF files for free, using Langchain, Groq, ChromaDB, and Jina AI embeddings.
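Several of the entries above (notably the Spring AI library-chat project and the RAG example collections) follow the same retrieve-then-generate pattern: embed the documents, store the vectors, find the nearest document for a question, and let the model answer from it. The following is a minimal, self-contained Python sketch of that flow under simplified assumptions; the bag-of-words embedding and the `answer` stub are illustrative placeholders, not the API of Spring AI, LangChain, or LlamaIndex.

```python
import math
from collections import Counter

# Toy "embedding": a bag-of-words vector keyed by token.
# Real projects call an embedding model (OpenAI, Jina AI, ...) instead.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy "vector store": a list of (document, embedding) pairs.
documents = [
    "Mixtral 8x7B is a sparse mixture-of-experts language model.",
    "DeepSpeed pipeline mode splits a model across GPUs by layer.",
    "ChromaDB stores embeddings and supports nearest-neighbour queries.",
]
vector_store = [(doc, embed(doc)) for doc in documents]

def nearest_document(question: str) -> str:
    # Query the store for the document closest to the question embedding.
    q = embed(question)
    return max(vector_store, key=lambda pair: cosine(q, pair[1]))[0]

def answer(question: str) -> str:
    # In a real system the retrieved document is placed into the LLM prompt;
    # here we just return it alongside the question (placeholder, no LLM call).
    context = nearest_document(question)
    return f"Q: {question}\nContext used: {context}"

if __name__ == "__main__":
    print(answer("What kind of model is Mixtral 8x7B?"))
```

In the projects listed here, the embedding and generation steps would be backed by a real model and the store by a real vector database such as ChromaDB; only the overall retrieve-then-generate flow is the same.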