Repository navigation
llama-2
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Phase-two project for the Chinese LLaMA-2 & Alpaca-2 large models, plus 64K ultra-long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local llama2 backend for Generative Agents/Apps.
Phase-three project for the Chinese Llama large models (Chinese Llama-3 LLMs), developed from Meta Llama 3
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
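  The CPU-inference approach above comes down to loading a quantized build of Llama 2 with a CPU-friendly runtime. A minimal sketch, assuming the `llama-cpp-python` package and a locally downloaded GGUF file (the model path and prompt are placeholders, not taken from the article):

  ```python
  # Minimal CPU-only Llama 2 completion with llama-cpp-python
  # (an assumed stack, not necessarily the exact one used in the article above).
  from llama_cpp import Llama

  llm = Llama(
      model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path to a quantized model
      n_ctx=2048,   # context window
      n_threads=8,  # CPU threads to use
  )

  prompt = "Q: Summarize the key points of the attached policy document. A:"
  out = llm(prompt, max_tokens=128, stop=["Q:"])
  print(out["choices"][0]["text"])
  ```

  For document Q&A, this LLM is typically paired with an embedding model and a vector store so that retrieved passages are injected into the prompt.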
Improve Llama-2's proficiency in comprehending, generating, and translating Chinese.
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
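  As a rough illustration of what these RAG examples look like, here is a minimal LlamaIndex sketch. It assumes a local Ollama server exposing a Llama 2 model and a local Hugging Face embedding model; the model names and the `data/` folder are placeholders, and the exact imports depend on the installed llama-index version:

  ```python
  # Minimal local RAG sketch with LlamaIndex (assumes llama-index >= 0.10 style imports,
  # plus the llama-index-llms-ollama and llama-index-embeddings-huggingface extras).
  from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
  from llama_index.llms.ollama import Ollama
  from llama_index.embeddings.huggingface import HuggingFaceEmbedding

  Settings.llm = Ollama(model="llama2")  # assumes `ollama run llama2` works locally
  Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

  documents = SimpleDirectoryReader("data").load_data()  # placeholder folder of documents
  index = VectorStoreIndex.from_documents(documents)     # chunk, embed, and index

  response = index.as_query_engine().query("What does the report conclude?")
  print(response)
  ```

  The LangChain-based examples listed below follow the same retrieve-then-generate pattern with a different API surface.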
InsightSolver: Colab notebooks for exploring and solving operational issues using deep learning, machine learning, and related models.
LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces
[KO-Platy🥮] KO-Platypus: a model fine-tuned from llama-2-ko on the Korean-Open-platypus dataset
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Chat with LLaMA 2, with responses grounded in reference documents retrieved from a vector database. The model runs locally using GPTQ 4-bit quantization.
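  Loading a GPTQ 4-bit Llama 2 chat checkpoint can be done with plain `transformers`. A minimal sketch, assuming a CUDA GPU plus the optimum and auto-gptq packages, and using a community GPTQ checkpoint as a stand-in for whatever model the project actually ships:

  ```python
  # Minimal sketch: load a 4-bit GPTQ Llama 2 chat model with transformers.
  # Requires a CUDA GPU and the optimum + auto-gptq packages; the checkpoint
  # name is an assumption, not necessarily the one used by the project above.
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "TheBloke/Llama-2-7B-Chat-GPTQ"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

  prompt = "[INST] Which documents mention the 2023 budget? [/INST]"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  output = model.generate(**inputs, max_new_tokens=128)
  print(tokenizer.decode(output[0], skip_special_tokens=True))
  ```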
The course provides guidance on best practices for prompting and building applications with the powerful, commercially licensed open models of Llama 2.
LLM Security Project with Llama Guard
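  Llama Guard is a safety classifier that labels a conversation turn as safe or unsafe along with the violated policy category. A minimal sketch following the published model card (access to the gated meta-llama checkpoint and a GPU are assumed):

  ```python
  # Minimal Llama Guard moderation sketch (assumes access to the gated
  # meta-llama/LlamaGuard-7b checkpoint and a CUDA GPU).
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "meta-llama/LlamaGuard-7b"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id, torch_dtype=torch.bfloat16, device_map="auto"
  )

  chat = [{"role": "user", "content": "How do I pick a lock?"}]
  input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
  output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
  # Prints e.g. "safe" or "unsafe" plus the violated category for the last turn.
  print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
  ```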