# moe

TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT LLM also contains components to create Python and C++ runtimes that orchestrate inference execution in a performant way.

C++ · 11772 stars · updated 20 hours ago
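
As an illustration of the Python API mentioned above, here is a minimal generation sketch. It assumes a recent tensorrt_llm release that ships the high-level `LLM`/`SamplingParams` interface; the model checkpoint and sampling settings are placeholders, not recommendations.

```python
# Minimal sketch of TensorRT LLM's high-level Python API (assumes a recent
# release exposing tensorrt_llm.LLM and tensorrt_llm.SamplingParams).
from tensorrt_llm import LLM, SamplingParams

prompts = ["Summarize mixture-of-experts routing in one sentence."]
sampling = SamplingParams(max_tokens=64, temperature=0.8)

# Placeholder checkpoint; any supported Hugging Face model id should work.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```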

Use PEFT or full-parameter training for CPT/SFT/DPO/GRPO on 500+ LLMs (Qwen3, Qwen3-MoE, Llama4, GLM4.5, InternLM3, DeepSeek-R1, ...) and 200+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, Llava, GLM4v, Phi4, ...) (AAAI 2025).

Python · 10190 stars · updated 1 day ago
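
The entry above spans both PEFT-based and full-parameter training. As a generic illustration of the PEFT side only, the sketch below uses plain Hugging Face `peft`, not this toolkit's own CLI or Python API; the checkpoint and LoRA hyperparameters are placeholder assumptions.

```python
# Generic LoRA sketch with Hugging Face peft -- illustrates parameter-efficient
# fine-tuning in general, not this repository's interface.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach low-rank adapters to the attention projections; only these train.
lora_cfg = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```
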
czy0729/Bangumi

An unofficial, UI-first https://bgm.tv app client for Android and iOS, built with React Native. An ad-free, hobby-driven, non-profit, ACG-focused third-party bangumi-tracking client for bgm.tv in the style of Douban. Redesigned for mobile, it ships many built-in enhancements that are hard to achieve on the web version and offers extensive customization options. Currently supports iOS / Android.

TypeScript · 4830 stars · updated 1 day ago

GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models

Python · 2841 stars · updated 5 days ago

【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models

Python · 2247 stars · updated 3 months ago

MoBA: Mixture of Block Attention for Long-Context LLMs

Python · 1911 stars · updated 6 months ago
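
To make the "mixture of block attention" idea above concrete, here is a deliberately simplified, non-causal sketch: the key/value sequence is split into blocks, each query is routed to its top-k blocks via mean-pooled block keys, and attention runs only over the selected blocks. This is an assumption-laden illustration, not the repository's implementation.

```python
# Simplified block-sparse attention with top-k block routing (non-causal).
import torch
import torch.nn.functional as F

def block_topk_attention(q, k, v, block_size=64, top_k=2):
    seq_len, dim = k.shape
    assert seq_len % block_size == 0, "sketch assumes divisible sequence length"
    n_blocks = seq_len // block_size
    k_blocks = k.view(n_blocks, block_size, dim)
    v_blocks = v.view(n_blocks, block_size, dim)

    # Score each query against mean-pooled block keys and keep the top blocks.
    block_keys = k_blocks.mean(dim=1)              # (n_blocks, dim)
    gate = q @ block_keys.T                        # (q_len, n_blocks)
    top_blocks = gate.topk(top_k, dim=-1).indices  # (q_len, top_k)

    out = torch.zeros_like(q)
    for i in range(q.shape[0]):
        sel_k = k_blocks[top_blocks[i]].reshape(-1, dim)   # selected keys
        sel_v = v_blocks[top_blocks[i]].reshape(-1, dim)   # selected values
        attn = F.softmax(q[i] @ sel_k.T / dim ** 0.5, dim=-1)
        out[i] = attn @ sel_v
    return out

# Example: q = k = v = torch.randn(256, 32); y = block_topk_attention(q, k, v)
```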

PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538

Python · 1182 stars · updated 1 year ago
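
Since this entry points at the sparsely-gated MoE layer itself, a compact PyTorch sketch of top-k gating may help; it omits the paper's noisy gating and load-balancing losses and is not the repository's exact code.

```python
# Minimal top-k gated mixture-of-experts layer (simplified from Shazeer et al.).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim, hidden, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (tokens, dim)
        logits = self.gate(x)                    # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the kept experts
        out = torch.zeros_like(x)
        for slot in range(self.k):               # send each token to its k experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Usage: layer = TopKMoE(dim=64, hidden=256); y = layer(torch.randn(10, 64))
```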

⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)

Python · 992 stars · updated 10 months ago
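
The core trick named in this entry, building experts out of an existing dense LLaMA FFN before continual pre-training, can be sketched as a neuron-partitioning step. Tensor shapes and names below (gate_proj/up_proj/down_proj) follow the usual LLaMA SwiGLU layout and are illustrative assumptions, not the repository's code.

```python
# Sketch: split a dense SwiGLU FFN's intermediate neurons into disjoint experts.
import torch

def split_ffn_into_experts(gate_proj, up_proj, down_proj, n_experts):
    # gate_proj/up_proj: (intermediate, dim); down_proj: (dim, intermediate)
    intermediate = gate_proj.shape[0]
    assert intermediate % n_experts == 0, "sketch assumes an even split"
    chunk = intermediate // n_experts
    experts = []
    for e in range(n_experts):
        sl = slice(e * chunk, (e + 1) * chunk)
        experts.append({
            "gate_proj": gate_proj[sl].clone(),    # (chunk, dim)
            "up_proj": up_proj[sl].clone(),        # (chunk, dim)
            "down_proj": down_proj[:, sl].clone(), # (dim, chunk)
        })
    # Each expert is a narrower SwiGLU FFN over its own neuron group; a router
    # is then trained during continual pre-training to pick among them.
    return experts
```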

Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4.

C · 928 stars · updated 19 days ago

A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI

Python · 770 stars · updated 2 years ago

An open-source solution for full-parameter fine-tuning of DeepSeek-V3/R1 671B, including complete code and scripts from training to inference, as well as practical experience and conclusions gathered along the way.

Python · 764 stars · updated 7 months ago

Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs).

Python · 609 stars · updated 1 year ago

😘 A Pinterest-style layout site that shows illustrations from pixiv.net ordered by popularity.

TypeScript · 366 stars · updated 3 years ago

Speed Always Wins: A Survey on Efficient Architectures for Large Language Models

337 stars · updated 1 month ago

MoH: Multi-Head Attention as Mixture-of-Head Attention

Python · 276 stars · updated 1 year ago
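
As a rough illustration of "attention heads as experts", the sketch below routes each token to its top-k heads with a softmax-weighted mix; the real method additionally keeps a set of shared, always-active heads, and the router parameter here is a hypothetical stand-in.

```python
# Simplified mixture-of-head routing over precomputed per-head outputs.
import torch
import torch.nn.functional as F

def mix_heads(head_outputs, x, router_weight, top_k=4):
    # head_outputs: (tokens, n_heads, head_dim); x: (tokens, model_dim)
    # router_weight: (model_dim, n_heads) -- illustrative router parameter.
    scores = x @ router_weight                      # (tokens, n_heads)
    top_scores, top_idx = scores.topk(top_k, dim=-1)
    gates = F.softmax(top_scores, dim=-1)           # (tokens, top_k)
    picked = torch.gather(
        head_outputs, 1,
        top_idx.unsqueeze(-1).expand(-1, -1, head_outputs.shape[-1]),
    )                                               # (tokens, top_k, head_dim)
    return (gates.unsqueeze(-1) * picked).sum(dim=1)  # (tokens, head_dim)
```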