linear-attention

RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). The current version is RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding.

Python
13902
13 hours ago
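
A minimal sketch of why a recurrent formulation like this needs no KV cache. This is a generic kernelized linear-attention recurrence, not RWKV's actual time-mixing; the ReLU feature map is an arbitrary illustrative choice. The point is that the running state S and normalizer z summarize all past tokens in constant memory, regardless of sequence length.

```python
# Generic linear-attention recurrence (illustrative; not RWKV's formulation).
import torch

def linear_attention_step(q, k, v, S, z, eps=1e-6):
    """One autoregressive step. q, k, v: (d,); state S: (d, d); normalizer z: (d,)."""
    phi_q, phi_k = torch.relu(q), torch.relu(k)   # any non-negative feature map works here
    S = S + torch.outer(phi_k, v)                 # accumulate key-value outer products
    z = z + phi_k                                 # accumulate keys for normalization
    out = (phi_q @ S) / (phi_q @ z + eps)         # output for the current token
    return out, S, z

d = 8
S, z = torch.zeros(d, d), torch.zeros(d)
for _ in range(1000):                             # memory stays O(d*d) however long this runs
    q, k, v = torch.randn(3, d)
    out, S, z = linear_attention_step(q, k, v, S, z)
```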

[NeurIPS 2024] Official code of "LION: Linear Group RNN for 3D Object Detection in Point Clouds"

Python
192
2 months ago

Explorations into the recently proposed Taylor Series Linear Attention

Python
100
1 year ago
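
A rough sketch of the underlying idea (my own minimal example, not the repo's code): approximate exp(q·k) by its second-order Taylor expansion, which factors into an explicit feature map phi, so the matmuls can be reassociated for cost linear in sequence length. The non-causal form is shown for brevity.

```python
# Second-order Taylor feature map: phi(q) @ phi(k) = 1 + q·k + (q·k)^2 / 2.
import torch

def taylor_feature_map(x):
    x2 = torch.einsum('...i,...j->...ij', x, x).flatten(-2) / 2 ** 0.5
    return torch.cat([torch.ones_like(x[..., :1]), x, x2], dim=-1)

q, k, v = torch.randn(3, 16, 8)                    # (seq, dim) each
phi_q, phi_k = taylor_feature_map(q), taylor_feature_map(k)
num = phi_q @ (phi_k.transpose(-2, -1) @ v)        # (phi_q phi_k^T) v, reassociated
den = phi_q @ phi_k.sum(0, keepdim=True).transpose(-2, -1)
out = num / den                                    # linear-time softmax approximation
```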

Speed Always Wins: A Survey on Efficient Architectures for Large Language Models

98
5 days ago

CUDA implementation of autoregressive linear attention, with all the latest research findings

Python
44
2 years ago

Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral)

Python
26
7 months ago

Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024)

Python
24
1 year ago

Code for the paper "Cottention: Linear Transformers With Cosine Attention"

Cuda
17
10 months ago
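
A hedged sketch of the cosine-attention idea named in the title, not the repo's implementation: L2-normalize queries and keys so their dot products are cosine similarities, drop the softmax, and reassociate the matrix products for linear cost in sequence length. The 1/seq_len scaling here is an assumption made purely for illustration.

```python
# Cosine attention sketch (non-causal, illustrative only).
import torch
import torch.nn.functional as F

def cosine_attention(q, k, v):
    """q, k: (seq, dim); v: (seq, dim_v)."""
    qn = F.normalize(q, dim=-1)
    kn = F.normalize(k, dim=-1)
    return qn @ (kn.transpose(-2, -1) @ v) / k.shape[-2]   # (Q K^T) V reassociated

out = cosine_attention(torch.randn(128, 64), torch.randn(128, 64), torch.randn(128, 64))
```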

Implementation of: Hydra Attention: Efficient Attention with Many Heads (https://arxiv.org/abs/2209.07484)

Python
13
3 years ago
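
A minimal sketch of the Hydra trick from the linked paper, under the assumption of a cosine-similarity (L2-normalization) kernel and as many heads as feature dimensions: in the non-causal case attention collapses to elementwise products with O(seq * dim) cost.

```python
# Hydra attention sketch (heads == dim, cosine kernel, non-causal).
import torch
import torch.nn.functional as F

def hydra_attention(q, k, v):
    """q, k, v: (seq, dim)."""
    qn = F.normalize(q, dim=-1)                 # cosine-similarity kernel
    kn = F.normalize(k, dim=-1)
    kv = (kn * v).sum(dim=-2, keepdim=True)     # (1, dim) global key-value summary
    return qn * kv                              # broadcast over tokens

out = hydra_attention(torch.randn(128, 64), torch.randn(128, 64), torch.randn(128, 64))
```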

[ICML 2024] Official implementation of "LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions."

Python
10
9 months ago

Official Implementation of SEA: Sparse Linear Attention with Estimated Attention Mask (ICLR 2024)

Python
10
2 months ago

LEAP: Linear Explainable Attention in Parallel for causal language modeling with O(1) path length, and O(1) inference

Jupyter Notebook
4
2 years ago

SAUTE is a lightweight transformer-based architecture adapted for dialog modeling

Python
2
2 months ago

Pure PyTorch implementations of popular linear attention models

Python
0
1 month ago