linear-attention

RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). The current version is RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding. (A generic linear-attention sketch follows this entry.)

Python · 13,523 stars · updated 12 days ago
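
The constant-space, no-KV-cache property claimed above is the defining trait of linear attention in general: a fixed-size state summarizes the whole prefix. The sketch below is a generic causal linear-attention recurrence in the style of "linear transformer" models, not RWKV's actual update rule; `feature_map` and `linear_attention_step` are illustrative names of my own.

```python
# Minimal sketch of a causal linear-attention recurrence (NOT RWKV's actual
# update rule): a fixed-size state replaces the growing KV cache, so each
# decoding step costs O(d * d_v) and memory is constant in sequence length.
import torch

def feature_map(x):
    # Simple positive feature map (elu + 1), as in "linear transformer" style
    # models; RWKV uses a different, more elaborate parameterization.
    return torch.nn.functional.elu(x) + 1.0

def linear_attention_step(q, k, v, state, normalizer):
    """One decoding step. q, k: (d,), v: (d_v,), state: (d, d_v), normalizer: (d,)."""
    phi_k = feature_map(k)
    state = state + torch.outer(phi_k, v)   # running sum of phi(k_t) v_t^T
    normalizer = normalizer + phi_k         # running sum of phi(k_t)
    phi_q = feature_map(q)
    out = (phi_q @ state) / (phi_q @ normalizer + 1e-6)
    return out, state, normalizer

# Usage: decode a toy sequence with constant memory.
d, d_v, T = 8, 8, 16
state, normalizer = torch.zeros(d, d_v), torch.zeros(d)
for _ in range(T):
    q, k, v = torch.randn(d), torch.randn(d), torch.randn(d_v)
    out, state, normalizer = linear_attention_step(q, k, v, state, normalizer)
print(out.shape)  # torch.Size([8])
```

Because the state update is just an associative running sum, the same computation can be unrolled over the whole sequence in parallel at training time, which is broadly what makes such models trainable "like a GPT transformer".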

[NeurIPS 2024] Official code of "LION: Linear Group RNN for 3D Object Detection in Point Clouds"

Python · 171 stars · updated 6 months ago

Explorations into the recently proposed Taylor Series Linear Attention (a rough sketch of the idea follows this entry)

Python · 97 stars · updated 8 months ago
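
As a rough illustration of the idea named above (my paraphrase, not the repository's API): exp(q·k) can be approximated by its second-order Taylor expansion 1 + q·k + (q·k)²/2, which factorizes into a feature map, so attention can be computed in time linear in sequence length.

```python
# Sketch of Taylor-series linear attention: replace exp(q.k) with its
# second-order Taylor expansion, whose terms factorize into a feature map
# phi(x) = [1, x, vec(x x^T)/sqrt(2)], giving linear-time attention.
import torch

def taylor_feature_map(x):
    # x: (..., d) -> (..., 1 + d + d*d)
    ones = torch.ones(*x.shape[:-1], 1)
    second = (x.unsqueeze(-1) * x.unsqueeze(-2)).flatten(-2) / (2 ** 0.5)
    return torch.cat([ones, x, second], dim=-1)

def taylor_linear_attention(q, k, v):
    """Non-causal attention with exp(q.k) ~ 1 + q.k + (q.k)^2 / 2.
    q, k: (n, d), v: (n, d_v)."""
    phi_q, phi_k = taylor_feature_map(q), taylor_feature_map(k)
    kv = phi_k.transpose(0, 1) @ v                   # (D, d_v) with D = 1 + d + d^2
    z = phi_k.sum(dim=0)                             # (D,) normalizer
    return (phi_q @ kv) / (phi_q @ z).unsqueeze(-1)  # (n, d_v)

q, k, v = torch.randn(16, 4), torch.randn(16, 4), torch.randn(16, 8)
print(taylor_linear_attention(q, k, v).shape)  # torch.Size([16, 8])
```

The expanded feature dimension D = 1 + d + d² is why this style of approximation is most attractive for small head dimensions.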

Implementation of Agent Attention in Pytorch

Python · 90 stars · updated 9 months ago

CUDA implementation of autoregressive (causal) linear attention, incorporating recent research findings

Python · 44 stars · updated 2 years ago

Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024)

Python · 24 stars · updated 10 months ago

Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral)

Python · 22 stars · updated 3 months ago

Code for the paper "Cottention: Linear Transformers With Cosine Attention" (a rough sketch of cosine attention follows this entry)

Cuda · 17 stars · updated 6 months ago
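
As a rough sketch of the cosine-attention idea behind the paper title above (not the repository's CUDA kernels, and omitting any additional scaling or normalization the paper may use): L2-normalize queries and keys, then reassociate (QKᵀ)V as Q(KᵀV) so the cost is linear in sequence length.

```python
# Sketch of cosine attention computed in linear time: with L2-normalized
# queries and keys there is no softmax, so (Q K^T) V can be re-associated
# as Q (K^T V), avoiding the n x n attention matrix entirely.
import torch
import torch.nn.functional as F

def cosine_linear_attention(q, k, v):
    """q, k: (n, d), v: (n, d_v). Non-causal, unnormalized cosine attention."""
    q = F.normalize(q, dim=-1)          # each query has unit L2 norm
    k = F.normalize(k, dim=-1)          # each key has unit L2 norm
    kv = k.transpose(0, 1) @ v          # (d, d_v), computed once per sequence
    return q @ kv                       # (n, d_v), O(n * d * d_v)

q, k, v = torch.randn(32, 16), torch.randn(32, 16), torch.randn(32, 16)
out = cosine_linear_attention(q, k, v)
# Same result as the quadratic order of operations, up to float error:
ref = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(0, 1)) @ v
assert torch.allclose(out, ref, atol=1e-5)
```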

Implementation of "Hydra Attention: Efficient Attention with Many Heads" (https://arxiv.org/abs/2209.07484); a sketch of the idea follows this entry

Python · 13 stars · updated 2 years ago
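
The following is a minimal sketch of the Hydra attention idea as described in the linked paper (my reading, not this repository's code): with as many heads as feature channels and a cosine-similarity kernel, attention collapses to elementwise gating of each normalized query by a single global key-value summary vector, costing O(n·d).

```python
# Sketch of Hydra attention: H = d single-channel heads with a cosine kernel
# reduce multi-head attention to one global summary vector that gates every
# query elementwise, so the cost is linear in both tokens and channels.
import torch
import torch.nn.functional as F

def hydra_attention(q, k, v):
    """q, k, v: (n, d); each token's feature vector is L2-normalized."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    global_kv = (k * v).sum(dim=0)   # (d,): one key-value summary for the sequence
    return q * global_kv             # (n, d): elementwise gating of each query

q, k, v = torch.randn(32, 64), torch.randn(32, 64), torch.randn(32, 64)
print(hydra_attention(q, k, v).shape)  # torch.Size([32, 64])
```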

[ICML 2024] Official implementation of "LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions."

Python · 9 stars · updated 5 months ago

Official Implementation of SEA: Sparse Linear Attention with Estimated Attention Mask (ICLR 2024)

Python · 8 stars · updated 1 month ago

LEAP: Linear Explainable Attention in Parallel for causal language modeling, with O(1) path length and O(1) inference

Jupyter Notebook · 4 stars · updated 2 years ago