roberta
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"; see the LoRA sketch below
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm series of models); see the whole-word-masking sketch below
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.); see the BertViz usage sketch below
Awesome Pretrained Chinese NLP Models: a curated collection of high-quality Chinese pre-trained models, large models, multimodal models, and large language models
Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
A Lite BERT for Self-Supervised Learning of Language Representations, with a large collection of Chinese pre-trained ALBERT models
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
Rust-native, ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2, ...)
RoBERTa for Chinese: Chinese pre-trained RoBERTa models
news-please - an integrated web crawler and information extractor for news that just works
The implementation of DeBERTa
🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT2, decoders, etc.) on CPU and GPU.
CLUENER2020: Chinese fine-grained named entity recognition
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Large-scale pre-training corpus for Chinese (100 GB of Chinese pre-training data)
Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
🤖 A PyTorch library of curated Transformer models and their composable components
A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models
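The three sketches below illustrate techniques named in the list above; they are minimal, hedged examples, not the repositories' own code. First, the loralib entry: LoRA freezes a pre-trained weight matrix and learns a low-rank update B @ A on top of it, so only the two small factors are trained. A minimal sketch of that idea in plain PyTorch (the class name and hyperparameters are illustrative, not loralib's API):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Stand-in for the pre-trained weight; frozen during fine-tuning.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)
        # Low-rank factors: A is small random, B starts at zero,
        # so the update B @ A is zero at initialization.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return x @ (self.weight + self.scaling * (self.B @ self.A)).T

layer = LoRALinear(768, 768, r=8)
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
print(trainable)  # ['A', 'B']: only the low-rank factors receive gradients
```

Because B is zero-initialized, the adapted layer computes exactly the frozen pre-trained function at the start of training.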
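Second, the whole-word-masking entry: when a WordPiece tokenizer splits one word into several sub-tokens, whole word masking masks all of the word's pieces together rather than masking pieces independently. A rough sketch, assuming the BERT convention that continuation pieces start with "##":

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Mask entire words at once: a piece starting with '##' continues the previous word."""
    words, current = [], []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and current:
            current.append(i)
        else:
            if current:
                words.append(current)
            current = [i]
    if current:
        words.append(current)

    masked = list(tokens)
    for word in words:
        if random.random() < mask_prob:
            for i in word:          # mask every piece of the chosen word
                masked[i] = mask_token
    return masked

# "philharmonic" is split into three pieces; they are masked (or kept) together.
print(whole_word_mask(["the", "phil", "##harmon", "##ic", "played"], mask_prob=0.5))
```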
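Third, the BertViz entry: typical usage pairs a Hugging Face model that returns attention weights with one of BertViz's views. A sketch of the head view, based on BertViz's documented API (run it in a Jupyter notebook, where the view renders interactively):

```python
from transformers import AutoModel, AutoTokenizer
from bertviz import head_view

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
outputs = model(**inputs)  # outputs.attentions holds one tensor per layer

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(outputs.attentions, tokens)  # per-layer, per-head attention view
```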