pretrained-language-models
A curated list of awesome papers related to pre-trained models for information retrieval (a.k.a. pretraining for IR).
On Transferability of Prompt Tuning for Natural Language Processing
The code for the ACL 2023 paper "Linear Classifier: An Often-Forgotten Baseline for Text Classification".
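For context, the kind of linear baseline the paper advocates can be sketched with TF-IDF features feeding a linear SVM in scikit-learn. The tiny corpus and pipeline choices below are illustrative assumptions, not the paper's actual configuration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training data; a real run would use a labelled corpus.
train_texts = ["great acting and a gripping plot", "dull, predictable, a waste of time"]
train_labels = [1, 0]

# TF-IDF bag-of-ngrams feeding a linear SVM: the classic linear baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train_texts, train_labels)
print(clf.predict(["a gripping, great film"]))
```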
Code for the paper "Exploiting Pretrained Biochemical Language Models for Targeted Drug Design", to appear in Bioinformatics (Proceedings of ECCB 2022).
FusionDTI utilises a Token-level Fusion module to effectively learn fine-grained information for Drug-Target Interaction Prediction.
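As a rough illustration of what a token-level fusion module can look like (not FusionDTI's actual implementation; the dimensions, pooling, and cross-attention design are assumptions), here is a PyTorch sketch in which drug tokens attend over protein tokens and the fused representation is pooled into an interaction score:

```python
import torch
import torch.nn as nn

class TokenLevelFusion(nn.Module):
    """Illustrative token-level fusion: drug tokens attend over protein tokens."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, drug_tokens: torch.Tensor, prot_tokens: torch.Tensor) -> torch.Tensor:
        # Each drug token gathers fine-grained context from all protein tokens.
        fused, _ = self.cross_attn(drug_tokens, prot_tokens, prot_tokens)
        # Mean-pool the fused tokens and map them to a single interaction logit.
        return self.scorer(fused.mean(dim=1)).squeeze(-1)

# Toy usage: batch of 2, 50 drug tokens, 300 protein residues, hidden size 256.
score = TokenLevelFusion()(torch.randn(2, 50, 256), torch.randn(2, 300, 256))
```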
An NLP models toolkit built on Keras with a TensorFlow backend.
The official repository for AAAI 2024 Oral paper "Structured Probabilistic Coding".
This research examines the performance of Large Language Models (GPT-3.5 Turbo and Gemini 1.5 Pro) in Bengali Natural Language Inference, comparing them with state-of-the-art models using the XNLI dataset. It explores zero-shot and few-shot scenarios to evaluate their efficacy in low-resource settings.
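A zero-/few-shot setup of this kind usually reduces to a prompt template. The sketch below is a hypothetical template (the wording and label set are assumptions, not the study's actual prompts); passing an empty demonstration list gives the zero-shot case:

```python
def build_nli_prompt(premise, hypothesis, demonstrations=()):
    """Assemble a zero-shot (empty demonstrations) or few-shot NLI prompt."""
    lines = [
        "Decide whether the hypothesis follows from the premise.",
        "Answer with exactly one word: entailment, neutral, or contradiction.",
        "",
    ]
    for p, h, label in demonstrations:  # few-shot in-context examples
        lines += [f"Premise: {p}", f"Hypothesis: {h}", f"Answer: {label}", ""]
    lines += [f"Premise: {premise}", f"Hypothesis: {hypothesis}", "Answer:"]
    return "\n".join(lines)
```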
Identified adverse drug events (ADEs) and associated terms in an annotated corpus using Named Entity Recognition (NER) models built with Flair and PyTorch. Fine-tuned pre-trained transformer models such as XLM-RoBERTa, SpanBERT, and Bio_ClinicalBERT, achieving F1 scores of 0.73 and 0.77 for the BIOES and BIO tagging schemes, respectively.
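A hedged sketch of such a pipeline in recent Flair versions (the corpus path, column layout, and hyperparameters are assumptions, not this project's settings):

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Load a CoNLL-style annotated corpus (the column layout is an assumption).
corpus = ColumnCorpus("data/", {0: "text", 1: "ner"})
label_dict = corpus.make_label_dictionary(label_type="ner")

# Fine-tune XLM-RoBERTa embeddings inside a Flair sequence tagger.
embeddings = TransformerWordEmbeddings("xlm-roberta-base", fine_tune=True)
tagger = SequenceTagger(hidden_size=256, embeddings=embeddings,
                        tag_dictionary=label_dict, tag_type="ner")
ModelTrainer(tagger, corpus).fine_tune("taggers/ade", max_epochs=10)
```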
A Python tool for evaluating the quality of few-shot prompt learning.
LSTM models for text classification on character embeddings.
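A minimal Keras sketch of this architecture, assuming a 128-symbol character vocabulary and binary labels (both assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB = 128    # assumed character-vocabulary size
MAX_LEN = 256  # assumed maximum characters per text

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 32),           # learned character embeddings
    layers.LSTM(128),                      # sequence encoder over characters
    layers.Dense(1, activation="sigmoid")  # binary class probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x, y, ...) expects integer-encoded character sequences padded to MAX_LEN.
```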
Fine-tuned BERT, mBERT, and XLM-RoBERTa for abusive comment detection in Telugu, code-mixed Telugu, and Telugu-English.
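A hedged sketch of this style of fine-tuning with Hugging Face Transformers, using XLM-RoBERTa and a two-example placeholder dataset (the model choice, hyperparameters, and data here are assumptions, not the repository's setup):

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)  # binary: abusive vs. not abusive

# Placeholder data; a real run would load the annotated Telugu corpus.
train_ds = Dataset.from_dict({"text": ["example comment", "another comment"],
                              "label": [0, 1]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds.map(tokenize, batched=True))
trainer.train()
```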
Code for the paper "An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering".