flan-t5
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
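The "single line of code" works because Xinference exposes an OpenAI-compatible REST API, so an app only needs to point its request URL at the local server. A minimal stdlib-only sketch of the idea (the port 9997 default, endpoint path, and model name here are assumptions for illustration, not taken from the repository):

```python
import json
from urllib import request

# Assumed local Xinference endpoint; the base URL is the "single line" that
# changes when swapping OpenAI for a locally served model.
XINFERENCE_URL = "http://localhost:9997/v1/chat/completions"

def make_chat_request(model: str, prompt: str) -> request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return request.Request(
        XINFERENCE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = make_chat_request("flan-t5", "What is instruction tuning?")
```

Because the request body and routes follow the OpenAI schema, existing client code keeps working unchanged against the swapped-in backend.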
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
[Preprint] Learning to Filter Context for Retrieval-Augmented Generation
Official implementation of the paper "CoEdIT: Text Editing by Task-Specific Instruction Tuning" (EMNLP 2023)
LLMs4OL: Large Language Models for Ontology Learning
This repository contains the code to train Flan-T5 with Alpaca instructions and low-rank adaptation.
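Low-rank adaptation (LoRA) freezes the pretrained weight matrix and learns only a small update factored into two narrow matrices, which is what makes fine-tuning large models like Flan-T5 affordable. A minimal numpy sketch of the idea (illustrative only; the dimensions and the zero-initialization of one factor follow the standard LoRA recipe, not this repository's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4  # layer dimensions and low rank r << min(d, k)

W = rng.normal(size=(d, k))            # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01     # trainable low-rank factor
B = np.zeros((d, r))                   # trainable factor, initialized to zero

def lora_forward(x):
    # Effective weight is W + B @ A, computed without materializing the sum.
    return x @ W.T + (x @ A.T) @ B.T

x = rng.normal(size=(2, k))
# With B zero at initialization, LoRA reproduces the frozen model exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only A and B are updated during training, so the number of trainable parameters is r·(d+k) instead of d·k, an 8× reduction even at these toy sizes.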
Rethinking Negative Instances for Generative Named Entity Recognition [ACL 2024 Findings]
Fine-tuning of the Flan-T5 LLM for text classification 🤖: adapting a state-of-the-art language model to improve its ability to classify text data.
Tools and our test data developed for the HackAPrompt 2023 competition
Build a Large Language Model (From Scratch) book and Finetuned Models
A template Next.js app for running language models like FLAN-T5 with Replicate's API
Text classification on the IMDB dataset using the Flan-T5 large language model, achieving 93% accuracy.
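Because Flan-T5 is instruction-tuned, classification tasks like IMDB sentiment are typically phrased as natural-language prompts and the generated answer is mapped back onto the label set. A small illustrative sketch (the prompt wording and label parsing here are assumptions, not the repository's code):

```python
def build_sentiment_prompt(review: str) -> str:
    # Instruction-style prompt; Flan-T5 replies with a word like "positive".
    return f"Review: {review}\nIs this review positive or negative?"

def parse_label(generated: str) -> str:
    # Map the model's free-form answer onto the two IMDB labels.
    return "positive" if "positive" in generated.lower() else "negative"

prompt = build_sentiment_prompt("A moving, beautifully shot film.")
```

The prompt string would be fed to the model's `generate` method, and `parse_label` applied to the decoded output.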
The TABLET benchmark for evaluating instruction learning with LLMs for tabular prediction.
Use AI to personify books, so that you can talk to them 🙊
In-context learning, Fine-Tuning, RLHF on Flan-T5
Document summarization app built with a large language model (LLM) and the LangChain framework. Uses a pre-trained T5 model and its tokenizer from the Hugging Face Transformers library in a summarization pipeline that generates summaries.