p-tuning

We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning) for easy use. The goal is a fine-tuning platform that makes it easy for researchers to get started with and use large models. We welcome open-source enthusiasts to open any meaningful PR on this repo and to integrate as many LLM-related technologies as possible.

Jupyter Notebook · 2766 stars · updated 2 years ago
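The entry above advertises a single interface across parameter-efficient methods such as LoRA and P-tuning. As a rough illustration of what that unification usually looks like, here is a minimal sketch using the Hugging Face PEFT library; the base model "gpt2" and the hyperparameter values are placeholder assumptions, not taken from the repo.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PromptEncoderConfig, TaskType, get_peft_model

# One frozen base model, interchangeable parameter-efficient configs.
base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder backbone

# LoRA: inject trainable low-rank adapters into the attention projections.
lora_cfg = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)

# P-tuning: train a small prompt encoder that emits virtual-token embeddings.
ptuning_cfg = PromptEncoderConfig(
    task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20, encoder_hidden_size=128
)

# Switching methods is a one-line change; the training loop stays identical.
model = get_peft_model(base, lora_cfg)  # or: get_peft_model(base, ptuning_cfg)
model.print_trainable_parameters()      # only the adapter/prompt parameters are trainable
```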

Fine-tuning of ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B on concrete downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.

Python · 2759 stars · updated 2 years ago
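Of the methods listed in the entry above, Freeze is the simplest: keep most backbone weights frozen and update only the last few transformer blocks. A minimal sketch of that idea follows; the model name and layer indices are illustrative assumptions (gpt2's 12 blocks are named transformer.h.0 through transformer.h.11), not the ChatGLM repo's actual code.

```python
from transformers import AutoModelForCausalLM

# Freeze fine-tuning: only the last two blocks and the final layer norm are trained.
model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder backbone
unfrozen = ("h.10.", "h.11.", "ln_f")
for name, param in model.named_parameters():
    param.requires_grad = any(tag in name for tag in unfrozen)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable}/{total} ({100 * trainable / total:.1f}%)")
```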

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks

Python · 2055 stars · updated 2 years ago
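The entry above is the P-Tuning v2 codebase. Its core idea, deep prompt tuning, prepends trainable prompts at every transformer layer (as key/value prefixes) rather than only at the input. A rough PEFT-based sketch using the prefix-tuning config; the backbone, task, and hyperparameters are placeholder assumptions rather than the repo's settings.

```python
from transformers import AutoModelForSequenceClassification
from peft import PrefixTuningConfig, TaskType, get_peft_model

# Deep prompts: trainable key/value prefixes are prepended in every layer
# while the backbone weights stay frozen.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # placeholder backbone and task
)
config = PrefixTuningConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,
    prefix_projection=False,  # no reparameterization MLP over the prefix embeddings
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```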

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

Python · 936 stars · updated 3 years ago
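The entry above is the original P-tuning release ("GPT Understands, Too"). Its key idea is a small prompt encoder (an LSTM plus MLP) that turns trainable virtual-token indices into continuous embeddings prepended to the frozen model's input embeddings. A minimal PyTorch sketch of that idea; the class name and sizes are illustrative assumptions, not the repo's actual code.

```python
import torch
import torch.nn as nn

class PromptEncoder(nn.Module):
    """Maps trainable virtual-token indices to continuous prompt embeddings."""

    def __init__(self, num_virtual_tokens: int = 20, hidden: int = 128, dim: int = 768):
        super().__init__()
        self.embedding = nn.Embedding(num_virtual_tokens, dim)
        self.lstm = nn.LSTM(dim, hidden, num_layers=2, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.register_buffer("indices", torch.arange(num_virtual_tokens))

    def forward(self, batch_size: int) -> torch.Tensor:
        prompts = self.embedding(self.indices).unsqueeze(0)  # (1, T, dim)
        prompts, _ = self.lstm(prompts)                      # reparameterize with an LSTM
        prompts = self.mlp(prompts)                          # back to the model's hidden size
        return prompts.expand(batch_size, -1, -1)            # (B, T, dim)

# During training, these prompt embeddings are concatenated in front of the frozen
# language model's input embeddings; only the prompt encoder's parameters are updated.
```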

Work with LLMs easily through OpenAI- and LangChain-compatible interfaces, with support for Baidu ERNIE Bot (Wenxin Yiyan), iFLYTEK Spark, Tencent Hunyuan, Zhipu ChatGLM, and more.

Jupyter Notebook · 447 stars · updated 1 year ago
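Since the entry above exposes an OpenAI-compatible interface, the standard OpenAI Python client can talk to it directly. A minimal sketch; the base_url, api_key, and model name are placeholder assumptions, not values documented by the repo.

```python
from openai import OpenAI

# Point the stock OpenAI client at a locally served, OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholders
reply = client.chat.completions.create(
    model="chatglm",  # placeholder model name
    messages=[{"role": "user", "content": "Explain P-tuning in one sentence."}],
)
print(reply.choices[0].message.content)
```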

This repository contains AI Bootcamp material consisting of a workflow for LLMs.

Jupyter Notebook · 93 stars · updated 1 month ago

Code for the COLING 2022 paper "DPTDR: Deep Prompt Tuning for Dense Passage Retrieval".

Python · 26 stars · updated 2 years ago

Pipelines for Fine-Tuning of LLMs

Python · 4 stars · updated 1 month ago

P-Tuning v2 integrated with MRC (machine reading comprehension) for NER.

Python · 3 stars · updated 2 years ago

This bootcamp is designed to give NLP researchers an end-to-end overview of the fundamentals of the NVIDIA NeMo framework, a complete solution for building large language models. It also offers hands-on exercises complemented by tutorials, code snippets, and presentations to help researchers get started with NeMo LLM Service and Guardrails.

Jupyter Notebook · 2 stars · updated 1 year ago

Comparison of different PEFT adaptation methods for fine-tuning on downstream tasks and benchmarks.

Python · 1 star · updated 2 years ago

Reproduction of the prompt-learning method P-Tuning v2 from the paper "P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks"; models used: DeBERTa and ChatGLM2; additional task: RACE.

Python · 0 stars · updated 3 months ago