sbert
MTEB: Massive Text Embedding Benchmark
A Heterogeneous Benchmark for Information Retrieval. Easy to use: evaluate your models across 15+ diverse IR datasets.
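Benchmarks like these are typically driven directly from a sentence-transformers model. Below is a minimal sketch of an MTEB run, assuming the `mteb` and `sentence-transformers` packages are installed; the model and task names are illustrative.

```python
# Evaluate a SentenceTransformer model on one MTEB task.
# Model and task names are only examples; any MTEB task name works.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
print(results)
```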
Generative Representational Instruction Tuning
Build and train state-of-the-art natural language processing models using BERT
Search with BERT vectors in Solr, Elasticsearch, OpenSearch and GSI APU
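As a rough sketch of the underlying pattern (not this repository's actual code): encode text with SBERT, then send the vector in an approximate kNN query to Elasticsearch 8.x. The index name, vector field name, and model below are assumptions, and the index must already map the field as a dense_vector.

```python
# Sketch: SBERT query vector + Elasticsearch 8.x approximate kNN search.
# Index name, field name, and model are illustrative assumptions.
import requests
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
query_vector = model.encode("how do I reset my password?").tolist()

body = {
    "knn": {
        "field": "content_vector",
        "query_vector": query_vector,
        "k": 10,
        "num_candidates": 100,
    },
    "_source": ["title"],
}
resp = requests.post("http://localhost:9200/docs/_search", json=body)
print(resp.json()["hits"]["hits"])
```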
A document search tool built on sentence-transformers and ChatGLM
Rust port of sentence-transformers (https://github.com/UKPLab/sentence-transformers)
TextReducer - A Tool for Summarization and Information Extraction
Text similarity, semantic vectors, text vectors, text-similarity, similarity, sentence-similarity, BERT, SimCSE, BERT-Whitening, Sentence-BERT, PromCSE, SBERT
Using machine learning on your Anki collection to enhance scheduling via semantic clustering and semantic similarity
KoBERTopic adapts BERTopic to Korean data by modifying the tokenizer and the underlying BERT model.
Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US and India. Bringing NLP to the world of policy analysis through an extensible framework that includes scraping, preprocessing, active learning and text analysis pipelines.
Heterogeneous, Task- and Domain-Specific Benchmark for Unsupervised Sentence Embeddings used in the TSDAE paper: https://arxiv.org/abs/2104.06979.
Interactive tree-maps with SBERT & Hierarchical Clustering (HAC)
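A minimal sketch of the pattern behind such tools: embed sentences with SBERT, then group them with hierarchical agglomerative clustering. It assumes scipy is installed; the model, sample sentences, and cluster count are illustrative.

```python
# Embed sentences with SBERT, then apply hierarchical agglomerative clustering.
# Model name, sample sentences, and the number of clusters are illustrative.
from scipy.cluster.hierarchy import linkage, fcluster
from sentence_transformers import SentenceTransformer

sentences = [
    "The cat sits on the mat.",
    "A kitten is resting on a rug.",
    "Stock markets fell sharply today.",
    "Shares dropped amid inflation fears.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences, normalize_embeddings=True)

Z = linkage(embeddings, method="ward")           # dendrogram for the cluster hierarchy
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 flat clusters
print(list(zip(sentences, labels)))
```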
Run sentence-transformers (SBERT) compatible models in Node.js or the browser.
Embedding Representation for Indonesian Sentences!
Classification pipeline based on SentenceTransformer and Facebook's Faiss nearest-neighbor search library
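A minimal sketch of that pattern, assuming the `sentence-transformers` and `faiss-cpu` packages; the model, texts, and labels are illustrative.

```python
# Embed labeled sentences, index them with Faiss, and classify a new query
# by its nearest neighbor's label. Model, texts, and labels are illustrative.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

train_texts = ["refund my order", "reset my password", "track my package"]
train_labels = ["billing", "account", "shipping"]

emb = model.encode(train_texts, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(emb.shape[1])  # inner product equals cosine on normalized vectors
index.add(emb)

query = model.encode(["where is my parcel?"], normalize_embeddings=True).astype("float32")
_, nearest = index.search(query, 1)
print(train_labels[nearest[0][0]])       # label of the nearest training example
```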