pre-trained-models
Unified Training of Universal Time Series Forecasting Transformers
Collection of awesome parameter-efficient fine-tuning resources.
🎉 PILOT: A Pre-trained Model-Based Continual Learning Toolbox
The PyTorch code repository for "Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning" (CVPR 2024).
Code release for "Timer-XL: Long-Context Transformers for Unified Time Series Forecasting".
Vietnamese Legal Question Answering with Machine Reading Comprehension (MRC) and Answer Generation (AG) approaches. (KSE 2024)
Compares image similarity using features extracted from a pre-trained VGG16 model and cosine similarity, making it well suited to image retrieval and duplicate detection.
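A minimal sketch of that approach, assuming Keras' bundled ImageNet VGG16 weights; the image file names are placeholders:

```python
# Sketch: extract pooled VGG16 features and compare two images with cosine similarity.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# Pre-trained VGG16 without the classification head; global average pooling
# gives one fixed-length feature vector per image.
model = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(path):
    img = image.load_img(path, target_size=(224, 224))   # VGG16 input size
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x)[0]                            # 512-d feature vector

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder image paths for illustration.
sim = cosine_similarity(extract_features("img_a.jpg"), extract_features("img_b.jpg"))
print(f"cosine similarity: {sim:.3f}")   # closer to 1.0 means more visually similar
```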
Fine-tuning GPT-2 models on custom text corpora with Hugging Face's Transformers library for text generation applications.
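A condensed sketch of how such a run typically looks with the Transformers Trainer API; "gpt2" is the standard base checkpoint and "corpus.txt" stands in for the custom corpus:

```python
# Sketch: causal-LM fine-tuning of GPT-2 on a plain-text corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "corpus.txt" is a placeholder path to the custom text corpus.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    # mlm=False -> labels are the input ids shifted internally (causal LM objective).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```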
We built this project during a hackathon to detect emergency vehicles in real time. Although development stopped after the third level, the system still works well for identifying emergency vehicles.
Streamlit app that predicts whether a painting is by van Gogh.
Uses deep learning models to classify pet images by breed and type, demonstrating image classification and model evaluation.
The "Object Detection and Identification Model" is an AI project employing YOLO v3, a pretrained model, and Python. It enables efficient and accurate detection and identification of objects in images, showcasing the prowess of advanced computer vision technology.
This project is a basic face recognition system built with Python, OpenCV, NumPy, and dlib. It detects and recognizes faces in images and videos by comparing detected faces against known faces stored in the system.
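A hedged sketch of the usual dlib pipeline (detect a face, locate landmarks, compute a 128-d embedding, compare by Euclidean distance), assuming the standard dlib model files and placeholder image paths:

```python
# Sketch: compare a query face against a known face using dlib embeddings.
import dlib
import numpy as np

# Standard dlib model files (downloaded separately).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def face_embedding(path):
    img = dlib.load_rgb_image(path)
    face = detector(img, 1)[0]                    # assume exactly one face per image
    shape = predictor(img, face)                  # 68 facial landmarks
    return np.array(encoder.compute_face_descriptor(img, shape))  # 128-d vector

# Placeholder image paths for the enrolled and query faces.
known = face_embedding("known_person.jpg")
probe = face_embedding("query.jpg")
distance = np.linalg.norm(known - probe)
print("match" if distance < 0.6 else "no match", distance)  # 0.6 is the common threshold
```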
This project demonstrates how to fine-tune a pre-trained ResNet18 model with PyTorch for binary classification, adapting the model to distinguish two classes, Positive and Negative.
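A minimal sketch of that adaptation, assuming a recent torchvision and an ImageFolder-style dataset with Positive/Negative subdirectories (the "data/train" path is a placeholder):

```python
# Sketch: swap ResNet18's head for a 2-class output and fine-tune on an image folder.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new head: 2 outputs (Positive / Negative)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
])
# Placeholder dataset path; ImageFolder maps each subdirectory to a class label.
loader = torch.utils.data.DataLoader(
    datasets.ImageFolder("data/train", transform=transform), batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:                     # a single epoch, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```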
dis-cyril is an Alexa-like assistant built with pre-trained models and buzin.
A concept for integrating machine learning (ML) into web apps.
Testing methods in pre-trained model (PTM) enabled open-source software (OSS).
Landmark Detection using pre-trained models.
A semester project using ShuffleNet, MobileNetV3 Small, and ResNet50 to classify real and fake faces on a dataset taken from Kaggle.
Registration of pre-trained models found on Hugging Face using blockchain technology