lip-sync
Real-time interactive streaming digital human
MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting
Rhubarb Lip Sync is a command-line tool that automatically creates 2D mouth animation from voice recordings. You can use it for characters in computer games, in animated cartoons, or in any other project that requires animating mouths based on existing recordings.
Wav2Lip UHQ extension for Automatic1111
MFCC-based LipSync plug-in for Unity using Job System and Burst Compiler
Wunjo CE: Face Swap, Lip Sync, Control Remove Objects & Text & Background, Restyling, Audio Separator, Clone Voice, Video Generation. Open Source, Local & Free.
Real-time voice-interactive digital human, supporting both an end-to-end voice solution (GLM-4-Voice - THG) and a cascaded solution (ASR-LLM-TTS-THG). Customizable appearance and voice with no training required; supports voice cloning, with first-packet latency as low as 3 s.
Diffusion-based Portrait and Animal Animation
Talking Head (3D): A JavaScript class for real-time lip-sync using Ready Player Me full-body 3D avatars.
PyTorch Implementation for Paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23)
This project is a digital human that can talk and listen to you. It uses OpenAI's GPT to generate responses, OpenAI's Whisper to transcribe the audio, Eleven Labs to generate the voice, and Rhubarb Lip Sync to generate the lip sync.
A simple Google Colab notebook that translates an original video into multiple languages with lip sync.
Full version of wav2lip-onnx, including face alignment, face enhancement, and more...
3D Avatar Lip Synchronization from speech (JALI based face-rigging)
Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait.
Learning Lip Sync of Obama from Speech Audio