face-animation
Bring portraits to life!
Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
Official implementation of the paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models.
This codebase demonstrates how to synthesize realistic 3D character animations given an arbitrary speech signal and a static character mesh.
Wunjo CE: Face Swap, Lip Sync, Control, Remove Objects & Text & Background, Restyling, Audio Separator, Clone Voice, Video Generation. Open Source, Local & Free.
[ECCV 2024 Oral] EDTalk - Official PyTorch Implementation
[CVPR 2023] OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering.
ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control (e.g., audio, expression).
Official Pytorch Implementation of 3DV2021 paper: SAFA: Structure Aware Face Animation.
Blender add-on implementing the VOCA neural network.
Speech to Facial Animation using GANs
[NeurIPS 2023] Learning Motion Refinement for Unsupervised Face Animation
A software pipeline for creating realistic videos of people talking, using only images.
One-shot face animation using a webcam, capable of running in real time.
Language-Guided Face Animation by Recurrent StyleGAN-based Generator
Face Animation from Text 🧙
Official Implementation of "Style Generator Inversion for Image Enhancement and Animation".