From e454232168451be994f9b6d4500ea95529390be0 Mon Sep 17 00:00:00 2001
From: Zunnan Xu <52844577+kkakkkka@users.noreply.github.com>
Date: Sun, 19 Jan 2025 22:34:04 +0800
Subject: [PATCH] update 2024: Add several published papers, GitHub links, and homepage

---
 README.md | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 8508216..53d0fe5 100644
--- a/README.md
+++ b/README.md
@@ -216,14 +216,16 @@ Paper by Folder : [📁/survey](https://github.com/OpenHuman-ai/awesome-gesture_
 
 ### **2024**
 
-- 【CVPR 2024】 DiffTED: One-shot Audio-driven TED Talk Video Generation with Diffusion-based Co-speech Gestures [[paper]]()
-- 【CVPR 2024】 EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling [[paper]]()
-- 【CVPR 2024】 Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion [[paper]]()
-- 【CVPR 2024】 Using Language-Aligned Gesture Embeddings for Understanding Gestures Accompanying Math Terms [[paper]]()
+- 【CVPR 2024】 DiffTED: One-shot Audio-driven TED Talk Video Generation with Diffusion-based Co-speech Gestures [[paper]](https://openaccess.thecvf.com/content/CVPR2024W/HuMoGen/papers/Hogue_DiffTED_One-shot_Audio-driven_TED_Talk_Video_Generation_with_Diffusion-based_Co-speech_CVPRW_2024_paper.pdf); [[Ditzley/DiffTED]](https://github.com/Ditzley/DiffTED)
+- 【CVPR 2024】 EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling [[paper]](https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_EMAGE_Towards_Unified_Holistic_Co-Speech_Gesture_Generation_via_Expressive_Masked_CVPR_2024_paper.pdf); [[PantoMatrix/PantoMatrix]](https://github.com/PantoMatrix/PantoMatrix)
+- 【CVPR 2024】 Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion [[paper]](https://openaccess.thecvf.com/content/CVPR2024/papers/Chhatre_Emotional_Speech-driven_3D_Body_Animation_via_Disentangled_Latent_Diffusion_CVPR_2024_paper.pdf); [[kiranchhatre/amuse]](https://github.com/kiranchhatre/amuse)
+- 【CVPR 2024】 Using Language-Aligned Gesture Embeddings for Understanding Gestures Accompanying Math Terms [[paper]](https://openaccess.thecvf.com/content/CVPR2024W/MAR/papers/Maidment_Using_Language-Aligned_Gesture_Embeddings_for_Understanding_Gestures_Accompanying_Math_Terms_CVPRW_2024_paper.pdf)
+- 【SIGGRAPH ASIA 2024】 Body Gesture Generation for Multimodal Conversational Agents [[paper]](https://dl.acm.org/doi/pdf/10.1145/3680528.3687648); [[homepage]](https://pulsekim.github.io/posts/bodygesture/)
 - 【SIGGRAPH 2024】Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis [[paper]](https://pku-mocca.github.io/Semantic-Gesticulator-Page/) ; [[video]](https://www.youtube.com/watch?v=gKGqCE7id4U) ; [[LuMen-ze/Semantic-Gesticulator-Official]](https://github.com/LuMen-ze/Semantic-Gesticulator-Official)
-- SynTalker - Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation [[paper]](https://arxiv.org/abs/2410.00464) ; [[homepage]](https://robinwitch.github.io/SynTalker-Page/) ; [[video]](https://www.youtube.com/watch?v=hkCQLrLarxs) ; [[RobinWitch/SynTalker]](https://github.com/RobinWitch/SynTalker)
-- MDT-A2G- Exploring Masked Diffusion Transformers for Co-Speech Gesture Generation [[paper]](https://arxiv.org/abs/2408.03312) ; [[homepage]](https://xiaofenmao.github.io/web-project/MDT-A2G/)
+- 【ACMMM 2024】SynTalker - Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation [[paper]](https://arxiv.org/abs/2410.00464) ; [[homepage]](https://robinwitch.github.io/SynTalker-Page/) ; [[video]](https://www.youtube.com/watch?v=hkCQLrLarxs) ; [[RobinWitch/SynTalker]](https://github.com/RobinWitch/SynTalker)
+- 【ACMMM 2024】MDT-A2G- Exploring Masked Diffusion Transformers for Co-Speech Gesture Generation [[paper]](https://arxiv.org/abs/2408.03312) ; [[homepage]](https://xiaofenmao.github.io/web-project/MDT-A2G/)
 - 【ACM MM 2024】 MambaGesture: Enhancing Co-Speech Gesture Generation with Mamba and Disentangled Multi-Modality Fusion [[paper]](https://arxiv.org/abs/2407.19976) ; [[homepage]](https://fcchit.github.io/mambagesture/)
+- 【NeurIPS 2024】MambaTalk - Efficient Holistic Gesture Synthesis with Selective State Space Models [[paper]](https://arxiv.org/pdf/2403.09471) ; [[homepage]](https://kkakkkka.github.io/MambaTalk/) ; [[kkakkkka/MambaTalk]](https://github.com/kkakkkka/MambaTalk)
 - 【ICMI 2024】Gesture Area Coverage to Assess Gesture Expressiveness and Human-Likeness [[paper]](https://openreview.net/pdf?id=Iso5lbByDI) ; [[AI-Unicamp/gesture-area-coverage]](https://github.com/AI-Unicamp/gesture-area-coverage)