diff --git a/README.md b/README.md
index d1bbb0e..19e0f42 100644
--- a/README.md
+++ b/README.md
@@ -127,10 +127,10 @@ Paper by Folder : [📁/survey](https://github.com/OpenHuman-ai/awesome-gesture_
 | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------- | --- |
 | FineMotion | 【ICMI 2023】The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=rQ8beDwKFaQ) | |
 | Gesture Motion Graphs | 【ICMI 2023】Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=OFeYOJ6d4d0) | |
-| Diffusion-based | 【ICMI 2023】Diffusion-based co-speech gesture generation using joint text and audio representation [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=2ycIAWzOd1E) | |
+| Diffusion-based | 【ICMI 2023】(SG) Diffusion-based co-speech gesture generation using joint text and audio representation [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=2ycIAWzOd1E) | ⭐ |
 | UEA Digital Humans | 【ICMI 2023】The UEA Digital Humans entry to the GENEA Challenge 2023 [[paper]]() ; [[JonathanPWindle/UEA-DH-GENEA23]](https://github.com/JonathanPWindle/uea-dh-genea23) | [[youtube]](https://www.youtube.com/watch?v=u6LXN7ka674) | |
 | FEIN-Z | 【ICMI 2023】FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=5lur1pDNnvM) | |
-| DiffuseStyleGesture+ | 【ICMI 2023】The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=PNKpvTgfh9Q) | 🏆 |
+| DiffuseStyleGesture+ | 【ICMI 2023】(SF) The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=PNKpvTgfh9Q) | 🏆 |
 | Discrete Diffusion | 【ICMI 2023】Discrete Diffusion for Co-Speech Gesture Synthesis [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=JgQdpZ2qCzk) | |
 | KCL-SAIR | 【ICMI 2023】The KCL-SAIR team's entry to the GENEA Challenge 2023 Exploring Role-based Gesture Generation in Dyadic Interactions: Listener vs. Speaker [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=FT1ePpvpYso) | |
 | Gesture Generation | 【ICMI 2023】Gesture Generation with Diffusion Models Aided by Speech Activity Information [[paper]]() | [[youtube]](https://www.youtube.com/watch?v=7_I8rT7pXWo) | |
@@ -246,6 +246,7 @@ Paper by Folder : [📁/survey](https://github.com/OpenHuman-ai/awesome-gesture_
 - 【CVPR 2023】 Semi-supervised Speech-driven 3D Facial Animation via Cross-modal Encoding [[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Yang_Semi-supervised_Speech-driven_3D_Facial_Animation_via_Cross-modal_Encoding_ICCV_2023_paper.pdf)
 - 【ACM MM 2023】UnifiedGesture - A Unified Gesture Synthesis Model for Multiple Skeletons [[paper]](https://arxiv.org/pdf/2309.07051.pdf) ; [[YoungSeng/UnifiedGesture]](https://github.com/YoungSeng/UnifiedGesture)
 - 【ICMI 2023】 AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis [[paper]](https://dl.acm.org/doi/pdf/10.1145/3577190.3614135) ; [[hvoss-techfak/AQGT]](https://github.com/hvoss-techfak/AQGT)
+- 【ICMI 2023】Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation [[paper]](https://openreview.net/pdf?id=vD3_u_kbkqS)
 - DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model [[paper]](https://arxiv.org/pdf/2301.10047.pdf)
 - BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer [[paper]](https://i.cs.hku.hk/~taku/kunkun2023.pdf)
 - EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation [[paper]](https://arxiv.org/pdf/2305.18891.pdf) ;
@@ -255,7 +256,7 @@ Paper by Folder : [📁/survey](https://github.com/OpenHuman-ai/awesome-gesture_
 - Audio is all in one: speech-driven gesture synthetics using WavLM pre-trained model [[paper]](https://arxiv.org/pdf/2308.05995.pdf)
 - The KCL-SAIR team’s entry to the GENEA Challenge 2023 Exploring Role-based Gesture Generation in Dyadic Interactions: Listener vs. Speaker [[paper]](https://openreview.net/pdf?id=oW4rUGjbMYg)
 - The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation [[paper]](https://openreview.net/pdf?id=Mm44wlJICIj)
-- Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation [[paper]](https://openreview.net/pdf?id=vD3_u_kbkqS)
+
 - Co-Speech Gesture Generation via Audio and Text Feature Engineering [[paper]](https://openreview.net/pdf?id=mK2qMNf0_Nd)
 - Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment [[paper]](https://openreview.net/pdf?id=CMivR3x5fpC)
 - Gesture Generation with Diffusion Models Aided by Speech Activity Information [[paper]](https://openreview.net/pdf?id=S9Efb3MoiZ)
diff --git a/papers/2023/Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation.pdf b/papers/2023/Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation.pdf
new file mode 100644
index 0000000..4d6c988
Binary files /dev/null and b/papers/2023/Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation.pdf differ