Dongchan Min
Title · Cited by · Year
Meta-StyleSpeech: Multi-speaker adaptive text-to-speech generation
D Min, DB Lee, E Yang, SJ Hwang
International Conference on Machine Learning, 7748-7759, 2021
Cited by 113 · 2021
Meta-GMVAE: Mixture of Gaussian VAE for unsupervised meta-learning
DB Lee, D Min, S Lee, SJ Hwang
International Conference on Learning Representations, 2020
Cited by 40 · 2020
Grad-StyleSpeech: Any-speaker adaptive text-to-speech synthesis with diffusion models
M Kang, D Min, SJ Hwang
ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and …, 2023
Cited by 30* · 2023
StyleTalker: One-shot style-based audio-driven talking head video generation
D Min, M Song, SJ Hwang
arXiv preprint arXiv:2208.10922, 2022
Cited by 10 · 2022
StyleLipSync: Style-based Personalized Lip-sync Video Generation
T Ki, D Min
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 3 · 2023
Distortion-aware network pruning and feature reuse for real-time video segmentation
H Rhee, D Min, S Hwang, B Andreis, SJ Hwang
arXiv preprint arXiv:2206.09604, 2022
Cited by 2 · 2022
Context-Preserving Two-Stage Video Domain Translation for Portrait Stylization
D Kim, E Ko, H Kim, Y Kim, J Kim, D Min, J Kim, SJ Hwang
arXiv preprint arXiv:2305.19135, 2023
Cited by 1 · 2023
Learning to Generate Conditional Tri-plane for 3D-aware Expression Controllable Portrait Animation
T Ki, D Min, G Chae
arXiv preprint arXiv:2404.00636, 2024
2024
Meta-StyleSpeech
DC Min
Korea Advanced Institute of Science and Technology (KAIST), 2022
2022
StyleLipSync: Style-based Personalized Lip-sync Video Generation Supplementary Material
T Ki, D Min