Short Bio

Donny (Yuedong) Chen is a Research Scientist at ByteDance Seed (Singapore), developing 3D and 4D foundation models. All views expressed here are solely his own (see the disclaimer below).

Previously, Donny obtained his PhD degree at Monash University (Australia) under the supervision of Prof. Jianfei Cai, Prof. Tat-Jen Cham and Dr. Bohan Zhuang. His PhD research focused on Feed-Forward Novel View Synthesis from Sparse Observations. During his PhD, he also collaborated with Haofei Xu, Prof. Marc Pollefeys, Prof. Andreas Geiger, Dr. Chuanxia Zheng, Prof. Andrea Vedaldi and Dr. Qianyi Wu.

Before that, he was a Research Assistant at the Institute for Media Innovation, NTU (Singapore). During this period, his research focused on enhancing emotion recognition by incorporating human prior knowledge.

He received both his MEng and BEng degrees in Software Engineering from Sun Yat-sen University. During his BEng studies, he spent a semester as an exchange student at National Chi Nan University (Taiwan), where he collaborated closely with David Cheng.

Selected Publications

🤖🧠👌🏼 He prefers simple yet effective solutions

* indicates Equal Contribution; † indicates Project Lead

World-R1: Reinforcing 3D Constraints for Text-to-Video Generation
ICML 2026
[arXiv] [code] [project page] [Paper of the Day #1]
Feed-Forward 3D Scene Modeling: A Problem-Driven Perspective
Survey 2026
[arXiv] [code] [project page]
Depth Anything 3: Recovering the Visual Space from Any Views
Tech Report 2025
[arXiv] [code] [project page] [demo] [gallery] [news] [Spaces of the Week #4] [Paper of the Day #2]
[also presented at ICLR 2026 Oral (recording, slides)]
Trace Anything: Representing Any Video in 4D via Trajectory Fields
ICLR 2026
[arXiv] [code] [project page]
Revisiting Depth Representations for Feed-Forward 3D Gaussian Splatting
3DV 2026
[arXiv] [code] [project page] [also presented at SpaVLE@NeurIPS'25]
ZPressor: Bottleneck-Aware Compression for Scalable Feed-Forward 3DGS
NeurIPS 2025
[arXiv] [code] [project page]
Explicit Correspondence Matching for Generalizable Neural Radiance Fields
TPAMI 2025
[arXiv] [code] [project page]
MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views
NeurIPS 2024
[arXiv] [code] [project page] [also presented at AJCAI'24 & 3DV'25]
MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images
ECCV 2024 (Oral)
[arXiv] [code] [project page] [Hacker News] [Trendshift #20] [Most Influential ECCV Paper #13]
MuRF: Multi-Baseline Radiance Fields
CVPR 2024
[arXiv] [code] [project page]
Sem2NeRF: Converting Single-View Semantic Masks to Neural Radiance Fields
ECCV 2022
[arXiv] [code] [project page] [demo video]

More on Google Scholar


Projects & Talks

  • 28-01-2025, Invited talk "Feed-forward NVS from Sparse Inputs" at Amazon, Tel Aviv, hosted by Lior Fritz.
  • 08-11-2024, Invited talk "Feed-forward Novel View Synthesis" at Wayve, London, hosted by Joe Polin.
  • Oral presentation at ICLR 2026
  • Invited talk at 3DV'25 Nectar Track
  • Invited talk at AJCAI 2024
  • Oral presentation at ECCV 2024
  • Invited talk at SHUZIHUANYU
  • Invited talk at 3DCVer
  • Demo at Monash Uni. Open Day 2022
  • A popular replication of GANimation

Miscellanies

  • Conference Reviewer: ECCV('24-'26), CVPR('23-'26), ICCV('23-'25), NeurIPS('24-'25), ICLR('25-'26), ICML('25), 3DV('24-'26), AAAI('24-'26), ACMMM('21-'24), ACCV('24), ISMAR('23-'24), IEEEVR('24)
  • Journal Reviewer: TPAMI, IJCV, TIP, TVCG, TMM, TCSVT, TOMM, TVCJ, Computers & Graphics, The Visual Computer
  • Donny is a native speaker of Teochew, fluent in English, Cantonese, Mandarin, and also familiar with Singlish.
  • You are welcome to use this personal homepage as a template for your own. See the documentation for setup notes.

DISCLAIMER

All opinions and views expressed on this page are solely his own and should not be interpreted as representing the views, positions, policies, or official communications of his employer. Nothing presented here constitutes professional advice or an authorized endorsement by the company. He does not respond to inquiries concerning confidential information, internal projects, proprietary materials, or any matters that may conflict with the interests of his employer. All requests for research cooperation, formal collaboration, or other official engagements must be directed to the company through its designated channels.