💁‍♀️ About me

I’m a second-year PhD student in the Department of Computer Science and Technology at the University of Cambridge, advised by Prof. Rafal Mantiuk.

Previously, I obtained my B.Sc. in pure mathematics at the University of Toronto and my M.Sc. in computer engineering at McGill University, where I had the privilege of being supervised by Prof. Derek Nowrouzezahrai and Prof. Morgan McGuire.

My current research interests lie at the intersection of Computer Graphics and Machine Learning. My research background includes content-adaptive rendering and real-time 3D reconstruction.

Outside of research, I love reading, fashion, and the arts. Psychoanalysis and Eastern and Western philosophy have been my favorite subjects; my philosophy of life is greatly influenced by the I Ching, Plato, and Francis Bacon. My childhood favorite is L’Étoile by Edgar Degas, though these days I am fascinated by the work of Piet Mondrian and Jackson Pollock.

🎓 Scholarships & Awards

Rabin Ezra Scholarship Trust, 2025

Graduate Excellence Awards, McGill University, 2021–2023

🧪 Research

1. NeuMaDiff: Neural Material Synthesis via Hyperdiffusion

Chenliang Zhou, Zheyuan Hu, Alejandro Sztrajman, Yancheng Cai, Yaru Liu, Cengiz Oztireli. Submitted to ICCV 2025.

NeuMaDiff is a novel neural material synthesis framework utilizing hyperdiffusion. The method employs neural fields as a low-dimensional representation and incorporates a multi-modal conditional hyperdiffusion model to learn the distribution over material weights. This enables flexible guidance through inputs such as material type, text descriptions, or reference images, providing greater control over synthesis.
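To give a flavor of the idea, here is a minimal, hypothetical PyTorch sketch of a conditional diffusion model operating over flattened neural-field weight vectors. All names, dimensions, and the simple MLP denoiser are illustrative placeholders, not the paper’s actual architecture or training setup.

```python
import torch
import torch.nn as nn

class ConditionalHyperDenoiser(nn.Module):
    """Toy denoiser over flattened neural-field weights, conditioned on an embedding
    (e.g., of a material type, text description, or reference image)."""
    def __init__(self, weight_dim=4096, cond_dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(weight_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, weight_dim),
        )

    def forward(self, noisy_weights, t, cond):
        # noisy_weights: (batch, weight_dim); t: (batch, 1) normalized timestep;
        # cond: (batch, cond_dim) multi-modal conditioning embedding
        return self.net(torch.cat([noisy_weights, t, cond], dim=-1))

def training_step(model, weights, cond, alphas_cumprod):
    """One DDPM-style noise-prediction step over material weight vectors."""
    batch = weights.shape[0]
    t_idx = torch.randint(0, len(alphas_cumprod), (batch,))
    a = alphas_cumprod[t_idx].unsqueeze(-1)              # (batch, 1)
    noise = torch.randn_like(weights)
    noisy = a.sqrt() * weights + (1 - a).sqrt() * noise  # forward diffusion
    t_norm = t_idx.float().unsqueeze(-1) / len(alphas_cumprod)
    pred = model(noisy, t_norm, cond)
    return nn.functional.mse_loss(pred, noise)

# Usage sketch with a standard linear beta schedule:
# betas = torch.linspace(1e-4, 0.02, 1000)
# alphas_cumprod = torch.cumprod(1 - betas, dim=0)
```

The appeal of diffusing over weights rather than textures is that a neural field gives a compact, resolution-free material representation, and conditioning lets a single model cover several guidance modalities.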


2. Real-Time Scene Reconstruction using Light Field Probes

Yaru Liu, Derek Nowrouzezahrai, Morgan McGuire. I3D 2024 (Poster).

Our research explores novel view synthesis methods that reconstruct complex scenes without relying on explicit geometry data. Our approach leverages sparse real-world images to generate multi-scale implicit representations of scene geometries. A key innovation is our probe data structure, which captures highly accurate depth information from dense data points. This allows us to reconstruct detailed scenes at a lower computational cost, making rendering performance independent of scene complexity. Additionally, compressing and streaming probe data is more efficient than handling explicit scene geometry, making our method ideal for large-scale rendering applications.
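The sketch below illustrates the general flavor of a light field probe query in NumPy: each probe stores radiance and depth per direction, so a ray lookup is a table read whose cost does not depend on scene complexity. The octahedral parameterization is a common choice for probe grids; the class layout, resolution, and nearest-probe heuristic here are simplifying assumptions, not the implementation from the poster.

```python
import numpy as np

def octahedral_encode(d):
    """Map a unit direction to [0,1]^2 octahedral coordinates."""
    d = d / np.abs(d).sum()
    if d[2] < 0:                      # fold the lower hemisphere
        x, y = d[0], d[1]
        d[0] = (1 - abs(y)) * np.sign(x)
        d[1] = (1 - abs(x)) * np.sign(y)
    return d[:2] * 0.5 + 0.5

class LightFieldProbe:
    """Toy probe storing radiance and depth per direction on an n-by-n octahedral grid."""
    def __init__(self, position, n=64):
        self.position = np.asarray(position, dtype=np.float64)
        self.radiance = np.zeros((n, n, 3))
        self.depth = np.full((n, n), np.inf)
        self.n = n

    def lookup(self, direction):
        u, v = octahedral_encode(np.asarray(direction, dtype=np.float64))
        i = min(int(v * self.n), self.n - 1)
        j = min(int(u * self.n), self.n - 1)
        return self.radiance[i, j], self.depth[i, j]

def trace(probes, origin, direction):
    """Answer a ray query from the nearest probe: a constant-time lookup,
    independent of how much geometry the scene contains."""
    nearest = min(probes, key=lambda p: np.linalg.norm(p.position - origin))
    return nearest.lookup(direction / np.linalg.norm(direction))
```

Because the probe grids are regular image-like arrays, they also compress and stream far more readily than explicit meshes, which is the property the paragraph above highlights for large-scale rendering.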