PhD candidate · graduating May 2026
Tianqi Chen 陈天麒

teaches machines to denoise reality.

Final-year Statistics PhD at UT Austin, advised by Prof. Mingyuan Zhou. I build diffusion models and multimodal generative systems — and the algorithms that keep them fast, safe, and trustworthy.

Austin, TX · tqch [at] utexas [dot] edu

Recent updates

May 2025
Internship · Joined Google as a Software Engineer Intern in Mountain View, building diffusion-based video super-resolution on top of CogVideoX 1.5.
Jan 2025
ICLR 2025 · Score Forgetting Distillation — a swift, data-free method for machine unlearning in diffusion models — accepted.
Sep 2024
Award · Received the University Graduate Continuing Fellowship from UT Austin.
Jun 2024
Internship · Applied Scientist Intern at Amazon, Seattle — short-form video localization and landscape-to-portrait conversion pipelines.
May 2024
ICML 2024 · A Dense Reward View on Aligning Text-to-Image Diffusion with Preference accepted.
Sep 2023
NeurIPS 2023 · Beta Diffusion accepted.

What I work on

I'm interested in the theory and practice of generative modeling — particularly how iterative denoising processes can be generalized beyond the Gaussian playbook and pushed toward faster, safer, and more controllable systems. My recent work spans four threads:

Generalized diffusion
2023 — present
Beyond Gaussian noise

A unified, Bregman-divergence view of iterative corruption / recovery — leading to non-Gaussian variants like Beta Diffusion (NeurIPS '23) and Learning to Jump (ICML '23) for sparse, non-negative, heavy-tailed data.
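In code, the move away from Gaussian noise is easy to picture. Here is a toy sketch of three forward-corruption kernels written with `torch.distributions` — the Beta and binomial-thinning forms mirror the general shape used in Beta Diffusion and Learning to Jump, but the schedule value `alpha_t` and concentration `eta` below are illustrative placeholders, not the papers' settings:

```python
import torch
from torch.distributions import Normal, Beta, Binomial

# Illustrative sketch: three ways to corrupt data at noise level alpha_t.
# `eta` and `alpha_t` are placeholder hyperparameters.

def gaussian_corrupt(x0, alpha_t):
    """DDPM-style kernel: q(z_t | x0) = N(sqrt(alpha_t) x0, 1 - alpha_t)."""
    return Normal(alpha_t.sqrt() * x0, (1 - alpha_t).sqrt()).sample()

def beta_corrupt(x0, alpha_t, eta=100.0):
    """Beta-Diffusion-style kernel for data in (0, 1):
    q(z_t | x0) = Beta(eta * alpha_t * x0, eta * (1 - alpha_t * x0))."""
    return Beta(eta * alpha_t * x0, eta * (1 - alpha_t * x0)).sample()

def binomial_thin(x0_counts, alpha_t):
    """Learning-to-Jump-style thinning for counts:
    q(z_t | x0) = Binomial(x0, alpha_t) -- each unit survives w.p. alpha_t."""
    return Binomial(x0_counts, probs=alpha_t).sample()

x0 = torch.rand(4).clamp(1e-3, 1 - 1e-3)     # bounded data for Gaussian / Beta
counts = torch.randint(0, 50, (4,)).float()  # count data for thinning
alpha_t = torch.tensor(0.7)
print(gaussian_corrupt(x0, alpha_t), beta_corrupt(x0, alpha_t), binomial_thin(counts, alpha_t))
```

The Bregman-divergence view is what ties these together: each corruption kernel comes with a matched divergence that plays the role squared error plays in the Gaussian case.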

Trustworthy AI
2024
Machine unlearning for diffusion

Score Forgetting Distillation (ICLR '25): a swift, data-free way to forget unsafe classes or concepts (including specific celebrities and NSFW content) while preserving generation quality — and getting up to 1000× sampling speedup for free.
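Stripped to its core — and heavily simplified; the full method distills the teacher into a fast one-step generator, which is where the sampling speedup comes from — the mechanism can be sketched as a score-alignment loss: the student queried with the concept to forget is regressed toward the teacher's score under a safe replacement concept, so no real data is needed:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of score-forgetting-style distillation.
# `student` and `teacher` are assumed epsilon-prediction diffusion nets with
# signature model(x_t, t, cond) -> predicted noise; `c_forget` / `c_safe`
# are embeddings of the concept to erase and its harmless replacement.

def forgetting_loss(student, teacher, x_t, t, c_forget, c_safe):
    with torch.no_grad():
        # Teacher's denoising direction under the *safe* concept.
        target = teacher(x_t, t, c_safe)
    # The student queried with the *forget* concept is pulled toward it,
    # so prompts for the erased concept generate safe content instead.
    return F.mse_loss(student(x_t, t, c_forget), target)
```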

Alignment
2024
Dense reward for T2I diffusion

A dense-reward perspective on aligning text-to-image diffusion with human preference — turning the trajectory itself into the optimization signal (ICML '24).
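A hedged sketch of the intuition — the discounting below is illustrative, not the paper's exact parameterization: rather than a single sparse reward on the final image, every step along the denoising trajectory receives discounted credit:

```python
import torch

def dense_step_weights(num_steps: int, gamma: float = 0.95) -> torch.Tensor:
    """Discounted credit for each denoising step (step 0 = most noise).

    Sparse baseline: weight 1.0 on the last step only.
    Dense view: step t gets gamma**(T - 1 - t), so every intermediate
    latent along the trajectory contributes to the preference objective.
    """
    T = num_steps
    return torch.tensor([gamma ** (T - 1 - t) for t in range(T)])

weights = dense_step_weights(50)
# e.g. weight per-step preference losses: loss = (weights * step_losses).sum()
print(weights[:5], weights[-1])
```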

Multimodal
2023
Visual in-context learning

iPromptDiff: an SD-based architecture that decouples content from task and routes visual perception through text embeddings — strong in-domain and OOD performance even when text prompts are missing.
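A rough sketch of that conditioning pattern, with hypothetical module names (this is not iPromptDiff's actual API): a small encoder turns the in-context visual example into task tokens that are prepended to the text-embedding stream conditioning the UNet:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of visual in-context conditioning. A vision encoder
# summarizes the in-context example into "task" tokens that are prepended
# to the text embeddings, so task identity flows through the text pathway.

class VisualContextEncoder(nn.Module):
    def __init__(self, img_dim=1024, txt_dim=768, n_tokens=4):
        super().__init__()
        self.proj = nn.Linear(img_dim, txt_dim * n_tokens)
        self.n_tokens, self.txt_dim = n_tokens, txt_dim

    def forward(self, context_feats):            # (B, img_dim) pooled features
        B = context_feats.shape[0]
        return self.proj(context_feats).view(B, self.n_tokens, self.txt_dim)

encoder = VisualContextEncoder()
task_tokens = encoder(torch.randn(2, 1024))      # (2, 4, 768)
text_emb = torch.randn(2, 77, 768)               # CLIP-style text embeddings
cond = torch.cat([task_tokens, text_emb], dim=1) # conditioning for the UNet
print(cond.shape)                                # torch.Size([2, 81, 768])
```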

Papers

ICLR 2025
2025
Score Forgetting Distillation: A Swift, Data-Free Method for Machine Unlearning in Diffusion Models

T. Chen, S. Zhang, M. Zhou

A teacher–student distillation that rapidly removes target classes or concepts from diffusion models without accessing real data, while preserving overall generative quality.

ICML 2024
2024
A Dense Reward View on Aligning Text-to-Image Diffusion with Preference

S. Yang*, T. Chen*, M. Zhou (*equal contribution)

Trajectory-level dense reward signals for aligning text-to-image diffusion to human preference, outperforming sparse-reward baselines.

NeurIPS 2023
2023
Beta Diffusion

M. Zhou, T. Chen, H. Zheng, Z. Wang

A diffusion model built on Beta distributions instead of Gaussian noise — well-suited for range-bounded data such as images and probability vectors.

ICML 2023
2023
Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling

T. Chen, M. Zhou

A binomial / Poisson hierarchical VAE that handles sparsity, skewness, heavy tails, and heterogeneity — natural for count-like and non-negative data.

IEEE Access
2022
ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense

R. Wang, T. Chen, P. Yao, S. Liu, I. Rajapakse, A. Hero

An information-theoretic surrogate for DkNN classification, plus matching attack and defense algorithms with state-of-the-art adversarial robustness.
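The core trick is small enough to sketch (illustrative, not the paper's exact loss): replace hard neighbor selection with a temperature-controlled softmax over negative distances, which makes k-NN classification differentiable — and therefore attackable and defendable with gradients:

```python
import torch

def soft_knn_class_scores(x, refs, ref_labels, n_classes, tau=1.0):
    """Differentiable surrogate for k-NN classification.

    x:          (d,) query feature
    refs:       (N, d) reference features
    ref_labels: (N,) integer labels
    Replaces hard top-k neighbor votes with softmax(-dist / tau) weights,
    so gradients flow to `x` and an attacker (or defender) can use them.
    """
    dists = torch.cdist(x.unsqueeze(0), refs).squeeze(0)  # (N,)
    weights = torch.softmax(-dists / tau, dim=0)          # soft neighbor weights
    onehot = torch.nn.functional.one_hot(ref_labels, n_classes).float()
    return weights @ onehot                               # (n_classes,) soft votes

x = torch.randn(32, requires_grad=True)
refs, labels = torch.randn(100, 32), torch.randint(0, 10, (100,))
scores = soft_knn_class_scores(x, refs, labels, n_classes=10)
scores[3].backward()   # gradient w.r.t. the query -> PGD-style attack step
```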

IEEE Access
2022
RAILS: A Robust Adversarial Immune-Inspired Learning System

R. Wang, T. Chen, …, I. Rajapakse, A. Hero

An immune-system-inspired adversarial framework that defends against unseen attacks by mimicking B-cell affinity maturation.

Full list on Google Scholar

Where I've worked

Software Engineer Intern · Google · May 2025 – Aug 2025 · Mountain View, CA

Built a full data-curation and training pipeline for diffusion-based video super-resolution; designed temporally consistent simulators of real-world video degradation; developed a VSR diffusion model on CogVideoX 1.5.
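The internal pipeline isn't public, but the idea behind "temporally consistent" degradation admits a hedged sketch (kernel size and noise range below are made up): sample the blur kernel and noise level once per clip rather than per frame, so the degradation doesn't flicker:

```python
import torch
import torch.nn.functional as F

def degrade_clip(clip: torch.Tensor, noise_sigma_range=(0.01, 0.08)) -> torch.Tensor:
    """Illustrative temporally consistent degradation for a (T, C, H, W) clip.

    Key point: the blur kernel and noise level are drawn once per *clip*,
    not per frame, so every frame shares the same degradation statistics.
    """
    T, C, H, W = clip.shape
    k = torch.rand(1, 1, 5, 5)                   # one random blur kernel
    k = (k / k.sum()).repeat(C, 1, 1, 1)         # reused for the whole clip
    sigma = torch.empty(1).uniform_(*noise_sigma_range).item()
    blurred = F.conv2d(clip, k, padding=2, groups=C)  # same kernel, all frames
    lowres = F.interpolate(blurred, scale_factor=0.25, mode="bicubic")
    return (lowres + sigma * torch.randn_like(lowres)).clamp(0, 1)

degraded = degrade_clip(torch.rand(8, 3, 256, 256))  # 8-frame clip
print(degraded.shape)                                # torch.Size([8, 3, 64, 64])
```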

Applied Scientist Intern · Amazon · Jun 2024 – Oct 2024 · Seattle, WA

Designed an automatic short-form video localization pipeline (object detection, OCR, inpainting, segmentation), and a landscape-to-portrait conversion workflow with KMeans + Gaussian-process smoothing for stable subject tracking.
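The smoothing step is straightforward to illustrate with scikit-learn (kernel hyperparameters here are placeholders): fit a Gaussian process to noisy per-frame subject centers and use the posterior mean as the crop trajectory, turning detection jitter into smooth camera motion:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative: smooth noisy per-frame subject x-centers with a GP posterior
# mean. The length scale and noise level are placeholder values.
frames = np.arange(120)[:, None]                  # frame indices
centers = 640 + 40 * np.sin(frames.ravel() / 30) + np.random.randn(120) * 12

kernel = RBF(length_scale=15.0) + WhiteKernel(noise_level=100.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(frames, centers)

smooth_centers = gp.predict(frames)               # stable crop trajectory
crop_x = np.clip(smooth_centers - 540 // 2, 0, 1280 - 540)  # 540-px portrait window
```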

Research Scientist Intern · ByteDance · May 2023 – Nov 2023 · Bellevue, WA

Studied visual in-context learning for diffusion models; proposed iPromptDiff, an SD-based architecture that decouples content from task and pushes visual perception into text embeddings — beating baselines in-domain and OOD.

Graduate Research / Teaching Assistant · UT Austin · Jun 2022 – Jun 2024

Built and open-sourced a PyTorch codebase for DDPM, DDIM, and classifier-free guidance (230+ stars on GitHub), reproducing published diffusion results and exploring non-Gaussian generalizations.
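Classifier-free guidance itself fits in a few lines — a minimal sketch of the standard update, assuming an epsilon-prediction network `model(x_t, t, cond)`:

```python
import torch

def cfg_eps(model, x_t, t, cond, null_cond, w: float = 3.0) -> torch.Tensor:
    """Classifier-free guidance: eps = eps_uncond + w * (eps_cond - eps_uncond).

    `null_cond` is the embedding of the empty prompt. w = 0 gives
    unconditional sampling, w = 1 the plain conditional model, and
    w > 1 amplifies the conditioning signal.
    """
    eps_cond = model(x_t, t, cond)
    eps_uncond = model(x_t, t, null_cond)
    return eps_uncond + w * (eps_cond - eps_uncond)
```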

Research Affiliate · GARD · University of Michigan · Jul 2020 – May 2022

Co-developed adversarial soft k-NN (ASK) and the immune-inspired RAILS framework for adversarial robustness on deep classifiers.

Education

PhD, Statistics · The University of Texas at Austin · Sep 2021 – May 2026

Advised by Prof. Mingyuan Zhou, IROM, McCombs School of Business.

MS, Applied Statistics · University of Michigan, Ann Arbor · Sep 2019 – Apr 2021
BS, Applied Mathematics · Fudan University · Sep 2015 – Jun 2019

Awards & service

Fellowships & awards

  • University Graduate Continuing Fellowship 2025
  • McCombs Dean's Fellowship 2022 – 2024
  • NeurIPS Scholar Award 2023
  • UT Professional Development Award 2023
  • Fudan Excellent Freshman Scholarship — Top 1% 2015

Service

  • Reviewer · ICLR '24 – '26
  • Reviewer · NeurIPS '23 – '25
  • Reviewer · ICML '23 – '25
  • Reviewer · AISTATS '21, '26
  • Teaching Assistant · UT Austin '21 – '22, '24

Toolbox

  • Python
  • PyTorch
  • JAX
  • TensorFlow
  • NumPy / SciPy
  • scikit-learn
  • R
  • CUDA
  • Diffusion models
  • VAEs
  • RLHF / DPO
  • CogVideoX · SD · Flow Matching

Let's talk.

I'll be on the 2026 academic and industry job market. If you're hiring, collaborating, or just want to argue about score-based vs. flow-based generative models — say hi.