Yinghui He 何映晖

Hi there! I’m Yinghui He (“Gracie”), a second-year PhD student in the Computer Science Department at Princeton University. I’m honored to be advised by Sanjeev Arora at Princeton Language and Intelligence (PLI). I aim to build LLMs and agents to understand the two-way relationship between artificial intelligence and human cognition. Specifically, I am interested in methodologies for developing human-like, intelligent large language models, and in understanding how human cognition can inspire model training. I completed my Bachelor’s degree in Computer Science at the University of Michigan and Shanghai Jiao Tong University, where I had the honor of working with Rada Mihalcea (at the LIT Lab) and Wei Hu.

Publications

Skill-Targeted Adaptive Training

Yinghui He*, Abhishek Panigrahi*, Yong Lin, Sanjeev Arora
arXiv preprint

AdaptMI: Adaptive Skill-based In-context Math Instruction for Small Language Models

Yinghui He, Abhishek Panigrahi, Yong Lin, Sanjeev Arora
COLM 2025; ICML 2025 Workshop on Test-Time Adaptation; ICML 2025 Methods and Opportunities at Small Scale Workshop

EmoAgent: Assessing and Safeguarding Human-AI Interaction for Mental Health Safety

Jiahao Qiu*, Yinghui He*, Xinzhe Juan*, Yiming Wang, Yuhan Liu, Zixin Yao, Yue Wu, Xun Jiang, Ling Yang, Mengdi Wang
EMNLP 2025 Main Conference

LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation

Xi Ye, Fangcong Yin*, Yinghui He*, Joie Zhang*, Howard Yen*, Tianyu Gao, Greg Durrett, Danqi Chen
COLM 2025

Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models

Yinghui He, Yufan Wu, Yilin Jia, Rada Mihalcea, Yulong Chen, Naihao Deng
Findings of EMNLP 2023; ICML 2023 Workshop on Theory of Mind in Communicating Agents

Robust Sparse Mean Estimation via Incremental Learning

Jianhao Ma, Rui Ray Chen, Yinghui He, Salar Fattahi, Wei Hu
ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning