Yinghui He 何映晖

Hi there! I’m Yinghui He (pronounced as Yee·ng-Hway Huh), a first-year PhD student in the Computer Science Department at Princeton University. I work with Sanjeev Arora at Princeton Language and Intelligence (PLI). I’m broadly interested in Natural Language Processing, especially the two-way relationship between artificial intelligence and human cognition. Before coming to Princeton, I completed my B.S.E. in Computer Science at the University of Michigan, where I had the honor of working with Rada Mihalcea (at the LIT Lab) and Wei Hu.

Publications

AdaptMI: Adaptive Skill-based In-context Math Instruction for Small Language Models

Yinghui He, Abhishek Panigrahi, Yong Lin, Sanjeev Arora
arXiv preprint

EmoAgent: Assessing and Safeguarding Human-AI Interaction for Mental Health Safety

Jiahao Qiu*, Yinghui He*, Xinzhe Juan*, Yiming Wang, Yuhan Liu, Zixin Yao, Yue Wu, Xun Jiang, Ling Yang, Mengdi Wang
arXiv preprint

LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation

Xi Ye, Fangcong Yin*, Yinghui He*, Joie Zhang*, Howard Yen*, Tianyu Gao, Greg Durrett, Danqi Chen
arXiv preprint

Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models

Yinghui He, Yufan Wu, Yilin Jia, Rada Mihalcea, Yulong Chen, Naihao Deng
EMNLP 2023

Robust Sparse Mean Estimation via Incremental Learning

Jianhao Ma, Rui Ray Chen, Yinghui He, Salar Fattahi, Wei Hu
ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning