Yinghui He will present her General Exam, "Skill-Targeted Adaptive Learning for LLMs," on Tuesday, January 20, 2026 at 3:00 PM in CS 401 and via Zoom.

Zoom link: https://princeton.zoom.us/my/yinghuihe

Committee Members: Sanjeev Arora (advisor), Danqi Chen, Zhuang Liu

Abstract:
Large language models (LLMs) can often “learn” at test time from carefully chosen in-context examples and at train time from supervised fine-tuning (SFT), yet both settings suffer from a common failure mode: adding generic instructions or data frequently yields diminishing returns and can even harm performance in smaller models. We propose skill-targeted adaptive learning as a unifying approach to closing these gaps by using an LLM teacher’s metacognition. First, we study adaptive skill-based in-context learning. We find that naive skill prompting can create a “cognitive overload” effect on easy questions by injecting unnecessary information. To address this, we present AdaptMI, which introduces skill-targeted in-context math instructions only when a model struggles, and AdaptMI+, which further targets examples to the specific skills missing from the model’s intermediate reasoning and final responses. Second, we explore skill-targeted adaptive training for settings where vanilla supervised fine-tuning saturates. We present STAT, a fine-tuning strategy in which a strong teacher labels each training instance with required skills, builds a Missing-Skill Profile by auditing the student’s errors, and then either (i) reweights or selects existing data or (ii) synthesizes new examples emphasizing missing skills. Together, these skill-targeted adaptive learning approaches provide a principled way to overcome performance saturation and systematically close capability gaps in language models.
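The abstract's two procedures can be illustrated with a toy sketch: AdaptMI-style conditional injection of skill instructions, and STAT-style construction of a Missing-Skill Profile used to reweight training data. All function names, the skill taxonomy, and the toy student model below are illustrative assumptions, not the actual methods:

```python
# Hypothetical sketch of the skill-targeted loop described in the abstract.
# The skill labels and the rule-based "student" are stand-ins for LLM calls.

def student_solve(problem, known_skills):
    # Toy student: succeeds iff it has every skill the problem requires.
    # In the real setting this would be a student LLM attempt, audited
    # by a strong teacher.
    return set(problem["skills"]) <= known_skills

def adaptmi_prompt(problem, known_skills, base_prompt):
    # AdaptMI idea (sketch): inject skill-targeted instructions only when
    # the student struggles, avoiding "cognitive overload" on easy questions.
    if student_solve(problem, known_skills):
        return base_prompt
    missing = sorted(set(problem["skills"]) - known_skills)
    return base_prompt + "\nFocus on these skills: " + ", ".join(missing)

def missing_skill_profile(problems, known_skills):
    # STAT idea (sketch): audit the student's errors and tally which
    # required skills are absent, yielding a Missing-Skill Profile.
    profile = {}
    for p in problems:
        if not student_solve(p, known_skills):
            for s in set(p["skills"]) - known_skills:
                profile[s] = profile.get(s, 0) + 1
    return profile

def reweight(problems, profile):
    # Option (i) from the abstract: upweight/select training examples
    # that exercise the skills the student is missing.
    def weight(p):
        return 1 + sum(profile.get(s, 0) for s in p["skills"])
    return sorted(problems, key=weight, reverse=True)
```

For example, if the student knows only "algebra", a problem also requiring "fractions" contributes to the profile, and `reweight` then prioritizes problems exercising that missing skill (option (ii), synthesizing new examples, would generate fresh data emphasizing the same profile).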

Reading List:
https://docs.google.com/document/d/17ZEbYvjn-D7WVGy66YVyoRsUWGr4-pisie41dI4aX_g/edit?usp=sharing

Everyone is invited to attend the talk, and faculty wishing to remain for the oral exam that follows are welcome to do so.