Simon Park will present his MSE talk “Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning” on Tuesday, April 15, 2025 at 12:00pm in Room 238, 41 William Street (AI Lab).

The members of his committee are as follows: Sanjeev Arora (adviser) and Danqi Chen (reader). All are welcome to attend. Please see the abstract below.

Abstract:

We introduce INSTRUCT-SKILLMIX, an automated approach for creating diverse, high-quality SFT data for instruction following. The pipeline involves two stages, each leveraging an existing powerful LLM: (1) Skill extraction: uses the LLM to extract core “skills” for instruction following by directly prompting the model; this is inspired by the “LLM metacognition” of Didolkar et al. (2024). (2) Data generation: uses the powerful LLM to generate (instruction, response) data that exhibit a randomly chosen pair of these skills. Here, the use of random skill combinations promotes diversity and difficulty. The estimated cost of creating the dataset is under $600.

Vanilla SFT (i.e., no PPO, DPO, or RL methods) on data generated from INSTRUCT-SKILLMIX leads to strong gains on instruction-following benchmarks such as AlpacaEval 2.0, MT-Bench, and WildBench. With just 4K examples, LLaMA-3-8B-Base achieves a 42.76% length-controlled win rate on AlpacaEval 2.0, a level similar to frontier models like Claude 3 Opus and LLaMA-3.1-405B-Instruct. Ablation studies also suggest plausible reasons why creating open instruction-tuning datasets via naive crowd-sourcing has proved difficult: in our dataset, adding 20% low-quality answers (“shirkers”) causes a noticeable degradation in performance. The INSTRUCT-SKILLMIX pipeline seems flexible and adaptable to other settings.
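
For readers unfamiliar with the pipeline structure, a minimal sketch of the two stages is below. This is an illustration only, assuming an OpenAI-style chat-completions API; the model name, prompts, and helper functions are hypothetical and are not taken from the paper.

```python
import random
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment
MODEL = "gpt-4"    # placeholder for whichever powerful LLM is used

def extract_skills(topic: str, n: int = 50) -> list[str]:
    """Stage 1 (skill extraction): prompt a strong LLM to enumerate core
    skills for the target task, in the spirit of 'LLM metacognition'."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"List {n} core skills needed for {topic}, one per line."}],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [ln.strip("- ").strip() for ln in lines if ln.strip()]

def generate_example(skills: list[str]) -> dict:
    """Stage 2 (data generation): sample a random *pair* of skills and ask
    the LLM for an (instruction, response) pair exhibiting both; the random
    combination is what promotes diversity and difficulty."""
    a, b = random.sample(skills, 2)
    prompt = (f"Write one challenging user instruction that requires both "
              f"'{a}' and '{b}', then write a high-quality response.\n"
              f"Format:\nINSTRUCTION: ...\nRESPONSE: ...")
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content
    instruction, _, response = text.partition("RESPONSE:")
    return {"instruction": instruction.replace("INSTRUCTION:", "").strip(),
            "response": response.strip()}

# Build a small SFT dataset (the paper reports strong results with ~4K examples).
skills = extract_skills("instruction-following")
dataset = [generate_example(skills) for _ in range(4000)]
```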