Akshara Prabhakar will present her MSE talk "Exploring Reasoning and Interactive Benchmarking of Language Models" on Friday, April 26, 2024 at 3:30pm via Zoom.

Zoom: https://princeton.zoom.us/j/8646560610

Adviser: Karthik Narasimhan
Reader: Thomas L. Griffiths

Title: Exploring Reasoning and Interactive Benchmarking of Language Models

Abstract:
This thesis aims to deepen our understanding of the reasoning process undertaken by Large Language Models (LLMs) and introduces a challenging benchmark for evaluating language agents' interactive code-generation capabilities.

Part 1 untangles the factors affecting the Chain-of-Thought (CoT) reasoning process in LLMs. Although CoT prompting has demonstrated significant efficacy in enhancing multi-step reasoning, it has been found to yield biased answers and produce unfaithful explanations, raising interpretability concerns. Debates also persist over whether LLMs truly generalize or merely rely on heuristics. Focusing on the symbolic reasoning task of decoding shift ciphers, we develop a simple probabilistic approach to identify these factors: the probability of the task's expected output (probabilistic effect), what the model has implicitly learned during pre-training (memorization), and the model's tendency to adopt shorter reasoning steps (noisy reasoning). We show the consequent drastic variability in task accuracy. Through a series of experiments, we conclude that LLM behavior exhibits clear hallmarks of both memorization and true reasoning, suggesting that CoT resembles a probabilistic, memorization-influenced form of noisy reasoning.

Part 2 transitions from LLMs to the emerging domain of language agents, where the LLM is grounded in a digital environment to aid decision-making. We introduce InterCode, an interactive RL environment for benchmarking the interactive code-generation abilities of language agents with execution-driven feedback. By converting traditional seq2seq datasets into three interactive code environments (Bash, SQL, and Python), we demonstrate InterCode's viability as a testbed, evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results underscore the benefits of interactive code generation and establish InterCode as a robust, scalable benchmark for advancing code understanding and generation capabilities. We further apply this framework to cybersecurity: by developing a Capture the Flag task in the InterCode environment, we find that while language agents possess rudimentary cybersecurity knowledge, they are not able to perform multi-step cybersecurity tasks out of the box.

Overall, this thesis highlights the evolving landscape of LLMs, emphasizing the need for comprehensive evaluation and understanding of their capabilities across different domains.
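To make the Part 1 setup concrete, below is a minimal Python sketch of the shift-cipher decoding task; the alphabet handling, shift convention, and example word are illustrative assumptions rather than the thesis's exact experimental protocol.

```python
# Minimal sketch of a shift-cipher decoding task (assumed setup, lowercase
# English letters only; non-letters are passed through unchanged).
def decode_shift(ciphertext: str, shift: int) -> str:
    """Shift each letter back by `shift` positions (e.g. shift=3 maps 'd' -> 'a')."""
    out = []
    for ch in ciphertext:
        if ch.islower():
            out.append(chr((ord(ch) - ord('a') - shift) % 26 + ord('a')))
        else:
            out.append(ch)
    return ''.join(out)

# A model prompted with chain-of-thought is asked to recover the plaintext.
# Accuracy can then be compared across shift values (e.g. rot-13, which is
# common in pre-training data, versus rarer shifts) and across outputs of
# differing prior probability, separating memorization from probabilistic effects.
print(decode_shift("vwdb", 3))   # -> "stay"
```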
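For Part 2, the following is a hedged sketch of the observe-act-execute loop that execution-driven feedback implies; `BashScratchEnv` and `run_episode` are hypothetical stand-ins for illustration only, not the actual InterCode interface.

```python
# Toy environment in the spirit of an interactive code benchmark: actions are
# bash commands, observations are their execution output. Hypothetical API.
import subprocess

class BashScratchEnv:
    def step(self, action: str):
        proc = subprocess.run(action, shell=True, capture_output=True,
                              text=True, timeout=10)
        observation = proc.stdout + proc.stderr
        reward = 0.0          # a real benchmark would score against a gold task spec
        done = action.strip() == "exit"
        return observation, reward, done

def run_episode(agent, env, max_turns: int = 10):
    """Agent proposes a command, sees execution feedback, and iterates."""
    observation = ""
    reward = 0.0
    for _ in range(max_turns):
        action = agent(observation)          # e.g. an LLM prompted with ReAct
        observation, reward, done = env.step(action)
        if done:
            break
    return reward
```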