Ting-Yun (Charlotte) Chang
Hi, I'm Ting-Yun Chang 張婷雲. I am a 4th-year PhD student at USC CS, co-advised by Jesse Thomason and Robin Jia. I am interested in understanding, and in turn controlling, large language models' behavior, e.g., studying the internals of LLMs to improve their few-shot learning capabilities. Recently, I have been working on reducing LLM quantization errors by studying their causes.
Previously, I did my bachelor's and master's degrees in Taiwan, both in Computer Science. I was advised by Yun-Nung (Vivian) Chen at National Taiwan University and Chi-Jen Lu at Academia Sinica.


Experience
Research Assistant, University of Southern California, Fall 2021 - Now
Advisors: Prof. Jesse Thomason and Prof. Robin Jia

Research Assistant, Academia Sinica, Taiwan, 2020 - 2021
Advisor: Prof. Chi-Jen Lu

Research Assistant, NTU CSIE Miulab, 2018 - 2020
Advisor: Prof. Yun-Nung (Vivian) Chen

Research Intern, Google DeepMind, Summer 2025

Applied Scientist Intern, Amazon AWS AI, Summer 2024

Applied Scientist Intern, Amazon Alexa AI, Spring 2020


Publications
Why Do Some Inputs Break Low-Bit LLM Quantization?
Ting-Yun Chang, Muru Zhang, Jesse Thomason, and Robin Jia. Preprint, 2025. [Paper]

When Parts Are Greater Than Sums: Individual LLM Components Can Outperform Full Models
Ting-Yun Chang, Jesse Thomason, and Robin Jia. EMNLP 2024 (main). [Paper] [Code] [Blog] [Video]

Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks
Ting-Yun Chang, Jesse Thomason, and Robin Jia. NAACL 2024 (main). [Paper] [Code] [Slides] [Video]

Data Curation Alone Can Stabilize In-context Learning
Ting-Yun Chang and Robin Jia. ACL 2023 (main). [Paper] [Code] [Slides] [Slides2] [Video]

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
Tejas Srinivasan, Ting-Yun Chang, Leticia Pinto Alva, Georgios Chochlakis, Mohammad Rostami, and Jesse Thomason. NeurIPS 2022 Datasets and Benchmarks Track. [Paper] [Code] [Video]

Rethinking Why Intermediate-Task Fine-Tuning Works
Ting-Yun Chang and Chi-Jen Lu. Findings of EMNLP 2021. [Paper] [Code] [Slides] [Video]

Go Beyond Plain Fine-tuning: Improving Pretrained Models for Social Commonsense
Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, and Dilek Hakkani-Tür. IEEE SLT 2021. [Paper] [Slides]

Incorporating Commonsense Knowledge Graph in Pretrained Models for Social Commonsense Tasks
Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, and Dilek Hakkani-Tür. DeeLIO Workshop @ EMNLP 2020 (Best Paper Award). [Paper] [Slides]

TinyGAN: Distilling BigGAN for Conditional Image Generation
Ting-Yun Chang and Chi-Jen Lu. Asian Conference on Computer Vision 2020. [Paper] [Code] [Demo] [Video]

What Does This Word Mean? Explaining Contextualized Embeddings with Natural Language Definition
Ting-Yun Chang and Yun-Nung Chen. EMNLP 2019. [Paper] [Thesis] [Code]

