Hello! I am a PhD student at JHU CLSP, advised by Mark Dredze.

My interests are protean, driven by restless curiosity. I am generally interested in understanding how language models work and exploring how we can change them. In terms of topics, I study interpretability (broadly), training dynamics, evaluation, and reasoning in language models.

Before my PhD studies, I was a resident at FAIR Labs, working with Adina Williams and Dieuwke Hupkes. I obtained my Master's and Bachelor's degrees in computer science (B.S., M.S.) and mathematics (B.A.), with a minor in classical studies, at the University of Washington. My advisor was Noah A. Smith, and I was supervised by Ana Marasović. I have also interned twice at AWS AI Labs, mentored by Peng Qi, Yuhao Zhang, Jifan Chen, and Danilo Ribeiro. During my undergraduate years, I also worked with Christopher Hoffman on dual random-walk systems. Thanks to the support of my advisors and donors, I was able to conduct research and keep learning.

I have limited life experience, but if you have questions and think I can help, feel free to email me.

News

[April 2025] New papers 📄: Amuro and Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models was accepted to RepL4NLP 2025; SHADES: Towards a Multilingual Assessment of Stereotypes in Large Language Models was accepted to NAACL 2025.

[August 2024] Our system achieved strong performance 📈 in almost all languages in the IWSLT low-resource speech translation shared task.

[December 2023] Our paper, The Validity of Evaluation Results: Assessing Concurrence Across Compositionality Benchmarks, received an Honorable Mention 🏆 at CoNLL 2023.

Cat Warning

I have two very cute cats: Bert (white collar with white paws) and Roberta (grey and looks like a little leopard). While you are browsing my website, I hope pictures of Bert and Roberta can make you feel happy and relaxed for a second.
