Sohyun An

CS Ph.D. student at UCLA

Biography

I’m a Ph.D. student in the Computational Machine Learning Group at UCLA, under the supervision of Prof. Cho-Jui Hsieh. Previously, I was an M.S. student in the Machine Learning and Artificial Intelligence (MLAI) Lab at KAIST AI, supervised by Prof. Sung Ju Hwang.

My research aims to make (multimodal) large language models more reliable, efficient, and adaptable when reasoning over real-world information. I focus on enabling models to think strategically and incorporate external knowledge so they can better interact with complex environments across modalities.

My current research focuses on:

  • Reasoning-Centric (M)LLMs: Improving how models engage in effective and trustworthy reasoning within dynamic contexts.
  • Search-Augmented Intelligence: Developing methods that enable models to leverage external information for more accurate and grounded intelligence.
  • Efficient Learning and Optimization: Designing general methodologies that enable (M)LLMs to learn, adapt, and reason efficiently without compromising reliability or depth.
  • Robust Multimodal Generalization: Developing methods that allow multimodal models to reliably generalize over information within complex and unpredictable real-world environments.
Education
  • PhD in Computer Science, Sep 2024 - Present

    University of California, Los Angeles (UCLA)

  • MS in Artificial Intelligence, Aug 2022 - Aug 2024

    Korea Advanced Institute of Science and Technology (KAIST)

  • BS in Materials Science and Engineering (Summa Cum Laude), Mar 2017 - Aug 2021

    Seoul National University (SNU)

Publications

(2025). DialectGen: Benchmarking and Improving Dialect Robustness in Multimodal Generation. ResponsibleFM @ NeurIPS 2025.

(2025). Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs. NeurIPS 2025.

(2025). Don't Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models. NeurIPS 2025.

(2024). One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts. ICML 2024.

Experience

  • Research Scientist Intern, June 2025 – September 2025

    Menlo Park, CA

  • Research Intern, April 2022 – August 2022

    South Korea

  • Full-time Engineer, August 2021 – March 2022

    South Korea

  • Undergraduate Student Researcher, January 2020 – September 2020

    South Korea

  • Engineer Intern, June 2019 – August 2019

    South Korea

TA Experience

  • SNS TA, September 2023 – December 2023

    South Korea

  • TA for AI618 Generative Model and Unsupervised Learning, March 2023 – June 2023

    South Korea

Contact