Sohyun An
Manuscript
Cycle-Consistent Search: Gold-Free Reinforcement Learning for Search Agents
Reinforcement Learning (RL) has shown strong potential for optimizing search agents in complex information retrieval tasks. However, …
Sohyun An, Shuibenyang Yuan, Hayeon Lee, Cho-Jui Hsieh, Alexander Min
FRESCO: Benchmarking and Optimizing Re-rankers for Evolving Semantic Conflict in Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) is a key approach to mitigating the temporal staleness of large language models (LLMs) by …
Sohyun An, Hayeon Lee, Shuibenyang Yuan, Chun-Cheng Jason Chen, Vijai Mohan, Cho-Jui Hsieh, Alexander Min
T-MAP: Red-Teaming LLM Agents with Trajectory-aware Evolutionary Search
While prior red-teaming efforts have focused on eliciting harmful text outputs from large language models (LLMs), such approaches fail …
Hyomin Lee, Sangwoo Park, Yumin Choi, Sohyun An, Seanie Lee, Sung Ju Hwang
DialectGen: Benchmarking and Improving Dialect Robustness in Multimodal Generation
Contact languages like English exhibit rich regional variations in the form of dialects, which are often used by dialect speakers …
Yu Zhou*, Sohyun An*, Haikang Deng*, Da Yin, Clark Peng, Cho-Jui Hsieh, Kai-Wei Chang, Nanyun Peng
Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs
Despite Multimodal Large Language Models (MLLMs) showing promising results on general zero-shot image classification tasks, …
Yunqi Hong, Sohyun An, Andrew Bai, Neil Y.C. Lin, Cho-Jui Hsieh
Don't Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models
While the recent success of large reasoning models (LRMs) has significantly advanced LLMs' reasoning capability by optimizing the final …
Sohyun An, Ruochen Wang, Tianyi Zhou, Cho-Jui Hsieh
Optimal Neural Architecture Generation with Diffusion Models
Existing NAS methods suffer from either an excessive amount of time for repetitive sampling and training of many task-irrelevant …
Sohyun An
One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts
Large Language Models (LLMs) exhibit strong generalization capabilities to novel tasks when prompted with language instructions and …
Ruochen Wang*, Sohyun An*, Minhao Cheng, Tianyi Zhou, Sung Ju Hwang, Cho-Jui Hsieh
DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models
Existing NAS methods suffer from either an excessive amount of time for repetitive sampling and training of many task-irrelevant …
Sohyun An*, Hayeon Lee*, Jaehyeong Jo, Seanie Lee, Sung Ju Hwang
Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets
Distillation-aware Neural Architecture Search (DaNAS) aims to search for an optimal student architecture that obtains the best …
Hayeon Lee*, Sohyun An*, Minseon Kim, Sung Ju Hwang