About
Hi, I'm Hyeonbin.
I’m a Master’s student at KAIST AI advised by Minjoon Seo.
My grad school journey began with studying LLM reasoning as a path toward AGI. However, I soon ran into a "bitter lesson": models often feign competence by memorizing training data, so their performance plateaus the moment they step outside that distribution.
My work now focuses on breaking through that ceiling. I’m interested in designing learning signals that push models beyond simple pattern matching and verifying those capabilities in rigorous robotics testbeds.
To this end, I'm currently interning at ALIN Lab, where I'm researching Vision-Language-Action Models. I'm always open to collaborations. Feel free to reach out!
Experience
- ALIN Lab, KAIST · Research Intern, 2025 ~ (Current) · Advisor: Jinwoo Shin
  Vision-Language-Action Models
- KRAFTON, AI Companion Team · Internship, 2025
- KAIST AI · M.S., 2024 ~ · Advisor: Minjoon Seo
  Reasoning of Language Models
- NAVER, Healthcare AI · Internship, 2022 (March ~ August)
  EHR summarization
- KAIST SoC · B.S., 2019 ~ 2024 · GPA 3.96 / 4.3
Publications
Selected Publications
- Let's Predict Sentence by Sentence
  H Hwang, B Jeon, S Kim, J Kim, H Chang, S Yang, S Won, D Lee, Y Ahn, ...
  @ COLM 2025 RAM 2 WS (Oral)
- Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models
  H Hwang, D Kim, S Kim, S Ye, M Seo
  @ EMNLP 2024 (Findings)
  @ ACL 2024 NLRSE WS (Oral)
Others
- Differential Information Distribution: A Bayesian Perspective on Direct Preference Optimization
  Y Won, H Lee, H Hwang, M Seo
  Preprint (under review)
- The Coverage Principle: A Framework for Understanding Compositional Generalization
  H Chang, J Park, H Cho, S Yang, M Ko, H Hwang, S Won, D Lee, Y Ahn, ...
  Preprint (under review)
- The CoT Encyclopedia: Analyzing, Predicting, and Controlling how a Reasoning Model will Think
  S Lee, S Kim, M Seo, Y Jo, D Go, H Hwang, J Park, X Yue, S Welleck, G Neubig, M Lee, M Seo
  Preprint (under review)
- BiGGen Bench: A Comprehensive Benchmark for Generative Language Models
  S Kim, J Suk, ... (others not shown), H Hwang, ..., M Seo
  @ NAACL 2025 (Best Paper)
- Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition
  J Kim, H Lee, H Cho, J Jang, H Hwang, S Won, Y Ahn, D Lee, M Seo
  @ ICLR 2025 (Oral)
  @ AAAI 2025 KnowFM WS (Best Paper)
- FLASK: Fine-grained Language Model Evaluation Based on Alignment Skill Sets
  S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo
  @ ICLR 2024 (Spotlight)