I am currently a fourth-year Ph.D. student (Zhiyuan Honors) at the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, supervised by Prof. Xiaolin Huang. My research focuses on efficient adaptation methods for large language models (LLMs) and vision-language models (VLMs), with particular interests in continual learning and its applications to intelligent agents.

I am currently visiting the MMLab at The Chinese University of Hong Kong (CUHK), working with Prof. Hongsheng Li.

I am always open to academic discussions and potential collaborations. Please feel free to contact me via email or WeChat: HzKinght

🔥 News

📝 Featured Publications

arXiv 2025

VL-RouterBench: A Benchmark for Vision-Language Model Routing

Zhehao Huang*, Baijiong Lin*, Jingyuan Zhang, Yuhang Liu, Ning Lu, Tao Li, Xiaolin Huang

Code | Dataset

This paper addresses the lack of systematic evaluation for VLM routing by introducing VL-RouterBench, a comprehensive benchmark covering 14 datasets and 17 models. By evaluating 10 routing methods, it reveals a clear “routability gain” while highlighting the gap between current routers and the Oracle, and it provides an open-source toolchain to advance more efficient VLM routing deployments.

ICLR 2026 (Oral)

RAIN-Merging: A Gradient-Free Method to Enhance Instruction Following in Large Reasoning Models with Preserved Thinking Format

Zhehao Huang, Yuhang Liu, Baijiong Lin, Yixin Lou, Zhengbao He, Hanling Tian, Tao Li, Xiaolin Huang

Code | Project | Dataset | Poster | Slides (coming soon)

This paper identifies that large reasoning models (LRMs) struggle with strict instruction following despite their strong reasoning ability, and introduces RAIN-Merging, a gradient-free method that integrates instruction-tuned features into the LRM’s null space, preserving structured “thinking” formats while significantly improving constraint adherence across model scales and tasks without compromising reasoning quality.
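
The paper’s exact merging procedure is not reproduced here, but the core idea of restricting a weight update to the null space of activations whose behavior should be preserved can be sketched as follows (the function name, shapes, and SVD-based construction are illustrative assumptions, not the authors’ implementation):

```python
import numpy as np

def null_space_project(delta, acts, rank_tol=1e-5):
    """Project a candidate weight update onto the null space of preserved activations.

    delta: (d_out, d_in) update to be merged into a layer's weight;
    acts:  (n, d_in) activations (e.g. from "thinking"-format inputs)
           whose layer outputs should be unaffected by the merge.
    """
    # Full SVD so the trailing right-singular vectors span the null space.
    _, s, vh = np.linalg.svd(acts, full_matrices=True)
    rank = int(np.sum(s > rank_tol * s.max()))
    null_basis = vh[rank:]            # rows span the null space of acts
    P = null_basis.T @ null_basis     # orthogonal projector onto that space
    # The projected update maps every preserved activation to ~0,
    # so merging it leaves those outputs (approximately) unchanged.
    return delta @ P
```

Applied to each merged layer, the projected update changes behavior only in directions orthogonal to the preserved activations, which is one way a gradient-free merge can add instruction-following features without disturbing the existing reasoning format.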

arXiv 2025

T2I-ConBench: Text-to-Image Benchmark for Continual Post-training

Zhehao Huang*, Yuhang Liu*, Yixin Lou*, Zhengbao He, Mingzhen He, Wenxing Zhou, Tao Li, Kehan Li, Zeyi Huang, Xiaolin Huang

Code | Project | Dataset

This paper identifies that naive continual post-training of text-to-image models causes catastrophic forgetting and a loss of compositionality, and introduces T2I-ConBench, a unified benchmark that evaluates models across four key dimensions, including generality retention and cross-task generalization, providing a standardized foundation for future research.

arXiv 2025

A Unified Gradient-based Framework for Task-agnostic Continual Learning-Unlearning

Zhehao Huang, Xinwen Cheng, Jie Zhang, Jinghao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang

This paper demonstrates that learning and forgetting are mathematically linked through a unified framework, and proposes UG-CLU to balance new knowledge acquisition with precise data removal by using a weight-adaptation mechanism and manifold constraints, enabling stable, task-agnostic unlearning at both the category and sample levels without retraining from scratch.

NeurIPS 2024 (Spotlight)

Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement

Zhehao Huang, Xinwen Cheng, Jinghao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang

Code | Project | Poster | Slides

This work proposes a fast-slow parameter update strategy that implicitly approximates the up-to-date salient unlearning direction; it is free of modality-specific constraints and adaptable across computer vision unlearning tasks, including classification and generation.

TMLR 2024

Online Continual Learning via Logit Adjusted Softmax

Zhehao Huang, Tao Li, Chenhe Yuan, Yingwen Wu, Xiaolin Huang

Code

This paper shows that inter-class imbalance in online continual learning is fundamentally caused by imbalanced (time-varying) class priors, and proposes Logit Adjusted Softmax (LAS) to counter prior-induced bias by adjusting logits during training, improving performance across class-IL and class+domain incremental settings with minimal overhead.
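
The adjustment itself is a one-line change to the training loss; a minimal NumPy sketch (the function name and the running-count estimate of the time-varying prior are illustrative assumptions, not the paper’s code) might look like:

```python
import numpy as np

def logit_adjusted_ce(logits, targets, class_counts, tau=1.0):
    """Cross-entropy on logits shifted by tau * log(estimated class prior).

    class_counts: running counts of labels seen so far in the stream,
    standing in for the time-varying class prior.
    """
    priors = class_counts / class_counts.sum()
    adjusted = logits + tau * np.log(priors + 1e-12)
    # Numerically stable log-softmax over the adjusted logits.
    z = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()
```

Because the shift is applied only during training, frequent classes receive a larger logit offset and must earn correspondingly higher raw scores, which counteracts the prior-induced bias at essentially no extra cost.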

🎖 Honors and Awards

  • 2021.04, Interdisciplinary Contest in Modeling (ICM), Meritorious Winner
  • 2020.08, National University Intelligent Car Competition, National Third Prize
  • 2019.11, Shanghai Jiao Tong University B-Class Excellent Scholarship (top 10%)

📖 Education

  • 2022.09 - 2027.06 (expected), Ph.D. Student, Control Science and Engineering, Shanghai Jiao Tong University, China
  • 2018.09 - 2022.06, B.Eng., Automation, Shanghai Jiao Tong University, China

💻 Experience

  • 2024.06 - 2025.05, Algorithm Engineer Intern, Huawei, research on continual post-training for text-to-image models.
  • 2021.08 - 2022.01, Algorithm Engineer Intern, ByteDance, advertising business algorithm development.