Hanxu Hu


I'm an incoming PhD student at the University of Zurich, co-supervised by Prof. Rico Sennrich and Prof. Ivan Titov at the University of Edinburgh, and I will work closely with Dr. Jannis Vamvas.

I'm a research intern at Shanghai AI Lab and a member of EdinburghNLP and WestlakeNLP. I have done research on instruction tuning, in-context learning, and cross-lingual transfer.

Currently, I am interested in scalable interpretability methods that can be applied to alignment and long-context reasoning in modern LMs (e.g., scalable oversight and mechanistic interpretability).


Selected Papers

You can find all of my papers on my Google Scholar page.

Fine-tuning Large Language Models with Sequential Instructions

Hanxu Hu*, Simon Yu*, Pinzhen Chen*, Edoardo M. Ponti

Preprint, 2024

Project Page / arXiv

Chain-of-Symbol Prompting For Spatial Reasoning in Large Language Models

Hanxu Hu*, Hongyuan Lu*, Huajian Zhang, Yun-Ze Song, and Yue Zhang

COLM 2024

Code / arXiv

Meta-Learning For Multi-Modal Cross-lingual Transfer

Hanxu Hu and Frank Keller

MRL at EMNLP 2023 (Best Paper Nomination)

Code / arXiv

Service

Acknowledgments

I am lucky to work with many enthusiastic, intelligent, and hardworking peers, such as Pinzhen Chen at the University of Edinburgh, Simon Yu at NEU, Chenmien Tan at the University of Edinburgh, Wenhao Zhu at Nanjing University, and Zeyu Huang at the University of Edinburgh. I have learnt a lot from them.