Hang Chen 💪

Hang Chen (Háng Chén)

(he/him)

PhD student at Xi'an Jiaotong University

Xi'an Jiaotong University (albert2123@stu.xjtu.edu.cn)

Professional Summary

Hang Chen is a Ph.D. student in the School of Computer Science and Technology at Xi'an Jiaotong University, where he is expected to complete his studies in March 2026. His research focuses on the mechanistic interpretability and causal reasoning capabilities of Large Language Models (LLMs), and their applications in parameter updating. He aims to develop methodologies that enable causal intervention on, or modification of, model behaviors based on a fundamental understanding of their internal mechanisms.

Education

PhD Computer Science

2021-09-01
2026-03-30

Xi'an Jiaotong University

BS Computer Science

2016-09-01
2020-06-30

Xi'an Jiaotong University

Interests

Large Language Models · Mechanistic Interpretability · Causality · Sentiment Analysis
📚 My Research

My research encompasses a broad spectrum of language models, centered on how these models construct and utilize causal mechanisms. In my earlier work, I investigated methods to endow LLM representations with causal discrimination and explored the phenomenon of causal emergence within these complex architectures. Currently, my focus has shifted toward Mechanistic Interpretability (MI). I am particularly interested in the intersection of MI and parameter updating (such as supervised fine-tuning, SFT). My goal is to leverage mechanistic insights, such as identifying specific functional circuits, to guide more precise, surgical, and interpretable modifications to model behavior. By bridging these two fields, I aim to transform LLMs from "black boxes" into transparent systems that can be reliably controlled and updated for trusted applications. 😃

Featured Publications

CLUE: Conflict-guided Localization for LLM Unlearning Framework

LLM unlearning aims to eliminate the influence of undesirable data without affecting causally unrelated information. This process typically involves using a forget set to …

Hang Chen, Jiaying Zhu, Xinyu Yang, Wenya Wang

Rethinking Circuit Completeness in Language Models: AND, OR, and ADDER Gates

Circuit discovery has gradually become one of the prominent methods for mechanistic interpretability, and research on circuit completeness has also garnered increasing attention. …

Hang Chen, Jiaying Zhu, Xinyu Yang, Wenya Wang

Towards Causal Relationship in Indefinite Data: New Datasets and Baseline Model

The cross-fertilization of deep learning and causal discovery has given birth to broader causal data forms, involving multi-structured data like the Netsim dataset, and complex …

Hang Chen, Xinyu Yang, Keqing Du
Recent Publications
(2026). CLUE: Conflict-guided Localization for LLM Unlearning Framework. ICLR.
(2026). Skill Path: Unveiling Language Skills from Circuit Graphs. AAAI (Oral).
(2025). Rethinking Circuit Completeness in Language Models: AND, OR, and ADDER Gates. NeurIPS.
Recent News