
Hanqi Yan

I'm a PostDoc at King's College London, supervised by Prof. Yulan He, where I focus on robust and reliable language models.

I passed my PhD viva with no corrections after a great time at the University of Warwick (2020.10-2024.04), advised by Prof. Yulan He and Dr. Lin Gui. I finished my M.S. at Peking University (2017-2020) and my B.E. at Beihang University (2013-2017).

During my Ph.D., I started my causality journey by visiting Prof. Kun Zhang in the Causal Learning and Reasoning Group @ CMU. Before my Ph.D., I started my NLP journey by visiting Prof. Wenjie Li in the Natural Language Processing Group @ PolyU Hong Kong.

Email  /  CV  /  Google Scholar  /  Semantic Scholar  /  Twitter  /  Github

Strive not to be a success, but rather to be of value. -- Albert Einstein


Research Summary

My research interests lie at the intersection of Machine Learning and Natural Language Processing: incorporating fundamental representation learning to enhance the interpretability and robustness of NLP models.

  1. Founded on Representation Learning: I address intrinsic limitations of the representations learned by the Transformer architecture: order sensitivity (InfoAC), dimension collapse (TokenUni), and principled disentanglement (MATTE); I also study neuron-level interpretability in human-preference alignment (DecPO).
  2. Extend the Impact to Practical NLP Applications
    • Text classification: robust classifier enhanced by a lexicon (LexicalAT), hierarchical interpretable classifier (HINT), a lightweight adapter for ICL (DiscAda)
    • Recommendation systems: GCN-based Q-learning network (GCQN), explainable recommender (GIANT)
    • (Causal) relation extraction: weak-to-strong extractor (ReWire), knowledge-augmented graph network (KAG)
    • Controllable generation: identifiability guarantee (MATTE)
    • Reasoning: multiple-perspective self-reflection (Mirror)

News

09.2024: Three papers (1 first-author) are accepted by EMNLP24 Main Conference.
08.2024: I go to Bangkok, Thailand for ACL24.
05.2024: Two papers (1 first-author) are accepted by ACL24, one in main conference, one in findings.
04.2024: I pass the PhD viva with no corrections.
01.2024: I become a PostDoc at King's College London, NLP Group.
01.2024: I finish my PhD thesis (draft) on my birthday.
01.2024: My first-author paper is finally accepted by TKDE.
07.2023: I go to Hawaii, US to present our NeurIPS paper.
07.2023: My first-author paper is accepted by NeurIPS (my first NeurIPS paper).
02.2023: I go back to the UK from Abu Dhabi, UAE, finishing my machine learning trip at MBZUAI.
02.2023: I attend EMNLP22, held in Abu Dhabi, to present our Computational Linguistics paper.
01.2023: One paper is accepted by EACL23 Findings (first time as a mentor for a master student).
12.2022: Lionel Messi leads Argentina to win the World Cup championship.
10.2022: I start as a funded visiting student in the Machine Learning Department at MBZUAI, Abu Dhabi, UAE, advised by Prof. Kun Zhang.
08.2022: I go to Eindhoven, the Netherlands to present our UAI paper.
05.2022: My first-author paper is accepted by UAI22 (my first ML paper).
05.2021: The first time! My first-author paper is accepted by ACL21 (Oral). A super encouragement in my early PhD career.
10.2020: I start my PhD journey at University of Warwick.


Publication

Large Language Model

Encourage or Inhibit Monosemanticity? Revisit Monosemanticity from a Feature Decorrelation Perspective
H. Yan, Y. Xiang, G. Chen, Y. Wang, L. Gui, Y. He
EMNLP24, Main

Studies model-level monosemanticity (mechanistic interpretability) in the preference alignment process.

Weak Reward Model Transforms Generative Models into Robust Causal Event Extraction Systems
H. Yan, Y. Xiang, G. Chen, Y. Wang, L. Gui, Y. He
EMNLP24, Main

A weak-to-strong information extraction model that uses partially annotated data for the reward model while still achieving high performance with PPO.

Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning
H. Yan, Q. Zhu, X. Wang, L. Gui, Y. He
ACL24, Main

Introduces a Navigator model that interacts with the Reasoner, providing question-specific and diverse guidance in a knowledge-rich self-reflection process without any supervision.

Addressing Order Sensitivity of In-Context Demonstration Examples in Causal Language Models.
Y. Xiang, H. Yan, L. Gui, Y. He
ACL24, Findings

We attribute the order sensitivity of causal LMs to their auto-regressive attention masks, which prevent each token from accessing information from subsequent tokens. This motivates our consistency-based representation learning method for addressing this vulnerability of LLMs.

The Mystery and Fascination of LLMs: A Comprehensive Survey on the Interpretation and Analysis of Emergent Abilities.
Y. Zhou, J. Li, Y. Xiang, H. Yan, L. Gui, Y. He
EMNLP24, Main

From a macro perspective, explains why in-context learning can learn different algorithms without gradient descent, e.g., regression and Bayesian inference.

Counterfactual Generation with Identifiability Guarantee
H. Yan, L. Kong, L. Gui, Y. Chi, Eric. Xing, Y. He, K. Zhang.
NeurIPS, 2023.

We observed the pitfalls of LLMs in detecting and intervening on implicit sentiment, so we provide identifiability guarantees for successful disentanglement of the content and style variables. These principled representations can shed light on LLM alignment, i.e., safe and moral generation.

Self-Explainable Models

Explainable Recommender with Geometric Information Bottleneck
H. Yan, L. Gui, M. Wang, K. Zhang and Y. He
TKDE, 2023

To ease human annotation of rationales in recommenders, a prior from user-item interactions is incorporated into the textual latent factors for explanation generation.

Hierarchical Interpretation of Neural Text Classification
H. Yan, L. Gui, M. Wang, K. Zhang and Y. He
Computational Linguistics, 2022, Presented at EMNLP22.

Unsupervised self-explanatory framework for document classification. It can extract word-, sentence-, and topic-level rationales explaining the document-level decision.

Robustness

A Knowledge-Aware Graph Model for Emotion Cause Extraction
H. Yan, L. Gui, G. Pergola and Y. He
ACL, 2021, Oral.

Commonsense knowledge, i.e., ConceptNet, is applied as an invariant feature to tackle distribution shift and position bias.

Counterfactual Generation with Identifiability Guarantee
H. Yan, L. Kong, L. Gui, Y. Chi, Eric. Xing, Y. He, K. Zhang.
NeurIPS, 2023.

Provides identifiability guarantees for successful disentanglement of the content and style variables, further supporting intervention on latent attributes of text. These principled representations can shed light on constrained, i.e., safe and moral, generation for large language models trained on noisy pretraining data.

Addressing Token Uniformity in Transformers via Singular Value Transformation
H. Yan, L. Gui, Y. He.
UAI, 2022, Spotlight

Token uniformity implies more vanished dimensions in the embedding space. SoftDecay is applied to a range of transformer-based language models, with improved performance observed on STS evaluation and a range of GLUE tasks.

Distinguishability Calibration to In-Context Learning
H. Li, H. Yan, Y. Li, L. Qian, Y. He and L. Gui.
EACL, 2023

The token uniformity issue is also observed in in-context learning; we propose an adapter for more discriminative representation learning, with improved performance on fine-grained text classification tasks.

Professional Activities

Event Organiser: Co-Chair of AACL-IJCNLP (Student Research Workshop) 2022

Reviewer for NLP: AACL'24, EACL'23, EMNLP'22/'23/'24, ACL'23/'24, NAACL'24

Reviewer for ML and AI: Neurocomputing, Knowledge and Information Systems, TOIS, UAI'23, AISTATS'24/'25, NeurIPS'24, ICLR'25


Invited Talks

UC San Diego, NLP Group, 02/2024. Robust and Interpretable NLP via representation learning and Path Ahead

Yale University, NLP Group 01/2024. Robust and Interpretable NLP via representation learning and Path Ahead

Turing AI Fellowship Event, London, 03/2023, Distinguishability Calibration to In-Context Learning

UKRI Fellows Workshop, University of Edinburgh, 04/2022. Interpreting Long Documents and Recommendation Systems via Latent Variable Models

Blog Posts

Reading List for Large Language Models
Induction Heads Contribute to In-Context Learning
Causality101
Debiased Recommendation with Causality
Identifiability101 in Causality

Feel free to steal this website's source code. Do not scrape the HTML from this page itself, as it includes analytics tags that you do not want on your own website — use the github code instead. Also, consider using Leonid Keselman's Jekyll fork of this page.