Dongwei Jiang
Speech researcher in the past, NLP researcher now
Second-year Master's student
CLSP, Johns Hopkins University
Research Interests
I am broadly interested in reasoning. Within this area, I’ve worked on complex problems such as:
- Theorem proving and logical reasoning, using the theorem prover Lean to aid the reasoning process [1]
- Decompositional entailment, formulating a consistent and theoretically grounded approach to annotating decompositional entailment datasets [2]
However, one puzzling limitation in LLM reasoning is that while these models can solve “superhuman” problems in specific domains, they often fail at simple tasks. This observation led me to question whether LLMs’ problem-solving abilities truly demonstrate superior reasoning or simply reflect domain-specific overspecialization. As a result, my focus has turned to more general, system-2-like reasoning. My work in this area includes:
- Building a general-purpose verifier through rationale extraction from unlabelled data to provide process supervision during reasoning [3]
- Investigating the effectiveness of CoT prompting across 100+ papers and 20 datasets, and finding that CoT mainly benefits math and symbolic reasoning tasks [4]
I’m also interested in the self-improvement capability of LLMs. If we begin with the “end” (superintelligence/AGI) in mind, relying on human input won’t get us there. We need to teach models to interact with the environment and self-improve. Specifically, I’ve worked on:
- Understanding what prevents LLMs from self-improving effectively [5]
- Probing the limits of self-improvement
In addition, my research has frequently drawn inspiration from cognitive science concepts, including cognitive load, system 2 reasoning, and zone of proximal development. This connection seems natural, given that LLMs are fundamentally trained to emulate human cognitive patterns. I’m eager to explore this intersection more deeply in future research.
More About Me
After six years of industry and research experience in speech processing and self-supervised models, I am shifting my focus to LLMs. To that end, I’m currently studying at JHU as a master’s student, working with Professor Daniel Khashabi and Professor Benjamin Van Durme. I’ve also worked with Professor Shay Cohen from the University of Edinburgh and Professor Greg Durrett from UT Austin.
Reflecting on my career, I’ve noticed a pattern of significant external events reshaping my roles. While I was getting comfortable at DiDi, the company ran into serious regulatory issues with the Chinese government, resulting in its delisting from the New York Stock Exchange. During my time at YuanFuDao, the Double Reduction Policy was introduced, imposing strict restrictions on YuanFuDao’s core business. As I was settling into my position at Shopee, the company’s stock price dropped 80% amid the global economic downturn and tensions between China and the US, leading to extensive layoffs.
In my free time, I sometimes play Civ 6 or Hearthstone. I also run and go bouldering every other day - well, more like every three or four days, but who’s counting?
Selected Publications
- NAACL