Dongwei Jiang

Speech researcher in the past, NLP researcher now
Second-year Master's student
CLSP, Johns Hopkins University

Research Interest

I am broadly interested in reasoning. Within this area, I’ve worked on complex problems such as:

  • Theorem proving and logical reasoning, using the theorem prover Lean to support the reasoning process [1]
  • Decompositional entailment, formulating a consistent and theoretically grounded approach to annotating decompositional entailment datasets [2]

However, one puzzling limitation in LLM reasoning is that while these models can solve “superhuman” problems in specific domains, they often fail at simple tasks. This observation led me to question whether LLMs’ problem-solving abilities truly demonstrate superior reasoning capabilities or simply reflect domain-specific overspecialization. As a result, my focus has turned to more general, system-2-like reasoning. My work in this area includes:

  • Building a general-purpose verifier through rationale extraction from unlabelled data to provide process supervision during reasoning [3]
  • Investigating the effectiveness of CoT prompting across 100+ papers and 20 datasets, finding that CoT mainly benefits math and symbolic reasoning tasks [4]

I’m also interested in the self-improvement capabilities of LLMs. If we begin with the “end” (superintelligence/AGI) in mind, relying on human input alone won’t get us there. We need to teach models to interact with their environment and self-improve. Specifically, I’ve worked on:

  • Understanding what prevents LLMs from effective self-improvement [5]
  • Probing the limits of self-improvement

In addition, my research has frequently drawn inspiration from cognitive science concepts, including cognitive load, system 2 reasoning, and zone of proximal development. This connection seems natural, given that LLMs are fundamentally trained to emulate human cognitive patterns. I’m eager to explore this intersection more deeply in future research.


More About Me

With six years of industry and research experience in speech processing and self-supervised models, I am now shifting my focus to LLMs. To that end, I’m currently studying at JHU as a master’s student, working with Professor Daniel Khashabi and Professor Benjamin Van Durme. I’ve also worked with Professor Shay Cohen from the University of Edinburgh and Professor Greg Durrett from UT Austin.

Reflecting on my career, I’ve noticed a pattern of significant external events reshaping my roles. While I was getting comfortable at DiDi, the company ran into serious regulatory trouble with the Chinese government, resulting in its delisting from the New York Stock Exchange. During my time at YuanFuDao, the Double Reduction Policy was introduced, imposing strict restrictions on the company’s core business. As I was settling in at Shopee, its stock price dropped 80% amid the global economic downturn and tensions between China and the US, leading to extensive layoffs.

In my free time, I sometimes play Civ 6 or Hearthstone. I also run and go bouldering every other day - well, more like every three or four days, but who’s counting?



Selected Publications

  1. arXiv
    RATIONALYST: Pre-training Process-Supervision for Improving Reasoning
    Dongwei Jiang, Guoxuan Wang, Yining Lu, Andrew Wang, Jingyu Zhang, Chuyu Liu, Benjamin Van Durme, and Daniel Khashabi
    arXiv preprint, 2024
  2. arXiv
    To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning
    Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Durrett
    arXiv preprint, 2024
  3. NAACL
    LeanReasoner: Boosting Complex Logical Reasoning with Lean
    Dongwei Jiang, Marcio Fonseca, and Shay B. Cohen
    In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL 2024, Mexico City, Mexico, June 16-21, 2024
  4. EMNLP
    Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic
    Nathaniel Weir, Kate Sanders, Orion Weller, Shreya Sharma, Dongwei Jiang, Zhengping Jiang, Bhavana Dalvi Mishra, Oyvind Tafjord, Peter Jansen, Peter Clark, and Benjamin Van Durme
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
  5. arXiv
    SELF-[IN]CORRECT: LLMs Struggle with Refining Self-Generated Responses
    Dongwei Jiang, Jingyu Zhang, Orion Weller, Nathaniel Weir, Benjamin Van Durme, and Daniel Khashabi
    arXiv preprint, 2024
  6. ICASSP
    A Further Study of Unsupervised Pretraining for Transformer Based Speech Recognition
    Dongwei Jiang, Wubo Li, Ruixiong Zhang, Miao Cao, Ne Luo, Yang Han, Wei Zou, Kun Han, and Xiangang Li
    In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021
  7. Interspeech
    Speech SimCLR: Combining Contrastive and Reconstruction Objective for Self-Supervised Speech Representation Learning
    Dongwei Jiang, Wubo Li, Miao Cao, Wei Zou, and Xiangang Li
    In 22nd Annual Conference of the International Speech Communication Association, Interspeech 2021, Brno, Czechia, August 30 - September 3, 2021
  8. arXiv
    Improving Transformer-based Speech Recognition Using Unsupervised Pre-training
    Dongwei Jiang, Xiaoning Lei, Wubo Li, Ne Luo, Yuxuan Hu, Wei Zou, and Xiangang Li
    arXiv preprint, 2019