

wd1: Weighted Policy Optimization for Reasoning in Diffusion Language Models

Author(s)
Tang, Xiaohang; Dolga, Rares; Yoon, Sangwoong; Bogunovic, Ilija
Issued Date
2026-04-23
URI
https://scholarworks.unist.ac.kr/handle/201301/90535
Citation
International Conference on Learning Representations
Abstract
Improving the reasoning capabilities of diffusion-based large language models (dLLMs) through reinforcement learning (RL) remains an open problem. Because the likelihood function of dLLMs is intractable, the current, old, and reference policy likelihoods must each be approximated at every policy optimization step. This reliance introduces additional computational overhead and can lead to large bias, particularly when approximation errors occur in the denominator of the policy ratios used for importance sampling. To mitigate these issues, we introduce wd1, a novel policy optimization approach that reformulates the objective as a weighted likelihood, requiring only a single approximation: that of the current parametrized policy's likelihood. Experiments on widely used reasoning benchmarks demonstrate that wd1, without supervised fine-tuning (SFT) or any supervised data, outperforms existing RL methods for dLLMs, achieving up to 16% higher accuracy. wd1 also delivers computational gains, including reduced training time and fewer function evaluations (NFEs) per gradient step. These findings, combined with the simplicity of the method's implementation and its R1-Zero-like training (no SFT), position wd1 as a more effective and efficient approach for applying RL to dLLM reasoning.
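The abstract's core idea, an objective that weights each sampled completion's log-likelihood by its reward so only the current policy's (approximate) likelihood is ever needed, can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation: the function name, the softmax weighting scheme, and the `beta` temperature are illustrative assumptions.

```python
import numpy as np

def weighted_nll_loss(log_probs, rewards, beta=1.0):
    """Illustrative weighted-likelihood objective (not wd1's exact form).

    log_probs: approximate log-likelihoods of a group of sampled
               completions under the *current* policy only -- no
               old- or reference-policy ratios are required.
    rewards:   scalar rewards for those completions.
    """
    # Softmax weights over the sampled group: higher-reward samples
    # get larger weight (shift by max for numerical stability).
    w = np.exp(beta * (rewards - rewards.max()))
    w = w / w.sum()
    # Negative weighted log-likelihood to minimize.
    return -float(np.sum(w * log_probs))
```

Contrast this with ratio-based objectives (e.g. PPO-style clipping), which divide by an approximated old-policy likelihood; an error in that denominator is exactly the bias source the abstract points to, and a pure weighting scheme avoids it by construction.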
Publisher
Proceedings of International Conference on Learning Representations (ICLR)

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.