
[discrete diffusion] Add dflash pipeline #13699

Open
kashif wants to merge 7 commits into huggingface:main from kashif:add-dflash-pipeline

Conversation

Contributor

@kashif kashif commented May 8, 2026

What does this PR do?

Adds the DFlash pipeline as a standalone PR extracted from #12911; it requires huggingface/transformers#45846.


Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

kashif added 2 commits May 8, 2026 10:03
Adds DFlashPipeline + DFlashTokenDiffusionScheduler for block-diffusion
speculative decoding with a draft DFlash model and a target causal LM.

Verified against the six bug patterns surfaced in the LLaDA2 review
(huggingface#13598). DFlash sidesteps most of them by being batch_size=1 only and
relying on the causal default for attention; the applicable patterns
(#3 callback bindings, #4 EOS at first generated position, #6 inner
progress-bar config preservation) are pinned by regression tests.

Public surface mirrors the LLaDA2 / SDAR / IDLM conventions: lazy import,
dummy objects, scheduler + output dataclass, pipeline + output dataclass,
fast tests for both, scheduler doc page, pipeline doc page.

Sample/train scripts under examples/discrete_diffusion/.
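
For orientation, a minimal usage sketch of the public surface described above. The class names come from this PR and the checkpoints from the GPU verification later in the thread, but the constructor arguments and call signature are assumptions, not the final API:

```python
# Hypothetical sketch; every argument name below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer
from diffusers import DFlashPipeline, DFlashTokenDiffusionScheduler

# Draft DFlash model + target causal LM (checkpoints from the GPU tests below).
draft = AutoModelForCausalLM.from_pretrained(
    "z-lab/Qwen3-8B-DFlash-b16", trust_remote_code=True
)
target = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

pipe = DFlashPipeline(
    model=draft,
    target=target,
    tokenizer=tokenizer,
    scheduler=DFlashTokenDiffusionScheduler(),
)
output = pipe("What is 2 + 2?")  # batch_size=1 only; returns an output dataclass
```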
@github-actions github-actions Bot added the size/L, documentation, tests, utils, pipelines, examples, and schedulers labels and removed the size/L label May 8, 2026
@kashif kashif requested a review from DN6 May 8, 2026 10:14
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

- Training: `position_ids` must span `[0, start + block_size)` so the
  draft's attention RoPE cos/sin tables cover both `k_ctx` (target_hidden,
  length `start`) and `k_noise` (noise_embedding, length `block_size`).
  Previously we passed only `arange(start, start + block_size)`, which
  triggered a K-side broadcast mismatch on the very first batch; see the
  sketch after this list.
- Docs/examples: the target loads as plain Qwen3 / Qwen3.5 (no remote
  code), but the draft's custom DFlashDraftModel class lives in the
  Hub repo's `auto_map`, so `trust_remote_code=True` is required for
  draft loads only; see the loading sketch below. Updated the example
  docstring, pipeline doc page, sample script, train script, and the
  GPU verify script.
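
A sketch of the `position_ids` fix from the first bullet; `start` and `block_size` are illustrative values, and the shapes are assumptions based on the description above:

```python
import torch

start, block_size = 128, 16  # lengths of k_ctx (target_hidden) and k_noise

# Before: noise-block positions only, so the RoPE cos/sin tables covered
# just block_size positions and broadcast against k_ctx failed on the K side.
# position_ids = torch.arange(start, start + block_size).unsqueeze(0)

# After: span the full [0, start + block_size) range so the rotary tables
# cover both the context keys and the noise-block keys.
position_ids = torch.arange(0, start + block_size).unsqueeze(0)
```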

Smoke-tested via srun on z-lab/Qwen3.5-4B-DFlash + Qwen/Qwen3.5-4B
(H100): 3 steps complete, final checkpoint saved.
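
The loading split from the second bullet as a sketch; the repo IDs are the ones smoke-tested above, the rest is assumed boilerplate:

```python
from transformers import AutoModelForCausalLM

# Target: stock Qwen3.5 causal LM, no custom modeling code needed.
target = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.5-4B")

# Draft: DFlashDraftModel is resolved through the Hub repo's auto_map,
# so trust_remote_code=True is required here, and only here.
draft = AutoModelForCausalLM.from_pretrained(
    "z-lab/Qwen3.5-4B-DFlash", trust_remote_code=True
)
```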
@github-actions github-actions Bot added the size/L (PR with diff > 200 LOC) label May 8, 2026
kashif added 3 commits May 8, 2026 11:19
…rgets

The pipeline previously short-circuited to `draft.spec_generate(...)` when
the draft model exposed it (e.g. z-lab/Qwen3-8B-DFlash-b16). That path is
the upstream `dflash_generate` loop, which calls `past_key_values_target.crop()`
unconditionally — fine for full-attention targets, but on hybrid targets it
silently corrupts the linear-attention recurrent state.

Confirmed in transformers 5.8.0.dev0 at cache_utils.py:759-761:

    def crop(self, max_length: int):
        # We don't crop the linear attention cache, so simply do nothing here
        pass

`LinearAttentionCacheLayerMixin.crop` is documented as a no-op, so any
verify loop that relies on `cache.crop()` for rollback is wrong on hybrid
attention targets. Our explicit loop already handles this via
`DFlashTokenDiffusionScheduler.snapshot_cache` / `restore_cache` plus an
accepted-prefix re-forward, and reduces to a plain `.crop()` on full-attn
targets.
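
In pseudocode terms, the rollback strategy described above looks roughly like this; `snapshot_cache` / `restore_cache` are named in this PR, but the signatures and the re-forward step here are assumptions:

```python
def rollback_target_cache(scheduler, cache, snapshot, accepted_len, hybrid):
    """Roll the target cache back to the accepted prefix after verification."""
    if not hybrid:
        # Full-attention KV caches support positional truncation directly.
        cache.crop(accepted_len)
        return cache
    # Hybrid targets: crop() is a documented no-op on linear-attention
    # layers, so restore the pre-block snapshot and re-forward the
    # accepted prefix to rebuild the recurrent state.
    cache = scheduler.restore_cache(cache, snapshot)
    # ... re-forward the accepted tokens through the target model here ...
    return cache
```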

Verified end-to-end on GPU after the removal:
- z-lab/Qwen3.5-4B-DFlash + Qwen/Qwen3.5-4B (hybrid attn): "2 + 2 equals 4."
- z-lab/Qwen3-8B-DFlash-b16 + Qwen/Qwen3-8B (full attn):    "2 + 2 equals 4."

Fast tests: 43 passed.
@kashif kashif force-pushed the add-dflash-pipeline branch from e97f7ae to a70e329 May 9, 2026 10:33
