
fix(provider): gate zai/zhipuai thinking injection on reasoning capability and restore GLM variants#23352

Closed
A-Diomar wants to merge 1 commit into anomalyco:dev from A-Diomar:fix/zai-thinking-and-glm-variants

Conversation


@A-Diomar A-Diomar commented Apr 18, 2026

Problem

Two related bugs affect z.ai / zhipuai providers.

Bug 1 — Non-reasoning GLM models return empty responses

options() unconditionally injects thinking: { type: "enabled", clear_thinking: false } for all models whose providerID contains "zai" or "zhipuai". Non-reasoning models (e.g. glm-5-turbo, glm-4.5-flash) do not support this parameter and silently return empty streaming responses.

Reproduced: model connects, step-start and step-finish events fire, but no text or reasoning parts are emitted.

Bug 2 — GLM reasoning models show no variant selector in UI

variants() included id.includes("glm") in an early-return exclusion block, so GLM reasoning models routed via @ai-sdk/openai-compatible never reached the switch case that returns { low, medium, high } effort variants. This was an unintended side effect of an earlier fix targeting other providers.

Fix

packages/opencode/src/provider/transform.ts

  1. Add && input.model.capabilities.reasoning guard to the z.ai thinking injection — thinking is only injected when the model actually supports reasoning.

  2. Remove id.includes("glm") from the variants() exclusion block — GLM reasoning models now fall through to the @ai-sdk/openai-compatible switch case and correctly return { low, medium, high } reasoning-effort variants.
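A minimal sketch of both fixes, under assumed shapes — the `Model` type, the `sdk` field, and the `reasoningEffort` variant payloads below are illustrative, not the actual `transform.ts` definitions:

```typescript
// Illustrative only: the real transform.ts types and helpers differ.
type Model = {
  id: string
  providerID: string
  capabilities: { reasoning: boolean }
  sdk: string
}

// Fix 1: gate the z.ai/zhipuai thinking injection on reasoning capability.
function options(model: Model): Record<string, unknown> {
  const opts: Record<string, unknown> = {}
  const isZai =
    model.providerID.includes("zai") || model.providerID.includes("zhipuai")
  // Non-reasoning GLM models reject this parameter and silently stream
  // nothing, so inject it only when the model reports reasoning support.
  if (isZai && model.capabilities.reasoning) {
    opts.thinking = { type: "enabled", clear_thinking: false }
  }
  return opts
}

// Fix 2: with `id.includes("glm")` removed from the exclusion block, GLM
// reasoning models now reach the openai-compatible case and get variants.
function variants(model: Model): Record<string, { reasoningEffort: string }> {
  if (!model.capabilities.reasoning) return {}
  switch (model.sdk) {
    case "@ai-sdk/openai-compatible":
      return {
        low: { reasoningEffort: "low" },
        medium: { reasoningEffort: "medium" },
        high: { reasoningEffort: "high" },
      }
    default:
      return {}
  }
}
```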

packages/opencode/test/provider/transform.test.ts

  • Update GLM variant test: asserts { low, medium, high } efforts returned for a z.ai GLM reasoning model via @ai-sdk/openai-compatible.
  • Add a test per provider ID ("zai", "zhipuai"): a non-reasoning model does not get thinking injected.

Testing

bun test packages/opencode/test/provider/transform.test.ts
# 137 pass, 0 fail

fix(provider): gate zai/zhipuai thinking injection on reasoning capability and restore GLM variants

- Only inject `thinking: { type: "enabled", clear_thinking: false }` for
  z.ai/zhipuai models that have `capabilities.reasoning = true`. Previously
  this was sent unconditionally, causing non-reasoning GLM models (e.g.
  glm-5-turbo, glm-4.5-flash) to return empty responses silently.

- Remove `id.includes("glm")` from the early-return exclusion block in
  `variants()`. GLM reasoning models routed through `@ai-sdk/openai-compatible`
  now fall through to the correct switch case and return
  `{ low, medium, high }` reasoning-effort variants.

- Update tests: GLM variant test now asserts reasoning efforts are returned;
  add non-reasoning z.ai model test to confirm thinking is not injected.
@github-actions
Contributor

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions github-actions bot added the needs:issue and needs:compliance (auto-closes after 2 hours) labels Apr 18, 2026
@github-actions
Contributor

This PR doesn't fully meet our contributing guidelines and PR template.

What needs to be fixed:

  • PR description is missing required template sections. Please use the PR template.

Please edit this PR description to address the above within 2 hours, or it will be automatically closed.

If you believe this was flagged incorrectly, please let a maintainer know.

@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Potential Duplicate Found:

Why it's related: This PR also addresses restoring GLM variants and appears to handle similar reasoning/capability-related fixes for provider models. Both PRs seem to be working on restoring variants and gating thinking/reasoning features based on provider capabilities.

@github-actions
Contributor

This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window.

Feel free to open a new pull request that follows our guidelines.

@github-actions github-actions bot removed the needs:compliance label Apr 19, 2026
@github-actions github-actions bot closed this Apr 19, 2026
