fix(provider): gate zai/zhipuai thinking injection on reasoning capability and restore GLM variants#23352
Conversation
- Only inject `thinking: { type: "enabled", clear_thinking: false }` for
z.ai/zhipuai models that have `capabilities.reasoning = true`. Previously
this was sent unconditionally, causing non-reasoning GLM models (e.g.
glm-5-turbo, glm-4.5-flash) to return empty responses silently.
- Remove `id.includes("glm")` from the early-return exclusion block in
`variants()`. GLM reasoning models routed through `@ai-sdk/openai-compatible`
now fall through to the correct switch case and return
`{ low, medium, high }` reasoning-effort variants.
- Update tests: GLM variant test now asserts reasoning efforts are returned;
add non-reasoning z.ai model test to confirm thinking is not injected.
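The capability gate described above can be sketched as follows; the `ModelInfo` shape and `applyThinking` helper are illustrative stand-ins, not the actual transform.ts API:

```typescript
// Illustrative sketch of the capability gate; types and names are assumed,
// not copied from transform.ts.
interface ModelInfo {
  providerID: string
  capabilities: { reasoning: boolean }
}

function applyThinking(
  model: ModelInfo,
  body: Record<string, unknown>,
): Record<string, unknown> {
  const isZai =
    model.providerID.includes("zai") || model.providerID.includes("zhipuai")
  // Non-reasoning GLM models (glm-5-turbo, glm-4.5-flash) silently stream
  // empty responses when sent this parameter, so gate on the capability flag.
  if (isZai && model.capabilities.reasoning) {
    return { ...body, thinking: { type: "enabled", clear_thinking: false } }
  }
  return body
}
```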
Thanks for your contribution! This PR doesn't have a linked issue. All PRs must reference an existing issue. Please see CONTRIBUTING.md for details.
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:
Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
The following comment was made by an LLM; it may be inaccurate. Potential Duplicate Found:
Why it's related: This PR also addresses restoring GLM variants and appears to handle similar reasoning/capability-related fixes for provider models. Both PRs seem to be working on restoring variants and gating thinking/reasoning features based on provider capabilities.
This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window. Feel free to open a new pull request that follows our guidelines.
Problem
Two related bugs affect z.ai / zhipuai providers.
Bug 1 — Non-reasoning GLM models return empty responses
`options()` unconditionally injects `thinking: { type: "enabled", clear_thinking: false }` for all models whose `providerID` contains `"zai"` or `"zhipuai"`. Non-reasoning models (e.g. `glm-5-turbo`, `glm-4.5-flash`) do not support this parameter and silently return empty streaming responses.

Reproduced: the model connects, `step-start` and `step-finish` events fire, but no text or reasoning parts are emitted.

Bug 2 — GLM reasoning models show no variant selector in UI
`variants()` included `id.includes("glm")` in an early-return exclusion block, so GLM reasoning models routed via `@ai-sdk/openai-compatible` never reached the switch case that returns `{ low, medium, high }` effort variants. This was an unintended side effect of an earlier fix targeting other providers.

Fix
`packages/opencode/src/provider/transform.ts`

- Add a `&& input.model.capabilities.reasoning` guard to the z.ai thinking injection — thinking is only injected when the model actually supports reasoning.
- Remove `id.includes("glm")` from the `variants()` exclusion block — GLM reasoning models now fall through to the `@ai-sdk/openai-compatible` switch case and correctly return `{ low, medium, high }` reasoning-effort variants.

`packages/opencode/test/provider/transform.test.ts`

- Assert `{ low, medium, high }` efforts are returned for a z.ai GLM reasoning model via `@ai-sdk/openai-compatible`.
- Add a non-reasoning z.ai model test confirming no `thinking` is injected.

Testing
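The new assertions can be sketched as plain checks; the helpers below are minimal stand-ins for the patched transform.ts logic, and the model ids beyond those named above are illustrative:

```typescript
// Minimal stand-ins for the patched behavior, used to illustrate what the
// updated tests assert; not the real transform.ts implementation.
type Model = {
  providerID: string
  npm: string
  id: string
  reasoning: boolean
}

const thinkingParam = (m: Model) =>
  (m.providerID.includes("zai") || m.providerID.includes("zhipuai")) &&
  m.reasoning
    ? { type: "enabled", clear_thinking: false }
    : undefined

const effortVariants = (m: Model) =>
  m.reasoning && m.npm === "@ai-sdk/openai-compatible"
    ? ["low", "medium", "high"]
    : []

// GLM reasoning model: gets effort variants and the thinking parameter.
const glm: Model = {
  providerID: "zai",
  npm: "@ai-sdk/openai-compatible",
  id: "glm-4.6",
  reasoning: true,
}
console.assert(effortVariants(glm).join(",") === "low,medium,high")
console.assert(thinkingParam(glm) !== undefined)

// Non-reasoning GLM model: no thinking parameter is injected.
const turbo: Model = { ...glm, id: "glm-5-turbo", reasoning: false }
console.assert(thinkingParam(turbo) === undefined)
```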