
atomic-chat: add provider with initial blessed models#1476

Open
Vect0rM wants to merge 1 commit into anomalyco:dev from Vect0rM:feat/add-atomic-chat-provider

Conversation


@Vect0rM Vect0rM commented Apr 17, 2026

Summary

Adds Atomic Chat as a new OpenAI-compatible local provider serving at http://127.0.0.1:1337/v1.

After merge, any opencode user will see Atomic Chat in /connect and the registered models in /models with zero extra configuration.

Provider manifest

  • name: Atomic Chat
  • npm: @ai-sdk/openai-compatible
  • api: http://127.0.0.1:1337/v1 (port 1337 is a hard public contract — Atomic Chat's local API server binds it by default)
  • env: ["ATOMIC_CHAT_API_KEY"] (declared for schema compliance — server ignores it for local auth, mirroring LMStudio's approach)
  • doc: https://atomic.chat
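
Assembled from the bullets above, the manifest entry could look like the following. This is a sketch only: the field names (`name`, `npm`, `api`, `env`, `doc`) are taken from the bullets, but the exact file layout of the provider registry is assumed, not confirmed by this PR:

```toml
# Hypothetical provider manifest sketch (assumed file layout).
name = "Atomic Chat"
npm = "@ai-sdk/openai-compatible"
api = "http://127.0.0.1:1337/v1"
env = ["ATOMIC_CHAT_API_KEY"]
doc = "https://atomic.chat"
```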

Included models

Model IDs match the normalized form returned by Atomic Chat's GET /v1/models endpoint (dots in HF repo IDs are replaced with underscores, author prefix is stripped):

| File | Model ID | Upstream (HuggingFace) |
| --- | --- | --- |
| Qwen3_5-9B-IQ4_XS.toml | Qwen3_5-9B-IQ4_XS | unsloth/Qwen3.5-9B-IQ4_XS |
| gemma-4-E4B-it-IQ4_XS.toml | gemma-4-E4B-it-IQ4_XS | unsloth/gemma-4-E4B-it-IQ4_XS |
| MiniMax-M2_5-UD-TQ1_0.toml | MiniMax-M2_5-UD-TQ1_0 | unsloth/MiniMax-M2.5-UD-TQ1_0 |

All three are local GGUF quantizations served via llama.cpp — cost is $0.
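
The normalization rule described above (strip the author prefix, replace dots with underscores) can be sketched as a small helper. `normalizeModelId` is a hypothetical illustration, not code from this PR or from Atomic Chat:

```typescript
// Hypothetical helper mirroring the ID normalization described above:
// drop the HF author prefix, then replace dots with underscores.
function normalizeModelId(hfRepoId: string): string {
  const name = hfRepoId.split("/").pop() ?? hfRepoId;
  return name.replace(/\./g, "_");
}

console.log(normalizeModelId("unsloth/Qwen3.5-9B-IQ4_XS"));
// "Qwen3_5-9B-IQ4_XS"
console.log(normalizeModelId("unsloth/MiniMax-M2.5-UD-TQ1_0"));
// "MiniMax-M2_5-UD-TQ1_0"
```

Applied to the three upstream repo IDs in the table, this reproduces the registered model IDs exactly.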

The Qwen3.5-9B model ID was empirically verified against a running Atomic Chat server:

```console
$ curl http://127.0.0.1:1337/v1/models | jq '.data[].id'
"Qwen3_5-9B-IQ4_XS"
```

Test plan

  • bun validate passes
  • Model IDs verified against GET /v1/models of a running Atomic Chat instance
  • Logo bundled as logo.svg (PNG source embedded via base64)
  • Provider schema compliance (env.min(1), doc.min(1), @ai-sdk/openai-compatible + api pairing)

Adds Atomic Chat as a local OpenAI-compatible provider at
http://127.0.0.1:1337/v1. Includes logo and three curated models:

- unsloth/Qwen3.5-9B-IQ4_XS  (id: Qwen3_5-9B-IQ4_XS)
- unsloth/gemma-4-E4B-it-IQ4_XS  (id: gemma-4-E4B-it-IQ4_XS)
- unsloth/MiniMax-M2.5-UD-TQ1_0  (id: MiniMax-M2_5-UD-TQ1_0)

Model ids match the normalized form returned by Atomic Chat's
/v1/models endpoint (dots replaced with underscores).

Made-with: Cursor
