fix: use tool_responses role for gemma4 models in LiteLLM integration #5655
Status: Open

jfrometa88 wants to merge 6 commits into google:main from jfrometa88:main (base: main)

Diff: +207 −7
Commits (6):
- 825e863 test (jfrometa88)
- 7729f3f changes (jfrometa88)
- b48819c changes2 (jfrometa88)
- b4e09f4 Merge branch 'main' into main (rohityan)
- 958d838 fix: scope tool_responses to gemma4, restore docstring, clean up comm… (jfrometa88)
- d74b521 fix: add comments explaining gemma4 tool_responses scope (jfrometa88)
tests/unittests/models/test_lite_llm_gemma_tool_role.py (187 additions, 0 deletions)

```python
# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Tests for Gemma-specific tool role handling in _content_to_message_param.

Gemma's chat template expects role='tool_responses' for tool result messages,
while the OpenAI-compatible default is role='tool'. This module verifies that
_content_to_message_param sets the correct role based on the model name.
"""

from google.adk.models.lite_llm import _content_to_message_param
from google.genai import types
import pytest


# Helpers


def _make_function_response_content(
    function_name: str = "get_weather",
    response_data: dict | None = None,
    call_id: str = "call_001",
) -> types.Content:
  """Builds a types.Content with a single function_response part."""
  if response_data is None:
    response_data = {"city": "Santiago de Cuba", "condition": "sunny"}
  return types.Content(
      role="user",
      parts=[
          types.Part(
              function_response=types.FunctionResponse(
                  name=function_name,
                  response=response_data,
                  id=call_id,
              )
          )
      ],
  )


def _make_multi_function_response_content(
    call_ids: list[str] | None = None,
) -> types.Content:
  """Builds a types.Content with multiple function_response parts."""
  if call_ids is None:
    call_ids = ["call_001", "call_002"]
  return types.Content(
      role="user",
      parts=[
          types.Part(
              function_response=types.FunctionResponse(
                  name=f"tool_{i}",
                  response={"result": f"value_{i}"},
                  id=call_id,
              )
          )
          for i, call_id in enumerate(call_ids)
      ],
  )


def _extract_role(msg) -> str:
  """Extracts role from a litellm message, whether dict or object."""
  if isinstance(msg, dict):
    return msg["role"]
  return msg.role


class TestToolRoleSingleResponse:
  """_content_to_message_param with a single function_response part."""

  @pytest.mark.asyncio
  async def test_non_gemma_model_uses_tool_role(self):
    """Non-Gemma models should get role='tool' (OpenAI-compatible default)."""
    content = _make_function_response_content()

    result = await _content_to_message_param(
        content, model="ollama/qwen2.5-coder:3b"
    )

    assert _extract_role(result) == "tool"

  @pytest.mark.asyncio
  async def test_gemma4_model_uses_tool_responses_role(self):
    """Models containing 'gemma4' should get role='tool_responses'."""
    content = _make_function_response_content()

    result = await _content_to_message_param(content, model="ollama/gemma4:e2b")

    assert _extract_role(result) == "tool_responses", (
        "Gemma models require role='tool_responses' to match their chat "
        "template; role='tool' causes infinite tool-calling loops."
    )

  @pytest.mark.asyncio
  async def test_gemma4_uppercase_model_name(self):
    """Model name matching should be case-insensitive."""
    content = _make_function_response_content()

    result = await _content_to_message_param(content, model="ollama/Gemma4:31b")

    assert _extract_role(result) == "tool_responses"

  @pytest.mark.asyncio
  async def test_tool_call_id_and_content_preserved(self):
    """Fix must not alter tool_call_id or content; only the role changes."""
    content = _make_function_response_content(
        response_data={"status": "ok"}, call_id="my_call_123"
    )

    result = await _content_to_message_param(content, model="ollama/gemma4:e2b")

    if isinstance(result, dict):
      assert result["tool_call_id"] == "my_call_123"
      assert "ok" in result["content"]
    else:
      assert result.tool_call_id == "my_call_123"
      assert "ok" in result.content

  @pytest.mark.asyncio
  async def test_empty_model_string_uses_tool_role(self):
    """Empty model string should fall back to default role='tool'."""
    content = _make_function_response_content()

    result = await _content_to_message_param(content, model="")

    assert _extract_role(result) == "tool"

  @pytest.mark.asyncio
  async def test_unrelated_models_use_tool_role(self):
    """Models that do not contain 'gemma4' must not be affected."""
    unaffected_models = [
        "ollama/llama3:8b",
        "anthropic/claude-3-opus",
        "openai/gpt-4o",
        "ollama/gemma3:4b",  # gemma3 != gemma4
    ]
    for model in unaffected_models:
      content = _make_function_response_content()
      result = await _content_to_message_param(content, model=model)
      assert (
          _extract_role(result) == "tool"
      ), f"Model '{model}' should not be affected by the Gemma4 fix."


class TestToolRoleMultipleResponses:
  """_content_to_message_param with multiple function_response parts."""

  @pytest.mark.asyncio
  async def test_gemma4_all_messages_use_tool_responses_role(self):
    """All messages in a multi-response must have role='tool_responses'."""
    content = _make_multi_function_response_content(
        call_ids=["call_a", "call_b", "call_c"]
    )

    result = await _content_to_message_param(content, model="ollama/gemma4:4b")

    assert isinstance(result, list)
    assert len(result) == 3
    for msg in result:
      assert _extract_role(msg) == "tool_responses", (
          "Every tool message in a multi-response must use 'tool_responses' "
          "for Gemma4 models."
      )

  @pytest.mark.asyncio
  async def test_non_gemma_multi_response_uses_tool_role(self):
    """Non-Gemma multi-response messages should all have role='tool'."""
    content = _make_multi_function_response_content(
        call_ids=["call_a", "call_b"]
    )

    result = await _content_to_message_param(content, model="openai/gpt-4o")

    assert isinstance(result, list)
    for msg in result:
      assert _extract_role(msg) == "tool"
```
Reviewer: Should this be more general, or do we only need to do this for gemma4?
jfrometa88: I looked into this more carefully before broadening the check. Gemma models before version 4 do not support tool use at all, so they would never reach this code path. The `tool_responses` role convention is specific to Gemma4's chat template, making `"gemma4"` the correct and intentional scope for this fix.

Other models with non-standard tool response formats (e.g. Liquid AI's LFM2, which uses its own tokenization scheme with custom delimiters like `<|tool_response_start|>`) represent a structurally different problem that would require a separate solution; conflating them here would overcomplicate this fix.

Happy to add a comment in the code explaining why the check is scoped to `gemma4` specifically, if that would help reviewers in the future.
Reviewer: I see, thanks for the detailed explanation! Using "gemma4" sounds good to me. Could you please add the comment for future reference?
jfrometa88: Added comments explaining why the check is scoped to `gemma4`. Ready for another look whenever you get a chance.