docs: update google provider docs for implicit caching (vercel#6656)
## Background
The Gemini 2.5 model family now supports implicit caching.
## Summary
Update docs.
## Tasks
- [x] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
`content/providers/01-ai-sdk-providers/15-google-generative-ai.mdx` (+48 −6)
@@ -240,7 +240,46 @@ See [File Parts](/docs/foundations/prompts#file-parts) for details on how to use
  ### Cached Content

- You can use Google Generative AI language models to cache content:
+ Google Generative AI supports both explicit and implicit caching to help reduce costs on repetitive content.
+
+ #### Implicit Caching
+
+ Gemini 2.5 models automatically provide cache cost savings without needing to create an explicit cache. When you send requests that share common prefixes with previous requests, you'll receive a 75% token discount on cached content.
+
+ To maximize cache hits with implicit caching:
+
+ - Keep content at the beginning of requests consistent
+ - Add variable content (like user questions) at the end of prompts
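
The prefix-ordering advice above can be sketched as a small helper. This is a minimal illustration, not part of the PR: the helper name `buildCacheFriendlyMessages` is hypothetical, and the idea is simply that the large, stable context always comes first so consecutive requests share a common prefix that implicit caching can match.

```typescript
// Hypothetical helper (not from the SDK): order messages so the stable
// context forms a consistent prefix and the variable user question comes last.
type Message = { role: "system" | "user"; content: string };

function buildCacheFriendlyMessages(
  staticContext: string, // large, unchanging content (docs, instructions)
  userQuestion: string, // variable content, placed at the end
): Message[] {
  return [
    { role: "system", content: staticContext }, // consistent prefix across requests
    { role: "user", content: userQuestion }, // changes per request
  ];
}
```

Messages built this way could then be passed to a call such as `generateText({ model: google('gemini-2.5-flash'), messages })`; requests that repeat the same `staticContext` share a prefix that Gemini 2.5 models can serve from the implicit cache.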