-
Explicit Context Caching
Support for explicit context caching in the Gemini API. Applications frequently send long, static system instructions and large document sets (100k+ tokens) with every prompt. An explicit cache would significantly reduce latency for users and lower token overhead by letting us persist this context across multiple turns without re-sending the entire prefix.
1 vote
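A minimal sketch of what the request is asking for, assuming a hypothetical `createCache`/`generate` client surface (none of these types or methods exist in the Firebase AI Logic SDK; they only illustrate the flow): the static prefix is uploaded once, a handle comes back, and later turns reference the handle instead of re-sending the tokens.

```kotlin
// Illustrative sketch only: models the idea of explicit context caching.
// CacheHandle, FakeModelClient, createCache, and generate are all invented
// names, not part of any real Firebase or Gemini API.

data class CacheHandle(val id: String, val tokenCount: Int)

class FakeModelClient {
    private val caches = mutableMapOf<String, String>()

    // Upload the large static prefix once; get back a reusable handle.
    fun createCache(prefix: String): CacheHandle {
        val id = "cache-" + prefix.hashCode().toUInt().toString(16)
        caches[id] = prefix
        return CacheHandle(id, prefix.length / 4) // rough token estimate
    }

    // Later turns send only the new user message plus the handle,
    // so the prefix tokens are never transmitted again.
    fun generate(cache: CacheHandle, userMessage: String): String {
        val prefix = caches[cache.id] ?: error("cache expired: ${cache.id}")
        return "answered '$userMessage' using ${prefix.length} cached chars"
    }
}

fun main() {
    val client = FakeModelClient()
    val bigPrefix = "system instructions plus large document set ".repeat(1000)
    val handle = client.createCache(bigPrefix)   // paid for once
    println(client.generate(handle, "turn 1"))   // re-sends only the handle
    println(client.generate(handle, "turn 2"))
}
```

The point of the sketch is the cost model: `createCache` is the only call that carries the full prefix, and every subsequent `generate` is proportional to the new message alone.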
-
Train A Model Through API
I don't know if this already exists, but I think it should be possible to train a custom model through the Vertex AI API for Firebase.
5 votes — Training models is not part of the product direction of Firebase AI Logic. Other platforms like Vertex AI are more suitable for that.
-
Automated Function Calling for Kotlin
Allow the SDK to automatically call the functions that the model requests before proceeding with the prompt.
3 votes
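A sketch of the loop this request describes, with stand-in types (the `ModelResponse`, `FunctionCall`, and `AutoCallingChat` names are invented for illustration, not the real Kotlin SDK API): the SDK detects a function-call response, invokes the registered handler itself, feeds the result back to the model, and only returns the final text to the caller.

```kotlin
// Illustrative sketch of automated function calling; all types are invented.

sealed interface ModelResponse
data class FunctionCall(val name: String, val arg: String) : ModelResponse
data class Text(val text: String) : ModelResponse

class AutoCallingChat(private val tools: Map<String, (String) -> String>) {
    // Stand-in for the real model: first requests a tool, then answers.
    private var step = 0
    private fun modelTurn(input: String): ModelResponse =
        if (step++ == 0) FunctionCall("getWeather", "Paris")
        else Text("It is $input in Paris.")

    fun send(prompt: String): String {
        var response = modelTurn(prompt)
        // The SDK, not the app, resolves function calls until text arrives.
        while (response is FunctionCall) {
            val handler = tools[response.name]
                ?: error("no tool registered for ${response.name}")
            response = modelTurn(handler(response.arg))
        }
        return (response as Text).text
    }
}

fun main() {
    val chat = AutoCallingChat(mapOf("getWeather" to { _ -> "sunny" }))
    // The caller sees one question and one answer; the tool round-trip
    // happened automatically inside send().
    println(chat.send("What's the weather in Paris?"))
}
```

Today the application has to inspect each response for function calls and re-send results manually; the request is for `send` to absorb that loop, as sketched above.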