
General


3 results found

  1. Support for Explicit Context Caching in the Gemini API. Applications frequently send long, static system instructions and large document sets (100k+ tokens) in every prompt. An explicit cache would significantly reduce latency for users and lower token overhead by allowing this context to persist across multiple turns without re-sending the entire prefix.

    1 vote

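The savings this idea describes can be illustrated with a toy sketch. This is not the Gemini API itself — `ContextCache`, `put`, and the whitespace token count are all hypothetical stand-ins — but it shows the arithmetic: upload the long static prefix once under a key, then send only the short key plus each turn.

```python
import hashlib

class ContextCache:
    """Toy store that persists a long static prefix across turns."""
    def __init__(self):
        self._store = {}

    def put(self, prefix: str) -> str:
        # Key the cached prefix by a short content hash.
        key = hashlib.sha256(prefix.encode()).hexdigest()[:12]
        self._store[key] = prefix
        return key

    def get(self, key: str) -> str:
        return self._store[key]

def tokens(text: str) -> int:
    """Crude token count: whitespace-split words."""
    return len(text.split())

# A large static context (standing in for the 100k+ token case).
static_prefix = "system instructions and documents " * 5000
turns = ["What changed in section 2?", "Summarize the appendix."]

# Without caching: the full prefix is re-sent on every turn.
uncached = sum(tokens(static_prefix + " " + t) for t in turns)

# With caching: the prefix is uploaded once; afterwards only a
# short cache key plus the turn travels with each request.
cache = ContextCache()
key = cache.put(static_prefix)  # one-time upload cost
cached = tokens(static_prefix) + sum(tokens(key + " " + t) for t in turns)

print(uncached, cached)  # cached total is roughly half, and the gap grows with more turns
```

With two turns the cached total is already about half of the uncached one, and every additional turn widens the gap by the full prefix length.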

  2. I don't know if this already exists, but it should be possible to train a custom model through the Vertex AI API for Firebase.

    5 votes


  3. Allow the SDK to automatically call the functions that the model requests before proceeding with the prompt.

    3 votes

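The behavior this idea asks for can be sketched as a dispatch loop. Everything here is hypothetical (`StubModel`, `FunctionCall`, `get_weather` — none of these are real SDK names): when the model replies with a function call instead of text, the SDK looks up the registered Python function, invokes it with the model's arguments, feeds the result back, and only returns once the model produces a final text answer.

```python
from dataclasses import dataclass

@dataclass
class FunctionCall:
    name: str
    args: dict

@dataclass
class Text:
    content: str

class StubModel:
    """Stand-in for a model: first asks for a tool result, then answers."""
    def __init__(self):
        self._asked = False

    def respond(self, prompt, tool_result=None):
        if tool_result is None and not self._asked:
            self._asked = True
            return FunctionCall(name="get_weather", args={"city": "Paris"})
        return Text(content=f"It is {tool_result} in Paris.")

def get_weather(city: str) -> str:
    # Hypothetical tool; a real one would query a weather service.
    return "18°C"

def generate_with_auto_calls(model, prompt, tools):
    """Resolve the model's function calls automatically before returning text."""
    reply = model.respond(prompt)
    while isinstance(reply, FunctionCall):
        fn = tools[reply.name]        # look up the registered function
        result = fn(**reply.args)     # invoke it with the model's arguments
        reply = model.respond(prompt, tool_result=result)
    return reply.content

answer = generate_with_auto_calls(StubModel(), "Weather in Paris?",
                                  {"get_weather": get_weather})
print(answer)  # It is 18°C in Paris.
```

The caller sees only the final answer; the intermediate call/response round trips are handled inside the loop, which is exactly the convenience the request is asking the SDK to provide.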
