Gemini Live startAudioConversation(): Add API for user feedback on audio input/output events
I've been testing the startAudioConversation() APIs here:
- https://firebase.google.com/docs/ai-logic/live-api?api=dev
- https://developer.android.com/ai/gemini/live
as well as the sample app here:
One feature (new API addition) that I would want:
I'd like to know when audio processing is happening, i.e. both when input audio is being captured and when output audio is playing, a bit like the Gemini app's Live functionality on Android. So I'd like a way to show:
- an audio waveform or pulsing animation to indicate to the user that input is being received (a hypothetical sketch of what such an API could look like follows below)
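To make the request concrete, here is a rough sketch of one possible shape for this in Kotlin. To be clear, nothing below exists in the Firebase AI Logic SDK today; the listener interface, callback names, and the imagined overload are all assumptions purely for illustration.

```kotlin
// Hypothetical sketch only -- this listener does NOT exist in the
// Firebase AI Logic SDK; all names are made up for illustration.
interface AudioConversationListener {
    /**
     * Called repeatedly with the current microphone input level
     * (e.g. normalized to 0f..1f), so the UI can drive a waveform
     * or pulsing animation while the user is speaking.
     */
    fun onInputAudioLevel(level: Float)

    /** Called when the model starts playing back output audio. */
    fun onOutputAudioStarted()

    /** Called when the model's output audio playback finishes. */
    fun onOutputAudioStopped()
}

// Imagined usage (again, not a real overload of the SDK method):
// liveSession.startAudioConversation(listener = myAudioListener)
```

Even a coarse-grained version (just started/stopped events, without amplitude levels) would be enough to replicate the Gemini app's pulsing indicator.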
In the same vein, I'd also like to +1 the implementation of the transcription API (again, exactly like the Gemini app's functionality), as was already mentioned in the docs! A sketch of what that could look like follows below.
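For completeness, a similarly hypothetical sketch of transcription callbacks. The docs mention transcription, but these Kotlin names are my own assumptions, not a real SDK surface.

```kotlin
// Hypothetical sketch only -- illustrative names, not real SDK API.
interface TranscriptionListener {
    /** Partial or final transcript of the user's spoken input. */
    fun onInputTranscription(text: String, isFinal: Boolean)

    /** Partial or final transcript of the model's spoken output. */
    fun onOutputTranscription(text: String, isFinal: Boolean)
}
```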
Thank you! This is a very impressive feature.