Android XR is the first Android platform built in the Gemini era, and it powers an ecosystem of headsets, glasses, and everything in between. Gemini makes Android XR headsets easier to use and adds unique capabilities by helping your users understand what they're seeing and by taking actions on their behalf.
You can access the Gemini APIs through Firebase AI Logic, which is available both for native Android apps (in Kotlin) and for Unity. Use these APIs to build AI-powered features that integrate with cloud-based Gemini and Imagen models.
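For example, a native Android app can call a Gemini model with a few lines of Kotlin through the Firebase AI Logic SDK. The following is a minimal sketch, assuming Firebase is already set up in your app; the model name and prompt are placeholders:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

// Create a GenerativeModel instance backed by the Gemini Developer API.
// "gemini-2.5-flash" is an example; choose a model as described below.
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash")

// Generate text from a text-only prompt (call from a coroutine).
suspend fun summarizeScene(sceneDescription: String): String? {
    val response = model.generateContent(
        "Briefly explain what the user is looking at: $sceneDescription"
    )
    return response.text
}
```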
Choose a model
To get started, compare the capabilities of each model available in Firebase. You can then try your prompts against different models in Google AI Studio to determine which model fits your use case.
Explore other ways to enhance your app with Gemini
After you've determined the model that fits your use case, consider these other ways to enhance your app:
- Provide a voice interface: Android XR uses natural inputs like hands, gaze, and voice to navigate the system. To let your users navigate your app using their voice, use the Gemini Live API along with function calling (see the first sketch after this list).
- Generate images with multimodal support: Use Gemini or Imagen models with the Gemini Developer API to generate images (see the second sketch after this list).
- Enrich game interactions in Unity apps: Generate structured output using the Gemini Developer API or Vertex AI Gemini API (see the final sketch after this list).
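As a rough sketch of a voice interface, the following Kotlin uses the Live API with function calling through Firebase AI Logic. The model name and the moveItem action are assumptions invented for illustration, and the Live API is in preview, so its surface may change:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.FunctionDeclaration
import com.google.firebase.ai.type.FunctionResponsePart
import com.google.firebase.ai.type.GenerativeBackend
import com.google.firebase.ai.type.PublicPreviewAPI
import com.google.firebase.ai.type.ResponseModality
import com.google.firebase.ai.type.Schema
import com.google.firebase.ai.type.Tool
import com.google.firebase.ai.type.liveGenerationConfig
import kotlinx.serialization.json.JsonObject
import kotlinx.serialization.json.JsonPrimitive

// A hypothetical app action the model may invoke when the user asks for it.
val moveItem = FunctionDeclaration(
    name = "moveItem",
    description = "Moves a named item in the user's workspace.",
    parameters = mapOf("itemName" to Schema.string(description = "The item to move"))
)

@OptIn(PublicPreviewAPI::class)
suspend fun startVoiceControl() {
    val liveModel = Firebase.ai(backend = GenerativeBackend.googleAI()).liveModel(
        modelName = "gemini-live-2.5-flash-preview", // example model name
        generationConfig = liveGenerationConfig {
            responseModality = ResponseModality.AUDIO // respond with spoken audio
        },
        tools = listOf(Tool.functionDeclarations(listOf(moveItem)))
    )
    val session = liveModel.connect()
    // Streams microphone audio to the model; the handler runs each tool call.
    session.startAudioConversation { functionCall ->
        // Perform the requested action in your app, then report the outcome.
        FunctionResponsePart(
            functionCall.name,
            JsonObject(mapOf("status" to JsonPrimitive("moved")))
        )
    }
}
```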
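Next, a sketch of image generation with Imagen through Firebase AI Logic. The model name and prompt are placeholders, and the Imagen API is in public preview, so an opt-in is required:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend
import com.google.firebase.ai.type.PublicPreviewAPI

@OptIn(PublicPreviewAPI::class)
suspend fun generateTexture(prompt: String): Bitmap {
    val imagenModel = Firebase.ai(backend = GenerativeBackend.googleAI())
        .imagenModel("imagen-3.0-generate-002") // example model name
    val response = imagenModel.generateImages(prompt)
    // Decode the first generated image into an Android Bitmap.
    return response.images.first().asBitmap()
}
```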
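Finally, a structured output sketch. Unity apps use the SDK's C# surface, but the pattern reads the same in Kotlin: constrain responses to a JSON schema so game logic can parse them reliably. The schema fields and model name here are invented for illustration:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend
import com.google.firebase.ai.type.Schema
import com.google.firebase.ai.type.generationConfig

// Ask the model to answer as JSON matching a fixed schema, so the
// response can be deserialized directly into game state.
val npcModel = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash", // example model name
    generationConfig = generationConfig {
        responseMimeType = "application/json"
        responseSchema = Schema.obj(
            mapOf(
                "dialogue" to Schema.string(),
                "mood" to Schema.enumeration(listOf("friendly", "neutral", "hostile"))
            )
        )
    }
)

suspend fun npcReply(playerLine: String): String? =
    npcModel.generateContent("The player says: \"$playerLine\". Reply in character.").text
```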