Returns a GenerativeModel class with methods for inference and other functionality.
Returns an ImagenModel class with methods for using Imagen.
Returns a LiveGenerativeModel class for real-time, bidirectional communication.
Returns a TemplateGenerativeModel class for executing server-side Gemini templates.
Returns a TemplateImagenModel class for executing server-side Imagen templates.
Error class for the Firebase AI SDK.
Base class for Firebase AI model APIs.
Schema class for "array" types.
The items param should refer to the type of item that can be a member
of the array.
Abstract base class representing the configuration for an AI service backend. This class should not be instantiated directly. Use one of its subclasses: GoogleAIBackend for the Gemini Developer API (via Google AI), or VertexAIBackend for the Vertex AI Gemini API.
Schema class for "boolean" types.
ChatSession class that enables sending chat messages and stores the history of messages sent and received so far.
Class for generative model APIs.
Configuration class for the Gemini Developer API.
Defines the image format for images generated by Imagen.
Class for Imagen model APIs.
Schema class for "integer" types.
Class for Live generative model APIs. The Live API enables low-latency, two-way multimodal interactions with Gemini.
Represents an active, real-time, bidirectional conversation with the model.
Schema class for "number" types.
Schema class for "object" types.
The properties param must be a map of Schema objects.
Parent class encompassing all Schema types, with static methods that
allow building specific Schema types. This class can be converted with
JSON.stringify() into a JSON string accepted by Vertex AI REST endpoints.
(This string conversion is automatically done when calling SDK methods.)
Schema class for "string" types. Can be used with or without enum values.
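As a sketch of the conversion described above, the JSON string that JSON.stringify() produces for a Schema built with the static methods follows an OpenAPI-style shape like the one below. The object literal here is illustrative, not the SDK's own output:

```typescript
// Sketch of the OpenAPI-style JSON that a Schema built with
// Schema.object(), Schema.string(), and Schema.array() converts to.
// Field names follow OpenAPI data-type conventions; treat the exact
// shape as illustrative rather than authoritative.
const recipeSchema = {
  type: "object",
  properties: {
    name: { type: "string", description: "Name of the recipe" },
    steps: { type: "array", items: { type: "string" } },
  },
  required: ["name", "steps"],
};

// The string form of this shape is what a REST endpoint would accept.
const serialized = JSON.stringify(recipeSchema);
```

When calling SDK methods directly, this string conversion happens automatically, so you would normally pass the Schema object itself.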
Class for GenerativeModel APIs that execute on a server-side template.
Class for Imagen model APIs that execute on a server-side template.
Configuration class for the Vertex AI Gemini API.
Standardized error codes that AIError can have.
Reason that a prompt was blocked.
Reason that a candidate finished.
Function calling mode for the model.
This property is not supported in the Gemini Developer API (GoogleAIBackend).
Threshold above which a prompt or candidate will be blocked.
Harm categories that would cause prompts or candidates to be blocked.
Probability that a prompt or candidate matches a harm category.
Harm severity levels.
Content part modality.
Contains the list of OpenAPI data types as defined by the OpenAPI specification.
An instance of the Firebase AI SDK.
Options for initializing the AI service using getAI(). This allows specifying which backend to use (Vertex AI Gemini API or Gemini Developer API) and configuring its specific options (like location for Vertex AI).
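To make the backend choice above concrete, here is a hedged sketch of such an options object. The BackendSketch stand-in type is hypothetical; only the 'VERTEX_AI' / 'GOOGLE_AI' values come from the backend-type descriptions in this reference:

```typescript
// Hypothetical stand-in modeling the two backend choices described
// above. Only the string values 'VERTEX_AI' and 'GOOGLE_AI' are taken
// from this reference; the field names here are illustrative.
type BackendType = "VERTEX_AI" | "GOOGLE_AI";

interface BackendSketch {
  backendType: BackendType;
  location?: string; // only meaningful for the Vertex AI backend
}

// Options selecting the Vertex AI Gemini API, with a location.
const vertexOptions: { backend: BackendSketch } = {
  backend: { backendType: "VERTEX_AI", location: "us-central1" },
};

// Options selecting the Gemini Developer API; no location needed.
const googleAIOptions: { backend: BackendSketch } = {
  backend: { backendType: "GOOGLE_AI" },
};
```

In real code the backend field would hold a GoogleAIBackend or VertexAIBackend instance rather than a plain object.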
Configuration for audio transcription in Live sessions.
Base parameters for a number of methods.
A single citation.
Citation metadata that may be found on a GenerateContentCandidate.
The results of code execution run by the model.
Represents the code execution result from the model.
A tool that enables the model to use code execution.
Content type for both prompts and response candidates.
Params for calling GenerativeModel.countTokens.
Response from calling GenerativeModel.countTokens.
Details object that contains data originating from a bad HTTP response.
Protobuf google.type.Date.
Response object wrapped with helper methods.
Details object that may be included in an error response.
An interface for executable code returned by the model.
Represents the code that is executed by the model.
Data pointing to a file uploaded on Google Cloud Storage.
Content part interface if the part represents FileData.
A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name and a structured JSON object containing the parameters and their values.
Content part interface if the part represents a FunctionCall.
Structured representation of a function declaration as defined by the
OpenAPI 3.0 specification.
Included
in this declaration are the function name and parameters. This
FunctionDeclaration is a representation of a block of code that can be used
as a Tool by the model and executed by the client.
A FunctionDeclarationsTool is a piece of code that enables the system to
interact with external systems to perform an action, or set of actions,
outside of knowledge and scope of the model.
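A minimal sketch of such a declaration and tool, using the OpenAPI 3.0 parameter shape described above. The function name, description, and parameters here are invented for illustration:

```typescript
// Sketch of a function declaration: the function name plus
// OpenAPI-style parameters. The model can request a call to it via a
// FunctionCall; the client executes it and sends back a
// FunctionResponse. All names here are illustrative.
const getWeather = {
  name: "getWeather",
  description: "Look up the current weather for a city.",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string" },
    },
    required: ["city"],
  },
};

// A FunctionDeclarationsTool groups one or more declarations so the
// model can choose among them.
const weatherTool = { functionDeclarations: [getWeather] };
```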
The result output from a FunctionCall, containing a string representing the FunctionDeclaration.name and a structured JSON object with any output from the function. This output is used as context for the model, and should contain the result of a FunctionCall made based on model prediction.
Content part interface if the part represents FunctionResponse.
A candidate returned as part of a GenerateContentResponse.
Request sent through GenerativeModel.generateContent.
Individual response from GenerativeModel.generateContent and
GenerativeModel.generateContentStream.
generateContentStream() will return one in each chunk until
the stream is done.
Result object returned from GenerativeModel.generateContent call.
Result object returned from GenerativeModel.generateContentStream call.
Iterate over stream to get chunks as they come in and/or
use the response promise to get the aggregated response when
the stream is done.
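The per-chunk iteration described above can be sketched with a stand-in stream in place of a real model call. The text() helper mirrors the response-with-helper-methods shape named in this reference; the generator and collect function are invented for illustration:

```typescript
// Stand-in for the chunk stream returned by generateContentStream().
// Each chunk exposes a text() helper, mirroring the wrapped response
// described above; the content here is made up.
async function* fakeStream() {
  yield { text: () => "Hello, " };
  yield { text: () => "world." };
}

// Iterate over the stream and aggregate the chunk text, the same way
// client code would consume the real stream property.
async function collect(
  stream: AsyncIterable<{ text: () => string }>
): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk.text();
  }
  return out;
}
```

With the real result object you could instead await its aggregated-response promise once the stream is done.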
Config options for content-related requests.
Interface for sending an image.
Specifies the Google Search configuration.
A tool that allows a Gemini model to connect to Google Search to access and incorporate up-to-date information from the web into its responses.
Represents a chunk of retrieved data that supports a claim in the model's response. This is part of the grounding information provided when grounding is enabled.
Metadata returned when grounding is enabled.
Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks.
An image generated by Imagen, stored in a Cloud Storage for Firebase bucket.
Configuration options for generating images with Imagen.
The response from a request to generate images with Imagen.
An image generated by Imagen, represented as inline data.
Parameters for configuring an ImagenModel.
Settings for controlling the aggressiveness of filtering out sensitive content.
Content part interface if the part represents an image.
Configuration parameters used by LiveGenerativeModel to control live content generation.
Params passed to getLiveGenerativeModel.
An incremental content update from the model.
A request from the model for the client to execute one or more functions.
Notification to cancel a previous function call triggered by LiveServerToolCall.
Represents token counting info for a single modality.
Params passed to getGenerativeModel.
Interface for ObjectSchema class.
Configuration for a pre-built voice.
If the prompt was blocked, this will be populated with blockReason and
the relevant safetyRatings.
Params passed to getGenerativeModel.
A safety rating associated with a GenerateContentCandidate.
Safety setting that can be sent as part of request parameters.
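A safety setting of this kind pairs a harm category with a blocking threshold, per the category and threshold descriptions above. The enum-style string values below are assumptions for the sketch, not verified SDK constants:

```typescript
// Sketch of a safety setting: a harm category paired with the
// threshold above which a prompt or candidate is blocked. The string
// values are assumed enum-style constants, not verified.
const safetySetting = {
  category: "HARM_CATEGORY_HARASSMENT",
  threshold: "BLOCK_MEDIUM_AND_ABOVE",
};
```

In real code these values would come from the SDK's harm-category and threshold enums rather than raw strings.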
Interface for Schema class.
Final format for Schema params passed to backend requests.
Basic Schema properties shared across several Schema-related types.
Google search entry point.
Represents a specific segment within a Content object, often used to pinpoint the exact location of text or data that grounding information refers to.
Configures speech synthesis.
Params for GenerativeModel.startChat.
Content part interface if the part represents a text string.
Configuration for "thinking" behavior of compatible Gemini models.
Tool config. This config is shared for all tools provided in the request.
Transcription of audio. This can be returned from a LiveGenerativeModel if transcription
is enabled with the inputAudioTranscription or outputAudioTranscription properties on
the LiveGenerationConfig.
Specifies the URL Context configuration.
Metadata related to URLContextTool.
A tool that allows you to provide additional context to the models in the form of public web URLs. By including URLs in your request, the Gemini model will access the content from those pages to inform and enhance its response.
Metadata for a single URL retrieved by the URLContextTool tool.
Usage metadata about a GenerateContentResponse.
Describes the input video content.
Configuration for the voice to be used in speech synthesis.
A grounding chunk from the web.
Type alias representing valid backend types.
It can be either 'VERTEX_AI' or 'GOOGLE_AI'.
Aspect ratios for Imagen images.
A filter level controlling whether generation of images containing people or faces is allowed.
A filter level controlling how aggressively to filter sensitive content.
(EXPERIMENTAL) Determines whether inference happens on-device or in-cloud.
The programming language of the code.
The types of responses that can be returned by LiveSession.receive. This is a property on all messages that can be used for type narrowing. This property is not returned by the server, it is assigned to a server message object once it's parsed.
Represents the result of the code execution.
Content part - includes text, image/video, or function call/response part types.
Generation modalities to be returned in generation responses.
Role is the producer of the content.
Defines a tool that the model can call to access external knowledge.
A type that includes all specific Schema types.
Type alias for URL retrieval status values.
An enum-like object containing constants that represent the supported backends for the Firebase AI SDK. This determines which backend service (Vertex AI Gemini API or Gemini Developer API) the SDK will communicate with.
Aspect ratios for Imagen images.
A filter level controlling whether generation of images containing people or faces is allowed.
A filter level controlling how aggressively to filter sensitive content.
(EXPERIMENTAL) Determines whether inference happens on-device or in-cloud.
The programming language of the code.
The types of responses that can be returned by LiveSession.receive.
Represents the result of the code execution.
Possible roles.
Generation modalities to be returned in generation responses.
The status of a URL retrieval.
Returns the default AI instance that is associated with the provided @firebase/app!FirebaseApp. If no instance exists, initializes a new instance with the default settings.