React Native Firebase

    Module @react-native-firebase/ai

    Functions

    getAI

    Returns the default AI instance that is associated with the provided @firebase/app!FirebaseApp. If no instance exists, initializes a new instance with the default settings.

    getGenerativeModel

    Returns a GenerativeModel class with methods for inference and other functionality.
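Together, getAI and getGenerativeModel form the usual entry point to the SDK. A minimal sketch, assuming a configured default Firebase app; the model name 'gemini-2.0-flash' is an illustrative assumption, not a fixed requirement:

```typescript
import { getApp } from '@react-native-firebase/app';
import { getAI, getGenerativeModel, GoogleAIBackend } from '@react-native-firebase/ai';

// Get an AI instance backed by the Gemini Developer API;
// pass `new VertexAIBackend('us-central1')` instead to use the Vertex AI Gemini API.
const ai = getAI(getApp(), { backend: new GoogleAIBackend() });

// The model name is an illustrative assumption; use any supported Gemini model.
const model = getGenerativeModel(ai, { model: 'gemini-2.0-flash' });

async function run(): Promise<string> {
  const result = await model.generateContent('Explain closures in one sentence.');
  return result.response.text();
}
```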

    getImagenModel

    Returns an ImagenModel class with methods for using Imagen.

    getLiveGenerativeModel

    Returns a LiveGenerativeModel class for real-time, bidirectional communication.

    getTemplateGenerativeModel

    Returns a TemplateGenerativeModel class for executing server-side Gemini templates.

    getTemplateImagenModel

    Returns a TemplateImagenModel class for executing server-side Imagen templates.

    Classes

    AIError

Error class for the Firebase AI SDK.

    AIModel

    Base class for Firebase AI model APIs.

    ArraySchema

    Schema class for "array" types. The items param should refer to the type of item that can be a member of the array.

    Backend

Abstract base class representing the configuration for an AI service backend. This class should not be instantiated directly; use one of its subclasses: GoogleAIBackend for the Gemini Developer API (via Google AI), or VertexAIBackend for the Vertex AI Gemini API.

    BooleanSchema

    Schema class for "boolean" types.

    ChatSession

    ChatSession class that enables sending chat messages and stores history of sent and received messages so far.
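A ChatSession is obtained from a GenerativeModel rather than constructed directly. A hedged sketch of a multi-turn exchange, assuming a default app and an illustrative model name; the seeded history is hypothetical example data:

```typescript
import { getApp } from '@react-native-firebase/app';
import { getAI, getGenerativeModel } from '@react-native-firebase/ai';

const model = getGenerativeModel(getAI(getApp()), { model: 'gemini-2.0-flash' });

async function chatExample(): Promise<string> {
  // startChat stores history, so each new message is sent with
  // the prior turns as context.
  const chat = model.startChat({
    history: [
      { role: 'user', parts: [{ text: 'Hello, I am learning TypeScript.' }] },
      { role: 'model', parts: [{ text: 'Great! Ask me anything.' }] },
    ],
  });
  const result = await chat.sendMessage('What is a tuple type?');
  return result.response.text();
}
```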

    GenerativeModel

    Class for generative model APIs.

    GoogleAIBackend

    Configuration class for the Gemini Developer API.

    ImagenImageFormat

    Defines the image format for images generated by Imagen.

    ImagenModel

    Class for Imagen model APIs.

    IntegerSchema

    Schema class for "integer" types.

    LiveGenerativeModel

    Class for Live generative model APIs. The Live API enables low-latency, two-way multimodal interactions with Gemini.

    LiveSession

    Represents an active, real-time, bidirectional conversation with the model.

    NumberSchema

    Schema class for "number" types.

    ObjectSchema

    Schema class for "object" types. The properties param must be a map of Schema objects.

    Schema

    Parent class encompassing all Schema types, with static methods that allow building specific Schema types. This class can be converted with JSON.stringify() into a JSON string accepted by Vertex AI REST endpoints. (This string conversion is automatically done when calling SDK methods.)
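The static builders on Schema compose into a response schema that can be passed through GenerationConfig to request structured JSON output. A sketch under those assumptions (the recipe fields and model name are illustrative, not part of the API):

```typescript
import { getApp } from '@react-native-firebase/app';
import { getAI, getGenerativeModel, Schema } from '@react-native-firebase/ai';

// Describe the JSON shape we want the model to return.
// The field names here are illustrative example data.
const recipeSchema = Schema.object({
  properties: {
    name: Schema.string(),
    servings: Schema.integer(),
    vegetarian: Schema.boolean(),
    ingredients: Schema.array({ items: Schema.string() }),
  },
});

const model = getGenerativeModel(getAI(getApp()), {
  model: 'gemini-2.0-flash', // illustrative model name
  generationConfig: {
    responseMimeType: 'application/json',
    responseSchema: recipeSchema,
  },
});
```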

    StringSchema

    Schema class for "string" types. Can be used with or without enum values.

    TemplateGenerativeModel

Class for generative model APIs that execute on a server-side template.

    TemplateImagenModel

    Class for Imagen model APIs that execute on a server-side template.

    VertexAIBackend

    Configuration class for the Vertex AI Gemini API.

    Enumerations

    AIErrorCode

    Standardized error codes that AIError can have.

    BlockReason

    Reason that a prompt was blocked.

    FinishReason

    Reason that a candidate finished.

    FunctionCallingMode

    Function calling mode for the model.

    HarmBlockMethod

    This property is not supported in the Gemini Developer API (GoogleAIBackend).

    HarmBlockThreshold

    Threshold above which a prompt or candidate will be blocked.

    HarmCategory

    Harm categories that would cause prompts or candidates to be blocked.

    HarmProbability

    Probability that a prompt or candidate matches a harm category.

    HarmSeverity

    Harm severity levels.

    Modality

    Content part modality.

    SchemaType

Contains the list of OpenAPI data types as defined by the OpenAPI specification.

    Interfaces

    AI

    An instance of the Firebase AI SDK.

    AIOptions

    Options for initializing the AI service using getAI(). This allows specifying which backend to use (Vertex AI Gemini API or Gemini Developer API) and configuring its specific options (like location for Vertex AI).

    AudioTranscriptionConfig

    Configuration for audio transcription in Live sessions.

    BaseParams

    Base parameters for a number of methods.

    Citation

    A single citation.

    CitationMetadata

    Citation metadata that may be found on a GenerateContentCandidate.

    CodeExecutionResult

    The results of code execution run by the model.

    CodeExecutionResultPart

    Represents the code execution result from the model.

    CodeExecutionTool

    A tool that enables the model to use code execution.

    Content

    Content type for both prompts and response candidates.

    CountTokensRequest

Params for calling GenerativeModel.countTokens.

    CountTokensResponse

    Response from calling GenerativeModel.countTokens.

    CustomErrorData

    Details object that contains data originating from a bad HTTP response.

    Date

    Protobuf google.type.Date

    EnhancedGenerateContentResponse

    Response object wrapped with helper methods.

    ErrorDetails

    Details object that may be included in an error response.

    ExecutableCode

    An interface for executable code returned by the model.

    ExecutableCodePart

    Represents the code that is executed by the model.

    FileData

    Data pointing to a file uploaded on Google Cloud Storage.

    FileDataPart

Content part interface if the part represents FileData.

    FunctionCall

    A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name and a structured JSON object containing the parameters and their values.

    FunctionCallingConfig
    FunctionCallPart

    Content part interface if the part represents a FunctionCall.

    FunctionDeclaration

    Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client.

    FunctionDeclarationsTool

    A FunctionDeclarationsTool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.

    FunctionResponse

The result output from a FunctionCall, containing a string representing the FunctionDeclaration.name and a structured JSON object with any output from the function; this output is used as context for the model. It should contain the result of a FunctionCall made based on model prediction.

    FunctionResponsePart

    Content part interface if the part represents FunctionResponse.
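The FunctionDeclaration / FunctionCall / FunctionResponse types above form a round trip: declare a function as a tool, detect the model's call, run it on the client, and return the result. A hedged sketch, assuming a default app; the getWeather function, its parameters, and the stubbed result are all hypothetical:

```typescript
import { getApp } from '@react-native-firebase/app';
import {
  getAI,
  getGenerativeModel,
  Schema,
  FunctionDeclarationsTool,
} from '@react-native-firebase/ai';

// Declare a function the model may ask the client to run.
// The function name and parameters are illustrative.
const tools: FunctionDeclarationsTool[] = [
  {
    functionDeclarations: [
      {
        name: 'getWeather',
        description: 'Returns the current weather for a city.',
        parameters: Schema.object({
          properties: { city: Schema.string() },
        }),
      },
    ],
  },
];

const model = getGenerativeModel(getAI(getApp()), {
  model: 'gemini-2.0-flash', // illustrative model name
  tools,
});

async function runWithTools(): Promise<string> {
  const chat = model.startChat();
  let result = await chat.sendMessage('What is the weather in Oslo?');

  const call = result.response.functionCalls()?.[0];
  if (call) {
    // Execute the requested function locally (stubbed here), then send the
    // output back as a FunctionResponse part so the model can use it.
    const weather = { tempC: 8, conditions: 'rain' }; // stub result
    result = await chat.sendMessage([
      { functionResponse: { name: call.name, response: weather } },
    ]);
  }
  return result.response.text();
}
```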

    GenerateContentCandidate

    A candidate returned as part of a GenerateContentResponse.

    GenerateContentRequest

Request sent through GenerativeModel.generateContent.

    GenerateContentResponse

    Individual response from GenerativeModel.generateContent and GenerativeModel.generateContentStream. generateContentStream() will return one in each chunk until the stream is done.

    GenerateContentResult

    Result object returned from GenerativeModel.generateContent call.

    GenerateContentStreamResult

    Result object returned from GenerativeModel.generateContentStream call. Iterate over stream to get chunks as they come in and/or use the response promise to get the aggregated response when the stream is done.
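The stream/response split described above looks like this in practice. A sketch assuming a default app and an illustrative model name:

```typescript
import { getApp } from '@react-native-firebase/app';
import { getAI, getGenerativeModel } from '@react-native-firebase/ai';

const model = getGenerativeModel(getAI(getApp()), { model: 'gemini-2.0-flash' });

async function streamExample(): Promise<string> {
  const result = await model.generateContentStream('Write a haiku about rain.');

  // Chunks arrive incrementally; each is an EnhancedGenerateContentResponse.
  for await (const chunk of result.stream) {
    console.log(chunk.text());
  }

  // The `response` promise resolves to the aggregated response
  // once the stream is done.
  const full = await result.response;
  return full.text();
}
```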

    GenerationConfig

Config options for content-related requests.

    GenerativeContentBlob

    Interface for sending an image.

    GoogleAICitationMetadata
    GoogleAICountTokensRequest
    GoogleAIGenerateContentCandidate
    GoogleAIGenerateContentResponse
    GoogleSearch

    Specifies the Google Search configuration.

    GoogleSearchTool

    A tool that allows a Gemini model to connect to Google Search to access and incorporate up-to-date information from the web into its responses.

    GroundingAttribution
    GroundingChunk

    Represents a chunk of retrieved data that supports a claim in the model's response. This is part of the grounding information provided when grounding is enabled.

    GroundingMetadata

    Metadata returned when grounding is enabled.

    GroundingSupport

    Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks.

    ImagenGCSImage

    An image generated by Imagen, stored in a Cloud Storage for Firebase bucket.

    ImagenGenerationConfig

    Configuration options for generating images with Imagen.

    ImagenGenerationResponse

    The response from a request to generate images with Imagen.
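The Imagen types above combine as follows. A hedged sketch, assuming a default app; the model name and prompt are illustrative assumptions, and supported Imagen models should be checked against the Firebase documentation:

```typescript
import { getApp } from '@react-native-firebase/app';
import { getAI, getImagenModel } from '@react-native-firebase/ai';

// The model name is an illustrative assumption.
const imagen = getImagenModel(getAI(getApp()), {
  model: 'imagen-3.0-generate-002',
  generationConfig: { numberOfImages: 1 },
});

async function generate(): Promise<string> {
  const response = await imagen.generateImages('A watercolor lighthouse at dusk');
  // Inline images carry base64-encoded bytes plus a MIME type.
  const image = response.images[0];
  return `data:${image.mimeType};base64,${image.bytesBase64Encoded}`;
}
```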

    ImagenInlineImage

    An image generated by Imagen, represented as inline data.

    ImagenModelParams

    Parameters for configuring an ImagenModel.

    ImagenSafetySettings

    Settings for controlling the aggressiveness of filtering out sensitive content.

    InlineDataPart

    Content part interface if the part represents an image.

    LiveGenerationConfig

    Configuration parameters used by LiveGenerativeModel to control live content generation.

    LiveModelParams

    Params passed to getLiveGenerativeModel.

    LiveServerContent

    An incremental content update from the model.

    LiveServerToolCall

    A request from the model for the client to execute one or more functions.

    LiveServerToolCallCancellation

    Notification to cancel a previous function call triggered by LiveServerToolCall.

    ModalityTokenCount

    Represents token counting info for a single modality.

    ModelParams

    Params passed to getGenerativeModel.

    ObjectSchemaInterface

    Interface for ObjectSchema class.

    PrebuiltVoiceConfig

    Configuration for a pre-built voice.

    PromptFeedback

    If the prompt was blocked, this will be populated with blockReason and the relevant safetyRatings.

    RequestOptions

    Params passed to getGenerativeModel.

    RetrievedContextAttribution
    SafetyRating

A safety rating associated with a GenerateContentCandidate.

    SafetySetting

    Safety setting that can be sent as part of request parameters.
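Safety settings pair a HarmCategory with a HarmBlockThreshold and are passed when creating a model. A sketch, assuming a default app and an illustrative model name:

```typescript
import { getApp } from '@react-native-firebase/app';
import {
  getAI,
  getGenerativeModel,
  HarmCategory,
  HarmBlockThreshold,
} from '@react-native-firebase/ai';

const model = getGenerativeModel(getAI(getApp()), {
  model: 'gemini-2.0-flash', // illustrative model name
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
  ],
});
```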

    SchemaInterface

    Interface for Schema class.

    SchemaParams

    Params passed to Schema static methods to create specific Schema classes.

    SchemaRequest

    Final format for Schema params passed to backend requests.
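Since Schema instances are converted with JSON.stringify() into this request format, the wire shape can be illustrated with a plain object. A sketch of what a string schema with enum values might serialize to, in the OpenAPI style the spec describes; the exact field set is an assumption for illustration:

```typescript
// A plain object approximating the OpenAPI-style shape that Schema
// instances serialize to. The exact field set is an assumption.
const stringWithEnum = {
  type: 'string',
  enum: ['red', 'green', 'blue'],
  nullable: false,
};

// SDK methods perform this conversion automatically; it is shown
// here only to make the wire format concrete.
const wire = JSON.stringify(stringWithEnum);
const parsed = JSON.parse(wire);
console.log(parsed.type);
```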

    SchemaShared

    Basic Schema properties shared across several Schema-related types.

    SearchEntrypoint

    Google search entry point.

    Segment

    Represents a specific segment within a Content object, often used to pinpoint the exact location of text or data that grounding information refers to.

    SpeechConfig

    Configures speech synthesis.

    StartChatParams
    TextPart

    Content part interface if the part represents a text string.

    ThinkingConfig

    Configuration for "thinking" behavior of compatible Gemini models.

    ToolConfig

    Tool config. This config is shared for all tools provided in the request.

    Transcription

    Transcription of audio. This can be returned from a LiveGenerativeModel if transcription is enabled with the inputAudioTranscription or outputAudioTranscription properties on the LiveGenerationConfig.

    URLContext

    Specifies the URL Context configuration.

    URLContextMetadata

    Metadata related to URLContextTool.

    URLContextTool

    A tool that allows you to provide additional context to the models in the form of public web URLs. By including URLs in your request, the Gemini model will access the content from those pages to inform and enhance its response.

    URLMetadata

    Metadata for a single URL retrieved by the URLContextTool tool.

    UsageMetadata

    Usage metadata about a GenerateContentResponse.

    VideoMetadata

    Describes the input video content.

    VoiceConfig

Configuration for the voice to be used in speech synthesis.

    WebAttribution
    WebGroundingChunk

    A grounding chunk from the web.

    Type Aliases

    BackendType

    Type alias representing valid backend types. It can be either 'VERTEX_AI' or 'GOOGLE_AI'.

    ImagenAspectRatio

    Aspect ratios for Imagen images.

    ImagenPersonFilterLevel

    A filter level controlling whether generation of images containing people or faces is allowed.

    ImagenSafetyFilterLevel

    A filter level controlling how aggressively to filter sensitive content.

    InferenceMode

    (EXPERIMENTAL) Determines whether inference happens on-device or in-cloud.

    Language

    The programming language of the code.

    LiveResponseType

    The types of responses that can be returned by LiveSession.receive. This is a property on all messages that can be used for type narrowing. This property is not returned by the server, it is assigned to a server message object once it's parsed.

    Outcome

    Represents the result of the code execution.

    Part

    Content part - includes text, image/video, or function call/response part types.

    ResponseModality

    Generation modalities to be returned in generation responses.

    Role

    Role is the producer of the content.

    Tool

Defines a tool that the model can call to access external knowledge.

    TypedSchema

    A type that includes all specific Schema types.

    URLRetrievalStatus

    Type alias for URL retrieval status values.

    Variables

    BackendType

    An enum-like object containing constants that represent the supported backends for the Firebase AI SDK. This determines which backend service (Vertex AI Gemini API or Gemini Developer API) the SDK will communicate with.

    ImagenAspectRatio

    Aspect ratios for Imagen images.

    ImagenPersonFilterLevel

    A filter level controlling whether generation of images containing people or faces is allowed.

    ImagenSafetyFilterLevel

    A filter level controlling how aggressively to filter sensitive content.

    InferenceMode

    (EXPERIMENTAL) Determines whether inference happens on-device or in-cloud.

    Language

    The programming language of the code.

    LiveResponseType

    The types of responses that can be returned by LiveSession.receive.

    Outcome

    Represents the result of the code execution.

    POSSIBLE_ROLES

    Possible roles.

    ResponseModality

    Generation modalities to be returned in generation responses.

    URLRetrievalStatus

    The status of a URL retrieval.