React Native Firebase

    Type Alias ImagenSafetyFilterLevel (Beta)

    ImagenSafetyFilterLevel: typeof ImagenSafetyFilterLevel[keyof typeof ImagenSafetyFilterLevel]

    A filter level controlling how aggressively to filter sensitive content.

    Text prompts provided as input, and images (generated or uploaded) through Imagen on Vertex AI, are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic content). This filter level controls how aggressively potentially harmful content is filtered out of responses. See the documentation and the Responsible AI and usage guidelines for more details.
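    The signature above (`typeof ImagenSafetyFilterLevel[keyof typeof ImagenSafetyFilterLevel]`) is the standard const-object "enum" pattern: the alias resolves to the union of the object's value types. A minimal self-contained sketch of that pattern is shown below; the member names mirror the safety filter levels documented for Imagen on Vertex AI, but the exact string values here are illustrative, not a copy of the SDK source.

    ```typescript
    // Illustrative stand-in for the SDK's const object; the member names
    // follow the documented Imagen safety filter levels, and the string
    // values are assumptions for the sake of a runnable example.
    const ImagenSafetyFilterLevel = {
      BLOCK_LOW_AND_ABOVE: 'block_low_and_above',
      BLOCK_MEDIUM_AND_ABOVE: 'block_medium_and_above',
      BLOCK_ONLY_HIGH: 'block_only_high',
      BLOCK_NONE: 'block_none',
    } as const;

    // The type alias resolves to the union of the object's value types:
    // 'block_low_and_above' | 'block_medium_and_above' | 'block_only_high' | 'block_none'
    type ImagenSafetyFilterLevel =
      (typeof ImagenSafetyFilterLevel)[keyof typeof ImagenSafetyFilterLevel];

    // A function typed with the alias accepts any of the member values,
    // but rejects arbitrary strings at compile time.
    function describeFilterLevel(level: ImagenSafetyFilterLevel): string {
      return `safety filter level: ${level}`;
    }

    console.log(describeFilterLevel(ImagenSafetyFilterLevel.BLOCK_ONLY_HIGH));
    ```

    Because the value and the type share a name, call sites can use `ImagenSafetyFilterLevel.BLOCK_ONLY_HIGH` as a value while annotations use `ImagenSafetyFilterLevel` as a type, which is why the SDK exports both under one identifier.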