React Native Firebase

    Variable ImagenSafetyFilterLevel Const Beta

    ImagenSafetyFilterLevel: {
        BLOCK_LOW_AND_ABOVE: "block_low_and_above";
        BLOCK_MEDIUM_AND_ABOVE: "block_medium_and_above";
        BLOCK_NONE: "block_none";
        BLOCK_ONLY_HIGH: "block_only_high";
    } = ...

    A filter level controlling how aggressively to filter sensitive content.

    Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details.

    Type Declaration

    • Readonly BLOCK_LOW_AND_ABOVE: "block_low_and_above"

      The most aggressive filtering level; applies the strictest blocking.

    • Readonly BLOCK_MEDIUM_AND_ABOVE: "block_medium_and_above"

      Blocks some sensitive prompts and responses.

    • Readonly BLOCK_NONE: "block_none"

      The least aggressive filtering level; blocks very few sensitive prompts and responses.

      Access to this feature is restricted and may require your case to be reviewed and approved by Cloud support.

    • Readonly BLOCK_ONLY_HIGH: "block_only_high"

      Blocks few sensitive prompts and responses.
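
    The declaration above is a frozen map of string literals. A minimal sketch of how such a value is selected and passed along as a safety setting; the local redeclaration stands in for the SDK export, and the `safetySettings` shape shown here is an assumption for illustration, not the confirmed React Native Firebase API:

    ```typescript
    // Local mirror of the const object documented above (assumption: the SDK
    // export has the same shape). `as const` narrows each value to its
    // string-literal type.
    const ImagenSafetyFilterLevel = {
      BLOCK_LOW_AND_ABOVE: "block_low_and_above",
      BLOCK_MEDIUM_AND_ABOVE: "block_medium_and_above",
      BLOCK_NONE: "block_none",
      BLOCK_ONLY_HIGH: "block_only_high",
    } as const;

    // Union of the four literal values, derived from the const object.
    type ImagenSafetyFilterLevel =
      (typeof ImagenSafetyFilterLevel)[keyof typeof ImagenSafetyFilterLevel];

    // Hypothetical settings object: the values are plain strings, so they
    // serialize directly into a request payload.
    const safetySettings: { safetyFilterLevel: ImagenSafetyFilterLevel } = {
      safetyFilterLevel: ImagenSafetyFilterLevel.BLOCK_MEDIUM_AND_ABOVE,
    };

    console.log(safetySettings.safetyFilterLevel); // "block_medium_and_above"
    ```

    Because the const object and the derived union type share one name, call sites can use `ImagenSafetyFilterLevel` both as a value (to pick a level) and as a type annotation.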