CountTokensResponse

Response from calling GenerativeModel.countTokens().

promptTokensDetails (Optional)
  The breakdown, by modality, of how many tokens are consumed by the prompt.

totalBillableCharacters (Optional)
  Deprecated. Use totalTokens instead. This property is undefined when using models greater than gemini-1.5-*.
  The total number of billable characters counted across all instances from the request.
  This property is only supported when using the Vertex AI Gemini API (VertexAIBackend). When using the Gemini Developer API (GoogleAIBackend), this property is not supported and will default to 0.

totalTokens
  The total number of tokens counted across all instances from the request.
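The response shape described above can be sketched as a local TypeScript mirror. The interface fields follow the documented properties; `ModalityTokenCount`'s exact fields and the `summarize` helper are assumptions for illustration, not part of the SDK.

```typescript
// Minimal local mirror of the documented response shape (a sketch,
// not the SDK's own type declarations).
interface ModalityTokenCount {
  modality: string; // e.g. "TEXT" or "IMAGE" (assumed field names)
  tokenCount: number;
}

interface CountTokensResponse {
  // Total tokens counted across all instances from the request.
  totalTokens: number;
  // Deprecated: undefined on models greater than gemini-1.5-*,
  // and 0 when using the Gemini Developer API (GoogleAIBackend).
  totalBillableCharacters?: number;
  // Per-modality breakdown of prompt token consumption.
  promptTokensDetails?: ModalityTokenCount[];
}

// Hypothetical helper: prefer totalTokens (totalBillableCharacters is
// deprecated) and append the optional per-modality breakdown if present.
function summarize(resp: CountTokensResponse): string {
  const perModality = (resp.promptTokensDetails ?? [])
    .map((d) => `${d.modality}=${d.tokenCount}`)
    .join(", ");
  return `total=${resp.totalTokens}${perModality ? ` (${perModality})` : ""}`;
}

const resp: CountTokensResponse = {
  totalTokens: 42,
  promptTokensDetails: [{ modality: "TEXT", tokenCount: 42 }],
};
console.log(summarize(resp)); // → "total=42 (TEXT=42)"
```

In real code the response would come from `model.countTokens(...)`; reading `totalTokens` keeps the call working regardless of backend, since `totalBillableCharacters` may be undefined or 0.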