frequencyPenalty (Optional, Beta): Frequency penalties.
inputAudioTranscription (Optional, Beta): Enables transcription of audio input.
When enabled, the model will respond with transcriptions of your audio input in the inputTranscription property
in LiveServerContent messages. Note that the transcriptions are broken up across
messages, so you may only receive small amounts of text per message. For example, if you ask the model
"How are you today?", the model may transcribe that input across three messages, broken up as "How a", "re yo", "u today?".
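Because transcriptions arrive in fragments, a client typically concatenates them. A minimal sketch, assuming each received message exposes a transcription object with a text field (the chunk values below mirror the example above):

```typescript
// Illustrative client-side accumulation of partial transcription chunks.
// The `text` field name follows the transcription shape described above;
// this is not the SDK's internal code.
interface TranscriptionChunk {
  text?: string;
}

function accumulateTranscription(chunks: TranscriptionChunk[]): string {
  // Join fragments in arrival order; missing text fields contribute nothing.
  return chunks.map((c) => c.text ?? "").join("");
}

const received: TranscriptionChunk[] = [
  { text: "How a" },
  { text: "re yo" },
  { text: "u today?" },
];
const fullText = accumulateTranscription(received);
```

Note that fragment boundaries fall mid-word, so the chunks must be joined without inserting separators.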
maxOutputTokens (Optional, Beta): Specifies the maximum number of tokens that can be generated in the response. The number of tokens per word varies depending on the output language. Unbounded by default.
outputAudioTranscription (Optional, Beta): Enables transcription of audio output.
When enabled, the model will respond with transcriptions of its audio output in the outputTranscription property
in LiveServerContent messages. Note that the transcriptions are broken up across
messages, so you may only receive small amounts of text per message. For example, if the model says
"How are you today?", the model may transcribe that output across three messages, broken up as "How a", "re yo", "u today?".
presencePenalty (Optional, Beta): Presence penalties.
responseModalities (Optional, Beta): The modalities of the response.
speechConfig (Optional, Beta): Configuration for speech synthesis.
temperature (Optional, Beta): Controls the degree of randomness in token selection. A temperature value of 0 means that the highest
probability tokens are always selected. In this case, responses for a given prompt are mostly
deterministic, but a small amount of variation is still possible.
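The effect of temperature can be illustrated by scaling raw token scores before normalizing them into probabilities. This is a sketch of the standard technique, not the SDK's internal implementation:

```typescript
// Temperature scaling over raw scores (logits): lower temperature sharpens
// the distribution toward the most probable token, higher temperature
// flattens it toward uniform. Illustrative only.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  // Guard against division by zero for temperature -> 0.
  const scaled = logits.map((l) => l / Math.max(temperature, 1e-6));
  // Subtract the max for numerical stability before exponentiating.
  const max = Math.max(...scaled);
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const sharp = softmaxWithTemperature([2, 1, 0], 0.1); // nearly one-hot
const flat = softmaxWithTemperature([2, 1, 0], 1000); // nearly uniform
```

At low temperature the top-scoring token dominates, which is why responses become mostly deterministic as temperature approaches 0.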
topK (Optional, Beta): Changes how the model selects tokens for output. A topK value of 1 means the selected token is
the most probable among all tokens in the model's vocabulary, while a topK value of 3 means that
the next token is selected from among the 3 most probable using temperature sampling. Tokens
are then further filtered based on topP, with the final token selected using temperature sampling. Defaults to 40
if unspecified.
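Top-K filtering itself is straightforward: keep only the K highest-probability tokens and renormalize over them before sampling. A sketch of the technique (not SDK internals):

```typescript
// Keep the k most probable tokens, then renormalize their probabilities
// so they sum to 1 for the subsequent sampling step. Illustrative only.
function topKFilter(
  probs: Record<string, number>,
  k: number
): Record<string, number> {
  const kept = Object.entries(probs)
    .sort((a, b) => b[1] - a[1]) // most probable first
    .slice(0, k);
  const total = kept.reduce((s, [, p]) => s + p, 0);
  return Object.fromEntries(kept.map(([token, p]) => [token, p / total]));
}

// With topK = 2, only the two most probable tokens remain as candidates.
const filtered = topKFilter({ a: 0.5, b: 0.3, c: 0.2 }, 2);
```

With topK = 1 the surviving token gets probability 1, matching the deterministic case described above.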
topP (Optional, Beta): Changes how the model selects tokens for output. Tokens are
selected from the most to least probable until the sum of their probabilities equals the topP
value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 respectively
and the topP value is 0.5, then the model will select either A or B as the next token by using
the temperature and exclude C as a candidate. Defaults to 0.95 if unset.
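The worked example above can be expressed as a short top-P (nucleus) filtering sketch; this illustrates the selection rule, not the SDK's internal code:

```typescript
// Take tokens from most to least probable until the cumulative probability
// reaches topP; everything after the cutoff is excluded as a candidate.
function topPFilter(probs: [string, number][], topP: number): string[] {
  const sorted = [...probs].sort((a, b) => b[1] - a[1]);
  const kept: string[] = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    if (cumulative >= topP) break; // cutoff reached: exclude the rest
    kept.push(token);
    cumulative += p;
  }
  return kept;
}

// A=0.3, B=0.2, C=0.1 with topP=0.5: A and B are kept (0.3 + 0.2 = 0.5),
// so C is excluded, matching the example above.
const candidates = topPFilter([["A", 0.3], ["B", 0.2], ["C", 0.1]], 0.5);
```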
Configuration parameters used by LiveGenerativeModel to control live content generation.
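A configuration object combining the fields above might look as follows. Field names follow this reference; the specific values, and the response modality string, are illustrative assumptions rather than required settings:

```typescript
// Sketch of a live generation config using the fields documented above.
// The "AUDIO" modality value and the empty transcription objects are
// assumptions for illustration.
const liveGenerationConfig = {
  frequencyPenalty: 0,
  presencePenalty: 0,
  maxOutputTokens: 1024, // unbounded if omitted
  temperature: 0.7,
  topK: 40, // the documented default
  topP: 0.95, // the documented default
  responseModalities: ["AUDIO"],
  inputAudioTranscription: {}, // enable input transcription
  outputAudioTranscription: {}, // enable output transcription
};
```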