Class RecognitionConfig
Provides information to the recognizer that specifies how to process the request.
Implements
IDirectResponseSchema
Namespace: Google.Apis.Speech.v1p1beta1.Data
Assembly: Google.Apis.Speech.v1p1beta1.dll
Syntax
public class RecognitionConfig : IDirectResponseSchema
Properties
Adaptation
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the speech adaptation documentation. When speech adaptation is set it supersedes the speech_contexts field.
Declaration
[JsonProperty("adaptation")]
public virtual SpeechAdaptation Adaptation { get; set; }
Property Value
Type | Description |
---|---|
SpeechAdaptation |
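A minimal sketch of attaching adaptation to a config; the phrase set resource name is a hypothetical placeholder, and PhraseSetReferences is one of the fields the SpeechAdaptation schema exposes:
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    Adaptation = new SpeechAdaptation
    {
        // Hypothetical phrase set resource name; replace with your own.
        PhraseSetReferences = new[]
        {
            "projects/my-project/locations/global/phraseSets/my-phrase-set"
        }
    }
};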
AlternativeLanguageCodes
A list of up to 3 additional BCP-47 language tags, listing possible alternative languages of the supplied audio. See Language Support for a list of the currently supported language codes. If alternative languages are listed, the recognition result will contain recognition in the most likely language detected, including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
Declaration
[JsonProperty("alternativeLanguageCodes")]
public virtual IList<string> AlternativeLanguageCodes { get; set; }
Property Value
Type | Description |
---|---|
IList<string> |
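For example, audio that is mostly en-US but may contain Spanish or French could be configured as follows (a minimal sketch):
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    // Up to 3 additional BCP-47 tags the audio might be in.
    AlternativeLanguageCodes = new[] { "es-ES", "fr-FR" }
};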
AudioChannelCount
The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16, OGG_OPUS and FLAC are 1-8. Valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is only 1. If 0 or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel set enable_separate_recognition_per_channel to 'true'.
Declaration
[JsonProperty("audioChannelCount")]
public virtual int? AudioChannelCount { get; set; }
Property Value
Type | Description |
---|---|
int? |
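A sketch for stereo audio; both settings below are needed for both channels to be recognized:
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    AudioChannelCount = 2,
    // Without this, only the first channel is recognized.
    EnableSeparateRecognitionPerChannel = true
};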
DiarizationConfig
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
Declaration
[JsonProperty("diarizationConfig")]
public virtual SpeakerDiarizationConfig DiarizationConfig { get; set; }
Property Value
Type | Description |
---|---|
SpeakerDiarizationConfig |
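A minimal sketch enabling diarization for a conversation expected to have two to four speakers; the MinSpeakerCount/MaxSpeakerCount fields follow the SpeakerDiarizationConfig schema in this package:
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    DiarizationConfig = new SpeakerDiarizationConfig
    {
        EnableSpeakerDiarization = true,
        MinSpeakerCount = 2,   // Assumed range for this conversation.
        MaxSpeakerCount = 4
    }
};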
DiarizationSpeakerCount
If set, specifies the estimated number of speakers in the conversation. Defaults to '2'. Ignored unless enable_speaker_diarization is set to true. Note: Use diarization_config instead.
Declaration
[JsonProperty("diarizationSpeakerCount")]
public virtual int? DiarizationSpeakerCount { get; set; }
Property Value
Type | Description |
---|---|
int? |
ETag
The ETag of the item.
Declaration
public virtual string ETag { get; set; }
Property Value
Type | Description |
---|---|
string |
EnableAutomaticPunctuation
If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses.
Declaration
[JsonProperty("enableAutomaticPunctuation")]
public virtual bool? EnableAutomaticPunctuation { get; set; }
Property Value
Type | Description |
---|---|
bool? |
EnableSeparateRecognitionPerChannel
This needs to be set to true explicitly and audio_channel_count > 1 to get each channel recognized separately. The recognition result will contain a channel_tag field to state which channel that result belongs to. If this is not true, we will only recognize the first channel. The request is billed cumulatively for all channels recognized: audio_channel_count multiplied by the length of the audio.
Declaration
[JsonProperty("enableSeparateRecognitionPerChannel")]
public virtual bool? EnableSeparateRecognitionPerChannel { get; set; }
Property Value
Type | Description |
---|---|
bool? |
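Once separate recognition is enabled, each result reports its channel via ChannelTag on SpeechRecognitionResult; a sketch of reading it, assuming a RecognizeResponse named response:
foreach (var result in response.Results)
{
    // ChannelTag identifies which input channel produced this result.
    Console.WriteLine($"Channel {result.ChannelTag}: {result.Alternatives[0].Transcript}");
}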
EnableSpeakerDiarization
If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result using a speaker_label provided in the WordInfo. Note: Use diarization_config instead.
Declaration
[JsonProperty("enableSpeakerDiarization")]
public virtual bool? EnableSpeakerDiarization { get; set; }
Property Value
Type | Description |
---|---|
bool? |
EnableSpokenEmojis
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
Declaration
[JsonProperty("enableSpokenEmojis")]
public virtual bool? EnableSpokenEmojis { get; set; }
Property Value
Type | Description |
---|---|
bool? |
EnableSpokenPunctuation
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
Declaration
[JsonProperty("enableSpokenPunctuation")]
public virtual bool? EnableSpokenPunctuation { get; set; }
Property Value
Type | Description |
---|---|
bool? |
EnableWordConfidence
If true, the top result includes a list of words and the confidence for those words. If false, no word-level confidence information is returned. The default is false.
Declaration
[JsonProperty("enableWordConfidence")]
public virtual bool? EnableWordConfidence { get; set; }
Property Value
Type | Description |
---|---|
bool? |
EnableWordTimeOffsets
If true, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If false, no word-level time offset information is returned. The default is false.
Declaration
[JsonProperty("enableWordTimeOffsets")]
public virtual bool? EnableWordTimeOffsets { get; set; }
Property Value
Type | Description |
---|---|
bool? |
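When this (and optionally EnableWordConfidence) is true, the Words list on the top alternative is populated; a sketch of reading WordInfo fields, assuming a RecognizeResponse named response:
foreach (var word in response.Results[0].Alternatives[0].Words)
{
    // StartTime/EndTime are durations such as "1.300s"; Confidence is
    // only populated when EnableWordConfidence is set.
    Console.WriteLine($"{word.Word}: {word.StartTime}-{word.EndTime}");
}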
Encoding
Encoding of audio data sent in all RecognitionAudio messages. This field is optional for FLAC and WAV audio files and required for all other audio formats. For details, see AudioEncoding.
Declaration
[JsonProperty("encoding")]
public virtual string Encoding { get; set; }
Property Value
Type | Description |
---|---|
string |
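Encoding is passed as the string name of an AudioEncoding value; a minimal sketch for raw 16-bit linear PCM at 16 kHz:
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    Encoding = "LINEAR16",   // String name of an AudioEncoding value.
    SampleRateHertz = 16000
};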
LanguageCode
Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.
Declaration
[JsonProperty("languageCode")]
public virtual string LanguageCode { get; set; }
Property Value
Type | Description |
---|---|
string |
MaxAlternatives
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one.
Declaration
[JsonProperty("maxAlternatives")]
public virtual int? MaxAlternatives { get; set; }
Property Value
Type | Description |
---|---|
int? |
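For example, to request up to three hypotheses per result and inspect them (a sketch, assuming a RecognizeResponse named response):
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    MaxAlternatives = 3
};
// Each result may then carry up to three alternatives:
foreach (var alt in response.Results[0].Alternatives)
{
    Console.WriteLine($"{alt.Confidence}: {alt.Transcript}");
}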
Metadata
Metadata regarding this request.
Declaration
[JsonProperty("metadata")]
public virtual RecognitionMetadata Metadata { get; set; }
Property Value
Type | Description |
---|---|
RecognitionMetadata |
Model
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.
Model | Description |
---|---|
latest_long | Best for long form content like media or conversation. |
latest_short | Best for short form content like commands or single shot directed speech. |
command_and_search | Best for short queries such as voice commands or voice search. |
phone_call | Best for audio that originated from a phone call (typically recorded at an 8khz sampling rate). |
video | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16khz or greater sampling rate. This is a premium model that costs more than the standard rate. |
default | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16khz or greater sampling rate. |
medical_conversation | Best for audio that originated from a conversation between a medical provider and patient. |
medical_dictation | Best for audio that originated from dictation notes by a medical provider. |
Declaration
[JsonProperty("model")]
public virtual string Model { get; set; }
Property Value
Type | Description |
---|---|
string |
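A minimal sketch selecting the phone_call model together with its enhanced variant (see UseEnhanced below):
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    Model = "phone_call",
    UseEnhanced = true   // Prefer the enhanced phone_call model if available.
};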
ProfanityFilter
If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.
Declaration
[JsonProperty("profanityFilter")]
public virtual bool? ProfanityFilter { get; set; }
Property Value
Type | Description |
---|---|
bool? |
SampleRateHertz
Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see AudioEncoding.
Declaration
[JsonProperty("sampleRateHertz")]
public virtual int? SampleRateHertz { get; set; }
Property Value
Type | Description |
---|---|
int? |
SpeechContexts
Array of SpeechContext. A means to provide context to assist the speech recognition. For more information, see speech adaptation.
Declaration
[JsonProperty("speechContexts")]
public virtual IList<SpeechContext> SpeechContexts { get; set; }
Property Value
Type | Description |
---|---|
IList<SpeechContext> |
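A sketch biasing recognition toward domain-specific phrases; the Boost value is a hypothetical choice to tune per application:
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    SpeechContexts = new[]
    {
        new SpeechContext
        {
            Phrases = new[] { "diarization", "RecognitionConfig" },
            Boost = 10.0f   // Hypothetical strength; tune for your audio.
        }
    }
};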
TranscriptNormalization
Optional. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
Declaration
[JsonProperty("transcriptNormalization")]
public virtual TranscriptNormalization TranscriptNormalization { get; set; }
Property Value
Type | Description |
---|---|
TranscriptNormalization |
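A minimal sketch that rewrites one spoken phrase in the transcript; the Entry class and its Search/Replace/CaseSensitive fields are assumed to follow the TranscriptNormalization schema generated in this package:
var config = new RecognitionConfig
{
    LanguageCode = "en-US",
    TranscriptNormalization = new TranscriptNormalization
    {
        Entries = new[]
        {
            new Entry { Search = "cee sharp", Replace = "C#", CaseSensitive = false }
        }
    }
};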
UseEnhanced
Set to true to use an enhanced model for speech recognition. If use_enhanced is set to true and the model field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio. If use_enhanced is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
Declaration
[JsonProperty("useEnhanced")]
public virtual bool? UseEnhanced { get; set; }
Property Value
Type | Description |
---|---|
bool? |
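Putting the pieces together, a minimal end-to-end sketch of a synchronous recognize call; the API key and Cloud Storage URI are placeholders, and the request shape follows the generated RecognizeRequest/RecognitionAudio classes in this package:
using Google.Apis.Services;
using Google.Apis.Speech.v1p1beta1;
using Google.Apis.Speech.v1p1beta1.Data;

var service = new SpeechService(new BaseClientService.Initializer
{
    ApiKey = "YOUR_API_KEY"   // Placeholder; OAuth credentials also work.
});

var request = new RecognizeRequest
{
    Config = new RecognitionConfig
    {
        LanguageCode = "en-US",
        Encoding = "LINEAR16",
        SampleRateHertz = 16000
    },
    // Placeholder URI; Content (base64 audio bytes) may be used instead.
    Audio = new RecognitionAudio { Uri = "gs://my-bucket/audio.raw" }
};

var response = service.Speech.Recognize(request).Execute();
Console.WriteLine(response.Results[0].Alternatives[0].Transcript);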