Class GoogleCloudDialogflowV2InputAudioConfig
Instructs the speech recognizer how to process the audio content.
Implements
IDirectResponseSchema
Namespace: Google.Apis.Dialogflow.v2.Data
Assembly: Google.Apis.Dialogflow.v2.dll
Syntax
public class GoogleCloudDialogflowV2InputAudioConfig : IDirectResponseSchema
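Example
The class is a plain data model, so a request typically just constructs it and sets the required fields. A minimal sketch, assuming the usual object-initializer style; the encoding string is an illustrative value taken from the REST API's AudioEncoding enum, not mandated by this page:
```csharp
using Google.Apis.Dialogflow.v2.Data;

// Minimal input audio configuration. AudioEncoding, LanguageCode and
// SampleRateHertz are the properties this page marks as Required.
var audioConfig = new GoogleCloudDialogflowV2InputAudioConfig
{
    AudioEncoding = "AUDIO_ENCODING_LINEAR_16", // assumed enum string for 16-bit linear PCM
    SampleRateHertz = 16000,                    // must match the audio actually sent
    LanguageCode = "en-US",
    EnableWordInfo = true,                      // ask for word-level start/end offsets
    EnableAutomaticPunctuation = true
};
```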
Properties
AudioEncoding
Required. Audio encoding of the audio content to process.
Declaration
[JsonProperty("audioEncoding")]
public virtual string AudioEncoding { get; set; }
Property Value
Type | Description |
---|---|
string |
DisableNoSpeechRecognizedEvent
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, the NO_SPEECH_RECOGNIZED event is triggered and sent to the Dialogflow agent.
Declaration
[JsonProperty("disableNoSpeechRecognizedEvent")]
public virtual bool? DisableNoSpeechRecognizedEvent { get; set; }
Property Value
Type | Description |
---|---|
bool? |
ETag
The ETag of the item.
Declaration
public virtual string ETag { get; set; }
Property Value
Type | Description |
---|---|
string |
EnableAutomaticPunctuation
Enable automatic punctuation option at the speech backend.
Declaration
[JsonProperty("enableAutomaticPunctuation")]
public virtual bool? EnableAutomaticPunctuation { get; set; }
Property Value
Type | Description |
---|---|
bool? |
EnableWordInfo
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
Declaration
[JsonProperty("enableWordInfo")]
public virtual bool? EnableWordInfo { get; set; }
Property Value
Type | Description |
---|---|
bool? |
LanguageCode
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
Declaration
[JsonProperty("languageCode")]
public virtual string LanguageCode { get; set; }
Property Value
Type | Description |
---|---|
string |
Model
Optional. Which Speech model to select for the given request. For more information, see Speech models.
Declaration
[JsonProperty("model")]
public virtual string Model { get; set; }
Property Value
Type | Description |
---|---|
string |
ModelVariant
Which variant of the Speech model to use.
Declaration
[JsonProperty("modelVariant")]
public virtual string ModelVariant { get; set; }
Property Value
Type | Description |
---|---|
string |
OptOutConformerModelMigration
If true, the request opts out of the STT conformer model migration. This field will be deprecated once the forced migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration.
Declaration
[JsonProperty("optOutConformerModelMigration")]
public virtual bool? OptOutConformerModelMigration { get; set; }
Property Value
Type | Description |
---|---|
bool? |
PhraseHints
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated; please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
Declaration
[JsonProperty("phraseHints")]
public virtual IList<string> PhraseHints { get; set; }
Property Value
Type | Description |
---|---|
IList<string> |
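Example
A sketch of the migration described above, continuing the audioConfig sketch from the Syntax example (requires using System.Collections.Generic;); the phrase strings are illustrative, and the SpeechContext type is the one documented under the SpeechContexts property below:
```csharp
// Deprecated style: bare phrase hints.
audioConfig.PhraseHints = new List<string> { "Dialogflow", "webhook" };

// Preferred style: the same phrases wrapped in a single SpeechContext.
audioConfig.SpeechContexts = new List<GoogleCloudDialogflowV2SpeechContext>
{
    new GoogleCloudDialogflowV2SpeechContext
    {
        Phrases = new List<string> { "Dialogflow", "webhook" }
    }
};
audioConfig.PhraseHints = null; // avoid sending both fields
```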
PhraseSets
A collection of phrase set resources to use for speech adaptation.
Declaration
[JsonProperty("phraseSets")]
public virtual IList<string> PhraseSets { get; set; }
Property Value
Type | Description |
---|---|
IList<string> |
SampleRateHertz
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
Declaration
[JsonProperty("sampleRateHertz")]
public virtual int? SampleRateHertz { get; set; }
Property Value
Type | Description |
---|---|
int? |
SingleUtterance
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
Declaration
[JsonProperty("singleUtterance")]
public virtual bool? SingleUtterance { get; set; }
Property Value
Type | Description |
---|---|
bool? |
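Example
A sketch of enabling single-utterance mode for a streaming request, continuing the audioConfig sketch above and assuming the sibling GoogleCloudDialogflowV2QueryInput data model is what carries the config in the request:
```csharp
// Stop recognition after the first detected utterance instead of waiting
// for the client to close the stream (streaming methods only); overrides
// StreamingDetectIntentRequest.single_utterance when both are set.
audioConfig.SingleUtterance = true;

// The audio config travels inside the query input of the request.
var queryInput = new GoogleCloudDialogflowV2QueryInput
{
    AudioConfig = audioConfig
};
```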
SpeechContexts
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
Declaration
[JsonProperty("speechContexts")]
public virtual IList<GoogleCloudDialogflowV2SpeechContext> SpeechContexts { get; set; }
Property Value
Type | Description |
---|---|
IList<GoogleCloudDialogflowV2SpeechContext> |
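Example
A sketch of speech adaptation through a boosted context, assuming GoogleCloudDialogflowV2SpeechContext mirrors the REST SpeechContext message with a Phrases list and an optional Boost; the phrases and boost value are illustrative:
```csharp
audioConfig.SpeechContexts = new List<GoogleCloudDialogflowV2SpeechContext>
{
    new GoogleCloudDialogflowV2SpeechContext
    {
        Phrases = new List<string> { "order number", "tracking number" },
        Boost = 10.0f // assumed optional hint strength, as in SpeechContext.boost
    }
};
```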