Class: Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dialogflow_v3/classes.rb,
lib/google/apis/dialogflow_v3/representations.rb

Overview

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance. While end-user audio is being processed, Dialogflow sends a series of results. Each result may contain a transcript value. A transcript represents a portion of the utterance. While the recognizer is processing audio, transcript values may be interim values or finalized values. Once a transcript is finalized, the is_final value is set to true and processing continues for the next transcript. If StreamingDetectIntentRequest.query_input.audio_config.single_utterance was true, and the recognizer has completed processing audio, the message_type value is set to END_OF_SINGLE_UTTERANCE and the following (last) result contains the last finalized transcript. The complete end-user utterance is determined by concatenating the finalized transcript values received for the series of results. In the following example, single utterance is enabled. In the case where single utterance is not enabled, result 7 would not occur.

Num | transcript               | message_type            | is_final
--- | ------------------------ | ----------------------- | --------
 1  | "tube"                   | TRANSCRIPT              | false
 2  | "to be a"                | TRANSCRIPT              | false
 3  | "to be"                  | TRANSCRIPT              | false
 4  | "to be or not to be"     | TRANSCRIPT              | true
 5  | "that's"                 | TRANSCRIPT              | false
 6  | "that is"                | TRANSCRIPT              | false
 7  | unset                    | END_OF_SINGLE_UTTERANCE | unset
 8  | " that is the question"  | TRANSCRIPT              | true

Concatenating the finalized transcripts with is_final set to true, the complete utterance becomes "to be or not to be that is the question".
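
The following is a minimal sketch (not part of the generated client) of how the complete utterance could be assembled from a series of streaming results. The results array is a hypothetical stand-in for results received over a streaming detect-intent session, mirroring rows 4, 7 and 8 of the table above.

require "google/apis/dialogflow_v3"

result_class = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult

# Illustrative results; in practice these arrive one at a time from the stream.
results = [
  result_class.new(transcript: "to be or not to be", message_type: "TRANSCRIPT", is_final: true),
  result_class.new(message_type: "END_OF_SINGLE_UTTERANCE"),
  result_class.new(transcript: " that is the question", message_type: "TRANSCRIPT", is_final: true)
]

# Only finalized TRANSCRIPT results contribute to the complete utterance.
utterance = results
            .select { |r| r.message_type == "TRANSCRIPT" && r.is_final? }
            .map(&:transcript)
            .join
# => "to be or not to be that is the question"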

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudDialogflowV2StreamingRecognitionResult

Returns a new instance of GoogleCloudDialogflowV2StreamingRecognitionResult.



# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15812

def initialize(**args)
   update!(**args)
end
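
A minimal usage sketch (the attribute values are illustrative, not taken from a real session). Keyword arguments are forwarded to #update!, so any of the documented attributes can be set at construction time.

require "google/apis/dialogflow_v3"

result = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult.new(
  transcript: "to be or not to be",
  message_type: "TRANSCRIPT",
  is_final: true
)
result.transcript # => "to be or not to be"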

Instance Attribute Details

#confidence ⇒ Float

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true and you should not rely on it being accurate or even set. Corresponds to the JSON property confidence

Returns:

  • (Float)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15772

def confidence
  @confidence
end
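
Because 0.0 doubles as the "not set" sentinel, callers that want to distinguish a missing confidence from a reported one need an explicit guard. A sketch with an illustrative result:

require "google/apis/dialogflow_v3"

result = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult.new(
  message_type: "TRANSCRIPT",
  is_final: true,
  confidence: 0.0
)

# Treat nil or 0.0 as "no confidence reported", not as zero confidence.
if result.confidence.nil? || result.confidence.zero?
  puts "No confidence reported for this result"
else
  puts format("Confidence: %.2f", result.confidence)
end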

#is_final ⇒ Boolean Also known as: is_final?

If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT. Corresponds to the JSON property isFinal

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15780

def is_final
  @is_final
end
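
Since the value is also exposed through the #is_final? predicate, interim and finalized results can be separated with an ordinary conditional. A sketch with illustrative values:

require "google/apis/dialogflow_v3"

result = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult.new(
  transcript: "to be a",
  message_type: "TRANSCRIPT",
  is_final: false
)

if result.is_final?
  puts "Finalized transcript: #{result.transcript}"
else
  puts "Interim transcript (may still change): #{result.transcript}"
end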

#language_code ⇒ String

Detected language code for the transcript. Corresponds to the JSON property languageCode

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15786

def language_code
  @language_code
end

#message_type ⇒ String

Type of the result message. Corresponds to the JSON property messageType

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15791

def message_type
  @message_type
end

#speech_end_offset ⇒ String

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT. Corresponds to the JSON property speechEndOffset

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15797

def speech_end_offset
  @speech_end_offset
end

#speech_word_info ⇒ Array&lt;Google::Apis::DialogflowV3::GoogleCloudDialogflowV2SpeechWordInfo&gt;

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set. Corresponds to the JSON property speechWordInfo

Returns:

  • (Array&lt;Google::Apis::DialogflowV3::GoogleCloudDialogflowV2SpeechWordInfo&gt;)

# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15804

def speech_word_info
  @speech_word_info
end
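
A sketch of iterating the word-level results; the values are illustrative, and the example assumes the word attribute documented on GoogleCloudDialogflowV2SpeechWordInfo. The field is nil unless word info was requested and returned, so the Array() guard is the usual pattern.

require "google/apis/dialogflow_v3"

word_info_class = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2SpeechWordInfo

result = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult.new(
  transcript: "to be",
  message_type: "TRANSCRIPT",
  is_final: true,
  speech_word_info: [word_info_class.new(word: "to"), word_info_class.new(word: "be")]
)

# Array(nil) => [], so this loop is safe even when no word info was returned.
Array(result.speech_word_info).each do |info|
  puts info.word
end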

#transcript ⇒ String

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT. Corresponds to the JSON property transcript

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15810

def transcript
  @transcript
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15817

def update!(**args)
  @confidence = args[:confidence] if args.key?(:confidence)
  @is_final = args[:is_final] if args.key?(:is_final)
  @language_code = args[:language_code] if args.key?(:language_code)
  @message_type = args[:message_type] if args.key?(:message_type)
  @speech_end_offset = args[:speech_end_offset] if args.key?(:speech_end_offset)
  @speech_word_info = args[:speech_word_info] if args.key?(:speech_word_info)
  @transcript = args[:transcript] if args.key?(:transcript)
end
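
A minimal sketch of updating an existing instance in place (illustrative values). Only the keys present in args are assigned, so attributes that are not mentioned keep their current values.

require "google/apis/dialogflow_v3"

result = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult.new(
  transcript: "to be",
  is_final: false
)

# Only the attributes passed here are reassigned; the others keep their values.
result.update!(transcript: "to be or not to be", is_final: true)
result.is_final? # => true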