Class: Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult

Inherits: Object
Includes: Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dialogflow_v3/classes.rb,
lib/google/apis/dialogflow_v3/representations.rb

Overview

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance. While end-user audio is being processed, Dialogflow sends a series of results. Each result may contain a transcript value. A transcript represents a portion of the utterance. While the recognizer is processing audio, transcript values may be interim values or finalized values. Once a transcript is finalized, the is_final value is set to true and processing continues for the next transcript. If StreamingDetectIntentRequest.query_input.audio_config.single_utterance was true and the recognizer has completed processing audio, the message_type value is set to END_OF_SINGLE_UTTERANCE and the following (last) result contains the last finalized transcript. The complete end-user utterance is determined by concatenating the finalized transcript values received for the series of results.

In the following example, single utterance is enabled. If single utterance were not enabled, result 7 would not occur.

Num | transcript               | message_type            | is_final
--- | ------------------------ | ----------------------- | --------
1   | "tube"                   | TRANSCRIPT              | false
2   | "to be a"                | TRANSCRIPT              | false
3   | "to be"                  | TRANSCRIPT              | false
4   | "to be or not to be"     | TRANSCRIPT              | true
5   | "that's"                 | TRANSCRIPT              | false
6   | "that is"                | TRANSCRIPT              | false
7   | unset                    | END_OF_SINGLE_UTTERANCE | unset
8   | " that is the question"  | TRANSCRIPT              | true

Concatenating the transcripts with is_final set to true, the complete utterance becomes "to be or not to be that is the question".
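A minimal sketch of that concatenation rule, assuming `results` is an ordered Array of these recognition result objects that has already been collected from a streaming call (the call itself is not shown):

# Assumes `results` is an ordered Array of
# GoogleCloudDialogflowV2StreamingRecognitionResult objects.
def complete_utterance(results)
  results
    .select { |r| r.message_type == 'TRANSCRIPT' && r.is_final? }
    .map(&:transcript)
    .join
end

With the results from the table above, this returns "to be or not to be that is the question".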

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudDialogflowV2StreamingRecognitionResult

Returns a new instance of GoogleCloudDialogflowV2StreamingRecognitionResult.



# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15828

def initialize(**args)
   update!(**args)
end
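For illustration only: a result can be built locally (for example in a test) by passing the snake_case attribute names documented below as keyword arguments, which the constructor simply forwards to update!. The values here are made up.

result = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2StreamingRecognitionResult.new(
  message_type: 'TRANSCRIPT',
  transcript:   'to be or not to be',
  is_final:     true,
  confidence:   0.92  # example value, not from a real response
)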

Instance Attribute Details

#confidence ⇒ Float

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true and you should not rely on it being accurate or even set. Corresponds to the JSON property confidence

Returns:

  • (Float)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15788

def confidence
  @confidence
end
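Because 0.0 doubles as a "not set" sentinel and the field is only expected alongside final results, a defensive check such as the following sketch (with result assumed to be an instance of this class) avoids treating an absent confidence as a real score:

# Treat nil and 0.0 (the "not set" sentinel) as "no confidence available".
if result.is_final? && result.confidence.to_f > 0.0
  puts format('final transcript %p (confidence %.2f)', result.transcript, result.confidence)
end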

#is_final ⇒ Boolean Also known as: is_final?

If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT. Corresponds to the JSON property isFinal

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15796

def is_final
  @is_final
end

#language_code ⇒ String

Detected language code for the transcript. Corresponds to the JSON property languageCode

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15802

def language_code
  @language_code
end

#message_type ⇒ String

Type of the result message. Corresponds to the JSON property messageType

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15807

def message_type
  @message_type
end

#speech_end_offset ⇒ String

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT. Corresponds to the JSON property speechEndOffset

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15813

def speech_end_offset
  @speech_end_offset
end

#speech_word_info ⇒ Array&lt;Google::Apis::DialogflowV3::GoogleCloudDialogflowV2SpeechWordInfo&gt;

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set. Corresponds to the JSON property speechWordInfo

Returns:

  • (Array<Google::Apis::DialogflowV3::GoogleCloudDialogflowV2SpeechWordInfo>)

# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15820

def speech_word_info
  @speech_word_info
end
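A hedged sketch of consuming this field, assuming word info was requested via InputAudioConfig.enable_word_info and that each GoogleCloudDialogflowV2SpeechWordInfo element exposes word, start_offset, and end_offset accessors (check the generated class for the exact attribute names):

(result.speech_word_info || []).each do |info|
  # Offsets are durations relative to the start of the audio.
  puts "#{info.word}: #{info.start_offset} -> #{info.end_offset}"
end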

#transcript ⇒ String

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT. Corresponds to the JSON property transcript

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15826

def transcript
  @transcript
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dialogflow_v3/classes.rb', line 15833

def update!(**args)
  @confidence = args[:confidence] if args.key?(:confidence)
  @is_final = args[:is_final] if args.key?(:is_final)
  @language_code = args[:language_code] if args.key?(:language_code)
  @message_type = args[:message_type] if args.key?(:message_type)
  @speech_end_offset = args[:speech_end_offset] if args.key?(:speech_end_offset)
  @speech_word_info = args[:speech_word_info] if args.key?(:speech_word_info)
  @transcript = args[:transcript] if args.key?(:transcript)
end
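update! only assigns the keys you pass, leaving every other attribute untouched, so it can be used to merge newly received values into an existing object. A small illustrative example:

result.update!(transcript: 'to be or not to be', is_final: true)
result.transcript  # => "to be or not to be"
result.is_final?   # => true
# Attributes not named in the call (e.g. language_code) keep their previous values.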