`public interface SynthesizeSpeechResponseOrBuilder extends MessageOrBuilder`
Modifier and Type | Method and Description |
---|---|
`AudioConfig` | `getAudioConfig()` The audio metadata of `audio_content`. |
`AudioConfigOrBuilder` | `getAudioConfigOrBuilder()` The audio metadata of `audio_content`. |
`ByteString` | `getAudioContent()` The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). |
`Timepoint` | `getTimepoints(int index)` A link between a position in the original request input and a corresponding time in the output audio. |
`int` | `getTimepointsCount()` A link between a position in the original request input and a corresponding time in the output audio. |
`List<Timepoint>` | `getTimepointsList()` A link between a position in the original request input and a corresponding time in the output audio. |
`TimepointOrBuilder` | `getTimepointsOrBuilder(int index)` A link between a position in the original request input and a corresponding time in the output audio. |
`List<? extends TimepointOrBuilder>` | `getTimepointsOrBuilderList()` A link between a position in the original request input and a corresponding time in the output audio. |
`boolean` | `hasAudioConfig()` The audio metadata of `audio_content`. |
Methods inherited from interface `com.google.protobuf.MessageOrBuilder`: findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof

Methods inherited from interface `com.google.protobuf.MessageLiteOrBuilder`: isInitialized
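The message class `SynthesizeSpeechResponse` implements this interface, so a response returned by the client can be read through the accessors above. The sketch below shows one way such a response might be obtained; the request-building calls (SSML input with a `<mark>` tag, `SSML_MARK` time pointing via `addEnableTimePointing`) are drawn from the v1beta1 client and are assumptions for illustration, not part of this page.

```java
import com.google.cloud.texttospeech.v1beta1.AudioConfig;
import com.google.cloud.texttospeech.v1beta1.AudioEncoding;
import com.google.cloud.texttospeech.v1beta1.SsmlVoiceGender;
import com.google.cloud.texttospeech.v1beta1.SynthesisInput;
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechRequest;
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse;
import com.google.cloud.texttospeech.v1beta1.TextToSpeechClient;
import com.google.cloud.texttospeech.v1beta1.VoiceSelectionParams;

public class SynthesizeSketch {
  public static void main(String[] args) throws Exception {
    try (TextToSpeechClient client = TextToSpeechClient.create()) {
      SynthesizeSpeechRequest request =
          SynthesizeSpeechRequest.newBuilder()
              .setInput(SynthesisInput.newBuilder()
                  .setSsml("<speak>Hello <mark name=\"here\"/> world</speak>"))
              .setVoice(VoiceSelectionParams.newBuilder()
                  .setLanguageCode("en-US")
                  .setSsmlGender(SsmlVoiceGender.NEUTRAL))
              .setAudioConfig(AudioConfig.newBuilder()
                  .setAudioEncoding(AudioEncoding.MP3))
              // Ask for timepoints at SSML <mark> tags (v1beta1 only).
              .addEnableTimePointing(
                  SynthesizeSpeechRequest.TimepointType.SSML_MARK)
              .build();
      SynthesizeSpeechResponse response = client.synthesizeSpeech(request);
      // All accessors documented on this page are available on `response`.
      System.out.println(response.getTimepointsCount() + " timepoints");
    }
  }
}
```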
`ByteString getAudioContent()`

The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, we include the WAV header. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.

`bytes audio_content = 1;`
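Because the returned bytes already include the container or WAV header, they can be written to disk unmodified. A minimal sketch; the method and file path are illustrative, not part of the library:

```java
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class WriteAudio {
  /** Writes the synthesized audio to a file. No extra framing is needed:
   *  MP3/OGG_OPUS bytes include the container header, and LINEAR16 bytes
   *  include a WAV header. */
  static void writeAudio(SynthesizeSpeechResponse response, String path)
      throws IOException {
    try (OutputStream out = new FileOutputStream(path)) {
      out.write(response.getAudioContent().toByteArray());
    }
  }
}
```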
`List<Timepoint> getTimepointsList()`
`Timepoint getTimepoints(int index)`
`int getTimepointsCount()`
`List<? extends TimepointOrBuilder> getTimepointsOrBuilderList()`
`TimepointOrBuilder getTimepointsOrBuilder(int index)`

Each of these accessors reads the repeated `timepoints` field: a link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via `<mark>` tags in SSML input.

`repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;`
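A sketch of walking the repeated field through the indexed accessors. `getMarkName()` and `getTimeSeconds()` are the `Timepoint` accessors implied by its proto fields; they are assumed here rather than documented on this page:

```java
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse;
import com.google.cloud.texttospeech.v1beta1.Timepoint;

public class PrintTimepoints {
  /** Prints each SSML <mark> name and its offset in the output audio.
   *  The list is empty unless the request enabled SSML_MARK time pointing
   *  and the SSML input contained <mark> tags. */
  static void printTimepoints(SynthesizeSpeechResponse response) {
    for (int i = 0; i < response.getTimepointsCount(); i++) {
      Timepoint tp = response.getTimepoints(i);
      System.out.printf("mark %s at %.3f s%n",
          tp.getMarkName(), tp.getTimeSeconds());
    }
  }
}
```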
`boolean hasAudioConfig()`
`AudioConfig getAudioConfig()`
`AudioConfigOrBuilder getAudioConfigOrBuilder()`

The audio metadata of `audio_content`.

`.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;`
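Since `audio_config` is a singular message field, the generated `hasAudioConfig()` reports presence, while `getAudioConfig()` on an unset field returns the default instance rather than null. A small sketch of the guarded read; `getAudioEncoding()` on `AudioConfig` is assumed from its proto definition:

```java
import com.google.cloud.texttospeech.v1beta1.AudioConfig;
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse;

public class DescribeAudioConfig {
  /** Reads the audio metadata only if the server populated it. */
  static void describe(SynthesizeSpeechResponse response) {
    if (response.hasAudioConfig()) {
      AudioConfig config = response.getAudioConfig();
      System.out.println("Encoding: " + config.getAudioEncoding());
    } else {
      System.out.println("audio_config not set.");
    }
  }
}
```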