Class SynthesizeSpeechResponse
The message returned to the client by the SynthesizeSpeech method.
Implements
IDirectResponseSchema
Namespace: Google.Apis.Texttospeech.v1beta1.Data
Assembly: Google.Apis.Texttospeech.v1beta1.dll
Syntax
public class SynthesizeSpeechResponse : IDirectResponseSchema
Properties
AudioConfig
The audio metadata of audio_content.
Declaration
[JsonProperty("audioConfig")]
public virtual AudioConfig AudioConfig { get; set; }
Property Value
| Type | Description |
| --- | --- |
| AudioConfig | |
AudioContent
The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, we include the WAV header. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
Declaration
[JsonProperty("audioContent")]
public virtual string AudioContent { get; set; }
Property Value
| Type | Description |
| --- | --- |
| string | |
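Because the JSON representation base64-encodes AudioContent, a client must decode it back to raw bytes before writing the audio to disk. A minimal sketch using only the BCL (the sample bytes are invented for illustration; in practice the string comes from a real SynthesizeSpeechResponse.AudioContent):

```csharp
using System;
using System.IO;

class DecodeAudioContent
{
    static void Main()
    {
        // Stand-in for SynthesizeSpeechResponse.AudioContent: base64 text as
        // it appears in the JSON representation (bytes invented for illustration).
        string audioContent = Convert.ToBase64String(new byte[] { 0x52, 0x49, 0x46, 0x46 });

        // Decode to raw bytes. For LINEAR16 the WAV header is already included
        // in the payload, so the bytes can be written to a .wav file as-is.
        byte[] audioBytes = Convert.FromBase64String(audioContent);
        File.WriteAllBytes("output.wav", audioBytes);
        Console.WriteLine(audioBytes.Length); // 4
    }
}
```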
ETag
The ETag of the item.
Declaration
public virtual string ETag { get; set; }
Property Value
| Type | Description |
| --- | --- |
| string | |
Timepoints
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via `<mark>` tags in SSML input.
Declaration
[JsonProperty("timepoints")]
public virtual IList<Timepoint> Timepoints { get; set; }
Property Value
| Type | Description |
| --- | --- |
| IList&lt;Timepoint&gt; | |
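On the wire, each timepoint carries the `markName` of the SSML `<mark>` tag and a `timeSeconds` offset into the output audio. A stand-alone sketch of reading that shape with System.Text.Json (the JSON payload and mark name are invented for illustration; in practice you would read the Timepoints property directly from the deserialized response):

```csharp
using System;
using System.Globalization;
using System.Text.Json;

class TimepointDemo
{
    static void Main()
    {
        // Hypothetical JSON body, shaped like the timepoints field of
        // SynthesizeSpeechResponse; values invented for illustration.
        string json = "{\"timepoints\":[{\"markName\":\"greeting\",\"timeSeconds\":0.85}]}";

        using JsonDocument doc = JsonDocument.Parse(json);
        foreach (JsonElement tp in doc.RootElement.GetProperty("timepoints").EnumerateArray())
        {
            string mark = tp.GetProperty("markName").GetString();
            double seconds = tp.GetProperty("timeSeconds").GetDouble();
            Console.WriteLine($"{mark} @ {seconds.ToString(CultureInfo.InvariantCulture)}s");
        }
        // prints: greeting @ 0.85s
    }
}
```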