@Generated(value="by gapic-generator") @BetaApi public class SpeechClient extends Object implements BackgroundResource
This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig.AudioEncoding encoding = RecognitionConfig.AudioEncoding.FLAC;
  int sampleRateHertz = 44100;
  String languageCode = "en-US";
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(encoding)
      .setSampleRateHertz(sampleRateHertz)
      .setLanguageCode(languageCode)
      .build();
  String uri = "gs://bucket_name/file_name.flac";
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(uri)
      .build();
  RecognizeResponse response = speechClient.recognize(config, audio);
}
```
Note: close() needs to be called on the speechClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().
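When the client's lifetime spans more than one block (for example, as a field of a long-lived service), the same cleanup can be done explicitly instead of with try-with-resources. This is a minimal sketch of that pattern, not part of the generated samples:

```java
SpeechClient speechClient = SpeechClient.create();
try {
  // ... issue recognize() calls as in the sample above ...
} finally {
  // Explicitly release threads and other background resources.
  speechClient.close();
}
```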
The surface of this class includes several types of Java methods for each of the API's methods: a "flattened" method that accepts the request's fields directly (for example, recognize(config, audio)), a method that accepts a single request object (recognize(request)), and a "callable" method that returns an object usable for advanced call control (recognizeCallable()). See the individual methods for example code.
Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and a corresponding parse method to extract the individual identifiers from names returned by the API.
This class can be customized by passing in a custom instance of SpeechSettings to create(). For example:
To customize credentials:
```java
SpeechSettings speechSettings =
    SpeechSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
        .build();
SpeechClient speechClient = SpeechClient.create(speechSettings);
```
To customize the endpoint:
```java
SpeechSettings speechSettings =
    SpeechSettings.newBuilder().setEndpoint(myEndpoint).build();
SpeechClient speechClient = SpeechClient.create(speechSettings);
```
Modifier | Constructor and Description
---|---
`protected` | `SpeechClient(SpeechSettings settings)`<br>Constructs an instance of SpeechClient, using the given settings.
`protected` | `SpeechClient(SpeechStub stub)`
Modifier and Type | Method and Description
---|---
`boolean` | `awaitTermination(long duration, TimeUnit unit)`
`void` | `close()`
`static SpeechClient` | `create()`<br>Constructs an instance of SpeechClient with default settings.
`static SpeechClient` | `create(SpeechSettings settings)`<br>Constructs an instance of SpeechClient, using the given settings.
`static SpeechClient` | `create(SpeechStub stub)`<br>Constructs an instance of SpeechClient, using the given stub for making calls.
`OperationsClient` | `getOperationsClient()`<br>Returns the OperationsClient that can be used to query the status of a long-running operation returned by another API method call.
`SpeechSettings` | `getSettings()`
`SpeechStub` | `getStub()`
`boolean` | `isShutdown()`
`boolean` | `isTerminated()`
`OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata>` | `longRunningRecognizeAsync(LongRunningRecognizeRequest request)`<br>Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface.
`OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata>` | `longRunningRecognizeAsync(RecognitionConfig config, RecognitionAudio audio)`<br>Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface.
`UnaryCallable<LongRunningRecognizeRequest,Operation>` | `longRunningRecognizeCallable()`<br>Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface.
`OperationCallable<LongRunningRecognizeRequest,LongRunningRecognizeResponse,LongRunningRecognizeMetadata>` | `longRunningRecognizeOperationCallable()`<br>Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface.
`RecognizeResponse` | `recognize(RecognitionConfig config, RecognitionAudio audio)`<br>Performs synchronous speech recognition: receive results after all audio has been sent and processed.
`RecognizeResponse` | `recognize(RecognizeRequest request)`<br>Performs synchronous speech recognition: receive results after all audio has been sent and processed.
`UnaryCallable<RecognizeRequest,RecognizeResponse>` | `recognizeCallable()`<br>Performs synchronous speech recognition: receive results after all audio has been sent and processed.
`void` | `shutdown()`
`void` | `shutdownNow()`
`BidiStreamingCallable<StreamingRecognizeRequest,StreamingRecognizeResponse>` | `streamingRecognizeCallable()`<br>Performs bidirectional streaming speech recognition: receive results while sending audio.

(Request and response types above are in the `com.google.cloud.speech.v1` package.)
protected SpeechClient(SpeechSettings settings) throws IOException

Throws: `IOException`
@BetaApi(value="A restructuring of stub classes is planned, so this may break in the future") protected SpeechClient(SpeechStub stub)
public static final SpeechClient create() throws IOException

Throws: `IOException`
public static final SpeechClient create(SpeechSettings settings) throws IOException

Throws: `IOException`
@BetaApi(value="A restructuring of stub classes is planned, so this may break in the future") public static final SpeechClient create(SpeechStub stub)
public final SpeechSettings getSettings()
@BetaApi(value="A restructuring of stub classes is planned, so this may break in the future") public SpeechStub getStub()
@BetaApi(value="The surface for long-running operations is not stable yet and may change in the future.") public final OperationsClient getOperationsClient()
public final com.google.cloud.speech.v1.RecognizeResponse recognize(com.google.cloud.speech.v1.RecognitionConfig config, com.google.cloud.speech.v1.RecognitionAudio audio)
Sample code:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig.AudioEncoding encoding = RecognitionConfig.AudioEncoding.FLAC;
  int sampleRateHertz = 44100;
  String languageCode = "en-US";
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(encoding)
      .setSampleRateHertz(sampleRateHertz)
      .setLanguageCode(languageCode)
      .build();
  String uri = "gs://bucket_name/file_name.flac";
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(uri)
      .build();
  RecognizeResponse response = speechClient.recognize(config, audio);
}
```
Parameters:
`config` - *Required* Provides information to the recognizer that specifies how to process the request.
`audio` - *Required* The audio data to be recognized.

Throws:
`ApiException` - if the remote call fails

public final com.google.cloud.speech.v1.RecognizeResponse recognize(com.google.cloud.speech.v1.RecognizeRequest request)
Sample code:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig.AudioEncoding encoding = RecognitionConfig.AudioEncoding.FLAC;
  int sampleRateHertz = 44100;
  String languageCode = "en-US";
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(encoding)
      .setSampleRateHertz(sampleRateHertz)
      .setLanguageCode(languageCode)
      .build();
  String uri = "gs://bucket_name/file_name.flac";
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(uri)
      .build();
  RecognizeRequest request = RecognizeRequest.newBuilder()
      .setConfig(config)
      .setAudio(audio)
      .build();
  RecognizeResponse response = speechClient.recognize(request);
}
```
Parameters:
`request` - The request object containing all of the parameters for the API call.

Throws:
`ApiException` - if the remote call fails

public final UnaryCallable<com.google.cloud.speech.v1.RecognizeRequest,com.google.cloud.speech.v1.RecognizeResponse> recognizeCallable()
Sample code:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig.AudioEncoding encoding = RecognitionConfig.AudioEncoding.FLAC;
  int sampleRateHertz = 44100;
  String languageCode = "en-US";
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(encoding)
      .setSampleRateHertz(sampleRateHertz)
      .setLanguageCode(languageCode)
      .build();
  String uri = "gs://bucket_name/file_name.flac";
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(uri)
      .build();
  RecognizeRequest request = RecognizeRequest.newBuilder()
      .setConfig(config)
      .setAudio(audio)
      .build();
  ApiFuture<RecognizeResponse> future = speechClient.recognizeCallable().futureCall(request);
  // Do something
  RecognizeResponse response = future.get();
}
```
@BetaApi(value="The surface for long-running operations is not stable yet and may change in the future.") public final OperationFuture<com.google.cloud.speech.v1.LongRunningRecognizeResponse,com.google.cloud.speech.v1.LongRunningRecognizeMetadata> longRunningRecognizeAsync(com.google.cloud.speech.v1.RecognitionConfig config, com.google.cloud.speech.v1.RecognitionAudio audio)
Sample code:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig.AudioEncoding encoding = RecognitionConfig.AudioEncoding.FLAC;
  int sampleRateHertz = 44100;
  String languageCode = "en-US";
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(encoding)
      .setSampleRateHertz(sampleRateHertz)
      .setLanguageCode(languageCode)
      .build();
  String uri = "gs://bucket_name/file_name.flac";
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(uri)
      .build();
  LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(config, audio).get();
}
```
Parameters:
`config` - *Required* Provides information to the recognizer that specifies how to process the request.
`audio` - *Required* The audio data to be recognized.

Throws:
`ApiException` - if the remote call fails

@BetaApi(value="The surface for long-running operations is not stable yet and may change in the future.") public final OperationFuture<com.google.cloud.speech.v1.LongRunningRecognizeResponse,com.google.cloud.speech.v1.LongRunningRecognizeMetadata> longRunningRecognizeAsync(com.google.cloud.speech.v1.LongRunningRecognizeRequest request)
Sample code:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig.AudioEncoding encoding = RecognitionConfig.AudioEncoding.FLAC;
  int sampleRateHertz = 44100;
  String languageCode = "en-US";
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(encoding)
      .setSampleRateHertz(sampleRateHertz)
      .setLanguageCode(languageCode)
      .build();
  String uri = "gs://bucket_name/file_name.flac";
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(uri)
      .build();
  LongRunningRecognizeRequest request = LongRunningRecognizeRequest.newBuilder()
      .setConfig(config)
      .setAudio(audio)
      .build();
  LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();
}
```
Parameters:
`request` - The request object containing all of the parameters for the API call.

Throws:
`ApiException` - if the remote call fails

@BetaApi(value="The surface for use by generated code is not stable yet and may change in the future.") public final OperationCallable<com.google.cloud.speech.v1.LongRunningRecognizeRequest,com.google.cloud.speech.v1.LongRunningRecognizeResponse,com.google.cloud.speech.v1.LongRunningRecognizeMetadata> longRunningRecognizeOperationCallable()
Sample code:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig.AudioEncoding encoding = RecognitionConfig.AudioEncoding.FLAC;
  int sampleRateHertz = 44100;
  String languageCode = "en-US";
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(encoding)
      .setSampleRateHertz(sampleRateHertz)
      .setLanguageCode(languageCode)
      .build();
  String uri = "gs://bucket_name/file_name.flac";
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(uri)
      .build();
  LongRunningRecognizeRequest request = LongRunningRecognizeRequest.newBuilder()
      .setConfig(config)
      .setAudio(audio)
      .build();
  OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> future =
      speechClient.longRunningRecognizeOperationCallable().futureCall(request);
  // Do something
  LongRunningRecognizeResponse response = future.get();
}
```
public final UnaryCallable<com.google.cloud.speech.v1.LongRunningRecognizeRequest,Operation> longRunningRecognizeCallable()
Sample code:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  RecognitionConfig.AudioEncoding encoding = RecognitionConfig.AudioEncoding.FLAC;
  int sampleRateHertz = 44100;
  String languageCode = "en-US";
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(encoding)
      .setSampleRateHertz(sampleRateHertz)
      .setLanguageCode(languageCode)
      .build();
  String uri = "gs://bucket_name/file_name.flac";
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(uri)
      .build();
  LongRunningRecognizeRequest request = LongRunningRecognizeRequest.newBuilder()
      .setConfig(config)
      .setAudio(audio)
      .build();
  ApiFuture<Operation> future = speechClient.longRunningRecognizeCallable().futureCall(request);
  // Do something
  Operation response = future.get();
}
```
public final BidiStreamingCallable<com.google.cloud.speech.v1.StreamingRecognizeRequest,com.google.cloud.speech.v1.StreamingRecognizeResponse> streamingRecognizeCallable()
Sample code:
```java
try (SpeechClient speechClient = SpeechClient.create()) {
  BidiStream<StreamingRecognizeRequest, StreamingRecognizeResponse> bidiStream =
      speechClient.streamingRecognizeCallable().call();
  StreamingRecognizeRequest request = StreamingRecognizeRequest.newBuilder().build();
  bidiStream.send(request);
  for (StreamingRecognizeResponse response : bidiStream) {
    // Do something when a response is received
  }
}
```
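In practice, the first request sent on the stream should carry the streaming configuration and subsequent requests carry the audio bytes. The following sketch illustrates that ordering; `config` (a RecognitionConfig built as in the earlier samples) and `audioChunk` (a ByteString of audio data) are assumed to exist and are not part of the generated sample:

```java
// First request: configuration only, no audio.
StreamingRecognitionConfig streamingConfig = StreamingRecognitionConfig.newBuilder()
    .setConfig(config)        // RecognitionConfig built as in the earlier samples (assumed)
    .setInterimResults(true)  // receive partial hypotheses while audio is still streaming
    .build();
bidiStream.send(StreamingRecognizeRequest.newBuilder()
    .setStreamingConfig(streamingConfig)
    .build());

// Subsequent requests: raw audio bytes.
bidiStream.send(StreamingRecognizeRequest.newBuilder()
    .setAudioContent(audioChunk)  // ByteString of audio data (assumed)
    .build());
bidiStream.closeSend();           // signal that no more audio will be sent
```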
public final void close()

Specified by: `close` in interface `AutoCloseable`
public void shutdown()

Specified by: `shutdown` in interface `BackgroundResource`
public boolean isShutdown()

Specified by: `isShutdown` in interface `BackgroundResource`
public boolean isTerminated()

Specified by: `isTerminated` in interface `BackgroundResource`
public void shutdownNow()

Specified by: `shutdownNow` in interface `BackgroundResource`
public boolean awaitTermination(long duration, TimeUnit unit) throws InterruptedException

Specified by: `awaitTermination` in interface `BackgroundResource`

Throws: `InterruptedException`
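Taken together, these lifecycle methods support a graceful shutdown pattern. This is an illustrative sketch, not from the generated docs; the 30-second timeout is an arbitrary choice:

```java
speechClient.shutdown();  // stop accepting new calls, let in-flight calls finish
if (!speechClient.awaitTermination(30, TimeUnit.SECONDS)) {
  speechClient.shutdownNow();  // force-cancel anything still running after the timeout
}
```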
Copyright © 2019 Google LLC. All rights reserved.