SpeechClient

Service that implements the Google Cloud Speech API.

Constructor

new SpeechClient([options])

Construct an instance of SpeechClient.

Parameters:

options (object, optional)
    The configuration object. See the properties below for more details.

    Properties of options:

    credentials (object, optional)
        Credentials object, with optional client_email (string) and private_key (string) properties.

    email (string, optional)
        Account email address. Required when using a .pem or .p12 keyFilename.

    keyFilename (string, optional)
        Full path to a .json, .pem, or .p12 key downloaded from the Google Developers Console. If you provide a path to a JSON file, the projectId option below is not necessary. NOTE: .pem and .p12 require you to specify options.email as well.

    port (number, optional)
        The port on which to connect to the remote host.

    projectId (string, optional)
        The project ID from the Google Developers Console, e.g. 'grape-spaceship-123'. We will also check the environment variable GCLOUD_PROJECT for your project ID. If your app is running in an environment which supports Application Default Credentials, your project ID will be detected automatically.

    promise (function, optional)
        Custom promise module to use instead of native Promises.

    apiEndpoint (string, optional)
        The domain name of the API remote host.
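
For example, a client can be constructed with an explicit key file and project. This is a minimal sketch; the key file path below is a placeholder, and the project ID is the sample value used elsewhere in this reference:

const speech = require('@google-cloud/speech');

const client = new speech.v1p1beta1.SpeechClient({
  // Placeholder values; substitute your own key file and project ID.
  keyFilename: '/path/to/service-account-key.json',
  projectId: 'grape-spaceship-123',
});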

Members

(static) apiEndpoint

The DNS address for this API service; the same as servicePath, kept for compatibility reasons.

(static) port

The port for this API service.

(static) scopes

The scopes needed to make gRPC calls for every method defined in this service.

(static) servicePath

The DNS address for this API service.
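
These static members can be read directly off the class. A small sketch that logs the defaults, assuming the statics are exposed as listed in this reference:

const speech = require('@google-cloud/speech');

console.log(speech.v1p1beta1.SpeechClient.servicePath); // default DNS address of the service
console.log(speech.v1p1beta1.SpeechClient.port);        // default port
console.log(speech.v1p1beta1.SpeechClient.scopes);      // OAuth scopes used for gRPC calls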

Methods

getProjectId(callback)

Returns the project ID used by this class.

Parameters:

callback (function)
    The callback to be called with the current project ID.
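
A short usage sketch, assuming the standard Node-style callback signature (error first, then the resolved project ID):

client.getProjectId((err, projectId) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(`Current project: ${projectId}`);
});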

longRunningRecognize(request[, options][, callback]) → {Promise}

Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message.

Parameters:

request (Object)
    The request object that will be sent.

    Properties of request:

    config (Object)
        Required. Provides information to the recognizer that specifies how to process the request. This object should have the same structure as RecognitionConfig.

    audio (Object)
        Required. The audio data to be recognized. This object should have the same structure as RecognitionAudio.

options (Object, optional)
    Optional parameters. You can override the default settings for this call, e.g. timeout, retries, pagination, etc. See gax.CallOptions for details.

callback (function, optional)
    The function which will be called with the result of the API call. The second parameter to the callback is a gax.Operation object.

Example
const speech = require('@google-cloud/speech');

const client = new speech.v1p1beta1.SpeechClient({
  // optional auth parameters.
});

const encoding = 'FLAC';
const sampleRateHertz = 44100;
const languageCode = 'en-US';
const config = {
  encoding: encoding,
  sampleRateHertz: sampleRateHertz,
  languageCode: languageCode,
};
const uri = 'gs://bucket_name/file_name.flac';
const audio = {
  uri: uri,
};
const request = {
  config: config,
  audio: audio,
};

// Handle the operation using the promise pattern.
client.longRunningRecognize(request)
  .then(responses => {
    const [operation, initialApiResponse] = responses;

    // Operation#promise starts polling for the completion of the LRO.
    return operation.promise();
  })
  .then(responses => {
    const result = responses[0];
    const metadata = responses[1];
    const finalApiResponse = responses[2];
  })
  .catch(err => {
    console.error(err);
  });

const encoding = 'FLAC';
const sampleRateHertz = 44100;
const languageCode = 'en-US';
const config = {
  encoding: encoding,
  sampleRateHertz: sampleRateHertz,
  languageCode: languageCode,
};
const uri = 'gs://bucket_name/file_name.flac';
const audio = {
  uri: uri,
};
const request = {
  config: config,
  audio: audio,
};

// Handle the operation using the event emitter pattern.
client.longRunningRecognize(request)
  .then(responses => {
    const [operation, initialApiResponse] = responses;

    // Adding a listener for the "complete" event starts polling for the
    // completion of the operation.
    operation.on('complete', (result, metadata, finalApiResponse) => {
      // doSomethingWith(result);
    });

    // Adding a listener for the "progress" event causes the callback to be
    // called on any change in metadata when the operation is polled.
    operation.on('progress', (metadata, apiResponse) => {
      // doSomethingWith(metadata)
    });

    // Adding a listener for the "error" event handles any errors found during polling.
    operation.on('error', err => {
      // throw(err);
    });
  })
  .catch(err => {
    console.error(err);
  });

const encoding = 'FLAC';
const sampleRateHertz = 44100;
const languageCode = 'en-US';
const config = {
  encoding: encoding,
  sampleRateHertz: sampleRateHertz,
  languageCode: languageCode,
};
const uri = 'gs://bucket_name/file_name.flac';
const audio = {
  uri: uri,
};
const request = {
  config: config,
  audio: audio,
};

// Handle the operation using async/await
// (this snippet must run inside an async function).
const [operation] = await client.longRunningRecognize(request);

const [response] = await operation.promise();

recognize(request[, options][, callback]) → {Promise}

Performs synchronous speech recognition: receive results after all audio has been sent and processed.

Parameters:

request (Object)
    The request object that will be sent.

    Properties of request:

    config (Object)
        Required. Provides information to the recognizer that specifies how to process the request. This object should have the same structure as RecognitionConfig.

    audio (Object)
        Required. The audio data to be recognized. This object should have the same structure as RecognitionAudio.

options (Object, optional)
    Optional parameters. You can override the default settings for this call, e.g. timeout, retries, pagination, etc. See gax.CallOptions for details.

callback (function, optional)
    The function which will be called with the result of the API call. The second parameter to the callback is an object representing RecognizeResponse.

Example
const speech = require('@google-cloud/speech');

const client = new speech.v1p1beta1.SpeechClient({
  // optional auth parameters.
});

const encoding = 'FLAC';
const sampleRateHertz = 44100;
const languageCode = 'en-US';
const config = {
  encoding: encoding,
  sampleRateHertz: sampleRateHertz,
  languageCode: languageCode,
};
const uri = 'gs://bucket_name/file_name.flac';
const audio = {
  uri: uri,
};
const request = {
  config: config,
  audio: audio,
};
client.recognize(request)
  .then(responses => {
    const response = responses[0];
    // doThingsWith(response)
  })
  .catch(err => {
    console.error(err);
  });

streamingRecognize([options]) → {Stream}

Performs bidirectional streaming speech recognition: receive results while sending audio. This method is only available via the gRPC API (not REST).

Parameters:

options (Object, optional)
    Optional parameters. You can override the default settings for this call, e.g. timeout, retries, pagination, etc. See gax.CallOptions for details.

Example
const speech = require('@google-cloud/speech');

const client = new speech.v1p1beta1.SpeechClient({
  // optional auth parameters.
});

const stream = client.streamingRecognize().on('data', response => {
  // doThingsWith(response)
});
const request = {};
// Write request objects.
stream.write(request);
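
The request objects written to the stream are StreamingRecognizeRequest messages: the first carries only the streaming configuration, and each later one a chunk of audio. A fuller sketch under that assumption (the local file path './audio.raw' and the LINEAR16 settings are placeholders):

const fs = require('fs');
const speech = require('@google-cloud/speech');

const client = new speech.v1p1beta1.SpeechClient();

const stream = client
  .streamingRecognize()
  .on('error', console.error)
  .on('data', response => {
    // Each response may carry interim or final StreamingRecognitionResult messages.
    console.log(JSON.stringify(response.results, null, 2));
  });

// First request: streaming configuration only.
stream.write({
  streamingConfig: {
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
    interimResults: false,
  },
});

// Later requests: raw audio chunks; end the stream once the audio is exhausted.
fs.createReadStream('./audio.raw')
  .on('data', chunk => stream.write({ audioContent: chunk }))
  .on('end', () => stream.end());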