The Cloud Spanner API can be used to manage sessions and execute
transactions on data stored in Cloud Spanner databases.
Constructor
new SpannerClient([options], [gaxInstance])
Construct an instance of SpannerClient.
Parameters:
Name
Type
Attributes
Description
options
object
<optional>
The configuration object.
The options accepted by the constructor are described in detail
in this document.
The common options are:
Properties
Name
Type
Attributes
Description
credentials
object
<optional>
Credentials object.
Properties
Name
Type
Attributes
Description
client_email
string
<optional>
private_key
string
<optional>
email
string
<optional>
Account email address. Required when
using a .pem or .p12 keyFilename.
keyFilename
string
<optional>
Full path to a .json, .pem, or
.p12 key downloaded from the Google Developers Console. If you provide
a path to a JSON file, the projectId option below is not necessary.
NOTE: .pem and .p12 require you to specify options.email as well.
port
number
<optional>
The port on which to connect to
the remote host.
projectId
string
<optional>
The project ID from the Google
Developer's Console, e.g. 'grape-spaceship-123'. We will also check
the environment variable GCLOUD_PROJECT for your project ID. If your
app is running in an environment which supports
Application Default Credentials,
your project ID will be detected automatically.
apiEndpoint
string
<optional>
The domain name of the
API remote host.
clientConfig
gax.ClientConfig
<optional>
Client configuration override.
Follows the structure of gapicConfig.
fallback
boolean
<optional>
Use HTTP/1.1 REST mode.
For more information, please check the
documentation.
gaxInstance
gax
<optional>
Loaded instance of google-gax. Useful if you
need to avoid loading the default gRPC version and want to use the fallback
HTTP implementation. Load only fallback version and pass it to the constructor:
const gax = require('google-gax/build/src/fallback'); // avoids loading google-gax with gRPC
const client = new SpannerClient({fallback: true}, gax);
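For reference, a minimal configuration object might look like the following sketch. The key file path is hypothetical, and constructing the client itself requires the library to be installed, so that step is shown commented out:

```javascript
// Sketch of a typical configuration object; the keyFilename path is
// hypothetical. With a JSON key file, projectId is not strictly necessary.
const options = {
  projectId: 'grape-spaceship-123',
  keyFilename: '/path/to/service-account-key.json',
};
// const {v1} = require('@google-cloud/spanner');
// const client = new v1.SpannerClient(options);
```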
Members
apiEndpoint
The DNS address for this API service.
apiEndpoint
The DNS address for this API service - same as servicePath.
port
The port for this API service.
scopes
The scopes needed to make gRPC calls for every method defined
in this service.
servicePath
The DNS address for this API service.
Methods
batchWrite(request[, options]) → {Stream}
Batches the supplied mutation groups in a collection of efficient
transactions. All mutations in a group are committed atomically. However,
mutations across groups can be committed non-atomically in an unspecified
order and thus must be independent of each other. Partial failure is
possible, i.e., some groups may have been committed successfully, while
some may have failed. The results of individual batches are streamed into
the response as the batches are applied.
BatchWrite requests are not replay protected, meaning that each mutation
group may be applied more than once. Replays of non-idempotent mutations
may have undesirable effects. For example, replays of an insert mutation
may produce an ALREADY_EXISTS error, or, if you use generated or
commit-timestamp-based keys, may result in additional rows being added to the
mutation's table. We recommend structuring your mutation groups to be
idempotent to avoid this issue.
Parameters:
Name
Type
Attributes
Description
request
Object
The request object that will be sent.
Properties
Name
Type
Attributes
Description
session
string
Required. The session in which the batch request is to be run.
mutationGroups
Array.<google.spanner.v1.BatchWriteRequest.MutationGroup>
Required. The groups of mutations to be applied.
excludeTxnFromChangeStreams
boolean
<optional>
Optional. When exclude_txn_from_change_streams is set to true:
Mutations from all transactions in this batch write operation will not
be recorded in change streams with DDL option allow_txn_exclusion=true
that are tracking columns modified by these transactions.
Mutations from all transactions in this batch write operation will be
recorded in change streams with DDL option allow_txn_exclusion=false or not set that are tracking columns modified by these transactions.
When exclude_txn_from_change_streams is set to false or not set,
mutations from all transactions in this batch write operation will be
recorded in all change streams that are tracking columns modified by these
transactions.
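As a sketch, a BatchWrite request with two independent mutation groups might be assembled as follows. The session name, table, columns, and field shapes are assumptions based on the parameter descriptions above; insertOrUpdate (rather than insert) is used so that a replayed group stays idempotent, per the recommendation above:

```javascript
// Sketch: two independent mutation groups, each committed atomically.
// Session, table, and column names here are hypothetical.
const session =
  'projects/my-project/instances/my-instance/databases/my-db/sessions/my-session';
const request = {
  session,
  mutationGroups: [
    // insertOrUpdate keeps a replayed group from raising ALREADY_EXISTS.
    {mutations: [{insertOrUpdate: {table: 'Singers',
      columns: ['SingerId', 'Name'], values: [['1', 'Marc']]}}]},
    {mutations: [{insertOrUpdate: {table: 'Singers',
      columns: ['SingerId', 'Name'], values: [['2', 'Lea']]}}]},
  ],
};
// const stream = client.batchWrite(request);
// stream.on('data', (response) => { /* per-group results as batches apply */ });
```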
executeStreamingSql(request[, options]) → {Stream}
Like ExecuteSql, except returns the
result set as a stream. Unlike
ExecuteSql, there is no limit on
the size of the returned result set. However, no individual row in the
result set can exceed 100 MiB, and no column value can exceed 10 MiB.
Parameters:
Name
Type
Attributes
Description
request
Object
The request object that will be sent.
Properties
Name
Type
Attributes
Description
session
string
Required. The session in which the SQL query should be performed.
transaction
google.spanner.v1.TransactionSelector
<optional>
For queries, if none is provided, the default is a temporary read-only
transaction with strong concurrency.
Standard DML statements require a read-write transaction. To protect
against replays, single-use transactions are not supported. The caller
must either supply an existing transaction ID or begin a new transaction.
Partitioned DML requires an existing Partitioned DML transaction ID.
sql
string
Required. The SQL string.
params
google.protobuf.Struct
<optional>
Parameter names and values that bind to placeholders in the SQL string.
A parameter placeholder consists of the @ character followed by the
parameter name (for example, @firstName). Parameter names must conform
to the naming requirements of identifiers as specified at
https://cloud.google.com/spanner/docs/lexical#identifiers.
Parameters can appear anywhere that a literal value is expected. The same
parameter name can be used more than once, for example:
"WHERE id > @msg_id AND id < @msg_id + 100"
It is an error to execute a SQL statement with unbound parameters.
paramTypes
Object.<string, google.spanner.v1.Type>
It is not always possible for Cloud Spanner to infer the right SQL type
from a JSON value. For example, values of type BYTES and values
of type STRING both appear in
params as JSON strings.
In these cases, param_types can be used to specify the exact
SQL type for some or all of the SQL statement parameters. See the
definition of Type for more information
about SQL types.
resumeToken
Buffer
If this request is resuming a previously interrupted SQL statement
execution, resume_token should be copied from the last
PartialResultSet yielded before the
interruption. Doing this enables the new SQL statement execution to resume
where the last one left off. The rest of the request parameters must
exactly match the request that yielded this token.
queryMode
google.spanner.v1.ExecuteSqlRequest.QueryMode
<optional>
Used to control the amount of debugging information returned in
ResultSetStats. If
partition_token is
set, query_mode can only
be set to
QueryMode.NORMAL.
partitionToken
Buffer
If present, results will be restricted to the specified partition
previously created using PartitionQuery(). There must be an exact
match for the values of fields common to this message and the
PartitionQueryRequest message used to create this partition_token.
seqno
number
A per-transaction sequence number used to identify this request. This field
makes each request idempotent such that if the request is received multiple
times, at most one will succeed.
The sequence number must be monotonically increasing within the
transaction. If a request arrives for the first time with an out-of-order
sequence number, the transaction may be aborted. Replays of previously
handled requests will yield the same response as the first execution.
dataBoostEnabled
boolean
<optional>
If this is for a partitioned query and this field is set to true, the
request is executed with Spanner Data Boost independent compute resources.
If the field is set to true but the request does not set
partition_token, the API returns an INVALID_ARGUMENT error.
lastStatement
boolean
<optional>
Optional. If set to true, this statement marks the end of the transaction.
The transaction should be committed or aborted after this statement
executes, and attempts to execute any other requests against this
transaction (including reads and queries) will be rejected.
For DML statements, setting this option may cause some error reporting to
be deferred until commit time (e.g. validation of unique constraints).
Given this, successful execution of a DML statement should not be assumed
until a subsequent Commit call completes successfully.
An object stream which emits PartialResultSet on 'data' event.
Please see the documentation
for more details and examples.
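The resume_token contract described above — copy the token from the last PartialResultSet and keep every other field identical — can be captured in a small helper. This is a sketch; the session name, SQL, and parameter values below are hypothetical, and the PartialResultSet is assumed to expose a resumeToken field:

```javascript
// Sketch: build a follow-up request that resumes an interrupted execution.
// Every field except resumeToken must exactly match the original request.
function withResumeToken(originalRequest, lastPartialResultSet) {
  return {...originalRequest, resumeToken: lastPartialResultSet.resumeToken};
}

// Example shape (session and SQL are hypothetical):
const resumed = withResumeToken(
  {session: 'projects/p/instances/i/databases/d/sessions/s',
   sql: 'SELECT id FROM Messages WHERE id > @msg_id',
   params: {fields: {msg_id: {stringValue: '100'}}}},
  {resumeToken: Buffer.from('abc')}
);
```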
getProjectId() → {Promise}
Return the project ID used by this class.
Returns:
Type
Description
Promise
A promise that resolves to string containing the project ID.
initialize() → {Promise}
Initialize the client.
Performs asynchronous operations (such as authentication) and prepares the client.
This function will be called automatically when any class method is called for the
first time, but if you need to initialize it before calling an actual method,
feel free to call initialize() directly.
You can await on this method if you want to make sure the client is initialized.
Returns:
Type
Description
Promise
A promise that resolves to an authenticated service stub.
listSessionsAsync(request[, options]) → {Object}
Equivalent to listSessions, but returns an iterable object.
for-await-of syntax is used with the iterable to get response elements on-demand.
Parameters:
Name
Type
Attributes
Description
request
Object
The request object that will be sent.
Properties
Name
Type
Description
database
string
Required. The database in which to list sessions.
pageSize
number
Number of sessions to be returned in the response. If 0 or less, defaults
to the server's maximum allowed page size.
pageToken
string
If non-empty, page_token should contain a
next_page_token
from a previous
ListSessionsResponse.
filter
string
An expression for filtering the results of the request. Filter rules are
case insensitive. The fields eligible for filtering are:
* `labels.key` where key is the name of a label
Some examples of using filters are:
* `labels.env:*` --> The session has the label "env".
* `labels.env:dev` --> The session has the label "env" and the value of
the label contains the string "dev".
An iterable Object that allows async iteration.
When you iterate the returned iterable, each element will be an object representing
Session. The API will be called under the hood as needed, once per page,
so you can stop the iteration when you don't need more results.
Please see the documentation
for more details and examples.
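A sketch of on-demand iteration with for-await-of follows. The database name and filter are hypothetical; the point is that iteration can stop early, in which case no further pages are fetched:

```javascript
// Sketch: collect session names on demand, stopping after `max` results.
async function firstSessionNames(client, database, max) {
  const names = [];
  for await (const session of
      client.listSessionsAsync({database, filter: 'labels.env:dev'})) {
    names.push(session.name);
    if (names.length >= max) break; // stop early; no further pages are fetched
  }
  return names;
}
```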
listSessionsStream(request[, options]) → {Stream}
Equivalent to listSessions, but returns a NodeJS Stream object.
Parameters:
Name
Type
Attributes
Description
request
Object
The request object that will be sent.
Properties
Name
Type
Description
database
string
Required. The database in which to list sessions.
pageSize
number
Number of sessions to be returned in the response. If 0 or less, defaults
to the server's maximum allowed page size.
pageToken
string
If non-empty, page_token should contain a
next_page_token
from a previous
ListSessionsResponse.
filter
string
An expression for filtering the results of the request. Filter rules are
case insensitive. The fields eligible for filtering are:
* `labels.key` where key is the name of a label
Some examples of using filters are:
* `labels.env:*` --> The session has the label "env".
* `labels.env:dev` --> The session has the label "env" and the value of
the label contains the string "dev".
An object stream which emits an object representing Session on 'data' event.
The client library will perform auto-pagination by default: it will call the API as many
times as needed. Note that it can affect your quota.
We recommend using the listSessionsAsync()
method described above for async iteration, which you can stop as needed.
Please see the documentation
for more details and examples.
sessionPath(project, instance, database, session) → {string}
Return a fully-qualified session resource name string.
Parameters:
Name
Type
Description
project
string
instance
string
database
string
session
string
Returns:
Type
Description
string
Resource name string.
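The returned string follows the standard Spanner session resource pattern. As a sketch of the format this method produces:

```javascript
// Sketch: the resource name format produced by sessionPath.
function sessionPath(project, instance, database, session) {
  return `projects/${project}/instances/${instance}` +
         `/databases/${database}/sessions/${session}`;
}
```

For example, sessionPath('p', 'i', 'd', 's') yields 'projects/p/instances/i/databases/d/sessions/s'.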
streamingRead(request[, options]) → {Stream}
Like Read, except returns the result set
as a stream. Unlike Read, there is no
limit on the size of the returned result set. However, no individual row in
the result set can exceed 100 MiB, and no column value can exceed
10 MiB.
Parameters:
Name
Type
Attributes
Description
request
Object
The request object that will be sent.
Properties
Name
Type
Attributes
Description
session
string
Required. The session in which the read should be performed.
transaction
google.spanner.v1.TransactionSelector
<optional>
The transaction to use. If none is provided, the default is a
temporary read-only transaction with strong concurrency.
table
string
Required. The name of the table in the database to be read.
index
string
If non-empty, the name of an index on
table. This index is used instead of
the table primary key when interpreting
key_set and sorting result rows.
See key_set for further
information.
columns
Array.<string>
Required. The columns of table to be
returned for each row matching this request.
keySet
google.spanner.v1.KeySet
Required. key_set identifies the rows to be yielded. key_set names the
primary keys of the rows in table to
be yielded, unless index is present.
If index is present, then
key_set instead names index keys
in index.
If the partition_token
field is empty, rows are yielded in table primary key order (if
index is empty) or index key order
(if index is non-empty). If the
partition_token field is
not empty, rows will be yielded in an unspecified order.
It is not an error for the key_set to name rows that do not
exist in the database. Read yields nothing for nonexistent rows.
limit
number
If greater than zero, only the first limit rows are yielded. If limit
is zero, the default is no limit. A limit cannot be specified if
partition_token is set.
resumeToken
Buffer
If this request is resuming a previously interrupted read,
resume_token should be copied from the last
PartialResultSet yielded before the
interruption. Doing this enables the new read to resume where the last read
left off. The rest of the request parameters must exactly match the request
that yielded this token.
partitionToken
Buffer
If present, results will be restricted to the specified partition
previously created using PartitionRead(). There must be an exact
match for the values of fields common to this message and the
PartitionReadRequest message used to create this partition_token.
orderBy
google.spanner.v1.ReadRequest.OrderBy
<optional>
By default, Spanner will return result rows in primary key order except for
PartitionRead requests. For applications that do not require rows to be
returned in primary key (ORDER_BY_PRIMARY_KEY) order, setting
ORDER_BY_NO_ORDER option allows Spanner to optimize row retrieval,
resulting in lower latencies in certain cases (e.g. bulk point lookups).
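Putting the parameters above together, a streamingRead request might be shaped like the following sketch. The session name, table, columns, and the KeySet shape are assumptions for illustration:

```javascript
// Sketch: read up to 10 rows from a hypothetical Albums table in primary
// key order. keySet {all: true} names every row in the table.
const request = {
  session: 'projects/p/instances/i/databases/d/sessions/s',
  table: 'Albums',
  columns: ['AlbumId', 'AlbumTitle'],
  keySet: {all: true},
  limit: 10, // must not be set together with partitionToken
};
// const stream = client.streamingRead(request);
// stream.on('data', (partialResultSet) => { /* rows arrive incrementally */ });
```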