Constructor
new BatchTransaction([options])
Parameters:
Name | Type | Attributes | Description |
---|---|---|---|
options | TimestampBounds | &lt;optional&gt; | |
Extends
- Snapshot
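Example
Instances are normally obtained through Database#createBatchTransaction rather than by calling this constructor directly; the TimestampBounds options are passed through that method. A minimal sketch, assuming strong read bounds:
```
const {Spanner} = require('@google-cloud/spanner');
const spanner = new Spanner();
const instance = spanner.instance('my-instance');
const database = instance.database('my-database');
// `strong: true` is one of the TimestampBounds options; it requests reads
// against the most recent data.
database.createBatchTransaction({strong: true}, function(err, transaction) {
  if (err) {
    // Error handling omitted.
  }
  // `transaction` is a BatchTransaction instance.
  transaction.close();
});
```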
Members
ended
Whether or not the transaction has ended. If true, make no further requests, and discard the transaction.
- Inherited From: Snapshot
id
The transaction ID.
- Overrides: Snapshot#id
metadata
The raw transaction response object. It is populated after Snapshot#begin is called.
- Inherited From: Snapshot
readTimestamp
Snapshot only: The timestamp at which all reads are performed.
- Overrides: Snapshot#readTimestamp
readTimestampProto
Snapshot only: The protobuf version of Snapshot#readTimestamp. This is useful if you require microsecond precision.
- Overrides: Snapshot#readTimestampProto
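A short sketch showing how these members can be inspected once the transaction has begun (Database#createBatchTransaction begins it for you); the member names are from this page, the rest is illustrative:
```
const {Spanner} = require('@google-cloud/spanner');
const spanner = new Spanner();
const database = spanner.instance('my-instance').database('my-database');
database.createBatchTransaction().then(function(data) {
  const transaction = data[0];
  // Populated once the transaction has begun.
  console.log(transaction.id);
  console.log(transaction.readTimestamp);      // date form of the read timestamp
  console.log(transaction.readTimestampProto); // protobuf form, microsecond precision
  return transaction.close();
});
```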
Methods
close([callback]) → {Promise.<BasicResponse>}
Closes all open resources.
When the transaction is no longer needed, you should call this method to free up resources allocated by the Batch client.
Calling this method renders the transaction unusable everywhere. In particular, if the transaction object is being used across multiple machines, calling this method on any one of them makes the transaction unusable on all of them. Call this method only when the transaction is no longer needed anywhere.
Parameters:
Name | Type | Attributes | Description |
---|---|---|---|
callback | BasicCallback | &lt;optional&gt; | Callback function. |
Returns:
Type | Description |
---|---|
Promise.<BasicResponse> |
Example
```
const {Spanner} = require('@google-cloud/spanner');
const spanner = new Spanner();
const instance = spanner.instance('my-instance');
const database = instance.database('my-database');
database.createBatchTransaction(function(err, transaction) {
if (err) {
// Error handling omitted.
}
transaction.close(function(err, apiResponse) {});
});
//-
// If the callback is omitted, we'll return a Promise.
//-
database.createBatchTransaction().then(function(data) {
const transaction = data[0];
return transaction.close();
});
```
createQueryPartitions(query[, callback]) → {Promise.<CreateQueryPartitionsResponse>}
Creates a set of query partitions that can be used to execute a query operation in parallel. Partitions become invalid when the transaction used to create them is closed.
Parameters:
Name | Type | Attributes | Description |
---|---|---|---|
query | string \| object | | A SQL query or query object. |
callback | CreateQueryPartitionsCallback | &lt;optional&gt; | Callback function. |
Returns:
Type | Description |
---|---|
Promise.<CreateQueryPartitionsResponse> |
Example
```
// Imports the Google Cloud client library
const {Spanner} = require('@google-cloud/spanner');
/**
* TODO(developer): Uncomment the following lines before running the sample.
*/
// const projectId = 'my-project-id';
// const instanceId = 'my-instance';
// const databaseId = 'my-database';
// Creates a client
const spanner = new Spanner({
projectId: projectId,
});
// Gets a reference to a Cloud Spanner instance and database
const instance = spanner.instance(instanceId);
const database = instance.database(databaseId);
const [transaction] = await database.createBatchTransaction();
const query = {
sql: 'SELECT * FROM Singers',
  // Data Boost is optional; it can also be used for partitioned reads and
  // queries, running the request on Spanner-independent compute resources.
dataBoostEnabled: true,
};
// A Partition object is serializable and can be used from a different process.
const [partitions] = await transaction.createQueryPartitions(query);
console.log(`Successfully created ${partitions.length} query partitions.`);
let row_count = 0;
const promises = [];
partitions.forEach(partition => {
promises.push(
transaction.execute(partition).then(results => {
const rows = results[0].map(row => row.toJSON());
row_count += rows.length;
})
);
});
Promise.all(promises)
.then(() => {
console.log(
`Successfully received ${row_count} from executed partitions.`
);
transaction.close();
})
.then(() => {
database.close();
  });
```
createReadPartitions(options[, callback]) → {Promise.<CreateReadPartitionsResponse>}
Creates a set of read partitions that can be used to execute a read operation in parallel. Partitions become invalid when the transaction used to create them is closed.
Parameters:
Name | Type | Attributes | Description |
---|---|---|---|
options | ReadRequestOptions | | Configuration object, describing what to read from. |
callback | CreateReadPartitionsCallback | &lt;optional&gt; | Callback function. |
Returns:
Type | Description |
---|---|
Promise.<CreateReadPartitionsResponse> |
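Example
A minimal sketch, assuming a `Singers` table with `SingerId` and `Name` columns; leaving `keys`/`ranges` out of the options is assumed to read the whole table:
```
const {Spanner} = require('@google-cloud/spanner');
const spanner = new Spanner();
const instance = spanner.instance('my-instance');
const database = instance.database('my-database');
const [transaction] = await database.createBatchTransaction();
// Describe what to read.
const options = {
  table: 'Singers',
  columns: ['SingerId', 'Name'],
};
// Partitions are serializable and can be executed from different processes.
const [partitions] = await transaction.createReadPartitions(options);
console.log(`Successfully created ${partitions.length} read partitions.`);
for (const partition of partitions) {
  const [rows] = await transaction.execute(partition);
  console.log(`Partition returned ${rows.length} rows.`);
}
transaction.close();
```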
createReadStream(table, query) → {ReadableStream}
Create a readable object stream to receive rows from the database using key lookups and scans.
Wrapper around v1.SpannerClient#streamingRead.
Parameters:
Name | Type | Description |
---|---|---|
table | string | The table to read from. |
query | ReadRequest | Configuration object. See the official ReadRequest API documentation. |
Returns:
Type | Description |
---|---|
ReadableStream | A readable stream that emits rows. |
- Inherited From: Snapshot
Fires:
- PartialResultStream#event:response
- PartialResultStream#event:stats
Examples
```
transaction.createReadStream('Singers', {
keys: ['1'],
columns: ['SingerId', 'name']
})
.on('error', function(err) {})
.on('data', function(row) {
// row = [
// {
// name: 'SingerId',
// value: '1'
// },
// {
// name: 'Name',
// value: 'Eddie Wilson'
// }
// ]
})
.on('end', function() {
// All results retrieved.
});
```
Provide an array for `query.keys` to read with a
composite key.
```
const query = {
keys: [
[
'Id1',
'Name1'
],
[
'Id2',
'Name2'
]
],
// ...
};
```
Each row is returned as an array of objects, where each object has `name` and
`value` properties. To get a serialized object, call `toJSON()`.
```
transaction.createReadStream('Singers', {
keys: ['1'],
columns: ['SingerId', 'name']
})
.on('error', function(err) {})
.on('data', function(row) {
// row.toJSON() = {
// SingerId: '1',
// Name: 'Eddie Wilson'
// }
})
.on('end', function() {
// All results retrieved.
});
```
Alternatively, set `query.json` to `true`, and this step
will be performed automatically.
```
transaction.createReadStream('Singers', {
keys: ['1'],
columns: ['SingerId', 'name'],
json: true,
})
.on('error', function(err) {})
.on('data', function(row) {
// row = {
// SingerId: '1',
// Name: 'Eddie Wilson'
// }
})
.on('end', function() {
// All results retrieved.
});
```
If you anticipate many results, you can end a stream
early to prevent unnecessary processing and API requests.
```
transaction.createReadStream('Singers', {
keys: ['1'],
columns: ['SingerId', 'name']
})
.on('data', function(row) {
this.end();
});
```
end()
Let the client know you're done with a particular transaction. This should mainly be called for Snapshot objects; however, in certain cases you may want to call it for Transaction objects as well.
- Inherited From: Snapshot
Examples
Calling `end` on a read-only snapshot
```
database.getSnapshot((err, transaction) => {
if (err) {
// Error handling omitted.
}
transaction.run('SELECT * FROM Singers', (err, rows) => {
if (err) {
// Error handling omitted.
}
// End the snapshot.
transaction.end();
});
});
```
Calling `end` on a read/write transaction
```
database.runTransaction((err, transaction) => {
if (err) {
// Error handling omitted.
}
const query = 'UPDATE Account SET Balance = 1000 WHERE Key = 1';
transaction.runUpdate(query, err => {
if (err) {
    // In the event of an error, there would be nothing to rollback,
    // so instead of continuing, discard the transaction.
    transaction.end();
    return;
}
transaction.commit(err => {});
});
});
```
execute(partition[, callback]) → {Promise.<RunResponse>|Promise.<TransactionRequestReadResponse>}
Executes the partition.
Parameters:
Name | Type | Attributes | Description |
---|---|---|---|
partition | ReadPartition \| QueryPartition | | The partition object. |
callback | TransactionRequestReadCallback \| RunCallback | &lt;optional&gt; | Callback function. |
Returns:
Type | Description |
---|---|
Promise.<RunResponse> | Promise.<TransactionRequestReadResponse> |
- See:
  - Transaction#read when using ReadPartition.
  - Transaction#run when using QueryPartition.
Example
```
// Imports the Google Cloud client library
const {Spanner} = require('@google-cloud/spanner');
/**
* TODO(developer): Uncomment the following lines before running the sample.
*/
// const projectId = 'my-project-id';
// const instanceId = 'my-instance';
// const databaseId = 'my-database';
// const identifier = {};
// const partition = {};
// Creates a client
const spanner = new Spanner({
projectId: projectId,
});
// Gets a reference to a Cloud Spanner instance and database
const instance = spanner.instance(instanceId);
const database = instance.database(databaseId);
const transaction = database.batchTransaction(identifier);
const [rows] = await transaction.execute(partition);
console.log(`Successfully received ${rows.length} from executed partition.`);
```
executeStream(partition) → {ReadableStream}
Executes the partition in streaming mode.
Parameters:
Name | Type | Description |
---|---|---|
partition | ReadPartition \| QueryPartition | The partition object. |
Returns:
Type | Description |
---|---|
ReadableStream | A readable stream that emits rows. |
- See:
  - Transaction#createReadStream when using ReadPartition.
  - Transaction#runStream when using QueryPartition.
Example
```
const {Spanner} = require('@google-cloud/spanner');
const spanner = new Spanner();
const instance = spanner.instance('my-instance');
const database = instance.database('my-database');
database.createBatchTransaction(function(err, transaction) {
if (err) {
// Error handling omitted.
}
transaction.createReadPartitions(options, function(err, partitions) {
const partition = partitions[0];
transaction
.executeStream(partition)
.on('error', function(err) {})
.on('data', function(row) {
// row = [
// {
// name: 'SingerId',
// value: '1'
// },
// {
// name: 'Name',
// value: 'Eddie Wilson'
// }
// ]
})
.on('end', function() {
// All results retrieved
});
});
});
```
identifier() → {TransactionIdentifier}
Creates a transaction identifier used to reference the transaction in workers.
Returns:
Type | Description |
---|---|
TransactionIdentifier |
Example
```
const {Spanner} = require('@google-cloud/spanner');
const spanner = new Spanner();
const instance = spanner.instance('my-instance');
const database = instance.database('my-database');
database.createBatchTransaction(function(err, transaction) {
const identifier = transaction.identifier();
});
```
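Because the identifier is a plain, serializable object, a separate worker process can rebuild the same transaction from it with Database#batchTransaction (as in the `execute` example above). A minimal sketch; how `identifier` and `partition` reach the worker is assumed and not shown:
```
// Worker process.
const {Spanner} = require('@google-cloud/spanner');
const spanner = new Spanner();
const instance = spanner.instance('my-instance');
const database = instance.database('my-database');
function processPartition(identifier, partition) {
  // Re-create the batch transaction from the serialized identifier.
  const transaction = database.batchTransaction(identifier);
  return transaction.execute(partition).then(function(results) {
    const rows = results[0];
    console.log(`Worker processed ${rows.length} rows.`);
  });
}
```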
runStream(query) → {ReadableStream}
Create a readable object stream to receive resulting rows from a SQL statement.
Wrapper around v1.SpannerClient#executeStreamingSql.
Parameters:
Name | Type | Description |
---|---|---|
query | string \| ExecuteSqlRequest | A SQL query or ExecuteSqlRequest object. |
Returns:
Type | Description |
---|---|
ReadableStream | A readable stream that emits rows. |
- Inherited From: Snapshot
Fires:
- PartialResultStream#event:response
- PartialResultStream#event:stats
Examples
```
const query = 'SELECT * FROM Singers';
transaction.runStream(query)
.on('error', function(err) {})
.on('data', function(row) {
// row = {
// SingerId: '1',
// Name: 'Eddie Wilson'
// }
})
.on('end', function() {
// All results retrieved.
});
```
The SQL query string can contain parameter placeholders.
A parameter placeholder consists of '@' followed by the parameter name.
```
const query = {
sql: 'SELECT * FROM Singers WHERE name = @name',
params: {
name: 'Eddie Wilson'
}
};
transaction.runStream(query)
.on('error', function(err) {})
.on('data', function(row) {})
.on('end', function() {});
```
If you anticipate many results, you can end a stream
early to prevent unnecessary processing and API requests.
```
transaction.runStream(query)
.on('data', function(row) {
this.end();
});
```