Transaction

This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry.

Calling either Transaction#commit or Transaction#rollback signals that the transaction is finished and no further requests will be made. If for some reason you decide not to call one of the aforementioned methods, call Transaction#end to release the underlying Session.

Running a transaction via Database#runTransaction or Database#runTransactionAsync automatically re-runs the transaction on ABORTED errors.
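
For example, the async variant re-runs the provided callback whenever it fails with an ABORTED error. A minimal sketch:
```
database
  .runTransactionAsync(async transaction => {
    const [rows] = await transaction.run('SELECT * FROM Singers');
    await transaction.commit();
    return rows.length;
  })
  .then(rowCount => {
    // The transaction was retried as needed and committed.
  });
```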

Database#getTransaction returns a plain Transaction object, requiring the user to retry manually.

Constructor

new Transaction(session)

Parameters:
Name Type Description
session Session

The parent Session object.

Examples
```
const {Spanner} = require('@google-cloud/spanner');
const spanner = new Spanner();

const instance = spanner.instance('my-instance');
const database = instance.database('my-database');

database.runTransaction(function(err, transaction) {
  // The `transaction` object is ready for use.
});

```
To manually control retrying the transaction, use the
`getTransaction` method.
```
database.getTransaction(function(err, transaction) {
  // The `transaction` object is ready for use.
});
```

Members

commitTimestamp

Timestamp at which the transaction was committed. Populated once Transaction#commit has completed successfully.

commitTimestampProto

The protobuf version of Transaction#commitTimestamp. This is useful if you require microsecond precision.
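
For instance, a minimal sketch of reading both properties after a successful commit:
```
database.runTransaction((err, transaction) => {
  if (err) {
    // Error handling omitted.
  }

  transaction.insert('Singers', {SingerId: 'Id3', Name: 'Eddie Wilson'});

  transaction.commit(err => {
    if (!err) {
      // Populated now that the commit has succeeded.
      console.log(transaction.commitTimestamp);
      // The protobuf variant retains microsecond precision.
      console.log(transaction.commitTimestampProto);
    }
  });
});
```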

ended

Whether or not the transaction has ended. If true, make no further requests, and discard the transaction.

id

The transaction ID.

metadata

The raw transaction response object. It is populated after Snapshot#begin is called.

readTimestamp

Snapshot only: The timestamp at which all reads are performed.

readTimestampProto

Snapshot only: The protobuf version of Snapshot#readTimestamp. This is useful if you require microsecond precision.
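
A minimal sketch, assuming a snapshot requested with the `returnReadTimestamp` option:
```
database.getSnapshot({returnReadTimestamp: true}, (err, snapshot) => {
  if (err) {
    // Error handling omitted.
  }

  snapshot.run('SELECT * FROM Singers', (err, rows) => {
    // The timestamp at which all reads in this snapshot are performed.
    console.log(snapshot.readTimestamp);

    snapshot.end();
  });
});
```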

Methods

runUpdate(query)

Execute a DML statement and get the affected row count.

See:
  • Transaction#run
Example
```
const query = 'UPDATE Account SET Balance = 1000 WHERE Key = 1';

transaction.runUpdate(query, (err, rowCount) => {
  if (err) {
    // Error handling omitted.
  }
});
```

createReadStream(table, query) → {ReadableStream}

Create a readable object stream to receive rows from the database using key lookups and scans.

Wrapper around v1.SpannerClient#streamingRead.

Parameters:
Name Type Description
table string

The table to read from.

query ReadRequest

Configuration object. See the official ReadRequest API documentation.

Returns:
Type Description
ReadableStream

A readable stream that emits rows.

Fires:
  • PartialResultStream#event:response
  • PartialResultStream#event:stats
Examples
```
transaction.createReadStream('Singers', {
    keys: ['1'],
    columns: ['SingerId', 'Name']
  })
  .on('error', function(err) {})
  .on('data', function(row) {
    // row = [
    //   {
    //     name: 'SingerId',
    //     value: '1'
    //   },
    //   {
    //     name: 'Name',
    //     value: 'Eddie Wilson'
    //   }
    // ]
  })
  .on('end', function() {
    // All results retrieved.
  });

```
Provide an array for `query.keys` to read with a
composite key.
```
const query = {
  keys: [
    [
      'Id1',
      'Name1'
    ],
    [
      'Id2',
      'Name2'
    ]
  ],
  // ...
};
```
Each row is returned as an array of objects. Each
object has a `name` and `value` property. To get a serialized object, call
`toJSON()`.
```
transaction.createReadStream('Singers', {
    keys: ['1'],
    columns: ['SingerId', 'Name']
  })
  .on('error', function(err) {})
  .on('data', function(row) {
    // row.toJSON() = {
    //   SingerId: '1',
    //   Name: 'Eddie Wilson'
    // }
  })
  .on('end', function() {
    // All results retrieved.
  });
```
Alternatively, set `query.json` to `true`, and this step
will be performed automatically.
```
transaction.createReadStream('Singers', {
    keys: ['1'],
    columns: ['SingerId', 'Name'],
    json: true,
  })
  .on('error', function(err) {})
  .on('data', function(row) {
    // row = {
    //   SingerId: '1',
    //   Name: 'Eddie Wilson'
    // }
  })
  .on('end', function() {
    // All results retrieved.
  });
```
If you anticipate many results, you can end a stream
early to prevent unnecessary processing and API requests.
```
transaction.createReadStream('Singers', {
    keys: ['1'],
    columns: ['SingerId', 'Name']
  })
  .on('data', function(row) {
    this.end();
  });
```

deleteRows(table, keys)

Delete rows from a table.

Parameters:
Name Type Description
table string

The name of the table.

keys array

The keys for the rows to delete. If using a composite key, provide an array within this array. See the example below.

Examples
```
const keys = ['Id1', 'Id2', 'Id3'];

database.runTransaction(function(err, transaction) {
  if (err) {
    // Error handling omitted.
  }

  // Queue this mutation until later calling `commit`.
  // Note that a callback is not passed to `deleteRows`.
  transaction.deleteRows('Singers', keys);

  // Commit the transaction.
  transaction.commit(function(err) {
    if (!err) {
      // The rows were deleted successfully.
    }
  });
});

```
Provide an array for `keys` to delete rows with a
composite key.
```
const keys = [
  [
    'Id1',
    'Name1'
  ],
  [
    'Id2',
    'Name2'
  ]
];
```

end()

Let the client know you're done with a particular transaction. This should mainly be called for Snapshot objects; however, in certain cases you may want to call it for Transaction objects as well.

Examples
Calling `end` on a read only snapshot
```
database.getSnapshot((err, transaction) => {
  if (err) {
    // Error handling omitted.
  }

  transaction.run('SELECT * FROM Singers', (err, rows) => {
    if (err) {
      // Error handling omitted.
    }

    // End the snapshot.
    transaction.end();
  });
});
```
Calling `end` on a read/write transaction
```
database.runTransaction((err, transaction) => {
  if (err) {
    // Error handling omitted.
  }

  const query = 'UPDATE Account SET Balance = 1000 WHERE Key = 1';

  transaction.runUpdate(query, err => {
    if (err) {
      // In the event of an error, there would be nothing to rollback, so
      // instead of continuing, discard the transaction.
      transaction.end();
      return;
    }

    transaction.commit(err => {});
  });
});
```

excludeTxnFromChangeStreams()

Use the excludeTxnFromChangeStreams option to exclude a read/write transaction from being tracked in change streams.

Enabling this option effectively disables change stream tracking for the transaction, allowing read/write transactions to operate without being included in change streams.
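
A minimal sketch, assuming the option is set before the transaction's first statement is executed:
```
database.runTransaction((err, transaction) => {
  if (err) {
    // Error handling omitted.
  }

  // Exclude this transaction's writes from change streams.
  transaction.excludeTxnFromChangeStreams();

  transaction.insert('Singers', {SingerId: 'Id3', Name: 'Eddie Wilson'});

  transaction.commit(err => {
    if (!err) {
      // The write is committed but not recorded in change streams.
    }
  });
});
```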

insert(table, rows)

Insert rows of data into this table.

Parameters:
Name Type Description
table string

The name of the table.

rows object | Array.<object>

A map of names to values of data to insert into this table.

Examples
```
const row = {
  SingerId: 'Id3',
  Name: 'Eddie Wilson'
};

database.runTransaction(function(err, transaction) {
  if (err) {
    // Error handling omitted.
  }

  // Queue this mutation until later calling `commit`.
  // Note that a callback is not passed to `insert`.
  transaction.insert('Singers', row);

  // Commit the transaction.
  transaction.commit(function(err) {
    if (!err) {
      // The row was inserted successfully.
    }
  });
});

```
Multiple rows can be inserted at once.
```
const row2 = {
  SingerId: 'Id3b',
  Name: 'Joe West'
};

database.runTransaction(function(err, transaction) {
  if (err) {
    // Error handling omitted.
  }

  // Queue multiple mutations until later calling `commit`.
  // Note that a callback is not passed to `insert`.
  transaction.insert('Singers', [
    row,
    row2
  ]);

  // Commit the transaction.
  transaction.commit(function(err) {
    if (!err) {
      // The rows were inserted successfully.
    }
  });
});
```

replace(table, rows)

Replace rows of data within a table.

Parameters:
Name Type Description
table string

The name of the table.

rows object | Array.<object>

A map of names to values of the rows to replace in this table.

Example
```
const row = {
  SingerId: 'Id3',
  Name: 'Joe West'
};

database.runTransaction(function(err, transaction) {
  if (err) {
    // Error handling omitted.
  }

  // Queue this mutation until later calling `commit`.
  // Note that a callback is not passed to `replace`.
  transaction.replace('Singers', row);

  // Commit the transaction.
  transaction.commit(function(err) {
    if (!err) {
      // The row was replaced successfully.
    }
  });
});
```

runStream(query) → {ReadableStream}

Create a readable object stream to receive resulting rows from a SQL statement.

Wrapper around v1.SpannerClient#executeStreamingSql.

Parameters:
Name Type Description
query string | ExecuteSqlRequest

A SQL query or ExecuteSqlRequest object.

Returns:
Type Description
ReadableStream

A readable stream that emits rows.

Fires:
  • PartialResultStream#event:response
  • PartialResultStream#event:stats
Examples
```
const query = 'SELECT * FROM Singers';

transaction.runStream(query)
  .on('error', function(err) {})
  .on('data', function(row) {
    // row = {
    //   SingerId: '1',
    //   Name: 'Eddie Wilson'
    // }
  })
  .on('end', function() {
    // All results retrieved.
  });

```
The SQL query string can contain parameter placeholders.
A parameter placeholder consists of '@' followed by the parameter name.
```
const query = {
  sql: 'SELECT * FROM Singers WHERE name = @name',
  params: {
    name: 'Eddie Wilson'
  }
};

transaction.runStream(query)
  .on('error', function(err) {})
  .on('data', function(row) {})
  .on('end', function() {});
```
If you anticipate many results, you can end a stream
early to prevent unnecessary processing and API requests.
```
transaction.runStream(query)
  .on('data', function(row) {
    this.end();
  });
```

setQueuedMutations(mutation)

This method updates the `_queuedMutations` property of the transaction.

Parameters:
Name Type Attributes Description
mutation Array.<spannerClient.spanner.v1.Mutation> <optional>

The mutations to queue.
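
This is primarily used by the library itself, for example when re-queueing mutations across a retry. A hypothetical sketch:
```
// Hypothetical illustration: overwrite the queued mutations wholesale,
// e.g. clearing them before re-applying a mutation set.
transaction.setQueuedMutations([]);
```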

update(table, rows)

Update rows of data within a table.

Parameters:
Name Type Description
table string

The name of the table.

rows object | Array.<object>

A map of names to values of the rows to update in this table.

Example
```
const row = {
  SingerId: 'Id3',
  Name: 'Joe West'
};

database.runTransaction(function(err, transaction) {
  if (err) {
    // Error handling omitted.
  }

  // Queue this mutation until later calling `commit`.
  // Note that a callback is not passed to `update`.
  transaction.update('Singers', row);

  // Commit the transaction.
  transaction.commit(function(err) {
    if (!err) {
      // The row was updated successfully.
    }
  });
});
```

upsert(table, rows)

Insert or update rows of data within a table.

Parameters:
Name Type Description
table string

The name of the table.

rows object | Array.<object>

A map of names to values of the rows to insert or update in this table.

Example
```
const row = {
  SingerId: 'Id3',
  Name: 'Joe West'
};

database.runTransaction(function(err, transaction) {
  if (err) {
    // Error handling omitted.
  }

  // Queue this mutation until later calling `commit`.
  // Note that a callback is not passed to `upsert`.
  transaction.upsert('Singers', row);

  // Commit the transaction.
  transaction.commit(function(err) {
    if (!err) {
      // The row was updated or inserted successfully.
    }
  });
});
```

useInRunner()

Mark transaction as started from the runner.

useOptimisticLock()

Use optimistic concurrency control for the transaction.

In this concurrency mode, operations during the execution phase (reads and queries) are performed without acquiring locks. Transactional consistency is ensured by a validation process at commit time, when any needed locks are acquired. Validation succeeds only if no other transaction has committed mutations to the data that was read at a commit timestamp later than the read timestamp.
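
A minimal sketch, assuming the lock mode must be chosen before the transaction begins:
```
database.getTransaction((err, transaction) => {
  if (err) {
    // Error handling omitted.
  }

  // Opt in to optimistic concurrency before running any statements.
  transaction.useOptimisticLock();

  transaction.run('SELECT * FROM Singers', (err, rows) => {
    if (err) {
      // Error handling omitted.
    }

    // Locks are acquired and the read set validated at commit time.
    transaction.commit(err => {});
  });
});
```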