As of January 1, 2020 this library no longer supports Python 2 on the latest released version. Library versions released prior to that date will continue to be available. For more information please visit Python 2 support on Google Cloud.

Table Async

Note

It is generally not recommended to use the async client in an otherwise synchronous codebase. To make use of asyncio’s performance benefits, the codebase should be designed to be async from the ground up.

class google.cloud.bigtable.data._async.client.TableAsync(client: BigtableDataClientAsync, instance_id: str, table_id: str, app_profile_id: str | None = None, *, default_read_rows_operation_timeout: float = 600, default_read_rows_attempt_timeout: float | None = 20, default_mutate_rows_operation_timeout: float = 600, default_mutate_rows_attempt_timeout: float | None = 60, default_operation_timeout: float = 60, default_attempt_timeout: float | None = 20, default_read_rows_retryable_errors: Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>, <class 'google.api_core.exceptions.Aborted'>), default_mutate_rows_retryable_errors: Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>), default_retryable_errors: Sequence[type[Exception]] = (<class 'google.api_core.exceptions.DeadlineExceeded'>, <class 'google.api_core.exceptions.ServiceUnavailable'>))[source]

Bases: object

Main Data API surface

The Table object maintains table_id and app_profile_id context, and passes them with each call

Initialize a Table instance

Must be created within an async context (running event loop)

Parameters
  • instance_id – The Bigtable instance ID to associate with this client. instance_id is combined with the client’s project to fully specify the instance

  • table_id – The ID of the table. table_id is combined with the instance_id and the client’s project to fully specify the table

  • app_profile_id – The app profile to associate with requests. https://cloud.google.com/bigtable/docs/app-profiles

  • default_read_rows_operation_timeout – The default timeout for read rows operations, in seconds. If not set, defaults to 600 seconds (10 minutes)

  • default_read_rows_attempt_timeout – The default timeout for individual read rows rpc requests, in seconds. If not set, defaults to 20 seconds

  • default_mutate_rows_operation_timeout – The default timeout for mutate rows operations, in seconds. If not set, defaults to 600 seconds (10 minutes)

  • default_mutate_rows_attempt_timeout – The default timeout for individual mutate rows rpc requests, in seconds. If not set, defaults to 60 seconds

  • default_operation_timeout – The default timeout for all other operations, in seconds. If not set, defaults to 60 seconds

  • default_attempt_timeout – The default timeout for all other individual rpc requests, in seconds. If not set, defaults to 20 seconds

  • default_read_rows_retryable_errors – a list of errors that will be retried if encountered during read_rows and related operations. Defaults to 4 (DeadlineExceeded), 14 (ServiceUnavailable), and 10 (Aborted)

  • default_mutate_rows_retryable_errors – a list of errors that will be retried if encountered during mutate_rows and related operations. Defaults to 4 (DeadlineExceeded) and 14 (ServiceUnavailable)

  • default_retryable_errors – a list of errors that will be retried if encountered during all other operations. Defaults to 4 (DeadlineExceeded) and 14 (ServiceUnavailable)

Raises

RuntimeError – if called outside of an async context (no running event loop)
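
As a minimal sketch, a TableAsync is typically obtained from a BigtableDataClientAsync inside a running event loop; the project, instance, and table IDs below are placeholders:

import asyncio

from google.cloud.bigtable.data import BigtableDataClientAsync

async def main():
    # client and table must be created inside an async context (running event loop)
    client = BigtableDataClientAsync(project="my-project")
    table = client.get_table("my-instance", "my-table")
    try:
        print(await table.read_row(b"some-row-key"))
    finally:
        await table.close()
        await client.close()

asyncio.run(main())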

async __aenter__()[source]

Implement async context manager protocol

Ensure registration task has time to run, so that grpc channels will be warmed for the specified instance

async __aexit__(exc_type, exc_val, exc_tb)[source]

Implement async context manager protocol

Unregister this instance with the client, so that grpc channels will no longer be warmed
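
For instance, inside a coroutine the same setup can be written with context managers so that channel warming and cleanup happen automatically; all names are placeholders:

from google.cloud.bigtable.data import BigtableDataClientAsync

async with BigtableDataClientAsync(project="my-project") as client:
    async with client.get_table("my-instance", "my-table") as table:
        row = await table.read_row(b"some-row-key")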

async bulk_mutate_rows(mutation_entries: list[RowMutationEntry], *, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.MUTATE_ROWS, attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.MUTATE_ROWS, retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.MUTATE_ROWS)[source]

Applies mutations for multiple rows in a single batched request.

Each individual RowMutationEntry is applied atomically, but separate entries may be applied in arbitrary order (even for entries targeting the same row). In total, the request can contain at most 100000 individual mutations across all entries

Idempotent entries (i.e., entries with mutations that have explicit timestamps) will be retried on failure. Non-idempotent entries will not be retried, and will be reported in a raised exception group

Parameters
  • mutation_entries – the batches of mutations to apply. Each entry will be applied atomically, but entries will be applied in arbitrary order

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will be retried within the budget. Defaults to the Table’s default_mutate_rows_operation_timeout

  • attempt_timeout – the time budget for an individual network request, in seconds. If it takes longer than this time to complete, the request will be cancelled with a DeadlineExceeded exception, and a retry will be attempted. Defaults to the Table’s default_mutate_rows_attempt_timeout. If None, defaults to operation_timeout.

  • retryable_errors – a list of errors that will be retried if encountered. Defaults to the Table’s default_mutate_rows_retryable_errors

Raises
  • MutationsExceptionGroup – if one or more mutations fail. Contains details about any failed entries in .exceptions

  • ValueError – if invalid arguments are provided
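
As a sketch (assuming an existing TableAsync named table inside a coroutine; the row keys, family, and qualifier are placeholders, and the exception import path follows google.cloud.bigtable.data.exceptions):

from google.cloud.bigtable.data import RowMutationEntry, SetCell
from google.cloud.bigtable.data.exceptions import MutationsExceptionGroup

# one atomically-applied entry per row
entries = [
    RowMutationEntry(f"user#{i}".encode(), [SetCell("stats", b"clicks", i)])
    for i in range(3)
]
try:
    await table.bulk_mutate_rows(entries)
except MutationsExceptionGroup as group:
    # failed entries are reported in .exceptions
    for exc in group.exceptions:
        print(exc)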

async check_and_mutate_row(row_key: str | bytes, predicate: RowFilter | None, *, true_case_mutations: Mutation | list[Mutation] | None = None, false_case_mutations: Mutation | list[Mutation] | None = None, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT) bool[source]

Mutates a row atomically based on the output of a predicate filter

Non-idempotent operation: will not be retried

Parameters
  • row_key – the key of the row to mutate

  • predicate – the filter to be applied to the contents of the specified row. Depending on whether or not any results are yielded, either true_case_mutations or false_case_mutations will be executed. If None, checks that the row contains any values at all.

  • true_case_mutations – Changes to be atomically applied to the specified row if predicate yields at least one cell when applied to row_key. Entries are applied in order, meaning that earlier mutations can be masked by later ones. Must contain at least one entry if false_case_mutations is empty, and at most 100000.

  • false_case_mutations – Changes to be atomically applied to the specified row if predicate_filter does not yield any cells when applied to row_key. Entries are applied in order, meaning that earlier mutations can be masked by later ones. Must contain at least one entry if true_case_mutations is empty, and at most 100000.

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will not be retried. Defaults to the Table’s default_operation_timeout

Returns

bool indicating whether the predicate was true or false

Raises

google.api_core.exceptions.GoogleAPIError – exceptions from grpc call
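
For illustration, a sketch that writes one of two cells depending on whether the row has any cell in a given column; the row filter import path and all names here are assumptions, and table is an existing TableAsync:

from google.cloud.bigtable.data import SetCell
from google.cloud.bigtable.data.row_filters import ColumnQualifierRegexFilter

# predicate matches if the row has any cell in the "status" column
predicate = ColumnQualifierRegexFilter(b"status")
was_true = await table.check_and_mutate_row(
    b"user#1",
    predicate,
    true_case_mutations=SetCell("stats", b"status", b"returning"),
    false_case_mutations=SetCell("stats", b"status", b"first-visit"),
)
print("predicate matched:", was_true)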

async close()[source]

Called to close the Table instance and release any resources held by it.

async mutate_row(row_key: str | bytes, mutations: list[Mutation] | Mutation, *, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT, attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT, retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT)[source]

Mutates a row atomically.

Cells already present in the row are left unchanged unless explicitly changed by mutation.

Idempotent operations (i.e., all mutations have an explicit timestamp) will be retried on server failure. Non-idempotent operations will not.

Parameters
  • row_key – the row to apply mutations to

  • mutations – the set of mutations to apply to the row

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will be retried within the budget. Defaults to the Table’s default_operation_timeout

  • attempt_timeout – the time budget for an individual network request, in seconds. If it takes longer than this time to complete, the request will be cancelled with a DeadlineExceeded exception, and a retry will be attempted. Defaults to the Table’s default_attempt_timeout. If None, defaults to operation_timeout.

  • retryable_errors – a list of errors that will be retried if encountered. Only idempotent mutations will be retried. Defaults to the Table’s default_retryable_errors.

Raises
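
As a brief sketch (assuming an existing TableAsync named table inside a coroutine; family and qualifier names are placeholders):

from google.cloud.bigtable.data import SetCell

# apply a single cell write atomically
await table.mutate_row(b"user#1", SetCell("stats", b"clicks", 42))

# or apply several mutations to the same row in one atomic call
await table.mutate_row(
    b"user#1",
    [SetCell("stats", b"clicks", 43), SetCell("stats", b"last_seen", b"2024-01-01")],
)
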
mutations_batcher(*, flush_interval: float | None = 5, flush_limit_mutation_count: int | None = 1000, flush_limit_bytes: int = 20971520, flow_control_max_mutation_count: int = 100000, flow_control_max_bytes: int = 104857600, batch_operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.MUTATE_ROWS, batch_attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.MUTATE_ROWS, batch_retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.MUTATE_ROWS) MutationsBatcherAsync[source]

Returns a new mutations batcher instance.

Can be used to iteratively add mutations that are flushed as a group, to avoid excess network calls

Parameters
  • flush_interval – Automatically flush every flush_interval seconds. If None, a table default will be used

  • flush_limit_mutation_count – Flush immediately after flush_limit_mutation_count mutations are added across all entries. If None, this limit is ignored.

  • flush_limit_bytes – Flush immediately after flush_limit_bytes bytes are added.

  • flow_control_max_mutation_count – Maximum number of inflight mutations.

  • flow_control_max_bytes – Maximum number of inflight bytes.

  • batch_operation_timeout – timeout for each mutate_rows operation, in seconds. Defaults to the Table’s default_mutate_rows_operation_timeout

  • batch_attempt_timeout – timeout for each individual request, in seconds. Defaults to the Table’s default_mutate_rows_attempt_timeout. If None, defaults to batch_operation_timeout.

  • batch_retryable_errors – a list of errors that will be retried if encountered. Defaults to the Table’s default_mutate_rows_retryable_errors.

Returns

a MutationsBatcherAsync context manager that can batch requests

Return type

MutationsBatcherAsync
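
For example, a sketch of batching writes through the returned context manager (assumes an existing TableAsync named table inside a coroutine; keys and families are placeholders):

from google.cloud.bigtable.data import RowMutationEntry, SetCell

async with table.mutations_batcher(flush_interval=1) as batcher:
    for i in range(1000):
        entry = RowMutationEntry(f"row#{i}".encode(), [SetCell("stats", b"clicks", i)])
        # entries are buffered and flushed in the background based on the flush_* limits
        await batcher.append(entry)
# exiting the context flushes any remaining entries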

async read_modify_write_row(row_key: str | bytes, rules: ReadModifyWriteRule | list[ReadModifyWriteRule], *, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT) Row[source]

Reads and modifies a row atomically according to input ReadModifyWriteRules, and returns the contents of all modified cells

The new value for the timestamp is the greater of the existing timestamp or the current server time.

Non-idempotent operation: will not be retried

Parameters
  • row_key – the key of the row to apply read/modify/write rules to

  • rules – A rule or set of rules to apply to the row. Rules are applied in order, meaning that earlier rules will affect the results of later ones.

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will not be retried. Defaults to the Table’s default_operation_timeout.

Returns

a Row containing cell data that was modified as part of the operation

Return type

Row

Raises
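
As an illustrative sketch; the rule classes are assumed to come from google.cloud.bigtable.data.read_modify_write_rules, and table is an existing TableAsync:

from google.cloud.bigtable.data.read_modify_write_rules import AppendValueRule, IncrementRule

row = await table.read_modify_write_row(
    b"user#1",
    [
        IncrementRule("stats", b"clicks", 1),  # add 1 to the integer stored in stats:clicks
        AppendValueRule("stats", b"history", b";login"),  # append bytes to stats:history
    ],
)
# the returned Row contains only the cells that were modified by the rules
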
async read_row(row_key: str | bytes, *, row_filter: RowFilter | None = None, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS) Row | None[source]

Read a single row from the table, based on the specified key.

Failed requests within operation_timeout will be retried based on the retryable_errors list until operation_timeout is reached.

Parameters
  • row_key – the key of the row to read

  • row_filter – an optional filter to apply to the contents of the row before returning it

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will be retried within the budget. Defaults to the Table’s default_read_rows_operation_timeout

  • attempt_timeout – the time budget for an individual network request, in seconds. If it takes longer than this time to complete, the request will be cancelled with a DeadlineExceeded exception, and a retry will be attempted. Defaults to the Table’s default_read_rows_attempt_timeout. If None, defaults to operation_timeout.

  • retryable_errors – a list of errors that will be retried if encountered. Defaults to the Table’s default_read_rows_retryable_errors.

Returns

a Row object if the row exists, otherwise None

Return type

Row | None

Raises
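
For instance (assuming an existing TableAsync named table inside a coroutine):

row = await table.read_row(b"user#1")
if row is None:
    print("row not found")
else:
    for cell in row:
        print(cell.family, cell.qualifier, cell.value)
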
async read_rows(query: ReadRowsQuery, *, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS) list[Row][source]

Read a set of rows from the table, based on the specified query. Returns results as a list of Row objects when the request is complete. For streamed results, use read_rows_stream.

Failed requests within operation_timeout will be retried based on the retryable_errors list until operation_timeout is reached.

Parameters
  • query – contains details about which rows to return

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will be retried within the budget. Defaults to the Table’s default_read_rows_operation_timeout

  • attempt_timeout – the time budget for an individual network request, in seconds. If it takes longer than this time to complete, the request will be cancelled with a DeadlineExceeded exception, and a retry will be attempted. Defaults to the Table’s default_read_rows_attempt_timeout. If None, defaults to operation_timeout.

  • retryable_errors – a list of errors that will be retried if encountered. Defaults to the Table’s default_read_rows_retryable_errors.

Returns

a list of Rows returned by the query

Return type

list[Row]

Raises
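
For example, a sketch that reads a bounded key range (assuming an existing TableAsync named table inside a coroutine; keys and limits are placeholders):

from google.cloud.bigtable.data import ReadRowsQuery, RowRange

query = ReadRowsQuery(row_ranges=[RowRange(start_key=b"user#000", end_key=b"user#999")], limit=100)
rows = await table.read_rows(query)
for row in rows:
    print(row.row_key)
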
async read_rows_sharded(sharded_query: ShardedQuery, *, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS) list[Row][source]

Runs a sharded query in parallel, then returns the results in a single list. Results will be returned in the order of the input queries.

This function is intended to be run on the results of a query.shard() call. For example:

from google.cloud.bigtable.data import ReadRowsQuery

# split the query into per-node shards using the table's sampled row keys
table_shard_keys = await table.sample_row_keys()
query = ReadRowsQuery(...)
shard_queries = query.shard(table_shard_keys)
results = await table.read_rows_sharded(shard_queries)
Parameters
  • sharded_query – a sharded query to execute

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will be retried within the budget. Defaults to the Table’s default_read_rows_operation_timeout

  • attempt_timeout – the time budget for an individual network request, in seconds. If it takes longer than this time to complete, the request will be cancelled with a DeadlineExceeded exception, and a retry will be attempted. Defaults to the Table’s default_read_rows_attempt_timeout. If None, defaults to operation_timeout.

  • retryable_errors – a list of errors that will be retried if encountered. Defaults to the Table’s default_read_rows_retryable_errors.

Returns

a list of Rows returned by the query

Return type

list[Row]

Raises
async read_rows_stream(query: ReadRowsQuery, *, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS) AsyncIterable[Row][source]

Read a set of rows from the table, based on the specified query. Returns an iterator to asynchronously stream back row data.

Failed requests within operation_timeout will be retried based on the retryable_errors list until operation_timeout is reached.

Parameters
  • query – contains details about which rows to return

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will be retried within the budget. Defaults to the Table’s default_read_rows_operation_timeout

  • attempt_timeout – the time budget for an individual network request, in seconds. If it takes longer than this time to complete, the request will be cancelled with a DeadlineExceeded exception, and a retry will be attempted. Defaults to the Table’s default_read_rows_attempt_timeout. If None, defaults to operation_timeout.

  • retryable_errors – a list of errors that will be retried if encountered. Defaults to the Table’s default_read_rows_retryable_errors

Returns

an asynchronous iterator that yields rows returned by the query

Return type

AsyncIterable[Row]

Raises
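
For example (a sketch assuming an existing TableAsync named table inside a coroutine):

from google.cloud.bigtable.data import ReadRowsQuery

stream = await table.read_rows_stream(ReadRowsQuery(limit=10))
async for row in stream:
    # rows are yielded as they arrive, without buffering the full result set
    print(row.row_key)
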
async row_exists(row_key: str | bytes, *, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS, retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS) bool[source]

Return a boolean indicating whether the specified row exists in the table. Uses the filters chain(limit cells per row = 1, strip value)

Parameters
  • row_key – the key of the row to check

  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will be retried within the budget. Defaults to the Table’s default_read_rows_operation_timeout

  • attempt_timeout – the time budget for an individual network request, in seconds. If it takes longer than this time to complete, the request will be cancelled with a DeadlineExceeded exception, and a retry will be attempted. Defaults to the Table’s default_read_rows_attempt_timeout. If None, defaults to operation_timeout.

  • retryable_errors – a list of errors that will be retried if encountered. Defaults to the Table’s default_read_rows_retryable_errors.

Returns

a bool indicating whether the row exists

Return type

bool

Raises
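
For example (assuming an existing TableAsync named table):

if await table.row_exists(b"user#1"):
    print("row is present")
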
async sample_row_keys(*, operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT, attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT, retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT) RowKeySamples[source]

Return a set of RowKeySamples that delimit contiguous sections of the table of approximately equal size

RowKeySamples output can be used with ReadRowsQuery.shard() to create a sharded query that can be parallelized across multiple backend nodes. read_rows and read_rows_stream requests will call sample_row_keys internally for this purpose when sharding is enabled

RowKeySamples is simply a type alias for list[tuple[bytes, int]]; a list of row_keys, along with offset positions in the table

Parameters
  • operation_timeout – the time budget for the entire operation, in seconds. Failed requests will be retried within the budget. Defaults to the Table’s default_operation_timeout

  • attempt_timeout – the time budget for an individual network request, in seconds. If it takes longer than this time to complete, the request will be cancelled with a DeadlineExceeded exception, and a retry will be attempted. Defaults to the Table’s default_attempt_timeout. If None, defaults to operation_timeout.

  • retryable_errors – a list of errors that will be retried if encountered. Defaults to the Table’s default_retryable_errors.

Returns

a set of RowKeySamples that delimit contiguous sections of the table

Return type

RowKeySamples

Raises
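
For example, a short sketch that prints the sampled boundaries (assuming an existing TableAsync named table):

samples = await table.sample_row_keys()
for row_key, offset_bytes in samples:
    # each sample marks the approximate end of a roughly equal-sized section of the table
    print(row_key, offset_bytes)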