Database API¶
User-friendly container for Cloud Spanner Database.
- class google.cloud.spanner_v1.database.BatchCheckout(database, request_options=None, max_commit_delay=None, exclude_txn_from_change_streams=False)[source]¶
Bases:
object
Context manager for using a batch from a database.
Inside the context manager, checks out a session from the database, creates a batch from it, making the batch available.
Caller must not use the batch to perform API requests outside the scope of the context manager.
- Parameters
database (Database) – database to use
request_options (google.cloud.spanner_v1.types.RequestOptions) – (Optional) Common options for the commit request. If a dict is provided, it must be of the same form as the protobuf message RequestOptions.
max_commit_delay (datetime.timedelta) – (Optional) The amount of latency this request is willing to incur in order to improve throughput.
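A minimal usage sketch: a BatchCheckout is normally obtained from Database.batch() and used as a context manager; the instance, database, table, and column names below are hypothetical.

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # The batch is committed automatically when the context manager exits.
    with database.batch() as batch:
        batch.insert(
            table="Singers",                      # hypothetical table
            columns=("SingerId", "FirstName"),
            values=[(1, "Marc"), (2, "Catalina")],
        )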
- class google.cloud.spanner_v1.database.BatchSnapshot(database, read_timestamp=None, exact_staleness=None, session_id=None, transaction_id=None)[source]¶
Bases:
object
Wrapper for generating and processing read / query batches.
- Parameters
database (Database) – database to use
read_timestamp (datetime.datetime) – Execute all reads at the given timestamp.
exact_staleness (datetime.timedelta) – Execute all reads at a timestamp that is exact_staleness old.
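A sketch of the typical partitioned-query workflow, assuming a database handle already exists; the SQL statement is illustrative only.

    # BatchSnapshot is normally obtained via Database.batch_snapshot().
    snapshot = database.batch_snapshot()
    try:
        batches = snapshot.generate_query_batches(
            "SELECT SingerId, FirstName FROM Singers"  # hypothetical query
        )
        for batch in batches:
            for row in snapshot.process(batch):
                print(row)
    finally:
        snapshot.close()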
- close()[source]¶
Clean up underlying session.
Note
If the transaction has been shared across multiple machines, calling this on any machine would invalidate the transaction everywhere. Ideally this would be called when data has been read from all the partitions.
- execute_sql(*args, **kw)[source]¶
Convenience method: perform query operation via snapshot.
See execute_sql().
- classmethod from_dict(database, mapping)[source]¶
Reconstruct an instance from a mapping.
- Parameters
database (Database) – database to use
mapping (mapping) – serialized state of the instance
- Return type
- generate_query_batches(sql, params=None, param_types=None, partition_size_bytes=None, max_partitions=None, query_options=None, data_boost_enabled=False, directed_read_options=None, *, retry=_MethodDefault._DEFAULT_VALUE, timeout=_MethodDefault._DEFAULT_VALUE)[source]¶
Start a partitioned query operation.
Uses the PartitionQuery API request to start a partitioned query operation. Returns a list of batch information needed to perform the actual queries.
- Parameters
sql (str) – SQL query statement
params (dict, {str -> column value}) – values for parameter replacement. Keys must match the names used in sql.
param_types (dict[str -> Union[dict, .types.Type]]) – (Optional) maps explicit types for one or more param values; required if parameters are passed.
partition_size_bytes (int) – (Optional) desired size for each partition generated. The service uses this as a hint, the actual partition size may differ.
max_partitions (int) – (Optional) desired maximum number of partitions generated. The service uses this as a hint, the actual number of partitions may differ.
query_options (QueryOptions or dict) – (Optional) Query optimizer configuration to use for the given query. If a dict is provided, it must be of the same form as the protobuf message QueryOptions.
data_boost_enabled – (Optional) If this is for a partitioned query and this field is set true, the request will be executed via offline access.
directed_read_options (DirectedReadOptions or dict) – (Optional) Request level option used to set the directed_read_options for ExecuteSqlRequests that indicates which replicas or regions should be used for non-transactional queries.
retry (Retry) – (Optional) The retry settings for this request.
timeout (float) – (Optional) The timeout for this request.
- Return type
iterable of dict
- Returns
mappings of information used to perform actual partitioned queries via process_query_batch().
- generate_read_batches(table, columns, keyset, index='', partition_size_bytes=None, max_partitions=None, data_boost_enabled=False, directed_read_options=None, *, retry=_MethodDefault._DEFAULT_VALUE, timeout=_MethodDefault._DEFAULT_VALUE)[source]¶
Start a partitioned batch read operation.
Uses the PartitionRead API request to initiate the partitioned read. Returns a list of batch information needed to perform the actual reads.
- Parameters
table (str) – name of the table from which to fetch data
columns (list of str) – names of columns to be retrieved
keyset (KeySet) – keys / ranges identifying rows to be retrieved
index (str) – (Optional) name of index to use, rather than the table’s primary key
partition_size_bytes (int) – (Optional) desired size for each partition generated. The service uses this as a hint, the actual partition size may differ.
max_partitions (int) – (Optional) desired maximum number of partitions generated. The service uses this as a hint, the actual number of partitions may differ.
data_boost_enabled – (Optional) If this is for a partitioned read and this field is set true, the request will be executed via offline access.
directed_read_options (DirectedReadOptions or dict) – (Optional) Request level option used to set the directed_read_options for ReadRequests that indicates which replicas or regions should be used for non-transactional reads.
retry (Retry) – (Optional) The retry settings for this request.
timeout (float) – (Optional) The timeout for this request.
- Return type
iterable of dict
- Returns
mappings of information used to perform actual partitioned reads via process_read_batch().
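A sketch of the read-batch workflow; the table, columns, and keyset below are placeholders.

    from google.cloud import spanner

    snapshot = database.batch_snapshot()
    try:
        batches = snapshot.generate_read_batches(
            table="Singers",                       # hypothetical table
            columns=("SingerId", "FirstName"),
            keyset=spanner.KeySet(all_=True),      # read every row
        )
        for batch in batches:
            for row in snapshot.process_read_batch(batch):
                print(row)
    finally:
        snapshot.close()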
- process(batch)[source]¶
Process a single, partitioned query or read.
- Parameters
batch (mapping) – one of the mappings returned from an earlier call to generate_read_batches() or generate_query_batches().
- Return type
- Returns
a result set instance which can be used to consume rows.
- Raises
ValueError – if batch does not contain either ‘read’ or ‘query’
- process_query_batch(batch, *, retry=_MethodDefault._DEFAULT_VALUE, timeout=_MethodDefault._DEFAULT_VALUE)[source]¶
Process a single, partitioned query.
- Parameters
batch (mapping) – one of the mappings returned from an earlier call to generate_query_batches().
retry (Retry) – (Optional) The retry settings for this request.
timeout (float) – (Optional) The timeout for this request.
- Return type
- Returns
a result set instance which can be used to consume rows.
- process_read_batch(batch, *, retry=_MethodDefault._DEFAULT_VALUE, timeout=_MethodDefault._DEFAULT_VALUE)[source]¶
Process a single, partitioned read.
- Parameters
batch (mapping) – one of the mappings returned from an earlier call to generate_read_batches().
retry (Retry) – (Optional) The retry settings for this request.
timeout (float) – (Optional) The timeout for this request.
- Return type
- Returns
a result set instance which can be used to consume rows.
- run_partitioned_query(sql, params=None, param_types=None, partition_size_bytes=None, max_partitions=None, query_options=None, data_boost_enabled=False)[source]¶
Start a partitioned query operation to get a list of partitions, then execute each partition on a separate thread.
- Parameters
sql (str) – SQL query statement
params (dict, {str -> column value}) – values for parameter replacement. Keys must match the names used in sql.
param_types (dict[str -> Union[dict, .types.Type]]) – (Optional) maps explicit types for one or more param values; required if parameters are passed.
partition_size_bytes (int) – (Optional) desired size for each partition generated. The service uses this as a hint, the actual partition size may differ.
max_partitions (int) – (Optional) desired maximum number of partitions generated. The service uses this as a hint, the actual number of partitions may differ.
query_options (QueryOptions or dict) – (Optional) Query optimizer configuration to use for the given query. If a dict is provided, it must be of the same form as the protobuf message QueryOptions.
data_boost_enabled – (Optional) If this is for a partitioned query and this field is set true, the request will be executed using data boost. Please see https://cloud.google.com/spanner/docs/databoost/databoost-overview
- Return type
MergedResultSet
- Returns
a result set instance which can be used to consume rows.
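A brief sketch, assuming a database handle already exists; whether to enable Data Boost and the query itself are placeholders.

    snapshot = database.batch_snapshot()
    results = snapshot.run_partitioned_query(
        "SELECT SingerId, FirstName FROM Singers",  # hypothetical query
        data_boost_enabled=True,
    )
    # The merged result set yields rows from all partitions as they arrive.
    for row in results:
        print(row)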
- to_dict()[source]¶
Return state as a dictionary.
Result can be used to serialize the instance and reconstitute it later using from_dict().
- Return type
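A sketch of handing a batch snapshot to workers via serialization; how the state and batch mappings are transported between processes (shown here as plain variables) is up to the caller, and the query is illustrative.

    # On the coordinator: capture the snapshot state and the batches.
    snapshot = database.batch_snapshot()
    state = snapshot.to_dict()
    batches = list(snapshot.generate_query_batches("SELECT 1"))

    # On a worker: rebuild the snapshot and process one batch.
    from google.cloud.spanner_v1.database import BatchSnapshot

    worker_snapshot = BatchSnapshot.from_dict(database, state)
    for row in worker_snapshot.process(batches[0]):
        print(row)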
- class google.cloud.spanner_v1.database.Database(database_id, instance, ddl_statements=(), pool=None, logger=None, encryption_config=None, database_dialect=DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED, database_role=None, enable_drop_protection=False, proto_descriptors=None)[source]¶
Bases:
object
Representation of a Cloud Spanner Database.
We can use a Database to create, reload, update, or drop the database.
- Parameters
database_id (str) – The ID of the database.
instance (Instance) – The instance that owns the database.
ddl_statements (list of string) – (Optional) DDL statements, excluding the CREATE DATABASE statement.
pool (concrete subclass of AbstractSessionPool) – (Optional) session pool to be used by database. If not passed, the database will construct an instance of BurstyPool.
logger (logging.Logger) – (Optional) a custom logger that is used if log_commit_stats is True to log commit statistics. If not passed, a logger will be created when needed that will log the commit statistics to stdout.
encryption_config (EncryptionConfig or RestoreDatabaseEncryptionConfig or dict) – (Optional) Encryption configuration for the database. If a dict is provided, it must be of the same form as either of the protobuf messages EncryptionConfig or RestoreDatabaseEncryptionConfig.
database_dialect (DatabaseDialect) – (Optional) database dialect for the database
database_role (str or None) – (Optional) user-assigned database_role for the session.
enable_drop_protection (boolean) – (Optional) Represents whether the database has drop protection enabled or not.
proto_descriptors (bytes) – (Optional) Proto descriptors used by CREATE/ALTER PROTO BUNDLE statements in ‘ddl_statements’ above.
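In practice a Database is usually obtained from an Instance factory rather than constructed directly; a minimal sketch (instance and database IDs are placeholders):

    from google.cloud import spanner

    client = spanner.Client()
    instance = client.instance("my-instance")

    # Instance.database() returns a Database bound to that instance.
    database = instance.database("my-database")
    if not database.exists():
        database.create().result(120)  # wait up to 120s for creation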
- batch(request_options=None, max_commit_delay=None, exclude_txn_from_change_streams=False)[source]¶
Return an object which wraps a batch.
The wrapper must be used as a context manager, with the batch as the value returned by the wrapper.
- Parameters
request_options (google.cloud.spanner_v1.types.RequestOptions) – (Optional) Common options for the commit request. If a dict is provided, it must be of the same form as the protobuf message RequestOptions.
max_commit_delay (datetime.timedelta) – (Optional) The amount of latency this request is willing to incur in order to improve throughput. Value must be between 0ms and 500ms.
exclude_txn_from_change_streams (bool) – (Optional) If true, instructs the transaction to be excluded from being recorded in change streams with the DDL option allow_txn_exclusion=true. This does not exclude the transaction from being recorded in the change streams with the DDL option allow_txn_exclusion being false or unset.
- Return type
- Returns
new wrapper
- batch_snapshot(read_timestamp=None, exact_staleness=None, session_id=None, transaction_id=None)[source]¶
Return an object which wraps a batch read / query.
- Parameters
read_timestamp (datetime.datetime) – Execute all reads at the given timestamp.
exact_staleness (datetime.timedelta) – Execute all reads at a timestamp that is exact_staleness old.
session_id (str) – id of the session used in transaction
transaction_id (str) – id of the transaction
- Return type
- Returns
new wrapper
- create()[source]¶
Create this database within its instance.
Includes any configured schema assigned to ddl_statements.
- Return type
- Returns
a future used to poll the status of the create request
- Raises
Conflict – if the database already exists
NotFound – if the instance owning the database does not exist
- property create_time¶
Create time of this database.
- Return type
- Returns
a datetime object representing the create time of this database
- property database_dialect¶
The SQL dialect of the database (GoogleSQL or PostgreSQL).
- Return type
google.cloud.spanner_admin_database_v1.types.DatabaseDialect
- Returns
the dialect of the database
- property database_role¶
User-assigned database_role for sessions created by the pool.
- Return type
str
- Returns
a str with the name of the database role.
- property ddl_statements¶
DDL Statements used to define database schema.
See cloud.google.com/spanner/docs/data-definition-language
- Return type
sequence of string
- Returns
the statements
- property default_leader¶
The read-write region which contains the database’s leader replicas.
- Return type
- Returns
a string representing the read-write region
- property default_schema_name¶
Default schema name for this database.
- Return type
- Returns
“” for GoogleSQL and “public” for PostgreSQL
- property earliest_version_time¶
The earliest time at which older versions of the data can be read.
- Return type
- Returns
a datetime object representing the earliest version time
- property enable_drop_protection¶
Whether the database has drop protection enabled.
- Return type
boolean
- Returns
a boolean representing whether the database has drop protection enabled
- property encryption_config¶
Encryption config for this database.
- Return type
EncryptionConfig
- Returns
an object representing the encryption config for this database
- property encryption_info¶
Encryption info for this database.
- Return type
a list of EncryptionInfo
- Returns
a list of objects representing encryption info for this database
- execute_partitioned_dml(dml, params=None, param_types=None, query_options=None, request_options=None, exclude_txn_from_change_streams=False)[source]¶
Execute a partitionable DML statement.
- Parameters
dml (str) – DML statement
params (dict, {str -> column value}) – values for parameter replacement. Keys must match the names used in dml.
param_types (dict[str -> Union[dict, .types.Type]]) – (Optional) maps explicit types for one or more param values; required if parameters are passed.
query_options (QueryOptions or dict) – (Optional) Query optimizer configuration to use for the given query. If a dict is provided, it must be of the same form as the protobuf message QueryOptions.
request_options (google.cloud.spanner_v1.types.RequestOptions) – (Optional) Common options for this request. If a dict is provided, it must be of the same form as the protobuf message RequestOptions. Please note, the transactionTag setting will be ignored as it is not supported for partitioned DML.
exclude_txn_from_change_streams (bool) – (Optional) If true, instructs the transaction to be excluded from being recorded in change streams with the DDL option allow_txn_exclusion=true. This does not exclude the transaction from being recorded in the change streams with the DDL option allow_txn_exclusion being false or unset.
- Return type
- Returns
Count of rows affected by the DML statement.
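A minimal sketch of partitioned DML, one statement without parameters and one with a bound parameter; the table and column names are hypothetical.

    from google.cloud.spanner_v1 import param_types

    row_count = database.execute_partitioned_dml(
        "UPDATE Albums SET MarketingBudget = 0 WHERE MarketingBudget IS NULL",
    )

    # With parameters, explicit types are required.
    deleted = database.execute_partitioned_dml(
        "DELETE FROM Singers WHERE SingerId > @cutoff",
        params={"cutoff": 1000},
        param_types={"cutoff": param_types.INT64},
    )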
- exists()[source]¶
Test whether this database exists.
- Return type
- Returns
True if the database exists, else False.
- classmethod from_pb(database_pb, instance, pool=None)[source]¶
Creates an instance of this class from a protobuf.
- Parameters
database_pb (Database) – a database protobuf object.
instance (Instance) – The instance that owns the database.
pool (concrete subclass of AbstractSessionPool) – (Optional) session pool to be used by database.
- Return type
- Returns
The database parsed from the protobuf response.
- Raises
ValueError – if the instance name does not match the expected format or if the parsed project ID does not match the project ID on the instance’s client, or if the parsed instance ID does not match the instance’s ID.
- get_iam_policy(policy_version=None)[source]¶
Gets the access control policy for a database resource.
- Parameters
policy_version (int) – (Optional) the maximum policy version that will be used to format the policy. Valid values are 0, 1, 3.
- Return type
Policy
- Returns
returns an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources.
- is_optimized()[source]¶
Test whether this database has finished optimizing.
- Return type
- Returns
True if the database state is READY, else False.
- is_ready()[source]¶
Test whether this database is ready for use.
- Return type
- Returns
True if the database state is READY_OPTIMIZING or READY, else False.
- list_database_operations(filter_='', page_size=None)[source]¶
List database operations for the database.
- Parameters
- Type
- Returns
Iterator of Operation resources within the current instance.
- list_database_roles(page_size=None)[source]¶
Lists Cloud Spanner database roles.
- Parameters
page_size (int) – Optional. The maximum number of database roles in each page of results from this request. Non-positive values are ignored. Defaults to a sensible value set by the API.
- Type
Iterable
- Returns
Iterable of DatabaseRole resources within the current database.
- property logger¶
Logger used by the database.
The default logger will log commit stats at the log level INFO using sys.stderr.
- Return type
logging.Logger or None
- Returns
the logger
- mutation_groups()[source]¶
Return an object which wraps a mutation_group.
The wrapper must be used as a context manager, with the mutation group as the value returned by the wrapper.
- Return type
- Returns
new wrapper
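A sketch of the mutation-group / batch-write flow, assuming the wrapper exposes group() and batch_write() as in recent library versions; the table and values are placeholders.

    with database.mutation_groups() as groups:
        group = groups.group()
        group.insert(
            table="Singers",                      # hypothetical table
            columns=("SingerId", "FirstName"),
            values=[(10, "Alice")],
        )
        # batch_write() applies the groups non-atomically and streams back
        # a status per group.
        for response in groups.batch_write():
            print(response)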
- property name¶
Database name used in requests.
Note
This property will not change if database_id does not, but the return value is not cached.
The database name is of the form "projects/../instances/../databases/{database_id}"
- Return type
- Returns
The database name.
- property proto_descriptors¶
Proto Descriptors for this database.
- Return type
bytes
- Returns
bytes representing the proto descriptors for this database
- property reconciling¶
Whether the database is currently reconciling.
- Return type
boolean
- Returns
a boolean representing whether the database is reconciling
- reload()[source]¶
Reload this database.
Refresh any configured schema into ddl_statements.
- Raises
NotFound – if the database does not exist
- restore(source)[source]¶
Restore from a backup to this database.
- Parameters
source (Backup) – the path of the source being restored from.
- Return type
- Returns
a future used to poll the status of the create request
- Raises
Conflict – if the database already exists
NotFound – if the instance owning the database does not exist, or if the backup being restored from does not exist
ValueError – if backup is not set
- property restore_info¶
Restore info for this database.
- Return type
RestoreInfo
- Returns
an object representing the restore info for this database
- run_in_transaction(func, *args, **kw)[source]¶
Perform a unit of work in a transaction, retrying on abort.
- Parameters
func (callable) – takes a required positional argument, the transaction, and additional positional / keyword arguments as supplied by the caller.
args (tuple) – additional positional arguments to be passed to func.
kw (dict) – (Optional) keyword arguments to be passed to func. If passed, "timeout_secs" will be removed and used to override the default retry timeout which defines maximum timestamp to continue retrying the transaction. "max_commit_delay" will be removed and used to set the max_commit_delay for the request. Value must be between 0ms and 500ms. "exclude_txn_from_change_streams" if true, instructs the transaction to be excluded from being recorded in change streams with the DDL option allow_txn_exclusion=true. This does not exclude the transaction from being recorded in the change streams with the DDL option allow_txn_exclusion being false or unset.
- Return type
Any
- Returns
The return value of func.
- Raises
Exception – reraises any non-ABORT exceptions raised by func.
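A short sketch of the retry-on-abort pattern; the DML, table, and parameter names are hypothetical.

    from google.cloud.spanner_v1 import param_types

    def insert_singer(transaction, singer_id, name):
        transaction.execute_update(
            "INSERT INTO Singers (SingerId, FirstName) VALUES (@id, @name)",
            params={"id": singer_id, "name": name},
            param_types={"id": param_types.INT64, "name": param_types.STRING},
        )

    # The unit of work is retried automatically if the transaction aborts.
    database.run_in_transaction(insert_singer, 1, "Marc")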
- set_iam_policy(policy)[source]¶
Sets the access control policy on a database resource. Replaces any existing policy.
- Parameters
policy (Policy) – the complete policy to be applied to the resource.
- Return type
Policy
- Returns
returns the new Identity and Access Management (IAM) policy.
- snapshot(**kw)[source]¶
Return an object which wraps a snapshot.
The wrapper must be used as a context manager, with the snapshot as the value returned by the wrapper.
- Parameters
- Return type
- Returns
new wrapper
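A minimal sketch of single-use snapshot reads; the staleness setting and query are illustrative.

    import datetime

    # Keyword arguments are forwarded to the underlying snapshot,
    # e.g. read_timestamp or exact_staleness.
    with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snapshot:
        results = snapshot.execute_sql("SELECT SingerId, FirstName FROM Singers")
        for row in results:
            print(row)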
- property spanner_api¶
Helper for session-related API calls.
- property state¶
State of this database.
- Return type
- Returns
an enum describing the state of the database
- table(table_id)[source]¶
Factory to create a table object within this database.
Note: This method does not create a table in Cloud Spanner, but it can be used to check if a table exists.
    my_table = database.table("my_table")
    if my_table.exists():
        print("Table with ID 'my_table' exists.")
    else:
        print("Table with ID 'my_table' does not exist.")
- update(fields)[source]¶
Update this database.
Note
Updates the specified fields of a Cloud Spanner database. Currently, only the enable_drop_protection field supports updates. To change this value before updating, set it via database.enable_drop_protection = True before calling update().
- Parameters
fields (Sequence[str]) – a list of fields to update
- Return type
- Returns
an operation instance
- Raises
NotFound – if the database does not exist
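A short sketch following the note above: set the field on the object, then ask the service to apply it.

    database.enable_drop_protection = True
    operation = database.update(["enable_drop_protection"])
    operation.result(60)  # block until the update completes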
- update_ddl(ddl_statements, operation_id='', proto_descriptors=None)[source]¶
Update DDL for this database.
Apply any configured schema from ddl_statements.
- Parameters
- Return type
- Returns
an operation instance
- Raises
NotFound – if the database does not exist
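A brief sketch of a schema change; the DDL statement and table are illustrative.

    operation = database.update_ddl([
        """CREATE TABLE Singers (
            SingerId   INT64 NOT NULL,
            FirstName  STRING(1024)
        ) PRIMARY KEY (SingerId)"""
    ])
    operation.result(120)  # schema changes are long-running operations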
- class google.cloud.spanner_v1.database.MutationGroupsCheckout(database)[source]¶
Bases:
object
Context manager for using mutation groups from a database.
Inside the context manager, checks out a session from the database, creates mutation groups from it, making the groups available.
Caller must not use the object to perform API requests outside the scope of the context manager.
- Parameters
database (Database) – database to use
- class google.cloud.spanner_v1.database.SnapshotCheckout(database, **kw)[source]¶
Bases:
object
Context manager for using a snapshot from a database.
Inside the context manager, checks out a session from the database, creates a snapshot from it, making the snapshot available.
Caller must not use the snapshot to perform API requests outside the scope of the context manager.
- Parameters