Class: Google::Apis::CloudassetV1::BigQueryDestination
- Inherits: Object
- Includes: Google::Apis::Core::Hashable, Google::Apis::Core::JsonObjectSupport
- Defined in:
  lib/google/apis/cloudasset_v1/classes.rb,
  lib/google/apis/cloudasset_v1/representations.rb
Overview
A BigQuery destination to which assets are exported.
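As a quick orientation, here is a minimal construction sketch. The project, dataset, and table names are placeholders, and the keyword arguments mirror the attributes documented below.

require 'google/apis/cloudasset_v1'

# Minimal sketch: the dataset and table names below are placeholders.
destination = Google::Apis::CloudassetV1::BigQueryDestination.new(
  dataset: 'projects/my-project/datasets/my_dataset',
  table:   'asset_snapshot',
  force:   true
)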
Instance Attribute Summary

- #dataset ⇒ String
  Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported.

- #force ⇒ Boolean (also: #force?)
  If the destination table already exists and this flag is TRUE, the table will be overwritten by the contents of the assets snapshot.

- #partition_spec ⇒ Google::Apis::CloudassetV1::PartitionSpec
  Specifications of BigQuery partitioned table as export destination.

- #separate_tables_per_asset_type ⇒ Boolean (also: #separate_tables_per_asset_type?)
  If this flag is TRUE, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type.

- #table ⇒ String
  Required. The BigQuery table to which the snapshot result should be written.
Instance Method Summary

- #initialize(**args) ⇒ BigQueryDestination (constructor)
  A new instance of BigQueryDestination.

- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ BigQueryDestination
Returns a new instance of BigQueryDestination.
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 548

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#dataset ⇒ String
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId",
to which the snapshot result should be exported. If this dataset does not exist,
the export call returns an INVALID_ARGUMENT error. Setting the contentType for
exportAssets determines the schema of the BigQuery table. Setting
separateTablesPerAssetType to TRUE also influences the schema.
Corresponds to the JSON property dataset
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 500

def dataset
  @dataset
end
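In practice a destination like this is usually wrapped in an OutputConfig when requesting an asset export. The sketch below assumes the OutputConfig class from the same gem and uses placeholder resource names.

# Sketch: wrapping the destination in an OutputConfig (placeholder names).
output_config = Google::Apis::CloudassetV1::OutputConfig.new(
  bigquery_destination: Google::Apis::CloudassetV1::BigQueryDestination.new(
    dataset: 'projects/my-project/datasets/asset_inventory',
    table:   'assets'
  )
)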
#force ⇒ Boolean Also known as: force?
If the destination table already exists and this flag is TRUE, the table
will be overwritten by the contents of the assets snapshot. If the flag is FALSE
or unset and the destination table already exists, the export call returns an
INVALID_ARGUMENT error.
Corresponds to the JSON property force
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 508

def force
  @force
end
#partition_spec ⇒ Google::Apis::CloudassetV1::PartitionSpec
Specifications of BigQuery partitioned table as export destination.
Corresponds to the JSON property partitionSpec
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 514

def partition_spec
  @partition_spec
end
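A short sketch of setting this attribute, assuming the PartitionSpec class from the same gem exposes a partition_key attribute accepting a value such as 'READ_TIME'; the dataset and table names are placeholders.

# Sketch, assuming PartitionSpec exposes a partition_key attribute
# whose value can be 'READ_TIME'. Resource names are placeholders.
destination = Google::Apis::CloudassetV1::BigQueryDestination.new(
  dataset: 'projects/my-project/datasets/asset_inventory',
  table:   'assets',
  partition_spec: Google::Apis::CloudassetV1::PartitionSpec.new(
    partition_key: 'READ_TIME'
  )
)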
#separate_tables_per_asset_type ⇒ Boolean Also known as: separate_tables_per_asset_type?
If this flag is TRUE, the snapshot results will be written to one or
multiple tables, each of which contains results of one asset type. The [force]
and [partition_spec] fields will apply to each of them. The [table] field will be
concatenated with "_" and the asset type names (see https://cloud.google.com/
asset-inventory/docs/supported-asset-types for supported asset types) to
construct per-asset-type table names, in which all non-alphanumeric characters
like "." and "/" will be substituted by "_". Example: if the [table] field is "
mytable" and the snapshot results contain "storage.googleapis.com/Bucket" assets,
the corresponding table name will be "mytable_storage_googleapis_com_Bucket".
If any of these tables does not exist, a new table with the concatenated name
will be created. When [content_type] in the ExportAssetsRequest is RESOURCE,
the schema of each table will include RECORD-type columns mapped to the nested
fields in the Asset.resource.data field of that asset type, up to the 15
nested levels BigQuery supports (https://cloud.google.com/bigquery/docs/nested-
repeated#limitations). Fields deeper than 15 nested levels will be stored as
JSON-format strings in a child column of their parent RECORD column. If an error
occurs when exporting to any table, the whole export call will return an error,
but the export results that have already succeeded will persist. Example: if
exporting to table_type_A succeeds while exporting to table_type_B fails during
one export call, the results in table_type_A will persist and there will not be
partial results persisting in a table.
Corresponds to the JSON property separateTablesPerAssetType
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 539

def separate_tables_per_asset_type
  @separate_tables_per_asset_type
end
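To make the naming rule above concrete, here is a small illustration; derive_table_name is a hypothetical helper, not part of the gem.

# Hypothetical helper illustrating the per-asset-type naming rule above.
def derive_table_name(table, asset_type)
  # Non-alphanumeric characters (e.g. "." and "/") become "_", and the
  # result is appended to the base table name with a "_" separator.
  "#{table}_#{asset_type.gsub(/[^A-Za-z0-9]/, '_')}"
end

derive_table_name('mytable', 'storage.googleapis.com/Bucket')
# => "mytable_storage_googleapis_com_Bucket"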
#table ⇒ String
Required. The BigQuery table to which the snapshot result should be written.
If this table does not exist, a new table with the given name will be created.
Corresponds to the JSON property table
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 546

def table
  @table
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 553

def update!(**args)
  @dataset = args[:dataset] if args.key?(:dataset)
  @force = args[:force] if args.key?(:force)
  @partition_spec = args[:partition_spec] if args.key?(:partition_spec)
  @separate_tables_per_asset_type = args[:separate_tables_per_asset_type] if args.key?(:separate_tables_per_asset_type)
  @table = args[:table] if args.key?(:table)
end
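A brief usage sketch of update! on an already-constructed instance; the attribute values are placeholders.

# Sketch: setting attributes after construction via update! (placeholder values).
destination = Google::Apis::CloudassetV1::BigQueryDestination.new
destination.update!(
  dataset: 'projects/my-project/datasets/asset_inventory',
  table:   'assets',
  separate_tables_per_asset_type: true
)
destination.separate_tables_per_asset_type?  # => true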