Class: Google::Apis::CloudassetV1::BigQueryDestination
- Inherits: Object
  - Object
  - Google::Apis::CloudassetV1::BigQueryDestination
- Includes: Google::Apis::Core::Hashable, Google::Apis::Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/cloudasset_v1/classes.rb
  - lib/google/apis/cloudasset_v1/representations.rb
Overview
A BigQuery destination for exporting assets to.
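A minimal usage sketch follows; the project, dataset, and table names are placeholders, and in practice this object is typically attached as the BigQuery destination of the export request's output configuration:

require "google/apis/cloudasset_v1"

destination = Google::Apis::CloudassetV1::BigQueryDestination.new(
  dataset: "projects/my-project/datasets/asset_snapshots",
  table: "mytable",
  force: true
)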
Instance Attribute Summary
- #dataset ⇒ String
  Required. The BigQuery dataset to which the snapshot result should be exported.
- #force ⇒ Boolean (also: #force?)
  If the destination table already exists and this flag is TRUE, the table will be overwritten by the contents of the assets snapshot.
- #partition_spec ⇒ Google::Apis::CloudassetV1::PartitionSpec
  Specifications of BigQuery partitioned table as export destination.
- #separate_tables_per_asset_type ⇒ Boolean (also: #separate_tables_per_asset_type?)
  If this flag is TRUE, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type.
- #table ⇒ String
  Required. The BigQuery table to which the snapshot result should be written.
Instance Method Summary
- #initialize(**args) ⇒ BigQueryDestination (constructor)
  A new instance of BigQueryDestination.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ BigQueryDestination
Returns a new instance of BigQueryDestination.
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 750

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#dataset ⇒ String
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error. Setting the contentType for exportAssets determines the schema of the BigQuery table. Setting separateTablesPerAssetType to TRUE also influences the schema.
Corresponds to the JSON property dataset

# File 'lib/google/apis/cloudasset_v1/classes.rb', line 702

def dataset
  @dataset
end
#force ⇒ Boolean Also known as: force?
If the destination table already exists and this flag is TRUE, the table will be overwritten by the contents of the assets snapshot. If the flag is FALSE or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.
Corresponds to the JSON property force

# File 'lib/google/apis/cloudasset_v1/classes.rb', line 710

def force
  @force
end
#partition_spec ⇒ Google::Apis::CloudassetV1::PartitionSpec
Specifications of BigQuery partitioned table as export destination.
Corresponds to the JSON property partitionSpec
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 716

def partition_spec
  @partition_spec
end
#separate_tables_per_asset_type ⇒ Boolean Also known as: separate_tables_per_asset_type?
If this flag is TRUE, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and [partition_spec] fields will apply to each of them. Field [table] will be concatenated with "_" and the asset type names (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters like "." and "/" will be substituted by "_". Example: if field [table] is "mytable" and the snapshot results contain "storage.googleapis.com/Bucket" assets, the corresponding table name will be "mytable_storage_googleapis_com_Bucket". If any of these tables does not exist, a new table with the concatenated name will be created. When [content_type] in the ExportAssetsRequest is RESOURCE, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type (up to the 15 nested levels BigQuery supports, see https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields at more than 15 nested levels will be stored as JSON-format strings in a child column of their parent RECORD column. If an error occurs when exporting to any table, the whole export call will return an error, but the export results that already succeeded will persist. Example: if exporting to table_type_A succeeds and exporting to table_type_B fails during one export call, the results in table_type_A will persist and there will not be partial results persisting in a table.
Corresponds to the JSON property separateTablesPerAssetType

# File 'lib/google/apis/cloudasset_v1/classes.rb', line 741

def separate_tables_per_asset_type
  @separate_tables_per_asset_type
end
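The per-asset-type naming rule described above can be illustrated with a short sketch. This is not part of the client library (the actual table names are generated server-side); the table and asset type values are the ones from the example in the description:

# Illustrative only: joins the table name and the asset type with "_",
# replacing every non-alphanumeric character in the asset type with "_".
table = "mytable"
asset_type = "storage.googleapis.com/Bucket"
per_type_table = "#{table}_#{asset_type.gsub(/[^0-9A-Za-z]/, '_')}"
# => "mytable_storage_googleapis_com_Bucket"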
#table ⇒ String
Required. The BigQuery table to which the snapshot result should be written.
If this table does not exist, a new table with the given name will be created.
Corresponds to the JSON property table
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 748

def table
  @table
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'lib/google/apis/cloudasset_v1/classes.rb', line 755

def update!(**args)
  @dataset = args[:dataset] if args.key?(:dataset)
  @force = args[:force] if args.key?(:force)
  @partition_spec = args[:partition_spec] if args.key?(:partition_spec)
  @separate_tables_per_asset_type = args[:separate_tables_per_asset_type] if args.key?(:separate_tables_per_asset_type)
  @table = args[:table] if args.key?(:table)
end
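A quick sketch of update! (attribute values are placeholders): only keys present in the arguments are applied, so attributes that are not mentioned keep their current values.

destination = Google::Apis::CloudassetV1::BigQueryDestination.new(
  dataset: "projects/my-project/datasets/asset_snapshots",
  table: "mytable"
)

# Only :force is present in the arguments, so dataset and table are unchanged.
destination.update!(force: true)
destination.force? # => true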