Class: Google::Apis::DataflowV1b3::Environment
- Inherits: Object
- Includes:
- Core::Hashable, Core::JsonObjectSupport
- Defined in:
lib/google/apis/dataflow_v1b3/classes.rb,
lib/google/apis/dataflow_v1b3/representations.rb
Overview
Describes the environment in which a Dataflow Job runs.
Instance Attribute Summary collapse
-
#cluster_manager_api_service ⇒ String
The type of cluster manager API to use.
-
#dataset ⇒ String
Optional.
-
#debug_options ⇒ Google::Apis::DataflowV1b3::DebugOptions
Describes any options that have an effect on the debugging of pipelines.
-
#experiments ⇒ Array<String>
The list of experiments to enable.
-
#flex_resource_scheduling_goal ⇒ String
Optional.
-
#internal_experiments ⇒ Hash<String,Object>
Experimental settings.
-
#sdk_pipeline_options ⇒ Hash<String,Object>
The Cloud Dataflow SDK pipeline options specified by the user.
-
#service_account_email ⇒ String
Optional.
-
#service_kms_key_name ⇒ String
Optional.
-
#service_options ⇒ Array<String>
Optional.
-
#shuffle_mode ⇒ String
Output only.
-
#streaming_mode ⇒ String
Optional.
-
#temp_storage_prefix ⇒ String
The prefix of the resources the system should use for temporary storage.
-
#use_streaming_engine_resource_based_billing ⇒ Boolean
(also: #use_streaming_engine_resource_based_billing?)
Output only.
-
#user_agent ⇒ Hash<String,Object>
A description of the process that generated the request.
-
#version ⇒ Hash<String,Object>
A structure describing which components and their versions of the service are required in order to run the job.
-
#worker_pools ⇒ Array<Google::Apis::DataflowV1b3::WorkerPool>
The worker pools.
-
#worker_region ⇒ String
Optional.
-
#worker_zone ⇒ String
Optional.
Instance Method Summary collapse
-
#initialize(**args) ⇒ Environment
constructor
A new instance of Environment.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ Environment
Returns a new instance of Environment.
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1581

def initialize(**args)
  update!(**args)
end
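The constructor forwards its keyword arguments to `#update!`, which assigns only the keys that were actually supplied. A minimal standalone sketch of this pattern (`Env` is a stand-in class with two sample attributes, not the real gem class):

```ruby
# Stand-in class illustrating the **args constructor pattern used by
# Environment: initialize delegates to update!, and update! assigns
# only the attributes whose keys were actually passed.
class Env
  attr_accessor :dataset, :service_account_email

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @dataset = args[:dataset] if args.key?(:dataset)
    @service_account_email = args[:service_account_email] if args.key?(:service_account_email)
  end
end

env = Env.new(dataset: "bigquery.googleapis.com/my_dataset")
env.dataset                # => "bigquery.googleapis.com/my_dataset"
env.service_account_email  # => nil (key not supplied, so never assigned)
```

The `args.key?` guard is what lets `update!` apply partial updates later without clobbering attributes that were not mentioned in the call.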
Instance Attribute Details
#cluster_manager_api_service ⇒ String
The type of cluster manager API to use. If unknown or unspecified, the service
will attempt to choose a reasonable default. This should be in the form of the
API service name, e.g. "compute.googleapis.com".
Corresponds to the JSON property clusterManagerApiService
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1457

def cluster_manager_api_service
  @cluster_manager_api_service
end
#dataset ⇒ String
Optional. The dataset for the current project where various workflow related
tables are stored. The supported resource type is: Google BigQuery: bigquery.
googleapis.com/dataset
Corresponds to the JSON property dataset
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1464

def dataset
  @dataset
end
#debug_options ⇒ Google::Apis::DataflowV1b3::DebugOptions
Describes any options that have an effect on the debugging of pipelines.
Corresponds to the JSON property debugOptions
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1469

def debug_options
  @debug_options
end
#experiments ⇒ Array<String>
The list of experiments to enable. This field should be used for SDK related
experiments and not for service related experiments. The proper field for
service related experiments is service_options.
Corresponds to the JSON property experiments
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1476

def experiments
  @experiments
end
#flex_resource_scheduling_goal ⇒ String
Optional. Which Flexible Resource Scheduling mode to run in.
Corresponds to the JSON property flexResourceSchedulingGoal
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1481

def flex_resource_scheduling_goal
  @flex_resource_scheduling_goal
end
#internal_experiments ⇒ Hash<String,Object>
Experimental settings.
Corresponds to the JSON property internalExperiments
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1486

def internal_experiments
  @internal_experiments
end
#sdk_pipeline_options ⇒ Hash<String,Object>
The Cloud Dataflow SDK pipeline options specified by the user. These options
are passed through the service and are used to recreate the SDK pipeline
options on the worker in a language agnostic and platform independent way.
Corresponds to the JSON property sdkPipelineOptions
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1493

def sdk_pipeline_options
  @sdk_pipeline_options
end
#service_account_email ⇒ String
Optional. Identity to run virtual machines as. Defaults to the default account.
Corresponds to the JSON property serviceAccountEmail
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1498

def service_account_email
  @service_account_email
end
#service_kms_key_name ⇒ String
Optional. If set, contains the Cloud KMS key identifier used to encrypt data
at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/
PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
Corresponds to the JSON property serviceKmsKeyName
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1505

def service_kms_key_name
  @service_kms_key_name
end
#service_options ⇒ Array<String>
Optional. The list of service options to enable. This field should be used for
service related experiments only. These experiments, when graduating to GA,
should be replaced by dedicated fields or become default (i.e. always on).
Corresponds to the JSON property serviceOptions
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1512

def service_options
  @service_options
end
#shuffle_mode ⇒ String
Output only. The shuffle mode used for the job.
Corresponds to the JSON property shuffleMode
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1517

def shuffle_mode
  @shuffle_mode
end
#streaming_mode ⇒ String
Optional. Specifies the Streaming Engine message processing guarantees.
Reduces cost and latency but might result in duplicate messages committed to
storage. Designed to run simple mapping streaming ETL jobs at the lowest cost.
For example, Change Data Capture (CDC) to BigQuery is a canonical use case.
For more information, see Set the pipeline streaming mode.
Corresponds to the JSON property streamingMode
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1527

def streaming_mode
  @streaming_mode
end
#temp_storage_prefix ⇒ String
The prefix of the resources the system should use for temporary storage. The
system will append the suffix "/temp-JOBNAME" to this resource prefix, where
JOBNAME is the value of the job_name field. The resulting bucket and object
prefix is used as the prefix of the resources used to store temporary data
needed during the job execution. NOTE: This will override the value in
taskrunner_settings. The supported resource type is: Google Cloud Storage:
storage.googleapis.com/bucket/object bucket.storage.googleapis.com/object
Corresponds to the JSON property tempStoragePrefix
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1538

def temp_storage_prefix
  @temp_storage_prefix
end
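The suffix rule described above can be illustrated with a small hypothetical helper (`temp_location` is not part of the gem; the name and signature are illustrative):

```ruby
# Hypothetical helper showing how the service derives the temporary
# storage location: the suffix "/temp-JOBNAME" is appended to the
# configured prefix, where JOBNAME is the value of the job_name field.
def temp_location(temp_storage_prefix, job_name)
  "#{temp_storage_prefix}/temp-#{job_name}"
end

temp_location("storage.googleapis.com/my-bucket/dataflow", "wordcount")
# => "storage.googleapis.com/my-bucket/dataflow/temp-wordcount"
```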
#use_streaming_engine_resource_based_billing ⇒ Boolean Also known as: use_streaming_engine_resource_based_billing?
Output only. Whether the job uses the Streaming Engine resource-based billing
model.
Corresponds to the JSON property useStreamingEngineResourceBasedBilling
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1544

def use_streaming_engine_resource_based_billing
  @use_streaming_engine_resource_based_billing
end
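The `?` alias follows the common Ruby convention of exposing a predicate-style reader for boolean attributes. A standalone sketch of the pattern (`BoolEnv` is a stand-in class, not the real gem class):

```ruby
# Stand-in class showing the predicate-alias pattern: the plain reader
# and the ?-suffixed reader return the same underlying value.
class BoolEnv
  attr_accessor :use_streaming_engine_resource_based_billing
  alias_method :use_streaming_engine_resource_based_billing?,
               :use_streaming_engine_resource_based_billing
end

env = BoolEnv.new
env.use_streaming_engine_resource_based_billing = true
env.use_streaming_engine_resource_based_billing?  # => true
```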
#user_agent ⇒ Hash<String,Object>
A description of the process that generated the request.
Corresponds to the JSON property userAgent
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1550

def user_agent
  @user_agent
end
#version ⇒ Hash<String,Object>
A structure describing which components and their versions of the service are
required in order to run the job.
Corresponds to the JSON property version
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1556

def version
  @version
end
#worker_pools ⇒ Array<Google::Apis::DataflowV1b3::WorkerPool>
The worker pools. At least one "harness" worker pool must be specified in
order for the job to have workers.
Corresponds to the JSON property workerPools
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1562

def worker_pools
  @worker_pools
end
#worker_region ⇒ String
Optional. The Compute Engine region (https://cloud.google.com/compute/docs/
regions-zones/regions-zones) in which worker processing should occur, e.g. "us-
west1". Mutually exclusive with worker_zone. If neither worker_region nor
worker_zone is specified, the service defaults to the control plane's region.
Corresponds to the JSON property workerRegion
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1570

def worker_region
  @worker_region
end
#worker_zone ⇒ String
Optional. The Compute Engine zone (https://cloud.google.com/compute/docs/
regions-zones/regions-zones) in which worker processing should occur, e.g. "us-
west1-a". Mutually exclusive with worker_region. If neither worker_region nor
worker_zone is specified, a zone in the control plane's region is chosen based
on available capacity.
Corresponds to the JSON property workerZone
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1579

def worker_zone
  @worker_zone
end
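The mutual-exclusivity rule for worker_region and worker_zone is enforced by the Dataflow service; a hypothetical client-side pre-check could look like this (`validate_worker_placement!` is illustrative, not part of the gem):

```ruby
# Illustrative validator for the constraint documented above:
# at most one of worker_region / worker_zone may be set.
def validate_worker_placement!(worker_region: nil, worker_zone: nil)
  if worker_region && worker_zone
    raise ArgumentError, "worker_region and worker_zone are mutually exclusive"
  end
end

validate_worker_placement!(worker_region: "us-west1")  # ok
validate_worker_placement!(worker_zone: "us-west1-a")  # ok
```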
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1586

def update!(**args)
  @cluster_manager_api_service = args[:cluster_manager_api_service] if args.key?(:cluster_manager_api_service)
  @dataset = args[:dataset] if args.key?(:dataset)
  @debug_options = args[:debug_options] if args.key?(:debug_options)
  @experiments = args[:experiments] if args.key?(:experiments)
  @flex_resource_scheduling_goal = args[:flex_resource_scheduling_goal] if args.key?(:flex_resource_scheduling_goal)
  @internal_experiments = args[:internal_experiments] if args.key?(:internal_experiments)
  @sdk_pipeline_options = args[:sdk_pipeline_options] if args.key?(:sdk_pipeline_options)
  @service_account_email = args[:service_account_email] if args.key?(:service_account_email)
  @service_kms_key_name = args[:service_kms_key_name] if args.key?(:service_kms_key_name)
  @service_options = args[:service_options] if args.key?(:service_options)
  @shuffle_mode = args[:shuffle_mode] if args.key?(:shuffle_mode)
  @streaming_mode = args[:streaming_mode] if args.key?(:streaming_mode)
  @temp_storage_prefix = args[:temp_storage_prefix] if args.key?(:temp_storage_prefix)
  @use_streaming_engine_resource_based_billing = args[:use_streaming_engine_resource_based_billing] if args.key?(:use_streaming_engine_resource_based_billing)
  @user_agent = args[:user_agent] if args.key?(:user_agent)
  @version = args[:version] if args.key?(:version)
  @worker_pools = args[:worker_pools] if args.key?(:worker_pools)
  @worker_region = args[:worker_region] if args.key?(:worker_region)
  @worker_zone = args[:worker_zone] if args.key?(:worker_zone)
end