Types for Google Cloud Dataflow v1beta3 API¶
- class google.cloud.dataflow_v1beta3.types.AutoscalingAlgorithm(value)[source]¶
Bases:
proto.enums.Enum
Specifies the algorithm used to determine the number of worker processes to run at any given point in time, based on the amount of data left to process, the number of workers, and how quickly existing workers are processing data.
- Values:
- AUTOSCALING_ALGORITHM_UNKNOWN (0):
The algorithm is unknown, or unspecified.
- AUTOSCALING_ALGORITHM_NONE (1):
Disable autoscaling.
- AUTOSCALING_ALGORITHM_BASIC (2):
Increase worker count over time to reduce job execution time.
- class google.cloud.dataflow_v1beta3.types.AutoscalingEvent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A structured message reporting an autoscaling decision made by the Dataflow service.
- event_type¶
The type of autoscaling event to report.
- description¶
A message describing why the system decided to adjust the current number of workers, why it failed, or why the system decided to not make any changes to the number of workers.
- time¶
The time this event was emitted to indicate a new target or current num_workers value.
- class AutoscalingEventType(value)[source]¶
Bases:
proto.enums.Enum
Indicates the type of autoscaling event.
- Values:
- TYPE_UNKNOWN (0):
Default type for the enum. Value should never be returned.
- TARGET_NUM_WORKERS_CHANGED (1):
The TARGET_NUM_WORKERS_CHANGED type should be used when the target worker pool size has changed at the start of an actuation. An event should always be specified as TARGET_NUM_WORKERS_CHANGED if it reflects a change in the target_num_workers.
- CURRENT_NUM_WORKERS_CHANGED (2):
The CURRENT_NUM_WORKERS_CHANGED type should be used when actual worker pool size has been changed, but the target_num_workers has not changed.
- ACTUATION_FAILURE (3):
The ACTUATION_FAILURE type should be used when we want to report an error to the user indicating why the current number of workers in the pool could not be changed. Displayed in the current status and history widgets.
- NO_CHANGE (4):
Used when we want to report to the user a reason why we are not currently adjusting the number of workers. Should specify target_num_workers, current_num_workers, and a decision_message.
- class google.cloud.dataflow_v1beta3.types.AutoscalingSettings(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Settings for WorkerPool autoscaling.
- algorithm¶
The algorithm to use for autoscaling.
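For illustration, a minimal sketch of wiring these settings into a worker pool; the max_num_workers field and the WorkerPool/Environment messages are taken from the broader v1beta3 API rather than from this excerpt:

```python
from google.cloud import dataflow_v1beta3

# Basic autoscaling: let the service grow the pool to reduce execution time.
settings = dataflow_v1beta3.AutoscalingSettings(
    algorithm=dataflow_v1beta3.AutoscalingAlgorithm.AUTOSCALING_ALGORITHM_BASIC,
    max_num_workers=10,  # assumed field: caps the pool size
)

# Attach the settings to a worker pool inside an Environment.
pool = dataflow_v1beta3.WorkerPool(autoscaling_settings=settings)
env = dataflow_v1beta3.Environment(worker_pools=[pool])
```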
- class google.cloud.dataflow_v1beta3.types.BigQueryIODetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for a BigQuery connector used by the job.
- class google.cloud.dataflow_v1beta3.types.BigTableIODetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for a Cloud Bigtable connector used by the job.
- class google.cloud.dataflow_v1beta3.types.CheckActiveJobsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to check whether active jobs exist for a project.
- class google.cloud.dataflow_v1beta3.types.CheckActiveJobsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response for CheckActiveJobsRequest.
- class google.cloud.dataflow_v1beta3.types.ComputationTopology(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
All configuration data for a particular Computation.
- key_ranges¶
The key ranges processed by the computation.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.KeyRangeLocation]
- inputs¶
The inputs to the computation.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.StreamLocation]
- outputs¶
The outputs from the computation.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.StreamLocation]
- state_families¶
The state family values.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.StateFamilyConfig]
- class google.cloud.dataflow_v1beta3.types.ContainerSpec(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Container Spec.
- metadata¶
Metadata describing a template including description and validation rules.
- sdk_info¶
Required. SDK info of the Flex Template.
- default_environment¶
Default runtime environment for the job.
- class google.cloud.dataflow_v1beta3.types.CreateJobFromTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to create a Cloud Dataflow job from a template.
- gcs_path¶
Required. A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with gs://.
This field is a member of oneof template.
- Type
str
- environment¶
The runtime environment for the job.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- Type
str
- class ParametersEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
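For context, a minimal sketch of sending this request through the generated TemplatesServiceClient; the project_id, job_name, and parameters fields are assumed from the full request message, since this excerpt does not list them:

```python
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.TemplatesServiceClient()

request = dataflow_v1beta3.CreateJobFromTemplateRequest(
    project_id="my-project",             # assumed field
    job_name="wordcount-from-template",  # assumed field
    gcs_path="gs://dataflow-templates/latest/Word_Count",
    parameters={                         # assumed field (see ParametersEntry)
        "inputFile": "gs://my-bucket/input.txt",
        "output": "gs://my-bucket/output",
    },
    location="us-central1",
)

job = client.create_job_from_template(request=request)
print(job.id, job.current_state)
```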
- class google.cloud.dataflow_v1beta3.types.CreateJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to create a Cloud Dataflow job.
- job¶
The job to create.
- view¶
The level of information requested in response.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Type
str
- class google.cloud.dataflow_v1beta3.types.CustomSourceLocation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifies the location of a custom source.
- class google.cloud.dataflow_v1beta3.types.DataDiskAssignment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Data disk assignment for a given VM instance.
- vm_instance¶
Name of the VM instance to which the data disks are mounted, for example “myproject-1014-104817-4c2-harness-0”.
- Type
str
- data_disks¶
Mounted data disks. The order is important: a data disk’s 0-based index in this list defines which persistent directory the disk is mounted to, for example the list of { “myproject-1014-104817-4c2-harness-0-disk-0” }, { “myproject-1014-104817-4c2-harness-0-disk-1” }.
- Type
MutableSequence[str]
- class google.cloud.dataflow_v1beta3.types.DatastoreIODetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for a Datastore connector used by the job.
- class google.cloud.dataflow_v1beta3.types.DebugOptions(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes any options that have an effect on the debugging of pipelines.
- class google.cloud.dataflow_v1beta3.types.DefaultPackageSet(value)[source]¶
Bases:
proto.enums.Enum
The default set of packages to be staged on a pool of workers.
- Values:
- DEFAULT_PACKAGE_SET_UNKNOWN (0):
The default set of packages to stage is unknown, or unspecified.
- DEFAULT_PACKAGE_SET_NONE (1):
Indicates that no packages should be staged at the worker unless explicitly specified by the job.
- DEFAULT_PACKAGE_SET_JAVA (2):
Stage packages typically useful to workers written in Java.
- DEFAULT_PACKAGE_SET_PYTHON (3):
Stage packages typically useful to workers written in Python.
- class google.cloud.dataflow_v1beta3.types.DeleteSnapshotRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to delete a snapshot.
- class google.cloud.dataflow_v1beta3.types.DeleteSnapshotResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from deleting a snapshot.
- class google.cloud.dataflow_v1beta3.types.Disk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes the data disk used by a workflow job.
- size_gb¶
Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- Type
int
- disk_type¶
Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default.
For example, the standard persistent disk type is a resource name typically ending in “pd-standard”. If SSD persistent disks are available, the resource name typically ends with “pd-ssd”. The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone.
Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this:
compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- Type
str
- class google.cloud.dataflow_v1beta3.types.DisplayData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Data provided with a pipeline or transform to provide descriptive info.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- key¶
The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- Type
str
- namespace¶
The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- Type
str
- str_value¶
Contains value if the data is of string type.
This field is a member of oneof Value.
- Type
str
- int64_value¶
Contains value if the data is of int64 type.
This field is a member of oneof Value.
- Type
int
- float_value¶
Contains value if the data is of float type.
This field is a member of oneof Value.
- Type
float
- java_class_value¶
Contains value if the data is of java class type.
This field is a member of oneof Value.
- Type
str
- timestamp_value¶
Contains value if the data is of timestamp type.
This field is a member of oneof Value.
- duration_value¶
Contains value if the data is of duration type.
This field is a member of oneof Value.
- bool_value¶
Contains value if the data is of a boolean type.
This field is a member of oneof Value.
- Type
bool
- short_str_value¶
A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- Type
str
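To illustrate the oneof behavior described for this message, a short sketch using proto-plus semantics: setting one member of the Value oneof clears any member set earlier.

```python
from google.cloud import dataflow_v1beta3

data = dataflow_v1beta3.DisplayData(
    key="numWorkers",
    namespace="org.apache.beam.examples",  # placeholder namespace
    int64_value=5,
)

# Assigning a different member of the Value oneof clears int64_value.
data.str_value = "five"
print(data.int64_value)  # 0 -- cleared, str_value now occupies the oneof
print(data.str_value)    # "five"
```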
- class google.cloud.dataflow_v1beta3.types.DynamicTemplateLaunchParams(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Params which should be passed when launching a dynamic template.
- gcs_path¶
Path to the dynamic template spec file on Cloud Storage. The file must be a JSON-serialized DynamicTemplateFileSpec object.
- Type
str
- class google.cloud.dataflow_v1beta3.types.Environment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes the environment in which a Dataflow Job runs.
- temp_storage_prefix¶
The prefix of the resources the system should use for temporary storage. The system will append the suffix “/temp-{JOBNAME}” to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is:
Google Cloud Storage:
storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- Type
str
- cluster_manager_api_service¶
The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. “compute.googleapis.com”.
- Type
str
- experiments¶
The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- Type
MutableSequence[str]
- service_options¶
The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- Type
MutableSequence[str]
- service_kms_key_name¶
If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK).
Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- Type
- worker_pools¶
The worker pools. At least one “harness” worker pool must be specified in order for the job to have workers.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.WorkerPool]
- user_agent¶
A description of the process that generated the request.
- version¶
A structure describing which components of the service, and which versions of those components, are required in order to run the job.
- dataset¶
The dataset for the current project where various workflow related tables are stored.
The supported resource type is:
Google BigQuery:
bigquery.googleapis.com/{dataset}
- Type
str
- sdk_pipeline_options¶
The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- internal_experiments¶
Experimental settings.
- service_account_email¶
Identity to run virtual machines as. Defaults to the default account.
- Type
str
- flex_resource_scheduling_goal¶
Which Flexible Resource Scheduling mode to run in.
- worker_region¶
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. “us-west1”. Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the default is the control plane’s region.
- Type
str
- worker_zone¶
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. “us-west1-a”. Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane’s region is chosen based on available capacity.
- Type
str
- shuffle_mode¶
Output only. The shuffle mode used for the job.
- debug_options¶
Any debugging options to be supplied to the job.
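A brief sketch of assembling an Environment from a few of the fields above; all values are placeholders:

```python
from google.cloud import dataflow_v1beta3

env = dataflow_v1beta3.Environment(
    temp_storage_prefix="storage.googleapis.com/my-bucket/tmp",
    service_account_email="worker-sa@my-project.iam.gserviceaccount.com",
    worker_region="us-west1",  # mutually exclusive with worker_zone
    service_options=["some_service_option"],  # placeholder option name
    flex_resource_scheduling_goal=(
        dataflow_v1beta3.FlexResourceSchedulingGoal.FLEXRS_COST_OPTIMIZED
    ),
)
```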
- class google.cloud.dataflow_v1beta3.types.ExecutionStageState(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A message describing the state of a particular execution stage.
- execution_stage_state¶
Execution stage states allow the same set of values as JobState.
- current_state_time¶
The time at which the stage transitioned to this state.
- class google.cloud.dataflow_v1beta3.types.ExecutionStageSummary(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning.
- kind¶
Type of transform this stage is executing.
- input_source¶
Input sources for this stage.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ExecutionStageSummary.StageSource]
- output_source¶
Output sources for this stage.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ExecutionStageSummary.StageSource]
- prerequisite_stage¶
Other stages that must complete before this stage can run.
- Type
MutableSequence[str]
- component_transform¶
Transforms that comprise this execution stage.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ExecutionStageSummary.ComponentTransform]
- component_source¶
Collections produced and consumed by component transforms of this stage.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ExecutionStageSummary.ComponentSource]
- class ComponentSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Description of an interstitial value between transforms in an execution stage.
- class ComponentTransform(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Description of a transform executed as part of an execution stage.
- class StageSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Description of an input or output of an execution stage.
- original_transform_or_collection¶
User name for the original user transform or collection with which this source is most closely associated.
- Type
str
- class google.cloud.dataflow_v1beta3.types.ExecutionState(value)[source]¶
Bases:
proto.enums.Enum
The state of some component of job execution.
- Values:
- EXECUTION_STATE_UNKNOWN (0):
The component state is unknown or unspecified.
- EXECUTION_STATE_NOT_STARTED (1):
The component is not yet running.
- EXECUTION_STATE_RUNNING (2):
The component is currently running.
- EXECUTION_STATE_SUCCEEDED (3):
The component succeeded.
- EXECUTION_STATE_FAILED (4):
The component failed.
- EXECUTION_STATE_CANCELLED (5):
Execution of the component was cancelled.
- class google.cloud.dataflow_v1beta3.types.FailedLocation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Indicates which [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) failed to respond to a request for data.
- name¶
The name of the [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that failed to respond.
- Type
str
- class google.cloud.dataflow_v1beta3.types.FileIODetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for a File connector used by the job.
- class google.cloud.dataflow_v1beta3.types.FlexResourceSchedulingGoal(value)[source]¶
Bases:
proto.enums.Enum
Specifies the resource to optimize for in Flexible Resource Scheduling.
- Values:
- FLEXRS_UNSPECIFIED (0):
Run in the default mode.
- FLEXRS_SPEED_OPTIMIZED (1):
Optimize for lower execution time.
- FLEXRS_COST_OPTIMIZED (2):
Optimize for lower cost.
- class google.cloud.dataflow_v1beta3.types.FlexTemplateRuntimeEnvironment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The environment values to be set at runtime for a Flex Template.
- max_workers¶
The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Type
int
- zone¶
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- Type
str
- temp_location¶
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- Type
str
- machine_type¶
The machine type to use for the job. Defaults to the value from the template if not specified.
- Type
str
- network¶
Network to which VMs will be assigned. If empty or unspecified, the service will use the network “default”.
- Type
str
- subnetwork¶
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form “https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK” or “regions/REGION/subnetworks/SUBNETWORK”. If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- Type
str
- additional_user_labels¶
Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the labeling restrictions page. An object containing a list of “key”: value pairs. Example: { “name”: “wrench”, “mass”: “1kg”, “count”: “3” }.
- kms_key_name¶
Name for the Cloud KMS key for the job. Key format is:
projects/<project>/locations/<location>/keyRings/<keyring>/cryptoKeys/<key>
- Type
str
- ip_configuration¶
Configuration for VM IPs.
- worker_region¶
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. “us-west1”. Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the default is the control plane’s region.
- Type
str
- worker_zone¶
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. “us-west1-a”. Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane’s region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Type
str
- flexrs_goal¶
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
- staging_location¶
The Cloud Storage path for staging local files. Must be a valid Cloud Storage URL, beginning with gs://.
- Type
str
- sdk_container_image¶
Docker registry location of the container image to use for the worker harness. Default is the container for the version of the SDK. Note this field is only valid for portable pipelines.
- Type
str
- autoscaling_algorithm¶
The algorithm to use for autoscaling.
- dump_heap_on_oom¶
If true, save a heap dump before killing a thread or process which is GC thrashing or out of memory. The location of the heap file will either be echoed back to the user, or the user will be given the opportunity to download the heap file.
- Type
bool
- save_heap_dumps_to_gcs_path¶
Cloud Storage bucket (directory) to which heap dumps are uploaded. Enabling this implies that heap dumps should be generated on OOM (dump_heap_on_oom is set to true).
- Type
str
- launcher_machine_type¶
The machine type to use for launching the job. The default is n1-standard-1.
- Type
str
- class AdditionalUserLabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataflow_v1beta3.types.GetJobExecutionDetailsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to get job execution details.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains the job specified by job_id.
- Type
str
- page_size¶
If specified, determines the maximum number of stages to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results.
- Type
int
- class google.cloud.dataflow_v1beta3.types.GetJobMetricsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to get job metrics.
- start_time¶
Return only metric data that has changed since this time. Default is to return all information about all metrics for the job.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains the job specified by job_id.
- Type
str
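For reference, a minimal sketch of issuing this request via the generated MetricsV1Beta3Client; the project_id and job_id fields are assumed from the full request message:

```python
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.MetricsV1Beta3Client()

metrics = client.get_job_metrics(
    request=dataflow_v1beta3.GetJobMetricsRequest(
        project_id="my-project",                  # assumed field
        job_id="2023-01-01_00_00_00-1234567890",  # assumed field
        location="us-central1",
    )
)
# JobMetrics.metrics is a sequence of MetricUpdate messages.
for update in metrics.metrics:
    # MetricStructuredName.name is assumed from the full message.
    print(update.name.name, update.kind, update.scalar)
```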
- class google.cloud.dataflow_v1beta3.types.GetJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to get the state of a Cloud Dataflow job.
- view¶
The level of information requested in response.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Type
str
- class google.cloud.dataflow_v1beta3.types.GetSnapshotRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to get information about a snapshot.
- class google.cloud.dataflow_v1beta3.types.GetStageExecutionDetailsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to get information about a particular execution stage of a job. Currently only tracked for Batch jobs.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains the job specified by job_id.
- Type
str
- page_size¶
If specified, determines the maximum number of work items to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results.
- Type
int
- page_token¶
If supplied, this should be the value of next_page_token returned by an earlier call. This will cause the next page of results to be returned.
- Type
str
- start_time¶
Lower time bound of work items to include, by start time.
- end_time¶
Upper time bound of work items to include, by start time.
- class google.cloud.dataflow_v1beta3.types.GetTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to retrieve a Cloud Dataflow job template.
- gcs_path¶
Required. A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with ‘gs://’.
This field is a member of oneof template.
- Type
str
- view¶
The view to retrieve. Defaults to METADATA_ONLY.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- Type
str
- class TemplateView(value)[source]¶
Bases:
proto.enums.Enum
The various views of a template that may be retrieved.
- Values:
- METADATA_ONLY (0):
Template view that retrieves only the metadata associated with the template.
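A minimal sketch of retrieving template metadata with the generated TemplatesServiceClient; project_id and the TemplateMetadata/ParameterMetadata fields read below are assumed from the full API:

```python
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.TemplatesServiceClient()

response = client.get_template(
    request=dataflow_v1beta3.GetTemplateRequest(
        project_id="my-project",  # assumed field
        gcs_path="gs://dataflow-templates/latest/Word_Count",
        view=dataflow_v1beta3.GetTemplateRequest.TemplateView.METADATA_ONLY,
        location="us-central1",
    )
)
for param in response.metadata.parameters:  # assumed TemplateMetadata fields
    print(param.name, param.param_type)
```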
- class google.cloud.dataflow_v1beta3.types.GetTemplateResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response to a GetTemplate request.
- status¶
The status of the get template request. Any problems with the request will be indicated in the error_details.
- Type
google.rpc.status_pb2.Status
- metadata¶
The template metadata describing the template name, available parameters, etc.
- template_type¶
Template Type.
- runtime_metadata¶
Describes the runtime metadata with SDKInfo and available parameters.
- class TemplateType(value)[source]¶
Bases:
proto.enums.Enum
Template Type.
- Values:
- UNKNOWN (0):
Unknown Template Type.
- LEGACY (1):
Legacy Template.
- FLEX (2):
Flex Template.
- class google.cloud.dataflow_v1beta3.types.InvalidTemplateParameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Used in the error_details field of a google.rpc.Status message, this indicates problems with the template parameter.
- parameter_violations¶
Describes all parameter violations in a template request.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.InvalidTemplateParameters.ParameterViolation]
- class google.cloud.dataflow_v1beta3.types.Job(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Defines a job to be run by the Cloud Dataflow service.
- id¶
The unique ID of this job.
This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
- Type
str
- name¶
The user-specified Cloud Dataflow job name.
Only one Job with a given name may exist in a project at any given time. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job.
The name must match the regular expression
[a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- Type
str
- type_¶
The type of Cloud Dataflow job.
- environment¶
The environment for the job.
- steps¶
Exactly one of step or steps_location should be specified.
The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.Step]
- current_state¶
The current state of the job.
Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified.
A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made.
This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- current_state_time¶
The timestamp associated with the current state.
- requested_state¶
The job’s requested state.
UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job’s requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state.
- execution_info¶
Deprecated.
- create_time¶
The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- replace_job_id¶
If this job is an update of an existing job, this field is the job ID of the job it replaced.
When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- Type
str
- transform_name_mapping¶
The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- client_request_id¶
The client’s unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client’s ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- Type
str
- replaced_by_job_id¶
If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- Type
str
- temp_files¶
A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported.
The supported files are:
Google Cloud Storage:
storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- Type
MutableSequence[str]
- labels¶
User-defined labels for this job.
The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions:
Keys must conform to regexp: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}
Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
Both keys and values are additionally constrained to be <= 128 bytes in size.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Type
str
- pipeline_description¶
Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- stage_states¶
This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ExecutionStageState]
- job_metadata¶
This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- start_time¶
The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
- created_from_snapshot_id¶
If this is specified, the job’s initial state is populated from the given snapshot.
- Type
str
- satisfies_pzs¶
Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- Type
bool
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class TransformNameMappingEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
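Putting the fields above together, a sketch of creating a job directly through the generated JobsV1Beta3Client (most callers launch templates instead); project_id is assumed:

```python
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.JobsV1Beta3Client()

job = dataflow_v1beta3.Job(
    name="my-batch-job",
    type_=dataflow_v1beta3.JobType.JOB_TYPE_BATCH,
    labels={"team": "data-eng"},
    location="us-central1",
)

created = client.create_job(
    request=dataflow_v1beta3.CreateJobRequest(
        project_id="my-project",  # assumed field
        job=job,
        location="us-central1",
    )
)
print(created.id, created.create_time)
```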
- class google.cloud.dataflow_v1beta3.types.JobExecutionDetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about the execution of a job.
- stages¶
The stages of the job execution.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.StageSummary]
- class google.cloud.dataflow_v1beta3.types.JobExecutionInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Additional information about how a Cloud Dataflow job will be executed that isn’t contained in the submitted job.
- stages¶
A mapping from each stage to the information about that stage.
- Type
MutableMapping[str, google.cloud.dataflow_v1beta3.types.JobExecutionStageInfo]
- class StagesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataflow_v1beta3.types.JobExecutionStageInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Contains information about how a particular [google.dataflow.v1beta3.Step][google.dataflow.v1beta3.Step] will be executed.
- class google.cloud.dataflow_v1beta3.types.JobMessage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A particular message pertaining to a Dataflow job.
- time¶
The timestamp of the message.
- message_importance¶
Importance level of the message.
- class google.cloud.dataflow_v1beta3.types.JobMessageImportance(value)[source]¶
Bases:
proto.enums.Enum
Indicates the importance of the message.
- Values:
- JOB_MESSAGE_IMPORTANCE_UNKNOWN (0):
The message importance isn’t specified, or is unknown.
- JOB_MESSAGE_DEBUG (1):
The message is at the ‘debug’ level: typically only useful for software engineers working on the code the job is running. Typically, Dataflow pipeline runners do not display log messages at this level by default.
- JOB_MESSAGE_DETAILED (2):
The message is at the ‘detailed’ level: somewhat verbose, but potentially useful to users. Typically, Dataflow pipeline runners do not display log messages at this level by default. These messages are displayed by default in the Dataflow monitoring UI.
- JOB_MESSAGE_BASIC (5):
The message is at the ‘basic’ level: useful for keeping track of the execution of a Dataflow pipeline. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.
- JOB_MESSAGE_WARNING (3):
The message is at the ‘warning’ level: indicating a condition pertaining to a job which may require human intervention. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.
- JOB_MESSAGE_ERROR (4):
The message is at the ‘error’ level: indicating a condition preventing a job from succeeding. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI.
- class google.cloud.dataflow_v1beta3.types.JobMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata available primarily for filtering jobs. Will be included in the ListJob response and Job SUMMARY view.
- sdk_version¶
The SDK version used to run the job.
- spanner_details¶
Identification of a Spanner source used in the Dataflow job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.SpannerIODetails]
- bigquery_details¶
Identification of a BigQuery source used in the Dataflow job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.BigQueryIODetails]
- big_table_details¶
Identification of a Cloud Bigtable source used in the Dataflow job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.BigTableIODetails]
- pubsub_details¶
Identification of a Pub/Sub source used in the Dataflow job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.PubSubIODetails]
- file_details¶
Identification of a File source used in the Dataflow job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.FileIODetails]
- datastore_details¶
Identification of a Datastore source used in the Dataflow job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.DatastoreIODetails]
- class google.cloud.dataflow_v1beta3.types.JobMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
JobMetrics contains a collection of metrics describing the detailed progress of a Dataflow job. Metrics correspond to user-defined and system-defined metrics in the job.
This resource captures only the most recent values of each metric; time-series data can be queried for them (under the same metric names) from Cloud Monitoring.
- metric_time¶
Timestamp as of which metric values are current.
- metrics¶
All metrics for this job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.MetricUpdate]
- class google.cloud.dataflow_v1beta3.types.JobState(value)[source]¶
Bases:
proto.enums.Enum
Describes the overall state of a [google.dataflow.v1beta3.Job][google.dataflow.v1beta3.Job].
- Values:
- JOB_STATE_UNKNOWN (0):
The job’s run state isn’t specified.
- JOB_STATE_STOPPED (1):
JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JOB_STATE_RUNNING (2):
JOB_STATE_RUNNING indicates that the job is currently running.
- JOB_STATE_DONE (3):
JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JOB_STATE_FAILED (4):
JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JOB_STATE_CANCELLED (5):
JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JOB_STATE_UPDATED (6):
JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JOB_STATE_DRAINING (7):
JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JOB_STATE_DRAINED (8):
JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JOB_STATE_PENDING (9):
JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JOB_STATE_CANCELLING (10):
JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JOB_STATE_QUEUED (11):
JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JOB_STATE_RESOURCE_CLEANING_UP (12):
JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job’s associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
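Because the states above split into terminal and non-terminal ones, a common pattern is to poll get_job until a terminal state is reached; a sketch, with the project_id and job_id fields on GetJobRequest assumed from the full message:

```python
import time

from google.cloud import dataflow_v1beta3

TERMINAL_STATES = {
    dataflow_v1beta3.JobState.JOB_STATE_DONE,
    dataflow_v1beta3.JobState.JOB_STATE_FAILED,
    dataflow_v1beta3.JobState.JOB_STATE_CANCELLED,
    dataflow_v1beta3.JobState.JOB_STATE_UPDATED,
    dataflow_v1beta3.JobState.JOB_STATE_DRAINED,
}

def wait_for_terminal_state(client, project_id, job_id, location):
    """Poll until the job reaches a state that permits no further updates."""
    while True:
        job = client.get_job(
            request=dataflow_v1beta3.GetJobRequest(
                project_id=project_id,  # assumed field
                job_id=job_id,          # assumed field
                location=location,
            )
        )
        if job.current_state in TERMINAL_STATES:
            return job.current_state
        time.sleep(30)  # modest interval to avoid hammering the API
```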
- class google.cloud.dataflow_v1beta3.types.JobType(value)[source]¶
Bases:
proto.enums.Enum
Specifies the processing model used by a [google.dataflow.v1beta3.Job], which determines the way the Job is managed by the Cloud Dataflow service (how workers are scheduled, how inputs are sharded, etc).
- Values:
- JOB_TYPE_UNKNOWN (0):
The type of the job is unspecified, or unknown.
- JOB_TYPE_BATCH (1):
A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
- JOB_TYPE_STREAMING (2):
A continuously streaming job with no end: data is read, processed, and written continuously.
- class google.cloud.dataflow_v1beta3.types.JobView(value)[source]¶
Bases:
proto.enums.Enum
Selector for how much information is returned in Job responses.
- Values:
- JOB_VIEW_UNKNOWN (0):
The job view to return isn’t specified, or is unknown. Responses will contain at least the JOB_VIEW_SUMMARY information, and may contain additional information.
- JOB_VIEW_SUMMARY (1):
Request summary information only:
Project ID, Job ID, job name, job type, job status, start/end time, and Cloud SDK version details.
- JOB_VIEW_ALL (2):
Request all information available for this job.
- JOB_VIEW_DESCRIPTION (3):
Request summary info and limited job description data for steps, labels and environment.
- class google.cloud.dataflow_v1beta3.types.KeyRangeDataDiskAssignment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Data disk assignment information for a specific key-range of a sharded computation. Currently we only support UTF-8 character splits to simplify encoding into JSON.
- class google.cloud.dataflow_v1beta3.types.KeyRangeLocation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Location information for a specific key-range of a sharded computation. Currently we only support UTF-8 character splits to simplify encoding into JSON.
- delivery_endpoint¶
The physical location of this range assignment to be used for streaming computation cross-worker message delivery.
- Type
str
- data_disk¶
The name of the data disk where data for this range is stored. This name is local to the Google Cloud Platform project and uniquely identifies the disk within that project, for example “myproject-1014-104817-4c2-harness-0-disk-1”.
- Type
str
- class google.cloud.dataflow_v1beta3.types.KindType(value)[source]¶
Bases:
proto.enums.Enum
Type of transform or stage operation.
- Values:
- UNKNOWN_KIND (0):
Unrecognized transform type.
- PAR_DO_KIND (1):
ParDo transform.
- GROUP_BY_KEY_KIND (2):
Group By Key transform.
- FLATTEN_KIND (3):
Flatten transform.
- READ_KIND (4):
Read transform.
- WRITE_KIND (5):
Write transform.
- CONSTANT_KIND (6):
Constructs from a constant value, such as with Create.of.
- SINGLETON_KIND (7):
Creates a Singleton view of a collection.
- SHUFFLE_KIND (8):
Opening or closing a shuffle session, often as part of a GroupByKey.
- class google.cloud.dataflow_v1beta3.types.LaunchFlexTemplateParameter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Launch FlexTemplate Parameter.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- job_name¶
Required. The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
- Type
str
- container_spec_gcs_path¶
Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
This field is a member of oneof template.
- Type
str
- launch_options¶
Launch options for this flex template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
- environment¶
The runtime environment for the FlexTemplate job.
- update¶
Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
- Type
bool
- transform_name_mappings¶
Use this to pass transform_name_mappings for streaming update jobs. Example: {“oldTransformName”:”newTransformName”,…}.
- class LaunchOptionsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class ParametersEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class TransformNameMappingsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataflow_v1beta3.types.LaunchFlexTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to launch a Cloud Dataflow job from a FlexTemplate.
- launch_parameter¶
Required. Parameters to launch a job from a Flex Template.
- location¶
Required. The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. E.g., us-central1, us-west1.
- Type
str
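A minimal sketch of launching a Flex Template with the generated FlexTemplatesServiceClient; project_id and the parameters map are assumed from the full messages:

```python
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.FlexTemplatesServiceClient()

param = dataflow_v1beta3.LaunchFlexTemplateParameter(
    job_name="streaming-beam-job",
    container_spec_gcs_path="gs://my-bucket/templates/spec.json",
    parameters={"inputSubscription": "my-subscription"},  # assumed field
    environment=dataflow_v1beta3.FlexTemplateRuntimeEnvironment(
        max_workers=5,
        temp_location="gs://my-bucket/tmp",
    ),
)

response = client.launch_flex_template(
    request=dataflow_v1beta3.LaunchFlexTemplateRequest(
        project_id="my-project",  # assumed field
        launch_parameter=param,
        location="us-central1",
    )
)
print(response.job.id)
```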
- class google.cloud.dataflow_v1beta3.types.LaunchFlexTemplateResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response to the request to launch a job from Flex Template.
- job¶
The job that was launched, if the request was not a dry run and the job was successfully launched.
- class google.cloud.dataflow_v1beta3.types.LaunchTemplateParameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Parameters to provide to the template being launched.
- environment¶
The runtime environment for the job.
- update¶
If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
- Type
bool
- transform_name_mapping¶
Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- class ParametersEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class TransformNameMappingEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataflow_v1beta3.types.LaunchTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to launch a template.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- validate_only¶
If true, the request is validated but not actually executed. Defaults to false.
- Type
bool
- gcs_path¶
A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with ‘gs://’.
This field is a member of oneof template.
- Type
str
- dynamic_template¶
Params for launching a dynamic template.
This field is a member of oneof template.
- launch_parameters¶
The parameters of the template to launch. This should be part of the body of the POST request.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
- Type
str
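For comparison with the Flex Template flow, a sketch of launching a classic template via TemplatesServiceClient; project_id, job_name, and parameters are assumed from the full messages:

```python
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.TemplatesServiceClient()

response = client.launch_template(
    request=dataflow_v1beta3.LaunchTemplateRequest(
        project_id="my-project",  # assumed field
        validate_only=False,      # set True for a dry run
        gcs_path="gs://dataflow-templates/latest/Word_Count",
        launch_parameters=dataflow_v1beta3.LaunchTemplateParameters(
            job_name="wordcount-launch",  # assumed field
            parameters={                  # assumed field (see ParametersEntry)
                "inputFile": "gs://my-bucket/input.txt",
                "output": "gs://my-bucket/output",
            },
        ),
        location="us-central1",
    )
)
print(response.job.id)
```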
- class google.cloud.dataflow_v1beta3.types.LaunchTemplateResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response to the request to launch a template.
- job¶
The job that was launched, if the request was not a dry run and the job was successfully launched.
- class google.cloud.dataflow_v1beta3.types.ListJobMessagesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to list job messages. Up to max_results messages will be returned in the specified time range, starting with the oldest messages first. If no time range is specified, the results start with the oldest message.
- minimum_importance¶
Filter to only get messages with importance >= level.
- page_size¶
If specified, determines the maximum number of messages to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results.
- Type
int
- page_token¶
If supplied, this should be the value of next_page_token returned by an earlier call. This will cause the next page of results to be returned.
- Type
str
- start_time¶
If specified, return only messages with timestamps >= start_time. The default is the job creation time (i.e. beginning of messages).
- end_time¶
Return only messages with timestamps < end_time. The default is now (i.e. return up to the latest messages available).
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains the job specified by job_id.
- Type
str
- class google.cloud.dataflow_v1beta3.types.ListJobMessagesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response to a request to list job messages.
- job_messages¶
Messages in ascending timestamp order.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.JobMessage]
- autoscaling_events¶
Autoscaling events in ascending timestamp order.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.AutoscalingEvent]
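A sketch of paging through job messages with the generated MessagesV1Beta3Client; project_id, job_id, and the message_text field read below are assumed from the full messages:

```python
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.MessagesV1Beta3Client()

pager = client.list_job_messages(
    request=dataflow_v1beta3.ListJobMessagesRequest(
        project_id="my-project",                  # assumed field
        job_id="2023-01-01_00_00_00-1234567890",  # assumed field
        minimum_importance=(
            dataflow_v1beta3.JobMessageImportance.JOB_MESSAGE_WARNING
        ),
        location="us-central1",
    )
)
for message in pager:  # the pager transparently fetches subsequent pages
    print(message.time, message.message_text)  # message_text assumed
```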
- class google.cloud.dataflow_v1beta3.types.ListJobsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to list Cloud Dataflow jobs.
- filter¶
The kind of filter to use.
- view¶
Deprecated. ListJobs always returns summaries now. Use GetJob for other JobViews.
- page_size¶
If there are many jobs, limit response to at most this many. The actual number of jobs returned will be the lesser of max_responses and an unspecified server-defined limit.
- Type
int
- page_token¶
Set this to the ‘next_page_token’ field of a previous response to request additional results in a long list.
- Type
str
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Type
str
- class Filter(value)[source]¶
Bases:
proto.enums.Enum
This field filters out and returns jobs in the specified job state. The order of data returned is determined by the filter used, and is subject to change.
- Values:
- UNKNOWN (0):
The filter isn’t specified, or is unknown. This returns all jobs ordered on descending JobUuid.
- ALL (1):
Returns all running jobs first ordered on creation timestamp, then returns all terminated jobs ordered on the termination timestamp.
- TERMINATED (2):
Filters the jobs that have a terminated state, ordered on the termination timestamp. Example terminated states: JOB_STATE_STOPPED, JOB_STATE_UPDATED, JOB_STATE_DRAINED, etc.
- ACTIVE (3):
Filters the jobs that are running ordered on the creation timestamp.
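A short sketch of using this filter when listing jobs through the generated JobsV1Beta3Client; project_id is assumed:

```python
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.JobsV1Beta3Client()

pager = client.list_jobs(
    request=dataflow_v1beta3.ListJobsRequest(
        project_id="my-project",  # assumed field
        filter=dataflow_v1beta3.ListJobsRequest.Filter.ACTIVE,
        location="us-central1",
        page_size=50,
    )
)
for job in pager:  # pages are fetched lazily as you iterate
    print(job.id, job.name, job.current_state)
```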
- class google.cloud.dataflow_v1beta3.types.ListJobsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response to a request to list Cloud Dataflow jobs in a project. This might be a partial response, depending on the page size in the ListJobsRequest. However, if the project does not have any jobs, an instance of ListJobsResponse is not returned and the request’s response body is empty {}.
- jobs¶
A subset of the requested job information.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.Job]
- failed_location¶
Zero or more messages describing the [regional endpoints] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that failed to respond.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.FailedLocation]
- class google.cloud.dataflow_v1beta3.types.ListSnapshotsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to list snapshots.
- class google.cloud.dataflow_v1beta3.types.ListSnapshotsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
List of snapshots.
- snapshots¶
Returned snapshots.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.Snapshot]
- class google.cloud.dataflow_v1beta3.types.MetricStructuredName(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifies a metric, by describing the source which generated the metric.
- origin¶
Origin (namespace) of metric name. May be blank for user-defined metrics; will be “dataflow” for metrics defined by the Dataflow service or SDK.
- Type
str
- context¶
Zero or more labeled fields which identify the part of the job this metric is associated with, such as the name of a step or collection.
For example, built-in counters associated with steps will have context[‘step’] = <step-name>. Counters associated with PCollections in the SDK will have context[‘pcollection’] = <pcollection-name>.
- class ContextEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataflow_v1beta3.types.MetricUpdate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes the state of a metric.
- name¶
Name of the metric.
- kind¶
Metric aggregation kind. The possible metric aggregation kinds are “Sum”, “Max”, “Min”, “Mean”, “Set”, “And”, “Or”, and “Distribution”. The specified aggregation kind is case-insensitive.
If omitted, this is not an aggregated value but instead a single metric sample value.
- Type
str
- cumulative¶
True if this metric is reported as the total cumulative aggregate value accumulated since the worker started working on this WorkItem. By default this is false, indicating that this metric is reported as a delta that is not associated with any WorkItem.
- Type
bool
- scalar¶
Worker-computed aggregate value for aggregation kinds “Sum”, “Max”, “Min”, “And”, and “Or”. The possible value types are Long, Double, and Boolean.
- mean_sum¶
Worker-computed aggregate value for the “Mean” aggregation kind. This holds the sum of the aggregated values and is used in combination with mean_count below to obtain the actual mean aggregate value. The only possible value types are Long and Double.
- mean_count¶
Worker-computed aggregate value for the “Mean” aggregation kind. This holds the count of the aggregated values and is used in combination with mean_sum above to obtain the actual mean aggregate value. The only possible value type is Long.
- set_¶
Worker-computed aggregate value for the “Set” aggregation kind. The only possible value type is a list of Values whose type can be Long, Double, or String, according to the metric’s type. All Values in the list must be of the same type.
- distribution¶
A struct value describing properties of a distribution of numeric values.
- gauge¶
A struct value describing properties of a Gauge. Metrics of gauge type show the value of a metric across time, and are aggregated based on the newest value.
- internal¶
Worker-computed aggregate value for internal use by the Dataflow service.
- update_time¶
Timestamp associated with the metric value. Optional when workers are reporting work progress; it will be filled in responses from the metrics API.
- class google.cloud.dataflow_v1beta3.types.MountedDataDisk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes a mounted data disk.
- class google.cloud.dataflow_v1beta3.types.Package(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The packages that must be installed in order for a worker to run the steps of the Cloud Dataflow job that will be assigned to its worker pool.
This is the mechanism by which the Cloud Dataflow SDK causes code to be loaded onto the workers. For example, the Cloud Dataflow Java SDK might use this to install jars containing the user’s code and all of the various dependencies (libraries, data files, etc.) required in order for that code to run.
- class google.cloud.dataflow_v1beta3.types.ParameterMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for a specific parameter.
- param_type¶
Optional. The type of the parameter. Used for selecting input picker.
- custom_metadata¶
Optional. Additional metadata for describing this parameter.
- class CustomMetadataEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataflow_v1beta3.types.ParameterType(value)[source]¶
Bases:
proto.enums.Enum
ParameterType specifies what kind of input we need for this parameter.
- Values:
- DEFAULT (0):
Default input type.
- TEXT (1):
The parameter specifies generic text input.
- GCS_READ_BUCKET (2):
The parameter specifies a Cloud Storage Bucket to read from.
- GCS_WRITE_BUCKET (3):
The parameter specifies a Cloud Storage Bucket to write to.
- GCS_READ_FILE (4):
The parameter specifies a Cloud Storage file path to read from.
- GCS_WRITE_FILE (5):
The parameter specifies a Cloud Storage file path to write to.
- GCS_READ_FOLDER (6):
The parameter specifies a Cloud Storage folder path to read from.
- GCS_WRITE_FOLDER (7):
The parameter specifies a Cloud Storage folder to write to.
- PUBSUB_TOPIC (8):
The parameter specifies a Pub/Sub Topic.
- PUBSUB_SUBSCRIPTION (9):
The parameter specifies a Pub/Sub Subscription.
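A minimal sketch of how ParameterMetadata and ParameterType fit together when describing a template parameter; the parameter name, label, help text, and custom metadata entry are hypothetical:
from google.cloud import dataflow_v1beta3

# Hypothetical parameter description for a template's metadata.
param = dataflow_v1beta3.ParameterMetadata(
    name="inputFile",                                   # hypothetical name
    label="Input file",
    help_text="Cloud Storage file to read.",
    param_type=dataflow_v1beta3.ParameterType.GCS_READ_FILE,
    custom_metadata={"owner": "data-eng"},              # hypothetical entry
)
print(param.param_type)   # ParameterType.GCS_READ_FILE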
- class google.cloud.dataflow_v1beta3.types.PipelineDescription(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A descriptive representation of a submitted pipeline as well as its executed form. This data is provided by the Dataflow service for ease of visualizing the pipeline and interpreting Dataflow-provided metrics.
- original_pipeline_transform¶
Description of each transform in the pipeline and collections between them.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.TransformSummary]
- execution_pipeline_stage¶
Description of each stage of execution of the pipeline.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ExecutionStageSummary]
- display_data¶
Pipeline level display data.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.DisplayData]
- class google.cloud.dataflow_v1beta3.types.ProgressTimeseries(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about the progress of some component of job execution.
- data_points¶
History of progress for the component.
Points are sorted by time.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ProgressTimeseries.Point]
- class Point(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A point in the timeseries.
- time¶
The timestamp of the point.
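A brief sketch of reading the timeseries. It assumes progress is a ProgressTimeseries taken from a StageSummary or WorkItemDetails message (both defined later on this page); in the underlying message each Point also carries a numeric value field:
# Assumes `progress` is a ProgressTimeseries obtained from a StageSummary
# or WorkItemDetails message.
for point in progress.data_points:
    # Each Point pairs a timestamp with a progress value.
    print(point.time, point.value)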
- class google.cloud.dataflow_v1beta3.types.PubSubIODetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for a Pub/Sub connector used by the job.
- class google.cloud.dataflow_v1beta3.types.PubsubLocation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifies a pubsub location to use for transferring data into or out of a streaming Dataflow job.
- topic¶
A pubsub topic, in the form of “pubsub.googleapis.com/topics/<project-id>/<topic-name>”.
- Type
str
- subscription¶
A pubsub subscription, in the form of “pubsub.googleapis.com/subscriptions/<project-id>/<subscription-name>”.
- Type
str
- timestamp_label¶
If set, contains a pubsub label from which to extract record timestamps. If left empty, record timestamps will be generated upon arrival.
- Type
str
- id_label¶
If set, contains a pubsub label from which to extract record ids. If left empty, record deduplication will be strictly best effort.
- Type
str
- tracking_subscription¶
If set, specifies the pubsub subscription that will be used for tracking custom time timestamps for watermark estimation.
- Type
str
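A minimal construction sketch, using hypothetical project, topic, and attribute names in the documented path format:
from google.cloud import dataflow_v1beta3

location = dataflow_v1beta3.PubsubLocation(
    topic="pubsub.googleapis.com/topics/my-project/my-topic",  # hypothetical
    timestamp_label="event_ts",  # hypothetical attribute carrying timestamps
    id_label="event_id",         # hypothetical attribute carrying record ids
)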
- class google.cloud.dataflow_v1beta3.types.PubsubSnapshotMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Represents a Pubsub snapshot.
- expire_time¶
The expire time of the Pubsub snapshot.
- class google.cloud.dataflow_v1beta3.types.RuntimeEnvironment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The environment values to set at runtime.
- max_workers¶
The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
- Type
int
- zone¶
The Compute Engine availability zone for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
- Type
str
- temp_location¶
The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with gs://.
- Type
str
- bypass_temp_dir_validation¶
Whether to bypass the safety checks for the job’s temporary directory. Use with caution.
- Type
bool
- machine_type¶
The machine type to use for the job. Defaults to the value from the template if not specified.
- Type
str
- additional_experiments¶
Additional experiment flags for the job, specified with the --experiments option.
- Type
MutableSequence[str]
- network¶
Network to which VMs will be assigned. If empty or unspecified, the service will use the network “default”.
- Type
str
- subnetwork¶
Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form “https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK” or “regions/REGION/subnetworks/SUBNETWORK”. If the subnetwork is located in a Shared VPC network, you must use the complete URL.
- Type
str
- additional_user_labels¶
Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. An object containing a list of “key”: value pairs. Example: { “name”: “wrench”, “mass”: “1kg”, “count”: “3” }.
- kms_key_name¶
Name for the Cloud KMS key for the job. Key format is:
projects/<project>/locations/<location>/keyRings/<keyring>/cryptoKeys/<key>
- Type
str
- ip_configuration¶
Configuration for VM IPs.
- worker_region¶
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. “us-west1”. Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane’s region.
- Type
str
- worker_zone¶
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. “us-west1-a”. Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane’s region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
- Type
str
- class AdditionalUserLabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
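To make the field relationships concrete, here is a hedged sketch of launching a classic template with a RuntimeEnvironment. The project, bucket, job name, and label values are placeholders; Word_Count is shown as a commonly cited public template, and its parameter names are assumptions:
from google.cloud import dataflow_v1beta3

env = dataflow_v1beta3.RuntimeEnvironment(
    max_workers=10,
    temp_location="gs://my-bucket/temp",           # hypothetical bucket
    machine_type="n1-standard-2",                  # illustrative machine type
    additional_user_labels={"team": "analytics"},  # hypothetical label
    worker_region="us-central1",
)

templates = dataflow_v1beta3.TemplatesServiceClient()
job = templates.create_job_from_template(
    request=dataflow_v1beta3.CreateJobFromTemplateRequest(
        project_id="my-project",        # hypothetical project
        location="us-central1",
        job_name="wordcount-example",   # hypothetical job name
        gcs_path="gs://dataflow-templates/latest/Word_Count",
        parameters={                    # assumed Word_Count parameter names
            "inputFile": "gs://my-bucket/input.txt",
            "output": "gs://my-bucket/output",
        },
        environment=env,
    )
)
print(job.id, job.current_state)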
- class google.cloud.dataflow_v1beta3.types.RuntimeMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
RuntimeMetadata describing a runtime environment.
- sdk_info¶
SDK Info for the template.
- parameters¶
The parameters for the template.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ParameterMetadata]
- class google.cloud.dataflow_v1beta3.types.SDKInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
SDK Information.
- language¶
Required. The SDK Language.
- class Language(value)[source]¶
Bases:
proto.enums.Enum
SDK Language.
- Values:
- UNKNOWN (0):
UNKNOWN Language.
- JAVA (1):
Java.
- PYTHON (2):
Python.
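A one-line construction sketch; the version string is hypothetical, and the message's optional version field is assumed from the underlying proto definition:
from google.cloud import dataflow_v1beta3

info = dataflow_v1beta3.SDKInfo(
    language=dataflow_v1beta3.SDKInfo.Language.PYTHON,
    version="2.50.0",   # hypothetical SDK version
)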
- class google.cloud.dataflow_v1beta3.types.SdkHarnessContainerImage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Defines an SDK harness container for executing Dataflow pipelines.
- use_single_core_per_container¶
If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
- Type
bool
- environment_id¶
Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- Type
str
- capabilities¶
The set of capabilities enumerated in the above Environment proto. See also https://github.com/apache/beam/blob/master/model/pipeline/src/main/proto/beam_runner_api.proto
- Type
MutableSequence[str]
- class google.cloud.dataflow_v1beta3.types.SdkVersion(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The version of the SDK used to run the job.
- sdk_support_status¶
The support status for this SDK version.
- class SdkSupportStatus(value)[source]¶
Bases:
proto.enums.Enum
The support status of the SDK used to run the job.
- Values:
- UNKNOWN (0):
Cloud Dataflow is unaware of this version.
- SUPPORTED (1):
This is a known version of an SDK, and is supported.
- STALE (2):
A newer version of the SDK family exists, and an update is recommended.
- DEPRECATED (3):
This version of the SDK is deprecated and will eventually be unsupported.
- UNSUPPORTED (4):
Support for this SDK version has ended and it should no longer be used.
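A sketch of acting on the status; it assumes job is a Job message fetched via JobsV1Beta3Client.get_job with populated job_metadata:
# Assumes `job` is a Job message with populated job_metadata.
from google.cloud import dataflow_v1beta3

status = job.job_metadata.sdk_version.sdk_support_status
if status in (
    dataflow_v1beta3.SdkVersion.SdkSupportStatus.DEPRECATED,
    dataflow_v1beta3.SdkVersion.SdkSupportStatus.UNSUPPORTED,
):
    print("Consider upgrading the SDK:", job.job_metadata.sdk_version.version)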
- class google.cloud.dataflow_v1beta3.types.ShuffleMode(value)[source]¶
Bases:
proto.enums.Enum
Specifies the shuffle mode used by a [google.dataflow.v1beta3.Job], which determines how data is shuffled during processing. More details in: https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline#dataflow-shuffle
- Values:
- SHUFFLE_MODE_UNSPECIFIED (0):
Shuffle mode information is not available.
- VM_BASED (1):
Shuffle is done on the worker VMs.
- SERVICE_BASED (2):
Shuffle is done on the service side.
- class google.cloud.dataflow_v1beta3.types.Snapshot(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Represents a snapshot of a job.
- creation_time¶
The time this snapshot was created.
- ttl¶
The time after which this snapshot will be automatically deleted.
- state¶
State of the snapshot.
- pubsub_metadata¶
Pub/Sub snapshot metadata.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.PubsubSnapshotMetadata]
- disk_size_bytes¶
The disk byte size of the snapshot. Only available for snapshots in READY state.
- Type
int
- class google.cloud.dataflow_v1beta3.types.SnapshotJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to create a snapshot of a job.
- ttl¶
TTL for the snapshot.
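A hedged sketch of issuing the request via JobsV1Beta3Client.snapshot_job; the identifiers are placeholders, and proto-plus accepts a datetime.timedelta for the Duration-typed ttl:
from datetime import timedelta
from google.cloud import dataflow_v1beta3

jobs = dataflow_v1beta3.JobsV1Beta3Client()
snapshot = jobs.snapshot_job(
    request=dataflow_v1beta3.SnapshotJobRequest(
        project_id="my-project",   # hypothetical project
        location="us-central1",
        job_id="my-job-id",        # hypothetical job ID
        ttl=timedelta(days=1),     # snapshot expires after one day
    )
)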
- class google.cloud.dataflow_v1beta3.types.SnapshotState(value)[source]¶
Bases:
proto.enums.Enum
Snapshot state.
- Values:
- UNKNOWN_SNAPSHOT_STATE (0):
Unknown state.
- PENDING (1):
Snapshot intent to create has been persisted, snapshotting of state has not yet started.
- RUNNING (2):
Snapshotting is being performed.
- READY (3):
Snapshot has been created and is ready to be used.
- FAILED (4):
Snapshot failed to be created.
- DELETED (5):
Snapshot has been deleted.
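A short sketch of polling the state, continuing the snapshot_job sketch above; the project and location remain placeholders:
from google.cloud import dataflow_v1beta3

snapshots = dataflow_v1beta3.SnapshotsV1Beta3Client()
snap = snapshots.get_snapshot(
    request=dataflow_v1beta3.GetSnapshotRequest(
        project_id="my-project",    # hypothetical project
        location="us-central1",
        snapshot_id=snapshot.id,    # from the SnapshotJobRequest sketch above
    )
)
if snap.state == dataflow_v1beta3.SnapshotState.READY:
    print("snapshot ready; disk size:", snap.disk_size_bytes)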
- class google.cloud.dataflow_v1beta3.types.SpannerIODetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for a Spanner connector used by the job.
- class google.cloud.dataflow_v1beta3.types.StageExecutionDetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about the workers and work items within a stage.
- workers¶
Workers that have done work on the stage.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.WorkerDetails]
- class google.cloud.dataflow_v1beta3.types.StageSummary(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about a particular execution stage of a job.
- state¶
State of this stage.
- start_time¶
Start time of this stage.
- end_time¶
End time of this stage.
If the work item is completed, this is the actual end time of the stage. Otherwise, it is the predicted end time.
- progress¶
Progress for this stage. Only applicable to Batch jobs.
- metrics¶
Metrics for this stage.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.MetricUpdate]
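StageSummary messages are returned by the job execution details API. A hedged sketch with hypothetical identifiers; the stage_id field printed below is assumed from the underlying message definition:
from google.cloud import dataflow_v1beta3

metrics_client = dataflow_v1beta3.MetricsV1Beta3Client()
pager = metrics_client.get_job_execution_details(
    request=dataflow_v1beta3.GetJobExecutionDetailsRequest(
        project_id="my-project",  # hypothetical project
        location="us-central1",
        job_id="my-job-id",       # hypothetical job ID
    )
)
for stage in pager:  # the pager yields StageSummary messages
    print(stage.stage_id, stage.state, stage.start_time, stage.end_time)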
- class google.cloud.dataflow_v1beta3.types.StateFamilyConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
State family configuration.
- class google.cloud.dataflow_v1beta3.types.Step(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Defines a particular step within a Cloud Dataflow job.
A job consists of multiple steps, each of which performs some specific operation as part of the overall job. Data is typically passed from one step to another as part of the job.
Here’s an example of a sequence of steps which together implement a Map-Reduce job:
- Read a collection of data from some source, parsing the collection’s elements.
- Validate the elements.
- Apply a user-defined function to map each element to some value and extract an element-specific key value.
- Group elements with the same key into a single element with that key, transforming a multiply-keyed collection into a uniquely-keyed collection.
- Write the elements out to some data sink.
Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce.
- name¶
The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- Type
str
- properties¶
Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
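Steps are populated by the service; reading them back requires JOB_VIEW_ALL, as the properties description notes. A minimal sketch with placeholder identifiers:
from google.cloud import dataflow_v1beta3

jobs = dataflow_v1beta3.JobsV1Beta3Client()
job = jobs.get_job(
    request=dataflow_v1beta3.GetJobRequest(
        project_id="my-project",  # hypothetical project
        location="us-central1",
        job_id="my-job-id",       # hypothetical job ID
        view=dataflow_v1beta3.JobView.JOB_VIEW_ALL,  # needed to include steps
    )
)
for step in job.steps:
    print(step.kind, step.name)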
- class google.cloud.dataflow_v1beta3.types.StreamLocation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes a stream of data, either as input to be processed or as output of a streaming Dataflow job.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- streaming_stage_location¶
The stream is part of another computation within the current streaming Dataflow job.
This field is a member of oneof location.
- side_input_location¶
The stream is a streaming side input.
This field is a member of oneof location.
- class google.cloud.dataflow_v1beta3.types.StreamingApplianceSnapshotConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Streaming appliance snapshot configuration.
- class google.cloud.dataflow_v1beta3.types.StreamingComputationRanges(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes full or partial data disk assignment information of the computation ranges.
- range_assignments¶
Data disk assignments for ranges from this computation.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.KeyRangeDataDiskAssignment]
- class google.cloud.dataflow_v1beta3.types.StreamingSideInputLocation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifies the location of a streaming side input.
- class google.cloud.dataflow_v1beta3.types.StreamingStageLocation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifies the location of a streaming computation stage, for stage-to-stage communication.
- class google.cloud.dataflow_v1beta3.types.StructuredMessage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A rich message format, including a human readable string, a key for identifying the message, and structured data associated with the message for programmatic consumption.
- message_key¶
Identifier for this message type. Used by external systems to internationalize or personalize the message.
- Type
str
- parameters¶
The structured data associated with this message.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.StructuredMessage.Parameter]
- class Parameter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Structured data associated with this message.
- value¶
Value for this parameter.
- class google.cloud.dataflow_v1beta3.types.TaskRunnerSettings(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Taskrunner configuration settings.
- task_user¶
The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. “root”.
- Type
str
- task_group¶
The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. “wheel”.
- Type
str
- oauth_scopes¶
The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- Type
MutableSequence[str]
- base_url¶
The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, “Relative Uniform Resource Locators”.
If not specified, the default value is “http://www.googleapis.com/”.
- Type
str
- parallel_worker_settings¶
The settings to pass to the parallel worker harness.
- log_to_serialconsole¶
Whether to send taskrunner log info to Google Compute Engine VM serial console.
- Type
bool
- log_upload_location¶
Indicates where to put logs. If this is not specified, the logs will not be uploaded.
The supported resource type is Google Cloud Storage:
storage.googleapis.com/{bucket}/{object}
bucket.storage.googleapis.com/{object}
- Type
str
- temp_storage_prefix¶
The prefix of the resources the taskrunner should use for temporary storage.
The supported resource type is Google Cloud Storage:
storage.googleapis.com/{bucket}/{object}
bucket.storage.googleapis.com/{object}
- Type
str
- class google.cloud.dataflow_v1beta3.types.TeardownPolicy(value)[source]¶
Bases:
proto.enums.Enum
Specifies what happens to a resource when a Cloud Dataflow [google.dataflow.v1beta3.Job] has completed.
- Values:
- TEARDOWN_POLICY_UNKNOWN (0):
The teardown policy isn’t specified, or is unknown.
- TEARDOWN_ALWAYS (1):
Always teardown the resource.
- TEARDOWN_ON_SUCCESS (2):
Teardown the resource on success. This is useful for debugging failures.
- TEARDOWN_NEVER (3):
Never teardown the resource. This is useful for debugging and development.
- class google.cloud.dataflow_v1beta3.types.TemplateMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata describing a template.
- parameters¶
The parameters for the template.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ParameterMetadata]
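TemplateMetadata is returned when inspecting a template stored in Cloud Storage. A hedged sketch using the public Word_Count template path as an illustration; the project and location are placeholders:
from google.cloud import dataflow_v1beta3

templates = dataflow_v1beta3.TemplatesServiceClient()
response = templates.get_template(
    request=dataflow_v1beta3.GetTemplateRequest(
        project_id="my-project",  # hypothetical project
        location="us-central1",
        gcs_path="gs://dataflow-templates/latest/Word_Count",
    )
)
for p in response.metadata.parameters:
    print(p.name, p.label, p.param_type)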
- class google.cloud.dataflow_v1beta3.types.TopologyConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Global topology of the streaming Dataflow job, including all computations and their sharded locations.
- computations¶
The computations associated with a streaming Dataflow job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.ComputationTopology]
- data_disk_assignments¶
The disks assigned to a streaming Dataflow job.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.DataDiskAssignment]
- user_stage_to_computation_name_map¶
Maps user stage names to stable computation names.
- class UserStageToComputationNameMapEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataflow_v1beta3.types.TransformSummary(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Description of the type, names/ids, and input/outputs for a transform.
- kind¶
Type of transform.
- display_data¶
Transform-specific display data.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.DisplayData]
- class google.cloud.dataflow_v1beta3.types.UpdateJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to update a Cloud Dataflow job.
- job¶
The updated job. Only the job state is updatable; other fields will be ignored.
- location¶
The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Type
str
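Since only the job state is updatable, the usual pattern is to request a state change, e.g. cancellation. A hedged sketch with placeholder identifiers:
from google.cloud import dataflow_v1beta3

jobs = dataflow_v1beta3.JobsV1Beta3Client()
updated = jobs.update_job(
    request=dataflow_v1beta3.UpdateJobRequest(
        project_id="my-project",  # hypothetical project
        location="us-central1",
        job_id="my-job-id",       # hypothetical job ID
        job=dataflow_v1beta3.Job(
            requested_state=dataflow_v1beta3.JobState.JOB_STATE_CANCELLED,
        ),
    )
)
print(updated.current_state)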
- class google.cloud.dataflow_v1beta3.types.WorkItemDetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about an individual work item execution.
- start_time¶
Start time of this work item attempt.
- end_time¶
End time of this work item attempt.
If the work item is completed, this is the actual end time of the work item. Otherwise, it is the predicted end time.
- state¶
State of this work item.
- progress¶
Progress of this work item.
- metrics¶
Metrics for this work item.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.MetricUpdate]
- class google.cloud.dataflow_v1beta3.types.WorkerDetails(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about a worker.
- work_items¶
Work items processed by this worker, sorted by time.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.WorkItemDetails]
- class google.cloud.dataflow_v1beta3.types.WorkerIPAddressConfiguration(value)[source]¶
Bases:
proto.enums.Enum
Specifies how IP addresses should be allocated to the worker machines.
- Values:
- WORKER_IP_UNSPECIFIED (0):
The configuration is unknown, or unspecified.
- WORKER_IP_PUBLIC (1):
Workers should have public IP addresses.
- WORKER_IP_PRIVATE (2):
Workers should have private IP addresses.
- class google.cloud.dataflow_v1beta3.types.WorkerPool(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job.
- num_workers¶
Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- Type
int
- packages¶
Packages to be installed on workers.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.Package]
- default_package_set¶
The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- machine_type¶
Machine type (e.g. “n1-standard-1”). If empty or unspecified, the service will attempt to choose a reasonable default.
- Type
str
- teardown_policy¶
Sets the policy for determining when to turn down the worker pool. Allowed values are TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down.
If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user’s project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs.
If unknown or unspecified, the service will attempt to choose a reasonable default.
- disk_size_gb¶
Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- Type
int
- disk_type¶
Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- Type
str
- zone¶
Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- Type
str
- taskrunner_settings¶
Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- on_host_maintenance¶
The action to take on host maintenance, as defined by the Google Compute Engine API.
- Type
str
- data_disks¶
Data disks that are used by a VM in this workflow.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.Disk]
- autoscaling_settings¶
Settings for autoscaling of this WorkerPool.
- pool_args¶
Extra arguments for this worker pool.
- network¶
Network to which VMs will be assigned. If empty or unspecified, the service will use the network “default”.
- Type
str
- subnetwork¶
Subnetwork to which VMs will be assigned, if desired. Expected to be of the form “regions/REGION/subnetworks/SUBNETWORK”.
- Type
str
- worker_harness_container_image¶
Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry.
Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- Type
str
- num_threads_per_worker¶
The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- Type
int
- ip_configuration¶
Configuration for VM IPs.
- sdk_harness_container_images¶
Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- Type
MutableSequence[google.cloud.dataflow_v1beta3.types.SdkHarnessContainerImage]
- class MetadataEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
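A hedged construction sketch tying together several of the fields above with the TeardownPolicy, WorkerIPAddressConfiguration, and AutoscalingSettings types; all values are illustrative, not recommendations:
from google.cloud import dataflow_v1beta3

pool = dataflow_v1beta3.WorkerPool(
    num_workers=3,
    machine_type="n1-standard-4",          # illustrative machine type
    disk_size_gb=50,
    zone="us-central1-a",                  # illustrative zone
    teardown_policy=dataflow_v1beta3.TeardownPolicy.TEARDOWN_ALWAYS,
    ip_configuration=dataflow_v1beta3.WorkerIPAddressConfiguration.WORKER_IP_PRIVATE,
    autoscaling_settings=dataflow_v1beta3.AutoscalingSettings(
        algorithm=dataflow_v1beta3.AutoscalingAlgorithm.AUTOSCALING_ALGORITHM_BASIC,
        max_num_workers=20,
    ),
)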
- class google.cloud.dataflow_v1beta3.types.WorkerSettings(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Provides data to pass through to the worker harness.
- base_url¶
The base URL for accessing Google Cloud APIs.
When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, “Relative Uniform Resource Locators”.
If not specified, the default value is “http://www.googleapis.com/”.
- Type
str
- service_path¶
The Cloud Dataflow service path relative to the root URL, for example, “dataflow/v1b3/projects”.
- Type
str
- shuffle_service_path¶
The Shuffle service path relative to the root URL, for example, “shuffle/v1beta1”.
- Type
str