Class: Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2OutputDirectory

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/remotebuildexecution_v1/classes.rb,
generated/google/apis/remotebuildexecution_v1/representations.rb

Overview

An OutputDirectory is the output in an ActionResult corresponding to a directory's full contents rather than a single file.

Instance Attribute Summary

Instance Method Summary

Methods included from Core::JsonObjectSupport

#to_json

Methods included from Core::Hashable

process_value, #to_h

Constructor Details

#initialize(**args) ⇒ BuildBazelRemoteExecutionV2OutputDirectory

Returns a new instance of BuildBazelRemoteExecutionV2OutputDirectory



# File 'generated/google/apis/remotebuildexecution_v1/classes.rb', line 1090

def initialize(**args)
  update!(**args)
end
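
A minimal construction sketch, assuming the gem is loaded via the usual require path for this generated client. The RBE alias, the placeholder digest values, and the hash/size_bytes accessors on the Digest class (taken from its description under #tree_digest) are illustrative assumptions, not part of this page:

require 'google/apis/remotebuildexecution_v1'

# Short alias for the long module path used in this sketch (assumption).
RBE = Google::Apis::RemotebuildexecutionV1

# Illustrative digest of an empty blob: SHA-256 of the empty string, size 0.
# The hash/size_bytes accessors are assumed from the Digest description below.
tree_digest = RBE::BuildBazelRemoteExecutionV2Digest.new(
  hash: 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
  size_bytes: 0
)

output_dir = RBE::BuildBazelRemoteExecutionV2OutputDirectory.new(
  path: 'bazel-out/k8-fastbuild/bin/pkg',  # relative path; no leading slash
  tree_digest: tree_digest
)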

Instance Attribute Details

#path ⇒ String

The full path of the directory relative to the working directory. The path separator is a forward slash /. Since this is a relative path, it MUST NOT begin with a leading forward slash. The empty string value is allowed, and it denotes the entire working directory. Corresponds to the JSON property path

Returns:

  • (String)


# File 'generated/google/apis/remotebuildexecution_v1/classes.rb', line 1057

def path
  @path
end
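
As a hedged illustration of the path constraints described above, a client-side check might look like the following; the helper is hypothetical and not part of the library:

# Hypothetical helper mirroring the rules for OutputDirectory#path.
def relative_output_path?(path)
  return true if path.empty?   # empty string denotes the entire working directory
  !path.start_with?('/')       # relative paths MUST NOT begin with a forward slash
end

relative_output_path?('a/b/c')  # => true
relative_output_path?('/a/b')   # => false
relative_output_path?('')       # => true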

#tree_digest ⇒ Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest

A content digest. A digest for a given blob consists of the size of the blob and its hash. The hash algorithm to use is defined by the server, but servers SHOULD use SHA-256. The size is considered to be an integral part of the digest and cannot be separated. That is, even if the hash field is correctly specified but size_bytes is not, the server MUST reject the request.

The reason for including the size in the digest is as follows: in a great many cases, the server needs to know the size of the blob it is about to work with prior to starting an operation with it, such as flattening Merkle tree structures or streaming it to a worker. Technically, the server could implement a separate metadata store, but this results in a significantly more complicated implementation as opposed to having the client specify the size up-front (or storing the size along with the digest in every message where digests are embedded). This does mean that the API leaks some implementation details of (what we consider to be) a reasonable server implementation, but we consider this to be a worthwhile tradeoff.

When a Digest is used to refer to a proto message, it always refers to the message in binary encoded form. To ensure consistent hashing, clients and servers MUST ensure that they serialize messages according to the following rules, even if there are alternate valid encodings for the same message:

  • Fields are serialized in tag order.
  • There are no unknown fields.
  • There are no duplicate fields.
  • Fields are serialized according to the default semantics for their type. Most protocol buffer implementations will always follow these rules when serializing, but care should be taken to avoid shortcuts. For instance, concatenating two messages to merge them may produce duplicate fields.

Corresponds to the JSON property treeDigest (see the construction sketch after the source excerpt below).

Returns:

  • (Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest)


# File 'generated/google/apis/remotebuildexecution_v1/classes.rb', line 1088

def tree_digest
  @tree_digest
end
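
Because the tree digest refers to the binary-encoded Tree message, a client typically hashes the serialized bytes and records their length. The sketch below assumes the caller already has those bytes from a proto serializer that follows the rules above, and that the Digest class exposes hash and size_bytes accessors; the helper name is hypothetical:

require 'digest'
require 'google/apis/remotebuildexecution_v1'

# bytes: the binary-encoded Tree message as a String of raw bytes. Producing
# those bytes is outside this class and is assumed here.
def tree_digest_for(bytes)
  Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest.new(
    hash: Digest::SHA256.hexdigest(bytes),  # servers SHOULD use SHA-256
    size_bytes: bytes.bytesize              # size is an integral part of the digest
  )
end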

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/remotebuildexecution_v1/classes.rb', line 1095

def update!(**args)
  @path = args[:path] if args.key?(:path)
  @tree_digest = args[:tree_digest] if args.key?(:tree_digest)
end