Class: Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/remotebuildexecution_v2/classes.rb,
generated/google/apis/remotebuildexecution_v2/representations.rb

Overview

A response corresponding to a single blob that the client tried to upload.

Instance Attribute Summary

Instance Method Summary

Methods included from Core::JsonObjectSupport

#to_json

Methods included from Core::Hashable

process_value, #to_h

Constructor Details

#initialize(**args) ⇒ BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse

Returns a new instance of BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse.



# File 'generated/google/apis/remotebuildexecution_v2/classes.rb', line 638

def initialize(**args)
  update!(**args)
end
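
As a minimal usage sketch (not part of the generated documentation), an instance can be built directly with keyword arguments; the nested Digest and GoogleRpcStatus objects below are empty placeholders, since in normal use the server fills this object in:

require 'google/apis/remotebuildexecution_v2'

# Construct a per-blob response by hand; attributes that are not passed stay nil.
response = Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse.new(
  digest: Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest.new,
  status: Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus.new
)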

Instance Attribute Details

#digest ⇒ Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest

A content digest. A digest for a given blob consists of the size of the blob and its hash. The hash algorithm to use is defined by the server, but servers SHOULD use SHA-256. The size is considered to be an integral part of the digest and cannot be separated. That is, even if the hash field is correctly specified but size_bytes is not, the server MUST reject the request.

The reason for including the size in the digest is as follows: in a great many cases, the server needs to know the size of the blob it is about to work with prior to starting an operation with it, such as flattening Merkle tree structures or streaming it to a worker. Technically, the server could implement a separate metadata store, but this results in a significantly more complicated implementation as opposed to having the client specify the size up-front (or storing the size along with the digest in every message where digests are embedded). This does mean that the API leaks some implementation details of (what we consider to be) a reasonable server implementation, but we consider this to be a worthwhile tradeoff.

When a Digest is used to refer to a proto message, it always refers to the message in binary encoded form. To ensure consistent hashing, clients and servers MUST ensure that they serialize messages according to the following rules, even if there are alternate valid encodings for the same message:

  • Fields are serialized in tag order.
  • There are no unknown fields.
  • There are no duplicate fields.
  • Fields are serialized according to the default semantics for their type. Most protocol buffer implementations will always follow these rules when serializing, but care should be taken to avoid shortcuts. For instance, concatenating two messages to merge them may produce duplicate fields.

Corresponds to the JSON property digest


# File 'generated/google/apis/remotebuildexecution_v2/classes.rb', line 626

def digest
  @digest
end
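
Since the digest of a raw blob is just its size in bytes plus its hash, both values can be computed with the Ruby standard library; the sketch below assumes SHA-256 (the algorithm servers SHOULD use) and a hypothetical input file:

require 'digest'

blob = File.binread('path/to/blob')          # hypothetical input file
sha256_hex = Digest::SHA256.hexdigest(blob)  # lowercase hex hash of the blob
size_bytes = blob.bytesize                   # the size is an integral part of the digest

# A request carrying only the hash without size_bytes MUST be rejected by the server.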

#status ⇒ Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.

Corresponds to the JSON property status



# File 'generated/google/apis/remotebuildexecution_v2/classes.rb', line 636

def status
  @status
end
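
A hedged sketch of how this object is typically consumed, assuming batch_response is a BuildBazelRemoteExecutionV2BatchUpdateBlobsResponse whose responses array holds one of these entries per uploaded blob, and that GoogleRpcStatus exposes code and message:

batch_response.responses.each do |r|
  if r.status.nil? || r.status.code.to_i.zero?
    # No status or a code of 0 (OK) means the blob was stored.
    puts "stored blob of #{r.digest&.size_bytes} bytes"
  else
    warn "upload failed (code #{r.status.code}): #{r.status.message}"
  end
end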

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/remotebuildexecution_v2/classes.rb', line 643

def update!(**args)
  @digest = args[:digest] if args.key?(:digest)
  @status = args[:status] if args.key?(:status)
end
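
A short sketch, assuming GoogleRpcStatus accepts a code keyword on construction: update! only touches the keys that are actually passed, so other attributes keep their current values:

r = Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse.new
r.update!(status: Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus.new(code: 0))
r.digest  # => nil, because :digest was not passed to update!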