Class: Google::Cloud::Storage::Bucket

Inherits:
Object
Defined in:
lib/google/cloud/storage/bucket.rb,
lib/google/cloud/storage/bucket/acl.rb,
lib/google/cloud/storage/bucket/cors.rb,
lib/google/cloud/storage/bucket/list.rb,
lib/google/cloud/storage/bucket/lifecycle.rb

Overview

Bucket

Represents a Storage bucket. Belongs to a Project and has many Files.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"

Direct Known Subclasses

Updater

Defined Under Namespace

Classes: Acl, Cors, DefaultAcl, Lifecycle, List, Updater

Instance Attribute Summary

Instance Method Summary

Instance Attribute Details

#user_project ⇒ Object

A boolean value or a project ID string to indicate the project to be billed for operations on the bucket and its files. If this attribute is set to true, transit costs for operations on the bucket will be billed to the current project for this client. (See Project#project for the ID of the current project.) If this attribute is set to a project ID, and that project is authorized for the currently authenticated service account, transit costs will be billed to that project. This attribute is required with requester pays-enabled buckets. The default is nil.

In general, this attribute should be set when first retrieving the bucket by providing the user_project option to Project#bucket.

See also #requester_pays= and #requester_pays.

Examples:

Setting a non-default project:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "other-project-bucket", user_project: true
files = bucket.files # Billed to current project
bucket.user_project = "my-other-project"
files = bucket.files # Billed to "my-other-project"


# File 'lib/google/cloud/storage/bucket.rb', line 83

def user_project
  @user_project
end

Instance Method Details

#acl ⇒ Bucket::Acl

The Acl instance used to control access to the bucket.

A bucket has owners, writers, and readers. Permissions can be granted to an individual user's email address, to a group's email address, or via a predefined permissions list.

Examples:

Grant access to a user by prepending "user-" to an email:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"

email = "heidi@example.net"
bucket.acl.add_reader "user-#{email}"

Grant access to a group by prepending "group-" to an email:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"

email = "authors@example.net"
bucket.acl.add_reader "group-#{email}"

Or, grant access via a predefined permissions list:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"

bucket.acl.public!

Returns:

See Also:



# File 'lib/google/cloud/storage/bucket.rb', line 2659

def acl
  @acl ||= Bucket::Acl.new self
end

#api_url ⇒ String

A URL that can be used to access the bucket using the REST API.

Returns:

  • (String)


# File 'lib/google/cloud/storage/bucket.rb', line 144

def api_url
  @gapi.self_link
end

#autoclass ⇒ Google::Apis::StorageV1::Bucket::Autoclass

The Autoclass configuration of the bucket.

Returns:

  • (Google::Apis::StorageV1::Bucket::Autoclass)


# File 'lib/google/cloud/storage/bucket.rb', line 117

def autoclass
  @gapi.autoclass
end

#autoclass_enabled ⇒ Boolean

Whether Autoclass is enabled for the bucket.

Returns:

  • (Boolean)


# File 'lib/google/cloud/storage/bucket.rb', line 433

def autoclass_enabled
  @gapi.autoclass&.enabled?
end

#autoclass_enabled=(toggle) ⇒ Object

Updates the bucket's Autoclass configuration. Autoclass sets the default storage class for objects in the bucket and automatically upgrades or downgrades the storage class of objects based on their access patterns. Accepted values are true and false.

For more information, see Storage Classes.
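
Examples:

A minimal sketch of enabling Autoclass (the bucket name "my-bucket" is illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.autoclass_enabled = true
bucket.autoclass_enabled #=> true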

Parameters:

  • toggle (Boolean)

    Whether Autoclass is enabled for the bucket.



# File 'lib/google/cloud/storage/bucket.rb', line 474

def autoclass_enabled= toggle
  @gapi.autoclass ||= API::Bucket::Autoclass.new
  @gapi.autoclass.enabled = toggle
  patch_gapi! :autoclass
end

#autoclass_terminal_storage_class ⇒ String

The terminal storage class of the bucket's Autoclass configuration.

Returns:

  • (String)


# File 'lib/google/cloud/storage/bucket.rb', line 451

def autoclass_terminal_storage_class
  @gapi.autoclass&.terminal_storage_class
end

#autoclass_terminal_storage_class_update_time ⇒ DateTime

The time at which the Autoclass terminal storage class was last updated.

Returns:

  • (DateTime)


# File 'lib/google/cloud/storage/bucket.rb', line 460

def autoclass_terminal_storage_class_update_time
  @gapi.autoclass&.terminal_storage_class_update_time
end

#autoclass_toggle_time ⇒ DateTime

The time at which Autoclass was last toggled for the bucket.

Returns:

  • (DateTime)


# File 'lib/google/cloud/storage/bucket.rb', line 442

def autoclass_toggle_time
  @gapi.autoclass&.toggle_time
end

#compose(sources, destination, acl: nil, encryption_key: nil, if_source_generation_match: nil, if_generation_match: nil, if_metageneration_match: nil) {|file| ... } ⇒ Google::Cloud::Storage::File Also known as: compose_file, combine

Concatenates a list of existing files in the bucket into a new file in the bucket. There is a limit (currently 32) to the number of files that can be composed in a single operation.

To compose files encrypted with a customer-supplied encryption key, use the encryption_key option. All source files must have been encrypted with the same key, and the resulting destination file will also be encrypted with the same key.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

sources = ["path/to/my-file-1.ext", "path/to/my-file-2.ext"]

new_file = bucket.compose sources, "path/to/new-file.ext"

Set the properties of the new file in a block:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

sources = ["path/to/my-file-1.ext", "path/to/my-file-2.ext"]

new_file = bucket.compose sources, "path/to/new-file.ext" do |f|
  f.cache_control = "private, max-age=0, no-cache"
  f.content_disposition = "inline; filename=filename.ext"
  f.content_encoding = "deflate"
  f.content_language = "de"
  f.content_type = "application/json"
end

Specify the generation of source files (but skip retrieval):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

file_1 = bucket.file "path/to/my-file-1.ext",
                     generation: 1490390259479000, skip_lookup: true
file_2 = bucket.file "path/to/my-file-2.ext",
                     generation: 1490310974144000, skip_lookup: true

new_file = bucket.compose [file_1, file_2], "path/to/new-file.ext"

Parameters:

  • sources (Array<String, Google::Cloud::Storage::File>)

    The list of source file names or objects that will be concatenated into a single file.

  • destination (String)

    The name of the new file.

  • acl (String) (defaults to: nil)

    A predefined set of access controls to apply to this file.

    Acceptable values are:

    • auth, auth_read, authenticated, authenticated_read, authenticatedRead - File owner gets OWNER access, and allAuthenticatedUsers get READER access.
    • owner_full, bucketOwnerFullControl - File owner gets OWNER access, and project team owners get OWNER access.
    • owner_read, bucketOwnerRead - File owner gets OWNER access, and project team owners get READER access.
    • private - File owner gets OWNER access.
    • project_private, projectPrivate - File owner gets OWNER access, and project team members get access according to their roles.
    • public, public_read, publicRead - File owner gets OWNER access, and allUsers get READER access.
  • encryption_key (String, nil) (defaults to: nil)

    Optional. The customer-supplied, AES-256 encryption key used to encrypt the source files, if one was used. All source files must have been encrypted with the same key, and the resulting destination file will also be encrypted with the key.

  • if_source_generation_match (Array<Integer>) (defaults to: nil)

    Makes the operation conditional on whether the source files' current generations match the given values. The list must match sources item-to-item.

  • if_generation_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the destination file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.

  • if_metageneration_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the destination file's current metageneration matches the given value.

Yields:

  • (file)

    A block yielding a delegate file object for setting the properties of the destination file.

Returns:



# File 'lib/google/cloud/storage/bucket.rb', line 2006

def compose sources,
            destination,
            acl: nil,
            encryption_key: nil,
            if_source_generation_match: nil,
            if_generation_match: nil,
            if_metageneration_match: nil
  ensure_service!
  sources = Array sources
  if sources.size < 2
    raise ArgumentError, "must provide at least two source files"
  end

  destination_gapi = nil
  if block_given?
    destination_gapi = API::Object.new
    updater = File::Updater.new destination_gapi
    yield updater
    updater.check_for_changed_metadata!
  end

  acl_rule = File::Acl.predefined_rule_for acl
  gapi = service.compose_file name,
                              sources,
                              destination,
                              destination_gapi,
                              acl: acl_rule,
                              key: encryption_key,
                              if_source_generation_match: if_source_generation_match,
                              if_generation_match: if_generation_match,
                              if_metageneration_match: if_metageneration_match,
                              user_project: user_project
  File.from_gapi gapi, service, user_project: user_project
end

#cors {|cors| ... } ⇒ Bucket::Cors

Returns the current CORS configuration for a static website served from the bucket.

The return value is a frozen (unmodifiable) array of hashes containing the attributes specified for the Bucket resource field cors.

This method also accepts a block for updating the bucket's CORS rules. See Cors for details.

Examples:

Retrieving the bucket's CORS rules.

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
bucket.cors.size #=> 2
rule = bucket.cors.first
rule.origin #=> ["http://example.org"]
rule.methods #=> ["GET","POST","DELETE"]
rule.headers #=> ["X-My-Custom-Header"]
rule.max_age #=> 3600

Updating the bucket's CORS rules inside a block.

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

bucket.update do |b|
  b.cors do |c|
    c.add_rule ["http://example.org", "https://example.org"],
               "*",
               headers: ["X-My-Custom-Header"],
               max_age: 3600
  end
end

Yields:

  • (cors)

    a block for setting CORS rules

Yield Parameters:

Returns:

See Also:



# File 'lib/google/cloud/storage/bucket.rb', line 213

def cors
  cors_builder = Bucket::Cors.from_gapi @gapi.cors_configurations
  if block_given?
    yield cors_builder
    if cors_builder.changed?
      @gapi.cors_configurations = cors_builder.to_gapi
      patch_gapi! :cors_configurations
    end
  end
  cors_builder.freeze # always return frozen objects
end

#create_file(file, path = nil, acl: nil, cache_control: nil, content_disposition: nil, content_encoding: nil, content_language: nil, content_type: nil, custom_time: nil, checksum: nil, crc32c: nil, md5: nil, metadata: nil, storage_class: nil, encryption_key: nil, kms_key: nil, temporary_hold: nil, event_based_hold: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil) ⇒ Google::Cloud::Storage::File Also known as: upload_file, new_file

Creates a new File object by providing a path to a local file (or any File-like object such as StringIO) to upload, along with the path at which to store it in the bucket.

Customer-supplied encryption keys

By default, Google Cloud Storage manages server-side encryption keys on your behalf. However, a customer-supplied encryption key can be provided with the encryption_key option. If given, the same key must be provided to subsequently download or copy the file. If you use customer-supplied encryption keys, you must securely manage your keys and ensure that they are not lost. Also, please note that file metadata is not encrypted, with the exception of the CRC32C checksum and MD5 hash. The names of files and buckets are also not encrypted, and you can read or update the metadata of an encrypted file without providing the encryption key.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.create_file "path/to/local.file.ext"

Specifying a destination path:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.create_file "path/to/local.file.ext",
                   "destination/path/file.ext"

Providing a customer-supplied encryption key:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# Key generation shown for example purposes only. Write your own.
cipher = OpenSSL::Cipher.new "aes-256-cfb"
cipher.encrypt
key = cipher.random_key

bucket.create_file "path/to/local.file.ext",
                   "destination/path/file.ext",
                   encryption_key: key

# Store your key and hash securely for later use.
file = bucket.file "destination/path/file.ext",
                   encryption_key: key

Providing a customer-managed Cloud KMS encryption key:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# KMS key ring must use the same location as the bucket.
kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d"

bucket.create_file "path/to/local.file.ext",
                   "destination/path/file.ext",
                   kms_key: kms_key_name

file = bucket.file "destination/path/file.ext"
file.kms_key #=> kms_key_name

Create a file with gzip-encoded data.

require "zlib"
require "google/cloud/storage"

storage = Google::Cloud::Storage.new

gz = StringIO.new ""
z = Zlib::GzipWriter.new gz
z.write "Hello world!"
z.close
data = StringIO.new gz.string

bucket = storage.bucket "my-bucket"

bucket.create_file data, "path/to/gzipped.txt",
                   content_encoding: "gzip"

file = bucket.file "path/to/gzipped.txt"

# The downloaded data is decompressed by default.
file.download "path/to/downloaded/hello.txt"

# The downloaded data remains compressed with skip_decompress.
file.download "path/to/downloaded/gzipped.txt",
              skip_decompress: true

Parameters:

  • file (String, ::File)

    Path of the file on the filesystem to upload. Can be a File object or a File-like object such as StringIO. (If the object does not have a path, a path argument must also be provided.)

  • path (String) (defaults to: nil)

    Path to store the file in Google Cloud Storage.

  • acl (String) (defaults to: nil)

    A predefined set of access controls to apply to this file.

    Acceptable values are:

    • auth, auth_read, authenticated, authenticated_read, authenticatedRead - File owner gets OWNER access, and allAuthenticatedUsers get READER access.
    • owner_full, bucketOwnerFullControl - File owner gets OWNER access, and project team owners get OWNER access.
    • owner_read, bucketOwnerRead - File owner gets OWNER access, and project team owners get READER access.
    • private - File owner gets OWNER access.
    • project_private, projectPrivate - File owner gets OWNER access, and project team members get access according to their roles.
    • public, public_read, publicRead - File owner gets OWNER access, and allUsers get READER access.
  • cache_control (String) (defaults to: nil)

    The Cache-Control response header to be returned when the file is downloaded.

  • content_disposition (String) (defaults to: nil)

    The Content-Disposition response header to be returned when the file is downloaded.

  • content_encoding (String) (defaults to: nil)

    The Content-Encoding response header to be returned when the file is downloaded. For example, content_encoding: "gzip" can indicate to clients that the uploaded data is gzip-compressed. However, there is no check to guarantee the specified Content-Encoding has actually been applied to the file data, and incorrectly specifying the file's encoding could lead to unintended behavior on subsequent download requests.

  • content_language (String) (defaults to: nil)

    The Content-Language response header to be returned when the file is downloaded.

  • content_type (String) (defaults to: nil)

    The Content-Type response header to be returned when the file is downloaded.

  • custom_time (DateTime) (defaults to: nil)

    A custom time specified by the user for the file. Once set, custom_time can't be unset, and it can only be changed to a time in the future. If custom_time must be unset, you must either perform a rewrite operation, or upload the data again and create a new file.

  • checksum (Symbol, nil) (defaults to: nil)

    The type of checksum for the client to automatically calculate and send with the create request to verify the integrity of the object. If provided, Cloud Storage will only create the file if the value calculated by the client matches the value calculated by the service.

    Acceptable values are:

    • md5 - Calculate and provide a checksum using the MD5 hash.
    • crc32c - Calculate and provide a checksum using the CRC32c hash.
    • all - Calculate and provide checksums for all available verifications.

    Optional. The default is nil. Do not provide if also providing a corresponding crc32c or md5 argument. See Validation for more information.

  • crc32c (String) (defaults to: nil)

    The CRC32c checksum of the file data, as described in RFC 4960, Appendix B. If provided, Cloud Storage will only create the file if the value matches the value calculated by the service. Do not provide if also providing a checksum: :crc32c or checksum: :all argument. See Validation for more information.

  • md5 (String) (defaults to: nil)

    The MD5 hash of the file data. If provided, Cloud Storage will only create the file if the value matches the value calculated by the service. Do not provide if also providing a checksum: :md5 or checksum: :all argument. See Validation for more information.

  • metadata (Hash) (defaults to: nil)

    A hash of custom, user-provided web-safe keys and arbitrary string values that will be returned with requests for the file as "x-goog-meta-" response headers.

  • storage_class (Symbol, String) (defaults to: nil)

    Storage class of the file. Determines how the file is stored and affects the SLA and the cost of storage. Accepted values include :standard, :nearline, :coldline, and :archive, as well as the equivalent strings returned by #storage_class. :multi_regional, :regional, and :durable_reduced_availability are accepted legacy storage classes. For more information, see Storage Classes and Per-Object Storage Class. The default value is the default storage class for the bucket.

  • encryption_key (String) (defaults to: nil)

    Optional. A customer-supplied, AES-256 encryption key that will be used to encrypt the file. Do not provide if kms_key is used.

  • kms_key (String) (defaults to: nil)

    Optional. Resource name of the Cloud KMS key, of the form projects/my-prj/locations/kr-loc/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the file. The KMS key ring must use the same location as the bucket. The Service Account associated with your project requires access to this encryption key. Do not provide if encryption_key is used.

  • if_generation_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.

  • if_generation_not_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.

  • if_metageneration_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the file's current metageneration matches the given value.

  • if_metageneration_not_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the file's current metageneration does not match the given value.

Returns:

Raises:

  • (ArgumentError)


# File 'lib/google/cloud/storage/bucket.rb', line 1781

def create_file file,
                path = nil,
                acl: nil,
                cache_control: nil,
                content_disposition: nil,
                content_encoding: nil,
                content_language: nil,
                content_type: nil,
                custom_time: nil,
                checksum: nil,
                crc32c: nil,
                md5: nil,
                metadata: nil,
                storage_class: nil,
                encryption_key: nil,
                kms_key: nil,
                temporary_hold: nil,
                event_based_hold: nil,
                if_generation_match: nil,
                if_generation_not_match: nil,
                if_metageneration_match: nil,
                if_metageneration_not_match: nil
  ensure_service!
  ensure_io_or_file_exists! file
  path ||= file.path if file.respond_to? :path
  path ||= file if file.is_a? String
  raise ArgumentError, "must provide path" if path.nil?
  crc32c = crc32c_for file, checksum, crc32c
  md5 = md5_for file, checksum, md5

  gapi = service.insert_file name,
                             file,
                             path,
                             acl: File::Acl.predefined_rule_for(acl),
                             md5: md5,
                             cache_control: cache_control,
                             content_type: content_type,
                             custom_time: custom_time,
                             content_disposition: content_disposition,
                             crc32c: crc32c,
                             content_encoding: content_encoding,
                             metadata: metadata,
                             content_language: content_language,
                             key: encryption_key,
                             kms_key: kms_key,
                             storage_class: storage_class_for(storage_class),
                             temporary_hold: temporary_hold,
                             event_based_hold: event_based_hold,
                             if_generation_match: if_generation_match,
                             if_generation_not_match: if_generation_not_match,
                             if_metageneration_match: if_metageneration_match,
                             if_metageneration_not_match: if_metageneration_not_match,
                             user_project: user_project
  File.from_gapi gapi, service, user_project: user_project
end

#create_notification(topic, custom_attrs: nil, event_types: nil, prefix: nil, payload: nil) ⇒ Google::Cloud::Storage::Notification Also known as: new_notification

Creates a new Pub/Sub notification subscription for the bucket.

Examples:

require "google/cloud/pubsub"
require "google/cloud/storage"

pubsub = Google::Cloud::Pubsub.new
storage = Google::Cloud::Storage.new

topic = pubsub.create_topic "my-topic"
topic.policy do |p|
  p.add "roles/pubsub.publisher",
        "serviceAccount:#{storage.}"
end

bucket = storage.bucket "my-bucket"

notification = bucket.create_notification topic.name

Parameters:

  • topic (String)

    The name of the Cloud PubSub topic to which the notification subscription will publish.

  • custom_attrs (Hash(String => String)) (defaults to: nil)

    The custom attributes for the notification. An optional list of additional attributes to attach to each Cloud Pub/Sub message published for the notification subscription.

  • event_types (Symbol, String, Array<Symbol, String>) (defaults to: nil)

    The event types for the notification subscription. If provided, messages will only be sent for the listed event types. If empty, messages will be sent for all event types.

    Acceptable values are:

    • :finalize - Sent when a new object (or a new generation of an existing object) is successfully created in the bucket. This includes copying or rewriting an existing object. A failed upload does not trigger this event.
    • :update - Sent when the metadata of an existing object changes.
    • :delete - Sent when an object has been permanently deleted. This includes objects that are overwritten or are deleted as part of the bucket's lifecycle configuration. For buckets with object versioning enabled, this is not sent when an object is archived (see OBJECT_ARCHIVE), even if archival occurs via the File#delete method.
    • :archive - Only sent when the bucket has enabled object versioning. This event indicates that the live version of an object has become an archived version, either because it was archived or because it was overwritten by the upload of an object of the same name.
  • prefix (String) (defaults to: nil)

    The file name prefix for the notification subscription. If provided, the notification will only be applied to file names that begin with this prefix.

  • payload (Symbol, String, Boolean) (defaults to: nil)

    The desired content of the Pub/Sub message payload. Acceptable values are:

    • :json or true - The Pub/Sub message payload will be a UTF-8 string containing the resource representation of the file's metadata.
    • :none or false - No payload is included with the notification.

    The default value is :json.

Returns:

See Also:



# File 'lib/google/cloud/storage/bucket.rb', line 3096

def create_notification topic, custom_attrs: nil, event_types: nil,
                        prefix: nil, payload: nil
  ensure_service!

  gapi = service.insert_notification name, topic, custom_attrs: custom_attrs,
                                                  event_types: event_types,
                                                  prefix: prefix,
                                                  payload: payload,
                                                  user_project: user_project
  Notification.from_gapi name, gapi, service, user_project: user_project
end

#created_at ⇒ DateTime

Creation time of the bucket.

Returns:

  • (DateTime)


# File 'lib/google/cloud/storage/bucket.rb', line 153

def created_at
  @gapi.time_created
end

#data_locations ⇒ Object

See Also:



# File 'lib/google/cloud/storage/bucket.rb', line 332

def data_locations
  @gapi.custom_placement_config&.data_locations
end

#default_acl ⇒ Bucket::DefaultAcl

The DefaultAcl instance used to control access to the bucket's files.

A bucket's files have owners, writers, and readers. Permissions can be granted to an individual user's email address, to a group's email address, or via a predefined permissions list.

Examples:

Grant access to a user by prepending "user-" to an email:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"

email = "heidi@example.net"
bucket.default_acl.add_reader "user-#{email}"

Grant access to a group by prepending "group-" to an email

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"

email = "authors@example.net"
bucket.default_acl.add_reader "group-#{email}"

Or, grant access via a predefined permissions list:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"

bucket.default_acl.public!

Returns:

See Also:



# File 'lib/google/cloud/storage/bucket.rb', line 2705

def default_acl
  @default_acl ||= Bucket::DefaultAcl.new self
end

#default_event_based_hold=(new_default_event_based_hold) ⇒ Object

Updates the default event-based hold field for the bucket. This field controls the initial state of the event_based_hold field for newly-created files in the bucket.

See File#event_based_hold? and File#set_event_based_hold!.

To pass metageneration preconditions, call this method within a block passed to #update.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.update do |b|
  b.retention_period = 2592000 # 30 days in seconds
  b.default_event_based_hold = true
end

file = bucket.create_file "path/to/local.file.ext"
file.event_based_hold? # true
file.delete # raises Google::Cloud::PermissionDeniedError
file.release_event_based_hold!

# The end of the retention period is calculated from the time that
# the event-based hold was released.
file.retention_expires_at

Parameters:

  • new_default_event_based_hold (Boolean)

    The default event-based hold field for the bucket.



# File 'lib/google/cloud/storage/bucket.rb', line 878

def default_event_based_hold= new_default_event_based_hold
  @gapi.default_event_based_hold = new_default_event_based_hold
  patch_gapi! :default_event_based_hold
end

#default_event_based_hold? ⇒ Boolean

Whether the event_based_hold field for newly-created files in the bucket will be initially set to true. See #default_event_based_hold=, File#event_based_hold? and File#set_event_based_hold!.

Returns:

  • (Boolean)

    Returns true if the event_based_hold field for newly-created files in the bucket will be initially set to true, otherwise false.



# File 'lib/google/cloud/storage/bucket.rb', line 840

def default_event_based_hold?
  !@gapi.default_event_based_hold.nil? && @gapi.default_event_based_hold
end

#default_kms_key ⇒ String?

The Cloud KMS encryption key that will be used to protect files. For example: projects/a/locations/b/keyRings/c/cryptoKeys/d

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

# KMS key ring must use the same location as the bucket.
kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d"
bucket.default_kms_key = kms_key_name

bucket.default_kms_key #=> kms_key_name

Returns:

  • (String, nil)

    A Cloud KMS encryption key, or nil if none has been configured.



# File 'lib/google/cloud/storage/bucket.rb', line 679

def default_kms_key
  @gapi.encryption&.default_kms_key_name
end

#default_kms_key=(new_default_kms_key) ⇒ Object

Set the Cloud KMS encryption key that will be used to protect files. For example: projects/a/locations/b/keyRings/c/cryptoKeys/d

To pass metageneration preconditions, call this method within a block passed to #update.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

# KMS key ring must use the same location as the bucket.
kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d"

bucket.default_kms_key = kms_key_name

Delete the default Cloud KMS encryption key:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.default_kms_key = nil

Parameters:

  • new_default_kms_key (String, nil)

    New Cloud KMS key name, or nil to delete the Cloud KMS encryption key.



# File 'lib/google/cloud/storage/bucket.rb', line 714

def default_kms_key= new_default_kms_key
  @gapi.encryption = API::Bucket::Encryption.new \
    default_kms_key_name: new_default_kms_key
  patch_gapi! :encryption
end

#delete(if_metageneration_match: nil, if_metageneration_not_match: nil) ⇒ Boolean

Permanently deletes the bucket. The bucket must be empty before it can be deleted.

The API call to delete the bucket may be retried under certain conditions. See Google::Cloud#storage to control this behavior.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"
bucket.delete
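
Delete only if the bucket's metageneration is unchanged (a sketch using the documented if_metageneration_match precondition; the bucket name is illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"
bucket.delete if_metageneration_match: bucket.metageneration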

Parameters:

  • if_metageneration_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the bucket's current metageneration matches the given value.

  • if_metageneration_not_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the bucket's current metageneration does not match the given value.

Returns:

  • (Boolean)

    Returns true if the bucket was deleted.



# File 'lib/google/cloud/storage/bucket.rb', line 1405

def delete if_metageneration_match: nil, if_metageneration_not_match: nil
  ensure_service!
  service.delete_bucket name,
                        if_metageneration_match: if_metageneration_match,
                        if_metageneration_not_match: if_metageneration_not_match,
                        user_project: user_project
end

#exists? ⇒ Boolean

Determines whether the bucket exists in the Storage service.
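
Examples:

A minimal sketch (the bucket name is illustrative); loading with skip_lookup: true defers the service call, so #exists? performs the lookup:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket", skip_lookup: true
bucket.exists? #=> true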

Returns:

  • (Boolean)

    true if the bucket exists in the Storage service.



# File 'lib/google/cloud/storage/bucket.rb', line 3126

def exists?
  # Always true if we have a gapi object
  return true unless lazy?
  # If we have a value, return it
  return @exists unless @exists.nil?
  ensure_gapi!
  @exists = true
rescue Google::Cloud::NotFoundError
  @exists = false
end

#file(path, generation: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil, skip_lookup: nil, encryption_key: nil, soft_deleted: nil) ⇒ Google::Cloud::Storage::File? Also known as: find_file

Retrieves a file matching the path.

If a customer-supplied encryption key was used with #create_file, the encryption_key option must be provided or else the file's CRC32C checksum and MD5 hash will not be returned.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

file = bucket.file "path/to/my-file.ext"
puts file.name

Parameters:

  • path (String)

    Name (path) of the file.

  • generation (Integer) (defaults to: nil)

    When present, selects a specific revision of this object. Default is the latest version.

  • if_generation_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.

  • if_generation_not_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.

  • if_metageneration_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the file's current metageneration matches the given value.

  • if_metageneration_not_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the file's current metageneration does not match the given value.

  • skip_lookup (Boolean) (defaults to: nil)

    Optionally create a File object without verifying the file resource exists on the Storage service. Calls made on this object will raise errors if the file resource does not exist. Default is false.

  • encryption_key (String) (defaults to: nil)

    Optional. The customer-supplied, AES-256 encryption key used to encrypt the file, if one was provided to #create_file. (Not used if skip_lookup is also set.)

  • soft_deleted (Boolean) (defaults to: nil)

    Optional. If true, only soft-deleted object versions will be listed. The default is false.

Returns:



# File 'lib/google/cloud/storage/bucket.rb', line 1536

def file path,
         generation: nil,
         if_generation_match: nil,
         if_generation_not_match: nil,
         if_metageneration_match: nil,
         if_metageneration_not_match: nil,
         skip_lookup: nil,
         encryption_key: nil,
         soft_deleted: nil
  ensure_service!
  if skip_lookup
    return File.new_lazy name, path, service,
                         generation: generation,
                         user_project: user_project
  end
  gapi = service.get_file name, path, generation: generation,
                                      if_generation_match: if_generation_match,
                                      if_generation_not_match: if_generation_not_match,
                                      if_metageneration_match: if_metageneration_match,
                                      if_metageneration_not_match: if_metageneration_not_match,
                                      key: encryption_key,
                                      user_project: user_project,
                                      soft_deleted: soft_deleted
  File.from_gapi gapi, service, user_project: user_project
rescue Google::Cloud::NotFoundError
  nil
end

#files(prefix: nil, delimiter: nil, token: nil, max: nil, versions: nil, match_glob: nil, include_folders_as_prefixes: nil, soft_deleted: nil) ⇒ Array<Google::Cloud::Storage::File> Also known as: find_files

Retrieves a list of files matching the criteria.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"
files = bucket.files
files.each do |file|
  puts file.name
end

Retrieve all files: (See File::List#all)

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"
files = bucket.files
files.all do |file|
  puts file.name
end
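
List files under a "directory" using prefix and delimiter (a sketch; the prefix and delimiter values are illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"
files = bucket.files prefix: "path/to/", delimiter: "/"
files.each do |file|
  puts file.name
end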

Parameters:

  • prefix (String) (defaults to: nil)

    Filter results to files whose names begin with this prefix.

  • delimiter (String) (defaults to: nil)

    Returns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.

  • token (String) (defaults to: nil)

    A previously-returned page token representing part of the larger set of results to view.

  • match_glob (String) (defaults to: nil)

    A glob pattern used to filter results returned in items (e.g. foo*bar). The string value must be UTF-8 encoded. See: https://cloud.google.com/storage/docs/json_api/v1/objects/list#list-object-glob

  • max (Integer) (defaults to: nil)

    Maximum number of items plus prefixes to return. As duplicate prefixes are omitted, fewer total results may be returned than requested. The default value of this parameter is 1,000 items.

  • versions (Boolean) (defaults to: nil)

    If true, lists all versions of an object as distinct results. The default is false. For more information, see Object Versioning.

  • include_folders_as_prefixes (Boolean) (defaults to: nil)

    If true, will also include folders and managed folders, besides objects, in the returned prefixes. Only applicable if delimiter is set to '/'.

  • soft_deleted (Boolean) (defaults to: nil)

    If true, only soft-deleted object versions will be listed. The default is false.

Returns:



# File 'lib/google/cloud/storage/bucket.rb', line 1468

def files prefix: nil, delimiter: nil, token: nil, max: nil,
          versions: nil, match_glob: nil, include_folders_as_prefixes: nil,
          soft_deleted: nil
  ensure_service!
  gapi = service.list_files name, prefix: prefix, delimiter: delimiter,
                                  token: token, max: max,
                                  versions: versions,
                                  user_project: user_project,
                                  match_glob: match_glob,
                                  include_folders_as_prefixes: include_folders_as_prefixes,
                                  soft_deleted: soft_deleted
  File::List.from_gapi gapi, service, name, prefix, delimiter, max,
                       versions,
                       user_project: user_project,
                       match_glob: match_glob,
                       include_folders_as_prefixes: include_folders_as_prefixes,
                       soft_deleted: soft_deleted
end

#generate_signed_post_policy_v4(path, issuer: nil, client_email: nil, signing_key: nil, private_key: nil, signer: nil, expires: nil, fields: nil, conditions: nil, scheme: "https", virtual_hosted_style: nil, bucket_bound_hostname: nil) ⇒ PostObject

Generate a PostObject that includes the fields and URL to upload objects via HTML forms. The resulting PostObject is based on a policy document created from the method arguments. This policy provides authorization to ensure that the HTML form can upload files into the bucket. See Signatures - Policy document.

Generating a PostObject requires service account credentials, either by connecting with a service account when calling Google::Cloud.storage, or by passing in the service account issuer and signing_key values. Although the private key can be passed as a string for convenience, creating and storing an instance of OpenSSL::PKey::RSA is more efficient when making multiple calls to generate_signed_post_policy_v4.

A SignedUrlUnavailable is raised if the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"

conditions = [["starts-with", "$acl","public"]]
post = bucket.generate_signed_post_policy_v4 "avatars/heidi/400x400.png",
                                             expires:    10,
                                             conditions: conditions

post.url #=> "https://storage.googleapis.com/my-todo-app/"
post.fields["key"] #=> "my-todo-app/avatars/heidi/400x400.png"
post.fields["policy"] #=> "ABC...XYZ"
post.fields["x-goog-algorithm"] #=> "GOOG4-RSA-SHA256"
post.fields["x-goog-credential"] #=> "cred@pid.iam.gserviceaccount.com/20200123/auto/storage/goog4_request"
post.fields["x-goog-date"] #=> "20200128T000000Z"
post.fields["x-goog-signature"] #=> "4893a0e...cd82"

Using Cloud IAMCredentials signBlob to create the signature:

require "google/cloud/storage"
require "google/apis/iamcredentials_v1"
require "googleauth"

# Issuer is the service account email that the Signed URL will be signed with
# and any permission granted in the Signed URL must be granted to the
# Google Service Account.
issuer = "service-account@project-id.iam.gserviceaccount.com"

# Create a lambda that accepts the string_to_sign
signer = lambda do |string_to_sign|
  IAMCredentials = Google::Apis::IamcredentialsV1
  iam_client = IAMCredentials::IAMCredentialsService.new

  # Get the environment configured authorization
  scopes = ["https://www.googleapis.com/auth/iam"]
  iam_client.authorization = Google::Auth.get_application_default scopes

  request = Google::Apis::IamcredentialsV1::SignBlobRequest.new(
    payload: string_to_sign
  )
  resource = "projects/-/serviceAccounts/#{issuer}"
  response = iam_client.sign_service_account_blob resource, request
  response.signed_blob
end

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
conditions = [["starts-with", "$acl","public"]]
post = bucket.generate_signed_post_policy_v4 "avatars/heidi/400x400.png",
                                             expires:    10,
                                             conditions: conditions,
                                             issuer:     issuer,
                                             signer:     signer

post.url #=> "https://storage.googleapis.com/my-todo-app/"
post.fields["key"] #=> "my-todo-app/avatars/heidi/400x400.png"
post.fields["policy"] #=> "ABC...XYZ"
post.fields["x-goog-algorithm"] #=> "GOOG4-RSA-SHA256"
post.fields["x-goog-credential"] #=> "cred@pid.iam.gserviceaccount.com/20200123/auto/storage/goog4_request"
post.fields["x-goog-date"] #=> "20200128T000000Z"
post.fields["x-goog-signature"] #=> "4893a0e...cd82"

Parameters:

  • path (String)

    Path to the file in Google Cloud Storage.

  • issuer (String) (defaults to: nil)

    Service Account's Client Email.

  • client_email (String) (defaults to: nil)

    Service Account's Client Email.

  • signing_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

  • private_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

  • signer (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

    When using this method in environments such as GAE Flexible Environment, GKE, or Cloud Functions where the private key is unavailable, it may be necessary to provide a Proc (or lambda) via the signer parameter. This Proc should return a signature created using a RPC call to the Service Account Credentials signBlob method as shown in the example below.

  • expires (Integer) (defaults to: nil)

    The number of seconds until the URL expires. The default is 604800 (7 days).

  • fields (Hash{String => String}) (defaults to: nil)

    User-supplied form fields such as acl, cache-control, success_action_status, and success_action_redirect. Optional. See Upload an object with HTML forms - Form fields.

  • conditions (Array<Hash{String => String}|Array<String>>) (defaults to: nil)

    An array of policy conditions that every upload must satisfy. For example: [["eq", "$Content-Type", "image/jpeg"]]. Optional. See Signatures - Policy document.

  • scheme (String) (defaults to: "https")

    The URL scheme. The default value is HTTPS.

  • virtual_hosted_style (Boolean) (defaults to: nil)

    Whether to use a virtual hosted-style hostname, which adds the bucket into the host portion of the URI rather than the path, e.g. https://mybucket.storage.googleapis.com/.... The default value of false uses the form of https://storage.googleapis.com/mybucket.

  • bucket_bound_hostname (String) (defaults to: nil)

    Use a bucket-bound hostname, which replaces the storage.googleapis.com host with the name of a CNAME bucket, e.g. a bucket named gcs-subdomain.my.domain.tld, or a Google Cloud Load Balancer which routes to a bucket you own, e.g. my-load-balancer-domain.tld.

Returns:

  • (PostObject)

    An object containing the URL, fields, and values needed to upload files via HTML forms.

Raises:

See Also:



# File 'lib/google/cloud/storage/bucket.rb', line 2591

def generate_signed_post_policy_v4 path,
                                   issuer: nil,
                                   client_email: nil,
                                   signing_key: nil,
                                   private_key: nil,
                                   signer: nil,
                                   expires: nil,
                                   fields: nil,
                                   conditions: nil,
                                   scheme: "https",
                                   virtual_hosted_style: nil,
                                   bucket_bound_hostname: nil
  ensure_service!
  sign = File::SignerV4.from_bucket self, path
  sign.post_object issuer: issuer,
                   client_email: client_email,
                   signing_key: signing_key,
                   private_key: private_key,
                   signer: signer,
                   expires: expires,
                   fields: fields,
                   conditions: conditions,
                   scheme: scheme,
                   virtual_hosted_style: virtual_hosted_style,
                   bucket_bound_hostname: bucket_bound_hostname
end

#hierarchical_namespace ⇒ Google::Apis::StorageV1::Bucket::HierarchicalNamespace

The bucket's hierarchical namespace (Folders) configuration. This value can be modified by calling #hierarchical_namespace=.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.hierarchical_namespace

Returns:

  • (Google::Apis::StorageV1::Bucket::HierarchicalNamespace)


# File 'lib/google/cloud/storage/bucket.rb', line 1271

def hierarchical_namespace
  @gapi.hierarchical_namespace
end

#hierarchical_namespace=(new_hierarchical_namespace) ⇒ Object

Sets the value of Hierarchical Namespace (Folders) for the bucket. This can only be enabled at bucket create time. If this is enabled, Uniform Bucket-Level Access must also be enabled. This value can be queried by calling #hierarchical_namespace.

Examples:

Enable Hierarchical Namespace using the HierarchicalNamespace class:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

hierarchical_namespace = Google::Apis::StorageV1::Bucket::HierarchicalNamespace.new
hierarchical_namespace.enabled = true

bucket.hierarchical_namespace = hierarchical_namespace

Disable Hierarchical Namespace using a Hash:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

hierarchical_namespace = { enabled: false }
bucket.hierarchical_namespace = hierarchical_namespace

Parameters:

  • new_hierarchical_namespace (Google::Apis::StorageV1::Bucket::HierarchicalNamespace, Hash(String => String))

    The bucket's new Hierarchical Namespace Configuration.



# File 'lib/google/cloud/storage/bucket.rb', line 1307

def hierarchical_namespace= new_hierarchical_namespace
  @gapi.hierarchical_namespace = new_hierarchical_namespace || {}
  patch_gapi! :hierarchical_namespace
end

#id ⇒ String

The ID of the bucket.

Returns:

  • (String)


# File 'lib/google/cloud/storage/bucket.rb', line 108

def id
  @gapi.id
end

#kind ⇒ String

The kind of item this is. For buckets, this is always storage#bucket.

Returns:

  • (String)


# File 'lib/google/cloud/storage/bucket.rb', line 99

def kind
  @gapi.kind
end

#labels ⇒ Hash(String => String)

A hash of user-provided labels. The hash is frozen and changes are not allowed.

Returns:

  • (Hash(String => String))


# File 'lib/google/cloud/storage/bucket.rb', line 579

def labels
  m = @gapi.labels
  m = m.to_h if m.respond_to? :to_h
  m.dup.freeze
end

#labels=(labels) ⇒ Object

Updates the hash of user-provided labels.

To pass metageneration preconditions, call this method within a block passed to #update.
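
Examples:

A minimal sketch of replacing the bucket's labels (keys and values are illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.labels = { "env" => "production", "team" => "docs" }
bucket.labels["env"] #=> "production"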

Parameters:

  • labels (Hash(String => String))

    The user-provided labels.



# File 'lib/google/cloud/storage/bucket.rb', line 593

def labels= labels
  @gapi.labels = labels
  patch_gapi! :labels
end

#lifecycle {|lifecycle| ... } ⇒ Bucket::Lifecycle

Returns the current Object Lifecycle Management rules configuration for the bucket.

This method also accepts a block for updating the bucket's Object Lifecycle Management rules. See Lifecycle for details.

Examples:

Retrieving a bucket's lifecycle management rules.

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.lifecycle.size #=> 2
rule = bucket.lifecycle.first
rule.action #=> "SetStorageClass"
rule.storage_class #=> "COLDLINE"
rule.age #=> 10
rule.matches_storage_class #=> ["STANDARD", "NEARLINE"]

Updating the bucket's lifecycle management rules in a block.

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.update do |b|
  b.lifecycle do |l|
    # Remove the last rule from the array
    l.pop
    # Remove rules with the given condition
    l.delete_if do |r|
      r.matches_storage_class.include? "NEARLINE"
    end
    # Update rules
    l.each do |r|
      r.age = 90 if r.action == "Delete"
    end
    # Add a rule
    l.add_set_storage_class_rule "COLDLINE", age: 10
  end
end

Yields:

  • (lifecycle)

    a block for setting Object Lifecycle Management rules

Yield Parameters:

  • lifecycle (Bucket::Lifecycle)

    the object accepting Object Lifecycle Management rules

Returns:

See Also:



# File 'lib/google/cloud/storage/bucket.rb', line 280

def lifecycle
  lifecycle_builder = Bucket::Lifecycle.from_gapi @gapi.lifecycle
  if block_given?
    yield lifecycle_builder
    if lifecycle_builder.changed?
      @gapi.lifecycle = lifecycle_builder.to_gapi
      patch_gapi! :lifecycle
    end
  end
  lifecycle_builder.freeze # always return frozen objects
end

#location ⇒ String

The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US. See the developer's guide for the authoritative list.



# File 'lib/google/cloud/storage/bucket.rb', line 302

def location
  @gapi.location
end

#location_type ⇒ String

The bucket's location type. Location type defines the geographic placement of the bucket's data and affects cost, performance, and availability. There are three possible values:

  • region - Lowest latency within a single region
  • multi-region - Highest availability across largest area
  • dual-region - High availability and low latency across 2 regions
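
Examples:

A minimal sketch (the bucket name and the returned value are illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"
bucket.location_type #=> "multi-region"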

Returns:

  • (String)

    The location type code: "region", "multi-region", or "dual-region"



# File 'lib/google/cloud/storage/bucket.rb', line 318

def location_type
  @gapi.location_type
end

#lock_retention_policy! ⇒ Boolean

PERMANENTLY locks the retention policy (see #retention_period=) on the bucket if one exists. The policy is transitioned to a locked state in which its duration cannot be reduced.

Locked policies can be extended in duration by setting #retention_period= to a higher value. Such an extension is permanent, and it cannot later be reduced. The extended duration will apply retroactively to all files currently in the bucket.

This method also creates a lien on the resourcemanager.projects.delete permission for the project containing the bucket.

The bucket's metageneration value is required for the lock policy API call. Attempting to call this method on a bucket that was loaded with the skip_lookup: true option will result in an error.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.retention_period = 2592000 # 30 days in seconds
bucket.lock_retention_policy!
bucket.retention_policy_locked? # true

file = bucket.create_file "path/to/local.file.ext"
file.delete # raises Google::Cloud::PermissionDeniedError

# Locked policies can be extended in duration
bucket.retention_period = 7776000 # 90 days in seconds

Returns:

  • (Boolean)

    Returns true if the lock operation is successful.



# File 'lib/google/cloud/storage/bucket.rb', line 921

def lock_retention_policy!
  ensure_service!
  @gapi = service.lock_bucket_retention_policy \
    name, metageneration, user_project: user_project
  true
end

#logging_bucketString

The destination bucket name for the bucket's logs.

Returns:

  • (String)

See Also:



343
344
345
# File 'lib/google/cloud/storage/bucket.rb', line 343

def logging_bucket
  @gapi.logging&.log_bucket
end

#logging_bucket=(logging_bucket) ⇒ Object

Updates the destination bucket for the bucket's logs.

To pass metageneration preconditions, call this method within a block passed to #update.
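
A minimal sketch of enabling usage logs by setting the destination (and, optionally, the prefix) inside an #update block; both bucket names are placeholders:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.update do |b|
  b.logging_bucket = "my-logs-bucket" # destination bucket for access logs
  b.logging_prefix = "my-bucket"      # optional log object name prefix
end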

Parameters:

  • logging_bucket (String)

    The bucket to hold the logging output

See Also:



357
358
359
360
361
# File 'lib/google/cloud/storage/bucket.rb', line 357

def logging_bucket= logging_bucket
  @gapi.logging ||= API::Bucket::Logging.new
  @gapi.logging.log_bucket = logging_bucket
  patch_gapi! :logging
end

#logging_prefixString

The logging object prefix for the bucket's logs.

Returns:

  • (String)

See Also:



370
371
372
# File 'lib/google/cloud/storage/bucket.rb', line 370

def logging_prefix
  @gapi.logging&.log_object_prefix
end

#logging_prefix=(logging_prefix) ⇒ Object

Updates the logging object prefix. This prefix will be used to create log object names for the bucket. It can be at most 900 characters and must be a valid object name. By default, the object prefix is the name of the bucket for which the logs are enabled.

To pass metageneration preconditions, call this method within a block passed to #update.

Parameters:

  • logging_prefix (String)

    The logging object prefix.

See Also:



389
390
391
392
393
# File 'lib/google/cloud/storage/bucket.rb', line 389

def logging_prefix= logging_prefix
  @gapi.logging ||= API::Bucket::Logging.new
  @gapi.logging.log_object_prefix = logging_prefix
  patch_gapi! :logging
end

#metagenerationInteger

The metadata generation of the bucket.
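
For illustration, the metageneration can be captured and passed back as a precondition to #update (a sketch; the bucket name is a placeholder):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

current = bucket.metageneration
bucket.update if_metageneration_match: current do |b|
  b.website_main = "index.html"
end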

Returns:

  • (Integer)

    The metageneration.



162
163
164
# File 'lib/google/cloud/storage/bucket.rb', line 162

def metageneration
  @gapi.metageneration
end

#nameString

The name of the bucket.

Returns:

  • (String)


135
136
137
# File 'lib/google/cloud/storage/bucket.rb', line 135

def name
  @gapi.name
end

#notification(id) ⇒ Google::Cloud::Storage::Notification? Also known as: find_notification

Retrieves a Pub/Sub notification subscription for the bucket.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

notification = bucket.notification "1"
puts notification.id

Parameters:

  • id (String)

    The Notification ID.

Returns:

See Also:



3018
3019
3020
3021
3022
3023
3024
# File 'lib/google/cloud/storage/bucket.rb', line 3018

def notification id
  ensure_service!
  gapi = service.get_notification name, id, user_project: user_project
  Notification.from_gapi name, gapi, service, user_project: user_project
rescue Google::Cloud::NotFoundError
  nil
end

#notificationsArray<Google::Cloud::Storage::Notification> Also known as: find_notifications

Retrieves the entire list of Pub/Sub notification subscriptions for the bucket.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"
notifications = bucket.notifications
notifications.each do |notification|
  puts notification.id
end

Returns:

See Also:



2987
2988
2989
2990
2991
2992
2993
2994
# File 'lib/google/cloud/storage/bucket.rb', line 2987

def notifications
  ensure_service!
  gapi = service.list_notifications name, user_project: user_project
  Array(gapi.items).map do |gapi_object|
    Notification.from_gapi name, gapi_object, service,
                           user_project: user_project
  end
end

#object_retentionGoogle::Apis::StorageV1::Bucket::ObjectRetention

The object retention configuration of the bucket

Returns:

  • (Google::Apis::StorageV1::Bucket::ObjectRetention)


126
127
128
# File 'lib/google/cloud/storage/bucket.rb', line 126

def object_retention
  @gapi.object_retention
end

#policy(force: nil, requested_policy_version: nil) {|policy| ... } ⇒ Policy

Gets and updates the Cloud IAM access control policy for this bucket.

Examples:

Retrieving a Policy that is implicitly version 1:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

policy = bucket.policy
policy.version # 1
puts policy.roles["roles/storage.objectViewer"]

Retrieving a version 3 Policy using requested_policy_version:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

policy = bucket.policy requested_policy_version: 3
policy.version # 3
puts policy.bindings.find do |b|
  b[:role] == "roles/storage.objectViewer"
end

Updating a Policy that is implicitly version 1:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.policy do |p|
  p.version # the value is 1
  p.remove "roles/storage.admin", "user:owner@example.com"
  p.add "roles/storage.admin", "user:newowner@example.com"
  p.roles["roles/storage.objectViewer"] = ["allUsers"]
end

Updating a Policy from version 1 to version 3 by adding a condition:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.uniform_bucket_level_access = true

bucket.policy requested_policy_version: 3 do |p|
  p.version # the value is 1
  p.version = 3 # Must be explicitly set to opt-in to support for conditions.

  expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")"
  p.bindings.insert({
                      role: "roles/storage.admin",
                      members: ["user:owner@example.com"],
                      condition: {
                        title: "my-condition",
                        description: "description of condition",
                        expression: expr
                      }
                    })
end

Updating a version 3 Policy:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.uniform_bucket_level_access? # true

bucket.policy requested_policy_version: 3 do |p|
  p.version = 3 # Must be explicitly set to opt-in to support for conditions.

  expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")"
  p.bindings.insert({
                      role: "roles/storage.admin",
                      members: ["user:owner@example.com"],
                      condition: {
                        title: "my-condition",
                        description: "description of condition",
                        expression: expr
                      }
                    })
end

Parameters:

  • force (Boolean) (defaults to: nil)

    [Deprecated] Force the latest policy to be retrieved from the Storage service when true. Deprecated because the latest policy is now always retrieved. The default is nil.

  • requested_policy_version (Integer) (defaults to: nil)

    The requested syntax schema version of the policy. Optional. If 1, nil, or not provided, a PolicyV1 object is returned, which provides PolicyV1#roles and related helpers but does not provide a bindings method. If 3 is provided, a PolicyV3 object is returned, which provides PolicyV3#bindings but does not provide a roles method or related helpers. A higher version indicates that the policy contains role bindings with the newer syntax schema that is unsupported by earlier versions.

    The following requested policy versions are valid:

    • 1 - The first version of Cloud IAM policy schema. Supports binding one role to one or more members. Does not support conditional bindings.
    • 3 - Introduces the condition field in the role binding, which further constrains the role binding via context-based and attribute-based rules. See Understanding policies and Overview of Cloud IAM Conditions for more information.

Yields:

  • (policy)

    A block for updating the policy. The latest policy will be read from the service and passed to the block. After the block completes, the modified policy will be written to the service.

Yield Parameters:

  • policy (Policy)

    the current Cloud IAM Policy for this bucket

Returns:

  • (Policy)

    the current Cloud IAM Policy for this bucket

See Also:



2832
2833
2834
2835
2836
2837
2838
2839
2840
2841
2842
2843
2844
2845
# File 'lib/google/cloud/storage/bucket.rb', line 2832

def policy force: nil, requested_policy_version: nil
  warn "DEPRECATED: 'force' in Bucket#policy" unless force.nil?
  ensure_service!
  gapi = service.get_bucket_policy name, requested_policy_version: requested_policy_version,
                                         user_project: user_project
  policy = if requested_policy_version.nil? || requested_policy_version == 1
             PolicyV1.from_gapi gapi
           else
             PolicyV3.from_gapi gapi
           end
  return policy unless block_given?
  yield policy
  update_policy policy
end

#policy_only=(new_policy_only) ⇒ Object

Deprecated. Use #uniform_bucket_level_access= instead.


1026
1027
1028
# File 'lib/google/cloud/storage/bucket.rb', line 1026

def policy_only= new_policy_only
  self.uniform_bucket_level_access = new_policy_only
end

#policy_only?Boolean

Deprecated. Use #uniform_bucket_level_access? instead.

Returns:

  • (Boolean)


1019
1020
1021
# File 'lib/google/cloud/storage/bucket.rb', line 1019

def policy_only?
  uniform_bucket_level_access?
end

#policy_only_locked_atObject

Deprecated. Use #uniform_bucket_level_access_locked_at instead.


1033
1034
1035
# File 'lib/google/cloud/storage/bucket.rb', line 1033

def policy_only_locked_at
  uniform_bucket_level_access_locked_at
end

#post_object(path, policy: nil, issuer: nil, client_email: nil, signing_key: nil, private_key: nil, signer: nil) ⇒ PostObject

Generate a PostObject that includes the fields and URL to upload objects via HTML forms.

Generating a PostObject requires service account credentials, either by connecting with a service account when calling Google::Cloud.storage, or by passing in the service account issuer and signing_key values. Although the private key can be passed as a string for convenience, creating and storing an instance of OpenSSL::PKey::RSA is more efficient when making multiple calls to post_object.

A SignedUrlUnavailable is raised if the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
post = bucket.post_object "avatars/heidi/400x400.png"

post.url #=> "https://storage.googleapis.com"
post.fields[:key] #=> "my-todo-app/avatars/heidi/400x400.png"
post.fields[:GoogleAccessId] #=> "0123456789@gserviceaccount.com"
post.fields[:signature] #=> "ABC...XYZ="
post.fields[:policy] #=> "ABC...XYZ="

Using a policy to define the upload authorization:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

policy = {
  expiration: (Time.now + 3600).iso8601,
  conditions: [
    ["starts-with", "$key", ""],
    {acl: "bucket-owner-read"},
    {bucket: "travel-maps"},
    {success_action_redirect: "http://example.com/success.html"},
    ["eq", "$Content-Type", "image/jpeg"],
    ["content-length-range", 0, 1000000]
  ]
}

bucket = storage.bucket "my-todo-app"
post = bucket.post_object "avatars/heidi/400x400.png",
                           policy: policy

post.url #=> "https://storage.googleapis.com"
post.fields[:key] #=> "my-todo-app/avatars/heidi/400x400.png"
post.fields[:GoogleAccessId] #=> "0123456789@gserviceaccount.com"
post.fields[:signature] #=> "ABC...XYZ="
post.fields[:policy] #=> "ABC...XYZ="

Using the issuer and signing_key options:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
key = OpenSSL::PKey::RSA.new
post = bucket.post_object "avatars/heidi/400x400.png",
                          issuer: "service-account@gcloud.com",
                          signing_key: key

post.url #=> "https://storage.googleapis.com"
post.fields[:key] #=> "my-todo-app/avatars/heidi/400x400.png"
post.fields[:GoogleAccessId] #=> "0123456789@gserviceaccount.com"
post.fields[:signature] #=> "ABC...XYZ="
post.fields[:policy] #=> "ABC...XYZ="

Using Cloud IAMCredentials signBlob to create the signature:

require "google/cloud/storage"
require "google/apis/iamcredentials_v1"
require "googleauth"

# Issuer is the service account email that the Signed URL will be signed with
# and any permission granted in the Signed URL must be granted to the
# Google Service Account.
issuer = "service-account@project-id.iam.gserviceaccount.com"

# Create a lambda that accepts the string_to_sign
signer = lambda do |string_to_sign|
  IAMCredentials = Google::Apis::IamcredentialsV1
  iam_client = IAMCredentials::IAMCredentialsService.new

  # Get the environment configured authorization
  scopes = ["https://www.googleapis.com/auth/iam"]
  iam_client.authorization = Google::Auth.get_application_default scopes

  request = Google::Apis::IamcredentialsV1::SignBlobRequest.new(
    payload: string_to_sign
  )
  resource = "projects/-/serviceAccounts/#{issuer}"
  response = iam_client.sign_service_account_blob resource, request
  response.signed_blob
end

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
post = bucket.post_object "avatars/heidi/400x400.png",
                          issuer: issuer,
                          signer: signer

post.url #=> "https://storage.googleapis.com"
post.fields[:key] #=> "my-todo-app/avatars/heidi/400x400.png"
post.fields[:GoogleAccessId] #=> "0123456789@gserviceaccount.com"
post.fields[:signature] #=> "ABC...XYZ="
post.fields[:policy] #=> "ABC...XYZ="

Parameters:

  • path (String)

    Path to the file in Google Cloud Storage.

  • policy (Hash) (defaults to: nil)

    The security policy that describes what can and cannot be uploaded in the form. When provided, the PostObject fields will include a signature based on the JSON representation of this hash and the same policy in Base64 format.

    If you do not provide a security policy, requests are considered to be anonymous and will only work with buckets that have granted WRITE or FULL_CONTROL permission to anonymous users. See Policy Document for more information.

  • issuer (String) (defaults to: nil)

    Service Account's Client Email.

  • client_email (String) (defaults to: nil)

    Service Account's Client Email.

  • signing_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

  • private_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

  • signer (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

    When using this method in environments such as GAE Flexible Environment, GKE, or Cloud Functions where the private key is unavailable, it may be necessary to provide a Proc (or lambda) via the signer parameter. This Proc should return a signature created using a RPC call to the Service Account Credentials signBlob method as shown in the example below.

Returns:

  • (PostObject)

    An object containing the URL, fields, and values needed to upload files via HTML forms.

Raises:

See Also:



2436
2437
2438
2439
2440
2441
2442
2443
2444
2445
2446
2447
2448
2449
2450
2451
# File 'lib/google/cloud/storage/bucket.rb', line 2436

def post_object path,
                policy: nil,
                issuer: nil,
                client_email: nil,
                signing_key: nil,
                private_key: nil,
                signer: nil
  ensure_service!
  sign = File::SignerV2.from_bucket self, path
  sign.post_object issuer: issuer,
                   client_email: client_email,
                   signing_key: signing_key,
                   private_key: private_key,
                   signer: signer,
                   policy: policy
end

#public_access_preventionString?

The value for Public Access Prevention in the bucket's IAM configuration. Currently, inherited and enforced are supported. When set to enforced, Public Access Prevention is enforced in the bucket's IAM configuration. This value can be modified by calling #public_access_prevention=.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.public_access_prevention = :enforced
bucket.public_access_prevention #=> "enforced"

Returns:

  • (String, nil)

    Currently, inherited and enforced are supported. Returns nil if the bucket has no IAM configuration.



1055
1056
1057
# File 'lib/google/cloud/storage/bucket.rb', line 1055

def public_access_prevention
  @gapi.iam_configuration&.public_access_prevention
end

#public_access_prevention=(new_public_access_prevention) ⇒ Object

Sets the value for Public Access Prevention in the bucket's IAM configuration. This value can be queried by calling #public_access_prevention.

Examples:

Set Public Access Prevention to enforced:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.public_access_prevention = :enforced
bucket.public_access_prevention #=> "enforced"

Set Public Access Prevention to inherited:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.public_access_prevention = :inherited
bucket.public_access_prevention #=> "inherited"

Parameters:

  • new_public_access_prevention (Symbol, String)

    The bucket's new Public Access Prevention configuration. Currently, inherited and enforced are supported. When set to enforced, Public Access Prevention is enforced in the bucket's IAM configuration.



1087
1088
1089
1090
1091
# File 'lib/google/cloud/storage/bucket.rb', line 1087

def public_access_prevention= new_public_access_prevention
  @gapi.iam_configuration ||= API::Bucket::IamConfiguration.new
  @gapi.iam_configuration.public_access_prevention = new_public_access_prevention.to_s
  patch_gapi! :iam_configuration
end

#public_access_prevention_enforced?Boolean

Whether the bucket's file IAM configuration enforces Public Access Prevention. The default is false. This value can be modified by calling #public_access_prevention=.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.public_access_prevention = :enforced
bucket.public_access_prevention_enforced? # true

Returns:

  • (Boolean)

    Returns false if the bucket has no IAM configuration or if Public Access Prevention is not enforced in the IAM configuration. Returns true if Public Access Prevention is enforced in the IAM configuration.



1111
1112
1113
1114
# File 'lib/google/cloud/storage/bucket.rb', line 1111

def public_access_prevention_enforced?
  return false unless @gapi.iam_configuration&.public_access_prevention
  @gapi.iam_configuration.public_access_prevention.to_s == "enforced"
end

#public_access_prevention_inherited?Boolean Also known as: public_access_prevention_unspecified?

Whether the value for Public Access Prevention in the bucket's IAM configuration is inherited. The default is false. This value can be modified by calling #public_access_prevention=.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.public_access_prevention = :inherited
bucket.public_access_prevention_inherited? # true

Returns:

  • (Boolean)

    Returns false if the bucket has no IAM configuration or if Public Access Prevention is not inherited in the IAM configuration. Returns true if Public Access Prevention is inherited in the IAM configuration.



1134
1135
1136
1137
# File 'lib/google/cloud/storage/bucket.rb', line 1134

def public_access_prevention_inherited?
  return false unless @gapi.iam_configuration&.public_access_prevention
  ["inherited", "unspecified"].include? @gapi.iam_configuration.public_access_prevention.to_s
end

#reload!Object Also known as: refresh!

Reloads the bucket with current data from the Storage service.



3112
3113
3114
3115
3116
3117
3118
# File 'lib/google/cloud/storage/bucket.rb', line 3112

def reload!
  ensure_service!
  @gapi = service.get_bucket name, user_project: user_project
  # If NotFound then lazy will never be unset
  @lazy = nil
  self
end

#requester_paysBoolean? Also known as: requester_pays?

Indicates that a client accessing the bucket or a file it contains must assume the transit costs related to the access. The requester must pass the user_project option to Project#bucket and Project#buckets to indicate the project to which the access costs should be billed.
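
A minimal sketch of checking the flag on a bucket in another project (names are placeholders):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "other-project-bucket", user_project: true

puts "Requester is billed for access" if bucket.requester_pays?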

Returns:

  • (Boolean, nil)

    Returns true if requester pays is enabled for the bucket.



624
625
626
# File 'lib/google/cloud/storage/bucket.rb', line 624

def requester_pays
  @gapi.billing&.requester_pays
end

#requester_pays=(new_requester_pays) ⇒ Object

Enables requester pays for the bucket. If enabled, a client accessing the bucket or a file it contains must assume the transit costs related to the access. The requester must pass the user_project option to Project#bucket and Project#buckets to indicate the project to which the access costs should be billed.

To pass metageneration preconditions, call this method within a block passed to #update.

Examples:

Enable requester pays for a bucket:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.requester_pays = true # API call
# Other projects must now provide `user_project` option when calling
# Project#bucket or Project#buckets to access this bucket.

Parameters:

  • new_requester_pays (Boolean)

    When set to true, requester pays is enabled for the bucket.



653
654
655
656
657
# File 'lib/google/cloud/storage/bucket.rb', line 653

def requester_pays= new_requester_pays
  @gapi.billing ||= API::Bucket::Billing.new
  @gapi.billing.requester_pays = new_requester_pays
  patch_gapi! :billing
end

#restore_file(file_path, generation, copy_source_acl: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil, projection: nil, user_project: nil, fields: nil, options: {}) ⇒ Google::Cloud::Storage::File

Restores a soft-deleted object.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.restore_file "path/of/file", <generation-of-the-file>

Parameters:

  • file_path (String)

    Name of the file.

  • generation (Fixnum)

    Selects a specific revision of this object.

  • copy_source_acl (Boolean) (defaults to: nil)

    If true, copies the source file's ACL; otherwise, uses the bucket's default file ACL. The default is false.

  • if_generation_match (Fixnum) (defaults to: nil)

    Makes the operation conditional on whether the file's one live generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.

  • if_generation_not_match (Fixnum) (defaults to: nil)

    Makes the operation conditional on whether none of the file's live generations match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.

  • if_metageneration_match (Fixnum) (defaults to: nil)

    Makes the operation conditional on whether the file's one live metageneration matches the given value.

  • if_metageneration_not_match (Fixnum) (defaults to: nil)

    Makes the operation conditional on whether none of the object's live metagenerations match the given value.

  • projection (String) (defaults to: nil)

    Set of properties to return. Defaults to full.

  • user_project (String) (defaults to: nil)

    The project to be billed for this request. Required for Requester Pays buckets.

  • fields (String) (defaults to: nil)

    Selector specifying which fields to include in a partial response.

Returns:



1883
1884
1885
1886
1887
1888
1889
1890
1891
1892
1893
1894
1895
1896
1897
1898
1899
1900
1901
1902
1903
1904
1905
1906
1907
1908
# File 'lib/google/cloud/storage/bucket.rb', line 1883

def restore_file file_path,
                 generation,
                 copy_source_acl: nil,
                 if_generation_match: nil,
                 if_generation_not_match: nil,
                 if_metageneration_match: nil,
                 if_metageneration_not_match: nil,
                 projection: nil,
                 user_project: nil,
                 fields: nil,
                 options: {}
  ensure_service!
  gapi = service.restore_file name,
                              file_path,
                              generation,
                              copy_source_acl: File::Acl.predefined_rule_for(copy_source_acl),
                              if_generation_match: if_generation_match,
                              if_generation_not_match: if_generation_not_match,
                              if_metageneration_match: if_metageneration_match,
                              if_metageneration_not_match: if_metageneration_not_match,
                              projection: projection,
                              user_project: user_project,
                              fields: fields,
                              options: options
  File.from_gapi gapi, service, user_project: user_project
end

#retention_effective_atDateTime?

The time from which the retention policy was effective. Whenever a retention policy is created or extended, GCS updates the effective date of the policy. The effective date signals the date starting from which objects were guaranteed to be retained for the full duration of the policy.

This field is updated when the retention policy is created or modified, including extension of a locked policy.
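
A minimal sketch of reading the effective date after setting a retention period (the bucket name is a placeholder):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.retention_period = 2592000 # 30 days in seconds
puts bucket.retention_effective_at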

Returns:

  • (DateTime, nil)

    The effective date of the bucket's retention policy, if a policy exists.



793
794
795
# File 'lib/google/cloud/storage/bucket.rb', line 793

def retention_effective_at
  @gapi.retention_policy&.effective_time
end

#retention_periodInteger?

The period of time (in seconds) that files in the bucket must be retained, and cannot be deleted, overwritten, or archived. The value must be between 0 and 100 years (in seconds.)

See also: #retention_period=, #retention_effective_at, and #retention_policy_locked?.
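
A minimal sketch of reading the value (the bucket name is a placeholder):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

if bucket.retention_period
  puts "Files are retained for #{bucket.retention_period} seconds"
else
  puts "No retention policy is set"
end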

Returns:

  • (Integer, nil)

    The retention period defined in seconds, if a retention policy exists for the bucket.



731
732
733
# File 'lib/google/cloud/storage/bucket.rb', line 731

def retention_period
  @gapi.retention_policy&.retention_period
end

#retention_period=(new_retention_period) ⇒ Object

The period of time (in seconds) that files in the bucket must be retained, and cannot be deleted, overwritten, or archived. Passing a valid Integer value will add a new retention policy to the bucket if none exists. Passing nil will remove the retention policy from the bucket if it exists, unless the policy is locked.

Locked policies can be extended in duration by using this method to set a higher value. Such an extension is permanent, and it cannot later be reduced. The extended duration will apply retroactively to all files currently in the bucket.

See also: #lock_retention_policy!, #retention_period, #retention_effective_at, and #retention_policy_locked?.

To pass metageneration preconditions, call this method within a block passed to #update.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.retention_period = 2592000 # 30 days in seconds

file = bucket.create_file "path/to/local.file.ext"
file.delete # raises Google::Cloud::PermissionDeniedError

Parameters:

  • new_retention_period (Integer, nil)

    The retention period defined in seconds. The value must be between 0 and 100 years (in seconds), or nil.



769
770
771
772
773
774
775
776
777
778
# File 'lib/google/cloud/storage/bucket.rb', line 769

def retention_period= new_retention_period
  if new_retention_period.nil?
    @gapi.retention_policy = nil
  else
    @gapi.retention_policy ||= API::Bucket::RetentionPolicy.new
    @gapi.retention_policy.retention_period = new_retention_period
  end

  patch_gapi! :retention_policy
end

#retention_policy_locked?Boolean

Whether the bucket's file retention policy is locked and its retention period cannot be reduced. See #retention_period= and #lock_retention_policy!.

This value can only be set to true by calling #lock_retention_policy!.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.retention_period = 2592000 # 30 days in seconds
bucket.lock_retention_policy!
bucket.retention_policy_locked? # true

file = bucket.create_file "path/to/local.file.ext"
file.delete # raises Google::Cloud::PermissionDeniedError

Returns:

  • (Boolean)

    Returns false if there is no retention policy or if the retention policy is unlocked and the retention period can be reduced. Returns true if the retention policy is locked and the retention period cannot be reduced.



824
825
826
827
828
# File 'lib/google/cloud/storage/bucket.rb', line 824

def retention_policy_locked?
  return false unless @gapi.retention_policy
  !@gapi.retention_policy.is_locked.nil? &&
    @gapi.retention_policy.is_locked
end

#rpoString?

Recovery Point Objective (RPO) is an attribute of a bucket that measures how long it takes for a set of updates to be asynchronously copied to the other region. Currently, DEFAULT and ASYNC_TURBO are supported. When set to ASYNC_TURBO, Turbo Replication is enabled for the bucket. DEFAULT is used to reset the RPO on an existing bucket where it is set to ASYNC_TURBO. This value can be modified by calling #rpo=.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.rpo = :DEFAULT
bucket.rpo #=> "DEFAULT"

Returns:

  • (String, nil)

    Currently, DEFAULT and ASYNC_TURBO are supported. Returns nil if the bucket has no RPO.



1161
1162
1163
# File 'lib/google/cloud/storage/bucket.rb', line 1161

def rpo
  @gapi.rpo
end

#rpo=(new_rpo) ⇒ Object

Sets the value for Recovery Point Objective (RPO) in the bucket. This value can be queried by calling #rpo.

Examples:

Set RPO to DEFAULT:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.rpo = :DEFAULT
bucket.rpo #=> "DEFAULT"

Set RPO to ASYNC_TURBO:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.rpo = :ASYNC_TURBO
bucket.rpo #=> "ASYNC_TURBO"

Parameters:

  • new_rpo (Symbol, String)

    The bucket's new Recovery Point Objective metadata. Currently, DEFAULT and ASYNC_TURBO are supported. When set to ASYNC_TURBO, Turbo Replication is enabled for a bucket.



1192
1193
1194
1195
# File 'lib/google/cloud/storage/bucket.rb', line 1192

def rpo= new_rpo
  @gapi.rpo = new_rpo&.to_s
  patch_gapi! :rpo
end

#signed_url(path = nil, method: "GET", expires: nil, content_type: nil, content_md5: nil, headers: nil, issuer: nil, client_email: nil, signing_key: nil, private_key: nil, signer: nil, query: nil, scheme: "HTTPS", virtual_hosted_style: nil, bucket_bound_hostname: nil, version: nil) ⇒ String

Generates a signed URL. See Signed URLs for more information.

Generating a signed URL requires service account credentials, either by connecting with a service account when calling Google::Cloud.storage, or by passing in the service account issuer and signing_key values. Although the private key can be passed as a string for convenience, creating and storing an instance of OpenSSL::PKey::RSA is more efficient when making multiple calls to signed_url.

A SignedUrlUnavailable is raised if the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
shared_url = bucket.signed_url "avatars/heidi/400x400.png"

Using the expires and version options:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
shared_url = bucket.signed_url "avatars/heidi/400x400.png",
                               expires: 300, # 5 minutes from now
                               version: :v4

Using the issuer and signing_key options:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
key = OpenSSL::PKey::RSA.new "-----BEGIN PRIVATE KEY-----\n..."
shared_url = bucket.signed_url "avatars/heidi/400x400.png",
                               issuer: "service-account@gcloud.com",
                               signing_key: key

Using Cloud IAMCredentials signBlob to create the signature:

require "google/cloud/storage"
require "google/apis/iamcredentials_v1"
require "googleauth"

# Issuer is the service account email that the Signed URL will be signed with
# and any permission granted in the Signed URL must be granted to the
# Google Service Account.
issuer = "service-account@project-id.iam.gserviceaccount.com"

# Create a lambda that accepts the string_to_sign
signer = lambda do |string_to_sign|
  IAMCredentials = Google::Apis::IamcredentialsV1
  iam_client = IAMCredentials::IAMCredentialsService.new

  # Get the environment configured authorization
  scopes = ["https://www.googleapis.com/auth/iam"]
  iam_client.authorization = Google::Auth.get_application_default scopes

  request = Google::Apis::IamcredentialsV1::SignBlobRequest.new(
    payload: string_to_sign
  )
  resource = "projects/-/serviceAccounts/#{issuer}"
  response = iam_client.sign_service_account_blob resource, request
  response.signed_blob
end

storage = Google::Cloud::Storage.new

bucket_name = "my-todo-app"
file_path = "avatars/heidi/400x400.png"
url = storage.signed_url bucket_name, file_path,
                         method: "GET", issuer: issuer,
                         signer: signer

Using the headers option:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
shared_url = bucket.signed_url "avatars/heidi/400x400.png",
                               headers: {
                                 "x-goog-acl" => "private",
                                 "x-goog-meta-foo" => "bar,baz"
                               }

Generating a signed URL for resumable upload:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
url = bucket.signed_url "avatars/heidi/400x400.png",
                        method: "POST",
                        content_type: "image/png",
                        headers: {
                          "x-goog-resumable" => "start"
                        }
# Send the `x-goog-resumable:start` header and the content type
# with the resumable upload POST request.

Omitting path for a URL to list all files in the bucket.

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
list_files_url = bucket.signed_url version: :v4

Parameters:

  • path (String, nil) (defaults to: nil)

    Path to the file in Google Cloud Storage, or nil to generate a URL for listing all files in the bucket.

  • method (String) (defaults to: "GET")

    The HTTP verb to be used with the signed URL. Signed URLs can be used with GET, HEAD, PUT, and DELETE requests. Default is GET.

  • expires (Integer) (defaults to: nil)

    The number of seconds until the URL expires. If the version is :v2, the default is 300 (5 minutes). If the version is :v4, the default is 604800 (7 days).

  • content_type (String) (defaults to: nil)

    When provided, the client (browser) must send this value in the HTTP header. e.g. text/plain. This param is not used if the version is :v4.

  • content_md5 (String) (defaults to: nil)

    The MD5 digest value in base64. If you provide this in the string, the client (usually a browser) must provide this HTTP header with this same value in its request. This param is not used if the version is :v4.

  • headers (Hash) (defaults to: nil)

    Google extension headers (custom HTTP headers that begin with x-goog-) that must be included in requests that use the signed URL.

  • issuer (String) (defaults to: nil)

    Service Account's Client Email.

  • client_email (String) (defaults to: nil)

    Service Account's Client Email.

  • signing_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

  • private_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

  • signer (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil)

    Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.

    When using this method in environments such as GAE Flexible Environment, GKE, or Cloud Functions where the private key is unavailable, it may be necessary to provide a Proc (or lambda) via the signer parameter. This Proc should return a signature created using a RPC call to the Service Account Credentials signBlob method as shown in the example below.

  • query (Hash) (defaults to: nil)

    Query string parameters to include in the signed URL. The given parameters are not verified by the signature.

    Parameters such as response-content-disposition and response-content-type can alter the behavior of the response when using the URL, but only when the file resource is missing the corresponding values. (These values can be permanently set using File#content_disposition= and File#content_type=.)

  • scheme (String) (defaults to: "HTTPS")

    The URL scheme. The default value is HTTPS.

  • virtual_hosted_style (Boolean) (defaults to: nil)

    Whether to use a virtual hosted-style hostname, which adds the bucket into the host portion of the URI rather than the path, e.g. https://mybucket.storage.googleapis.com/.... For V4 signing, this also sets the host header in the canonicalized extension headers to the virtual hosted-style host, unless that header is supplied via the headers param. The default value of false uses the form of https://storage.googleapis.com/mybucket.

  • bucket_bound_hostname (String) (defaults to: nil)

    Use a bucket-bound hostname, which replaces the storage.googleapis.com host with the name of a CNAME bucket, e.g. a bucket named gcs-subdomain.my.domain.tld, or a Google Cloud Load Balancer which routes to a bucket you own, e.g. my-load-balancer-domain.tld.

  • version (Symbol, String) (defaults to: nil)

    The version of the signed credential to create. Must be one of :v2 or :v4. The default value is :v2.

Returns:

  • (String)

    The signed URL.

Raises:

See Also:



2232
2233
2234
2235
2236
2237
2238
2239
2240
2241
2242
2243
2244
2245
2246
2247
2248
2249
2250
2251
2252
2253
2254
2255
2256
2257
2258
2259
2260
2261
2262
2263
2264
2265
2266
2267
2268
2269
2270
2271
2272
2273
2274
2275
2276
2277
2278
2279
2280
2281
# File 'lib/google/cloud/storage/bucket.rb', line 2232

def signed_url path = nil,
               method: "GET",
               expires: nil,
               content_type: nil,
               content_md5: nil,
               headers: nil,
               issuer: nil,
               client_email: nil,
               signing_key: nil,
               private_key: nil,
               signer: nil,
               query: nil,
               scheme: "HTTPS",
               virtual_hosted_style: nil,
               bucket_bound_hostname: nil,
               version: nil
  ensure_service!
  version ||= :v2
  case version.to_sym
  when :v2
    sign = File::SignerV2.from_bucket self, path
    sign.signed_url method: method,
                    expires: expires,
                    headers: headers,
                    content_type: content_type,
                    content_md5: content_md5,
                    issuer: issuer,
                    client_email: client_email,
                    signing_key: signing_key,
                    private_key: private_key,
                    signer: signer,
                    query: query
  when :v4
    sign = File::SignerV4.from_bucket self, path
    sign.signed_url method: method,
                    expires: expires,
                    headers: headers,
                    issuer: issuer,
                    client_email: client_email,
                    signing_key: signing_key,
                    private_key: private_key,
                    signer: signer,
                    query: query,
                    scheme: scheme,
                    virtual_hosted_style: virtual_hosted_style,
                    bucket_bound_hostname: bucket_bound_hostname
  else
    raise ArgumentError, "version '#{version}' not supported"
  end
end

#soft_delete_policyGoogle::Apis::StorageV1::Bucket::SoftDeletePolicy

The bucket's soft delete policy. If this policy is set, any deleted objects will be soft-deleted according to the time specified in the policy. This value can be modified by calling #soft_delete_policy=.


Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.soft_delete_policy

Returns:

  • (Google::Apis::StorageV1::Bucket::SoftDeletePolicy)

    The default retention policy is for 7 days.



1215
1216
1217
# File 'lib/google/cloud/storage/bucket.rb', line 1215

def soft_delete_policy
  @gapi.soft_delete_policy
end

#soft_delete_policy=(new_soft_delete_policy) ⇒ Object

Sets the value for Soft Delete Policy in the bucket. This value can be queried by calling #soft_delete_policy.

Examples:

Set Soft Delete Policy to 10 days using SoftDeletePolicy class:

require "google/cloud/storage"
require "date"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

soft_delete_policy = Google::Apis::StorageV1::Bucket::SoftDeletePolicy.new
soft_delete_policy.retention_duration_seconds = 10*24*60*60

bucket.soft_delete_policy = soft_delete_policy

Set Soft Delete Policy to 5 days using Hash:

require "google/cloud/storage"
require "date"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

soft_delete_policy = { retention_duration_seconds: 432000 }
bucket.soft_delete_policy = soft_delete_policy

Parameters:

  • new_soft_delete_policy (Google::Apis::StorageV1::Bucket::SoftDeletePolicy, Hash(String => String))

    The bucket's new Soft Delete Policy.



1251
1252
1253
1254
# File 'lib/google/cloud/storage/bucket.rb', line 1251

def soft_delete_policy= new_soft_delete_policy
  @gapi.soft_delete_policy = new_soft_delete_policy || {}
  patch_gapi! :soft_delete_policy
end

#storage_classString

The bucket's storage class. This defines how objects in the bucket are stored and determines the SLA and the cost of storage. Values include STANDARD, NEARLINE, COLDLINE, and ARCHIVE. REGIONAL, MULTI_REGIONAL, and DURABLE_REDUCED_AVAILABILITY are supported as legacy storage classes.
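
A minimal sketch of reading the value (the bucket name and the returned class are placeholders):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.storage_class #=> "STANDARD"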

Returns:

  • (String)


404
405
406
# File 'lib/google/cloud/storage/bucket.rb', line 404

def storage_class
  @gapi.storage_class
end

#storage_class=(new_storage_class) ⇒ Object

Updates the bucket's storage class. This defines how objects in the bucket are stored and determines the SLA and the cost of storage. Accepted values include :standard, :nearline, :coldline, and :archive, as well as the equivalent strings returned by #storage_class. :multi_regional, :regional, and :durable_reduced_availability are accepted as legacy storage classes. For more information, see Storage Classes.

To pass metageneration preconditions, call this method within a block passed to #update.
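
A minimal sketch of changing the default storage class (the bucket name is a placeholder):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.storage_class = :nearline
bucket.storage_class #=> "NEARLINE"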

Parameters:

  • new_storage_class (Symbol, String)

    Storage class of the bucket.



423
424
425
426
# File 'lib/google/cloud/storage/bucket.rb', line 423

def storage_class= new_storage_class
  @gapi.storage_class = storage_class_for new_storage_class
  patch_gapi! :storage_class
end

#test_permissions(*permissions) ⇒ Array<String>

Tests the specified permissions against the Cloud IAM access control policy.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

permissions = bucket.test_permissions "storage.buckets.get",
                                      "storage.buckets.delete"
permissions.include? "storage.buckets.get"    #=> true
permissions.include? "storage.buckets.delete" #=> false

Parameters:

  • permissions (String, Array<String>)

    The set of permissions against which to check access. Permissions must be of the format storage.resource.capability, where resource is one of buckets or objects.

Returns:

  • (Array<String>)

    The permissions held by the caller.

See Also:



2959
2960
2961
2962
2963
2964
2965
# File 'lib/google/cloud/storage/bucket.rb', line 2959

def test_permissions *permissions
  permissions = Array(permissions).flatten
  ensure_service!
  gapi = service.test_bucket_permissions name, permissions,
                                         user_project: user_project
  gapi.permissions
end

#uniform_bucket_level_access=(new_uniform_bucket_level_access) ⇒ Object

Sets whether uniform bucket-level access is enabled for this bucket. When this is enabled, access to the bucket will be configured through IAM, and legacy ACL policies will not work. When it is first enabled, #uniform_bucket_level_access_locked_at will be set by the API automatically. The uniform bucket-level access can then be disabled until the time specified, after which it will become immutable and calls to change it will fail. If uniform bucket-level access is enabled, calls to access legacy ACL information will fail.

Before enabling uniform bucket-level access, please review the uniform bucket-level access documentation.

To pass metageneration preconditions, call this method within a block passed to #update.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.uniform_bucket_level_access = true
bucket.uniform_bucket_level_access? # true

bucket.default_acl.public! # Google::Cloud::InvalidArgumentError

# The deadline for disabling uniform bucket-level access.
puts bucket.uniform_bucket_level_access_locked_at

Parameters:

  • new_uniform_bucket_level_access (Boolean)

    When set to true, uniform bucket-level access is enabled in the bucket's IAM configuration.



983
984
985
986
987
988
989
# File 'lib/google/cloud/storage/bucket.rb', line 983

def uniform_bucket_level_access= new_uniform_bucket_level_access
  @gapi.iam_configuration ||= API::Bucket::IamConfiguration.new
  @gapi.iam_configuration.uniform_bucket_level_access ||= \
    API::Bucket::IamConfiguration::UniformBucketLevelAccess.new
  @gapi.iam_configuration.uniform_bucket_level_access.enabled = new_uniform_bucket_level_access
  patch_gapi! :iam_configuration
end

#uniform_bucket_level_access?Boolean

Whether the bucket's file IAM configuration enables uniform bucket-level access. The default is false. This value can be modified by calling #uniform_bucket_level_access=.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.uniform_bucket_level_access = true
bucket.uniform_bucket_level_access? # true

Returns:

  • (Boolean)

    Returns false if the bucket has no IAM configuration or if uniform bucket-level access is not enabled in the IAM configuration. Returns true if uniform bucket-level access is enabled in the IAM configuration.



946
947
948
949
950
# File 'lib/google/cloud/storage/bucket.rb', line 946

def uniform_bucket_level_access?
  return false unless @gapi.iam_configuration&.uniform_bucket_level_access
  !@gapi.iam_configuration.uniform_bucket_level_access.enabled.nil? &&
    @gapi.iam_configuration.uniform_bucket_level_access.enabled
end

#uniform_bucket_level_access_locked_atDateTime?

The deadline time for disabling uniform bucket-level access by calling #uniform_bucket_level_access=. After the locked time the uniform bucket-level access setting cannot be changed from true to false. Corresponds to the property locked_time.

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket"

bucket.uniform_bucket_level_access = true

# The deadline for disabling uniform bucket-level access.
puts bucket.uniform_bucket_level_access_locked_at

Returns:



1011
1012
1013
1014
# File 'lib/google/cloud/storage/bucket.rb', line 1011

def uniform_bucket_level_access_locked_at
  return nil unless @gapi.iam_configuration&.uniform_bucket_level_access
  @gapi.iam_configuration.uniform_bucket_level_access.locked_time
end

#update(if_metageneration_match: nil, if_metageneration_not_match: nil) {|bucket| ... } ⇒ Object

Updates the bucket with changes made in the given block in a single PATCH request. The following attributes may be set: #cors, #logging_bucket=, #logging_prefix=, #versioning=, #website_main=, #website_404=, and #requester_pays=.

In addition, the #cors configuration accessible in the block is completely mutable and will be included in the request. (See Cors)

Examples:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
bucket.update do |b|
  b.website_main = "index.html"
  b.website_404 = "not_found.html"
  b.cors[0].methods = ["GET","POST","DELETE"]
  b.cors[1].headers << "X-Another-Custom-Header"
end

New CORS rules can also be added in a nested block:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

bucket.update do |b|
  b.cors do |c|
    c.add_rule ["http://example.org", "https://example.org"],
               "*",
               headers: ["X-My-Custom-Header"],
               max_age: 300
  end
end

With a if_metageneration_match precondition:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
bucket.update if_metageneration_match: 6 do |b|
  b.website_main = "index.html"
end

Parameters:

  • if_metageneration_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the bucket's current metageneration matches the given value.

  • if_metageneration_not_match (Integer) (defaults to: nil)

    Makes the operation conditional on whether the bucket's current metageneration does not match the given value.

Yields:

  • (bucket)

    a block yielding a delegate object for updating the bucket



1369
1370
1371
1372
1373
1374
1375
1376
1377
1378
1379
1380
# File 'lib/google/cloud/storage/bucket.rb', line 1369

def update if_metageneration_match: nil, if_metageneration_not_match: nil
  updater = Updater.new @gapi
  yield updater
  # Add check for mutable cors
  updater.check_for_changed_labels!
  updater.check_for_mutable_cors!
  updater.check_for_mutable_lifecycle!
  return if updater.updates.empty?
  update_gapi! updater.updates,
               if_metageneration_match: if_metageneration_match,
               if_metageneration_not_match: if_metageneration_not_match
end

#update_autoclass(autoclass_attributes) ⇒ Object

Updates all attributes of the bucket's autoclass configuration. It accepts the attributes as a Hash in the following format:

{ enabled: true, terminal_storage_class: "ARCHIVE" }

The terminal_storage_class field is optional and defaults to NEARLINE. Valid terminal_storage_class values are NEARLINE and ARCHIVE.
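
A minimal sketch of the call (the bucket name is a placeholder):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.update_autoclass({ enabled: true, terminal_storage_class: "ARCHIVE" })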

Parameters:

  • autoclass_attributes (Hash(String => String))


491
492
493
494
495
496
497
# File 'lib/google/cloud/storage/bucket.rb', line 491

def update_autoclass autoclass_attributes
  @gapi.autoclass ||= API::Bucket::Autoclass.new
  autoclass_attributes.each do |k, v|
    @gapi.autoclass.send "#{k}=", v
  end
  patch_gapi! :autoclass
end

#update_policy(new_policy) ⇒ Policy Also known as: policy=

Updates the Cloud IAM access control policy for this bucket. The policy should be read from #policy. See Policy for an explanation of the policy etag property and how to modify policies.

You can also update the policy by passing a block to #policy, which will call this method internally after the block completes.

Examples:

Updating a Policy that is implicitly version 1:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

policy = bucket.policy
policy.version # 1
policy.remove "roles/storage.admin", "user:owner@example.com"
policy.add "roles/storage.admin", "user:newowner@example.com"
policy.roles["roles/storage.objectViewer"] = ["allUsers"]

policy = bucket.update_policy policy

Updating a Policy from version 1 to version 3 by adding a condition:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

policy = bucket.policy requested_policy_version: 3
policy.version # 1
policy.version = 3

expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")"
policy.bindings.insert({
                        role: "roles/storage.admin",
                        members: ["user:owner@example.com"],
                        condition: {
                          title: "my-condition",
                          description: "description of condition",
                          expression: expr
                        }
                      })

policy = bucket.update_policy policy

Updating a version 3 Policy:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

policy = bucket.policy requested_policy_version: 3
policy.version # 3 indicates an existing binding with a condition.

expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")"
policy.bindings.insert({
                        role: "roles/storage.admin",
                        members: ["user:owner@example.com"],
                        condition: {
                          title: "my-condition",
                          description: "description of condition",
                          expression: expr
                        }
                      })

policy = bucket.update_policy policy

Parameters:

  • new_policy (Policy)

    a new or modified Cloud IAM Policy for this bucket

Returns:

  • (Policy)

    The policy returned by the API update operation.

See Also:



2925
2926
2927
2928
2929
2930
# File 'lib/google/cloud/storage/bucket.rb', line 2925

def update_policy new_policy
  ensure_service!
  gapi = service.set_bucket_policy name, new_policy.to_gapi,
                                   user_project: user_project
  new_policy.class.from_gapi gapi
end

#versioning=(new_versioning) ⇒ Object

Updates whether Object Versioning is enabled for the bucket.

To pass metageneration preconditions, call this method within a block passed to #update.
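
A minimal sketch of enabling versioning and verifying it with #versioning? (the bucket name is a placeholder):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.versioning = true
bucket.versioning? #=> true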

Parameters:

  • new_versioning (Boolean)

    true if versioning is to be enabled for the bucket.



521
522
523
524
525
# File 'lib/google/cloud/storage/bucket.rb', line 521

def versioning= new_versioning
  @gapi.versioning ||= API::Bucket::Versioning.new
  @gapi.versioning.enabled = new_versioning
  patch_gapi! :versioning
end

#versioning?Boolean

Whether Object Versioning is enabled for the bucket.

Returns:

  • (Boolean)


506
507
508
# File 'lib/google/cloud/storage/bucket.rb', line 506

def versioning?
  @gapi.versioning&.enabled?
end

#website_404String

The page returned from a static website served from the bucket when a site visitor requests a resource that does not exist.

Returns:

  • (String)

See Also:



569
570
571
# File 'lib/google/cloud/storage/bucket.rb', line 569

def website_404
  @gapi.website&.not_found_page
end

#website_404=(website_404) ⇒ Object

Updates the page returned from a static website served from the bucket when a site visitor requests a resource that does not exist.

To pass metageneration preconditions, call this method within a block passed to #update.
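
A minimal sketch of configuring both static-website pages inside an #update block (names are placeholders):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

bucket.update do |b|
  b.website_main = "index.html"
  b.website_404  = "not_found.html"
end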



608
609
610
611
612
# File 'lib/google/cloud/storage/bucket.rb', line 608

def website_404= website_404
  @gapi.website ||= API::Bucket::Website.new
  @gapi.website.not_found_page = website_404
  patch_gapi! :website
end

#website_mainString

The main page suffix for a static website. If the requested object path is missing, the service will ensure the path has a trailing '/', append this suffix, and attempt to retrieve the resulting object. This allows the creation of index.html objects to represent directory pages.

Returns:

  • (String)

    The main page suffix.

See Also:



539
540
541
# File 'lib/google/cloud/storage/bucket.rb', line 539

def website_main
  @gapi.website&.main_page_suffix
end

#website_main=(website_main) ⇒ Object

Updates the main page suffix for a static website.

To pass metageneration preconditions, call this method within a block passed to #update.

Parameters:

  • website_main (String)

    The main page suffix.

See Also:



554
555
556
557
558
# File 'lib/google/cloud/storage/bucket.rb', line 554

def website_main= website_main
  @gapi.website ||= API::Bucket::Website.new
  @gapi.website.main_page_suffix = website_main
  patch_gapi! :website
end