Types for the Google AI Generative Language v1beta3 API

class google.ai.generativelanguage_v1beta3.types.BatchEmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Batch request to get a text embedding from the model.

model

Required. The name of the Model to use for generating the embedding. Examples: models/embedding-gecko-001

Type

str

texts

Required. The free-form input texts that the model will turn into embeddings. The current limit is 100 texts; requests exceeding this limit return an error.

Type

MutableSequence[str]
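
A minimal usage sketch (not part of the generated reference), assuming the GAPIC TextServiceClient generated for this API; the model name and API key are placeholders:

    from google.ai import generativelanguage_v1beta3 as glm

    # Assumes an API key is available; application default credentials also work.
    client = glm.TextServiceClient(client_options={"api_key": "YOUR_API_KEY"})

    request = glm.BatchEmbedTextRequest(
        model="models/embedding-gecko-001",
        texts=["hello world", "how are you?"],  # at most 100 texts per request
    )
    response = client.batch_embed_text(request=request)
    for embedding in response.embeddings:
        print(len(embedding.value))  # dimension of each embedding vector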

class google.ai.generativelanguage_v1beta3.types.BatchEmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response to a BatchEmbedTextRequest.

embeddings

Output only. The embeddings generated from the input text.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.Embedding]

class google.ai.generativelanguage_v1beta3.types.CitationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A collection of source attributions for a piece of content.

citation_sources

Citations to sources for a specific response.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.CitationSource]

class google.ai.generativelanguage_v1beta3.types.CitationSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A citation to a source for a portion of a specific response.

start_index

Optional. Start of segment of the response that is attributed to this source.

Index indicates the start of the segment, measured in bytes.

This field is a member of oneof _start_index.

Type

int

end_index

Optional. End of the attributed segment, exclusive.

This field is a member of oneof _end_index.

Type

int

uri

Optional. URI that is attributed as a source for a portion of the text.

This field is a member of oneof _uri.

Type

str

license_

Optional. License for the GitHub project that is attributed as a source for segment.

License info is required for code citations.

This field is a member of oneof _license.

Type

str

class google.ai.generativelanguage_v1beta3.types.ContentFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.

reason

The reason content was blocked during request processing.

Type

google.ai.generativelanguage_v1beta3.types.ContentFilter.BlockedReason

message

A string that describes the filtering behavior in more detail.

This field is a member of oneof _message.

Type

str

class BlockedReason(value)[source]

Bases: proto.enums.Enum

A list of reasons why content may have been blocked.

Values:
BLOCKED_REASON_UNSPECIFIED (0):

A blocked reason was not specified.

SAFETY (1):

Content was blocked by safety settings.

OTHER (2):

Content was blocked, but the reason is uncategorized.

class google.ai.generativelanguage_v1beta3.types.CountMessageTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so each model may return a different token_count.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

prompt

Required. The prompt, whose token count is to be returned.

Type

google.ai.generativelanguage_v1beta3.types.MessagePrompt

class google.ai.generativelanguage_v1beta3.types.CountMessageTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response from CountMessageTokens.

It returns the model’s token_count for the prompt.

token_count

The number of tokens that the model tokenizes the prompt into.

Always non-negative.

Type

int

class google.ai.generativelanguage_v1beta3.types.CountTextTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so each model may return a different token_count.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

prompt

Required. The free-form input text given to the model as a prompt.

Type

google.ai.generativelanguage_v1beta3.types.TextPrompt

class google.ai.generativelanguage_v1beta3.types.CountTextTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response from CountTextTokens.

It returns the model’s token_count for the prompt.

token_count

The number of tokens that the model tokenizes the prompt into.

Always non-negative.

Type

int
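
A sketch of how this request/response pair is used, assuming the generated TextServiceClient and configured credentials:

    from google.ai import generativelanguage_v1beta3 as glm

    client = glm.TextServiceClient()  # assumes default credentials

    request = glm.CountTextTokensRequest(
        model="models/text-bison-001",
        prompt=glm.TextPrompt(text="How many tokens is this sentence?"),
    )
    response = client.count_text_tokens(request=request)
    print(response.token_count)  # always non-negative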

class google.ai.generativelanguage_v1beta3.types.CreatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to create a Permission.

parent

Required. The parent resource of the Permission. Format: tunedModels/{tuned_model}

Type

str

permission

Required. The permission to create.

Type

google.ai.generativelanguage_v1beta3.types.Permission
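
A sketch of granting read access on a tuned model, assuming the generated PermissionServiceClient; the model id and email address are placeholders:

    from google.ai import generativelanguage_v1beta3 as glm

    client = glm.PermissionServiceClient()  # assumes default credentials

    response = client.create_permission(
        request=glm.CreatePermissionRequest(
            parent="tunedModels/my-model-id",
            permission=glm.Permission(
                grantee_type=glm.Permission.GranteeType.USER,
                email_address="user@example.com",
                role=glm.Permission.Role.READER,
            ),
        )
    )
    print(response.name)  # server-generated permission name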

class google.ai.generativelanguage_v1beta3.types.CreateTunedModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata about the state and progress of creating a tuned model, returned from the long-running operation.

tuned_model

Name of the tuned model associated with the tuning operation.

Type

str

total_steps

The total number of tuning steps.

Type

int

completed_steps

The number of steps completed.

Type

int

completed_percent

The completed percentage for the tuning operation.

Type

float

snapshots

Metrics collected during tuning.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.TuningSnapshot]

class google.ai.generativelanguage_v1beta3.types.CreateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to create a TunedModel.

tuned_model_id

Optional. The unique id for the tuned model if specified. This value should be up to 40 characters; the first character must be a letter, and the last may be a letter or a number. The id must match the regular expression: [a-z]([a-z0-9-]{0,38}[a-z0-9])?.

This field is a member of oneof _tuned_model_id.

Type

str

tuned_model

Required. The tuned model to create.

Type

google.ai.generativelanguage_v1beta3.types.TunedModel
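
A hedged sketch of the full tuning flow, assuming the generated ModelServiceClient; create_tuned_model returns a long-running operation whose metadata is the CreateTunedModelMetadata described above (model names and training data are placeholders):

    from google.ai import generativelanguage_v1beta3 as glm

    client = glm.ModelServiceClient()  # assumes default credentials

    tuned_model = glm.TunedModel(
        base_model="models/text-bison-001",
        display_name="Sentence Translator",
        tuning_task=glm.TuningTask(
            training_data=glm.Dataset(
                examples=glm.TuningExamples(
                    examples=[
                        glm.TuningExample(text_input="hello", output="bonjour"),
                    ]
                )
            )
        ),
    )
    operation = client.create_tuned_model(
        request=glm.CreateTunedModelRequest(tuned_model=tuned_model)
    )
    result = operation.result()  # blocks until tuning completes
    print(operation.metadata.completed_percent)  # CreateTunedModelMetadata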

class google.ai.generativelanguage_v1beta3.types.Dataset(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Dataset for training or validation.

examples

Optional. Inline examples.

This field is a member of oneof dataset.

Type

google.ai.generativelanguage_v1beta3.types.TuningExamples

class google.ai.generativelanguage_v1beta3.types.DeletePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to delete the Permission.

name

Required. The resource name of the permission. Format: tunedModels/{tuned_model}/permissions/{permission}

Type

str

class google.ai.generativelanguage_v1beta3.types.DeleteTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to delete a TunedModel.

name

Required. The resource name of the model. Format: tunedModels/my-model-id

Type

str

class google.ai.generativelanguage_v1beta3.types.EmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to get a text embedding from the model.

model

Required. The model name to use with the format model=models/{model}.

Type

str

text

Required. The free-form input text that the model will turn into an embedding.

Type

str
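
The single-text variant, sketched under the same assumptions:

    from google.ai import generativelanguage_v1beta3 as glm

    client = glm.TextServiceClient()  # assumes default credentials

    response = client.embed_text(
        request=glm.EmbedTextRequest(
            model="models/embedding-gecko-001",
            text="hello world",
        )
    )
    print(response.embedding.value[:5])  # first few floats of the vector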

class google.ai.generativelanguage_v1beta3.types.EmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response to an EmbedTextRequest.

embedding

Output only. The embedding generated from the input text.

This field is a member of oneof _embedding.

Type

google.ai.generativelanguage_v1beta3.types.Embedding

class google.ai.generativelanguage_v1beta3.types.Embedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A list of floats representing the embedding.

value

The embedding values.

Type

MutableSequence[float]

class google.ai.generativelanguage_v1beta3.types.Example(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

An input/output example used to instruct the Model.

It demonstrates how the model should respond or format its response.

input

Required. An example of an input Message from the user.

Type

google.ai.generativelanguage_v1beta3.types.Message

output

Required. An example of what the model should output given the input.

Type

google.ai.generativelanguage_v1beta3.types.Message

class google.ai.generativelanguage_v1beta3.types.GenerateMessageRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to generate a message response from the model.

model

Required. The name of the model to use.

Format: name=models/{model}.

Type

str

prompt

Required. The structured textual input given to the model as a prompt. Given a prompt, the model will return what it predicts is the next message in the discussion.

Type

google.ai.generativelanguage_v1beta3.types.MessagePrompt

temperature

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.

This field is a member of oneof _temperature.

Type

float

candidate_count

Optional. The number of generated response messages to return.

This value must be between [1, 8], inclusive. If unset, this will default to 1.

This field is a member of oneof _candidate_count.

Type

int

top_p

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p.

This field is a member of oneof _top_p.

Type

float

top_k

Optional. The maximum number of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Top-k sampling considers the set of top_k most probable tokens.

This field is a member of oneof _top_k.

Type

int
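
A sketch of a chat call combining these fields, assuming the generated DiscussServiceClient; the model name and message are placeholders:

    from google.ai import generativelanguage_v1beta3 as glm

    client = glm.DiscussServiceClient()  # assumes default credentials

    request = glm.GenerateMessageRequest(
        model="models/chat-bison-001",
        prompt=glm.MessagePrompt(
            messages=[glm.Message(author="user", content="Tell me a joke.")]
        ),
        temperature=0.5,
        candidate_count=2,  # must be in [1, 8]
    )
    response = client.generate_message(request=request)
    for candidate in response.candidates:
        print(candidate.content)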

class google.ai.generativelanguage_v1beta3.types.GenerateMessageResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response from the model.

This includes candidate messages and conversation history in the form of chronologically-ordered messages.

candidates

Candidate response messages from the model.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.Message]

messages

The conversation history used by the model.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.Message]

filters

A set of content filtering metadata for the prompt and response text.

This indicates which SafetyCategory(s) blocked a candidate from this response, the lowest HarmProbability that triggered a block, and the HarmThreshold setting for that category.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.ContentFilter]

class google.ai.generativelanguage_v1beta3.types.GenerateTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to generate a text completion response from the model.

model

Required. The name of the Model or TunedModel to use for generating the completion. Examples: models/text-bison-001, tunedModels/sentence-translator-u3b7m

Type

str

prompt

Required. The free-form input text given to the model as a prompt. Given a prompt, the model will generate a TextCompletion response it predicts as the completion of the input text.

Type

google.ai.generativelanguage_v1beta3.types.TextPrompt

temperature

Optional. Controls the randomness of the output. Note: The default value varies by model; see the Model.temperature attribute of the Model returned by the getModel function.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model.

This field is a member of oneof _temperature.

Type

float

candidate_count

Optional. Number of generated responses to return.

This value must be between [1, 8], inclusive. If unset, this will default to 1.

This field is a member of oneof _candidate_count.

Type

int

max_output_tokens

Optional. The maximum number of tokens to include in a candidate.

If unset, this will default to output_token_limit specified in the Model specification.

This field is a member of oneof _max_output_tokens.

Type

int

top_p

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits number of tokens based on the cumulative probability.

Note: The default value varies by model; see the Model.top_p attribute of the Model returned by the getModel function.

This field is a member of oneof _top_p.

Type

float

top_k

Optional. The maximum number of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Top-k sampling considers the set of top_k most probable tokens. Defaults to 40.

Note: The default value varies by model; see the Model.top_k attribute of the Model returned by the getModel function.

This field is a member of oneof _top_k.

Type

int

safety_settings

A list of unique SafetySetting instances for blocking unsafe content, enforced on the GenerateTextRequest.prompt and GenerateTextResponse.candidates.

There should not be more than one setting for each SafetyCategory type. The API will block any prompts and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safety_settings. If there is no SafetySetting for a given SafetyCategory in the list, the API will use the default safety setting for that category.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.SafetySetting]

stop_sequences

The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response.

Type

MutableSequence[str]
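
Putting these fields together, a sketch of a text completion request with one safety override and a stop sequence, assuming the generated TextServiceClient:

    from google.ai import generativelanguage_v1beta3 as glm

    client = glm.TextServiceClient()  # assumes default credentials

    request = glm.GenerateTextRequest(
        model="models/text-bison-001",
        prompt=glm.TextPrompt(text="Write a haiku about the sea."),
        temperature=0.7,
        max_output_tokens=64,
        stop_sequences=["\n\n"],  # up to 5 sequences
        safety_settings=[
            glm.SafetySetting(
                category=glm.HarmCategory.HARM_CATEGORY_TOXICITY,
                threshold=glm.SafetySetting.HarmBlockThreshold.BLOCK_ONLY_HIGH,
            )
        ],
    )
    response = client.generate_text(request=request)
    for candidate in response.candidates:
        print(candidate.output)
    for content_filter in response.filters:  # populated only when content was blocked
        print(content_filter.reason, content_filter.message)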

class google.ai.generativelanguage_v1beta3.types.GenerateTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response from the model, including candidate completions.

candidates

Candidate responses from the model.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.TextCompletion]

filters

A set of content filtering metadata for the prompt and response text.

This indicates which SafetyCategory(s) blocked a candidate from this response, the lowest HarmProbability that triggered a block, and the HarmThreshold setting for that category. This indicates the smallest change to the SafetySettings that would be necessary to unblock at least 1 response.

The blocking is configured by the SafetySettings in the request (or the default SafetySettings of the API).

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.ContentFilter]

safety_feedback

Returns any safety feedback related to content filtering.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.SafetyFeedback]

class google.ai.generativelanguage_v1beta3.types.GetModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Model.

name

Required. The resource name of the model.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

class google.ai.generativelanguage_v1beta3.types.GetPermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Permission.

name

Required. The resource name of the permission.

Format: tunedModels/{tuned_model}/permissions/{permission}

Type

str

class google.ai.generativelanguage_v1beta3.types.GetTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Model.

name

Required. The resource name of the model.

Format: tunedModels/my-model-id

Type

str

class google.ai.generativelanguage_v1beta3.types.HarmCategory(value)[source]

Bases: proto.enums.Enum

The category of a rating.

These categories cover various kinds of harms that developers may wish to adjust.

Values:
HARM_CATEGORY_UNSPECIFIED (0):

Category is unspecified.

HARM_CATEGORY_DEROGATORY (1):

Negative or harmful comments targeting identity and/or protected attribute.

HARM_CATEGORY_TOXICITY (2):

Content that is rude, disrespectful, or profane.

HARM_CATEGORY_VIOLENCE (3):

Describes scenarios depicting violence against an individual or group, or general descriptions of gore.

HARM_CATEGORY_SEXUAL (4):

Contains references to sexual acts or other lewd content.

HARM_CATEGORY_MEDICAL (5):

Promotes unchecked medical advice.

HARM_CATEGORY_DANGEROUS (6):

Dangerous content that promotes, facilitates, or encourages harmful acts.

class google.ai.generativelanguage_v1beta3.types.Hyperparameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Hyperparameters controlling the tuning process.

epoch_count

Immutable. The number of training epochs. An epoch is one pass through the training data. If not set, a default of 10 will be used.

This field is a member of oneof _epoch_count.

Type

int

batch_size

Immutable. The batch size hyperparameter for tuning. If not set, a default of 16 or 64 will be used based on the number of training examples.

This field is a member of oneof _batch_size.

Type

int

learning_rate

Immutable. The learning rate hyperparameter for tuning. If not set, a default of 0.0002 or 0.002 will be calculated based on the number of training examples.

This field is a member of oneof _learning_rate.

Type

float

class google.ai.generativelanguage_v1beta3.types.ListModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing all Models.

page_size

The maximum number of Models to return (per page).

The service may return fewer models. If unspecified, at most 50 models will be returned per page. This method returns at most 1000 models per page, even if you pass a larger page_size.

Type

int

page_token

A page token, received from a previous ListModels call.

Provide the page_token returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListModels must match the call that provided the page token.

Type

str
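
The generated client wraps this pagination in a pager, so page_token rarely needs to be handled manually; a sketch:

    from google.ai import generativelanguage_v1beta3 as glm

    client = glm.ModelServiceClient()  # assumes default credentials

    # The pager fetches later pages transparently via next_page_token.
    for model in client.list_models(request=glm.ListModelsRequest(page_size=50)):
        print(model.name)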

class google.ai.generativelanguage_v1beta3.types.ListModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListModel containing a paginated list of Models.

models

The returned Models.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.Model]

next_page_token

A token, which can be sent as page_token to retrieve the next page.

If this field is omitted, there are no more pages.

Type

str

class google.ai.generativelanguage_v1beta3.types.ListPermissionsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing permissions.

parent

Required. The parent resource of the permissions. Format: tunedModels/{tuned_model}

Type

str

page_size

Optional. The maximum number of Permissions to return (per page). The service may return fewer permissions.

If unspecified, at most 10 permissions will be returned. This method returns at most 1000 permissions per page, even if you pass a larger page_size.

Type

int

page_token

Optional. A page token, received from a previous ListPermissions call.

Provide the page_token returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListPermissions must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1beta3.types.ListPermissionsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListPermissions containing a paginated list of permissions.

permissions

Returned permissions.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.Permission]

next_page_token

A token, which can be sent as page_token to retrieve the next page.

If this field is omitted, there are no more pages.

Type

str

class google.ai.generativelanguage_v1beta3.types.ListTunedModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing TunedModels.

page_size

Optional. The maximum number of TunedModels to return (per page). The service may return fewer tuned models.

If unspecified, at most 10 tuned models will be returned. This method returns at most 1000 models per page, even if you pass a larger page_size.

Type

int

page_token

Optional. A page token, received from a previous ListTunedModels call.

Provide the page_token returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListTunedModels must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1beta3.types.ListTunedModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListTunedModels containing a paginated list of Models.

tuned_models

The returned Models.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.TunedModel]

next_page_token

A token, which can be sent as page_token to retrieve the next page.

If this field is omitted, there are no more pages.

Type

str

class google.ai.generativelanguage_v1beta3.types.Message(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The base unit of structured text.

A Message includes an author and the content of the Message.

The author is used to tag messages when they are fed to the model as text.

author

Optional. The author of this Message.

This serves as a key for tagging the content of this Message when it is fed to the model as text.

The author can be any alphanumeric string.

Type

str

content

Required. The text content of the structured Message.

Type

str

citation_metadata

Output only. Citation information for model-generated content in this Message.

If this Message was generated as output from the model, this field may be populated with attribution information for any text included in the content. This field is used only on output.

This field is a member of oneof _citation_metadata.

Type

google.ai.generativelanguage_v1beta3.types.CitationMetadata

class google.ai.generativelanguage_v1beta3.types.MessagePrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

All of the structured input text passed to the model as a prompt.

A MessagePrompt contains a structured set of fields that provide context for the conversation, examples of user input/model output message pairs that prime the model to respond in different ways, and the conversation history or list of messages representing the alternating turns of the conversation between the user and the model.

context

Optional. Text that should be provided to the model first to ground the response.

If not empty, this context will be given to the model first before the examples and messages. When using a context be sure to provide it with every request to maintain continuity.

This field can be a description of your prompt to the model to help provide context and guide the responses. Examples: “Translate the phrase from English to French.” or “Given a statement, classify the sentiment as happy, sad or neutral.”

Anything included in this field will take precedence over message history if the total input size exceeds the model’s input_token_limit and the input request is truncated.

Type

str

examples

Optional. Examples of what the model should generate.

This includes both user input and the response that the model should emulate.

These examples are treated identically to conversation messages except that they take precedence over the history in messages: If the total input size exceeds the model’s input_token_limit the input will be truncated. Items will be dropped from messages before examples.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.Example]

messages

Required. A snapshot of the recent conversation history sorted chronologically.

Turns alternate between two authors.

If the total input size exceeds the model’s input_token_limit the input will be truncated: The oldest items will be dropped from messages.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.Message]
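
A sketch showing how context, examples, and messages fit together (the contents are placeholders):

    from google.ai import generativelanguage_v1beta3 as glm

    prompt = glm.MessagePrompt(
        context="Translate the phrase from English to French.",
        examples=[
            glm.Example(
                input=glm.Message(content="Good morning"),
                output=glm.Message(content="Bonjour"),
            )
        ],
        messages=[glm.Message(author="user", content="Thank you")],
    )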

class google.ai.generativelanguage_v1beta3.types.Model(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Information about a Generative Language Model.

name

Required. The resource name of the Model.

Format: models/{model} with a {model} naming convention of:

  • “{base_model_id}-{version}”

Examples:

  • models/chat-bison-001

Type

str

base_model_id

Required. The name of the base model; pass this to the generation request.

Examples:

  • chat-bison

Type

str

version

Required. The version number of the model.

This represents the major version.

Type

str

display_name

The human-readable name of the model. E.g. “Chat Bison”. The name can be up to 128 characters long and can consist of any UTF-8 characters.

Type

str

description

A short description of the model.

Type

str

input_token_limit

Maximum number of input tokens allowed for this model.

Type

int

output_token_limit

Maximum number of output tokens available for this model.

Type

int

supported_generation_methods

The model’s supported generation methods.

The method names are defined as camel-case strings, such as generateMessage, which correspond to API methods.

Type

MutableSequence[str]

temperature

Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model. This value specifies the default used by the backend when calling the model.

This field is a member of oneof _temperature.

Type

float

top_p

For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p. This value specifies the default used by the backend when calling the model.

This field is a member of oneof _top_p.

Type

float

top_k

For Top-k sampling.

Top-k sampling considers the set of top_k most probable tokens. This value specifies the default used by the backend when calling the model.

This field is a member of oneof _top_k.

Type

int

class google.ai.generativelanguage_v1beta3.types.Permission(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The Permission resource grants a user, group, or the rest of the world access to the PaLM API resource (e.g. a tuned model or file).

A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant permissions that the role contains.

There are three concentric roles. Each role is a superset of the previous role’s permitted operations:

  • reader can use the resource (e.g. tuned model) for inference

  • writer has reader’s permissions and additionally can edit and share

  • owner has writer’s permissions and additionally can delete

name

Output only. The permission name. A unique name will be generated on create. Example: tunedModels/{tuned_model}/permissions/{permission}

Type

str

grantee_type

Required. Immutable. The type of the grantee.

This field is a member of oneof _grantee_type.

Type

google.ai.generativelanguage_v1beta3.types.Permission.GranteeType

email_address

Optional. Immutable. The email address of the user or group to which this permission refers. This field is not set when the permission’s grantee type is EVERYONE.

This field is a member of oneof _email_address.

Type

str

role

Required. The role granted by this permission.

This field is a member of oneof _role.

Type

google.ai.generativelanguage_v1beta3.types.Permission.Role

class GranteeType(value)[source]

Bases: proto.enums.Enum

Defines types of the grantee of this permission.

Values:
GRANTEE_TYPE_UNSPECIFIED (0):

The default value. This value is unused.

USER (1):

Represents a user. When set, you must provide email_address for the user.

GROUP (2):

Represents a group. When set, you must provide email_address for the group.

EVERYONE (3):

Represents access to everyone. No extra information is required.

class Role(value)[source]

Bases: proto.enums.Enum

Defines the role granted by this permission.

Values:
ROLE_UNSPECIFIED (0):

The default value. This value is unused.

OWNER (1):

Owner can use, update, share and delete the resource.

WRITER (2):

Writer can use, update and share the resource.

READER (3):

Reader can use the resource.

class google.ai.generativelanguage_v1beta3.types.SafetyFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Safety feedback for an entire request.

This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.

rating

Safety rating evaluated from content.

Type

google.ai.generativelanguage_v1beta3.types.SafetyRating

setting

Safety settings applied to the request.

Type

google.ai.generativelanguage_v1beta3.types.SafetySetting

class google.ai.generativelanguage_v1beta3.types.SafetyRating(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Safety rating for a piece of content.

The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.

category

Required. The category for this rating.

Type

google.ai.generativelanguage_v1beta3.types.HarmCategory

probability

Required. The probability of harm for this content.

Type

google.ai.generativelanguage_v1beta3.types.SafetyRating.HarmProbability

class HarmProbability(value)[source]

Bases: proto.enums.Enum

The probability that a piece of content is harmful.

The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.

Values:
HARM_PROBABILITY_UNSPECIFIED (0):

Probability is unspecified.

NEGLIGIBLE (1):

Content has a negligible chance of being unsafe.

LOW (2):

Content has a low chance of being unsafe.

MEDIUM (3):

Content has a medium chance of being unsafe.

HIGH (4):

Content has a high chance of being unsafe.

class google.ai.generativelanguage_v1beta3.types.SafetySetting(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Safety setting, affecting the safety-blocking behavior.

Passing a safety setting for a category changes the allowed probability that content is blocked.

category

Required. The category for this setting.

Type

google.ai.generativelanguage_v1beta3.types.HarmCategory

threshold

Required. Controls the probability threshold at which harm is blocked.

Type

google.ai.generativelanguage_v1beta3.types.SafetySetting.HarmBlockThreshold

class HarmBlockThreshold(value)[source]

Bases: proto.enums.Enum

Block at and beyond a specified harm probability.

Values:
HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):

Threshold is unspecified.

BLOCK_LOW_AND_ABOVE (1):

Content with NEGLIGIBLE will be allowed.

BLOCK_MEDIUM_AND_ABOVE (2):

Content with NEGLIGIBLE and LOW will be allowed.

BLOCK_ONLY_HIGH (3):

Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed.

BLOCK_NONE (4):

All content will be allowed.

class google.ai.generativelanguage_v1beta3.types.TextCompletion(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Output text returned from a model.

output

Output only. The generated text returned from the model.

Type

str

safety_ratings

Ratings for the safety of a response.

There is at most one rating per category.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.SafetyRating]

citation_metadata

Output only. Citation information for model-generated output in this TextCompletion.

This field may be populated with attribution information for any text included in the output.

This field is a member of oneof _citation_metadata.

Type

google.ai.generativelanguage_v1beta3.types.CitationMetadata

class google.ai.generativelanguage_v1beta3.types.TextPrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Text given to the model as a prompt.

The Model will use this TextPrompt to generate a text completion.

text

Required. The prompt text.

Type

str

class google.ai.generativelanguage_v1beta3.types.TransferOwnershipRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to transfer the ownership of the tuned model.

name

Required. The resource name of the tuned model whose ownership will be transferred.

Format: tunedModels/my-model-id

Type

str

email_address

Required. The email address of the user to whom the tuned model is being transferred.

Type

str
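
A sketch, assuming transfer_ownership is exposed on the generated PermissionServiceClient (name and email are placeholders):

    from google.ai import generativelanguage_v1beta3 as glm

    client = glm.PermissionServiceClient()  # assumes default credentials

    client.transfer_ownership(
        request=glm.TransferOwnershipRequest(
            name="tunedModels/my-model-id",
            email_address="new-owner@example.com",
        )
    )  # returns an empty TransferOwnershipResponse on success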

class google.ai.generativelanguage_v1beta3.types.TransferOwnershipResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from TransferOwnership.

class google.ai.generativelanguage_v1beta3.types.TunedModel(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A fine-tuned model created using ModelService.CreateTunedModel.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

tuned_model_source

Optional. TunedModel to use as the starting point for training the new model.

This field is a member of oneof source_model.

Type

google.ai.generativelanguage_v1beta3.types.TunedModelSource

base_model

Immutable. The name of the Model to tune. Example: models/text-bison-001

This field is a member of oneof source_model.

Type

str

name

Output only. The tuned model name. A unique name will be generated on create. Example: tunedModels/az2mb0bpw6i. If display_name is set on create, the id portion of the name will be set by concatenating the words of the display_name with hyphens and adding a random portion for uniqueness. Example: display_name = “Sentence Translator”, name = “tunedModels/sentence-translator-u3b7m”.

Type

str

display_name

Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.

Type

str

description

Optional. A short description of this model.

Type

str

temperature

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.

This value defaults to the one used by the base model when creating this model.

This field is a member of oneof _temperature.

Type

float

top_p

Optional. For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p.

This value defaults to the one used by the base model when creating this model.

This field is a member of oneof _top_p.

Type

float

top_k

Optional. For Top-k sampling.

Top-k sampling considers the set of top_k most probable tokens.

This value defaults to the one used by the base model when creating this model.

This field is a member of oneof _top_k.

Type

int

state

Output only. The state of the tuned model.

Type

google.ai.generativelanguage_v1beta3.types.TunedModel.State

create_time

Output only. The timestamp when this model was created.

Type

google.protobuf.timestamp_pb2.Timestamp

update_time

Output only. The timestamp when this model was updated.

Type

google.protobuf.timestamp_pb2.Timestamp

tuning_task

Required. The tuning task that creates the tuned model.

Type

google.ai.generativelanguage_v1beta3.types.TuningTask

class State(value)[source]

Bases: proto.enums.Enum

The state of the tuned model.

Values:
STATE_UNSPECIFIED (0):

The default value. This value is unused.

CREATING (1):

The model is being created.

ACTIVE (2):

The model is ready to be used.

FAILED (3):

The model failed to be created.

class google.ai.generativelanguage_v1beta3.types.TunedModelSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Tuned model as a source for training a new model.

tuned_model

Immutable. The name of the TunedModel to use as the starting point for training the new model. Example: tunedModels/my-tuned-model

Type

str

base_model

Output only. The name of the base Model this TunedModel was tuned from. Example: models/text-bison-001

Type

str

class google.ai.generativelanguage_v1beta3.types.TuningExample(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A single example for tuning.

text_input

Optional. Text model input.

This field is a member of oneof model_input.

Type

str

output

Required. The expected model output.

Type

str

class google.ai.generativelanguage_v1beta3.types.TuningExamples(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A set of tuning examples. Can be training or validation data.

examples

Required. The examples. Example input can be for text or discuss, but all examples in a set must be of the same type.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.TuningExample]

class google.ai.generativelanguage_v1beta3.types.TuningSnapshot(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Record for a single tuning step.

step

Output only. The tuning step.

Type

int

epoch

Output only. The epoch this step was part of.

Type

int

mean_loss

Output only. The mean loss of the training examples for this step.

Type

float

compute_time

Output only. The timestamp when this metric was computed.

Type

google.protobuf.timestamp_pb2.Timestamp

class google.ai.generativelanguage_v1beta3.types.TuningTask(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Tuning tasks that create tuned models.

start_time

Output only. The timestamp when tuning this model started.

Type

google.protobuf.timestamp_pb2.Timestamp

complete_time

Output only. The timestamp when tuning this model completed.

Type

google.protobuf.timestamp_pb2.Timestamp

snapshots

Output only. Metrics collected during tuning.

Type

MutableSequence[google.ai.generativelanguage_v1beta3.types.TuningSnapshot]

training_data

Required. Input only. Immutable. The model training data.

Type

google.ai.generativelanguage_v1beta3.types.Dataset

hyperparameters

Immutable. Hyperparameters controlling the tuning process. If not provided, default values will be used.

Type

google.ai.generativelanguage_v1beta3.types.Hyperparameters

class google.ai.generativelanguage_v1beta3.types.UpdatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to update the Permission.

permission

Required. The permission to update.

The permission’s name field is used to identify the permission to update.

Type

google.ai.generativelanguage_v1beta3.types.Permission

update_mask

Required. The list of fields to update. Accepted ones:

  • role (Permission.role field)

Type

google.protobuf.field_mask_pb2.FieldMask
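
A sketch of updating a permission’s role with a field mask, assuming the generated PermissionServiceClient; the resource name is a placeholder:

    from google.ai import generativelanguage_v1beta3 as glm
    from google.protobuf import field_mask_pb2

    client = glm.PermissionServiceClient()  # assumes default credentials

    updated = client.update_permission(
        request=glm.UpdatePermissionRequest(
            permission=glm.Permission(
                name="tunedModels/my-model-id/permissions/my-permission-id",
                role=glm.Permission.Role.WRITER,
            ),
            update_mask=field_mask_pb2.FieldMask(paths=["role"]),  # only "role" is accepted
        )
    )
    print(updated.role)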

class google.ai.generativelanguage_v1beta3.types.UpdateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to update a TunedModel.

tuned_model

Required. The tuned model to update.

Type

google.ai.generativelanguage_v1beta3.types.TunedModel

update_mask

Required. The list of fields to update.

Type

google.protobuf.field_mask_pb2.FieldMask