As of January 1, 2020 this library no longer supports Python 2 on the latest released version. Library versions released prior to that date will continue to be available. For more information please visit Python 2 support on Google Cloud.

Types for Google AI Generative Language v1beta API

class google.ai.generativelanguage_v1beta.types.AttributionSourceId(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Identifier for the source contributing to this attribution.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

grounding_passage

Identifier for an inline passage.

This field is a member of oneof source.

Type

google.ai.generativelanguage_v1beta.types.AttributionSourceId.GroundingPassageId

semantic_retriever_chunk

Identifier for a Chunk fetched via Semantic Retriever.

This field is a member of oneof source.

Type

google.ai.generativelanguage_v1beta.types.AttributionSourceId.SemanticRetrieverChunk

class GroundingPassageId(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Identifier for a part within a GroundingPassage.

passage_id

Output only. ID of the passage matching the GenerateAnswerRequest’s GroundingPassage.id.

Type

str

part_index

Output only. Index of the part within the GenerateAnswerRequest’s GroundingPassage.content.

Type

int

class SemanticRetrieverChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Identifier for a Chunk retrieved via Semantic Retriever specified in the GenerateAnswerRequest using SemanticRetrieverConfig.

source

Output only. Name of the source matching the request’s SemanticRetrieverConfig.source. Example: corpora/123 or corpora/123/documents/abc

Type

str

chunk

Output only. Name of the Chunk containing the attributed text. Example: corpora/123/documents/abc/chunks/xyz

Type

str
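
The `source` oneof above means an AttributionSourceId carries either a grounding-passage identifier or a semantic-retriever chunk identifier, never both. A minimal sketch using JSON-style dicts (field names mirror the proto; the ID values are made-up examples):

```python
# Two mutually exclusive shapes an AttributionSourceId can take.
passage_source = {
    "grounding_passage": {"passage_id": "p1", "part_index": 0}
}

chunk_source = {
    "semantic_retriever_chunk": {
        "source": "corpora/123",
        "chunk": "corpora/123/documents/abc/chunks/xyz",
    }
}

def active_member(source_id: dict) -> str:
    """Return which oneof member is set; at most one may be populated."""
    members = [k for k in ("grounding_passage", "semantic_retriever_chunk")
               if k in source_id]
    assert len(members) <= 1, "oneof fields are mutually exclusive"
    return members[0] if members else ""
```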

class google.ai.generativelanguage_v1beta.types.BatchCreateChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to batch create Chunks.

parent

Optional. The name of the Document where this batch of Chunks will be created. The parent field in every CreateChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc

Type

str

requests

Required. The request messages specifying the Chunks to create. A maximum of 100 Chunks can be created in a batch.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.CreateChunkRequest]
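
The two constraints above (matching `parent` fields, at most 100 Chunks per batch) can be checked client-side before sending. An illustrative validator over JSON-style dicts; the resource names are examples:

```python
MAX_CHUNKS_PER_BATCH = 100

def validate_batch_create(batch: dict) -> None:
    """Check the documented BatchCreateChunksRequest constraints."""
    requests = batch.get("requests", [])
    if len(requests) > MAX_CHUNKS_PER_BATCH:
        raise ValueError("a batch may create at most 100 Chunks")
    for req in requests:
        # Every CreateChunkRequest.parent must match the batch parent.
        if req.get("parent") != batch.get("parent"):
            raise ValueError(
                "CreateChunkRequest.parent must match "
                "BatchCreateChunksRequest.parent")

batch = {
    "parent": "corpora/my-corpus-123/documents/the-doc-abc",
    "requests": [
        {"parent": "corpora/my-corpus-123/documents/the-doc-abc",
         "chunk": {"data": {"string_value": "Some passage text."}}},
    ],
}
```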

class google.ai.generativelanguage_v1beta.types.BatchCreateChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from BatchCreateChunks containing a list of created Chunks.

chunks

Chunks created.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]

class google.ai.generativelanguage_v1beta.types.BatchDeleteChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to batch delete Chunks.

parent

Optional. The name of the Document containing the Chunks to delete. The parent field in every DeleteChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc

Type

str

requests

Required. The request messages specifying the Chunks to delete.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.DeleteChunkRequest]

class google.ai.generativelanguage_v1beta.types.BatchEmbedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Batch request to get embeddings from the model for a list of prompts.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

requests

Required. Embed requests for the batch. The model in each of these requests must match the model specified in BatchEmbedContentsRequest.model.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.EmbedContentRequest]
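
Because every inner EmbedContentRequest must name the same model as the batch, it is convenient to build the whole request from one model name. A sketch using JSON-style dicts; the model name is only an example:

```python
def build_batch_embed(model: str, texts: list[str]) -> dict:
    """Assemble a BatchEmbedContentsRequest-shaped dict from plain texts."""
    return {
        "model": model,
        "requests": [
            # Each inner request repeats the batch-level model name.
            {"model": model, "content": {"parts": [{"text": t}]}}
            for t in texts
        ],
    }

req = build_batch_embed("models/example-embedding-model", ["hello", "world"])
```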

class google.ai.generativelanguage_v1beta.types.BatchEmbedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response to a BatchEmbedContentsRequest.

embeddings

Output only. The embeddings for each request, in the same order as provided in the batch request.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.ContentEmbedding]

class google.ai.generativelanguage_v1beta.types.BatchEmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Batch request to get a text embedding from the model.

model

Required. The name of the Model to use for generating the embedding. Examples: models/embedding-gecko-001

Type

str

texts

Optional. The free-form input texts that the model will turn into an embedding. The current limit is 100 texts; exceeding it will return an error.

Type

MutableSequence[str]

requests

Optional. Embed requests for the batch. Only one of texts or requests can be set.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.EmbedTextRequest]
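
The `texts` and `requests` fields are alternatives, and at most 100 texts are accepted per batch; both rules can be checked before sending. An illustrative validator over a JSON-style dict:

```python
MAX_TEXTS_PER_BATCH = 100

def validate_batch_embed_text(body: dict) -> None:
    """Check the documented BatchEmbedTextRequest constraints."""
    # Only one of texts or requests may be set.
    if body.get("texts") and body.get("requests"):
        raise ValueError("set only one of texts or requests")
    if len(body.get("texts", [])) > MAX_TEXTS_PER_BATCH:
        raise ValueError("at most 100 texts per batch")
```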

class google.ai.generativelanguage_v1beta.types.BatchEmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response to a BatchEmbedTextRequest.

embeddings

Output only. The embeddings generated from the input text.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Embedding]

class google.ai.generativelanguage_v1beta.types.BatchUpdateChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to batch update Chunks.

parent

Optional. The name of the Document containing the Chunks to update. The parent field in every UpdateChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc

Type

str

requests

Required. The request messages specifying the Chunks to update. A maximum of 100 Chunks can be updated in a batch.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.UpdateChunkRequest]

class google.ai.generativelanguage_v1beta.types.BatchUpdateChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from BatchUpdateChunks containing a list of updated Chunks.

chunks

Chunks updated.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]

class google.ai.generativelanguage_v1beta.types.Blob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Raw media bytes.

Text should not be sent as raw bytes, use the ‘text’ field.

mime_type

The IANA standard MIME type of the source data. Examples:

  • image/png

  • image/jpeg

If an unsupported MIME type is provided, an error will be returned. For a complete list of supported types, see Supported file formats.

Type

str

data

Raw bytes for media formats.

Type

bytes
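
When a Blob travels as JSON, its `data` bytes are base64-encoded per the standard protobuf JSON mapping. A minimal sketch, using the PNG magic bytes as stand-in image data:

```python
import base64

png_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for real image bytes

blob = {
    "mime_type": "image/png",
    # bytes fields are base64 strings in the JSON representation
    "data": base64.b64encode(png_bytes).decode("ascii"),
}
```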

class google.ai.generativelanguage_v1beta.types.CachedContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Content that has been preprocessed and can be used in subsequent requests to GenerativeService.

Cached content can only be used with the model it was created for.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

expire_time

Timestamp in UTC of when this resource is considered expired. This is always provided on output, regardless of what was sent on input.

This field is a member of oneof expiration.

Type

google.protobuf.timestamp_pb2.Timestamp

ttl

Input only. New TTL for this resource.

This field is a member of oneof expiration.

Type

google.protobuf.duration_pb2.Duration

name

Optional. Identifier. The resource name referring to the cached content. Format: cachedContents/{id}

This field is a member of oneof _name.

Type

str

display_name

Optional. Immutable. The user-generated meaningful display name of the cached content. Maximum 128 Unicode characters.

This field is a member of oneof _display_name.

Type

str

model

Required. Immutable. The name of the Model to use for cached content. Format: models/{model}

This field is a member of oneof _model.

Type

str

system_instruction

Optional. Input only. Immutable. Developer-set system instruction. Currently text only.

This field is a member of oneof _system_instruction.

Type

google.ai.generativelanguage_v1beta.types.Content

contents

Optional. Input only. Immutable. The content to cache.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Content]

tools

Optional. Input only. Immutable. A list of Tools the model may use to generate the next response.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Tool]

tool_config

Optional. Input only. Immutable. Tool config. This config is shared for all tools.

This field is a member of oneof _tool_config.

Type

google.ai.generativelanguage_v1beta.types.ToolConfig

create_time

Output only. Creation time of the cache entry.

Type

google.protobuf.timestamp_pb2.Timestamp

update_time

Output only. When the cache entry was last updated in UTC time.

Type

google.protobuf.timestamp_pb2.Timestamp

usage_metadata

Output only. Metadata on the usage of the cached content.

Type

google.ai.generativelanguage_v1beta.types.CachedContent.UsageMetadata

class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata on the usage of the cached content.

total_token_count

Total number of tokens that the cached content consumes.

Type

int
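
The `expiration` oneof above means a CachedContent sets either `expire_time` or `ttl`, never both. A sketch of a create payload using the `ttl` member, as a JSON-style dict (the model and text are examples; durations use the "<seconds>s" string form of the protobuf JSON mapping):

```python
cached = {
    "model": "models/example-model",       # example name
    "display_name": "shared-context",
    "contents": [
        {"role": "user", "parts": [{"text": "A long shared document..."}]}
    ],
    # ttl and expire_time are mutually exclusive members of `expiration`;
    # setting one would clear the other on a proto message.
    "ttl": "3600s",
}
```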

class google.ai.generativelanguage_v1beta.types.Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response candidate generated from the model.

index

Output only. Index of the candidate in the list of response candidates.

This field is a member of oneof _index.

Type

int

content

Output only. Generated content returned from the model.

Type

google.ai.generativelanguage_v1beta.types.Content

finish_reason

Optional. Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.

Type

google.ai.generativelanguage_v1beta.types.Candidate.FinishReason

safety_ratings

List of ratings for the safety of a response candidate. There is at most one rating per category.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]

citation_metadata

Output only. Citation information for the model-generated candidate.

This field may be populated with recitation information for any text included in the content. These are passages that are “recited” from copyrighted material in the foundational LLM’s training data.

Type

google.ai.generativelanguage_v1beta.types.CitationMetadata

token_count

Output only. Token count for this candidate.

Type

int

grounding_attributions

Output only. Attribution information for sources that contributed to a grounded answer.

This field is populated for GenerateAnswer calls.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingAttribution]

grounding_metadata

Output only. Grounding metadata for the candidate.

This field is populated for GenerateContent calls.

Type

google.ai.generativelanguage_v1beta.types.GroundingMetadata

avg_logprobs

Output only. Average log probability score of the candidate.

Type

float

logprobs_result

Output only. Log-likelihood scores for the response tokens and top tokens.

Type

google.ai.generativelanguage_v1beta.types.LogprobsResult

class FinishReason(value)[source]

Bases: proto.enums.Enum

Defines the reason why the model stopped generating tokens.

Values:
FINISH_REASON_UNSPECIFIED (0):

Default value. This value is unused.

STOP (1):

Natural stop point of the model or provided stop sequence.

MAX_TOKENS (2):

The maximum number of tokens as specified in the request was reached.

SAFETY (3):

The response candidate content was flagged for safety reasons.

RECITATION (4):

The response candidate content was flagged for recitation reasons.

LANGUAGE (6):

The response candidate content was flagged for using an unsupported language.

OTHER (5):

Unknown reason.

BLOCKLIST (7):

Token generation stopped because the content contains forbidden terms.

PROHIBITED_CONTENT (8):

Token generation stopped for potentially containing prohibited content.

SPII (9):

Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).

MALFORMED_FUNCTION_CALL (10):

The function call generated by the model is invalid.

class google.ai.generativelanguage_v1beta.types.Chunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Chunk is a subpart of a Document that is treated as an independent unit for the purposes of vector representation and storage. A Corpus can have a maximum of 1 million Chunks.

name

Immutable. Identifier. The Chunk resource name. The ID (name excluding the corpora/*/documents/*/chunks/ prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a random 12-character unique ID will be generated. Example: corpora/{corpus_id}/documents/{document_id}/chunks/123a456b789c

Type

str

data

Required. The content for the Chunk, such as the text string. The maximum number of tokens per chunk is 2043.

Type

google.ai.generativelanguage_v1beta.types.ChunkData

custom_metadata

Optional. User provided custom metadata stored as key-value pairs. The maximum number of CustomMetadata per chunk is 20.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.CustomMetadata]

create_time

Output only. The Timestamp of when the Chunk was created.

Type

google.protobuf.timestamp_pb2.Timestamp

update_time

Output only. The Timestamp of when the Chunk was last updated.

Type

google.protobuf.timestamp_pb2.Timestamp

state

Output only. Current state of the Chunk.

Type

google.ai.generativelanguage_v1beta.types.Chunk.State

class State(value)[source]

Bases: proto.enums.Enum

States for the lifecycle of a Chunk.

Values:
STATE_UNSPECIFIED (0):

The default value. This value is used if the state is omitted.

STATE_PENDING_PROCESSING (1):

Chunk is being processed (embedding and vector storage).

STATE_ACTIVE (2):

Chunk is processed and available for querying.

STATE_FAILED (10):

Chunk failed processing.
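
The Chunk limits above (required `data`, at most 20 CustomMetadata entries) can be checked before a create or update call. An illustrative validator over a JSON-style dict:

```python
MAX_CUSTOM_METADATA_PER_CHUNK = 20

def validate_chunk(chunk: dict) -> None:
    """Check the documented Chunk constraints client-side."""
    if not chunk.get("data", {}).get("string_value"):
        raise ValueError("chunk.data is required")
    if len(chunk.get("custom_metadata", [])) > MAX_CUSTOM_METADATA_PER_CHUNK:
        raise ValueError("at most 20 CustomMetadata entries per Chunk")

chunk = {
    "data": {"string_value": "The quick brown fox."},
    "custom_metadata": [{"key": "author", "string_value": "a. writer"}],
}
```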

class google.ai.generativelanguage_v1beta.types.ChunkData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Extracted data that represents the Chunk content.

string_value

The Chunk content as a string. The maximum number of tokens per chunk is 2043.

This field is a member of oneof data.

Type

str

class google.ai.generativelanguage_v1beta.types.CitationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A collection of source attributions for a piece of content.

citation_sources

Citations to sources for a specific response.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.CitationSource]

class google.ai.generativelanguage_v1beta.types.CitationSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A citation to a source for a portion of a specific response.

start_index

Optional. Start of segment of the response that is attributed to this source.

Index indicates the start of the segment, measured in bytes.

This field is a member of oneof _start_index.

Type

int

end_index

Optional. End of the attributed segment, exclusive.

This field is a member of oneof _end_index.

Type

int

uri

Optional. URI that is attributed as a source for a portion of the text.

This field is a member of oneof _uri.

Type

str

license_

Optional. License for the GitHub project that is attributed as a source for the segment.

License info is required for code citations.

This field is a member of oneof _license.

Type

str
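
Since `start_index` and `end_index` are byte offsets (end exclusive), recovering the attributed segment means slicing the UTF-8 encoding of the response text, not the Python string; multi-byte characters make the two differ. A small sketch:

```python
def attributed_segment(text: str, start_index: int, end_index: int) -> str:
    """Slice the cited segment out of `text` using byte offsets."""
    return text.encode("utf-8")[start_index:end_index].decode("utf-8")
```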

class google.ai.generativelanguage_v1beta.types.CodeExecution(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Tool that executes code generated by the model, and automatically returns the result to the model.

See also ExecutableCode and CodeExecutionResult which are only generated when using this tool.

class google.ai.generativelanguage_v1beta.types.CodeExecutionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Result of executing the ExecutableCode.

Only generated when using the CodeExecution, and always follows a part containing the ExecutableCode.

outcome

Required. Outcome of the code execution.

Type

google.ai.generativelanguage_v1beta.types.CodeExecutionResult.Outcome

output

Optional. Contains stdout when code execution is successful, stderr or other description otherwise.

Type

str

class Outcome(value)[source]

Bases: proto.enums.Enum

Enumeration of possible outcomes of the code execution.

Values:
OUTCOME_UNSPECIFIED (0):

Unspecified status. This value should not be used.

OUTCOME_OK (1):

Code execution completed successfully.

OUTCOME_FAILED (2):

Code execution finished but with a failure. stderr should contain the reason.

OUTCOME_DEADLINE_EXCEEDED (3):

Code execution ran for too long, and was cancelled. There may or may not be a partial output present.

class google.ai.generativelanguage_v1beta.types.Condition(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Filter condition applicable to a single key.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

string_value

The string value to filter the metadata on.

This field is a member of oneof value.

Type

str

numeric_value

The numeric value to filter the metadata on.

This field is a member of oneof value.

Type

float

operation

Required. Operator applied to the given key-value pair to trigger the condition.

Type

google.ai.generativelanguage_v1beta.types.Condition.Operator

class Operator(value)[source]

Bases: proto.enums.Enum

Defines the valid operators that can be applied to a key-value pair.

Values:
OPERATOR_UNSPECIFIED (0):

The default value. This value is unused.

LESS (1):

Supported by numeric.

LESS_EQUAL (2):

Supported by numeric.

EQUAL (3):

Supported by numeric & string.

GREATER_EQUAL (4):

Supported by numeric.

GREATER (5):

Supported by numeric.

NOT_EQUAL (6):

Supported by numeric & string.

INCLUDES (7):

Supported by string only when CustomMetadata value type for the given key has a string_list_value.

EXCLUDES (8):

Supported by string only when CustomMetadata value type for the given key has a string_list_value.
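
The operator table above maps onto a small evaluation routine: comparison operators apply to numeric values, EQUAL/NOT_EQUAL also apply to strings, and INCLUDES/EXCLUDES apply when the stored metadata value is a string list. An illustrative sketch over JSON-style dicts:

```python
def evaluate(condition: dict, value) -> bool:
    """Evaluate a Condition-shaped dict against one stored metadata value."""
    op = condition["operation"]
    if "numeric_value" in condition:
        target = condition["numeric_value"]
        return {
            "LESS": value < target,
            "LESS_EQUAL": value <= target,
            "EQUAL": value == target,
            "GREATER_EQUAL": value >= target,
            "GREATER": value > target,
            "NOT_EQUAL": value != target,
        }[op]
    target = condition["string_value"]
    if op == "EQUAL":
        return value == target
    if op == "NOT_EQUAL":
        return value != target
    # INCLUDES / EXCLUDES apply when the stored value is a string list.
    if op == "INCLUDES":
        return target in value
    if op == "EXCLUDES":
        return target not in value
    raise ValueError(f"operator {op} is not supported for strings")
```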

class google.ai.generativelanguage_v1beta.types.Content(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The base structured datatype containing multi-part content of a message.

A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.

parts

Ordered Parts that constitute a single message. Parts may have different MIME types.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Part]

role

Optional. The producer of the content. Must be either ‘user’ or ‘model’. Useful to set for multi-turn conversations, otherwise can be left blank or unset.

Type

str

class google.ai.generativelanguage_v1beta.types.ContentEmbedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A list of floats representing an embedding.

values

The embedding values.

Type

MutableSequence[float]
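
Since `values` is a plain float vector, a common way to compare two ContentEmbedding results (for example, in a retrieval ranking) is cosine similarity. A minimal, self-contained sketch:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```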

class google.ai.generativelanguage_v1beta.types.ContentFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.

reason

The reason content was blocked during request processing.

Type

google.ai.generativelanguage_v1beta.types.ContentFilter.BlockedReason

message

A string that describes the filtering behavior in more detail.

This field is a member of oneof _message.

Type

str

class BlockedReason(value)[source]

Bases: proto.enums.Enum

A list of reasons why content may have been blocked.

Values:
BLOCKED_REASON_UNSPECIFIED (0):

A blocked reason was not specified.

SAFETY (1):

Content was blocked by safety settings.

OTHER (2):

Content was blocked, but the reason is uncategorized.

class google.ai.generativelanguage_v1beta.types.Corpus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Corpus is a collection of Documents. A project can create up to 5 corpora.

name

Immutable. Identifier. The Corpus resource name. The ID (name excluding the “corpora/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be derived from display_name along with a 12-character random suffix. Example: corpora/my-awesome-corpora-123a456b789c

Type

str

display_name

Optional. The human-readable display name for the Corpus. The display name must be no more than 512 characters in length, including spaces. Example: “Docs on Semantic Retriever”.

Type

str

create_time

Output only. The Timestamp of when the Corpus was created.

Type

google.protobuf.timestamp_pb2.Timestamp

update_time

Output only. The Timestamp of when the Corpus was last updated.

Type

google.protobuf.timestamp_pb2.Timestamp

class google.ai.generativelanguage_v1beta.types.CountMessageTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so each model may return a different token_count.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

prompt

Required. The prompt, whose token count is to be returned.

Type

google.ai.generativelanguage_v1beta.types.MessagePrompt

class google.ai.generativelanguage_v1beta.types.CountMessageTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response from CountMessageTokens.

It returns the model’s token_count for the prompt.

token_count

The number of tokens that the model tokenizes the prompt into.

Always non-negative.

Type

int

class google.ai.generativelanguage_v1beta.types.CountTextTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so each model may return a different token_count.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

prompt

Required. The free-form input text given to the model as a prompt.

Type

google.ai.generativelanguage_v1beta.types.TextPrompt

class google.ai.generativelanguage_v1beta.types.CountTextTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response from CountTextTokens.

It returns the model’s token_count for the prompt.

token_count

The number of tokens that the model tokenizes the prompt into.

Always non-negative.

Type

int

class google.ai.generativelanguage_v1beta.types.CountTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so each model may return a different token_count.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

contents

Optional. The input given to the model as a prompt. This field is ignored when generate_content_request is set.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Content]

generate_content_request

Optional. The overall input given to the Model. This includes the prompt as well as other model-steering information like system instructions and/or function declarations for function calling. model + contents and generate_content_request are mutually exclusive: you can send either model with contents or a generate_content_request, but never both.

Type

google.ai.generativelanguage_v1beta.types.GenerateContentRequest

class google.ai.generativelanguage_v1beta.types.CountTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response from CountTokens.

It returns the model’s token_count for the prompt.

total_tokens

The number of tokens that the Model tokenizes the prompt into. Always non-negative.

Type

int

cached_content_token_count

Number of tokens in the cached part of the prompt (the cached content).

Type

int

class google.ai.generativelanguage_v1beta.types.CreateCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to create CachedContent.

cached_content

Required. The cached content to create.

Type

google.ai.generativelanguage_v1beta.types.CachedContent

class google.ai.generativelanguage_v1beta.types.CreateChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to create a Chunk.

parent

Required. The name of the Document where this Chunk will be created. Example: corpora/my-corpus-123/documents/the-doc-abc

Type

str

chunk

Required. The Chunk to create.

Type

google.ai.generativelanguage_v1beta.types.Chunk

class google.ai.generativelanguage_v1beta.types.CreateCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to create a Corpus.

corpus

Required. The Corpus to create.

Type

google.ai.generativelanguage_v1beta.types.Corpus

class google.ai.generativelanguage_v1beta.types.CreateDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to create a Document.

parent

Required. The name of the Corpus where this Document will be created. Example: corpora/my-corpus-123

Type

str

document

Required. The Document to create.

Type

google.ai.generativelanguage_v1beta.types.Document

class google.ai.generativelanguage_v1beta.types.CreateFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for CreateFile.

file

Optional. Metadata for the file to create.

Type

google.ai.generativelanguage_v1beta.types.File

class google.ai.generativelanguage_v1beta.types.CreateFileResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response for CreateFile.

file

Metadata for the created file.

Type

google.ai.generativelanguage_v1beta.types.File

class google.ai.generativelanguage_v1beta.types.CreatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to create a Permission.

parent

Required. The parent resource of the Permission. Formats: tunedModels/{tuned_model} corpora/{corpus}

Type

str

permission

Required. The permission to create.

Type

google.ai.generativelanguage_v1beta.types.Permission

class google.ai.generativelanguage_v1beta.types.CreateTunedModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata about the state and progress of creating a tuned model, returned from the long-running operation.

tuned_model

Name of the tuned model associated with the tuning operation.

Type

str

total_steps

The total number of tuning steps.

Type

int

completed_steps

The number of steps completed.

Type

int

completed_percent

The completed percentage for the tuning operation.

Type

float

snapshots

Metrics collected during tuning.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.TuningSnapshot]

class google.ai.generativelanguage_v1beta.types.CreateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to create a TunedModel.

tuned_model_id

Optional. The unique id for the tuned model if specified. This value may be up to 40 characters; the first character must be a letter, and the last may be a letter or a number. The id must match the regular expression: [a-z]([a-z0-9-]{0,38}[a-z0-9])?.

This field is a member of oneof _tuned_model_id.

Type

str

tuned_model

Required. The tuned model to create.

Type

google.ai.generativelanguage_v1beta.types.TunedModel
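
The documented `tuned_model_id` pattern can be checked client-side with `re.fullmatch` before issuing the create request. A small sketch using the exact regular expression from the field description:

```python
import re

# Pattern quoted verbatim from the tuned_model_id documentation.
TUNED_MODEL_ID = re.compile(r"[a-z]([a-z0-9-]{0,38}[a-z0-9])?")

def is_valid_tuned_model_id(candidate: str) -> bool:
    """True if `candidate` matches the documented tuned_model_id pattern."""
    return TUNED_MODEL_ID.fullmatch(candidate) is not None
```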

class google.ai.generativelanguage_v1beta.types.CustomMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

User provided metadata stored as key-value pairs.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

string_value

The string value of the metadata to store.

This field is a member of oneof value.

Type

str

string_list_value

The StringList value of the metadata to store.

This field is a member of oneof value.

Type

google.ai.generativelanguage_v1beta.types.StringList

numeric_value

The numeric value of the metadata to store.

This field is a member of oneof value.

Type

float

key

Required. The key of the metadata to store.

Type

str

class google.ai.generativelanguage_v1beta.types.Dataset(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Dataset for training or validation.

examples

Optional. Inline examples.

This field is a member of oneof dataset.

Type

google.ai.generativelanguage_v1beta.types.TuningExamples

class google.ai.generativelanguage_v1beta.types.DeleteCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to delete CachedContent.

name

Required. The resource name referring to the content cache entry. Format: cachedContents/{id}

Type

str

class google.ai.generativelanguage_v1beta.types.DeleteChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to delete a Chunk.

name

Required. The resource name of the Chunk to delete. Example: corpora/my-corpus-123/documents/the-doc-abc/chunks/some-chunk

Type

str

class google.ai.generativelanguage_v1beta.types.DeleteCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to delete a Corpus.

name

Required. The resource name of the Corpus. Example: corpora/my-corpus-123

Type

str

force

Optional. If set to true, any Documents and objects related to this Corpus will also be deleted.

If false (the default), a FAILED_PRECONDITION error will be returned if Corpus contains any Documents.

Type

bool

class google.ai.generativelanguage_v1beta.types.DeleteDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to delete a Document.

name

Required. The resource name of the Document to delete. Example: corpora/my-corpus-123/documents/the-doc-abc

Type

str

force

Optional. If set to true, any Chunks and objects related to this Document will also be deleted.

If false (the default), a FAILED_PRECONDITION error will be returned if the Document contains any Chunks.

Type

bool

class google.ai.generativelanguage_v1beta.types.DeleteFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for DeleteFile.

name

Required. The name of the File to delete. Example: files/abc-123

Type

str

class google.ai.generativelanguage_v1beta.types.DeletePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to delete the Permission.

name

Required. The resource name of the permission. Formats: tunedModels/{tuned_model}/permissions/{permission} corpora/{corpus}/permissions/{permission}

Type

str

class google.ai.generativelanguage_v1beta.types.DeleteTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to delete a TunedModel.

name

Required. The resource name of the model. Format: tunedModels/my-model-id

Type

str

class google.ai.generativelanguage_v1beta.types.Document(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Document is a collection of Chunks. A Corpus can have a maximum of 10,000 Documents.

name

Immutable. Identifier. The Document resource name. The ID (name excluding the corpora/*/documents/ prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be derived from display_name along with a 12 character random suffix. Example: corpora/{corpus_id}/documents/my-awesome-doc-123a456b789c

Type

str

display_name

Optional. The human-readable display name for the Document. The display name must be no more than 512 characters in length, including spaces. Example: “Semantic Retriever Documentation”.

Type

str

custom_metadata

Optional. User provided custom metadata stored as key-value pairs used for querying. A Document can have a maximum of 20 CustomMetadata.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.CustomMetadata]

update_time

Output only. The Timestamp of when the Document was last updated.

Type

google.protobuf.timestamp_pb2.Timestamp

create_time

Output only. The Timestamp of when the Document was created.

Type

google.protobuf.timestamp_pb2.Timestamp

class google.ai.generativelanguage_v1beta.types.DynamicRetrievalConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Describes the options to customize dynamic retrieval.

mode

The mode of the predictor to be used in dynamic retrieval.

Type

google.ai.generativelanguage_v1beta.types.DynamicRetrievalConfig.Mode

dynamic_threshold

The threshold to be used in dynamic retrieval. If not set, a system default value is used.

This field is a member of oneof _dynamic_threshold.

Type

float

class Mode(value)[source]

Bases: proto.enums.Enum

The mode of the predictor to be used in dynamic retrieval.

Values:
MODE_UNSPECIFIED (0):

Always trigger retrieval.

MODE_DYNAMIC (1):

Run retrieval only when system decides it is necessary.
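The two modes can be sketched as a plain-Python decision function; the predictor score and the default threshold value below are hypothetical stand-ins (the real predictor and default run server-side):

```python
DEFAULT_THRESHOLD = 0.3  # placeholder; the real system default is not documented here


def should_retrieve(mode, predicted_score, dynamic_threshold=None):
    """Decide whether to trigger retrieval for one request."""
    if mode == "MODE_UNSPECIFIED":
        return True  # always trigger retrieval
    # MODE_DYNAMIC: retrieve only when the predictor score clears the threshold
    threshold = DEFAULT_THRESHOLD if dynamic_threshold is None else dynamic_threshold
    return predicted_score >= threshold


assert should_retrieve("MODE_UNSPECIFIED", 0.0) is True
assert should_retrieve("MODE_DYNAMIC", 0.9, dynamic_threshold=0.5) is True
assert should_retrieve("MODE_DYNAMIC", 0.1, dynamic_threshold=0.5) is False
```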

class google.ai.generativelanguage_v1beta.types.EmbedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request containing the Content for the model to embed.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

content

Required. The content to embed. Only the parts.text fields will be counted.

Type

google.ai.generativelanguage_v1beta.types.Content

task_type

Optional. The task type for which the embeddings will be used. Can only be set for models/embedding-001.

This field is a member of oneof _task_type.

Type

google.ai.generativelanguage_v1beta.types.TaskType

title

Optional. An optional title for the text. Only applicable when TaskType is RETRIEVAL_DOCUMENT.

Note: Specifying a title for RETRIEVAL_DOCUMENT provides better quality embeddings for retrieval.

This field is a member of oneof _title.

Type

str

output_dimensionality

Optional. Reduced dimension for the output embedding. If set, excess values in the output embedding are truncated from the end. Supported by newer models since 2024 only. You cannot set this value if using the earlier model (models/embedding-001).

This field is a member of oneof _output_dimensionality.

Type

int
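As a sketch, the fields above map onto a request payload like this (the model name and text are placeholders; optional oneof-wrapped fields are simply omitted when unset). The snippet also illustrates the output_dimensionality semantics, where excess values are truncated from the end:

```python
def build_embed_request(text, model="models/embedding-001",
                        task_type=None, title=None, output_dimensionality=None):
    """Assemble an EmbedContentRequest-shaped dict (REST-style field names)."""
    request = {"model": model, "content": {"parts": [{"text": text}]}}
    if task_type is not None:
        request["taskType"] = task_type
    if title is not None:
        request["title"] = title  # only meaningful for RETRIEVAL_DOCUMENT
    if output_dimensionality is not None:
        request["outputDimensionality"] = output_dimensionality
    return request


req = build_embed_request("hello", task_type="RETRIEVAL_DOCUMENT", title="Greeting")
assert req["content"]["parts"][0]["text"] == "hello"

# output_dimensionality: excess values are truncated from the end
embedding = [0.1, 0.2, 0.3, 0.4]
assert embedding[:3] == [0.1, 0.2, 0.3]
```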

class google.ai.generativelanguage_v1beta.types.EmbedContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response to an EmbedContentRequest.

embedding

Output only. The embedding generated from the input content.

Type

google.ai.generativelanguage_v1beta.types.ContentEmbedding

class google.ai.generativelanguage_v1beta.types.EmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to get a text embedding from the model.

model

Required. The model name to use with the format model=models/{model}.

Type

str

text

Optional. The free-form input text that the model will turn into an embedding.

Type

str

class google.ai.generativelanguage_v1beta.types.EmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response to an EmbedTextRequest.

embedding

Output only. The embedding generated from the input text.

This field is a member of oneof _embedding.

Type

google.ai.generativelanguage_v1beta.types.Embedding

class google.ai.generativelanguage_v1beta.types.Embedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A list of floats representing the embedding.

value

The embedding values.

Type

MutableSequence[float]

class google.ai.generativelanguage_v1beta.types.Example(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

An input/output example used to instruct the Model.

It demonstrates how the model should respond or format its response.

input

Required. An example of an input Message from the user.

Type

google.ai.generativelanguage_v1beta.types.Message

output

Required. An example of what the model should output given the input.

Type

google.ai.generativelanguage_v1beta.types.Message

class google.ai.generativelanguage_v1beta.types.ExecutableCode(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Code generated by the model that is meant to be executed, and the result returned to the model.

Only generated when using the CodeExecution tool, in which case the code is automatically executed and a corresponding CodeExecutionResult is also generated.

language

Required. Programming language of the code.

Type

google.ai.generativelanguage_v1beta.types.ExecutableCode.Language

code

Required. The code to be executed.

Type

str

class Language(value)[source]

Bases: proto.enums.Enum

Supported programming languages for the generated code.

Values:
LANGUAGE_UNSPECIFIED (0):

Unspecified language. This value should not be used.

PYTHON (1):

Python >= 3.10, with numpy and sympy available.

class google.ai.generativelanguage_v1beta.types.File(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A file uploaded to the API.

video_metadata

Output only. Metadata for a video.

This field is a member of oneof metadata.

Type

google.ai.generativelanguage_v1beta.types.VideoMetadata

name

Immutable. Identifier. The File resource name. The ID (name excluding the “files/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be generated. Example: files/123-456

Type

str

display_name

Optional. The human-readable display name for the File. The display name must be no more than 512 characters in length, including spaces. Example: “Welcome Image”.

Type

str

mime_type

Output only. MIME type of the file.

Type

str

size_bytes

Output only. Size of the file in bytes.

Type

int

create_time

Output only. The timestamp of when the File was created.

Type

google.protobuf.timestamp_pb2.Timestamp

update_time

Output only. The timestamp of when the File was last updated.

Type

google.protobuf.timestamp_pb2.Timestamp

expiration_time

Output only. The timestamp of when the File will be deleted. Only set if the File is scheduled to expire.

Type

google.protobuf.timestamp_pb2.Timestamp

sha256_hash

Output only. SHA-256 hash of the uploaded bytes.

Type

bytes

uri

Output only. The uri of the File.

Type

str

state

Output only. Processing state of the File.

Type

google.ai.generativelanguage_v1beta.types.File.State

error

Output only. Error status if File processing failed.

Type

google.rpc.status_pb2.Status

class State(value)[source]

Bases: proto.enums.Enum

States for the lifecycle of a File.

Values:
STATE_UNSPECIFIED (0):

The default value. This value is used if the state is omitted.

PROCESSING (1):

File is being processed and cannot be used for inference yet.

ACTIVE (2):

File is processed and available for inference.

FAILED (10):

File failed processing.
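A client typically polls a File through these states before using it for inference. A minimal sketch, with `get_file` as a hypothetical stand-in for fetching the File's current state from the service:

```python
import itertools
import time


def wait_until_active(get_file, name, poll_seconds=0.0):
    """Poll until the File leaves PROCESSING; raise if processing failed."""
    while True:
        state = get_file(name)
        if state == "ACTIVE":
            return state
        if state == "FAILED":
            raise RuntimeError(f"File {name} failed processing")
        time.sleep(poll_seconds)  # still PROCESSING: keep waiting


# Simulated server: PROCESSING twice, then ACTIVE.
states = itertools.chain(["PROCESSING", "PROCESSING"], itertools.repeat("ACTIVE"))
assert wait_until_active(lambda name: next(states), "files/abc-123") == "ACTIVE"
```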

class google.ai.generativelanguage_v1beta.types.FileData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

URI based data.

mime_type

Optional. The IANA standard MIME type of the source data.

Type

str

file_uri

Required. URI.

Type

str

class google.ai.generativelanguage_v1beta.types.FunctionCall(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name with the arguments and their values.

name

Required. The name of the function to call. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 63.

Type

str

args

Optional. The function parameters and values in JSON object format.

This field is a member of oneof _args.

Type

google.protobuf.struct_pb2.Struct

class google.ai.generativelanguage_v1beta.types.FunctionCallingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Configuration for specifying function calling behavior.

mode

Optional. Specifies the mode in which function calling should execute. If unspecified, the default value will be set to AUTO.

Type

google.ai.generativelanguage_v1beta.types.FunctionCallingConfig.Mode

allowed_function_names

Optional. A set of function names that, when provided, limits the functions the model will call.

This should only be set when the Mode is ANY. Function names should match FunctionDeclaration.name. With mode set to ANY, the model will predict a function call from the set of function names provided.

Type

MutableSequence[str]

class Mode(value)[source]

Bases: proto.enums.Enum

Defines the execution behavior for function calling by defining the execution mode.

Values:
MODE_UNSPECIFIED (0):

Unspecified function calling mode. This value should not be used.

AUTO (1):

Default model behavior, model decides to predict either a function call or a natural language response.

ANY (2):

The model is constrained to always predict a function call. If allowed_function_names is set, the predicted function call will be limited to one of allowed_function_names; otherwise, it will be one of the provided function_declarations.

NONE (3):

The model will not predict any function call. Model behavior is the same as when not passing any function declarations.
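On the client side, honoring these mode semantics when executing a predicted FunctionCall can be sketched as follows; the registry and dispatch function are hypothetical, not part of the library:

```python
def execute_function_call(registry, call_name, args, mode="AUTO",
                          allowed_function_names=None):
    """Dispatch a predicted FunctionCall against client-registered functions,
    enforcing the FunctionCallingConfig semantics described above."""
    if mode == "NONE":
        raise ValueError("model should not predict function calls in NONE mode")
    if (mode == "ANY" and allowed_function_names is not None
            and call_name not in allowed_function_names):
        raise ValueError(f"{call_name!r} is outside allowed_function_names")
    return registry[call_name](**args)


registry = {"add": lambda a, b: a + b}
assert execute_function_call(registry, "add", {"a": 2, "b": 3}) == 5
assert execute_function_call(registry, "add", {"a": 1, "b": 1},
                             mode="ANY", allowed_function_names={"add"}) == 2
```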

class google.ai.generativelanguage_v1beta.types.FunctionDeclaration(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Structured representation of a function declaration as defined by the OpenAPI 3.0.3 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client.

name

Required. The name of the function. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 63.

Type

str

description

Required. A brief description of the function.

Type

str

parameters

Optional. Describes the parameters to this function. Reflects the OpenAPI 3.0.3 Parameter Object. Key (string): the name of the parameter; parameter names are case sensitive. Value (Schema): the Schema defining the type used for the parameter.

This field is a member of oneof _parameters.

Type

google.ai.generativelanguage_v1beta.types.Schema

class google.ai.generativelanguage_v1beta.types.FunctionResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The result output from a FunctionCall that contains a string representing the FunctionDeclaration.name and a structured JSON object containing any output from the function; this is used as context to the model. It should contain the result of a FunctionCall made based on model prediction.

name

Required. The name of the function to call. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 63.

Type

str

response

Required. The function response in JSON object format.

Type

google.protobuf.struct_pb2.Struct

class google.ai.generativelanguage_v1beta.types.GenerateAnswerRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to generate a grounded answer from the Model.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

inline_passages

Passages provided inline with the request.

This field is a member of oneof grounding_source.

Type

google.ai.generativelanguage_v1beta.types.GroundingPassages

semantic_retriever

Content retrieved from resources created via the Semantic Retriever API.

This field is a member of oneof grounding_source.

Type

google.ai.generativelanguage_v1beta.types.SemanticRetrieverConfig

model

Required. The name of the Model to use for generating the grounded response.

Format: model=models/{model}.

Type

str

contents

Required. The content of the current conversation with the Model. For single-turn queries, this is a single question to answer. For multi-turn queries, this is a repeated field that contains conversation history and the last Content in the list containing the question.

Note: GenerateAnswer only supports queries in English.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Content]

answer_style

Required. Style in which answers should be returned.

Type

google.ai.generativelanguage_v1beta.types.GenerateAnswerRequest.AnswerStyle

safety_settings

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This will be enforced on the GenerateAnswerRequest.contents and GenerateAnswerResponse.candidate. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safety_settings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]

temperature

Optional. Controls the randomness of the output.

Values can range from [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model. A low temperature (~0.2) is usually recommended for Attributed-Question-Answering use cases.

This field is a member of oneof _temperature.

Type

float

class AnswerStyle(value)[source]

Bases: proto.enums.Enum

Style for grounded answers.

Values:
ANSWER_STYLE_UNSPECIFIED (0):

Unspecified answer style.

ABSTRACTIVE (1):

Succinct but abstract style.

EXTRACTIVE (2):

Very brief and extractive style.

VERBOSE (3):

Verbose style including extra details. The response may be formatted as a sentence, a paragraph, multiple paragraphs, bullet points, etc.

class google.ai.generativelanguage_v1beta.types.GenerateAnswerResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from the model for a grounded answer.

answer

Candidate answer from the model.

Note: The model always attempts to provide a grounded answer, even when the answer is unlikely to be answerable from the given passages. In that case, a low-quality or ungrounded answer may be provided, along with a low answerable_probability.

Type

google.ai.generativelanguage_v1beta.types.Candidate

answerable_probability

Output only. The model’s estimate of the probability that its answer is correct and grounded in the input passages.

A low answerable_probability indicates that the answer might not be grounded in the sources.

When answerable_probability is low, you may want to:

  • Display a message to the effect of “We couldn’t answer that question” to the user.

  • Fall back to a general-purpose LLM that answers the question from world knowledge. The threshold and nature of such fallbacks will depend on individual use cases. 0.5 is a good starting threshold.

This field is a member of oneof _answerable_probability.

Type

float
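The fallback strategy suggested above can be sketched in a few lines; the fallback callable is a hypothetical placeholder for whatever a given application does (a canned message, a general-purpose LLM call, etc.):

```python
def choose_answer(grounded_answer, answerable_probability,
                  fallback=lambda: "We couldn't answer that question.",
                  threshold=0.5):
    """Use the grounded answer only when the model deems it answerable;
    0.5 is the suggested starting threshold."""
    if answerable_probability >= threshold:
        return grounded_answer
    return fallback()


assert choose_answer("42", 0.9) == "42"
assert choose_answer("guess", 0.1) == "We couldn't answer that question."
```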

input_feedback

Output only. Feedback related to the input data used to answer the question, as opposed to the model-generated response to the question.

The input data can be one or more of the following:

  • Question specified by the last entry in GenerateAnswerRequest.content

  • Conversation history specified by the other entries in GenerateAnswerRequest.content

  • Grounding sources (GenerateAnswerRequest.semantic_retriever or GenerateAnswerRequest.inline_passages)

This field is a member of oneof _input_feedback.

Type

google.ai.generativelanguage_v1beta.types.GenerateAnswerResponse.InputFeedback

class InputFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Feedback related to the input data used to answer the question, as opposed to the model-generated response to the question.

block_reason

Optional. If set, the input was blocked and no candidates are returned. Rephrase the input.

This field is a member of oneof _block_reason.

Type

google.ai.generativelanguage_v1beta.types.GenerateAnswerResponse.InputFeedback.BlockReason

safety_ratings

Ratings for safety of the input. There is at most one rating per category.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]

class BlockReason(value)[source]

Bases: proto.enums.Enum

Specifies the reason why the input was blocked.

Values:
BLOCK_REASON_UNSPECIFIED (0):

Default value. This value is unused.

SAFETY (1):

Input was blocked due to safety reasons. Inspect safety_ratings to understand which safety category blocked it.

OTHER (2):

Input was blocked due to other reasons.

class google.ai.generativelanguage_v1beta.types.GenerateContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to generate a completion from the model.

model

Required. The name of the Model to use for generating the completion.

Format: name=models/{model}.

Type

str

system_instruction

Optional. Developer set system instruction(s). Currently, text only.

This field is a member of oneof _system_instruction.

Type

google.ai.generativelanguage_v1beta.types.Content

contents

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries like chat, this is a repeated field that contains the conversation history and the latest request.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Content]

tools

Optional. A list of Tools the Model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the Model. Supported Tools are Function and code_execution. Refer to the Function calling and the Code execution guides to learn more.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Tool]

tool_config

Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.

Type

google.ai.generativelanguage_v1beta.types.ToolConfig

safety_settings

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safety_settings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]

generation_config

Optional. Configuration options for model generation and outputs.

This field is a member of oneof _generation_config.

Type

google.ai.generativelanguage_v1beta.types.GenerationConfig

cached_content

Optional. The name of the content cached to use as context to serve the prediction. Format: cachedContents/{cachedContent}

This field is a member of oneof _cached_content.

Type

str

class google.ai.generativelanguage_v1beta.types.GenerateContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from the model supporting multiple candidate responses.

Safety ratings and content filtering are reported for both the prompt (in GenerateContentResponse.prompt_feedback) and for each candidate (in finish_reason and in safety_ratings). The API:

  • Returns either all requested candidates or none of them

  • Returns no candidates at all only if there was something wrong with the prompt (check prompt_feedback)

  • Reports feedback on each candidate in finish_reason and safety_ratings.

candidates

Candidate responses from the model.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Candidate]

prompt_feedback

Returns the prompt’s feedback related to the content filters.

Type

google.ai.generativelanguage_v1beta.types.GenerateContentResponse.PromptFeedback

usage_metadata

Output only. Metadata on the generation requests’ token usage.

Type

google.ai.generativelanguage_v1beta.types.GenerateContentResponse.UsageMetadata

class PromptFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A set of feedback metadata for the prompt specified in GenerateContentRequest.content.

block_reason

Optional. If set, the prompt was blocked and no candidates are returned. Rephrase the prompt.

Type

google.ai.generativelanguage_v1beta.types.GenerateContentResponse.PromptFeedback.BlockReason

safety_ratings

Ratings for safety of the prompt. There is at most one rating per category.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]

class BlockReason(value)[source]

Bases: proto.enums.Enum

Specifies the reason why the prompt was blocked.

Values:
BLOCK_REASON_UNSPECIFIED (0):

Default value. This value is unused.

SAFETY (1):

Prompt was blocked due to safety reasons. Inspect safety_ratings to understand which safety category blocked it.

OTHER (2):

Prompt was blocked due to unknown reasons.

BLOCKLIST (3):

Prompt was blocked due to terms included in the terminology blocklist.

PROHIBITED_CONTENT (4):

Prompt was blocked due to prohibited content.

class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata on the generation request’s token usage.

prompt_token_count

Number of tokens in the prompt. When cached_content is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.

Type

int

cached_content_token_count

Number of tokens in the cached part of the prompt (the cached content).

Type

int

candidates_token_count

Total number of tokens across all the generated response candidates.

Type

int

total_token_count

Total token count for the generation request (prompt + response candidates).

Type

int
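The counts above are related by simple arithmetic; a sketch with illustrative numbers:

```python
prompt_token_count = 120          # total effective prompt, including cached content
cached_content_token_count = 100  # subset of the prompt count
candidates_token_count = 45       # across all generated candidates

# total_token_count = prompt + response candidates
total_token_count = prompt_token_count + candidates_token_count
assert total_token_count == 165

# Only the non-cached part of the prompt is newly processed input:
assert prompt_token_count - cached_content_token_count == 20
```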

class google.ai.generativelanguage_v1beta.types.GenerateMessageRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to generate a message response from the model.

model

Required. The name of the model to use.

Format: name=models/{model}.

Type

str

prompt

Required. The structured textual input given to the model as a prompt. Given a prompt, the model will return what it predicts is the next message in the discussion.

Type

google.ai.generativelanguage_v1beta.types.MessagePrompt

temperature

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.

This field is a member of oneof _temperature.

Type

float

candidate_count

Optional. The number of generated response messages to return.

This value must be between [1, 8], inclusive. If unset, this will default to 1.

This field is a member of oneof _candidate_count.

Type

int

top_p

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p.

This field is a member of oneof _top_p.

Type

float

top_k

Optional. The maximum number of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Top-k sampling considers the set of top_k most probable tokens.

This field is a member of oneof _top_k.

Type

int
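The combined Top-k and nucleus (top_p) filtering described above can be sketched in plain Python; the model's actual sampler is internal, so this only illustrates which tokens remain eligible:

```python
def filter_candidates(token_probs, top_k, top_p):
    """Apply Top-k first, then keep the smallest prefix of the ranked tokens
    whose probability sum is at least top_p."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append(token)
        cumulative += prob
        if cumulative >= top_p:
            break
    return kept


probs = {"the": 0.5, "a": 0.3, "an": 0.15, "zebra": 0.05}
assert filter_candidates(probs, top_k=3, top_p=0.7) == ["the", "a"]
assert filter_candidates(probs, top_k=1, top_p=0.95) == ["the"]
```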

class google.ai.generativelanguage_v1beta.types.GenerateMessageResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response from the model.

This includes candidate messages and conversation history in the form of chronologically-ordered messages.

candidates

Candidate response messages from the model.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Message]

messages

The conversation history used by the model.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Message]

filters

A set of content filtering metadata for the prompt and response text.

This indicates which SafetyCategory(s) blocked a candidate from this response, the lowest HarmProbability that triggered a block, and the HarmThreshold setting for that category.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.ContentFilter]

class google.ai.generativelanguage_v1beta.types.GenerateTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to generate a text completion response from the model.

model

Required. The name of the Model or TunedModel to use for generating the completion. Examples: models/text-bison-001 tunedModels/sentence-translator-u3b7m

Type

str

prompt

Required. The free-form input text given to the model as a prompt. Given a prompt, the model will generate a TextCompletion response it predicts as the completion of the input text.

Type

google.ai.generativelanguage_v1beta.types.TextPrompt

temperature

Optional. Controls the randomness of the output. Note: The default value varies by model; see the Model.temperature attribute of the Model returned by the getModel function.

Values can range from [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model.

This field is a member of oneof _temperature.

Type

float

candidate_count

Optional. Number of generated responses to return.

This value must be between [1, 8], inclusive. If unset, this will default to 1.

This field is a member of oneof _candidate_count.

Type

int

max_output_tokens

Optional. The maximum number of tokens to include in a candidate.

If unset, this will default to output_token_limit specified in the Model specification.

This field is a member of oneof _max_output_tokens.

Type

int

top_p

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits number of tokens based on the cumulative probability.

Note: The default value varies by model; see the Model.top_p attribute of the Model returned by the getModel function.

This field is a member of oneof _top_p.

Type

float

top_k

Optional. The maximum number of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Top-k sampling considers the set of top_k most probable tokens. Defaults to 40.

Note: The default value varies by model; see the Model.top_k attribute of the Model returned by the getModel function.

This field is a member of oneof _top_k.

Type

int

safety_settings

Optional. A list of unique SafetySetting instances for blocking unsafe content.

These settings will be enforced on the GenerateTextRequest.prompt and GenerateTextResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any prompts and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safety_settings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_DEROGATORY, HARM_CATEGORY_TOXICITY, HARM_CATEGORY_VIOLENCE, HARM_CATEGORY_SEXUAL, HARM_CATEGORY_MEDICAL, and HARM_CATEGORY_DANGEROUS are supported in the text service.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]

stop_sequences

The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response.

Type

MutableSequence[str]
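The constraints on safety_settings and stop_sequences above can be sketched with plain dicts shaped like the REST JSON request body (the validation helper below is hypothetical, not part of this library):

```python
def validate_text_request(request: dict) -> list:
    """Return a list of problems with a GenerateTextRequest-shaped dict."""
    problems = []
    if len(request.get("stop_sequences", [])) > 5:
        problems.append("stop_sequences supports at most 5 entries")
    categories = [s["category"] for s in request.get("safety_settings", [])]
    if len(categories) != len(set(categories)):
        problems.append("at most one SafetySetting per SafetyCategory")
    return problems

request = {
    "safety_settings": [
        {"category": "HARM_CATEGORY_TOXICITY", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_VIOLENCE", "threshold": "BLOCK_ONLY_HIGH"},
    ],
    "stop_sequences": ["\n\n"],
}
assert validate_text_request(request) == []
```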

class google.ai.generativelanguage_v1beta.types.GenerateTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response from the model, including candidate completions.

candidates

Candidate responses from the model.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.TextCompletion]

filters

A set of content filtering metadata for the prompt and response text.

This indicates which SafetyCategory(s) blocked a candidate from this response, the lowest HarmProbability that triggered a block, and the HarmThreshold setting for that category. This indicates the smallest change to the SafetySettings that would be necessary to unblock at least 1 response.

The blocking is configured by the SafetySettings in the request (or the default SafetySettings of the API).

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.ContentFilter]

safety_feedback

Returns any safety feedback related to content filtering.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyFeedback]
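The relationship between candidates, filters, and safety_feedback can be illustrated with a response-shaped dict (field values here are illustrative, not taken from a real call):

```python
# A fully blocked response: no candidates, with filters and safety_feedback
# explaining which category and threshold caused the block.
response = {
    "candidates": [],
    "filters": [{"reason": "SAFETY"}],
    "safety_feedback": [
        {
            "rating": {"category": "HARM_CATEGORY_DANGEROUS", "probability": "HIGH"},
            "setting": {"category": "HARM_CATEGORY_DANGEROUS",
                        "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        },
    ],
}

blocked = not response["candidates"] and bool(response["filters"])
blocking_categories = [fb["setting"]["category"] for fb in response["safety_feedback"]]
```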

class google.ai.generativelanguage_v1beta.types.GenerationConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Configuration options for model generation and outputs. Not all parameters are configurable for every model.

candidate_count

Optional. Number of generated responses to return. Currently, this value can only be set to 1. If unset, this will default to 1.

This field is a member of oneof _candidate_count.

Type

int

stop_sequences

Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.

Type

MutableSequence[str]

max_output_tokens

Optional. The maximum number of tokens to include in a response candidate.

Note: The default value varies by model, see the Model.output_token_limit attribute of the Model returned from the getModel function.

This field is a member of oneof _max_output_tokens.

Type

int

temperature

Optional. Controls the randomness of the output.

Note: The default value varies by model, see the Model.temperature attribute of the Model returned from the getModel function.

Values can range from [0.0, 2.0].

This field is a member of oneof _temperature.

Type

float

top_p

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and Top-p (nucleus) sampling.

Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits the number of tokens based on the cumulative probability.

Note: The default value varies by Model and is specified by the Model.top_p attribute returned from the getModel function.

This field is a member of oneof _top_p.

Type

float

top_k

Optional. The maximum number of tokens to consider when sampling.

Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of top_k most probable tokens. Models running with nucleus sampling don’t allow top_k setting.

Note: The default value varies by Model and is specified by the Model.top_k attribute returned from the getModel function. An empty top_k attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow setting top_k on requests.

This field is a member of oneof _top_k.

Type

int

response_mime_type

Optional. MIME type of the generated candidate text. Supported MIME types are: text/plain: (default) Text output. application/json: JSON response in the response candidates. text/x.enum: ENUM as a string response in the response candidates. Refer to the docs for a list of all supported text MIME types.

Type

str

response_schema

Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives or arrays.

If set, a compatible response_mime_type must also be set. Compatible MIME types: application/json: Schema for JSON response. Refer to the JSON text generation guide for more details.

Type

google.ai.generativelanguage_v1beta.types.Schema
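The pairing of response_schema with a compatible response_mime_type can be sketched as a config-shaped dict (the schema contents and the check helper are illustrative):

```python
generation_config = {
    "response_mime_type": "application/json",
    "response_schema": {
        "type": "ARRAY",
        "items": {
            "type": "OBJECT",
            "properties": {"recipe_name": {"type": "STRING"}},
        },
    },
}

def schema_config_ok(cfg: dict) -> bool:
    # A response_schema requires a compatible (JSON) response_mime_type.
    if "response_schema" in cfg:
        return cfg.get("response_mime_type") == "application/json"
    return True

assert schema_config_ok(generation_config)
```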

presence_penalty

Optional. Presence penalty applied to the next token’s logprobs if the token has already been seen in the response.

This penalty is binary on/off and not dependent on the number of times the token is used (after the first). Use [frequency_penalty][google.ai.generativelanguage.v1beta.GenerationConfig.frequency_penalty] for a penalty that increases with each use.

A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary.

A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary.

This field is a member of oneof _presence_penalty.

Type

float

frequency_penalty

Optional. Frequency penalty applied to the next token’s logprobs, multiplied by the number of times each token has been seen in the response so far.

A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: The more a token is used, the more difficult it is for the model to use that token again, increasing the vocabulary of responses.

Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the [max_output_tokens][google.ai.generativelanguage.v1beta.GenerationConfig.max_output_tokens] limit: “…the the the the the…”.

This field is a member of oneof _frequency_penalty.

Type

float

response_logprobs

Optional. If true, export the logprobs results in response.

This field is a member of oneof _response_logprobs.

Type

bool

logprobs

Optional. Only valid if [response_logprobs=True][google.ai.generativelanguage.v1beta.GenerationConfig.response_logprobs]. This sets the number of top logprobs to return at each decoding step in the [Candidate.logprobs_result][google.ai.generativelanguage.v1beta.Candidate.logprobs_result].

This field is a member of oneof _logprobs.

Type

int
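The field constraints documented for GenerationConfig can be sketched against a config-shaped dict; the checker below is a hypothetical helper, not part of this library:

```python
def check_generation_config(cfg: dict) -> list:
    """Return a list of constraint violations for a GenerationConfig-shaped dict."""
    issues = []
    if cfg.get("candidate_count", 1) != 1:
        issues.append("candidate_count can currently only be 1")
    temperature = cfg.get("temperature")
    if temperature is not None and not 0.0 <= temperature <= 2.0:
        issues.append("temperature must be within [0.0, 2.0]")
    if len(cfg.get("stop_sequences", [])) > 5:
        issues.append("at most 5 stop_sequences are allowed")
    if cfg.get("logprobs") is not None and not cfg.get("response_logprobs"):
        issues.append("logprobs is only valid when response_logprobs is true")
    return issues

config = {
    "candidate_count": 1,
    "temperature": 0.9,
    "top_p": 0.95,
    "top_k": 40,
    "max_output_tokens": 256,
    "stop_sequences": ["END"],
    "response_logprobs": True,
    "logprobs": 5,
}
assert check_generation_config(config) == []
```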

class google.ai.generativelanguage_v1beta.types.GetCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to read CachedContent.

name

Required. The resource name referring to the content cache entry. Format: cachedContents/{id}

Type

str

class google.ai.generativelanguage_v1beta.types.GetChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Chunk.

name

Required. The name of the Chunk to retrieve. Example: corpora/my-corpus-123/documents/the-doc-abc/chunks/some-chunk

Type

str

class google.ai.generativelanguage_v1beta.types.GetCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Corpus.

name

Required. The name of the Corpus. Example: corpora/my-corpus-123

Type

str

class google.ai.generativelanguage_v1beta.types.GetDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Document.

name

Required. The name of the Document to retrieve. Example: corpora/my-corpus-123/documents/the-doc-abc

Type

str

class google.ai.generativelanguage_v1beta.types.GetFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for GetFile.

name

Required. The name of the File to get. Example: files/abc-123

Type

str

class google.ai.generativelanguage_v1beta.types.GetModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Model.

name

Required. The resource name of the model.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

class google.ai.generativelanguage_v1beta.types.GetPermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Permission.

name

Required. The resource name of the permission.

Formats: tunedModels/{tuned_model}/permissions/{permission} corpora/{corpus}/permissions/{permission}

Type

str

class google.ai.generativelanguage_v1beta.types.GetTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific TunedModel.

name

Required. The resource name of the model.

Format: tunedModels/my-model-id

Type

str

class google.ai.generativelanguage_v1beta.types.GoogleSearchRetrieval(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Tool to retrieve public web data for grounding, powered by Google.

dynamic_retrieval_config

Specifies the dynamic retrieval configuration for the given source.

Type

google.ai.generativelanguage_v1beta.types.DynamicRetrievalConfig

class google.ai.generativelanguage_v1beta.types.GroundingAttribution(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Attribution for a source that contributed to an answer.

source_id

Output only. Identifier for the source contributing to this attribution.

Type

google.ai.generativelanguage_v1beta.types.AttributionSourceId

content

Grounding source content that makes up this attribution.

Type

google.ai.generativelanguage_v1beta.types.Content

class google.ai.generativelanguage_v1beta.types.GroundingChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Grounding chunk.

web

Grounding chunk from the web.

This field is a member of oneof chunk_type.

Type

google.ai.generativelanguage_v1beta.types.GroundingChunk.Web

class Web(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Chunk from the web.

uri

URI reference of the chunk.

This field is a member of oneof _uri.

Type

str

title

Title of the chunk.

This field is a member of oneof _title.

Type

str

class google.ai.generativelanguage_v1beta.types.GroundingMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata returned to client when grounding is enabled.

search_entry_point

Optional. Google search entry point for follow-up web searches.

This field is a member of oneof _search_entry_point.

Type

google.ai.generativelanguage_v1beta.types.SearchEntryPoint

grounding_chunks

List of supporting references retrieved from specified grounding source.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingChunk]

grounding_supports

List of grounding support.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingSupport]

retrieval_metadata

Metadata related to retrieval in the grounding flow.

This field is a member of oneof _retrieval_metadata.

Type

google.ai.generativelanguage_v1beta.types.RetrievalMetadata

class google.ai.generativelanguage_v1beta.types.GroundingPassage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Passage included inline with a grounding configuration.

id

Identifier for the passage for attributing this passage in grounded answers.

Type

str

content

Content of the passage.

Type

google.ai.generativelanguage_v1beta.types.Content

class google.ai.generativelanguage_v1beta.types.GroundingPassages(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A repeated list of passages.

passages

List of passages.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingPassage]

class google.ai.generativelanguage_v1beta.types.GroundingSupport(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Grounding support.

segment

Segment of the content this support belongs to.

This field is a member of oneof _segment.

Type

google.ai.generativelanguage_v1beta.types.Segment

grounding_chunk_indices

A list of indices (into grounding_chunks) specifying the citations associated with the claim. For instance, [1,3,4] means that grounding_chunks[1], grounding_chunks[3], and grounding_chunks[4] are the retrieved content attributed to the claim.

Type

MutableSequence[int]

confidence_scores

Confidence score of the support references. Ranges from 0 to 1. 1 is the most confident. This list must have the same size as the grounding_chunk_indices.

Type

MutableSequence[float]
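Because grounding_chunk_indices and confidence_scores are parallel lists, a support entry can be resolved against the grounding_chunks list like this (the chunk contents below are illustrative):

```python
grounding_chunks = [
    {"web": {"uri": "https://example.com/a", "title": "A"}},
    {"web": {"uri": "https://example.com/b", "title": "B"}},
    {"web": {"uri": "https://example.com/c", "title": "C"}},
]
support = {"grounding_chunk_indices": [0, 2], "confidence_scores": [0.9, 0.6]}

# Pair each cited chunk with its confidence score.
citations = [
    (grounding_chunks[i]["web"]["title"], score)
    for i, score in zip(support["grounding_chunk_indices"],
                        support["confidence_scores"])
]
```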

class google.ai.generativelanguage_v1beta.types.HarmCategory(value)[source]

Bases: proto.enums.Enum

The category of a rating.

These categories cover various kinds of harms that developers may wish to adjust.

Values:
HARM_CATEGORY_UNSPECIFIED (0):

Category is unspecified.

HARM_CATEGORY_DEROGATORY (1):

PaLM - Negative or harmful comments targeting identity and/or protected attribute.

HARM_CATEGORY_TOXICITY (2):

PaLM - Content that is rude, disrespectful, or profane.

HARM_CATEGORY_VIOLENCE (3):

PaLM - Describes scenarios depicting violence against an individual or group, or general descriptions of gore.

HARM_CATEGORY_SEXUAL (4):

PaLM - Contains references to sexual acts or other lewd content.

HARM_CATEGORY_MEDICAL (5):

PaLM - Promotes unchecked medical advice.

HARM_CATEGORY_DANGEROUS (6):

PaLM - Dangerous content that promotes, facilitates, or encourages harmful acts.

HARM_CATEGORY_HARASSMENT (7):

Gemini - Harassment content.

HARM_CATEGORY_HATE_SPEECH (8):

Gemini - Hate speech and content.

HARM_CATEGORY_SEXUALLY_EXPLICIT (9):

Gemini - Sexually explicit content.

HARM_CATEGORY_DANGEROUS_CONTENT (10):

Gemini - Dangerous content.

HARM_CATEGORY_CIVIC_INTEGRITY (11):

Gemini - Content that may be used to harm civic integrity.

class google.ai.generativelanguage_v1beta.types.Hyperparameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Hyperparameters controlling the tuning process. Read more at https://ai.google.dev/docs/model_tuning_guidance

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

learning_rate

Optional. Immutable. The learning rate hyperparameter for tuning. If not set, a default of 0.001 or 0.0002 will be calculated based on the number of training examples.

This field is a member of oneof learning_rate_option.

Type

float

learning_rate_multiplier

Optional. Immutable. The learning rate multiplier is used to calculate a final learning_rate based on the default (recommended) value: actual learning rate := learning_rate_multiplier * default learning rate. The default learning rate depends on the base model and dataset size. If not set, a default of 1.0 will be used.

This field is a member of oneof learning_rate_option.

Type

float

epoch_count

Immutable. The number of training epochs. An epoch is one pass through the training data. If not set, a default of 5 will be used.

This field is a member of oneof _epoch_count.

Type

int

batch_size

Immutable. The batch size hyperparameter for tuning. If not set, a default of 4 or 16 will be used based on the number of training examples.

This field is a member of oneof _batch_size.

Type

int
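The learning_rate_multiplier relationship above can be sketched directly; the default rate of 0.001 used here is one of the documented possibilities, chosen for illustration:

```python
def effective_learning_rate(default_rate, multiplier=None):
    """actual learning rate := learning_rate_multiplier * default learning rate.

    A multiplier of None behaves like the documented default of 1.0.
    """
    return default_rate * (1.0 if multiplier is None else multiplier)

# Doubling the recommended rate, and falling back to the default.
doubled = effective_learning_rate(0.001, 2.0)
default = effective_learning_rate(0.001)
```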

class google.ai.generativelanguage_v1beta.types.ListCachedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to list CachedContents.

page_size

Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.

Type

int

page_token

Optional. A page token, received from a previous ListCachedContents call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to ListCachedContents must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1beta.types.ListCachedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response with CachedContents list.

cached_contents

List of cached contents.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.CachedContent]

next_page_token

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

Type

str

class google.ai.generativelanguage_v1beta.types.ListChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing Chunks.

parent

Required. The name of the Document containing Chunks. Example: corpora/my-corpus-123/documents/the-doc-abc

Type

str

page_size

Optional. The maximum number of Chunks to return (per page). The service may return fewer Chunks.

If unspecified, at most 10 Chunks will be returned. The maximum size limit is 100 Chunks per page.

Type

int

page_token

Optional. A page token, received from a previous ListChunks call.

Provide the next_page_token returned in the response as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListChunks must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1beta.types.ListChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListChunks containing a paginated list of Chunks. The Chunks are sorted by ascending chunk.create_time.

chunks

The returned Chunks.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]

next_page_token

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no more pages.

Type

str
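The page_token / next_page_token contract above (and in the other List* methods in this section) can be sketched as a loop; `fetch_page` below is a stand-in for a real RPC such as ListChunks, backed here by canned pages:

```python
# Canned pages keyed by page token; an empty token means the first page,
# and an omitted/empty next_page_token means there are no more pages.
PAGES = {
    "": {"chunks": ["c1", "c2"], "next_page_token": "t1"},
    "t1": {"chunks": ["c3"], "next_page_token": ""},
}

def fetch_page(page_token: str) -> dict:
    return PAGES[page_token]

def list_all(fetch) -> list:
    items, token = [], ""
    while True:
        page = fetch(token)
        items.extend(page["chunks"])
        token = page.get("next_page_token", "")
        if not token:  # no more pages
            return items

assert list_all(fetch_page) == ["c1", "c2", "c3"]
```

Remember that, per the note above, all other request parameters must stay identical across pages when a page token is supplied.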

class google.ai.generativelanguage_v1beta.types.ListCorporaRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing Corpora.

page_size

Optional. The maximum number of Corpora to return (per page). The service may return fewer Corpora.

If unspecified, at most 10 Corpora will be returned. The maximum size limit is 20 Corpora per page.

Type

int

page_token

Optional. A page token, received from a previous ListCorpora call.

Provide the next_page_token returned in the response as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListCorpora must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1beta.types.ListCorporaResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListCorpora containing a paginated list of Corpora. The results are sorted by ascending corpus.create_time.

corpora

The returned corpora.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Corpus]

next_page_token

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no more pages.

Type

str

class google.ai.generativelanguage_v1beta.types.ListDocumentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing Documents.

parent

Required. The name of the Corpus containing Documents. Example: corpora/my-corpus-123

Type

str

page_size

Optional. The maximum number of Documents to return (per page). The service may return fewer Documents.

If unspecified, at most 10 Documents will be returned. The maximum size limit is 20 Documents per page.

Type

int

page_token

Optional. A page token, received from a previous ListDocuments call.

Provide the next_page_token returned in the response as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListDocuments must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1beta.types.ListDocumentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListDocuments containing a paginated list of Documents. The Documents are sorted by ascending document.create_time.

documents

The returned Documents.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Document]

next_page_token

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no more pages.

Type

str

class google.ai.generativelanguage_v1beta.types.ListFilesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for ListFiles.

page_size

Optional. Maximum number of Files to return per page. If unspecified, defaults to 10. Maximum page_size is 100.

Type

int

page_token

Optional. A page token from a previous ListFiles call.

Type

str

class google.ai.generativelanguage_v1beta.types.ListFilesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response for ListFiles.

files

The list of Files.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.File]

next_page_token

A token that can be sent as a page_token into a subsequent ListFiles call.

Type

str

class google.ai.generativelanguage_v1beta.types.ListModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing all Models.

page_size

The maximum number of Models to return (per page).

If unspecified, 50 models will be returned per page. This method returns at most 1000 models per page, even if you pass a larger page_size.

Type

int

page_token

A page token, received from a previous ListModels call.

Provide the page_token returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListModels must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1beta.types.ListModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListModel containing a paginated list of Models.

models

The returned Models.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Model]

next_page_token

A token, which can be sent as page_token to retrieve the next page.

If this field is omitted, there are no more pages.

Type

str

class google.ai.generativelanguage_v1beta.types.ListPermissionsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing permissions.

parent

Required. The parent resource of the permissions. Formats: tunedModels/{tuned_model} corpora/{corpus}

Type

str

page_size

Optional. The maximum number of Permissions to return (per page). The service may return fewer permissions.

If unspecified, at most 10 permissions will be returned. This method returns at most 1000 permissions per page, even if you pass a larger page_size.

Type

int

page_token

Optional. A page token, received from a previous ListPermissions call.

Provide the page_token returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListPermissions must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1beta.types.ListPermissionsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListPermissions containing a paginated list of permissions.

permissions

Returned permissions.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Permission]

next_page_token

A token, which can be sent as page_token to retrieve the next page.

If this field is omitted, there are no more pages.

Type

str

class google.ai.generativelanguage_v1beta.types.ListTunedModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing TunedModels.

page_size

Optional. The maximum number of TunedModels to return (per page). The service may return fewer tuned models.

If unspecified, at most 10 tuned models will be returned. This method returns at most 1000 models per page, even if you pass a larger page_size.

Type

int

page_token

Optional. A page token, received from a previous ListTunedModels call.

Provide the page_token returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListTunedModels must match the call that provided the page token.

Type

str

filter

Optional. A filter is a full text search over the tuned model’s description and display name. By default, results will not include tuned models shared with everyone.

Additional operators:

  • owner:me

  • writers:me

  • readers:me

  • readers:everyone

Examples:

  • “owner:me” returns all tuned models to which the caller has the owner role

  • “readers:me” returns all tuned models to which the caller has the reader role

  • “readers:everyone” returns all tuned models that are shared with everyone

Type

str

class google.ai.generativelanguage_v1beta.types.ListTunedModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListTunedModels containing a paginated list of Models.

tuned_models

The returned Models.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.TunedModel]

next_page_token

A token, which can be sent as page_token to retrieve the next page.

If this field is omitted, there are no more pages.

Type

str

class google.ai.generativelanguage_v1beta.types.LogprobsResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Logprobs Result

top_candidates

Length = total number of decoding steps.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.TopCandidates]

chosen_candidates

Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.Candidate]

class Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Candidate for the logprobs token and score.

token

The candidate’s token string value.

This field is a member of oneof _token.

Type

str

token_id

The candidate’s token id value.

This field is a member of oneof _token_id.

Type

int

log_probability

The candidate’s log probability.

This field is a member of oneof _log_probability.

Type

float

class TopCandidates(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Candidates with top log probabilities at each decoding step.

candidates

Sorted by log probability in descending order.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.Candidate]
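Since top_candidates and chosen_candidates both have one entry per decoding step, a result-shaped dict can be scored by summing the chosen tokens' log probabilities (token strings, ids, and scores below are illustrative):

```python
logprobs_result = {
    "top_candidates": [
        {"candidates": [
            {"token": "Hello", "token_id": 101, "log_probability": -0.1},
            {"token": "Hi", "token_id": 102, "log_probability": -2.3},
        ]},
        {"candidates": [
            {"token": "!", "token_id": 103, "log_probability": -0.5},
            {"token": ".", "token_id": 104, "log_probability": -1.2},
        ]},
    ],
    "chosen_candidates": [
        {"token": "Hello", "token_id": 101, "log_probability": -0.1},
        {"token": "!", "token_id": 103, "log_probability": -0.5},
    ],
}

# Sum the chosen tokens' log probabilities to score the whole sequence.
sequence_log_probability = sum(
    c["log_probability"] for c in logprobs_result["chosen_candidates"]
)
```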

class google.ai.generativelanguage_v1beta.types.Message(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The base unit of structured text.

A Message includes an author and the content of the Message.

The author is used to tag messages when they are fed to the model as text.

author

Optional. The author of this Message.

This serves as a key for tagging the content of this Message when it is fed to the model as text.

The author can be any alphanumeric string.

Type

str

content

Required. The text content of the structured Message.

Type

str

citation_metadata

Output only. Citation information for model-generated content in this Message.

If this Message was generated as output from the model, this field may be populated with attribution information for any text included in the content. This field is used only on output.

This field is a member of oneof _citation_metadata.

Type

google.ai.generativelanguage_v1beta.types.CitationMetadata

class google.ai.generativelanguage_v1beta.types.MessagePrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

All of the structured input text passed to the model as a prompt.

A MessagePrompt contains a structured set of fields that provide context for the conversation, examples of user input/model output message pairs that prime the model to respond in different ways, and the conversation history or list of messages representing the alternating turns of the conversation between the user and the model.

context

Optional. Text that should be provided to the model first to ground the response.

If not empty, this context will be given to the model first before the examples and messages. When using a context be sure to provide it with every request to maintain continuity.

This field can be a description of your prompt to the model to help provide context and guide the responses. Examples: “Translate the phrase from English to French.” or “Given a statement, classify the sentiment as happy, sad or neutral.”

Anything included in this field will take precedence over message history if the total input size exceeds the model’s input_token_limit and the input request is truncated.

Type

str

examples

Optional. Examples of what the model should generate.

This includes both user input and the response that the model should emulate.

These examples are treated identically to conversation messages except that they take precedence over the history in messages: If the total input size exceeds the model’s input_token_limit the input will be truncated. Items will be dropped from messages before examples.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Example]

messages

Required. A snapshot of the recent conversation history sorted chronologically.

Turns alternate between two authors.

If the total input size exceeds the model’s input_token_limit the input will be truncated: The oldest items will be dropped from messages.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Message]
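A MessagePrompt-shaped dict combining the three fields above, plus a sketch of the documented truncation order (oldest messages dropped first); the truncation helper is hypothetical:

```python
prompt = {
    "context": "Translate the phrase from English to French.",
    "examples": [
        {"input": {"content": "Hello."}, "output": {"content": "Bonjour."}},
    ],
    "messages": [
        {"author": "user", "content": "Good morning!"},
    ],
}

def truncate_messages(messages, keep):
    # When input exceeds input_token_limit, the oldest items in messages
    # are dropped first; context and examples take precedence.
    return messages[-keep:]

history = [{"author": "user", "content": str(i)} for i in range(4)]
trimmed = truncate_messages(history, 2)
```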

class google.ai.generativelanguage_v1beta.types.MetadataFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

User provided filter to limit retrieval based on Chunk or Document level metadata values. Example (genre = drama OR genre = action): key = “document.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]

key

Required. The key of the metadata to filter on.

Type

str

conditions

Required. The Conditions for the given key that will trigger this filter. Multiple Conditions are joined by logical ORs.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.Condition]
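The (genre = drama OR genre = action) example above, expressed as a filter-shaped dict, with a hypothetical helper showing the OR semantics of multiple conditions under one key:

```python
metadata_filter = {
    "key": "document.custom_metadata.genre",
    "conditions": [
        {"string_value": "drama", "operation": "EQUAL"},
        {"string_value": "action", "operation": "EQUAL"},
    ],
}

def matches(value: str, metadata_filter: dict) -> bool:
    # Multiple Conditions for the same key are joined by logical ORs.
    return any(
        c["operation"] == "EQUAL" and c["string_value"] == value
        for c in metadata_filter["conditions"]
    )

assert matches("drama", metadata_filter)
```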

class google.ai.generativelanguage_v1beta.types.Model(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Information about a Generative Language Model.

name

Required. The resource name of the Model. Refer to Model variants for all allowed values.

Format: models/{model} with a {model} naming convention of:

  • “{base_model_id}-{version}”

Examples:

  • models/gemini-1.5-flash-001

Type

str

base_model_id

Required. The name of the base model; pass this in the generation request.

Examples:

  • gemini-1.5-flash

Type

str

version

Required. The version number of the model.

This represents the major version (1.0 or 1.5).

Type

str

display_name

The human-readable name of the model. E.g. “Gemini 1.5 Flash”. The name can be up to 128 characters long and can consist of any UTF-8 characters.

Type

str

description

A short description of the model.

Type

str

input_token_limit

Maximum number of input tokens allowed for this model.

Type

int

output_token_limit

Maximum number of output tokens available for this model.

Type

int

supported_generation_methods

The model’s supported generation methods.

The corresponding API method names are defined as camel case strings, such as generateMessage and generateContent.

Type

MutableSequence[str]

temperature

Controls the randomness of the output.

Values can range over [0.0, max_temperature], inclusive. A higher value produces more varied responses, while a value closer to 0.0 typically yields less surprising responses from the model. This value specifies the default the backend uses when calling the model.

This field is a member of oneof _temperature.

Type

float

max_temperature

The maximum temperature this model can use.

This field is a member of oneof _max_temperature.

Type

float

top_p

For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p. This value specifies the default the backend uses when calling the model.

This field is a member of oneof _top_p.

Type

float

top_k

For Top-k sampling.

Top-k sampling considers the set of the top_k most probable tokens. This value specifies the default the backend uses when calling the model. If empty, the model doesn't use top-k sampling, and top_k isn't allowed as a generation parameter.

This field is a member of oneof _top_k.

Type

int

class google.ai.generativelanguage_v1beta.types.Part(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A datatype containing media that is part of a multi-part Content message.

A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data.

A Part must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data field is filled with raw bytes.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

text

Inline text.

This field is a member of oneof data.

Type

str

inline_data

Inline media bytes.

This field is a member of oneof data.

Type

google.ai.generativelanguage_v1beta.types.Blob

function_call

A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name with the arguments and their values.

This field is a member of oneof data.

Type

google.ai.generativelanguage_v1beta.types.FunctionCall

function_response

The result output of a FunctionCall, containing a string representing the FunctionDeclaration.name and a structured JSON object with any output from the function; it is used as context for the model.

This field is a member of oneof data.

Type

google.ai.generativelanguage_v1beta.types.FunctionResponse

file_data

URI based data.

This field is a member of oneof data.

Type

google.ai.generativelanguage_v1beta.types.FileData

executable_code

Code generated by the model that is meant to be executed.

This field is a member of oneof data.

Type

google.ai.generativelanguage_v1beta.types.ExecutableCode

code_execution_result

Result of executing the ExecutableCode.

This field is a member of oneof data.

Type

google.ai.generativelanguage_v1beta.types.CodeExecutionResult

class google.ai.generativelanguage_v1beta.types.Permission(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The Permission resource grants a user, group, or the rest of the world access to the PaLM API resource (e.g. a tuned model, corpus).

A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant permissions that the role contains.

There are three concentric roles. Each role is a superset of the previous role’s permitted operations:

  • reader can use the resource (e.g. tuned model, corpus) for inference

  • writer has reader’s permissions and additionally can edit and share

  • owner has writer’s permissions and additionally can delete

name

Output only. Identifier. The permission name. A unique name will be generated on create. Examples: tunedModels/{tuned_model}/permissions/{permission}, corpora/{corpus}/permissions/{permission}

Type

str

grantee_type

Optional. Immutable. The type of the grantee.

This field is a member of oneof _grantee_type.

Type

google.ai.generativelanguage_v1beta.types.Permission.GranteeType

email_address

Optional. Immutable. The email address of the user or group to which this permission refers. This field is not set when the permission's grantee type is EVERYONE.

This field is a member of oneof _email_address.

Type

str

role

Required. The role granted by this permission.

This field is a member of oneof _role.

Type

google.ai.generativelanguage_v1beta.types.Permission.Role

class GranteeType(value)[source]

Bases: proto.enums.Enum

Defines types of the grantee of this permission.

Values:
GRANTEE_TYPE_UNSPECIFIED (0):

The default value. This value is unused.

USER (1):

Represents a user. When set, you must provide email_address for the user.

GROUP (2):

Represents a group. When set, you must provide email_address for the group.

EVERYONE (3):

Represents access to everyone. No extra information is required.

class Role(value)[source]

Bases: proto.enums.Enum

Defines the role granted by this permission.

Values:
ROLE_UNSPECIFIED (0):

The default value. This value is unused.

OWNER (1):

Owner can use, update, share and delete the resource.

WRITER (2):

Writer can use, update and share the resource.

READER (3):

Reader can use the resource.

class google.ai.generativelanguage_v1beta.types.PredictRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request message for [PredictionService.Predict][google.ai.generativelanguage.v1beta.PredictionService.Predict].

model

Required. The name of the model for prediction. Format: name=models/{model}.

Type

str

instances

Required. The instances that are the input to the prediction call.

Type

MutableSequence[google.protobuf.struct_pb2.Value]

parameters

Optional. The parameters that govern the prediction call.

Type

google.protobuf.struct_pb2.Value

class google.ai.generativelanguage_v1beta.types.PredictResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response message for [PredictionService.Predict].

predictions

The outputs of the prediction call.

Type

MutableSequence[google.protobuf.struct_pb2.Value]

class google.ai.generativelanguage_v1beta.types.QueryCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for querying a Corpus.

name

Required. The name of the Corpus to query. Example: corpora/my-corpus-123

Type

str

query

Required. Query string to perform semantic search.

Type

str

metadata_filters

Optional. Filter for Chunk and Document metadata. Each MetadataFilter object should correspond to a unique key. Multiple MetadataFilter objects are joined by logical “AND”s.

Example query at document level: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)

MetadataFilter object list: metadata_filters = [ {key = “document.custom_metadata.year” conditions = [{int_value = 2020, operation = GREATER_EQUAL}, {int_value = 2010, operation = LESS}]}, {key = “document.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]}]

Example query at chunk level for a numeric range of values: (year > 2015 AND year <= 2020)

MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2015, operation = GREATER}]}, {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = LESS_EQUAL}]}]

Note: “AND”s for the same key are only supported for numeric values. String values only support “OR”s for the same key.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]

results_count

Optional. The maximum number of Chunks to return. The service may return fewer Chunks.

If unspecified, at most 10 Chunks will be returned. The maximum specified result count is 100.

Type

int

class google.ai.generativelanguage_v1beta.types.QueryCorpusResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from QueryCorpus containing a list of relevant chunks.

relevant_chunks

The relevant chunks.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.RelevantChunk]

class google.ai.generativelanguage_v1beta.types.QueryDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for querying a Document.

name

Required. The name of the Document to query. Example: corpora/my-corpus-123/documents/the-doc-abc

Type

str

query

Required. Query string to perform semantic search.

Type

str

results_count

Optional. The maximum number of Chunks to return. The service may return fewer Chunks.

If unspecified, at most 10 Chunks will be returned. The maximum specified result count is 100.

Type

int

metadata_filters

Optional. Filter for Chunk metadata. Each MetadataFilter object should correspond to a unique key. Multiple MetadataFilter objects are joined by logical “AND”s.

Note: Document-level filtering is not supported for this request because a Document name is already specified.

Example query: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)

MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = GREATER_EQUAL}, {int_value = 2010, operation = LESS}]}, {key = “chunk.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]}]

Example query for a numeric range of values: (year > 2015 AND year <= 2020)

MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2015, operation = GREATER}]}, {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = LESS_EQUAL}]}]

Note: “AND”s for the same key are only supported for numeric values. String values only support “OR”s for the same key.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]

class google.ai.generativelanguage_v1beta.types.QueryDocumentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from QueryDocument containing a list of relevant chunks.

relevant_chunks

The returned relevant chunks.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.RelevantChunk]

class google.ai.generativelanguage_v1beta.types.RelevantChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The information for a chunk relevant to a query.

chunk_relevance_score

Chunk relevance to the query.

Type

float

chunk

Chunk associated with the query.

Type

google.ai.generativelanguage_v1beta.types.Chunk

class google.ai.generativelanguage_v1beta.types.RetrievalMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata related to retrieval in the grounding flow.

google_search_dynamic_retrieval_score

Optional. Score indicating how likely information from Google Search could help answer the prompt. The score is in the range [0, 1], where 0 is the least likely and 1 is the most likely. This score is only populated when Google Search grounding and dynamic retrieval are enabled. It is compared to the threshold to determine whether to trigger Google Search.

Type

float

class google.ai.generativelanguage_v1beta.types.SafetyFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Safety feedback for an entire request.

This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.

rating

Safety rating evaluated from content.

Type

google.ai.generativelanguage_v1beta.types.SafetyRating

setting

Safety settings applied to the request.

Type

google.ai.generativelanguage_v1beta.types.SafetySetting

class google.ai.generativelanguage_v1beta.types.SafetyRating(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Safety rating for a piece of content.

The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.

category

Required. The category for this rating.

Type

google.ai.generativelanguage_v1beta.types.HarmCategory

probability

Required. The probability of harm for this content.

Type

google.ai.generativelanguage_v1beta.types.SafetyRating.HarmProbability

blocked

Was this content blocked because of this rating?

Type

bool

class HarmProbability(value)[source]

Bases: proto.enums.Enum

The probability that a piece of content is harmful.

The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.

Values:
HARM_PROBABILITY_UNSPECIFIED (0):

Probability is unspecified.

NEGLIGIBLE (1):

Content has a negligible chance of being unsafe.

LOW (2):

Content has a low chance of being unsafe.

MEDIUM (3):

Content has a medium chance of being unsafe.

HIGH (4):

Content has a high chance of being unsafe.

class google.ai.generativelanguage_v1beta.types.SafetySetting(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Safety setting, affecting the safety-blocking behavior.

Passing a safety setting for a category changes the allowed probability that content is blocked.

category

Required. The category for this setting.

Type

google.ai.generativelanguage_v1beta.types.HarmCategory

threshold

Required. Controls the probability threshold at which harm is blocked.

Type

google.ai.generativelanguage_v1beta.types.SafetySetting.HarmBlockThreshold

class HarmBlockThreshold(value)[source]

Bases: proto.enums.Enum

Block at and beyond a specified harm probability.

Values:
HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):

Threshold is unspecified.

BLOCK_LOW_AND_ABOVE (1):

Content with NEGLIGIBLE will be allowed.

BLOCK_MEDIUM_AND_ABOVE (2):

Content with NEGLIGIBLE and LOW will be allowed.

BLOCK_ONLY_HIGH (3):

Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed.

BLOCK_NONE (4):

All content will be allowed.

OFF (5):

Turn off the safety filter.

class google.ai.generativelanguage_v1beta.types.Schema(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The Schema object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an OpenAPI 3.0 schema object.

type_

Required. Data type.

Type

google.ai.generativelanguage_v1beta.types.Type

format_

Optional. The format of the data. This is used only for primitive datatypes. Supported formats:

  • for NUMBER type: float, double

  • for INTEGER type: int32, int64

  • for STRING type: enum

Type

str

description

Optional. A brief description of the parameter. This could contain examples of use. Parameter description may be formatted as Markdown.

Type

str

nullable

Optional. Indicates if the value may be null.

Type

bool

enum

Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an enum Direction as: {type:STRING, format:enum, enum:[“EAST”, “NORTH”, “SOUTH”, “WEST”]}

Type

MutableSequence[str]

items

Optional. Schema of the elements of Type.ARRAY.

This field is a member of oneof _items.

Type

google.ai.generativelanguage_v1beta.types.Schema

max_items

Optional. Maximum number of the elements for Type.ARRAY.

Type

int

min_items

Optional. Minimum number of the elements for Type.ARRAY.

Type

int

properties

Optional. Properties of Type.OBJECT.

Type

MutableMapping[str, google.ai.generativelanguage_v1beta.types.Schema]

required

Optional. Required properties of Type.OBJECT.

Type

MutableSequence[str]

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.ai.generativelanguage_v1beta.types.SearchEntryPoint(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Google search entry point.

rendered_content

Optional. Web content snippet that can be embedded in a web page or an app webview.

Type

str

sdk_blob

Optional. Base64-encoded JSON representing an array of <search term, search URL> tuples.

Type

bytes

class google.ai.generativelanguage_v1beta.types.Segment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Segment of the content.

part_index

Output only. The index of a Part object within its parent Content object.

Type

int

start_index

Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.

Type

int

end_index

Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.

Type

int

text

Output only. The text corresponding to the segment from the response.

Type

str

class google.ai.generativelanguage_v1beta.types.SemanticRetrieverConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Configuration for retrieving grounding content from a Corpus or Document created using the Semantic Retriever API.

source

Required. Name of the resource for retrieval. Example: corpora/123 or corpora/123/documents/abc.

Type

str

query

Required. Query to use for matching Chunks in the given resource by similarity.

Type

google.ai.generativelanguage_v1beta.types.Content

metadata_filters

Optional. Filters for selecting Documents and/or Chunks from the resource.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]

max_chunks_count

Optional. Maximum number of relevant Chunks to retrieve.

This field is a member of oneof _max_chunks_count.

Type

int

minimum_relevance_score

Optional. Minimum relevance score for retrieved relevant Chunks.

This field is a member of oneof _minimum_relevance_score.

Type

float

class google.ai.generativelanguage_v1beta.types.StringList(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

User provided string values assigned to a single metadata key.

values

The string values of the metadata to store.

Type

MutableSequence[str]

class google.ai.generativelanguage_v1beta.types.TaskType(value)[source]

Bases: proto.enums.Enum

Type of task for which the embedding will be used.

Values:
TASK_TYPE_UNSPECIFIED (0):

Unset value, which will default to one of the other enum values.

RETRIEVAL_QUERY (1):

Specifies the given text is a query in a search/retrieval setting.

RETRIEVAL_DOCUMENT (2):

Specifies the given text is a document from the corpus being searched.

SEMANTIC_SIMILARITY (3):

Specifies the given text will be used for STS.

CLASSIFICATION (4):

Specifies that the given text will be classified.

CLUSTERING (5):

Specifies that the embeddings will be used for clustering.

QUESTION_ANSWERING (6):

Specifies that the given text will be used for question answering.

FACT_VERIFICATION (7):

Specifies that the given text will be used for fact verification.

class google.ai.generativelanguage_v1beta.types.TextCompletion(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Output text returned from a model.

output

Output only. The generated text returned from the model.

Type

str

safety_ratings

Ratings for the safety of a response.

There is at most one rating per category.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]

citation_metadata

Output only. Citation information for model-generated output in this TextCompletion.

This field may be populated with attribution information for any text included in the output.

This field is a member of oneof _citation_metadata.

Type

google.ai.generativelanguage_v1beta.types.CitationMetadata

class google.ai.generativelanguage_v1beta.types.TextPrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Text given to the model as a prompt.

The Model will use this TextPrompt to generate a text completion.

text

Required. The prompt text.

Type

str

class google.ai.generativelanguage_v1beta.types.Tool(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Tool details that the model may use to generate response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.

function_declarations

Optional. A list of FunctionDeclarations available to the model that can be used for function calling.

The model or system does not execute the function. Instead, the defined function may be returned as a [FunctionCall][google.ai.generativelanguage.v1beta.Part.function_call] with arguments to the client side for execution. The model may decide to call a subset of these functions by populating [FunctionCall][google.ai.generativelanguage.v1beta.Part.function_call] in the response. The next conversation turn may contain a [FunctionResponse][google.ai.generativelanguage.v1beta.Part.function_response] with the [Content.role][google.ai.generativelanguage.v1beta.Content.role] “function” as generation context for the next model turn.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.FunctionDeclaration]

google_search_retrieval

Optional. Retrieval tool that is powered by Google search.

Type

google.ai.generativelanguage_v1beta.types.GoogleSearchRetrieval

code_execution

Optional. Enables the model to execute code as part of generation.

Type

google.ai.generativelanguage_v1beta.types.CodeExecution

class google.ai.generativelanguage_v1beta.types.ToolConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The Tool configuration containing parameters for specifying Tool use in the request.

function_calling_config

Optional. Function calling config.

Type

google.ai.generativelanguage_v1beta.types.FunctionCallingConfig

class google.ai.generativelanguage_v1beta.types.TransferOwnershipRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to transfer the ownership of the tuned model.

name

Required. The resource name of the tuned model whose ownership will be transferred.

Format: tunedModels/my-model-id

Type

str

email_address

Required. The email address of the user to whom the tuned model is being transferred.

Type

str

class google.ai.generativelanguage_v1beta.types.TransferOwnershipResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from TransferOwnership.

class google.ai.generativelanguage_v1beta.types.TunedModel(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A fine-tuned model created using ModelService.CreateTunedModel.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

tuned_model_source

Optional. TunedModel to use as the starting point for training the new model.

This field is a member of oneof source_model.

Type

google.ai.generativelanguage_v1beta.types.TunedModelSource

base_model

Immutable. The name of the Model to tune. Example: models/gemini-1.5-flash-001

This field is a member of oneof source_model.

Type

str

name

Output only. The tuned model name. A unique name will be generated on create. Example: tunedModels/az2mb0bpw6i If display_name is set on create, the id portion of the name will be set by concatenating the words of the display_name with hyphens and adding a random portion for uniqueness.

Example:

  • display_name = Sentence Translator

  • name = tunedModels/sentence-translator-u3b7m

Type

str

display_name

Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.

Type

str

description

Optional. A short description of this model.

Type

str

temperature

Optional. Controls the randomness of the output.

Values can range over [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.

This value defaults to the one used by the base model when creating the model.

This field is a member of oneof _temperature.

Type

float

top_p

Optional. For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p.

This value defaults to the one used by the base model when creating the model.

This field is a member of oneof _top_p.

Type

float

top_k

Optional. For Top-k sampling.

Top-k sampling considers the set of the top_k most probable tokens.

This value defaults to the one used by the base model when creating the model.

This field is a member of oneof _top_k.

Type

int

state

Output only. The state of the tuned model.

Type

google.ai.generativelanguage_v1beta.types.TunedModel.State

create_time

Output only. The timestamp when this model was created.

Type

google.protobuf.timestamp_pb2.Timestamp

update_time

Output only. The timestamp when this model was updated.

Type

google.protobuf.timestamp_pb2.Timestamp

tuning_task

Required. The tuning task that creates the tuned model.

Type

google.ai.generativelanguage_v1beta.types.TuningTask

reader_project_numbers

Optional. List of project numbers that have read access to the tuned model.

Type

MutableSequence[int]

class State(value)[source]

Bases: proto.enums.Enum

The state of the tuned model.

Values:
STATE_UNSPECIFIED (0):

The default value. This value is unused.

CREATING (1):

The model is being created.

ACTIVE (2):

The model is ready to be used.

FAILED (3):

The model failed to be created.

class google.ai.generativelanguage_v1beta.types.TunedModelSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Tuned model as a source for training a new model.

tuned_model

Immutable. The name of the TunedModel to use as the starting point for training the new model. Example: tunedModels/my-tuned-model

Type

str

base_model

Output only. The name of the base Model this TunedModel was tuned from. Example: models/gemini-1.5-flash-001

Type

str

class google.ai.generativelanguage_v1beta.types.TuningExample(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A single example for tuning.

text_input

Optional. Text model input.

This field is a member of oneof model_input.

Type

str

output

Required. The expected model output.

Type

str

class google.ai.generativelanguage_v1beta.types.TuningExamples(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A set of tuning examples. Can be training or validation data.

examples

Required. The examples. Example input can be for text or discuss models, but all examples in a set must be of the same type.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.TuningExample]

class google.ai.generativelanguage_v1beta.types.TuningSnapshot(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Record for a single tuning step.

step

Output only. The tuning step.

Type

int

epoch

Output only. The epoch this step was part of.

Type

int

mean_loss

Output only. The mean loss of the training examples for this step.

Type

float

compute_time

Output only. The timestamp when this metric was computed.

Type

google.protobuf.timestamp_pb2.Timestamp

class google.ai.generativelanguage_v1beta.types.TuningTask(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Tuning tasks that create tuned models.

start_time

Output only. The timestamp when tuning this model started.

Type

google.protobuf.timestamp_pb2.Timestamp

complete_time

Output only. The timestamp when tuning this model completed.

Type

google.protobuf.timestamp_pb2.Timestamp

snapshots

Output only. Metrics collected during tuning.

Type

MutableSequence[google.ai.generativelanguage_v1beta.types.TuningSnapshot]

training_data

Required. Input only. Immutable. The model training data.

Type

google.ai.generativelanguage_v1beta.types.Dataset

hyperparameters

Immutable. Hyperparameters controlling the tuning process. If not provided, default values will be used.

Type

google.ai.generativelanguage_v1beta.types.Hyperparameters

class google.ai.generativelanguage_v1beta.types.Type(value)[source]

Bases: proto.enums.Enum

Type contains the list of OpenAPI data types as defined by https://spec.openapis.org/oas/v3.0.3#data-types

Values:
TYPE_UNSPECIFIED (0):

Not specified, should not be used.

STRING (1):

String type.

NUMBER (2):

Number type.

INTEGER (3):

Integer type.

BOOLEAN (4):

Boolean type.

ARRAY (5):

Array type.

OBJECT (6):

Object type.

class google.ai.generativelanguage_v1beta.types.UpdateCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to update CachedContent.

cached_content

Required. The content cache entry to update.

Type

google.ai.generativelanguage_v1beta.types.CachedContent

update_mask

The list of fields to update.

Type

google.protobuf.field_mask_pb2.FieldMask

class google.ai.generativelanguage_v1beta.types.UpdateChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to update a Chunk.

chunk

Required. The Chunk to update.

Type

google.ai.generativelanguage_v1beta.types.Chunk

update_mask

Required. The list of fields to update. Currently, this only supports updating custom_metadata and data.

Type

google.protobuf.field_mask_pb2.FieldMask

class google.ai.generativelanguage_v1beta.types.UpdateCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to update a Corpus.

corpus

Required. The Corpus to update.

Type

google.ai.generativelanguage_v1beta.types.Corpus

update_mask

Required. The list of fields to update. Currently, this only supports updating display_name.

Type

google.protobuf.field_mask_pb2.FieldMask

class google.ai.generativelanguage_v1beta.types.UpdateDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to update a Document.

document

Required. The Document to update.

Type

google.ai.generativelanguage_v1beta.types.Document

update_mask

Required. The list of fields to update. Currently, this only supports updating display_name and custom_metadata.

Type

google.protobuf.field_mask_pb2.FieldMask

class google.ai.generativelanguage_v1beta.types.UpdatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to update the Permission.

permission

Required. The permission to update.

The permission’s name field is used to identify the permission to update.

Type

google.ai.generativelanguage_v1beta.types.Permission

update_mask

Required. The list of fields to update. Accepted fields:

  • role (the Permission.role field)

Type

google.protobuf.field_mask_pb2.FieldMask

class google.ai.generativelanguage_v1beta.types.UpdateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to update a TunedModel.

tuned_model

Required. The tuned model to update.

Type

google.ai.generativelanguage_v1beta.types.TunedModel

update_mask

Required. The list of fields to update.

Type

google.protobuf.field_mask_pb2.FieldMask

class google.ai.generativelanguage_v1beta.types.VideoMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata for a video File.

video_duration

Duration of the video.

Type

google.protobuf.duration_pb2.Duration