Types for Google AI Generative Language v1beta API¶
- class google.ai.generativelanguage_v1beta.types.AttributionSourceId(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifier for the source contributing to this attribution.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- semantic_retriever_chunk¶
Identifier for a `Chunk` fetched via Semantic Retriever.
This field is a member of oneof `source`.
- class GroundingPassageId(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifier for a part within a `GroundingPassage`.
- passage_id¶
Output only. ID of the passage matching the `GenerateAnswerRequest`’s `GroundingPassage.id`.
- Type
str
- class SemanticRetrieverChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifier for a `Chunk` retrieved via the Semantic Retriever specified in the `GenerateAnswerRequest` using `SemanticRetrieverConfig`.
- source¶
Output only. Name of the source matching the request’s `SemanticRetrieverConfig.source`. Example: `corpora/123` or `corpora/123/documents/abc`
- Type
str
- class google.ai.generativelanguage_v1beta.types.BatchCreateChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to batch create `Chunk`s.
- parent¶
Optional. The name of the `Document` where this batch of `Chunk`s will be created. The parent field in every `CreateChunkRequest` must match this value. Example: `corpora/my-corpus-123/documents/the-doc-abc`
- Type
str
- requests¶
Required. The request messages specifying the `Chunk`s to create. A maximum of 100 `Chunk`s can be created in a batch.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CreateChunkRequest]
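Since at most 100 `Chunk`s can be created per batch, a client with a larger list of per-chunk requests typically splits it into groups before issuing `BatchCreateChunks` calls. A minimal sketch of that batching step in plain Python, independent of the client library:

```python
def batch_requests(requests, batch_size=100):
    """Split a list of per-chunk requests into batches of at most batch_size."""
    return [requests[i:i + batch_size] for i in range(0, len(requests), batch_size)]

# 250 hypothetical chunk-create requests become batches of 100, 100, and 50.
batches = batch_requests(list(range(250)))
assert [len(b) for b in batches] == [100, 100, 50]
```

Each resulting batch would then go into one `BatchCreateChunksRequest`, all sharing the same `parent`.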
- class google.ai.generativelanguage_v1beta.types.BatchCreateChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from `BatchCreateChunks` containing a list of created `Chunk`s.
- chunks¶
`Chunk`s created.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]
- class google.ai.generativelanguage_v1beta.types.BatchDeleteChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to batch delete `Chunk`s.
- parent¶
Optional. The name of the `Document` containing the `Chunk`s to delete. The parent field in every `DeleteChunkRequest` must match this value. Example: `corpora/my-corpus-123/documents/the-doc-abc`
- Type
str
- requests¶
Required. The request messages specifying the `Chunk`s to delete.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.DeleteChunkRequest]
- class google.ai.generativelanguage_v1beta.types.BatchEmbedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Batch request to get embeddings from the model for a list of prompts.
- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the `ListModels` method.
Format: `models/{model}`
- Type
str
- requests¶
Required. Embed requests for the batch. The model in each of these requests must match the model specified in `BatchEmbedContentsRequest.model`.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.EmbedContentRequest]
- class google.ai.generativelanguage_v1beta.types.BatchEmbedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response to a `BatchEmbedContentsRequest`.
- embeddings¶
Output only. The embeddings for each request, in the same order as provided in the batch request.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ContentEmbedding]
- class google.ai.generativelanguage_v1beta.types.BatchEmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Batch request to get a text embedding from the model.
- model¶
Required. The name of the `Model` to use for generating the embedding. Examples: `models/embedding-gecko-001`
- Type
str
- texts¶
Optional. The free-form input texts that the model will turn into an embedding. The current limit is 100 texts, over which an error will be thrown.
- Type
MutableSequence[str]
- requests¶
Optional. Embed requests for the batch. Only one of `texts` or `requests` can be set.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.EmbedTextRequest]
- class google.ai.generativelanguage_v1beta.types.BatchEmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response to an `EmbedTextRequest`.
- embeddings¶
Output only. The embeddings generated from the input text.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Embedding]
- class google.ai.generativelanguage_v1beta.types.BatchUpdateChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to batch update `Chunk`s.
- parent¶
Optional. The name of the `Document` containing the `Chunk`s to update. The parent field in every `UpdateChunkRequest` must match this value. Example: `corpora/my-corpus-123/documents/the-doc-abc`
- Type
str
- requests¶
Required. The request messages specifying the `Chunk`s to update. A maximum of 100 `Chunk`s can be updated in a batch.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.UpdateChunkRequest]
- class google.ai.generativelanguage_v1beta.types.BatchUpdateChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from `BatchUpdateChunks` containing a list of updated `Chunk`s.
- chunks¶
`Chunk`s updated.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]
- class google.ai.generativelanguage_v1beta.types.Blob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Raw media bytes.
Text should not be sent as raw bytes; use the `text` field.
- mime_type¶
The IANA standard MIME type of the source data. Examples: `image/png`, `image/jpeg`. If an unsupported MIME type is provided, an error will be returned. For a complete list of supported types, see Supported file formats.
- Type
str
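When building a `Blob` from a local file, the MIME type can often be inferred from the filename with the standard library. A sketch, where `blob_dict` is an illustrative helper producing a Blob-shaped dict rather than part of the API:

```python
import mimetypes


def blob_dict(path: str, data: bytes) -> dict:
    """Build a Blob-shaped dict, inferring the IANA MIME type from the filename."""
    mime_type, _ = mimetypes.guess_type(path)
    if mime_type is None:
        raise ValueError(f"Cannot infer MIME type for {path!r}")
    return {"mime_type": mime_type, "data": data}


print(blob_dict("photo.png", b"\x89PNG")["mime_type"])  # image/png
```

The service, not this helper, decides whether a given MIME type is actually supported.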
- class google.ai.generativelanguage_v1beta.types.CachedContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Content that has been preprocessed and can be used in subsequent requests to GenerativeService.
Cached content can only be used with the model it was created for.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- expire_time¶
Timestamp in UTC of when this resource is considered expired. This is always provided on output, regardless of what was sent on input.
This field is a member of oneof `expiration`.
- name¶
Optional. Identifier. The resource name referring to the cached content. Format: `cachedContents/{id}`
This field is a member of oneof `_name`.
- Type
str
- display_name¶
Optional. Immutable. The user-generated meaningful display name of the cached content. Maximum 128 Unicode characters.
This field is a member of oneof `_display_name`.
- Type
str
- model¶
Required. Immutable. The name of the `Model` to use for cached content. Format: `models/{model}`
This field is a member of oneof `_model`.
- Type
str
- system_instruction¶
Optional. Input only. Immutable. Developer-set system instruction. Currently text only.
This field is a member of oneof `_system_instruction`.
- contents¶
Optional. Input only. Immutable. The content to cache.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
- tools¶
Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Tool]
- tool_config¶
Optional. Input only. Immutable. Tool config. This config is shared for all tools.
This field is a member of oneof `_tool_config`.
- create_time¶
Output only. Creation time of the cache entry.
- update_time¶
Output only. When the cache entry was last updated in UTC time.
- usage_metadata¶
Output only. Metadata on the usage of the cached content.
- class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata on the usage of the cached content.
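Because `expire_time` is an absolute UTC timestamp, a caller that thinks in relative lifetimes has to convert before creating the cache entry. A stdlib sketch of that conversion (the helper name is illustrative, not part of the API):

```python
from datetime import datetime, timedelta, timezone


def expire_time_from_ttl(ttl_seconds: int) -> datetime:
    """Convert a relative TTL into the absolute UTC expire_time the API expects."""
    return datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds)


# A cache entry meant to live one hour from now:
expiry = expire_time_from_ttl(3600)
```

The resulting timezone-aware datetime would be assigned to the `expire_time` field of a `CachedContent` message.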
- class google.ai.generativelanguage_v1beta.types.Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A response candidate generated from the model.
- index¶
Output only. Index of the candidate in the list of response candidates.
This field is a member of oneof `_index`.
- Type
int
- content¶
Output only. Generated content returned from the model.
- finish_reason¶
Optional. Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.
- safety_ratings¶
List of ratings for the safety of a response candidate. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]
- citation_metadata¶
Output only. Citation information for the model-generated candidate.
This field may be populated with recitation information for any text included in the `content`. These are passages that are “recited” from copyrighted material in the foundational LLM’s training data.
- grounding_attributions¶
Output only. Attribution information for sources that contributed to a grounded answer.
This field is populated for `GenerateAnswer` calls.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingAttribution]
- grounding_metadata¶
Output only. Grounding metadata for the candidate.
This field is populated for `GenerateContent` calls.
- logprobs_result¶
Output only. Log-likelihood scores for the response tokens and top tokens.
- class FinishReason(value)[source]¶
Bases:
proto.enums.Enum
Defines the reason why the model stopped generating tokens.
- Values:
- FINISH_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- STOP (1):
Natural stop point of the model or provided stop sequence.
- MAX_TOKENS (2):
The maximum number of tokens as specified in the request was reached.
- SAFETY (3):
The response candidate content was flagged for safety reasons.
- RECITATION (4):
The response candidate content was flagged for recitation reasons.
- LANGUAGE (6):
The response candidate content was flagged for using an unsupported language.
- OTHER (5):
Unknown reason.
- BLOCKLIST (7):
Token generation stopped because the content contains forbidden terms.
- PROHIBITED_CONTENT (8):
Token generation stopped for potentially containing prohibited content.
- SPII (9):
Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).
- MALFORMED_FUNCTION_CALL (10):
The function call generated by the model is invalid.
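A caller usually branches on whether the candidate finished naturally (`STOP`, `MAX_TOKENS`) or was cut off by a filter. A stdlib sketch of that check, with the numeric values copied from the enum above (the `finished_normally` helper is illustrative, not part of the library):

```python
from enum import IntEnum


class FinishReason(IntEnum):
    # Values mirror the proto enum documented above.
    FINISH_REASON_UNSPECIFIED = 0
    STOP = 1
    MAX_TOKENS = 2
    SAFETY = 3
    RECITATION = 4
    OTHER = 5
    LANGUAGE = 6
    BLOCKLIST = 7
    PROHIBITED_CONTENT = 8
    SPII = 9
    MALFORMED_FUNCTION_CALL = 10


def finished_normally(reason: FinishReason) -> bool:
    """True when generation ended at a stop point rather than being filtered."""
    return reason in (FinishReason.STOP, FinishReason.MAX_TOKENS)


assert finished_normally(FinishReason.STOP)
assert not finished_normally(FinishReason.SAFETY)
```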
- class google.ai.generativelanguage_v1beta.types.Chunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A `Chunk` is a subpart of a `Document` that is treated as an independent unit for the purposes of vector representation and storage. A `Corpus` can have a maximum of 1 million `Chunk`s.
- name¶
Immutable. Identifier. The `Chunk` resource name. The ID (the name excluding the `corpora/*/documents/*/chunks/` prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a random 12-character unique ID will be generated. Example: `corpora/{corpus_id}/documents/{document_id}/chunks/123a456b789c`
- Type
str
- data¶
Required. The content for the `Chunk`, such as the text string. The maximum number of tokens per chunk is 2043.
- custom_metadata¶
Optional. User-provided custom metadata stored as key-value pairs. The maximum number of `CustomMetadata` per chunk is 20.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CustomMetadata]
- create_time¶
Output only. The Timestamp of when the `Chunk` was created.
- update_time¶
Output only. The Timestamp of when the `Chunk` was last updated.
- state¶
Output only. Current state of the `Chunk`.
- class State(value)[source]¶
Bases:
proto.enums.Enum
States for the lifecycle of a `Chunk`.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is used if the state is omitted.
- STATE_PENDING_PROCESSING (1):
`Chunk` is being processed (embedding and vector storage).
- STATE_ACTIVE (2):
`Chunk` is processed and available for querying.
- STATE_FAILED (10):
`Chunk` failed processing.
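The resource-ID rules stated for `Chunk` names (up to 40 characters, lowercase alphanumeric or dashes, no leading or trailing dash) can be validated client-side before a create call. A regex sketch, with the pattern inferred from the prose rather than taken from the API:

```python
import re

# Up to 40 chars, lowercase alphanumeric or dashes, no leading/trailing dash.
_CHUNK_ID = re.compile(r"[a-z0-9]([a-z0-9-]{0,38}[a-z0-9])?")


def is_valid_chunk_id(chunk_id: str) -> bool:
    """Check a chunk ID (the segment after corpora/*/documents/*/chunks/)."""
    return bool(_CHUNK_ID.fullmatch(chunk_id))


assert is_valid_chunk_id("123a456b789c")
assert not is_valid_chunk_id("-starts-with-dash")
```

The same shape of rule applies to `Corpus`, `Document`, and `File` IDs, modulo their different prefixes.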
- class google.ai.generativelanguage_v1beta.types.ChunkData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Extracted data that represents the `Chunk` content.
- class google.ai.generativelanguage_v1beta.types.CitationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A collection of source attributions for a piece of content.
- citation_sources¶
Citations to sources for a specific response.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CitationSource]
- class google.ai.generativelanguage_v1beta.types.CitationSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A citation to a source for a portion of a specific response.
- start_index¶
Optional. Start of the segment of the response that is attributed to this source.
The index indicates the start of the segment, measured in bytes.
This field is a member of oneof `_start_index`.
- Type
int
- end_index¶
Optional. End of the attributed segment, exclusive.
This field is a member of oneof `_end_index`.
- Type
int
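Because `start_index` and `end_index` are measured in bytes, slicing the cited segment out of a Python string requires encoding to UTF-8 first; character indexing gives wrong answers as soon as multi-byte characters appear. A sketch:

```python
def cited_segment(text: str, start_index: int, end_index: int) -> str:
    """Extract the attributed segment; indices are byte offsets into UTF-8 text."""
    return text.encode("utf-8")[start_index:end_index].decode("utf-8")


# Multi-byte characters make byte offsets differ from character offsets:
s = "héllo world"
# 'é' is 2 bytes in UTF-8, so "world" starts at byte 7, not character 6.
assert cited_segment(s, 7, 12) == "world"
```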
- class google.ai.generativelanguage_v1beta.types.CodeExecution(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Tool that executes code generated by the model, and automatically returns the result to the model.
See also `ExecutableCode` and `CodeExecutionResult`, which are only generated when using this tool.
- class google.ai.generativelanguage_v1beta.types.CodeExecutionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Result of executing the `ExecutableCode`.
Only generated when using `CodeExecution`, and always follows a `part` containing the `ExecutableCode`.
- outcome¶
Required. Outcome of the code execution.
- output¶
Optional. Contains stdout when code execution is successful, stderr or another description otherwise.
- Type
str
- class Outcome(value)[source]¶
Bases:
proto.enums.Enum
Enumeration of possible outcomes of the code execution.
- Values:
- OUTCOME_UNSPECIFIED (0):
Unspecified status. This value should not be used.
- OUTCOME_OK (1):
Code execution completed successfully.
- OUTCOME_FAILED (2):
Code execution finished but with a failure. `stderr` should contain the reason.
- OUTCOME_DEADLINE_EXCEEDED (3):
Code execution ran for too long and was cancelled. There may or may not be a partial output present.
- class google.ai.generativelanguage_v1beta.types.Condition(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Filter condition applicable to a single key.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- string_value¶
The string value to filter the metadata on.
This field is a member of oneof `value`.
- Type
str
- numeric_value¶
The numeric value to filter the metadata on.
This field is a member of oneof `value`.
- Type
float
- operation¶
Required. Operator applied to the given key-value pair to trigger the condition.
- class Operator(value)[source]¶
Bases:
proto.enums.Enum
Defines the valid operators that can be applied to a key-value pair.
- Values:
- OPERATOR_UNSPECIFIED (0):
The default value. This value is unused.
- LESS (1):
Supported by numeric.
- LESS_EQUAL (2):
Supported by numeric.
- EQUAL (3):
Supported by numeric & string.
- GREATER_EQUAL (4):
Supported by numeric.
- GREATER (5):
Supported by numeric.
- NOT_EQUAL (6):
Supported by numeric & string.
- INCLUDES (7):
Supported by string only when the `CustomMetadata` value type for the given key has a `string_list_value`.
- EXCLUDES (8):
Supported by string only when the `CustomMetadata` value type for the given key has a `string_list_value`.
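The semantics of these operators can be sketched locally with plain Python. This is a simplified model covering only bare numeric/string values and string lists, not the full `CustomMetadata` message:

```python
import operator

_COMPARISON_OPS = {
    "LESS": operator.lt,
    "LESS_EQUAL": operator.le,
    "EQUAL": operator.eq,
    "GREATER_EQUAL": operator.ge,
    "GREATER": operator.gt,
    "NOT_EQUAL": operator.ne,
}


def matches(value, op: string if False else str, target) -> bool:
    """Evaluate one filter condition against a metadata value."""
    if op == "INCLUDES":   # membership in a string_list_value
        return target in value
    if op == "EXCLUDES":
        return target not in value
    return _COMPARISON_OPS[op](value, target)


assert matches(3, "LESS", 5)
assert matches(["retrieval", "docs"], "INCLUDES", "docs")
```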
- class google.ai.generativelanguage_v1beta.types.Content(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The base structured datatype containing multi-part content of a message.
A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
- parts¶
Ordered `Parts` that constitute a single message. Parts may have different MIME types.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Part]
- class google.ai.generativelanguage_v1beta.types.ContentEmbedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A list of floats representing an embedding.
- class google.ai.generativelanguage_v1beta.types.ContentFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.
- reason¶
The reason content was blocked during request processing.
- message¶
A string that describes the filtering behavior in more detail.
This field is a member of oneof `_message`.
- Type
str
- class BlockedReason(value)[source]¶
Bases:
proto.enums.Enum
A list of reasons why content may have been blocked.
- Values:
- BLOCKED_REASON_UNSPECIFIED (0):
A blocked reason was not specified.
- SAFETY (1):
Content was blocked by safety settings.
- OTHER (2):
Content was blocked, but the reason is uncategorized.
- class google.ai.generativelanguage_v1beta.types.Corpus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A `Corpus` is a collection of `Document`s. A project can create up to 5 corpora.
- name¶
Immutable. Identifier. The `Corpus` resource name. The ID (the name excluding the “corpora/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be derived from `display_name` along with a 12-character random suffix. Example: `corpora/my-awesome-corpora-123a456b789c`
- Type
str
- display_name¶
Optional. The human-readable display name for the `Corpus`. The display name must be no more than 512 characters in length, including spaces. Example: “Docs on Semantic Retriever”.
- Type
str
- create_time¶
Output only. The Timestamp of when the `Corpus` was created.
- update_time¶
Output only. The Timestamp of when the `Corpus` was last updated.
- class google.ai.generativelanguage_v1beta.types.CountMessageTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Counts the number of tokens in the `prompt` sent to a model.
Models may tokenize text differently, so each model may return a different `token_count`.
- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the `ListModels` method.
Format: `models/{model}`
- Type
str
- prompt¶
Required. The prompt, whose token count is to be returned.
- class google.ai.generativelanguage_v1beta.types.CountMessageTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A response from `CountMessageTokens`.
It returns the model’s `token_count` for the `prompt`.
- class google.ai.generativelanguage_v1beta.types.CountTextTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Counts the number of tokens in the `prompt` sent to a model.
Models may tokenize text differently, so each model may return a different `token_count`.
- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the `ListModels` method.
Format: `models/{model}`
- Type
str
- prompt¶
Required. The free-form input text given to the model as a prompt.
- class google.ai.generativelanguage_v1beta.types.CountTextTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A response from `CountTextTokens`.
It returns the model’s `token_count` for the `prompt`.
- class google.ai.generativelanguage_v1beta.types.CountTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Counts the number of tokens in the `prompt` sent to a model.
Models may tokenize text differently, so each model may return a different `token_count`.
- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the `ListModels` method.
Format: `models/{model}`
- Type
str
- contents¶
Optional. The input given to the model as a prompt. This field is ignored when `generate_content_request` is set.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
- generate_content_request¶
Optional. The overall input given to the `Model`. This includes the prompt as well as other model steering information like system instructions and/or function declarations for function calling. `Model`s/`Content`s and `generate_content_request`s are mutually exclusive. You can either send `Model` + `Content`s or a `generate_content_request`, but never both.
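That mutual exclusion is easy to enforce client-side before sending anything. A sketch over plain dicts, where the request shape and the model name are illustrative rather than the library's actual types:

```python
def validate_count_tokens_request(request: dict) -> None:
    """Reject requests that set both mutually exclusive input fields."""
    if request.get("contents") and request.get("generate_content_request"):
        raise ValueError(
            "Set either 'contents' or 'generate_content_request', never both."
        )


# Fine: only 'contents' is set.
validate_count_tokens_request(
    {"model": "models/some-model", "contents": [{"parts": [{"text": "hi"}]}]}
)
```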
- class google.ai.generativelanguage_v1beta.types.CountTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A response from `CountTokens`.
It returns the model’s `token_count` for the `prompt`.
- total_tokens¶
The number of tokens that the `Model` tokenizes the `prompt` into. Always non-negative.
- Type
int
- class google.ai.generativelanguage_v1beta.types.CreateCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to create CachedContent.
- cached_content¶
Required. The cached content to create.
- class google.ai.generativelanguage_v1beta.types.CreateChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to create a `Chunk`.
- parent¶
Required. The name of the `Document` where this `Chunk` will be created. Example: `corpora/my-corpus-123/documents/the-doc-abc`
- Type
str
- chunk¶
Required. The `Chunk` to create.
- class google.ai.generativelanguage_v1beta.types.CreateCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to create a `Corpus`.
- corpus¶
Required. The `Corpus` to create.
- class google.ai.generativelanguage_v1beta.types.CreateDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to create a `Document`.
- parent¶
Required. The name of the `Corpus` where this `Document` will be created. Example: `corpora/my-corpus-123`
- Type
str
- document¶
Required. The `Document` to create.
- class google.ai.generativelanguage_v1beta.types.CreateFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for `CreateFile`.
- file¶
Optional. Metadata for the file to create.
- class google.ai.generativelanguage_v1beta.types.CreateFileResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response for `CreateFile`.
- file¶
Metadata for the created file.
- class google.ai.generativelanguage_v1beta.types.CreatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to create a `Permission`.
- parent¶
Required. The parent resource of the `Permission`. Formats: `tunedModels/{tuned_model}`, `corpora/{corpus}`
- Type
str
- permission¶
Required. The permission to create.
- class google.ai.generativelanguage_v1beta.types.CreateTunedModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata about the state and progress of creating a tuned model, returned from the long-running operation.
- snapshots¶
Metrics collected during tuning.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TuningSnapshot]
- class google.ai.generativelanguage_v1beta.types.CreateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to create a TunedModel.
- tuned_model_id¶
Optional. The unique ID for the tuned model, if specified. This value should be up to 40 characters; the first character must be a letter, and the last may be a letter or a number. The ID must match the regular expression: `[a-z]([a-z0-9-]{0,38}[a-z0-9])?`.
This field is a member of oneof `_tuned_model_id`.
- Type
str
- tuned_model¶
Required. The tuned model to create.
- class google.ai.generativelanguage_v1beta.types.CustomMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
User provided metadata stored as key-value pairs.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- string_value¶
The string value of the metadata to store.
This field is a member of oneof `value`.
- Type
str
- string_list_value¶
The StringList value of the metadata to store.
This field is a member of oneof `value`.
- class google.ai.generativelanguage_v1beta.types.Dataset(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataset for training or validation.
- class google.ai.generativelanguage_v1beta.types.DeleteCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to delete CachedContent.
- class google.ai.generativelanguage_v1beta.types.DeleteChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to delete a `Chunk`.
- class google.ai.generativelanguage_v1beta.types.DeleteCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to delete a `Corpus`.
- class google.ai.generativelanguage_v1beta.types.DeleteDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to delete a `Document`.
- name¶
Required. The resource name of the `Document` to delete. Example: `corpora/my-corpus-123/documents/the-doc-abc`
- Type
str
- class google.ai.generativelanguage_v1beta.types.DeleteFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for `DeleteFile`.
- class google.ai.generativelanguage_v1beta.types.DeletePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to delete the `Permission`.
- class google.ai.generativelanguage_v1beta.types.DeleteTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to delete a TunedModel.
- class google.ai.generativelanguage_v1beta.types.Document(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A `Document` is a collection of `Chunk`s. A `Corpus` can have a maximum of 10,000 `Document`s.
- name¶
Immutable. Identifier. The `Document` resource name. The ID (the name excluding the `corpora/*/documents/` prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be derived from `display_name` along with a 12-character random suffix. Example: `corpora/{corpus_id}/documents/my-awesome-doc-123a456b789c`
- Type
str
- display_name¶
Optional. The human-readable display name for the `Document`. The display name must be no more than 512 characters in length, including spaces. Example: “Semantic Retriever Documentation”.
- Type
str
- custom_metadata¶
Optional. User-provided custom metadata stored as key-value pairs used for querying. A `Document` can have a maximum of 20 `CustomMetadata`.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CustomMetadata]
- update_time¶
Output only. The Timestamp of when the `Document` was last updated.
- create_time¶
Output only. The Timestamp of when the `Document` was created.
- class google.ai.generativelanguage_v1beta.types.DynamicRetrievalConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes the options to customize dynamic retrieval.
- mode¶
The mode of the predictor to be used in dynamic retrieval.
- dynamic_threshold¶
The threshold to be used in dynamic retrieval. If not set, a system default value is used.
This field is a member of oneof `_dynamic_threshold`.
- Type
float
- class Mode(value)[source]¶
Bases:
proto.enums.Enum
The mode of the predictor to be used in dynamic retrieval.
- Values:
- MODE_UNSPECIFIED (0):
Always trigger retrieval.
- MODE_DYNAMIC (1):
Run retrieval only when the system decides it is necessary.
- class google.ai.generativelanguage_v1beta.types.EmbedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request containing the `Content` for the model to embed.
- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the `ListModels` method.
Format: `models/{model}`
- Type
str
- content¶
Required. The content to embed. Only the `parts.text` fields will be counted.
- task_type¶
Optional. Optional task type for which the embeddings will be used. Can only be set for `models/embedding-001`.
This field is a member of oneof `_task_type`.
- title¶
Optional. An optional title for the text. Only applicable when TaskType is `RETRIEVAL_DOCUMENT`.
Note: Specifying a `title` for `RETRIEVAL_DOCUMENT` provides better quality embeddings for retrieval.
This field is a member of oneof `_title`.
- Type
str
- output_dimensionality¶
Optional. Optional reduced dimension for the output embedding. If set, excessive values in the output embedding are truncated from the end. Supported by newer models since 2024 only. You cannot set this value if using the earlier model (`models/embedding-001`).
This field is a member of oneof `_output_dimensionality`.
- Type
int
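The effect of `output_dimensionality` is to drop trailing values from the full embedding, which can be sketched locally (a simplified model of the documented truncation, not the service's actual implementation):

```python
def truncate_embedding(values: list, output_dimensionality: int) -> list:
    """Keep only the first output_dimensionality values, dropping the tail."""
    return values[:output_dimensionality]


assert truncate_embedding([0.1, 0.2, 0.3, 0.4], 2) == [0.1, 0.2]
```

Note that plain truncation changes the vector's norm, so downstream code comparing truncated embeddings typically re-normalizes them.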
- class google.ai.generativelanguage_v1beta.types.EmbedContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response to an `EmbedContentRequest`.
- embedding¶
Output only. The embedding generated from the input content.
- class google.ai.generativelanguage_v1beta.types.EmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to get a text embedding from the model.
- class google.ai.generativelanguage_v1beta.types.EmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response to an `EmbedTextRequest`.
- class google.ai.generativelanguage_v1beta.types.Embedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A list of floats representing the embedding.
- class google.ai.generativelanguage_v1beta.types.Example(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
An input/output example used to instruct the Model.
It demonstrates how the model should respond or format its response.
- input¶
Required. An example of an input `Message` from the user.
- output¶
Required. An example of what the model should output given the input.
- class google.ai.generativelanguage_v1beta.types.ExecutableCode(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Code generated by the model that is meant to be executed, and the result returned to the model.
Only generated when using the
CodeExecution
tool, in which the code will be automatically executed, and a correspondingCodeExecutionResult
will also be generated.- language¶
Required. Programming language of the
code
.
- class Language(value)[source]¶
Bases:
proto.enums.Enum
Supported programming languages for the generated code.
- Values:
- LANGUAGE_UNSPECIFIED (0):
Unspecified language. This value should not be used.
- PYTHON (1):
Python >= 3.10, with numpy and simpy available.
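When the `CodeExecution` tool is enabled, a candidate's parts may interleave plain text, `ExecutableCode`, and the corresponding `CodeExecutionResult`. A minimal sketch of separating those parts, using plain dicts that mirror the proto field names (the part contents are hypothetical):

```python
# Illustrative sketch: splitting a candidate's parts into generated code
# and execution results. Part contents are hypothetical examples.
parts = [
    {"text": "I'll compute this with Python."},
    {"executable_code": {"language": "PYTHON", "code": "print(2 + 2)"}},
    {"code_execution_result": {"outcome": "OUTCOME_OK", "output": "4\n"}},
]

generated_code = [p["executable_code"]["code"] for p in parts if "executable_code" in p]
results = [p["code_execution_result"]["output"] for p in parts if "code_execution_result" in p]
```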
- class google.ai.generativelanguage_v1beta.types.File(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A file uploaded to the API.
- name¶
Immutable. Identifier. The
File
resource name. The ID (name excluding the “files/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be generated. Example:files/123-456
- Type
- display_name¶
Optional. The human-readable display name for the
File
. The display name must be no more than 512 characters in length, including spaces. Example: “Welcome Image”.- Type
- create_time¶
Output only. The timestamp of when the
File
was created.
- update_time¶
Output only. The timestamp of when the
File
was last updated.
- expiration_time¶
Output only. The timestamp of when the
File
will be deleted. Only set if theFile
is scheduled to expire.
- state¶
Output only. Processing state of the File.
- error¶
Output only. Error status if File processing failed.
- Type
google.rpc.status_pb2.Status
- class State(value)[source]¶
Bases:
proto.enums.Enum
States for the lifecycle of a File.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is used if the state is omitted.
- PROCESSING (1):
File is being processed and cannot be used for inference yet.
- ACTIVE (2):
File is processed and available for inference.
- FAILED (10):
File failed processing.
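The lifecycle above suggests a simple client-side pattern: poll until the file leaves `PROCESSING`, then either use it (`ACTIVE`) or inspect `error` (`FAILED`). A minimal sketch, where `fetch_state` stands in for a real get-file call and the state sequence is simulated:

```python
# Illustrative sketch of the File lifecycle: poll until the file is no
# longer PROCESSING. `fetch_state` is a hypothetical stand-in for a real
# files.get call; real code would also sleep between polls.
def wait_until_processed(fetch_state):
    while True:
        state = fetch_state()
        if state != "PROCESSING":
            return state

# Simulated state sequence for the sketch.
states = iter(["PROCESSING", "PROCESSING", "ACTIVE"])
final = wait_until_processed(lambda: next(states))
```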
- class google.ai.generativelanguage_v1beta.types.FileData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
URI based data.
- class google.ai.generativelanguage_v1beta.types.FunctionCall(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A predicted
FunctionCall
returned from the model that contains a string representing theFunctionDeclaration.name
with the arguments and their values.- name¶
Required. The name of the function to call. Must contain only characters a-z, A-Z, 0-9, underscores, and dashes, with a maximum length of 63.
- Type
- class google.ai.generativelanguage_v1beta.types.FunctionCallingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configuration for specifying function calling behavior.
- mode¶
Optional. Specifies the mode in which function calling should execute. If unspecified, the default value will be set to AUTO.
- allowed_function_names¶
Optional. A set of function names that, when provided, limits the functions the model will call.
This should only be set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
- Type
MutableSequence[str]
- class Mode(value)[source]¶
Bases:
proto.enums.Enum
Defines the execution behavior for function calling by specifying the execution mode.
- Values:
- MODE_UNSPECIFIED (0):
Unspecified function calling mode. This value should not be used.
- AUTO (1):
Default model behavior, model decides to predict either a function call or a natural language response.
- ANY (2):
Model is constrained to always predicting a function call only. If “allowed_function_names” are set, the predicted function call will be limited to any one of “allowed_function_names”, else the predicted function call will be any one of the provided “function_declarations”.
- NONE (3):
Model will not predict any function call. Model behavior is same as when not passing any function declarations.
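The Mode semantics above can be summarized as: which declared functions the model may predict under each mode. A sketch of that decision logic in plain Python (the function names are hypothetical):

```python
# Illustrative sketch of FunctionCallingConfig.Mode semantics: the set of
# functions the model may predict a call to. Function names are hypothetical.
def callable_functions(mode, declared, allowed=None):
    if mode == "NONE":
        return []                          # model never predicts a function call
    if mode == "ANY" and allowed:
        # allowed_function_names narrows the choice; only valid with ANY
        return [f for f in declared if f in allowed]
    return list(declared)                  # AUTO or unrestricted ANY

declared = ["get_weather", "get_time"]
```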
- class google.ai.generativelanguage_v1beta.types.FunctionDeclaration(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Structured representation of a function declaration as defined by the OpenAPI 3.0.3 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a
Tool
by the model and executed by the client.- name¶
Required. The name of the function. Must contain only characters a-z, A-Z, 0-9, underscores, and dashes, with a maximum length of 63.
- Type
- class google.ai.generativelanguage_v1beta.types.FunctionResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The result output from a
FunctionCall
that contains a string representing theFunctionDeclaration.name
and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of aFunctionCall
made based on model prediction.- name¶
Required. The name of the function to call. Must contain only characters a-z, A-Z, 0-9, underscores, and dashes, with a maximum length of 63.
- Type
- response¶
Required. The function response in JSON object format.
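The `FunctionCall`/`FunctionResponse` pair forms a round trip: the model emits a call, the client executes it, and the client replies with a response whose `name` echoes the call and whose `response` is a structured JSON-style object. A sketch with plain dicts (the function and its output are hypothetical):

```python
# Illustrative sketch of the FunctionCall -> FunctionResponse round trip.
# The function name, arguments, and result are hypothetical examples.
function_call = {"name": "get_weather", "args": {"city": "Paris"}}

def execute(call):
    # Stand-in for the client-side implementation of the declared function.
    return {"temperature_c": 18, "condition": "cloudy"}

function_response = {
    "name": function_call["name"],        # must echo the called function's name
    "response": execute(function_call),   # structured JSON result for the model
}
```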
- class google.ai.generativelanguage_v1beta.types.GenerateAnswerRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to generate a grounded answer from the
Model
.This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- inline_passages¶
Passages provided inline with the request.
This field is a member of oneof
grounding_source
.
- semantic_retriever¶
Content retrieved from resources created via the Semantic Retriever API.
This field is a member of oneof
grounding_source
.
- model¶
Required. The name of the
Model
to use for generating the grounded response.Format:
model=models/{model}
.- Type
- contents¶
Required. The content of the current conversation with the
Model
. For single-turn queries, this is a single question to answer. For multi-turn queries, this is a repeated field that contains conversation history and the lastContent
in the list containing the question.Note:
GenerateAnswer
only supports queries in English.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
- answer_style¶
Required. Style in which answers should be returned.
- safety_settings¶
Optional. A list of unique
SafetySetting
instances for blocking unsafe content.This will be enforced on the
GenerateAnswerRequest.contents
andGenerateAnswerResponse.candidate
. There should not be more than one setting for eachSafetyCategory
type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for eachSafetyCategory
specified in the safety_settings. If there is noSafetySetting
for a givenSafetyCategory
provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]
- temperature¶
Optional. Controls the randomness of the output.
Values can range from [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model. A low temperature (~0.2) is usually recommended for Attributed-Question-Answering use cases.
This field is a member of oneof
_temperature
.- Type
- class AnswerStyle(value)[source]¶
Bases:
proto.enums.Enum
Style for grounded answers.
- Values:
- ANSWER_STYLE_UNSPECIFIED (0):
Unspecified answer style.
- ABSTRACTIVE (1):
Succinct but abstract style.
- EXTRACTIVE (2):
Very brief and extractive style.
- VERBOSE (3):
Verbose style including extra details. The response may be formatted as a sentence, paragraph, multiple paragraphs, or bullet points, etc.
- class google.ai.generativelanguage_v1beta.types.GenerateAnswerResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from the model for a grounded answer.
- answer¶
Candidate answer from the model.
Note: The model always attempts to provide a grounded answer, even when the answer is unlikely to be answerable from the given passages. In that case, a low-quality or ungrounded answer may be provided, along with a low
answerable_probability
.
- answerable_probability¶
Output only. The model’s estimate of the probability that its answer is correct and grounded in the input passages.
A low
answerable_probability
indicates that the answer might not be grounded in the sources.When
answerable_probability
is low, you may want to:Display a message to the effect of “We couldn’t answer that question” to the user.
Fall back to a general-purpose LLM that answers the question from world knowledge. The threshold and nature of such fallbacks will depend on individual use cases.
0.5
is a good starting threshold.
This field is a member of oneof
_answerable_probability
.- Type
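The fallback guidance above can be sketched as a small decision function: treat answers below the threshold as ungrounded and show a fallback message instead. The `0.5` threshold is the starting point mentioned above; the fallback text is a hypothetical example.

```python
# Illustrative sketch of the answerable_probability fallback: below the
# threshold, prefer a "couldn't answer" message (or a general-purpose LLM).
def choose_answer(answer, answerable_probability, threshold=0.5):
    if answerable_probability is None or answerable_probability < threshold:
        return "We couldn't answer that question."
    return answer

grounded = choose_answer("Paris is the capital of France.", 0.92)
fallback = choose_answer("Maybe Lyon?", 0.12)
```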
- input_feedback¶
Output only. Feedback related to the input data used to answer the question, as opposed to the model-generated response to the question.
The input data can be one or more of the following:
Question specified by the last entry in
GenerateAnswerRequest.content
Conversation history specified by the other entries in
GenerateAnswerRequest.content
Grounding sources (
GenerateAnswerRequest.semantic_retriever
orGenerateAnswerRequest.inline_passages
)
This field is a member of oneof
_input_feedback
.
- class InputFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Feedback related to the input data used to answer the question, as opposed to the model-generated response to the question.
- block_reason¶
Optional. If set, the input was blocked and no candidates are returned. Rephrase the input.
This field is a member of oneof
_block_reason
.
- safety_ratings¶
Ratings for safety of the input. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]
- class BlockReason(value)[source]¶
Bases:
proto.enums.Enum
Specifies the reason why the input was blocked.
- Values:
- BLOCK_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- SAFETY (1):
Input was blocked due to safety reasons. Inspect
safety_ratings
to understand which safety category blocked it.- OTHER (2):
Input was blocked due to other reasons.
- class google.ai.generativelanguage_v1beta.types.GenerateContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to generate a completion from the model.
- model¶
Required. The name of the
Model
to use for generating the completion.Format:
name=models/{model}
.- Type
- system_instruction¶
Optional. Developer set system instruction(s). Currently, text only.
This field is a member of oneof
_system_instruction
.
- contents¶
Required. The content of the current conversation with the model.
For single-turn queries, this is a single instance. For multi-turn queries like chat, this is a repeated field that contains the conversation history and the latest request.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
- tools¶
Optional. A list of
Tools
theModel
may use to generate the next response.A
Tool
is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of theModel
. SupportedTool
s areFunction
andcode_execution
. Refer to the Function calling and the Code execution guides to learn more.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Tool]
- tool_config¶
Optional. Tool configuration for any
Tool
specified in the request. Refer to the Function calling guide for a usage example.
- safety_settings¶
Optional. A list of unique
SafetySetting
instances for blocking unsafe content.This will be enforced on the
GenerateContentRequest.contents
andGenerateContentResponse.candidates
. There should not be more than one setting for eachSafetyCategory
type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for eachSafetyCategory
specified in the safety_settings. If there is noSafetySetting
for a givenSafetyCategory
provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]
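As a rough sketch of how the fields above compose, the following builds a `GenerateContentRequest`-shaped payload as plain dicts (not the client library); the model name, conversation turns, and safety setting are hypothetical example values.

```python
# Illustrative sketch of a multi-turn GenerateContentRequest payload.
# All concrete values are hypothetical examples.
request = {
    "model": "models/gemini-1.5-flash",  # hypothetical model name
    "system_instruction": {"parts": [{"text": "Answer briefly."}]},
    "contents": [
        {"role": "user", "parts": [{"text": "Hello"}]},                 # history
        {"role": "model", "parts": [{"text": "Hi! How can I help?"}]},  # history
        {"role": "user", "parts": [{"text": "Summarize RFC 2616."}]},   # latest turn
    ],
    "safety_settings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ],
}
```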
- class google.ai.generativelanguage_v1beta.types.GenerateContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from the model supporting multiple candidate responses.
Safety ratings and content filtering are reported for both prompt in
GenerateContentResponse.prompt_feedback
and for each candidate infinish_reason
and insafety_ratings
. The API:Returns either all requested candidates or none of them
Returns no candidates at all only if there was something wrong with the prompt (check
prompt_feedback
)Reports feedback on each candidate in
finish_reason
andsafety_ratings
.
- candidates¶
Candidate responses from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Candidate]
- prompt_feedback¶
Returns the prompt’s feedback related to the content filters.
- usage_metadata¶
Output only. Metadata on the generation requests’ token usage.
- class PromptFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A set of feedback metadata for the prompt specified in
GenerateContentRequest.content
.- block_reason¶
Optional. If set, the prompt was blocked and no candidates are returned. Rephrase the prompt.
- safety_ratings¶
Ratings for safety of the prompt. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]
- class BlockReason(value)[source]¶
Bases:
proto.enums.Enum
Specifies the reason why the prompt was blocked.
- Values:
- BLOCK_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- SAFETY (1):
Prompt was blocked due to safety reasons. Inspect
safety_ratings
to understand which safety category blocked it.- OTHER (2):
Prompt was blocked due to unknown reasons.
- BLOCKLIST (3):
Prompt was blocked because it contains terms from the terminology blocklist.
- PROHIBITED_CONTENT (4):
Prompt was blocked due to prohibited content.
- class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata on the generation request’s token usage.
- prompt_token_count¶
Number of tokens in the prompt. When
cached_content
is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.- Type
- cached_content_token_count¶
Number of tokens in the cached part of the prompt (the cached content)
- Type
- candidates_token_count¶
Total number of tokens across all the generated response candidates.
- Type
- class google.ai.generativelanguage_v1beta.types.GenerateMessageRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to generate a message response from the model.
- prompt¶
Required. The structured textual input given to the model as a prompt. Given a prompt, the model will return what it predicts is the next message in the discussion.
- temperature¶
Optional. Controls the randomness of the output.
Values can range over
[0.0,1.0]
, inclusive. A value closer to1.0
will produce responses that are more varied, while a value closer to0.0
will typically result in less surprising responses from the model.This field is a member of oneof
_temperature
.- Type
- candidate_count¶
Optional. The number of generated response messages to return.
This value must be between
[1, 8]
, inclusive. If unset, this will default to1
.This field is a member of oneof
_candidate_count
.- Type
- class google.ai.generativelanguage_v1beta.types.GenerateMessageResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response from the model.
This includes candidate messages and conversation history in the form of chronologically-ordered messages.
- candidates¶
Candidate response messages from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Message]
- messages¶
The conversation history used by the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Message]
- filters¶
A set of content filtering metadata for the prompt and response text.
This indicates which
SafetyCategory
(s) blocked a candidate from this response, the lowestHarmProbability
that triggered a block, and the HarmThreshold setting for that category.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ContentFilter]
- class google.ai.generativelanguage_v1beta.types.GenerateTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to generate a text completion response from the model.
- model¶
Required. The name of the
Model
orTunedModel
to use for generating the completion. Examples: models/text-bison-001 tunedModels/sentence-translator-u3b7m- Type
- prompt¶
Required. The free-form input text given to the model as a prompt. Given a prompt, the model will generate a TextCompletion response it predicts as the completion of the input text.
- temperature¶
Optional. Controls the randomness of the output. Note: The default value varies by model, see the
Model.temperature
attribute of theModel
returned from thegetModel
function.Values can range from [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model.
This field is a member of oneof
_temperature
.- Type
- candidate_count¶
Optional. Number of generated responses to return.
This value must be between [1, 8], inclusive. If unset, this will default to 1.
This field is a member of oneof
_candidate_count
.- Type
- max_output_tokens¶
Optional. The maximum number of tokens to include in a candidate.
If unset, this will default to output_token_limit specified in the
Model
specification.This field is a member of oneof
_max_output_tokens
.- Type
- top_p¶
Optional. The maximum cumulative probability of tokens to consider when sampling.
The model uses combined Top-k and nucleus sampling.
Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits number of tokens based on the cumulative probability.
Note: The default value varies by model, see the
Model.top_p
attribute of theModel
returned from thegetModel
function.This field is a member of oneof
_top_p
.- Type
- top_k¶
Optional. The maximum number of tokens to consider when sampling.
The model uses combined Top-k and nucleus sampling.
Top-k sampling considers the set of
top_k
most probable tokens. Defaults to 40.Note: The default value varies by model, see the
Model.top_k
attribute of theModel
returned from thegetModel
function.This field is a member of oneof
_top_k
.- Type
- safety_settings¶
Optional. A list of unique
SafetySetting
instances for blocking unsafe content that will be enforced on the
GenerateTextRequest.prompt
andGenerateTextResponse.candidates
. There should not be more than one setting for eachSafetyCategory
type. The API will block any prompts and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for eachSafetyCategory
specified in the safety_settings. If there is noSafetySetting
for a givenSafetyCategory
provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_DEROGATORY, HARM_CATEGORY_TOXICITY, HARM_CATEGORY_VIOLENCE, HARM_CATEGORY_SEXUAL, HARM_CATEGORY_MEDICAL, HARM_CATEGORY_DANGEROUS are supported in text service.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]
- class google.ai.generativelanguage_v1beta.types.GenerateTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response from the model, including candidate completions.
- candidates¶
Candidate responses from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TextCompletion]
- filters¶
A set of content filtering metadata for the prompt and response text.
This indicates which
SafetyCategory
(s) blocked a candidate from this response, the lowestHarmProbability
that triggered a block, and the HarmThreshold setting for that category. This indicates the smallest change to theSafetySettings
that would be necessary to unblock at least 1 response.The blocking is configured by the
SafetySettings
in the request (or the defaultSafetySettings
of the API).- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ContentFilter]
- safety_feedback¶
Returns any safety feedback related to content filtering.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyFeedback]
- class google.ai.generativelanguage_v1beta.types.GenerationConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configuration options for model generation and outputs. Not all parameters are configurable for every model.
- candidate_count¶
Optional. Number of generated responses to return. Currently, this value can only be set to 1. If unset, this will default to 1.
This field is a member of oneof
_candidate_count
.- Type
- stop_sequences¶
Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a
stop_sequence
. The stop sequence will not be included as part of the response.- Type
MutableSequence[str]
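The `stop_sequences` behavior above (stop at the first appearance of any sequence, and exclude the sequence itself from the response) can be sketched as a simple truncation over the generated text; the text and sequences below are hypothetical.

```python
# Illustrative sketch of stop_sequence semantics: output is cut at the
# first appearance of any stop sequence, which is itself excluded.
def apply_stop_sequences(text, stop_sequences):
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

out = apply_stop_sequences("Answer: 42\nEND\nextra", ["END", "STOP"])
```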
- max_output_tokens¶
Optional. The maximum number of tokens to include in a response candidate.
Note: The default value varies by model, see the
Model.output_token_limit
attribute of theModel
returned from thegetModel
function.This field is a member of oneof
_max_output_tokens
.- Type
- temperature¶
Optional. Controls the randomness of the output.
Note: The default value varies by model, see the
Model.temperature
attribute of theModel
returned from thegetModel
function.Values can range from [0.0, 2.0].
This field is a member of oneof
_temperature
.- Type
- top_p¶
Optional. The maximum cumulative probability of tokens to consider when sampling.
The model uses combined Top-k and Top-p (nucleus) sampling.
Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits the number of tokens based on the cumulative probability.
Note: The default value varies by
Model
and is specified by theModel.top_p
attribute returned from thegetModel
function. An emptytop_k
attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow settingtop_k
on requests.This field is a member of oneof
_top_p
.- Type
- top_k¶
Optional. The maximum number of tokens to consider when sampling.
Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of
top_k
most probable tokens. Models running with nucleus sampling don’t allow top_k setting.Note: The default value varies by
Model
and is specified by theModel.top_k
attribute returned from thegetModel
function. An emptytop_k
attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow settingtop_k
on requests.This field is a member of oneof
_top_k
.- Type
- response_mime_type¶
Optional. MIME type of the generated candidate text. Supported MIME types are:
text/plain
: (default) Text output.application/json
: JSON response in the response candidates.text/x.enum
: ENUM as a string response in the response candidates. Refer to the docs for a list of all supported text MIME types.- Type
- response_schema¶
Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives or arrays.
If set, a compatible
response_mime_type
must also be set. Compatible MIME types:application/json
: Schema for JSON response. Refer to the JSON text generation guide for more details.
- presence_penalty¶
Optional. Presence penalty applied to the next token’s logprobs if the token has already been seen in the response.
This penalty is binary on/off and not dependent on the number of times the token is used (after the first). Use [frequency_penalty][google.ai.generativelanguage.v1beta.GenerationConfig.frequency_penalty] for a penalty that increases with each use.
A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary.
A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary.
This field is a member of oneof
_presence_penalty
.- Type
- frequency_penalty¶
Optional. Frequency penalty applied to the next token’s logprobs, multiplied by the number of times each token has been seen in the response so far.
A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: The more a token is used, the more difficult it is for the model to use that token again, increasing the vocabulary of responses.
Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the [max_output_tokens][google.ai.generativelanguage.v1beta.GenerationConfig.max_output_tokens] limit: “…the the the the the…”.
This field is a member of oneof
_frequency_penalty
.- Type
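The contrast between the two penalties above can be sketched as a single adjustment to a token's logprob: `presence_penalty` is a flat offset once the token has appeared at all, while `frequency_penalty` scales with the number of appearances. The values are hypothetical; the real logit processing happens server-side.

```python
# Illustrative sketch contrasting presence_penalty and frequency_penalty.
# A positive penalty lowers (discourages) an already-seen token's logprob.
def penalized_logprob(logprob, count, presence_penalty=0.0, frequency_penalty=0.0):
    if count > 0:
        logprob -= presence_penalty        # binary on/off, count-independent
    logprob -= frequency_penalty * count   # grows with each use of the token
    return logprob

# For a token seen 3 times, frequency pushes further down than presence.
p = penalized_logprob(-1.0, 3, presence_penalty=0.5)
f = penalized_logprob(-1.0, 3, frequency_penalty=0.5)
```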
- response_logprobs¶
Optional. If true, export the logprobs results in response.
This field is a member of oneof
_response_logprobs
.- Type
- logprobs¶
Optional. Only valid if [response_logprobs=True][google.ai.generativelanguage.v1beta.GenerationConfig.response_logprobs]. This sets the number of top logprobs to return at each decoding step in the [Candidate.logprobs_result][google.ai.generativelanguage.v1beta.Candidate.logprobs_result].
This field is a member of oneof
_logprobs
.- Type
- class google.ai.generativelanguage_v1beta.types.GetCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to read CachedContent.
- class google.ai.generativelanguage_v1beta.types.GetChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific
Chunk
.
- class google.ai.generativelanguage_v1beta.types.GetCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific
Corpus
.
- class google.ai.generativelanguage_v1beta.types.GetDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific
Document
.
- class google.ai.generativelanguage_v1beta.types.GetFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for
GetFile
.
- class google.ai.generativelanguage_v1beta.types.GetModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific Model.
- class google.ai.generativelanguage_v1beta.types.GetPermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific
Permission
.
- class google.ai.generativelanguage_v1beta.types.GetTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific TunedModel.
- class google.ai.generativelanguage_v1beta.types.GoogleSearchRetrieval(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Tool to retrieve public web data for grounding, powered by Google.
- dynamic_retrieval_config¶
Specifies the dynamic retrieval configuration for the given source.
- class google.ai.generativelanguage_v1beta.types.GroundingAttribution(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Attribution for a source that contributed to an answer.
- source_id¶
Output only. Identifier for the source contributing to this attribution.
- content¶
Grounding source content that makes up this attribution.
- class google.ai.generativelanguage_v1beta.types.GroundingChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Grounding chunk.
- class google.ai.generativelanguage_v1beta.types.GroundingMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata returned to client when grounding is enabled.
- search_entry_point¶
Optional. Google search entry point for follow-up web searches.
This field is a member of oneof
_search_entry_point
.
- grounding_chunks¶
List of supporting references retrieved from specified grounding source.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingChunk]
- grounding_supports¶
List of grounding support.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingSupport]
- class google.ai.generativelanguage_v1beta.types.GroundingPassage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Passage included inline with a grounding configuration.
- content¶
Content of the passage.
- class google.ai.generativelanguage_v1beta.types.GroundingPassages(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A repeated list of passages.
- passages¶
List of passages.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingPassage]
- class google.ai.generativelanguage_v1beta.types.GroundingSupport(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Grounding support.
- class google.ai.generativelanguage_v1beta.types.HarmCategory(value)[source]¶
Bases:
proto.enums.Enum
The category of a rating.
These categories cover various kinds of harms that developers may wish to adjust.
- Values:
- HARM_CATEGORY_UNSPECIFIED (0):
Category is unspecified.
- HARM_CATEGORY_DEROGATORY (1):
PaLM - Negative or harmful comments targeting identity and/or protected attribute.
- HARM_CATEGORY_TOXICITY (2):
PaLM - Content that is rude, disrespectful, or profane.
- HARM_CATEGORY_VIOLENCE (3):
PaLM - Describes scenarios depicting violence against an individual or group, or general descriptions of gore.
- HARM_CATEGORY_SEXUAL (4):
PaLM - Contains references to sexual acts or other lewd content.
- HARM_CATEGORY_MEDICAL (5):
PaLM - Promotes unchecked medical advice.
- HARM_CATEGORY_DANGEROUS (6):
PaLM - Dangerous content that promotes, facilitates, or encourages harmful acts.
- HARM_CATEGORY_HARASSMENT (7):
Gemini - Harassment content.
- HARM_CATEGORY_HATE_SPEECH (8):
Gemini - Hate speech and content.
- HARM_CATEGORY_SEXUALLY_EXPLICIT (9):
Gemini - Sexually explicit content.
- HARM_CATEGORY_DANGEROUS_CONTENT (10):
Gemini - Dangerous content.
- HARM_CATEGORY_CIVIC_INTEGRITY (11):
Gemini - Content that may be used to harm civic integrity.
- class google.ai.generativelanguage_v1beta.types.Hyperparameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Hyperparameters controlling the tuning process. Read more at https://ai.google.dev/docs/model_tuning_guidance
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- learning_rate¶
Optional. Immutable. The learning rate hyperparameter for tuning. If not set, a default of 0.001 or 0.0002 will be calculated based on the number of training examples.
This field is a member of oneof learning_rate_option.
- Type
- learning_rate_multiplier¶
Optional. Immutable. The learning rate multiplier is used to calculate a final learning_rate based on the default (recommended) value. Actual learning rate := learning_rate_multiplier * default learning rate. The default learning rate is dependent on the base model and dataset size. If not set, a default of 1.0 will be used.
This field is a member of oneof learning_rate_option.
- Type
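The two fields above form the learning_rate_option oneof, so at most one may be set. A minimal sketch of how the effective rate resolves (the 0.001 default here is an illustrative assumption; the service actually derives the default from the base model and dataset size):

```python
def effective_learning_rate(default_rate=0.001, learning_rate=None,
                            learning_rate_multiplier=None):
    """Resolve the tuning learning rate from the learning_rate_option oneof."""
    if learning_rate is not None and learning_rate_multiplier is not None:
        # Oneof semantics: the two members are mutually exclusive.
        raise ValueError("learning_rate and learning_rate_multiplier are mutually exclusive")
    if learning_rate is not None:
        return learning_rate  # an explicit rate is used as-is
    if learning_rate_multiplier is not None:
        return learning_rate_multiplier * default_rate  # scale the default
    return default_rate  # neither set: the service default applies
```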
- class google.ai.generativelanguage_v1beta.types.ListCachedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to list CachedContents.
- page_size¶
Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.
- Type
- class google.ai.generativelanguage_v1beta.types.ListCachedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response with CachedContents list.
- cached_contents¶
List of cached contents.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CachedContent]
- class google.ai.generativelanguage_v1beta.types.ListChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing Chunks.
- parent¶
Required. The name of the Document containing Chunks. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
- page_size¶
Optional. The maximum number of Chunks to return (per page). The service may return fewer Chunks.
If unspecified, at most 10 Chunks will be returned. The maximum size limit is 100 Chunks per page.
- Type
- page_token¶
Optional. A page token, received from a previous ListChunks call.
Provide the next_page_token returned in the response as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to ListChunks must match the call that provided the page token.
- Type
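The List* requests in this module all share the same page_token contract. A generic sketch of draining such a method, where fetch_page is a stand-in for a client call such as RetrieverServiceClient.list_chunks and the stub pages are illustrative:

```python
def list_all(fetch_page):
    """Drain a paginated List* method.

    fetch_page(page_token) -> (items, next_page_token); an empty
    next_page_token signals the final page.
    """
    items, token = [], ""
    while True:
        page, token = fetch_page(token)
        items.extend(page)
        if not token:
            return items

# Simulated three-page listing with page_size=2 (stub data, not real Chunks).
PAGES = {"": (["c1", "c2"], "t1"), "t1": (["c3", "c4"], "t2"), "t2": (["c5"], "")}
chunks = list_all(lambda token: PAGES[token])
```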
- class google.ai.generativelanguage_v1beta.types.ListChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListChunks containing a paginated list of Chunks. The Chunks are sorted by ascending chunk.create_time.
- chunks¶
The returned Chunks.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]
- class google.ai.generativelanguage_v1beta.types.ListCorporaRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing Corpora.
- page_size¶
Optional. The maximum number of Corpora to return (per page). The service may return fewer Corpora.
If unspecified, at most 10 Corpora will be returned. The maximum size limit is 20 Corpora per page.
- Type
- page_token¶
Optional. A page token, received from a previous ListCorpora call.
Provide the next_page_token returned in the response as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to ListCorpora must match the call that provided the page token.
- Type
- class google.ai.generativelanguage_v1beta.types.ListCorporaResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListCorpora containing a paginated list of Corpora. The results are sorted by ascending corpus.create_time.
- corpora¶
The returned corpora.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Corpus]
- class google.ai.generativelanguage_v1beta.types.ListDocumentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing Documents.
- parent¶
Required. The name of the Corpus containing Documents. Example: corpora/my-corpus-123
- Type
- page_size¶
Optional. The maximum number of Documents to return (per page). The service may return fewer Documents.
If unspecified, at most 10 Documents will be returned. The maximum size limit is 20 Documents per page.
- Type
- page_token¶
Optional. A page token, received from a previous ListDocuments call.
Provide the next_page_token returned in the response as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to ListDocuments must match the call that provided the page token.
- Type
- class google.ai.generativelanguage_v1beta.types.ListDocumentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListDocuments containing a paginated list of Documents. The Documents are sorted by ascending document.create_time.
- documents¶
The returned Documents.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Document]
- class google.ai.generativelanguage_v1beta.types.ListFilesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for ListFiles.
- page_size¶
Optional. Maximum number of Files to return per page. If unspecified, defaults to 10. Maximum page_size is 100.
- Type
- class google.ai.generativelanguage_v1beta.types.ListFilesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response for ListFiles.
- files¶
The list of Files.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.File]
- class google.ai.generativelanguage_v1beta.types.ListModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing all Models.
- page_size¶
The maximum number of Models to return (per page).
If unspecified, 50 models will be returned per page. This method returns at most 1000 models per page, even if you pass a larger page_size.
- Type
- class google.ai.generativelanguage_v1beta.types.ListModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListModel containing a paginated list of Models.
- models¶
The returned Models.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Model]
- class google.ai.generativelanguage_v1beta.types.ListPermissionsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing permissions.
- parent¶
Required. The parent resource of the permissions. Formats:
tunedModels/{tuned_model}
corpora/{corpus}
- Type
- page_size¶
Optional. The maximum number of Permissions to return (per page). The service may return fewer permissions.
If unspecified, at most 10 permissions will be returned. This method returns at most 1000 permissions per page, even if you pass a larger page_size.
- Type
- page_token¶
Optional. A page token, received from a previous ListPermissions call.
Provide the page_token returned by one request as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to ListPermissions must match the call that provided the page token.
- Type
- class google.ai.generativelanguage_v1beta.types.ListPermissionsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListPermissions containing a paginated list of permissions.
- permissions¶
Returned permissions.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Permission]
- class google.ai.generativelanguage_v1beta.types.ListTunedModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing TunedModels.
- page_size¶
Optional. The maximum number of TunedModels to return (per page). The service may return fewer tuned models.
If unspecified, at most 10 tuned models will be returned. This method returns at most 1000 models per page, even if you pass a larger page_size.
- Type
- page_token¶
Optional. A page token, received from a previous ListTunedModels call.
Provide the page_token returned by one request as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to ListTunedModels must match the call that provided the page token.
- Type
- filter¶
Optional. A filter is a full text search over the tuned model’s description and display name. By default, results will not include tuned models shared with everyone.
Additional operators:
owner:me
writers:me
readers:me
readers:everyone
Examples:
“owner:me” returns all tuned models to which the caller has owner role.
“readers:me” returns all tuned models to which the caller has reader role.
“readers:everyone” returns all tuned models that are shared with everyone.
- Type
- class google.ai.generativelanguage_v1beta.types.ListTunedModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListTunedModels containing a paginated list of Models.
- tuned_models¶
The returned Models.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TunedModel]
- class google.ai.generativelanguage_v1beta.types.LogprobsResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Logprobs Result
- top_candidates¶
Length = total number of decoding steps.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.TopCandidates]
- chosen_candidates¶
Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.Candidate]
- class Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Candidate for the logprobs token and score.
- class TopCandidates(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Candidates with top log probabilities at each decoding step.
- candidates¶
Sorted by log probability in descending order.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.Candidate]
- class google.ai.generativelanguage_v1beta.types.Message(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The base unit of structured text.
A Message includes an author and the content of the Message.
The author is used to tag messages when they are fed to the model as text.
- author¶
Optional. The author of this Message.
This serves as a key for tagging the content of this Message when it is fed to the model as text.
The author can be any alphanumeric string.
- Type
- citation_metadata¶
Output only. Citation information for model-generated content in this Message.
If this Message was generated as output from the model, this field may be populated with attribution information for any text included in the content. This field is used only on output.
This field is a member of oneof _citation_metadata.
- class google.ai.generativelanguage_v1beta.types.MessagePrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
All of the structured input text passed to the model as a prompt.
A MessagePrompt contains a structured set of fields that provide context for the conversation, examples of user input/model output message pairs that prime the model to respond in different ways, and the conversation history or list of messages representing the alternating turns of the conversation between the user and the model.
- context¶
Optional. Text that should be provided to the model first to ground the response.
If not empty, this context will be given to the model first before the examples and messages. When using a context, be sure to provide it with every request to maintain continuity.
This field can be a description of your prompt to the model to help provide context and guide the responses. Examples: “Translate the phrase from English to French.” or “Given a statement, classify the sentiment as happy, sad or neutral.”
Anything included in this field will take precedence over message history if the total input size exceeds the model’s input_token_limit and the input request is truncated.
- Type
- examples¶
Optional. Examples of what the model should generate.
This includes both user input and the response that the model should emulate.
These examples are treated identically to conversation messages except that they take precedence over the history in messages: If the total input size exceeds the model’s input_token_limit, the input will be truncated. Items will be dropped from messages before examples.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Example]
- messages¶
Required. A snapshot of the recent conversation history sorted chronologically.
Turns alternate between two authors.
If the total input size exceeds the model’s input_token_limit, the input will be truncated: The oldest items will be dropped from messages.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Message]
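Because these are proto-plus messages, a MessagePrompt can also be expressed as a plain dict with the same field names. A sketch with illustrative values (an Example pairs a user input Message with the desired model output Message):

```python
# Dict form mirroring MessagePrompt: context grounds the response, examples
# prime the model, and messages hold the alternating conversation turns.
message_prompt = {
    "context": "Translate the phrase from English to French.",
    "examples": [
        {
            "input": {"author": "user", "content": "Hello"},
            "output": {"author": "model", "content": "Bonjour"},
        }
    ],
    "messages": [
        {"author": "user", "content": "How are you today?"},
    ],
}
```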
- class google.ai.generativelanguage_v1beta.types.MetadataFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
User provided filter to limit retrieval based on Chunk or Document level metadata values. Example (genre = drama OR genre = action): key = “document.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]
- conditions¶
Required. The Conditions for the given key that will trigger this filter. Multiple Conditions are joined by logical ORs.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Condition]
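The OR-within-a-filter (and AND-across-filters, as used by the Query* requests below) semantics can be sketched with a small evaluator over dict-form filters. This is illustrative only; the real matching happens server-side, and only a subset of Condition operators is modeled here:

```python
def condition_holds(value, cond):
    """Check one Condition: a metadata value vs. string_value or numeric_value."""
    target = cond.get("string_value", cond.get("numeric_value"))
    ops = {
        "EQUAL": value == target,
        "GREATER_EQUAL": isinstance(value, (int, float)) and value >= target,
        "LESS": isinstance(value, (int, float)) and value < target,
    }
    return ops[cond["operation"]]

def filters_match(metadata, metadata_filters):
    # Conditions inside one MetadataFilter are ORed; filters are ANDed together.
    return all(
        any(condition_holds(metadata.get(f["key"]), c) for c in f["conditions"])
        for f in metadata_filters
    )

# The genre example from the docstring, in dict form.
drama_or_action = {
    "key": "document.custom_metadata.genre",
    "conditions": [
        {"string_value": "drama", "operation": "EQUAL"},
        {"string_value": "action", "operation": "EQUAL"},
    ],
}
```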
- class google.ai.generativelanguage_v1beta.types.Model(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about a Generative Language Model.
- name¶
Required. The resource name of the Model. Refer to Model variants for all allowed values.
Format: models/{model} with a {model} naming convention of: “{base_model_id}-{version}”
Examples:
models/gemini-1.5-flash-001
- Type
- base_model_id¶
Required. The name of the base model; pass this to the generation request.
Examples:
gemini-1.5-flash
- Type
- version¶
Required. The version number of the model.
This represents the major version (1.0 or 1.5).
- Type
- display_name¶
The human-readable name of the model. E.g. “Gemini 1.5 Flash”. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- Type
- supported_generation_methods¶
The model’s supported generation methods.
The corresponding API method names are defined as Pascal case strings, such as generateMessage and generateContent.
- Type
MutableSequence[str]
- temperature¶
Controls the randomness of the output.
Values can range over [0.0, max_temperature], inclusive. A higher value will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model. This value specifies the default to be used by the backend while making the call to the model.
This field is a member of oneof _temperature.
- Type
- max_temperature¶
The maximum temperature this model can use.
This field is a member of oneof _max_temperature.
- Type
- top_p¶
For Nucleus sampling.
Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p. This value specifies the default to be used by the backend while making the call to the model.
This field is a member of oneof _top_p.
- Type
- top_k¶
For Top-k sampling.
Top-k sampling considers the set of top_k most probable tokens. This value specifies the default to be used by the backend while making the call to the model. If empty, it indicates the model doesn’t use top-k sampling, and top_k isn’t allowed as a generation parameter.
This field is a member of oneof _top_k.
- Type
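The two sampling strategies these defaults control can be sketched over a toy probability distribution (illustrative only; the model applies these internally during decoding):

```python
def top_k_tokens(probs, k):
    """Top-k sampling: keep the k most probable tokens."""
    return sorted(probs, key=probs.get, reverse=True)[:k]

def top_p_tokens(probs, p):
    """Nucleus sampling: smallest most-probable set whose probability sum >= p."""
    kept, total = [], 0.0
    for token in sorted(probs, key=probs.get, reverse=True):
        kept.append(token)
        total += probs[token]
        if total >= p:
            break
    return kept

# Toy next-token distribution.
probs = {"the": 0.5, "a": 0.3, "an": 0.15, "this": 0.05}
```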
- class google.ai.generativelanguage_v1beta.types.Part(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A datatype containing media that is part of a multi-part Content message.
A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data.
A Part must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data field is filled with raw bytes.
- function_call¶
A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name with the arguments and their values.
This field is a member of oneof data.
- function_response¶
The result output of a FunctionCall that contains a string representing the FunctionDeclaration.name and a structured JSON object containing any output from the function; it is used as context to the model.
This field is a member of oneof data.
- executable_code¶
Code generated by the model that is meant to be executed.
This field is a member of oneof data.
- class google.ai.generativelanguage_v1beta.types.Permission(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Permission resource grants user, group or the rest of the world access to the PaLM API resource (e.g. a tuned model, corpus).
A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant permissions that the role contains.
There are three concentric roles. Each role is a superset of the previous role’s permitted operations:
reader can use the resource (e.g. tuned model, corpus) for inference
writer has reader’s permissions and additionally can edit and share
owner has writer’s permissions and additionally can delete
- name¶
Output only. Identifier. The permission name. A unique name will be generated on create. Examples: tunedModels/{tuned_model}/permissions/{permission} corpora/{corpus}/permissions/{permission}
- Type
- grantee_type¶
Optional. Immutable. The type of the grantee.
This field is a member of oneof _grantee_type.
- email_address¶
Optional. Immutable. The email address of the user or group to which this permission refers. This field is not set when the permission’s grantee type is EVERYONE.
This field is a member of oneof _email_address.
- Type
- class GranteeType(value)[source]¶
Bases:
proto.enums.Enum
Defines types of the grantee of this permission.
- Values:
- GRANTEE_TYPE_UNSPECIFIED (0):
The default value. This value is unused.
- USER (1):
Represents a user. When set, you must provide email_address for the user.
- GROUP (2):
Represents a group. When set, you must provide email_address for the group.
- EVERYONE (3):
Represents access to everyone. No extra information is required.
- class Role(value)[source]¶
Bases:
proto.enums.Enum
Defines the role granted by this permission.
- Values:
- ROLE_UNSPECIFIED (0):
The default value. This value is unused.
- OWNER (1):
Owner can use, update, share and delete the resource.
- WRITER (2):
Writer can use, update and share the resource.
- READER (3):
Reader can use the resource.
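In dict form, a Permission granting a user read access might look like the following sketch (the email address is a placeholder; USER and GROUP grantees require one, EVERYONE does not):

```python
# A USER or GROUP grantee requires an email_address.
reader_permission = {
    "grantee_type": "USER",
    "email_address": "user@example.com",  # placeholder address
    "role": "READER",
}

# An EVERYONE grantee needs no extra information.
everyone_permission = {
    "grantee_type": "EVERYONE",
    "role": "READER",
}
```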
- class google.ai.generativelanguage_v1beta.types.PredictRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [PredictionService.Predict][google.ai.generativelanguage.v1beta.PredictionService.Predict].
- instances¶
Required. The instances that are the input to the prediction call.
- Type
MutableSequence[google.protobuf.struct_pb2.Value]
- parameters¶
Optional. The parameters that govern the prediction call.
- class google.ai.generativelanguage_v1beta.types.PredictResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for [PredictionService.Predict].
- predictions¶
The outputs of the prediction call.
- Type
MutableSequence[google.protobuf.struct_pb2.Value]
- class google.ai.generativelanguage_v1beta.types.QueryCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for querying a Corpus.
- metadata_filters¶
Optional. Filter for Chunk and Document metadata. Each MetadataFilter object should correspond to a unique key. Multiple MetadataFilter objects are joined by logical “AND”s.
Example query at document level: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)
MetadataFilter object list: metadata_filters = [ {key = “document.custom_metadata.year” conditions = [{int_value = 2020, operation = GREATER_EQUAL}, {int_value = 2010, operation = LESS}]}, {key = “document.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]}]
Example query at chunk level for a numeric range of values: (year > 2015 AND year <= 2020)
MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2015, operation = GREATER}]}, {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = LESS_EQUAL}]}]
Note: “AND”s for the same key are only supported for numeric values. String values only support “OR”s for the same key.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]
- class google.ai.generativelanguage_v1beta.types.QueryCorpusResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from QueryCorpus containing a list of relevant chunks.
- relevant_chunks¶
The relevant chunks.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.RelevantChunk]
- class google.ai.generativelanguage_v1beta.types.QueryDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for querying a Document.
- name¶
Required. The name of the Document to query. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
- results_count¶
Optional. The maximum number of Chunks to return. The service may return fewer Chunks.
If unspecified, at most 10 Chunks will be returned. The maximum specified result count is 100.
- Type
- metadata_filters¶
Optional. Filter for Chunk metadata. Each MetadataFilter object should correspond to a unique key. Multiple MetadataFilter objects are joined by logical “AND”s.
Note: Document-level filtering is not supported for this request because a Document name is already specified.
Example query: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)
MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = GREATER_EQUAL}, {int_value = 2010, operation = LESS}]}, {key = “chunk.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]}]
Example query for a numeric range of values: (year > 2015 AND year <= 2020)
MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2015, operation = GREATER}]}, {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = LESS_EQUAL}]}]
Note: “AND”s for the same key are only supported for numeric values. String values only support “OR”s for the same key.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]
- class google.ai.generativelanguage_v1beta.types.QueryDocumentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from QueryDocument containing a list of relevant chunks.
- relevant_chunks¶
The returned relevant chunks.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.RelevantChunk]
- class google.ai.generativelanguage_v1beta.types.RelevantChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The information for a chunk relevant to a query.
- chunk¶
Chunk associated with the query.
- class google.ai.generativelanguage_v1beta.types.RetrievalMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata related to retrieval in the grounding flow.
- google_search_dynamic_retrieval_score¶
Optional. Score indicating how likely information from Google Search could help answer the prompt. The score is in the range [0, 1], where 0 is the least likely and 1 is the most likely. This score is only populated when Google Search grounding and dynamic retrieval are enabled. It will be compared to the threshold to determine whether to trigger Google Search.
- Type
- class google.ai.generativelanguage_v1beta.types.SafetyFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Safety feedback for an entire request.
This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.
- rating¶
Safety rating evaluated from content.
- setting¶
Safety settings applied to the request.
- class google.ai.generativelanguage_v1beta.types.SafetyRating(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Safety rating for a piece of content.
The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.
- category¶
Required. The category for this rating.
- probability¶
Required. The probability of harm for this content.
- class HarmProbability(value)[source]¶
Bases:
proto.enums.Enum
The probability that a piece of content is harmful.
The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.
- Values:
- HARM_PROBABILITY_UNSPECIFIED (0):
Probability is unspecified.
- NEGLIGIBLE (1):
Content has a negligible chance of being unsafe.
- LOW (2):
Content has a low chance of being unsafe.
- MEDIUM (3):
Content has a medium chance of being unsafe.
- HIGH (4):
Content has a high chance of being unsafe.
- class google.ai.generativelanguage_v1beta.types.SafetySetting(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Safety setting, affecting the safety-blocking behavior.
Passing a safety setting for a category changes the allowed probability that content is blocked.
- category¶
Required. The category for this setting.
- threshold¶
Required. Controls the probability threshold at which harm is blocked.
- class HarmBlockThreshold(value)[source]¶
Bases:
proto.enums.Enum
Block at and beyond a specified harm probability.
- Values:
- HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):
Threshold is unspecified.
- BLOCK_LOW_AND_ABOVE (1):
Content with NEGLIGIBLE will be allowed.
- BLOCK_MEDIUM_AND_ABOVE (2):
Content with NEGLIGIBLE and LOW will be allowed.
- BLOCK_ONLY_HIGH (3):
Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed.
- BLOCK_NONE (4):
All content will be allowed.
- OFF (5):
Turn off the safety filter.
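The thresholds above can be read as “which HarmProbability levels remain allowed”. A sketch of that mapping (illustrative; the actual filtering happens server-side):

```python
# HarmProbability levels each HarmBlockThreshold still allows through,
# per the enum descriptions above.
ALLOWED = {
    "BLOCK_LOW_AND_ABOVE": {"NEGLIGIBLE"},
    "BLOCK_MEDIUM_AND_ABOVE": {"NEGLIGIBLE", "LOW"},
    "BLOCK_ONLY_HIGH": {"NEGLIGIBLE", "LOW", "MEDIUM"},
    "BLOCK_NONE": {"NEGLIGIBLE", "LOW", "MEDIUM", "HIGH"},
}

def is_blocked(probability, threshold):
    """Block content whose HarmProbability is at or beyond the threshold."""
    return probability not in ALLOWED[threshold]
```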
- class google.ai.generativelanguage_v1beta.types.Schema(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The Schema object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an OpenAPI 3.0 schema object.
- type_¶
Required. Data type.
- format_¶
Optional. The format of the data. This is used only for primitive datatypes. Supported formats:
for NUMBER type: float, double
for INTEGER type: int32, int64
for STRING type: enum
- Type
- description¶
Optional. A brief description of the parameter. This could contain examples of use. Parameter description may be formatted as Markdown.
- Type
- enum¶
Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as: {type:STRING, format:enum, enum:[“EAST”, “NORTH”, “SOUTH”, “WEST”]}
- Type
MutableSequence[str]
- properties¶
Optional. Properties of Type.OBJECT.
- Type
MutableMapping[str, google.ai.generativelanguage_v1beta.types.Schema]
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
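Tying the Schema fields together, a dict-form sketch of the Direction enum example above, nested under an OBJECT schema. Note the trailing underscores on type_ and format_, which this type uses to avoid clashing with Python builtins; the description text is illustrative:

```python
direction_schema = {
    "type_": "STRING",
    "format_": "enum",  # enum format is only valid for STRING
    "enum": ["EAST", "NORTH", "SOUTH", "WEST"],
    "description": "Compass direction to travel.",
}

# An OBJECT schema nests further Schemas under properties.
move_schema = {
    "type_": "OBJECT",
    "properties": {"direction": direction_schema},
}
```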
- class google.ai.generativelanguage_v1beta.types.SearchEntryPoint(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Google search entry point.
- rendered_content¶
Optional. Web content snippet that can be embedded in a web page or an app webview.
- Type
- class google.ai.generativelanguage_v1beta.types.Segment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Segment of the content.
- start_index¶
Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.
- Type
- end_index¶
Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.
- Type
- class google.ai.generativelanguage_v1beta.types.SemanticRetrieverConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configuration for retrieving grounding content from a Corpus or Document created using the Semantic Retriever API.
- source¶
Required. Name of the resource for retrieval. Example: corpora/123 or corpora/123/documents/abc.
- Type
- query¶
Required. Query to use for matching Chunks in the given resource by similarity.
- metadata_filters¶
Optional. Filters for selecting Documents and/or Chunks from the resource.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]
- class google.ai.generativelanguage_v1beta.types.StringList(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
User provided string values assigned to a single metadata key.
- class google.ai.generativelanguage_v1beta.types.TaskType(value)[source]¶
Bases:
proto.enums.Enum
Type of task for which the embedding will be used.
- Values:
- TASK_TYPE_UNSPECIFIED (0):
Unset value, which will default to one of the other enum values.
- RETRIEVAL_QUERY (1):
Specifies the given text is a query in a search/retrieval setting.
- RETRIEVAL_DOCUMENT (2):
Specifies the given text is a document from the corpus being searched.
- SEMANTIC_SIMILARITY (3):
Specifies the given text will be used for STS.
- CLASSIFICATION (4):
Specifies that the given text will be classified.
- CLUSTERING (5):
Specifies that the embeddings will be used for clustering.
- QUESTION_ANSWERING (6):
Specifies that the given text will be used for question answering.
- FACT_VERIFICATION (7):
Specifies that the given text will be used for fact verification.
- class google.ai.generativelanguage_v1beta.types.TextCompletion(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Output text returned from a model.
- safety_ratings¶
Ratings for the safety of a response.
There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]
- class google.ai.generativelanguage_v1beta.types.TextPrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Text given to the model as a prompt.
The Model will use this TextPrompt to generate a text completion.
- class google.ai.generativelanguage_v1beta.types.Tool(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Tool details that the model may use to generate a response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.
- function_declarations¶
Optional. A list of FunctionDeclarations available to the model that can be used for function calling.
The model or system does not execute the function. Instead, the defined function may be returned as a [FunctionCall][google.ai.generativelanguage.v1beta.Part.function_call] with arguments to the client side for execution. The model may decide to call a subset of these functions by populating [FunctionCall][google.ai.generativelanguage.v1beta.Part.function_call] in the response. The next conversation turn may contain a [FunctionResponse][google.ai.generativelanguage.v1beta.Part.function_response] with the [Content.role][google.ai.generativelanguage.v1beta.Content.role] “function” generation context for the next model turn.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.FunctionDeclaration]
- google_search_retrieval¶
Optional. Retrieval tool that is powered by Google search.
- code_execution¶
Optional. Enables the model to execute code as part of generation.
- class google.ai.generativelanguage_v1beta.types.ToolConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The Tool configuration containing parameters for specifying Tool use in the request.
- function_calling_config¶
Optional. Function calling config.
- class google.ai.generativelanguage_v1beta.types.TransferOwnershipRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to transfer the ownership of the tuned model.
- name¶
Required. The resource name of the tuned model to transfer ownership of.
Format:
tunedModels/my-model-id
- Type
- class google.ai.generativelanguage_v1beta.types.TransferOwnershipResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from TransferOwnership.
- class google.ai.generativelanguage_v1beta.types.TunedModel(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A fine-tuned model created using ModelService.CreateTunedModel.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- tuned_model_source¶
Optional. TunedModel to use as the starting point for training the new model.
This field is a member of oneof
source_model
.
- base_model¶
Immutable. The name of the Model to tune. Example: models/gemini-1.5-flash-001
This field is a member of oneof source_model.
- Type
- name¶
Output only. The tuned model name. A unique name will be generated on create. Example: tunedModels/az2mb0bpw6i
If display_name is set on create, the id portion of the name will be set by concatenating the words of the display_name with hyphens and adding a random portion for uniqueness.
Example: display_name = Sentence Translator, name = tunedModels/sentence-translator-u3b7m
- Type
- display_name¶
Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.
- Type
- temperature¶
Optional. Controls the randomness of the output.
Values can range over [0.0, 1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.
If unset, this value defaults to the one used by the base model when the model was created.
This field is a member of oneof
_temperature
.- Type
- top_p¶
Optional. For Nucleus sampling.
Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p.
If unset, this value defaults to the one used by the base model when the model was created.
This field is a member of oneof
_top_p
.- Type
- top_k¶
Optional. For Top-k sampling.
Top-k sampling considers the set of top_k most probable tokens. This value specifies the default used by the backend when calling the model.
If unset, this value defaults to the one used by the base model when the model was created.
This field is a member of oneof
_top_k
.- Type
- state¶
Output only. The state of the tuned model.
- create_time¶
Output only. The timestamp when this model was created.
- update_time¶
Output only. The timestamp when this model was updated.
- tuning_task¶
Required. The tuning task that creates the tuned model.
- reader_project_numbers¶
Optional. List of project numbers that have read access to the tuned model.
- Type
MutableSequence[int]
- class State(value)[source]¶
Bases:
proto.enums.Enum
The state of the tuned model.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is unused.
- CREATING (1):
The model is being created.
- ACTIVE (2):
The model is ready to be used.
- FAILED (3):
The model failed to be created.
- class google.ai.generativelanguage_v1beta.types.TunedModelSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Tuned model as a source for training a new model.
- tuned_model¶
Immutable. The name of the TunedModel to use as the starting point for training the new model. Example: tunedModels/my-tuned-model
- Type
- class google.ai.generativelanguage_v1beta.types.TuningExample(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A single example for tuning.
- class google.ai.generativelanguage_v1beta.types.TuningExamples(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A set of tuning examples. Can be training or validation data.
- examples¶
Required. The examples. Example inputs can be for text or chat (discuss), but all examples in a set must be of the same type.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TuningExample]
- class google.ai.generativelanguage_v1beta.types.TuningSnapshot(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Record for a single tuning step.
- compute_time¶
Output only. The timestamp when this metric was computed.
- class google.ai.generativelanguage_v1beta.types.TuningTask(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Tuning tasks that create tuned models.
- start_time¶
Output only. The timestamp when tuning this model started.
- complete_time¶
Output only. The timestamp when tuning this model completed.
- snapshots¶
Output only. Metrics collected during tuning.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TuningSnapshot]
- training_data¶
Required. Input only. Immutable. The model training data.
- hyperparameters¶
Immutable. Hyperparameters controlling the tuning process. If not provided, default values will be used.
- class google.ai.generativelanguage_v1beta.types.Type(value)[source]¶
Bases:
proto.enums.Enum
Type contains the list of OpenAPI data types as defined by https://spec.openapis.org/oas/v3.0.3#data-types
- Values:
- TYPE_UNSPECIFIED (0):
Not specified, should not be used.
- STRING (1):
String type.
- NUMBER (2):
Number type.
- INTEGER (3):
Integer type.
- BOOLEAN (4):
Boolean type.
- ARRAY (5):
Array type.
- OBJECT (6):
Object type.
- class google.ai.generativelanguage_v1beta.types.UpdateCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to update CachedContent.
- cached_content¶
Required. The content cache entry to update.
- update_mask¶
The list of fields to update.
- class google.ai.generativelanguage_v1beta.types.UpdateChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to update a Chunk.
- chunk¶
Required. The Chunk to update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating custom_metadata and data.
- class google.ai.generativelanguage_v1beta.types.UpdateCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to update a Corpus.
- corpus¶
Required. The Corpus to update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating display_name.
- class google.ai.generativelanguage_v1beta.types.UpdateDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to update a Document.
- document¶
Required. The Document to update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating display_name and custom_metadata.
- class google.ai.generativelanguage_v1beta.types.UpdatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to update the Permission.
- permission¶
Required. The permission to update.
The permission’s name field is used to identify the permission to update.
- update_mask¶
Required. The list of fields to update. Accepted ones: role (the Permission.role field).
- class google.ai.generativelanguage_v1beta.types.UpdateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to update a TunedModel.
- tuned_model¶
Required. The tuned model to update.
- update_mask¶
Required. The list of fields to update.
- class google.ai.generativelanguage_v1beta.types.VideoMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for a video File.
- video_duration¶
Duration of the video.