Types for Google AI Generative Language v1alpha API
- class google.ai.generativelanguage_v1alpha.types.AttributionSourceId(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Identifier for the source contributing to this attribution.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- semantic_retriever_chunk
Identifier for a Chunk fetched via Semantic Retriever.
This field is a member of oneof source.
- class GroundingPassageId(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Identifier for a part within a GroundingPassage.
- passage_id
Output only. ID of the passage matching the GenerateAnswerRequest's GroundingPassage.id.
- Type
str
- class SemanticRetrieverChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Identifier for a Chunk retrieved via the Semantic Retriever specified in the GenerateAnswerRequest using SemanticRetrieverConfig.
- source
Output only. Name of the source matching the request's SemanticRetrieverConfig.source. Example: corpora/123 or corpora/123/documents/abc
- Type
str
- class google.ai.generativelanguage_v1alpha.types.BatchCreateChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request to batch create Chunks.
- parent
Optional. The name of the Document where this batch of Chunks will be created. The parent field in every CreateChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
str
- requests
Required. The request messages specifying the Chunks to create. A maximum of 100 Chunks can be created in a batch.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.CreateChunkRequest]
- class google.ai.generativelanguage_v1alpha.types.BatchCreateChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Response from BatchCreateChunks containing a list of created Chunks.
- chunks
Chunks created.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Chunk]
- class google.ai.generativelanguage_v1alpha.types.BatchDeleteChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request to batch delete Chunks.
- parent
Optional. The name of the Document containing the Chunks to delete. The parent field in every DeleteChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
str
- requests
Required. The request messages specifying the Chunks to delete.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.DeleteChunkRequest]
- class google.ai.generativelanguage_v1alpha.types.BatchEmbedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Batch request to get embeddings from the model for a list of prompts.
- model
Required. The model's resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the ListModels method.
Format: models/{model}
- Type
str
- requests
Required. Embed requests for the batch. The model in each of these requests must match the model specified in BatchEmbedContentsRequest.model.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.EmbedContentRequest]
- class google.ai.generativelanguage_v1alpha.types.BatchEmbedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
The response to a BatchEmbedContentsRequest.
- embeddings
Output only. The embeddings for each request, in the same order as provided in the batch request.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.ContentEmbedding]
- class google.ai.generativelanguage_v1alpha.types.BatchEmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Batch request to get a text embedding from the model.
- model
Required. The name of the Model to use for generating the embedding. Example: models/embedding-gecko-001
- Type
str
- texts
Optional. The free-form input texts that the model will turn into embeddings. The current limit is 100 texts; exceeding it will produce an error.
- Type
MutableSequence[str]
- requests
Optional. Embed requests for the batch. Only one of texts or requests can be set.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.EmbedTextRequest]
- class google.ai.generativelanguage_v1alpha.types.BatchEmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
The response to an EmbedTextRequest.
- embeddings
Output only. The embeddings generated from the input text.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Embedding]
- class google.ai.generativelanguage_v1alpha.types.BatchUpdateChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request to batch update Chunks.
- parent
Optional. The name of the Document containing the Chunks to update. The parent field in every UpdateChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
str
- requests
Required. The request messages specifying the Chunks to update. A maximum of 100 Chunks can be updated in a batch.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.UpdateChunkRequest]
- class google.ai.generativelanguage_v1alpha.types.BatchUpdateChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Response from BatchUpdateChunks containing a list of updated Chunks.
- chunks
Chunks updated.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Chunk]
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentClientContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Incremental update of the current conversation delivered from the client. All of the content here is unconditionally appended to the conversation history and used as part of the prompt to the model to generate content.
A message here will interrupt any current model generation.
- turns
Optional. The content appended to the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history and the latest request.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Content]
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentClientMessage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Messages sent by the client in the BidiGenerateContent call.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- setup
Optional. Session configuration, sent in the first and only the first client message.
This field is a member of oneof message_type.
- client_content
Optional. Incremental update of the current conversation delivered from the client.
This field is a member of oneof message_type.
- realtime_input
Optional. User input that is sent in real time.
This field is a member of oneof message_type.
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentRealtimeInput(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
User input that is sent in real time.
This is different from BidiGenerateContentClientContent in a few ways:
- It can be sent continuously without interrupting model generation.
- If there is a need to mix data interleaved across BidiGenerateContentClientContent and BidiGenerateContentRealtimeInput, the server attempts to optimize for the best response, but there are no guarantees.
- The end of turn is not explicitly specified; it is derived from user activity (for example, end of speech).
- Even before the end of turn, the data is processed incrementally to optimize for a fast start of the model's response.
- It is always direct user input, sent in real time. It can be sent continuously without interruptions. The model automatically detects the beginning and end of user speech and starts or stops streaming the response accordingly. Data is processed incrementally as it arrives, minimizing latency.
- media_chunks
Optional. Inlined bytes data for media input.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Blob]
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentServerContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Incremental server update generated by the model in response to client messages.
Content is generated as quickly as possible, and not in real time. Clients may choose to buffer and play it out in real time.
- model_turn
Output only. The content that the model has generated as part of the current conversation with the user.
This field is a member of oneof _model_turn.
- turn_complete
Output only. If true, indicates that the model is done generating. Generation will only start in response to additional client messages. Can be set alongside content, indicating that the content is the last in the turn.
- Type
bool
- interrupted
Output only. If true, indicates that a client message has interrupted current model generation. If the client is playing out the content in real time, this is a good signal to stop and empty the current playback queue.
- Type
bool
- grounding_metadata
Output only. Grounding metadata for the generated content.
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentServerMessage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Response message for the BidiGenerateContent call.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- setup_complete
Output only. Sent in response to a BidiGenerateContentSetup message from the client when setup is complete.
This field is a member of oneof message_type.
- server_content
Output only. Content generated by the model in response to client messages.
This field is a member of oneof message_type.
- tool_call
Output only. Request for the client to execute the function_calls and return the responses with the matching ids.
This field is a member of oneof message_type.
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentSetup(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Message to be sent in the first and only the first BidiGenerateContentClientMessage. Contains configuration that will apply for the duration of the streaming RPC.
Clients should wait for a BidiGenerateContentSetupComplete message before sending any additional messages.
- model
Required. The model's resource name. This serves as an ID for the Model to use.
Format: models/{model}
- Type
str
- generation_config
Optional. Generation config.
The following fields are not supported:
- response_logprobs
- response_mime_type
- logprobs
- response_schema
- stop_sequence
- routing_config
- audio_timestamp
- system_instruction
Optional. The user-provided system instructions for the model. Note: only text should be used in parts, and content in each part will be in a separate paragraph.
- tools
Optional. A list of Tools the model may use to generate the next response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Tool]
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentSetupComplete(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Sent in response to a BidiGenerateContentSetup message from the client.
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentToolCall(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request for the client to execute the function_calls and return the responses with the matching ids.
- function_calls
Output only. The function calls to be executed.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.FunctionCall]
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentToolCallCancellation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Notification for the client that a previously issued ToolCallMessage with the specified ids should not have been executed and should be cancelled. If there were side effects to those tool calls, clients may attempt to undo the tool calls. This message occurs only in cases where the clients interrupt server turns.
- class google.ai.generativelanguage_v1alpha.types.BidiGenerateContentToolResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Client-generated response to a ToolCall received from the server. Individual FunctionResponse objects are matched to the respective FunctionCall objects by the id field.
Note that in the unary and server-streaming GenerateContent APIs, function calling happens by exchanging the Content parts, while in the bidi GenerateContent APIs, function calling happens over this dedicated set of messages.
- function_responses
Optional. The responses to the function calls.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.FunctionResponse]
- class google.ai.generativelanguage_v1alpha.types.Blob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Raw media bytes.
Text should not be sent as raw bytes; use the 'text' field.
- mime_type
The IANA standard MIME type of the source data. Examples:
- image/png
- image/jpeg
If an unsupported MIME type is provided, an error will be returned. For a complete list of supported types, see Supported file formats.
- Type
str
- class google.ai.generativelanguage_v1alpha.types.CachedContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Content that has been preprocessed and can be used in subsequent requests to GenerativeService.
Cached content can only be used with the model it was created for.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- expire_time
Timestamp in UTC of when this resource is considered expired. This is always provided on output, regardless of what was sent on input.
This field is a member of oneof expiration.
- name
Optional. Identifier. The resource name referring to the cached content. Format: cachedContents/{id}
This field is a member of oneof _name.
- Type
str
- display_name
Optional. Immutable. The user-generated meaningful display name of the cached content. Maximum 128 Unicode characters.
This field is a member of oneof _display_name.
- Type
str
- model
Required. Immutable. The name of the Model to use for cached content. Format: models/{model}
This field is a member of oneof _model.
- Type
str
- system_instruction
Optional. Input only. Immutable. Developer-set system instruction. Currently text only.
This field is a member of oneof _system_instruction.
- contents
Optional. Input only. Immutable. The content to cache.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Content]
- tools
Optional. Input only. Immutable. A list of Tools the model may use to generate the next response.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Tool]
- tool_config
Optional. Input only. Immutable. Tool config. This config is shared for all tools.
This field is a member of oneof _tool_config.
- create_time
Output only. Creation time of the cache entry.
- update_time
Output only. When the cache entry was last updated, in UTC time.
- usage_metadata
Output only. Metadata on the usage of the cached content.
- class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Metadata on the usage of the cached content.
- class google.ai.generativelanguage_v1alpha.types.Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A response candidate generated from the model.
- index
Output only. Index of the candidate in the list of response candidates.
This field is a member of oneof _index.
- Type
int
- content
Output only. Generated content returned from the model.
- finish_reason
Optional. Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.
- safety_ratings
List of ratings for the safety of a response candidate. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.SafetyRating]
- citation_metadata
Output only. Citation information for the model-generated candidate.
This field may be populated with recitation information for any text included in the content. These are passages that are "recited" from copyrighted material in the foundational LLM's training data.
- grounding_attributions
Output only. Attribution information for sources that contributed to a grounded answer.
This field is populated for GenerateAnswer calls.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.GroundingAttribution]
- grounding_metadata
Output only. Grounding metadata for the candidate.
This field is populated for GenerateContent calls.
- logprobs_result
Output only. Log-likelihood scores for the response tokens and top tokens.
- class FinishReason(value)[source]
Bases: proto.enums.Enum
Defines the reason why the model stopped generating tokens.
- Values:
- FINISH_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- STOP (1):
Natural stop point of the model or provided stop sequence.
- MAX_TOKENS (2):
The maximum number of tokens as specified in the request was reached.
- SAFETY (3):
The response candidate content was flagged for safety reasons.
- RECITATION (4):
The response candidate content was flagged for recitation reasons.
- LANGUAGE (6):
The response candidate content was flagged for using an unsupported language.
- OTHER (5):
Unknown reason.
- BLOCKLIST (7):
Token generation stopped because the content contains forbidden terms.
- PROHIBITED_CONTENT (8):
Token generation stopped for potentially containing prohibited content.
- SPII (9):
Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).
- MALFORMED_FUNCTION_CALL (10):
The function call generated by the model is invalid.
- IMAGE_SAFETY (11):
Token generation stopped because generated images contain safety violations.
- class google.ai.generativelanguage_v1alpha.types.Chunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A Chunk is a subpart of a Document that is treated as an independent unit for the purposes of vector representation and storage. A Corpus can have a maximum of 1 million Chunks.
- name
Immutable. Identifier. The Chunk resource name. The ID (name excluding the "corpora/*/documents/*/chunks/" prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a random 12-character unique ID will be generated. Example: corpora/{corpus_id}/documents/{document_id}/chunks/123a456b789c
- Type
str
- data
Required. The content for the Chunk, such as the text string. The maximum number of tokens per chunk is 2043.
- custom_metadata
Optional. User-provided custom metadata stored as key-value pairs. The maximum number of CustomMetadata per chunk is 20.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.CustomMetadata]
- create_time
Output only. The Timestamp of when the Chunk was created.
- update_time
Output only. The Timestamp of when the Chunk was last updated.
- state
Output only. Current state of the Chunk.
- class State(value)[source]
Bases: proto.enums.Enum
States for the lifecycle of a Chunk.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is used if the state is omitted.
- STATE_PENDING_PROCESSING (1):
Chunk is being processed (embedding and vector storage).
- STATE_ACTIVE (2):
Chunk is processed and available for querying.
- STATE_FAILED (10):
Chunk failed processing.
- class google.ai.generativelanguage_v1alpha.types.ChunkData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Extracted data that represents the Chunk content.
- class google.ai.generativelanguage_v1alpha.types.CitationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A collection of source attributions for a piece of content.
- citation_sources
Citations to sources for a specific response.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.CitationSource]
- class google.ai.generativelanguage_v1alpha.types.CitationSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A citation to a source for a portion of a specific response.
- start_index
Optional. Start of the segment of the response that is attributed to this source.
The index indicates the start of the segment, measured in bytes.
This field is a member of oneof _start_index.
- Type
int
- end_index
Optional. End of the attributed segment, exclusive.
This field is a member of oneof _end_index.
- Type
int
- class google.ai.generativelanguage_v1alpha.types.CodeExecution(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Tool that executes code generated by the model, and automatically returns the result to the model.
See also ExecutableCode and CodeExecutionResult, which are only generated when using this tool.
- class google.ai.generativelanguage_v1alpha.types.CodeExecutionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Result of executing the ExecutableCode.
Only generated when using CodeExecution, and always follows a part containing the ExecutableCode.
- outcome
Required. Outcome of the code execution.
- output
Optional. Contains stdout when code execution is successful, stderr or another description otherwise.
- Type
str
- class Outcome(value)[source]
Bases: proto.enums.Enum
Enumeration of possible outcomes of the code execution.
- Values:
- OUTCOME_UNSPECIFIED (0):
Unspecified status. This value should not be used.
- OUTCOME_OK (1):
Code execution completed successfully.
- OUTCOME_FAILED (2):
Code execution finished but with a failure. stderr should contain the reason.
- OUTCOME_DEADLINE_EXCEEDED (3):
Code execution ran for too long, and was cancelled. There may or may not be a partial output present.
- class google.ai.generativelanguage_v1alpha.types.Condition(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Filter condition applicable to a single key.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- string_value
The string value to filter the metadata on.
This field is a member of oneof value.
- Type
str
- numeric_value
The numeric value to filter the metadata on.
This field is a member of oneof value.
- Type
float
- operation
Required. Operator applied to the given key-value pair to trigger the condition.
- class Operator(value)[source]
Bases: proto.enums.Enum
Defines the valid operators that can be applied to a key-value pair.
- Values:
- OPERATOR_UNSPECIFIED (0):
The default value. This value is unused.
- LESS (1):
Supported by numeric.
- LESS_EQUAL (2):
Supported by numeric.
- EQUAL (3):
Supported by numeric & string.
- GREATER_EQUAL (4):
Supported by numeric.
- GREATER (5):
Supported by numeric.
- NOT_EQUAL (6):
Supported by numeric & string.
- INCLUDES (7):
Supported by string only when the CustomMetadata value type for the given key has a string_list_value.
- EXCLUDES (8):
Supported by string only when the CustomMetadata value type for the given key has a string_list_value.
- class google.ai.generativelanguage_v1alpha.types.Content(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
The base structured datatype containing multi-part content of a message.
A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.
- parts
Ordered Parts that constitute a single message. Parts may have different MIME types.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Part]
- class google.ai.generativelanguage_v1alpha.types.ContentEmbedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A list of floats representing an embedding.
- class google.ai.generativelanguage_v1alpha.types.ContentFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.
- reason
The reason content was blocked during request processing.
- message
A string that describes the filtering behavior in more detail.
This field is a member of oneof _message.
- Type
str
- class BlockedReason(value)[source]
Bases: proto.enums.Enum
A list of reasons why content may have been blocked.
- Values:
- BLOCKED_REASON_UNSPECIFIED (0):
A blocked reason was not specified.
- SAFETY (1):
Content was blocked by safety settings.
- OTHER (2):
Content was blocked, but the reason is uncategorized.
- class google.ai.generativelanguage_v1alpha.types.Corpus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A Corpus is a collection of Documents. A project can create up to 5 corpora.
- name
Immutable. Identifier. The Corpus resource name. The ID (name excluding the "corpora/" prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be derived from display_name along with a 12-character random suffix. Example: corpora/my-awesome-corpora-123a456b789c
- Type
str
- display_name
Optional. The human-readable display name for the Corpus. The display name must be no more than 512 characters in length, including spaces. Example: "Docs on Semantic Retriever".
- Type
str
- create_time
Output only. The Timestamp of when the Corpus was created.
- update_time
Output only. The Timestamp of when the Corpus was last updated.
- class google.ai.generativelanguage_v1alpha.types.CountMessageTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Counts the number of tokens in the prompt sent to a model.
Models may tokenize text differently, so each model may return a different token_count.
- model
Required. The model's resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the ListModels method.
Format: models/{model}
- Type
str
- prompt
Required. The prompt whose token count is to be returned.
- class google.ai.generativelanguage_v1alpha.types.CountMessageTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A response from CountMessageTokens.
It returns the model's token_count for the prompt.
- class google.ai.generativelanguage_v1alpha.types.CountTextTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Counts the number of tokens in the prompt sent to a model.
Models may tokenize text differently, so each model may return a different token_count.
- model
Required. The model's resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the ListModels method.
Format: models/{model}
- Type
str
- prompt
Required. The free-form input text given to the model as a prompt.
- class google.ai.generativelanguage_v1alpha.types.CountTextTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A response from CountTextTokens.
It returns the model's token_count for the prompt.
- class google.ai.generativelanguage_v1alpha.types.CountTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Counts the number of tokens in the prompt sent to a model.
Models may tokenize text differently, so each model may return a different token_count.
- model
Required. The model's resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the ListModels method.
Format: models/{model}
- Type
str
- contents
Optional. The input given to the model as a prompt. This field is ignored when generate_content_request is set.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Content]
- generate_content_request
Optional. The overall input given to the Model. This includes the prompt as well as other model steering information like system instructions and/or function declarations for function calling. Model + Contents and generate_content_request are mutually exclusive. You can either send Model + Contents or a generate_content_request, but never both.
- class google.ai.generativelanguage_v1alpha.types.CountTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A response from CountTokens.
It returns the model's token_count for the prompt.
- total_tokens
The number of tokens that the Model tokenizes the prompt into. Always non-negative.
- Type
int
- class google.ai.generativelanguage_v1alpha.types.CreateCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request to create CachedContent.
- cached_content
Required. The cached content to create.
- class google.ai.generativelanguage_v1alpha.types.CreateChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request to create a Chunk.
- parent
Required. The name of the Document where this Chunk will be created. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
str
- chunk
Required. The Chunk to create.
- class google.ai.generativelanguage_v1alpha.types.CreateCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request to create a Corpus.
- corpus
Required. The Corpus to create.
- class google.ai.generativelanguage_v1alpha.types.CreateDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request to create a Document.
- parent
Required. The name of the Corpus where this Document will be created. Example: corpora/my-corpus-123
- Type
str
- document
Required. The Document to create.
- class google.ai.generativelanguage_v1alpha.types.CreateFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Request for CreateFile.
- file
Optional. Metadata for the file to create.
- class google.ai.generativelanguage_v1alpha.types.CreateFileResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
Response for CreateFile.
- file
Metadata for the created file.
- class google.ai.generativelanguage_v1alpha.types.CreatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to create a
Permission.- parent¶
Required. The parent resource of the
Permission. Formats:tunedModels/{tuned_model}corpora/{corpus}- Type
- permission¶
Required. The permission to create.
- class google.ai.generativelanguage_v1alpha.types.CreateTunedModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata about the state and progress of creating a tuned model, returned from the long-running operation.
- snapshots¶
Metrics collected during tuning.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.TuningSnapshot]
- class google.ai.generativelanguage_v1alpha.types.CreateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to create a TunedModel.
- tuned_model_id¶
Optional. The unique id for the tuned model, if specified. This value should be up to 40 characters; the first character must be a letter, and the last can be a letter or a number. The id must match the regular expression:
[a-z]([a-z0-9-]{0,38}[a-z0-9])?.This field is a member of oneof
_tuned_model_id.- Type
- tuned_model¶
Required. The tuned model to create.
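The documented pattern for tuned_model_id can be checked client-side before sending the request. A small sketch using Python's re module; the constant and helper names are ours:

```python
import re

# Documented constraint: up to 40 characters, starts with a lowercase
# letter, ends with a lowercase letter or digit.
TUNED_MODEL_ID_RE = re.compile(r"[a-z]([a-z0-9-]{0,38}[a-z0-9])?")

def is_valid_tuned_model_id(candidate: str) -> bool:
    """True if candidate satisfies the documented tuned_model_id pattern."""
    return TUNED_MODEL_ID_RE.fullmatch(candidate) is not None
```

For example, an id like "sentence-translator-u3b7m" passes, while ids that start with a digit, end with a dash, or exceed 40 characters do not.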
- class google.ai.generativelanguage_v1alpha.types.CustomMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageUser provided metadata stored as key-value pairs.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- string_value¶
The string value of the metadata to store.
This field is a member of oneof
value.- Type
- string_list_value¶
The StringList value of the metadata to store.
This field is a member of oneof
value.
- class google.ai.generativelanguage_v1alpha.types.Dataset(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageDataset for training or validation.
- class google.ai.generativelanguage_v1alpha.types.DeleteCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete CachedContent.
- class google.ai.generativelanguage_v1alpha.types.DeleteChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete a
Chunk.
- class google.ai.generativelanguage_v1alpha.types.DeleteCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete a
Corpus.
- class google.ai.generativelanguage_v1alpha.types.DeleteDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete a
Document.- name¶
Required. The resource name of the
Documentto delete. Example:corpora/my-corpus-123/documents/the-doc-abc- Type
- class google.ai.generativelanguage_v1alpha.types.DeleteFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for
DeleteFile.
- class google.ai.generativelanguage_v1alpha.types.DeletePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete the
Permission.
- class google.ai.generativelanguage_v1alpha.types.DeleteTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete a TunedModel.
- class google.ai.generativelanguage_v1alpha.types.Document(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA
Documentis a collection ofChunks. ACorpuscan have a maximum of 10,000Documents.- name¶
Immutable. Identifier. The
Documentresource name. The ID (name excluding the “corpora/*/documents/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be derived fromdisplay_namealong with a 12 character random suffix. Example:corpora/{corpus_id}/documents/my-awesome-doc-123a456b789c- Type
- display_name¶
Optional. The human-readable display name for the
Document. The display name must be no more than 512 characters in length, including spaces. Example: “Semantic Retriever Documentation”.- Type
- custom_metadata¶
Optional. User provided custom metadata stored as key-value pairs used for querying. A
Documentcan have a maximum of 20CustomMetadata.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.CustomMetadata]
- update_time¶
Output only. The Timestamp of when the
Documentwas last updated.
- create_time¶
Output only. The Timestamp of when the
Documentwas created.
- class google.ai.generativelanguage_v1alpha.types.DynamicRetrievalConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageDescribes the options to customize dynamic retrieval.
- mode¶
The mode of the predictor to be used in dynamic retrieval.
- dynamic_threshold¶
The threshold to be used in dynamic retrieval. If not set, a system default value is used.
This field is a member of oneof
_dynamic_threshold.- Type
- class Mode(value)[source]¶
Bases:
proto.enums.EnumThe mode of the predictor to be used in dynamic retrieval.
- Values:
- MODE_UNSPECIFIED (0):
Always trigger retrieval.
- MODE_DYNAMIC (1):
Run retrieval only when system decides it is necessary.
- class google.ai.generativelanguage_v1alpha.types.EmbedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest containing the
Contentfor the model to embed.- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the
ListModelsmethod.Format:
models/{model}- Type
- content¶
Required. The content to embed. Only the
parts.textfields will be counted.
- task_type¶
Optional. The task type for which the embeddings will be used. Can only be set for
models/embedding-001.This field is a member of oneof
_task_type.
- title¶
Optional. An optional title for the text. Only applicable when TaskType is
RETRIEVAL_DOCUMENT.Note: Specifying a
titleforRETRIEVAL_DOCUMENTprovides better quality embeddings for retrieval.This field is a member of oneof
_title.- Type
- output_dimensionality¶
Optional. Reduced dimension for the output embedding. If set, excess values in the output embedding are truncated from the end. Supported by newer models since 2024 only. You cannot set this value when using the earlier model (
models/embedding-001).This field is a member of oneof
_output_dimensionality.- Type
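Putting the fields above together, an EmbedContentRequest mapping can be sketched as below. The helper is ours; it couples title with task_type RETRIEVAL_DOCUMENT, since the docs say title only applies to that task type.

```python
from typing import Optional

# Hypothetical helper: builds the mapping for an EmbedContentRequest.
def build_embed_content_request(
    model: str,
    text: str,
    title: Optional[str] = None,
    output_dimensionality: Optional[int] = None,
) -> dict:
    request = {
        "model": model,  # format: "models/{model}"
        "content": {"parts": [{"text": text}]},  # only parts.text is embedded
    }
    if title is not None:
        # title is only applicable when task_type is RETRIEVAL_DOCUMENT.
        request["task_type"] = "RETRIEVAL_DOCUMENT"
        request["title"] = title
    if output_dimensionality is not None:
        # Excess values in the output embedding are truncated from the end.
        request["output_dimensionality"] = output_dimensionality
    return request
```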
- class google.ai.generativelanguage_v1alpha.types.EmbedContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe response to an
EmbedContentRequest.- embedding¶
Output only. The embedding generated from the input content.
- class google.ai.generativelanguage_v1alpha.types.EmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to get a text embedding from the model.
- class google.ai.generativelanguage_v1alpha.types.EmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe response to a EmbedTextRequest.
- class google.ai.generativelanguage_v1alpha.types.Embedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA list of floats representing the embedding.
- class google.ai.generativelanguage_v1alpha.types.Example(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageAn input/output example used to instruct the Model.
It demonstrates how the model should respond or format its response.
- input¶
Required. An example of an input
Messagefrom the user.
- output¶
Required. An example of what the model should output given the input.
- class google.ai.generativelanguage_v1alpha.types.ExecutableCode(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageCode generated by the model that is meant to be executed, and the result returned to the model.
Only generated when using the
CodeExecution tool, in which case the code will be automatically executed and a corresponding CodeExecutionResult will also be generated.- language¶
Required. Programming language of the
code.
- class Language(value)[source]¶
Bases:
proto.enums.EnumSupported programming languages for the generated code.
- Values:
- LANGUAGE_UNSPECIFIED (0):
Unspecified language. This value should not be used.
- PYTHON (1):
Python >= 3.10, with numpy and simpy available.
- class google.ai.generativelanguage_v1alpha.types.File(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA file uploaded to the API.
- name¶
Immutable. Identifier. The
Fileresource name. The ID (name excluding the “files/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be generated. Example:files/123-456- Type
- display_name¶
Optional. The human-readable display name for the
File. The display name must be no more than 512 characters in length, including spaces. Example: “Welcome Image”.- Type
- create_time¶
Output only. The timestamp of when the
Filewas created.
- update_time¶
Output only. The timestamp of when the
Filewas last updated.
- expiration_time¶
Output only. The timestamp of when the
Filewill be deleted. Only set if theFileis scheduled to expire.
- state¶
Output only. Processing state of the File.
- error¶
Output only. Error status if File processing failed.
- Type
google.rpc.status_pb2.Status
- class State(value)[source]¶
Bases:
proto.enums.EnumStates for the lifecycle of a File.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is used if the state is omitted.
- PROCESSING (1):
File is being processed and cannot be used for inference yet.
- ACTIVE (2):
File is processed and available for inference.
- FAILED (10):
File failed processing.
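The state machine above suggests a simple polling loop: wait while the File is PROCESSING, then act on the resulting state. A sketch with a pluggable get_state callable; with the real client, get_state would wrap a get-file call and read the state name, but that wiring is assumed here, not shown.

```python
import time

def wait_until_not_processing(get_state, poll_seconds: float = 2.0,
                              max_attempts: int = 30) -> str:
    """Poll get_state() until the File leaves PROCESSING.

    get_state is any zero-argument callable returning one of the
    documented state names; the return value is the first state that
    is not PROCESSING (ACTIVE, FAILED, or STATE_UNSPECIFIED).
    """
    for _ in range(max_attempts):
        state = get_state()
        if state != "PROCESSING":
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("File still PROCESSING after polling")
```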
- class google.ai.generativelanguage_v1alpha.types.FileData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageURI based data.
- class google.ai.generativelanguage_v1alpha.types.FunctionCall(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA predicted
FunctionCallreturned from the model that contains a string representing theFunctionDeclaration.namewith the arguments and their values.- id¶
Optional. The unique id of the function call. If populated, the client should execute the
function_call and return the response with the matching id.- Type
- name¶
Required. The name of the function to call. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 63.
- Type
- class google.ai.generativelanguage_v1alpha.types.FunctionCallingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageConfiguration for specifying function calling behavior.
- mode¶
Optional. Specifies the mode in which function calling should execute. If unspecified, the default value will be set to AUTO.
- allowed_function_names¶
Optional. A set of function names that, when provided, limits the functions the model will call.
This should only be set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, the model will predict a function call from the set of function names provided.
- Type
MutableSequence[str]
- class Mode(value)[source]¶
Bases:
proto.enums.EnumDefines the execution mode for function calling.
- Values:
- MODE_UNSPECIFIED (0):
Unspecified function calling mode. This value should not be used.
- AUTO (1):
Default model behavior, model decides to predict either a function call or a natural language response.
- ANY (2):
Model is constrained to always predicting a function call only. If “allowed_function_names” are set, the predicted function call will be limited to any one of “allowed_function_names”, else the predicted function call will be any one of the provided “function_declarations”.
- NONE (3):
Model will not predict any function call. Model behavior is same as when not passing any function declarations.
- class google.ai.generativelanguage_v1alpha.types.FunctionDeclaration(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageStructured representation of a function declaration as defined by the OpenAPI 3.0.3 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a
Toolby the model and executed by the client.- name¶
Required. The name of the function. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 63.
- Type
- parameters¶
Optional. Describes the parameters to this function. Reflects the OpenAPI 3.0.3 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter.
This field is a member of oneof
_parameters.
- class google.ai.generativelanguage_v1alpha.types.FunctionResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe result output from a
FunctionCall that contains a string representing the FunctionDeclaration.name and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a FunctionCall made based on model prediction.- id¶
Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call
id.- Type
- name¶
Required. The name of the function to call. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 63.
- Type
- response¶
Required. The function response in JSON object format.
- class google.ai.generativelanguage_v1alpha.types.GenerateAnswerRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to generate a grounded answer from the
Model.This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- inline_passages¶
Passages provided inline with the request.
This field is a member of oneof
grounding_source.
- semantic_retriever¶
Content retrieved from resources created via the Semantic Retriever API.
This field is a member of oneof
grounding_source.
- model¶
Required. The name of the
Modelto use for generating the grounded response.Format:
model=models/{model}.- Type
- contents¶
Required. The content of the current conversation with the
Model. For single-turn queries, this is a single question to answer. For multi-turn queries, this is a repeated field that contains conversation history and the lastContentin the list containing the question.Note:
GenerateAnsweronly supports queries in English.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Content]
- answer_style¶
Required. Style in which answers should be returned.
- safety_settings¶
Optional. A list of unique
SafetySettinginstances for blocking unsafe content.This will be enforced on the
GenerateAnswerRequest.contentsandGenerateAnswerResponse.candidate. There should not be more than one setting for eachSafetyCategorytype. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for eachSafetyCategoryspecified in the safety_settings. If there is noSafetySettingfor a givenSafetyCategoryprovided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.SafetySetting]
- temperature¶
Optional. Controls the randomness of the output.
Values can range from [0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model. A low temperature (~0.2) is usually recommended for Attributed-Question-Answering use cases.
This field is a member of oneof
_temperature.- Type
- class AnswerStyle(value)[source]¶
Bases:
proto.enums.EnumStyle for grounded answers.
- Values:
- ANSWER_STYLE_UNSPECIFIED (0):
Unspecified answer style.
- ABSTRACTIVE (1):
Succinct but abstract style.
- EXTRACTIVE (2):
Very brief and extractive style.
- VERBOSE (3):
Verbose style including extra details. The response may be formatted as a sentence, paragraph, multiple paragraphs, or bullet points, etc.
- class google.ai.generativelanguage_v1alpha.types.GenerateAnswerResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from the model for a grounded answer.
- answer¶
Candidate answer from the model.
Note: The model always attempts to provide a grounded answer, even when the answer is unlikely to be answerable from the given passages. In that case, a low-quality or ungrounded answer may be provided, along with a low
answerable_probability.
- answerable_probability¶
Output only. The model’s estimate of the probability that its answer is correct and grounded in the input passages.
A low
answerable_probabilityindicates that the answer might not be grounded in the sources.When
answerable_probabilityis low, you may want to:Display a message to the effect of “We couldn’t answer that question” to the user.
Fall back to a general-purpose LLM that answers the question from world knowledge. The threshold and nature of such fallbacks will depend on individual use cases.
0.5is a good starting threshold.
This field is a member of oneof
_answerable_probability.- Type
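The fallback guidance above can be sketched as a small dispatcher. The 0.5 threshold and the "couldn't answer" message come from the documentation; the function itself is ours, and a call to a general-purpose LLM could replace the fallback string.

```python
FALLBACK_THRESHOLD = 0.5  # documented starting point; tune per use case

def choose_answer(answer_text: str, answerable_probability: float) -> str:
    """Return the grounded answer when the model judges it answerable;
    otherwise surface a "couldn't answer" message (or fall back to a
    general-purpose LLM instead)."""
    if answerable_probability >= FALLBACK_THRESHOLD:
        return answer_text
    return "We couldn't answer that question."
```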
- input_feedback¶
Output only. Feedback related to the input data used to answer the question, as opposed to the model-generated response to the question.
The input data can be one or more of the following:
Question specified by the last entry in
GenerateAnswerRequest.contentConversation history specified by the other entries in
GenerateAnswerRequest.contentGrounding sources (
GenerateAnswerRequest.semantic_retrieverorGenerateAnswerRequest.inline_passages)
This field is a member of oneof
_input_feedback.
- class InputFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageFeedback related to the input data used to answer the question, as opposed to the model-generated response to the question.
- block_reason¶
Optional. If set, the input was blocked and no candidates are returned. Rephrase the input.
This field is a member of oneof
_block_reason.
- safety_ratings¶
Ratings for safety of the input. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.SafetyRating]
- class BlockReason(value)[source]¶
Bases:
proto.enums.EnumSpecifies the reason why the input was blocked.
- Values:
- BLOCK_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- SAFETY (1):
Input was blocked due to safety reasons. Inspect
safety_ratingsto understand which safety category blocked it.- OTHER (2):
Input was blocked due to other reasons.
- class google.ai.generativelanguage_v1alpha.types.GenerateContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to generate a completion from the model.
- model¶
Required. The name of the
Modelto use for generating the completion.Format:
models/{model}.- Type
- system_instruction¶
Optional. Developer set system instruction(s). Currently, text only.
This field is a member of oneof
_system_instruction.
- contents¶
Required. The content of the current conversation with the model.
For single-turn queries, this is a single instance. For multi-turn queries like chat, this is a repeated field that contains the conversation history and the latest request.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Content]
- tools¶
Optional. A list of
ToolstheModelmay use to generate the next response.A
Toolis a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of theModel. SupportedTools areFunctionandcode_execution. Refer to the Function calling and the Code execution guides to learn more.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Tool]
- tool_config¶
Optional. Tool configuration for any
Toolspecified in the request. Refer to the Function calling guide for a usage example.
- safety_settings¶
Optional. A list of unique
SafetySettinginstances for blocking unsafe content.This will be enforced on the
GenerateContentRequest.contentsandGenerateContentResponse.candidates. There should not be more than one setting for eachSafetyCategorytype. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for eachSafetyCategoryspecified in the safety_settings. If there is noSafetySettingfor a givenSafetyCategoryprovided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_CIVIC_INTEGRITY are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.SafetySetting]
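Putting the required and common optional fields together, a single-turn GenerateContentRequest mapping might look like this. A sketch only: proto-plus messages accept dict mappings, and the helper is ours.

```python
from typing import Optional

# Hypothetical helper: builds the mapping for a GenerateContentRequest.
def build_generate_content_request(model: str, user_text: str,
                                   system_text: Optional[str] = None) -> dict:
    request = {
        "model": model,  # format: "models/{model}"
        # Single-turn: one Content; multi-turn chat would append the
        # conversation history to this list.
        "contents": [{"role": "user", "parts": [{"text": user_text}]}],
    }
    if system_text is not None:
        # system_instruction is currently text only.
        request["system_instruction"] = {"parts": [{"text": system_text}]}
    return request
```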
- class google.ai.generativelanguage_v1alpha.types.GenerateContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from the model supporting multiple candidate responses.
Safety ratings and content filtering are reported both for the prompt in
GenerateContentResponse.prompt_feedbackand for each candidate infinish_reasonand insafety_ratings. The API:Returns either all requested candidates or none of them
Returns no candidates at all only if there was something wrong with the prompt (check
prompt_feedback)Reports feedback on each candidate in
finish_reasonandsafety_ratings.
- candidates¶
Candidate responses from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Candidate]
- prompt_feedback¶
Returns the prompt’s feedback related to the content filters.
- usage_metadata¶
Output only. Metadata on the generation requests’ token usage.
- class PromptFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA set of feedback metadata for the prompt specified in
GenerateContentRequest.content.- block_reason¶
Optional. If set, the prompt was blocked and no candidates are returned. Rephrase the prompt.
- safety_ratings¶
Ratings for safety of the prompt. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.SafetyRating]
- class BlockReason(value)[source]¶
Bases:
proto.enums.EnumSpecifies the reason why the prompt was blocked.
- Values:
- BLOCK_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- SAFETY (1):
Prompt was blocked due to safety reasons. Inspect
safety_ratingsto understand which safety category blocked it.- OTHER (2):
Prompt was blocked due to unknown reasons.
- BLOCKLIST (3):
Prompt was blocked due to the terms which are included from the terminology blocklist.
- PROHIBITED_CONTENT (4):
Prompt was blocked due to prohibited content.
- IMAGE_SAFETY (5):
Candidates blocked due to unsafe image generation content.
- class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata on the generation request’s token usage.
- prompt_token_count¶
Number of tokens in the prompt. When
cached_content is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.- Type
- cached_content_token_count¶
Number of tokens in the cached part of the prompt (the cached content)
- Type
- candidates_token_count¶
Total number of tokens across all the generated response candidates.
- Type
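Since prompt_token_count is the total effective prompt size including cached tokens, the freshly processed portion of the prompt is the difference of the two counts. A one-line helper (ours, not part of the API) makes that relationship explicit:

```python
def uncached_prompt_tokens(prompt_token_count: int,
                           cached_content_token_count: int) -> int:
    """Tokens of the prompt that were not served from the cached content."""
    return prompt_token_count - cached_content_token_count
```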
- class google.ai.generativelanguage_v1alpha.types.GenerateMessageRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to generate a message response from the model.
- prompt¶
Required. The structured textual input given to the model as a prompt. Given a prompt, the model will return what it predicts is the next message in the discussion.
- temperature¶
Optional. Controls the randomness of the output.
Values can range over
[0.0,1.0], inclusive. A value closer to1.0will produce responses that are more varied, while a value closer to0.0will typically result in less surprising responses from the model.This field is a member of oneof
_temperature.- Type
- candidate_count¶
Optional. The number of generated response messages to return.
This value must be between
[1, 8], inclusive. If unset, this will default to1.This field is a member of oneof
_candidate_count.- Type
- class google.ai.generativelanguage_v1alpha.types.GenerateMessageResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe response from the model.
This includes candidate messages and conversation history in the form of chronologically-ordered messages.
- candidates¶
Candidate response messages from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Message]
- messages¶
The conversation history used by the model.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Message]
- filters¶
A set of content filtering metadata for the prompt and response text.
This indicates which
SafetyCategory(s) blocked a candidate from this response, the lowestHarmProbabilitythat triggered a block, and the HarmThreshold setting for that category.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.ContentFilter]
- class google.ai.generativelanguage_v1alpha.types.GenerateTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to generate a text completion response from the model.
- model¶
Required. The name of the
ModelorTunedModelto use for generating the completion. Examples: models/text-bison-001 tunedModels/sentence-translator-u3b7m- Type
- prompt¶
Required. The free-form input text given to the model as a prompt. Given a prompt, the model will generate a TextCompletion response it predicts as the completion of the input text.
- temperature¶
Optional. Controls the randomness of the output. Note: The default value varies by model, see the
Model.temperature attribute of the Model returned from the getModel function. Values can range from [0.0, 1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model.
This field is a member of oneof
_temperature.- Type
- candidate_count¶
Optional. Number of generated responses to return.
This value must be between [1, 8], inclusive. If unset, this will default to 1.
This field is a member of oneof
_candidate_count.- Type
- max_output_tokens¶
Optional. The maximum number of tokens to include in a candidate.
If unset, this will default to output_token_limit specified in the
Modelspecification.This field is a member of oneof
_max_output_tokens.- Type
- top_p¶
Optional. The maximum cumulative probability of tokens to consider when sampling.
The model uses combined Top-k and nucleus sampling.
Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits number of tokens based on the cumulative probability.
Note: The default value varies by model, see the
Model.top_p attribute of the Model returned from the getModel function. This field is a member of oneof
_top_p.- Type
- top_k¶
Optional. The maximum number of tokens to consider when sampling.
The model uses combined Top-k and nucleus sampling.
Top-k sampling considers the set of
top_kmost probable tokens. Defaults to 40.Note: The default value varies by model, see the
Model.top_k attribute of the Model returned from the getModel function. This field is a member of oneof
_top_k.- Type
- safety_settings¶
Optional. A list of unique
SafetySettinginstances for blocking unsafe content.that will be enforced on the
GenerateTextRequest.promptandGenerateTextResponse.candidates. There should not be more than one setting for eachSafetyCategorytype. The API will block any prompts and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for eachSafetyCategoryspecified in the safety_settings. If there is noSafetySettingfor a givenSafetyCategoryprovided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_DEROGATORY, HARM_CATEGORY_TOXICITY, HARM_CATEGORY_VIOLENCE, HARM_CATEGORY_SEXUAL, HARM_CATEGORY_MEDICAL, HARM_CATEGORY_DANGEROUS are supported in text service.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.SafetySetting]
- class google.ai.generativelanguage_v1alpha.types.GenerateTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe response from the model, including candidate completions.
- candidates¶
Candidate responses from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.TextCompletion]
- filters¶
A set of content filtering metadata for the prompt and response text.
This indicates which
SafetyCategory(s) blocked a candidate from this response, the lowestHarmProbabilitythat triggered a block, and the HarmThreshold setting for that category. This indicates the smallest change to theSafetySettingsthat would be necessary to unblock at least 1 response.The blocking is configured by the
SafetySettingsin the request (or the defaultSafetySettingsof the API).- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.ContentFilter]
- safety_feedback¶
Returns any safety feedback related to content filtering.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.SafetyFeedback]
- class google.ai.generativelanguage_v1alpha.types.GenerationConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageConfiguration options for model generation and outputs. Not all parameters are configurable for every model.
- candidate_count¶
Optional. Number of generated responses to return. Currently, this value can only be set to 1. If unset, this will default to 1.
This field is a member of oneof
_candidate_count.- Type
- stop_sequences¶
Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a
stop_sequence. The stop sequence will not be included as part of the response.- Type
MutableSequence[str]
- max_output_tokens¶
Optional. The maximum number of tokens to include in a response candidate.
Note: The default value varies by model, see the
Model.output_token_limit attribute of the Model returned from the getModel function. This field is a member of oneof
_max_output_tokens.- Type
- temperature¶
Optional. Controls the randomness of the output.
Note: The default value varies by model, see the
Model.temperature attribute of the Model returned from the getModel function. Values can range from [0.0, 2.0].
This field is a member of oneof
_temperature.- Type
- top_p¶
Optional. The maximum cumulative probability of tokens to consider when sampling.
The model uses combined Top-k and Top-p (nucleus) sampling.
Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits the number of tokens based on the cumulative probability.
Note: The default value varies by
Model and is specified by the Model.top_p attribute returned from the getModel function. An empty top_k attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow setting top_k on requests. This field is a member of oneof
_top_p.- Type
- top_k¶
Optional. The maximum number of tokens to consider when sampling.
Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of
top_k most probable tokens. Models running with nucleus sampling don’t allow a top_k setting. Note: The default value varies by
Model and is specified by the Model.top_k attribute returned from the getModel function. An empty top_k attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow setting top_k on requests. This field is a member of oneof
_top_k.- Type
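The combined Top-k and Top-p behavior described above can be sketched in plain Python. This is illustrative only: the actual sampling runs model-side, and `filter_tokens` is a hypothetical helper, not part of this library.

```python
def filter_tokens(probs, top_k=None, top_p=None):
    """Return the tokens that survive Top-k then Top-p (nucleus) filtering.

    probs maps token -> probability. Top-k keeps the k most probable
    tokens; Top-p then keeps the smallest prefix (in descending
    probability order) whose cumulative probability reaches top_p.
    """
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]            # cap the number of candidates
    if top_p is not None:
        kept, cumulative = [], 0.0
        for token, p in items:
            kept.append((token, p))
            cumulative += p
            if cumulative >= top_p:      # stop once the mass reaches top_p
                break
        items = kept
    return [token for token, _ in items]

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "quark": 0.05}
print(filter_tokens(probs, top_k=3, top_p=0.8))  # ['the', 'a']
```

With `top_k=3` the candidate set is trimmed to the three most probable tokens first; nucleus filtering then keeps only enough of them to cover a cumulative probability of 0.8.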
- response_mime_type¶
Optional. MIME type of the generated candidate text. Supported MIME types are:
text/plain: (default) Text output.
application/json: JSON response in the response candidates.
text/x.enum: ENUM as a string response in the response candidates.
Refer to the docs for a list of all supported text MIME types.- Type
- response_schema¶
Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives or arrays.
If set, a compatible
response_mime_type must also be set. Compatible MIME types: application/json: Schema for JSON response. Refer to the JSON text generation guide for more details.
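As a sketch of how these two fields combine, the plain-dict fragment below mirrors the documented constraint that a schema requires a compatible response_mime_type. The schema contents (recipe_name and the uppercase type names) are illustrative assumptions, not an exhaustive or authoritative schema.

```python
# Illustrative GenerationConfig fragment: a response_schema is only
# honored together with a compatible response_mime_type.
generation_config = {
    "response_mime_type": "application/json",
    "response_schema": {              # OpenAPI-subset schema (sketch)
        "type": "ARRAY",
        "items": {
            "type": "OBJECT",
            "properties": {"recipe_name": {"type": "STRING"}},
        },
    },
}
```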
- presence_penalty¶
Optional. Presence penalty applied to the next token’s logprobs if the token has already been seen in the response.
This penalty is binary on/off and not dependent on the number of times the token is used (after the first). Use [frequency_penalty][google.ai.generativelanguage.v1alpha.GenerationConfig.frequency_penalty] for a penalty that increases with each use.
A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary.
A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary.
This field is a member of oneof
_presence_penalty.- Type
- frequency_penalty¶
Optional. Frequency penalty applied to the next token’s logprobs, multiplied by the number of times each token has been seen in the response so far.
A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: the more a token is used, the more difficult it is for the model to use that token again, increasing the vocabulary of responses.
Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the [max_output_tokens][google.ai.generativelanguage.v1alpha.GenerationConfig.max_output_tokens] limit.
This field is a member of oneof
_frequency_penalty.- Type
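The presence/frequency semantics above can be sketched as a plain-Python adjustment of logprobs. `apply_penalties` is a hypothetical helper illustrating the documented behavior, not part of the library; the actual adjustment happens model-side.

```python
def apply_penalties(logprobs, counts, presence_penalty=0.0, frequency_penalty=0.0):
    """Adjust next-token logprobs per the documented penalty semantics.

    counts maps token -> number of times it has already appeared in the
    response. presence_penalty is applied once to any token seen at all;
    frequency_penalty scales with the number of uses.
    """
    adjusted = {}
    for token, lp in logprobs.items():
        seen = counts.get(token, 0)
        if seen > 0:
            lp -= presence_penalty        # binary on/off per token
        lp -= frequency_penalty * seen    # grows with each repeated use
        adjusted[token] = lp
    return adjusted
```

A positive penalty lowers the logprob (discouraging reuse); a negative penalty raises it, which is why large negative frequency penalties can drive the model into repetition.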
- response_logprobs¶
Optional. If true, export the logprobs results in the response.
This field is a member of oneof
_response_logprobs.- Type
- logprobs¶
Optional. Only valid if [response_logprobs=True][google.ai.generativelanguage.v1alpha.GenerationConfig.response_logprobs]. This sets the number of top logprobs to return at each decoding step in the [Candidate.logprobs_result][google.ai.generativelanguage.v1alpha.Candidate.logprobs_result].
This field is a member of oneof
_logprobs.- Type
- enable_enhanced_civic_answers¶
Optional. Enables enhanced civic answers. It may not be available for all models.
This field is a member of oneof
_enable_enhanced_civic_answers.- Type
- response_modalities¶
Optional. The requested modalities of the response. Represents the set of modalities that the model can return, and should be expected in the response. This is an exact match to the modalities of the response.
A model may have multiple combinations of supported modalities. If the requested modalities do not match any of the supported combinations, an error will be returned.
An empty list is equivalent to requesting only text.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.GenerationConfig.Modality]
- speech_config¶
Optional. The speech generation config.
This field is a member of oneof
_speech_config.
- class Modality(value)[source]¶
Bases:
proto.enums.Enum
Supported modalities of the response.
- Values:
- MODALITY_UNSPECIFIED (0):
Default value.
- TEXT (1):
Indicates the model should return text.
- IMAGE (2):
Indicates the model should return images.
- AUDIO (3):
Indicates the model should return audio.
- class google.ai.generativelanguage_v1alpha.types.GetCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to read CachedContent.
- class google.ai.generativelanguage_v1alpha.types.GetChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific
Chunk.
- class google.ai.generativelanguage_v1alpha.types.GetCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific
Corpus.
- class google.ai.generativelanguage_v1alpha.types.GetDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific
Document.
- class google.ai.generativelanguage_v1alpha.types.GetFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for
GetFile.
- class google.ai.generativelanguage_v1alpha.types.GetModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific Model.
- class google.ai.generativelanguage_v1alpha.types.GetPermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific
Permission.
- class google.ai.generativelanguage_v1alpha.types.GetTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for getting information about a specific TunedModel.
- class google.ai.generativelanguage_v1alpha.types.GoogleSearchRetrieval(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Tool to retrieve public web data for grounding, powered by Google.
- dynamic_retrieval_config¶
Specifies the dynamic retrieval configuration for the given source.
- class google.ai.generativelanguage_v1alpha.types.GroundingAttribution(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Attribution for a source that contributed to an answer.
- source_id¶
Output only. Identifier for the source contributing to this attribution.
- content¶
Grounding source content that makes up this attribution.
- class google.ai.generativelanguage_v1alpha.types.GroundingChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Grounding chunk.
- class google.ai.generativelanguage_v1alpha.types.GroundingMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata returned to the client when grounding is enabled.
- search_entry_point¶
Optional. Google search entry for the following-up web searches.
This field is a member of oneof
_search_entry_point.
- grounding_chunks¶
List of supporting references retrieved from specified grounding source.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.GroundingChunk]
- grounding_supports¶
List of grounding support.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.GroundingSupport]
- class google.ai.generativelanguage_v1alpha.types.GroundingPassage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Passage included inline with a grounding configuration.
- content¶
Content of the passage.
- class google.ai.generativelanguage_v1alpha.types.GroundingPassages(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A repeated list of passages.
- passages¶
List of passages.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.GroundingPassage]
- class google.ai.generativelanguage_v1alpha.types.GroundingSupport(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Grounding support.
- class google.ai.generativelanguage_v1alpha.types.HarmCategory(value)[source]¶
Bases:
proto.enums.Enum
The category of a rating.
These categories cover various kinds of harms that developers may wish to adjust.
- Values:
- HARM_CATEGORY_UNSPECIFIED (0):
Category is unspecified.
- HARM_CATEGORY_DEROGATORY (1):
PaLM - Negative or harmful comments targeting identity and/or protected attribute.
- HARM_CATEGORY_TOXICITY (2):
PaLM - Content that is rude, disrespectful, or profane.
- HARM_CATEGORY_VIOLENCE (3):
PaLM - Describes scenarios depicting violence against an individual or group, or general descriptions of gore.
- HARM_CATEGORY_SEXUAL (4):
PaLM - Contains references to sexual acts or other lewd content.
- HARM_CATEGORY_MEDICAL (5):
PaLM - Promotes unchecked medical advice.
- HARM_CATEGORY_DANGEROUS (6):
PaLM - Dangerous content that promotes, facilitates, or encourages harmful acts.
- HARM_CATEGORY_HARASSMENT (7):
Gemini - Harassment content.
- HARM_CATEGORY_HATE_SPEECH (8):
Gemini - Hate speech and content.
- HARM_CATEGORY_SEXUALLY_EXPLICIT (9):
Gemini - Sexually explicit content.
- HARM_CATEGORY_DANGEROUS_CONTENT (10):
Gemini - Dangerous content.
- HARM_CATEGORY_CIVIC_INTEGRITY (11):
Gemini - Content that may be used to harm civic integrity.
- class google.ai.generativelanguage_v1alpha.types.Hyperparameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Hyperparameters controlling the tuning process. Read more at https://ai.google.dev/docs/model_tuning_guidance
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- learning_rate¶
Optional. Immutable. The learning rate hyperparameter for tuning. If not set, a default of 0.001 or 0.0002 will be calculated based on the number of training examples.
This field is a member of oneof
learning_rate_option.- Type
- learning_rate_multiplier¶
Optional. Immutable. The learning rate multiplier is used to calculate a final learning_rate based on the default (recommended) value. Actual learning rate := learning_rate_multiplier * default learning rate Default learning rate is dependent on base model and dataset size. If not set, a default of 1.0 will be used.
This field is a member of oneof
learning_rate_option.- Type
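The learning_rate_option oneof semantics above can be sketched as follows. The function name and the worked default value are illustrative assumptions; the real defaults are computed server-side from the base model and dataset size.

```python
def effective_learning_rate(default_lr, learning_rate=None, learning_rate_multiplier=None):
    """Sketch of the learning_rate_option oneof: set either an absolute
    learning_rate or a multiplier on the model's default, never both."""
    if learning_rate is not None and learning_rate_multiplier is not None:
        raise ValueError("learning_rate and learning_rate_multiplier are mutually exclusive")
    if learning_rate is not None:
        return learning_rate                      # absolute value wins
    if learning_rate_multiplier is not None:
        return learning_rate_multiplier * default_lr
    return default_lr                             # neither set: use default
```

For example, with a computed default of 0.001, a multiplier of 2.0 yields an actual learning rate of 0.002.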
- class google.ai.generativelanguage_v1alpha.types.ListCachedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to list CachedContents.
- page_size¶
Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.
- Type
- class google.ai.generativelanguage_v1alpha.types.ListCachedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response with CachedContents list.
- cached_contents¶
List of cached contents.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.CachedContent]
- class google.ai.generativelanguage_v1alpha.types.ListChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing
Chunks.- parent¶
Required. The name of the
Document containing Chunks. Example: corpora/my-corpus-123/documents/the-doc-abc- Type
- page_size¶
Optional. The maximum number of
Chunks to return (per page). The service may return fewer Chunks. If unspecified, at most 10
Chunks will be returned. The maximum size limit is 100 Chunks per page.- Type
- page_token¶
Optional. A page token, received from a previous
ListChunks call. Provide the
next_page_token returned in the response as an argument to the next request to retrieve the next page. When paginating, all other parameters provided to
ListChunks must match the call that provided the page token.- Type
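The page_token contract described here is the standard list-pagination loop, sketched generically below. `fetch_page` is a hypothetical stand-in for a call such as ListChunks and is assumed to return the page's items plus the next token (empty when there are no more pages).

```python
def list_all(fetch_page):
    """Collect every item by following next_page_token until it is empty.

    fetch_page(page_token) must return (items, next_page_token); pass
    None for the first page.
    """
    items, token = [], None
    while True:
        page, token = fetch_page(token)
        items.extend(page)
        if not token:          # empty token: no further pages
            return items
```

The same loop applies to ListCorpora, ListDocuments, ListPermissions, and ListTunedModels, since they share the page_size/page_token/next_page_token convention.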
- class google.ai.generativelanguage_v1alpha.types.ListChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from
ListChunks containing a paginated list of Chunks. The Chunks are sorted by ascending chunk.create_time.- chunks¶
The returned
Chunks.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Chunk]
- class google.ai.generativelanguage_v1alpha.types.ListCorporaRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing
Corpora.- page_size¶
Optional. The maximum number of
Corpora to return (per page). The service may return fewer Corpora. If unspecified, at most 10
Corpora will be returned. The maximum size limit is 20 Corpora per page.- Type
- page_token¶
Optional. A page token, received from a previous
ListCorpora call. Provide the
next_page_token returned in the response as an argument to the next request to retrieve the next page. When paginating, all other parameters provided to
ListCorpora must match the call that provided the page token.- Type
- class google.ai.generativelanguage_v1alpha.types.ListCorporaResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from
ListCorpora containing a paginated list of Corpora. The results are sorted by ascending corpus.create_time.- corpora¶
The returned corpora.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Corpus]
- class google.ai.generativelanguage_v1alpha.types.ListDocumentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing
Documents.- parent¶
Required. The name of the
Corpus containing Documents. Example: corpora/my-corpus-123- Type
- page_size¶
Optional. The maximum number of
Documents to return (per page). The service may return fewer Documents. If unspecified, at most 10
Documents will be returned. The maximum size limit is 20 Documents per page.- Type
- page_token¶
Optional. A page token, received from a previous
ListDocuments call. Provide the
next_page_token returned in the response as an argument to the next request to retrieve the next page. When paginating, all other parameters provided to
ListDocuments must match the call that provided the page token.- Type
- class google.ai.generativelanguage_v1alpha.types.ListDocumentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from
ListDocuments containing a paginated list of Documents. The Documents are sorted by ascending document.create_time.- documents¶
The returned
Documents.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Document]
- class google.ai.generativelanguage_v1alpha.types.ListFilesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for
ListFiles.- page_size¶
Optional. Maximum number of
Files to return per page. If unspecified, defaults to 10. Maximum page_size is 100.- Type
- class google.ai.generativelanguage_v1alpha.types.ListFilesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response for
ListFiles.- files¶
The list of
Files.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.File]
- class google.ai.generativelanguage_v1alpha.types.ListModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing all Models.
- page_size¶
The maximum number of
Models to return (per page). If unspecified, 50 models will be returned per page. This method returns at most 1000 models per page, even if you pass a larger page_size.
- Type
- class google.ai.generativelanguage_v1alpha.types.ListModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from
ListModels containing a paginated list of Models.- models¶
The returned Models.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Model]
- class google.ai.generativelanguage_v1alpha.types.ListPermissionsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing permissions.
- parent¶
Required. The parent resource of the permissions. Formats:
tunedModels/{tuned_model}
corpora/{corpus}- Type
- page_size¶
Optional. The maximum number of
Permissions to return (per page). The service may return fewer permissions. If unspecified, at most 10 permissions will be returned. This method returns at most 1000 permissions per page, even if you pass a larger page_size.
- Type
- page_token¶
Optional. A page token, received from a previous
ListPermissions call. Provide the
page_token returned by one request as an argument to the next request to retrieve the next page. When paginating, all other parameters provided to
ListPermissions must match the call that provided the page token.- Type
- class google.ai.generativelanguage_v1alpha.types.ListPermissionsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from
ListPermissions containing a paginated list of permissions.- permissions¶
Returned permissions.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Permission]
- class google.ai.generativelanguage_v1alpha.types.ListTunedModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing TunedModels.
- page_size¶
Optional. The maximum number of
TunedModels to return (per page). The service may return fewer tuned models. If unspecified, at most 10 tuned models will be returned. This method returns at most 1000 models per page, even if you pass a larger page_size.
- Type
- page_token¶
Optional. A page token, received from a previous
ListTunedModels call. Provide the
page_token returned by one request as an argument to the next request to retrieve the next page. When paginating, all other parameters provided to
ListTunedModels must match the call that provided the page token.- Type
- filter¶
Optional. A filter is a full text search over the tuned model’s description and display name. By default, results will not include tuned models shared with everyone.
Additional operators:
owner:me
writers:me
readers:me
readers:everyone
Examples:
“owner:me” returns all tuned models to which the caller has the owner role
“readers:me” returns all tuned models to which the caller has the reader role
“readers:everyone” returns all tuned models that are shared with everyone
- Type
- class google.ai.generativelanguage_v1alpha.types.ListTunedModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from
ListTunedModels containing a paginated list of Models.- tuned_models¶
The returned Models.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.TunedModel]
- class google.ai.generativelanguage_v1alpha.types.LogprobsResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Logprobs Result
- top_candidates¶
Length = total number of decoding steps.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.LogprobsResult.TopCandidates]
- chosen_candidates¶
Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.LogprobsResult.Candidate]
- class Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Candidate for the logprobs token and score.
- class TopCandidates(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Candidates with top log probabilities at each decoding step.
- candidates¶
Sorted by log probability in descending order.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.LogprobsResult.Candidate]
- class google.ai.generativelanguage_v1alpha.types.Message(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The base unit of structured text.
A
Message includes an author and the content of the Message. The
author is used to tag messages when they are fed to the model as text.- author¶
Optional. The author of this Message.
This serves as a key for tagging the content of this Message when it is fed to the model as text.
The author can be any alphanumeric string.
- Type
- citation_metadata¶
Output only. Citation information for model-generated
content in this Message. If this
Message was generated as output from the model, this field may be populated with attribution information for any text included in the content. This field is used only on output. This field is a member of oneof
_citation_metadata.
- class google.ai.generativelanguage_v1alpha.types.MessagePrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
All of the structured input text passed to the model as a prompt.
A
MessagePrompt contains a structured set of fields that provide context for the conversation, examples of user input/model output message pairs that prime the model to respond in different ways, and the conversation history or list of messages representing the alternating turns of the conversation between the user and the model.- context¶
Optional. Text that should be provided to the model first to ground the response.
If not empty, this
context will be given to the model first, before the examples and messages. When using a context, be sure to provide it with every request to maintain continuity. This field can be a description of your prompt to the model to help provide context and guide the responses. Examples: “Translate the phrase from English to French.” or “Given a statement, classify the sentiment as happy, sad or neutral.”
Anything included in this field will take precedence over message history if the total input size exceeds the model’s
input_token_limit and the input request is truncated.- Type
- examples¶
Optional. Examples of what the model should generate.
This includes both user input and the response that the model should emulate.
These
examples are treated identically to conversation messages except that they take precedence over the history in messages: if the total input size exceeds the model’s input_token_limit, the input will be truncated. Items will be dropped from messages before examples.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Example]
- messages¶
Required. A snapshot of the recent conversation history sorted chronologically.
Turns alternate between two authors.
If the total input size exceeds the model’s
input_token_limit, the input will be truncated: the oldest items will be dropped from messages.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Message]
- class google.ai.generativelanguage_v1alpha.types.MetadataFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
User provided filter to limit retrieval based on
Chunk or Document level metadata values. Example (genre = drama OR genre = action): key = “document.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]
Required. The
Conditions for the given key that will trigger this filter. Multiple Conditions are joined by logical ORs.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.Condition]
- class google.ai.generativelanguage_v1alpha.types.Model(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about a Generative Language Model.
- name¶
Required. The resource name of the
Model. Refer to Model variants for all allowed values. Format:
models/{model} with a {model} naming convention of:
Examples:
models/gemini-1.5-flash-001
- Type
- base_model_id¶
Required. The name of the base model, pass this to the generation request.
Examples:
gemini-1.5-flash
- Type
- version¶
Required. The version number of the model.
This represents the major version (
1.0 or 1.5).- Type
- display_name¶
The human-readable name of the model. E.g. “Gemini 1.5 Flash”. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- Type
- supported_generation_methods¶
The model’s supported generation methods.
The corresponding API method names are defined as Pascal case strings, such as
generateMessage and generateContent.- Type
MutableSequence[str]
- temperature¶
Controls the randomness of the output.
Values can range over
[0.0, max_temperature], inclusive. A higher value will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model. This value specifies the default used by the backend while making the call to the model. This field is a member of oneof
_temperature.- Type
- max_temperature¶
The maximum temperature this model can use.
This field is a member of oneof
_max_temperature.- Type
- top_p¶
For Nucleus sampling.
Nucleus sampling considers the smallest set of tokens whose probability sum is at least
top_p. This value specifies the default used by the backend while making the call to the model. This field is a member of oneof
_top_p.- Type
- top_k¶
For Top-k sampling.
Top-k sampling considers the set of
top_k most probable tokens. This value specifies the default used by the backend while making the call to the model. If empty, the model doesn’t use top-k sampling, and top_k isn’t allowed as a generation parameter. This field is a member of oneof
_top_k.- Type
- class google.ai.generativelanguage_v1alpha.types.Part(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A datatype containing media that is part of a multi-part
Content message. A
Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data. A
Part must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data field is filled with raw bytes. This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- function_call¶
A predicted
FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name with the arguments and their values. This field is a member of oneof
data.
- function_response¶
The result output of a
FunctionCall, containing a string representing the FunctionDeclaration.name and a structured JSON object with any output from the function; it is used as context to the model. This field is a member of oneof
data.
- executable_code¶
Code generated by the model that is meant to be executed.
This field is a member of oneof
data.
- class google.ai.generativelanguage_v1alpha.types.Permission(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Permission resource grants user, group or the rest of the world access to the PaLM API resource (e.g. a tuned model, corpus).
A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant permissions that the role contains.
There are three concentric roles. Each role is a superset of the previous role’s permitted operations:
reader can use the resource (e.g. tuned model, corpus) for inference
writer has reader’s permissions and additionally can edit and share
owner has writer’s permissions and additionally can delete
- name¶
Output only. Identifier. The permission name. A unique name will be generated on create. Examples: tunedModels/{tuned_model}/permissions/{permission}, corpora/{corpus}/permissions/{permission}
- Type
- grantee_type¶
Optional. Immutable. The type of the grantee.
This field is a member of oneof
_grantee_type.
- email_address¶
Optional. Immutable. The email address of the user or group to which this permission refers. Field is not set when the permission’s grantee type is EVERYONE.
This field is a member of oneof
_email_address.- Type
- class GranteeType(value)[source]¶
Bases:
proto.enums.Enum
Defines types of the grantee of this permission.
- Values:
- GRANTEE_TYPE_UNSPECIFIED (0):
The default value. This value is unused.
- USER (1):
Represents a user. When set, you must provide email_address for the user.
- GROUP (2):
Represents a group. When set, you must provide email_address for the group.
- EVERYONE (3):
Represents access to everyone. No extra information is required.
- class Role(value)[source]¶
Bases:
proto.enums.Enum
Defines the role granted by this permission.
- Values:
- ROLE_UNSPECIFIED (0):
The default value. This value is unused.
- OWNER (1):
Owner can use, update, share and delete the resource.
- WRITER (2):
Writer can use, update and share the resource.
- READER (3):
Reader can use the resource.
- class google.ai.generativelanguage_v1alpha.types.PrebuiltVoiceConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The configuration for the prebuilt speaker to use.
- class google.ai.generativelanguage_v1alpha.types.PredictRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [PredictionService.Predict][google.ai.generativelanguage.v1alpha.PredictionService.Predict].
- instances¶
Required. The instances that are the input to the prediction call.
- Type
MutableSequence[google.protobuf.struct_pb2.Value]
- parameters¶
Optional. The parameters that govern the prediction call.
- class google.ai.generativelanguage_v1alpha.types.PredictResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for [PredictionService.Predict].
- predictions¶
The outputs of the prediction call.
- Type
MutableSequence[google.protobuf.struct_pb2.Value]
- class google.ai.generativelanguage_v1alpha.types.QueryCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for querying a
Corpus.- metadata_filters¶
Optional. Filter for
Chunk and Document metadata. Each MetadataFilter object should correspond to a unique key. Multiple MetadataFilter objects are joined by logical “AND”s. Example query at document level: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)
MetadataFilter object list: metadata_filters = [{key = “document.custom_metadata.year” conditions = [{int_value = 2020, operation = GREATER_EQUAL}, {int_value = 2010, operation = LESS}]}, {key = “document.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]}] Example query at chunk level for a numeric range of values: (year > 2015 AND year <= 2020)
MetadataFilter object list: metadata_filters = [{key = “chunk.custom_metadata.year” conditions = [{int_value = 2015, operation = GREATER}]}, {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = LESS_EQUAL}]}] Note: “AND”s for the same key are only supported for numeric values. String values only support “OR”s for the same key.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.MetadataFilter]
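The document-level filter example above can be sketched as a plain-Python payload. This is a sketch, assuming the proto-plus convention that request messages such as `QueryCorpusRequest` accept nested mappings (enum fields may need the actual enum members rather than name strings when building real messages); the corpus name and query text are hypothetical, and the `int_value` field name is taken directly from the example above.

```python
# Query: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)
query_corpus_request = {
    "name": "corpora/my-corpus-123",   # hypothetical corpus name
    "query": "award-winning films",     # hypothetical query text
    "metadata_filters": [
        {   # conditions within one MetadataFilter are OR'ed together
            "key": "document.custom_metadata.year",
            "conditions": [
                {"int_value": 2020, "operation": "GREATER_EQUAL"},
                {"int_value": 2010, "operation": "LESS"},
            ],
        },
        {   # separate MetadataFilter objects are AND'ed together
            "key": "document.custom_metadata.genre",
            "conditions": [
                {"string_value": "drama", "operation": "EQUAL"},
                {"string_value": "action", "operation": "EQUAL"},
            ],
        },
    ],
}
```

A mapping shaped like this could then be handed to a retriever client's `query_corpus` call.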
- class google.ai.generativelanguage_v1alpha.types.QueryCorpusResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from
QueryCorpus containing a list of relevant chunks.- relevant_chunks¶
The relevant chunks.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.RelevantChunk]
- class google.ai.generativelanguage_v1alpha.types.QueryDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for querying a
Document.- name¶
Required. The name of the
Document to query. Example: corpora/my-corpus-123/documents/the-doc-abc- Type
- results_count¶
Optional. The maximum number of
Chunks to return. The service may return fewer Chunks. If unspecified, at most 10
Chunks will be returned. The maximum specified result count is 100.- Type
- metadata_filters¶
Optional. Filter for
Chunk metadata. Each MetadataFilter object should correspond to a unique key. Multiple MetadataFilter objects are joined by logical “AND”s. Note:
Document-level filtering is not supported for this request because a Document name is already specified. Example query: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)
MetadataFilter object list: metadata_filters = [{key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = GREATER_EQUAL}, {int_value = 2010, operation = LESS}]}, {key = “chunk.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]}] Example query for a numeric range of values: (year > 2015 AND year <= 2020)
MetadataFilter object list: metadata_filters = [{key = “chunk.custom_metadata.year” conditions = [{int_value = 2015, operation = GREATER}]}, {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = LESS_EQUAL}]}] Note: “AND”s for the same key are only supported for numeric values. String values only support “OR”s for the same key.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.MetadataFilter]
- class google.ai.generativelanguage_v1alpha.types.QueryDocumentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from
QueryDocument containing a list of relevant chunks.- relevant_chunks¶
The returned relevant chunks.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.RelevantChunk]
- class google.ai.generativelanguage_v1alpha.types.RelevantChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe information for a chunk relevant to a query.
- chunk¶
Chunk associated with the query.
- class google.ai.generativelanguage_v1alpha.types.RetrievalMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata related to retrieval in the grounding flow.
- google_search_dynamic_retrieval_score¶
Optional. Score indicating how likely information from Google Search could help answer the prompt. The score is in the range [0, 1], where 0 is the least likely and 1 is the most likely. This score is populated only when Google Search grounding with dynamic retrieval is enabled. It is compared against the threshold to determine whether to trigger Google Search.
- Type
- class google.ai.generativelanguage_v1alpha.types.SafetyFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageSafety feedback for an entire request.
This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.
- rating¶
Safety rating evaluated from content.
- setting¶
Safety settings applied to the request.
- class google.ai.generativelanguage_v1alpha.types.SafetyRating(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageSafety rating for a piece of content.
The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.
- category¶
Required. The category for this rating.
- probability¶
Required. The probability of harm for this content.
- class HarmProbability(value)[source]¶
Bases:
proto.enums.EnumThe probability that a piece of content is harmful.
The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.
- Values:
- HARM_PROBABILITY_UNSPECIFIED (0):
Probability is unspecified.
- NEGLIGIBLE (1):
Content has a negligible chance of being unsafe.
- LOW (2):
Content has a low chance of being unsafe.
- MEDIUM (3):
Content has a medium chance of being unsafe.
- HIGH (4):
Content has a high chance of being unsafe.
- class google.ai.generativelanguage_v1alpha.types.SafetySetting(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageSafety setting, affecting the safety-blocking behavior.
Passing a safety setting for a category changes the allowed probability that content is blocked.
- category¶
Required. The category for this setting.
- threshold¶
Required. Controls the probability threshold at which harm is blocked.
- class HarmBlockThreshold(value)[source]¶
Bases:
proto.enums.EnumBlock at and beyond a specified harm probability.
- Values:
- HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):
Threshold is unspecified.
- BLOCK_LOW_AND_ABOVE (1):
Content with NEGLIGIBLE will be allowed.
- BLOCK_MEDIUM_AND_ABOVE (2):
Content with NEGLIGIBLE and LOW will be allowed.
- BLOCK_ONLY_HIGH (3):
Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed.
- BLOCK_NONE (4):
All content will be allowed.
- OFF (5):
Turn off the safety filter.
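The HarmBlockThreshold semantics above can be expressed as a small lookup. This is a sketch derived only from the enum descriptions above (the unspecified and OFF values are omitted); it is a local helper, not an API call.

```python
# Allowed HarmProbability levels per threshold, per the enum docs above:
# content at an allowed level passes; anything above the threshold is blocked.
ALLOWED = {
    "BLOCK_LOW_AND_ABOVE": {"NEGLIGIBLE"},
    "BLOCK_MEDIUM_AND_ABOVE": {"NEGLIGIBLE", "LOW"},
    "BLOCK_ONLY_HIGH": {"NEGLIGIBLE", "LOW", "MEDIUM"},
    "BLOCK_NONE": {"NEGLIGIBLE", "LOW", "MEDIUM", "HIGH"},
}

def is_blocked(probability: str, threshold: str) -> bool:
    """Would content with this HarmProbability be blocked at this threshold?"""
    return probability not in ALLOWED[threshold]
```

For example, `is_blocked("MEDIUM", "BLOCK_MEDIUM_AND_ABOVE")` is `True`, while the same content passes under `BLOCK_ONLY_HIGH`.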
- class google.ai.generativelanguage_v1alpha.types.Schema(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe
Schema object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an OpenAPI 3.0 schema object.- type_¶
Required. Data type.
- format_¶
Optional. The format of the data. This is used only for primitive datatypes. Supported formats:
for NUMBER type: float, double; for INTEGER type: int32, int64; for STRING type: enum
- Type
- description¶
Optional. A brief description of the parameter. This could contain examples of use. Parameter description may be formatted as Markdown.
- Type
- enum¶
Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:[“EAST”, “NORTH”, “SOUTH”, “WEST”]}
- Type
MutableSequence[str]
- properties¶
Optional. Properties of Type.OBJECT.
- Type
MutableMapping[str, google.ai.generativelanguage_v1alpha.types.Schema]
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
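A minimal sketch of a Schema written as a nested mapping, combining the fields above (`type_`, `format_`, `description`, `enum`, `properties`). The property names and values are illustrative, and the assumption that proto-plus messages accept such mappings applies here as elsewhere.

```python
# An OBJECT schema with a STRING enum property (the Direction example above)
# and a NUMBER property. Field names mirror the Schema attributes documented
# above, including the trailing underscores on type_ and format_.
direction_schema = {
    "type_": "OBJECT",
    "properties": {
        "direction": {
            "type_": "STRING",
            "format_": "enum",
            "description": "Compass direction to move.",
            "enum": ["EAST", "NORTH", "SOUTH", "WEST"],
        },
        "distance_km": {"type_": "NUMBER", "format_": "double"},
    },
}
```

A schema like this is typically used where structured input is declared, e.g. as a function declaration's parameters.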
- class google.ai.generativelanguage_v1alpha.types.SearchEntryPoint(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageGoogle search entry point.
- rendered_content¶
Optional. Web content snippet that can be embedded in a web page or an app webview.
- Type
- class google.ai.generativelanguage_v1alpha.types.Segment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageSegment of the content.
- start_index¶
Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.
- Type
- end_index¶
Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.
- Type
- class google.ai.generativelanguage_v1alpha.types.SemanticRetrieverConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageConfiguration for retrieving grounding content from a
Corpus or Document created using the Semantic Retriever API.- source¶
Required. Name of the resource for retrieval. Example:
corpora/123 or corpora/123/documents/abc.- Type
- query¶
Required. Query to use for matching
Chunks in the given resource by similarity.
- metadata_filters¶
Optional. Filters for selecting
Documents and/or Chunks from the resource.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.MetadataFilter]
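A sketch of a SemanticRetrieverConfig payload built from the fields above. The resource name mirrors the example in the `source` docs; writing the query as a Content-style mapping with a single text part is an assumption about the `query` field's message type, and the filter reuses the chunk-level example shown earlier.

```python
semantic_retriever_config = {
    # Name of the resource for retrieval, per the source field docs.
    "source": "corpora/123",
    # Assumed Content-shaped query with one text part.
    "query": {"parts": [{"text": "What were the key findings after 2015?"}]},
    # Optional chunk-level filter: year > 2015.
    "metadata_filters": [
        {
            "key": "chunk.custom_metadata.year",
            "conditions": [{"int_value": 2015, "operation": "GREATER"}],
        }
    ],
}
```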
- class google.ai.generativelanguage_v1alpha.types.SpeechConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe speech generation config.
- voice_config¶
The configuration for the speaker to use.
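A sketch of a SpeechConfig payload: SpeechConfig wraps a VoiceConfig, which in turn selects a prebuilt speaker via PrebuiltVoiceConfig (documented further below). The `prebuilt_voice_config` and `voice_name` field names and the voice "Kore" are assumptions not shown in this excerpt; verify them against the PrebuiltVoiceConfig message.

```python
# Assumed field names: prebuilt_voice_config / voice_name are not listed in
# this excerpt and should be checked against the message definitions.
speech_config = {
    "voice_config": {
        "prebuilt_voice_config": {"voice_name": "Kore"},
    }
}
```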
- class google.ai.generativelanguage_v1alpha.types.StringList(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageUser provided string values assigned to a single metadata key.
- class google.ai.generativelanguage_v1alpha.types.TaskType(value)[source]¶
Bases:
proto.enums.EnumType of task for which the embedding will be used.
- Values:
- TASK_TYPE_UNSPECIFIED (0):
Unset value, which will default to one of the other enum values.
- RETRIEVAL_QUERY (1):
Specifies the given text is a query in a search/retrieval setting.
- RETRIEVAL_DOCUMENT (2):
Specifies the given text is a document from the corpus being searched.
- SEMANTIC_SIMILARITY (3):
Specifies the given text will be used for STS.
- CLASSIFICATION (4):
Specifies that the given text will be classified.
- CLUSTERING (5):
Specifies that the embeddings will be used for clustering.
- QUESTION_ANSWERING (6):
Specifies that the given text will be used for question answering.
- FACT_VERIFICATION (7):
Specifies that the given text will be used for fact verification.
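In a retrieval setting the two retrieval task types pair up: corpus text is embedded with RETRIEVAL_DOCUMENT at indexing time and user queries with RETRIEVAL_QUERY at search time, so both sides land in compatible embedding spaces. A sketch, with an assumed EmbedContent-style request shape and a hypothetical model name:

```python
def embed_request(text: str, *, is_query: bool) -> dict:
    """Build an embedding request mapping; shape and model name are assumptions."""
    return {
        "model": "models/text-embedding-004",       # hypothetical model name
        "content": {"parts": [{"text": text}]},
        "task_type": "RETRIEVAL_QUERY" if is_query else "RETRIEVAL_DOCUMENT",
    }

# Index side: embed each document with RETRIEVAL_DOCUMENT.
doc_req = embed_request("Paris is the capital of France.", is_query=False)
# Search side: embed the user query with RETRIEVAL_QUERY.
query_req = embed_request("capital of France", is_query=True)
```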
- class google.ai.generativelanguage_v1alpha.types.TextCompletion(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageOutput text returned from a model.
- safety_ratings¶
Ratings for the safety of a response.
There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.SafetyRating]
- class google.ai.generativelanguage_v1alpha.types.TextPrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageText given to the model as a prompt.
The model will use this TextPrompt to generate a text completion.
- class google.ai.generativelanguage_v1alpha.types.Tool(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageTool details that the model may use to generate response.
A
Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.- function_declarations¶
Optional. A list of
FunctionDeclarations available to the model that can be used for function calling. The model or system does not execute the function. Instead the defined function may be returned as a [FunctionCall][google.ai.generativelanguage.v1alpha.Part.function_call] with arguments to the client side for execution. The model may decide to call a subset of these functions by populating [FunctionCall][google.ai.generativelanguage.v1alpha.Part.function_call] in the response. The next conversation turn may contain a [FunctionResponse][google.ai.generativelanguage.v1alpha.Part.function_response] with the [Content.role][google.ai.generativelanguage.v1alpha.Content.role] “function” generation context for the next model turn.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.FunctionDeclaration]
- google_search_retrieval¶
Optional. Retrieval tool that is powered by Google search.
- code_execution¶
Optional. Enables the model to execute code as part of generation.
- google_search¶
Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
- class GoogleSearch(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageGoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
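A sketch of a Tool carrying a single FunctionDeclaration, following the flow described above: the model may answer with a FunctionCall part naming this function, the client executes it, and the next turn carries a FunctionResponse part with role "function". The function name, its parameters schema, and the `required` field are illustrative assumptions.

```python
# One Tool with one declared function. The parameters use the Schema subset
# documented earlier (type_ / properties); all names here are hypothetical.
weather_tool = {
    "function_declarations": [
        {
            "name": "get_current_weather",
            "description": "Returns the current weather for a city.",
            "parameters": {
                "type_": "OBJECT",
                "properties": {
                    "city": {"type_": "STRING", "description": "City name."},
                },
                "required": ["city"],   # assumed Schema field
            },
        }
    ]
}
```

A list of such tools would be attached to a generate request; the model then decides whether to populate a FunctionCall referencing `get_current_weather`.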
- class google.ai.generativelanguage_v1alpha.types.ToolConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe Tool configuration containing parameters for specifying
Tool use in the request.- function_calling_config¶
Optional. Function calling config.
- class google.ai.generativelanguage_v1alpha.types.TransferOwnershipRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to transfer the ownership of the tuned model.
- name¶
Required. The resource name of the tuned model to transfer ownership of.
Format:
tunedModels/my-model-id- Type
- class google.ai.generativelanguage_v1alpha.types.TransferOwnershipResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from
TransferOwnership.
- class google.ai.generativelanguage_v1alpha.types.TunedModel(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA fine-tuned model created using ModelService.CreateTunedModel.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- tuned_model_source¶
Optional. TunedModel to use as the starting point for training the new model.
This field is a member of oneof
source_model.
- base_model¶
Immutable. The name of the
Model to tune. Example: models/gemini-1.5-flash-001. This field is a member of oneof
source_model.- Type
- name¶
Output only. The tuned model name. A unique name will be generated on create. Example:
tunedModels/az2mb0bpw6i. If display_name is set on create, the id portion of the name will be set by concatenating the words of the display_name with hyphens and adding a random portion for uniqueness. Example:
display_name =
Sentence Translator, name =
tunedModels/sentence-translator-u3b7m
- Type
- display_name¶
Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.
- Type
- temperature¶
Optional. Controls the randomness of the output.
Values can range over
[0.0,1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model. If unset, this value defaults to the one used by the base model when it was created.
This field is a member of oneof
_temperature.- Type
- top_p¶
Optional. For Nucleus sampling.
Nucleus sampling considers the smallest set of tokens whose probability sum is at least
top_p. If unset, this value defaults to the one used by the base model when it was created.
This field is a member of oneof
_top_p.- Type
- top_k¶
Optional. For Top-k sampling.
Top-k sampling considers the set of
top_k most probable tokens. This value specifies the default used by the backend when calling the model. If unset, it defaults to the value used by the base model when it was created.
This field is a member of oneof
_top_k.- Type
- state¶
Output only. The state of the tuned model.
- create_time¶
Output only. The timestamp when this model was created.
- update_time¶
Output only. The timestamp when this model was updated.
- tuning_task¶
Required. The tuning task that creates the tuned model.
- reader_project_numbers¶
Optional. List of project numbers that have read access to the tuned model.
- Type
MutableSequence[int]
- class State(value)[source]¶
Bases:
proto.enums.EnumThe state of the tuned model.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is unused.
- CREATING (1):
The model is being created.
- ACTIVE (2):
The model is ready to be used.
- FAILED (3):
The model failed to be created.
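The name-generation rule and the tuning fields above can be sketched as follows. Only `display_name`, `base_model`, `temperature`, and `tuning_task` come from the message definitions above; the TuningExample field names (`text_input`, `output`) and the overall nesting of the training data are assumptions not listed in this excerpt.

```python
def expected_name_prefix(display_name: str) -> str:
    """Per the name field docs: words of display_name joined by hyphens;
    the service then appends a random portion for uniqueness."""
    return "tunedModels/" + "-".join(display_name.lower().split())

# Sketch of a TunedModel payload for a CreateTunedModel-style call.
tuned_model = {
    "display_name": "Sentence Translator",
    "base_model": "models/gemini-1.5-flash-001",
    "temperature": 0.7,
    "tuning_task": {
        "training_data": {
            "examples": {
                "examples": [
                    # Assumed TuningExample fields; verify against the message.
                    {"text_input": "hello", "output": "bonjour"},
                ]
            }
        }
    },
}
```

With this display name, the generated resource name would start with `tunedModels/sentence-translator`, per the example above.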
- class google.ai.generativelanguage_v1alpha.types.TunedModelSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageTuned model as a source for training a new model.
- tuned_model¶
Immutable. The name of the
TunedModel to use as the starting point for training the new model. Example: tunedModels/my-tuned-model- Type
- class google.ai.generativelanguage_v1alpha.types.TuningContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe structured datatype containing multi-part content of an example message.
This is a subset of the Content proto used during model inference with limited type support. A
Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.- parts¶
Ordered
Parts that constitute a single message. Parts may have different MIME types.- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.TuningPart]
- class google.ai.generativelanguage_v1alpha.types.TuningExample(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA single example for tuning.
- class google.ai.generativelanguage_v1alpha.types.TuningExamples(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA set of tuning examples. Can be training or validation data.
- examples¶
The examples. Example input can be for text or discuss, but all examples in a set must be of the same type.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.TuningExample]
- multiturn_examples¶
Content examples. For multiturn conversations.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.TuningMultiturnExample]
- class google.ai.generativelanguage_v1alpha.types.TuningMultiturnExample(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA tuning example with multiturn input.
- system_instruction¶
Optional. Developer set system instructions. Currently, text only.
This field is a member of oneof
_system_instruction.
- contents¶
Each Content represents a turn in the conversation.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.TuningContent]
- class google.ai.generativelanguage_v1alpha.types.TuningPart(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA datatype containing data that is part of a multi-part
TuningContentmessage.This is a subset of the Part used for model inference, with limited type support.
A
Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data.
- class google.ai.generativelanguage_v1alpha.types.TuningSnapshot(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRecord for a single tuning step.
- compute_time¶
Output only. The timestamp when this metric was computed.
- class google.ai.generativelanguage_v1alpha.types.TuningTask(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageTuning tasks that create tuned models.
- start_time¶
Output only. The timestamp when tuning this model started.
- complete_time¶
Output only. The timestamp when tuning this model completed.
- snapshots¶
Output only. Metrics collected during tuning.
- Type
MutableSequence[google.ai.generativelanguage_v1alpha.types.TuningSnapshot]
- training_data¶
Required. Input only. Immutable. The model training data.
- hyperparameters¶
Immutable. Hyperparameters controlling the tuning process. If not provided, default values will be used.
- class google.ai.generativelanguage_v1alpha.types.Type(value)[source]¶
Bases:
proto.enums.EnumType contains the list of OpenAPI data types as defined by https://spec.openapis.org/oas/v3.0.3#data-types
- Values:
- TYPE_UNSPECIFIED (0):
Not specified, should not be used.
- STRING (1):
String type.
- NUMBER (2):
Number type.
- INTEGER (3):
Integer type.
- BOOLEAN (4):
Boolean type.
- ARRAY (5):
Array type.
- OBJECT (6):
Object type.
- class google.ai.generativelanguage_v1alpha.types.UpdateCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update CachedContent.
- cached_content¶
Required. The content cache entry to update
- update_mask¶
The list of fields to update.
- class google.ai.generativelanguage_v1alpha.types.UpdateChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update a
Chunk.- chunk¶
Required. The
Chunk to update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating
custom_metadata and data.
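A sketch of an UpdateChunkRequest payload: the modified Chunk is paired with a FieldMask-style `update_mask` naming the fields to change, restricted to the supported fields above. The chunk resource name and the ChunkData `string_value` field are assumptions.

```python
# Update only the chunk's data; update_mask paths must name supported
# fields (custom_metadata, data). Names here are illustrative.
update_chunk_request = {
    "chunk": {
        "name": "corpora/my-corpus-123/documents/the-doc-abc/chunks/chunk-1",
        "data": {"string_value": "Updated chunk text."},  # assumed ChunkData shape
    },
    "update_mask": {"paths": ["data"]},
}
```

The same request/mask pattern applies to the UpdateCorpus, UpdateDocument, and UpdatePermission requests below, with their respective supported fields.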
- class google.ai.generativelanguage_v1alpha.types.UpdateCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update a
Corpus.- corpus¶
Required. The
Corpus to update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating
display_name.
- class google.ai.generativelanguage_v1alpha.types.UpdateDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update a
Document.- document¶
Required. The
Document to update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating
display_name and custom_metadata.
- class google.ai.generativelanguage_v1alpha.types.UpdatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update the
Permission.- permission¶
Required. The permission to update.
The permission’s
name field is used to identify the permission to update.
- update_mask¶
Required. The list of fields to update. Accepted ones:
role (
Permission.role field)
- class google.ai.generativelanguage_v1alpha.types.UpdateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update a TunedModel.
- tuned_model¶
Required. The tuned model to update.
- update_mask¶
Optional. The list of fields to update.
- class google.ai.generativelanguage_v1alpha.types.VideoMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata for a video
File.- video_duration¶
Duration of the video.
- class google.ai.generativelanguage_v1alpha.types.VoiceConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe configuration for the voice to use.