Types for Google AI Generativelanguage v1beta API¶
- class google.ai.generativelanguage_v1beta.types.AttributionSourceId(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifier for the source contributing to this attribution.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- semantic_retriever_chunk¶
Identifier for a Chunk fetched via Semantic Retriever.
This field is a member of oneof source.
- class GroundingPassageId(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifier for a part within a GroundingPassage.
- passage_id¶
Output only. ID of the passage matching the GenerateAnswerRequest’s GroundingPassage.id.
- Type
str
- class SemanticRetrieverChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identifier for a Chunk retrieved via Semantic Retriever, specified in the GenerateAnswerRequest using SemanticRetrieverConfig.
- source¶
Output only. Name of the source matching the request’s SemanticRetrieverConfig.source. Example: corpora/123 or corpora/123/documents/abc
- Type
str
- class google.ai.generativelanguage_v1beta.types.AudioTranscriptionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The audio transcription configuration.
- class google.ai.generativelanguage_v1beta.types.BatchCreateChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to batch create Chunks.
- parent¶
Optional. The name of the Document where this batch of Chunks will be created. The parent field in every CreateChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
str
- requests¶
Required. The request messages specifying the Chunks to create. A maximum of 100 Chunks can be created in a batch.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CreateChunkRequest]
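These proto-plus types accept a plain dict as the `mapping` constructor argument. A minimal, hypothetical sketch of a BatchCreateChunksRequest-shaped mapping (the corpus/document names and chunk text below are illustrative, and the nested `chunk.data.string_value` shape is assumed from the Chunk/ChunkData types later on this page):

```python
# A dict mirroring BatchCreateChunksRequest; suitable for passing as the
# `mapping` constructor argument. All resource names are illustrative.
request = {
    "parent": "corpora/my-corpus-123/documents/the-doc-abc",
    "requests": [
        {
            "parent": "corpora/my-corpus-123/documents/the-doc-abc",
            "chunk": {"data": {"string_value": "Seattle is in Washington."}},
        },
    ],
}

# The parent in every CreateChunkRequest must match the batch-level parent,
# and at most 100 Chunks may be created per batch.
assert all(r["parent"] == request["parent"] for r in request["requests"])
assert len(request["requests"]) <= 100
```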
- class google.ai.generativelanguage_v1beta.types.BatchCreateChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from BatchCreateChunks containing a list of created Chunks.
- chunks¶
Chunks created.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]
- class google.ai.generativelanguage_v1beta.types.BatchDeleteChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to batch delete Chunks.
- parent¶
Optional. The name of the Document containing the Chunks to delete. The parent field in every DeleteChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
str
- requests¶
Required. The request messages specifying the Chunks to delete.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.DeleteChunkRequest]
- class google.ai.generativelanguage_v1beta.types.BatchEmbedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Batch request to get embeddings from the model for a list of prompts.
- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the ListModels method.
Format: models/{model}
- Type
str
- requests¶
Required. Embed requests for the batch. The model in each of these requests must match the model specified in BatchEmbedContentsRequest.model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.EmbedContentRequest]
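A hypothetical sketch of a BatchEmbedContentsRequest-shaped mapping, showing the constraint that every nested EmbedContentRequest names the same model as the batch-level field (the model name and texts are illustrative):

```python
# A dict mirroring BatchEmbedContentsRequest. The model name is a
# placeholder; each nested request repeats it, as required.
model = "models/text-embedding-004"

request = {
    "model": model,
    "requests": [
        {"model": model, "content": {"parts": [{"text": text}]}}
        for text in ("What is a Corpus?", "What is a Chunk?")
    ],
}

# Every per-item model must match BatchEmbedContentsRequest.model.
assert all(r["model"] == request["model"] for r in request["requests"])
```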
- class google.ai.generativelanguage_v1beta.types.BatchEmbedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response to a BatchEmbedContentsRequest.
- embeddings¶
Output only. The embeddings for each request, in the same order as provided in the batch request.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ContentEmbedding]
- class google.ai.generativelanguage_v1beta.types.BatchEmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Batch request to get a text embedding from the model.
- model¶
Required. The name of the Model to use for generating the embedding. Examples: models/embedding-gecko-001
- Type
str
- texts¶
Optional. The free-form input texts that the model will turn into an embedding. The current limit is 100 texts, over which an error will be thrown.
- Type
MutableSequence[str]
- requests¶
Optional. Embed requests for the batch. Only one of texts or requests can be set.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.EmbedTextRequest]
- class google.ai.generativelanguage_v1beta.types.BatchEmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The response to an EmbedTextRequest.
- embeddings¶
Output only. The embeddings generated from the input text.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Embedding]
- class google.ai.generativelanguage_v1beta.types.BatchUpdateChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to batch update Chunks.
- parent¶
Optional. The name of the Document containing the Chunks to update. The parent field in every UpdateChunkRequest must match this value. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
str
- requests¶
Required. The request messages specifying the Chunks to update. A maximum of 100 Chunks can be updated in a batch.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.UpdateChunkRequest]
- class google.ai.generativelanguage_v1beta.types.BatchUpdateChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from BatchUpdateChunks containing a list of updated Chunks.
- chunks¶
Chunks updated.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentClientContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Incremental update of the current conversation delivered from the client. All of the content here is unconditionally appended to the conversation history and used as part of the prompt to the model to generate content.
A message here will interrupt any current model generation.
- turns¶
Optional. The content appended to the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history and the latest request.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentClientMessage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Messages sent by the client in the BidiGenerateContent call.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- setup¶
Optional. Session configuration sent only in the first client message.
This field is a member of oneof message_type.
- client_content¶
Optional. Incremental update of the current conversation delivered from the client.
This field is a member of oneof message_type.
- realtime_input¶
Optional. User input that is sent in real time.
This field is a member of oneof message_type.
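A sketch of the message_type oneof: each client message carries exactly one member. The first message carries setup; subsequent messages carry client_content or realtime_input. The model name is a hypothetical placeholder, and the dicts mirror the proto fields as mappings:

```python
# Each BidiGenerateContentClientMessage sets exactly one oneof member.
first = {"setup": {"model": "models/some-live-model"}}  # placeholder name
later = {"client_content": {"turns": [{"role": "user", "parts": [{"text": "Hello"}]}]}}

for message in (first, later):
    set_members = {"setup", "client_content", "realtime_input"} & message.keys()
    assert len(set_members) == 1  # at most one oneof member is set
```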
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentRealtimeInput(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
User input that is sent in real time.
The different modalities (audio, video and text) are handled as concurrent streams. The ordering across these streams is not guaranteed.
This is different from [BidiGenerateContentClientContent][google.ai.generativelanguage.v1beta.BidiGenerateContentClientContent] in a few ways:
- Can be sent continuously without interruption to model generation.
- If there is a need to mix data interleaved across the [BidiGenerateContentClientContent][google.ai.generativelanguage.v1beta.BidiGenerateContentClientContent] and the [BidiGenerateContentRealtimeInput][google.ai.generativelanguage.v1beta.BidiGenerateContentRealtimeInput], the server attempts to optimize for the best response, but there are no guarantees.
- End of turn is not explicitly specified, but is rather derived from user activity (for example, end of speech).
- Even before the end of turn, the data is processed incrementally to optimize for a fast start of the response from the model.
- media_chunks¶
Optional. Inlined bytes data for media input. Multiple media_chunks are not supported; all but the first will be ignored.
DEPRECATED: Use one of audio, video, or text instead.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Blob]
- audio¶
Optional. These form the realtime audio input stream.
- audio_stream_end¶
Optional. Indicates that the audio stream has ended, e.g. because the microphone was turned off.
This should only be sent when automatic activity detection is enabled (which is the default).
The client can reopen the stream by sending an audio message.
This field is a member of oneof _audio_stream_end.
- Type
bool
- video¶
Optional. These form the realtime video input stream.
- text¶
Optional. These form the realtime text input stream.
This field is a member of oneof _text.
- Type
str
- activity_start¶
Optional. Marks the start of user activity. This can only be sent if automatic (i.e. server-side) activity detection is disabled.
- activity_end¶
Optional. Marks the end of user activity. This can only be sent if automatic (i.e. server-side) activity detection is disabled.
- class ActivityEnd(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Marks the end of user activity.
- class ActivityStart(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Marks the start of user activity.
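The fields above can be sketched as mappings for the two activity-detection modes. The PCM bytes are a placeholder stand-in, not real audio, and the `audio/pcm` MIME type is assumed for illustration:

```python
import base64

# A realtime audio chunk (the `audio` field is a Blob-shaped mapping).
pcm = b"\x00\x01" * 160  # placeholder samples, not real audio
chunk_msg = {"audio": {"mime_type": "audio/pcm",
                       "data": base64.b64encode(pcm).decode()}}

# With automatic activity detection (the default), the client may signal
# that the microphone was turned off:
end_msg = {"audio_stream_end": True}

# With automatic detection disabled, the client brackets activity itself:
bracketed = [{"activity_start": {}}, chunk_msg, {"activity_end": {}}]

assert end_msg["audio_stream_end"] is True
assert bracketed[0] == {"activity_start": {}}
assert bracketed[-1] == {"activity_end": {}}
```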
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentServerContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Incremental server update generated by the model in response to client messages.
Content is generated as quickly as possible, and not in real time. Clients may choose to buffer and play it out in real time.
- model_turn¶
Output only. The content that the model has generated as part of the current conversation with the user.
This field is a member of oneof _model_turn.
- generation_complete¶
Output only. If true, indicates that the model is done generating.
If the model is interrupted while generating, there will be no generation_complete message in the interrupted turn; it will go through interrupted > turn_complete instead.
When the model assumes realtime playback, there will be a delay between generation_complete and turn_complete caused by the model waiting for playback to finish.
- Type
bool
- turn_complete¶
Output only. If true, indicates that the model has completed its turn. Generation will only start in response to additional client messages.
- Type
bool
- interrupted¶
Output only. If true, indicates that a client message has interrupted current model generation. If the client is playing out the content in real time, this is a good signal to stop and empty the current playback queue.
- Type
bool
- grounding_metadata¶
Output only. Grounding metadata for the generated content.
- input_transcription¶
Output only. Input audio transcription. The transcription is sent independently of the other server messages and there is no guaranteed ordering.
- output_transcription¶
Output only. Output audio transcription. These transcriptions are part of the generation output of the server. The last output transcription of this turn is sent before either generation_complete or interrupted, which in turn are followed by turn_complete. There is no guaranteed exact ordering between transcriptions and other model_turn output, but the server tries to send the transcripts close to the corresponding audio output.
- url_context_metadata¶
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentServerMessage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for the BidiGenerateContent call.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- setup_complete¶
Output only. Sent in response to a BidiGenerateContentSetup message from the client when setup is complete.
This field is a member of oneof message_type.
- server_content¶
Output only. Content generated by the model in response to client messages.
This field is a member of oneof message_type.
- tool_call¶
Output only. Request for the client to execute the function_calls and return the responses with the matching ids.
This field is a member of oneof message_type.
- tool_call_cancellation¶
Output only. Notification for the client that a previously issued ToolCallMessage with the specified ids should be cancelled.
This field is a member of oneof message_type.
- go_away¶
Output only. A notice that the server will soon disconnect.
This field is a member of oneof message_type.
- session_resumption_update¶
Output only. Update of the session resumption state.
This field is a member of oneof message_type.
- usage_metadata¶
Output only. Usage metadata about the response(s).
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentSetup(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Message to be sent in the first (and only the first) BidiGenerateContentClientMessage. Contains configuration that will apply for the duration of the streaming RPC.
Clients should wait for a BidiGenerateContentSetupComplete message before sending any additional messages.
- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
Format: models/{model}
- Type
str
- generation_config¶
Optional. Generation config.
The following fields are not supported:
- response_logprobs
- response_mime_type
- logprobs
- response_schema
- response_json_schema
- stop_sequence
- routing_config
- audio_timestamp
- system_instruction¶
Optional. The user provided system instructions for the model. Note: Only text should be used in parts and content in each part will be in a separate paragraph.
- tools¶
Optional. A list of Tools the model may use to generate the next response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Tool]
- realtime_input_config¶
Optional. Configures the handling of realtime input.
- session_resumption¶
Optional. Configures session resumption mechanism.
If included, the server will send SessionResumptionUpdate messages.
- context_window_compression¶
Optional. Configures a context window compression mechanism. If included, the server will automatically reduce the size of the context when it exceeds the configured length.
- input_audio_transcription¶
Optional. If set, enables transcription of voice input. The transcription aligns with the input audio language, if configured.
- output_audio_transcription¶
Optional. If set, enables transcription of the model’s audio output. The transcription aligns with the language code specified for the output audio, if configured.
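The setup fields above can be sketched as a mapping for the first client message. The model name and temperature are illustrative assumptions; the empty AudioTranscriptionConfig mappings simply enable transcription:

```python
# A dict mirroring BidiGenerateContentSetup. Model name is a placeholder.
setup = {
    "model": "models/some-live-model",
    "generation_config": {"temperature": 0.7},
    "system_instruction": {"parts": [{"text": "Answer briefly."}]},
    "input_audio_transcription": {},   # enable input transcription
    "output_audio_transcription": {},  # enable output transcription
}

# None of the generation_config fields unsupported in this RPC may be set.
unsupported = {
    "response_logprobs", "response_mime_type", "logprobs", "response_schema",
    "response_json_schema", "stop_sequence", "routing_config", "audio_timestamp",
}
assert unsupported.isdisjoint(setup["generation_config"])
```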
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentSetupComplete(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Sent in response to a BidiGenerateContentSetup message from the client.
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentToolCall(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for the client to execute the function_calls and return the responses with the matching ids.
- function_calls¶
Output only. The function call to be executed.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.FunctionCall]
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentToolCallCancellation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Notification for the client that a previously issued ToolCallMessage with the specified ids should not have been executed and should be cancelled. If there were side-effects to those tool calls, clients may attempt to undo the tool calls. This message occurs only in cases where the clients interrupt server turns.
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentToolResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Client-generated response to a ToolCall received from the server. Individual FunctionResponse objects are matched to the respective FunctionCall objects by the id field.
Note that in the unary and server-streaming GenerateContent APIs, function calling happens by exchanging the Content parts, while in the bidi GenerateContent APIs, function calling happens over this dedicated set of messages.
- function_responses¶
Optional. The response to the function calls.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.FunctionResponse]
- class google.ai.generativelanguage_v1beta.types.BidiGenerateContentTranscription(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Transcription of audio (input or output).
- class google.ai.generativelanguage_v1beta.types.Blob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Raw media bytes.
Text should not be sent as raw bytes; use the ‘text’ field.
- mime_type¶
The IANA standard MIME type of the source data. Examples:
- image/png
- image/jpeg
If an unsupported MIME type is provided, an error will be returned. For a complete list of supported types, see Supported file formats.
- Type
str
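A minimal sketch of a Blob-shaped mapping with inline media bytes. In the JSON representation the data field is base64-encoded; the bytes below are a placeholder, not a valid PNG:

```python
import base64

# A dict mirroring Blob. The payload is a stand-in, not a real image.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
blob = {"mime_type": "image/png", "data": base64.b64encode(fake_png).decode()}

# Round-trip: decoding the data field recovers the original bytes.
assert base64.b64decode(blob["data"]) == fake_png
```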
- class google.ai.generativelanguage_v1beta.types.CachedContent(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Content that has been preprocessed and can be used in subsequent requests to GenerativeService.
Cached content can only be used with the model it was created for.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- expire_time¶
Timestamp in UTC of when this resource is considered expired. This is always provided on output, regardless of what was sent on input.
This field is a member of oneof expiration.
- name¶
Output only. Identifier. The resource name referring to the cached content. Format: cachedContents/{id}
This field is a member of oneof _name.
- Type
str
- display_name¶
Optional. Immutable. The user-generated meaningful display name of the cached content. Maximum 128 Unicode characters.
This field is a member of oneof _display_name.
- Type
str
- model¶
Required. Immutable. The name of the Model to use for cached content. Format: models/{model}
This field is a member of oneof _model.
- Type
str
- system_instruction¶
Optional. Input only. Immutable. Developer set system instruction. Currently text only.
This field is a member of oneof _system_instruction.
- contents¶
Optional. Input only. Immutable. The content to cache.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
- tools¶
Optional. Input only. Immutable. A list of Tools the model may use to generate the next response.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Tool]
- tool_config¶
Optional. Input only. Immutable. Tool config. This config is shared for all tools.
This field is a member of oneof _tool_config.
- create_time¶
Output only. Creation time of the cache entry.
- update_time¶
Output only. When the cache entry was last updated in UTC time.
- usage_metadata¶
Output only. Metadata on the usage of the cached content.
- class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata on the usage of the cached content.
- class google.ai.generativelanguage_v1beta.types.Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A response candidate generated from the model.
- index¶
Output only. Index of the candidate in the list of response candidates.
This field is a member of oneof _index.
- Type
int
- content¶
Output only. Generated content returned from the model.
- finish_reason¶
Optional. Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.
- finish_message¶
Optional. Output only. Details the reason why the model stopped generating tokens. This is populated only when finish_reason is set.
This field is a member of oneof _finish_message.
- Type
str
- safety_ratings¶
List of ratings for the safety of a response candidate. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]
- citation_metadata¶
Output only. Citation information for the model-generated candidate.
This field may be populated with recitation information for any text included in the content. These are passages that are “recited” from copyrighted material in the foundational LLM’s training data.
- grounding_attributions¶
Output only. Attribution information for sources that contributed to a grounded answer.
This field is populated for GenerateAnswer calls.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingAttribution]
- grounding_metadata¶
Output only. Grounding metadata for the candidate.
This field is populated for GenerateContent calls.
- logprobs_result¶
Output only. Log-likelihood scores for the response tokens and top tokens.
- url_context_metadata¶
Output only. Metadata related to the URL context retrieval tool.
- class FinishReason(value)[source]¶
Bases:
proto.enums.Enum
Defines the reason why the model stopped generating tokens.
- Values:
- FINISH_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- STOP (1):
Natural stop point of the model or provided stop sequence.
- MAX_TOKENS (2):
The maximum number of tokens as specified in the request was reached.
- SAFETY (3):
The response candidate content was flagged for safety reasons.
- RECITATION (4):
The response candidate content was flagged for recitation reasons.
- LANGUAGE (6):
The response candidate content was flagged for using an unsupported language.
- OTHER (5):
Unknown reason.
- BLOCKLIST (7):
Token generation stopped because the content contains forbidden terms.
- PROHIBITED_CONTENT (8):
Token generation stopped for potentially containing prohibited content.
- SPII (9):
Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).
- MALFORMED_FUNCTION_CALL (10):
The function call generated by the model is invalid.
- IMAGE_SAFETY (11):
Token generation stopped because generated images contain safety violations.
- IMAGE_PROHIBITED_CONTENT (14):
Image generation stopped because the generated images contain other prohibited content.
- IMAGE_OTHER (15):
Image generation stopped because of another miscellaneous issue.
- NO_IMAGE (16):
The model was expected to generate an image, but none was generated.
- IMAGE_RECITATION (17):
Image generation stopped due to recitation.
- UNEXPECTED_TOOL_CALL (12):
Model generated a tool call but no tools were enabled in the request.
- TOO_MANY_TOOL_CALLS (13):
Model called too many tools consecutively, so the system exited execution.
- class google.ai.generativelanguage_v1beta.types.Chunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Chunk is a subpart of a Document that is treated as an independent unit for the purposes of vector representation and storage. A Corpus can have a maximum of 1 million Chunks.
- name¶
Immutable. Identifier. The Chunk resource name. The ID (the name excluding the “corpora/*/documents/*/chunks/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a random 12-character unique ID will be generated. Example: corpora/{corpus_id}/documents/{document_id}/chunks/123a456b789c
- Type
str
- data¶
Required. The content for the Chunk, such as the text string. The maximum number of tokens per chunk is 2043.
- custom_metadata¶
Optional. User-provided custom metadata stored as key-value pairs. The maximum number of CustomMetadata per chunk is 20.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CustomMetadata]
- create_time¶
Output only. The Timestamp of when the Chunk was created.
- update_time¶
Output only. The Timestamp of when the Chunk was last updated.
- state¶
Output only. Current state of the Chunk.
- class State(value)[source]¶
Bases:
proto.enums.Enum
States for the lifecycle of a Chunk.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is used if the state is omitted.
- STATE_PENDING_PROCESSING (1):
Chunk is being processed (embedding and vector storage).
- STATE_ACTIVE (2):
Chunk is processed and available for querying.
- STATE_FAILED (10):
Chunk failed processing.
- class google.ai.generativelanguage_v1beta.types.ChunkData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Extracted data that represents the Chunk content.
- class google.ai.generativelanguage_v1beta.types.CitationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A collection of source attributions for a piece of content.
- citation_sources¶
Citations to sources for a specific response.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CitationSource]
- class google.ai.generativelanguage_v1beta.types.CitationSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A citation to a source for a portion of a specific response.
- start_index¶
Optional. Start of segment of the response that is attributed to this source.
Index indicates the start of the segment, measured in bytes.
This field is a member of oneof _start_index.
- Type
int
- end_index¶
Optional. End of the attributed segment, exclusive.
This field is a member of oneof _end_index.
- Type
int
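A sketch of how start_index and end_index delimit an attributed span: both are byte offsets into the UTF-8 response text, with end_index exclusive. The text and offsets are illustrative:

```python
# A dict mirroring CitationSource's index fields, plus a response to
# slice. Offsets are bytes, and end_index is exclusive.
response_text = "Gravity bends light. More detail follows."
citation = {"start_index": 0, "end_index": 20}

segment = response_text.encode("utf-8")[citation["start_index"]:citation["end_index"]]
assert segment.decode("utf-8") == "Gravity bends light."
```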
- class google.ai.generativelanguage_v1beta.types.CodeExecution(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Tool that executes code generated by the model, and automatically returns the result to the model.
See also ExecutableCode and CodeExecutionResult, which are only generated when using this tool.
- class google.ai.generativelanguage_v1beta.types.CodeExecutionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Result of executing the ExecutableCode.
Only generated when using the CodeExecution tool, and always follows a part containing the ExecutableCode.
- outcome¶
Required. Outcome of the code execution.
- output¶
Optional. Contains stdout when code execution is successful; stderr or another description otherwise.
- Type
str
- class Outcome(value)[source]¶
Bases:
proto.enums.Enum
Enumeration of possible outcomes of the code execution.
- Values:
- OUTCOME_UNSPECIFIED (0):
Unspecified status. This value should not be used.
- OUTCOME_OK (1):
Code execution completed successfully.
- OUTCOME_FAILED (2):
Code execution finished but with a failure. stderr should contain the reason.
- OUTCOME_DEADLINE_EXCEEDED (3):
Code execution ran for too long, and was cancelled. There may or may not be a partial output present.
- class google.ai.generativelanguage_v1beta.types.Condition(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Filter condition applicable to a single key.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- string_value¶
The string value to filter the metadata on.
This field is a member of oneof value.
- Type
str
- numeric_value¶
The numeric value to filter the metadata on.
This field is a member of oneof value.
- Type
float
- operation¶
Required. Operator applied to the given key-value pair to trigger the condition.
- class Operator(value)[source]¶
Bases:
proto.enums.Enum
Defines the valid operators that can be applied to a key-value pair.
- Values:
- OPERATOR_UNSPECIFIED (0):
The default value. This value is unused.
- LESS (1):
Supported by numeric.
- LESS_EQUAL (2):
Supported by numeric.
- EQUAL (3):
Supported by numeric & string.
- GREATER_EQUAL (4):
Supported by numeric.
- GREATER (5):
Supported by numeric.
- NOT_EQUAL (6):
Supported by numeric & string.
- INCLUDES (7):
Supported by string only when the CustomMetadata value type for the given key has a string_list_value.
- EXCLUDES (8):
Supported by string only when the CustomMetadata value type for the given key has a string_list_value.
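The Condition fields above can be sketched as mappings. The value oneof means each condition carries either string_value or numeric_value, never both; the metadata keys and values here are illustrative:

```python
# Dicts mirroring Condition. Each sets exactly one member of the
# value oneof, plus the required operation.
newer_than_2000 = {"numeric_value": 2000, "operation": "GREATER"}
tagged_physics = {"string_value": "physics", "operation": "INCLUDES"}

for condition in (newer_than_2000, tagged_physics):
    set_values = {"string_value", "numeric_value"} & condition.keys()
    assert len(set_values) == 1  # the oneof allows only one value member
```

Note that INCLUDES and EXCLUDES only apply when the corresponding CustomMetadata value for the key is a string_list_value, per the operator table above.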
- class google.ai.generativelanguage_v1beta.types.Content(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The base structured datatype containing multi-part content of a message.
A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.
- parts¶
Ordered Parts that constitute a single message. Parts may have different MIME types.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Part]
- class google.ai.generativelanguage_v1beta.types.ContentEmbedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A list of floats representing an embedding.
- class google.ai.generativelanguage_v1beta.types.ContentFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.
- reason¶
The reason content was blocked during request processing.
- message¶
A string that describes the filtering behavior in more detail.
This field is a member of oneof _message.
- Type
str
- class BlockedReason(value)[source]¶
Bases:
proto.enums.Enum
A list of reasons why content may have been blocked.
- Values:
- BLOCKED_REASON_UNSPECIFIED (0):
A blocked reason was not specified.
- SAFETY (1):
Content was blocked by safety settings.
- OTHER (2):
Content was blocked, but the reason is uncategorized.
- class google.ai.generativelanguage_v1beta.types.ContextWindowCompressionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Enables context window compression, a mechanism for managing the model’s context window so that it does not exceed a given length.
- trigger_tokens¶
The number of tokens (before running a turn) required to trigger a context window compression.
This can be used to balance quality against latency as shorter context windows may result in faster model responses. However, any compression operation will cause a temporary latency increase, so they should not be triggered frequently.
If not set, the default is 80% of the model’s context window limit. This leaves 20% for the next user request/model response.
This field is a member of oneof _trigger_tokens.
- Type
int
- class SlidingWindow(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe SlidingWindow method operates by discarding content at the beginning of the context window. The resulting context will always begin at the start of a USER role turn. System instructions and any
BidiGenerateContentSetup.prefix_turnswill always remain at the beginning of the result.
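As a rough illustration of the behavior described above, the sketch below trims turns from the front of a conversation until a budget is met, advances so the result starts at a USER-role turn, and leaves prefix turns at the front. The dict-based turn representation and the helper itself are hypothetical, not part of this API.

```python
def compress(prefix_turns, turns, max_turns):
    """Hypothetical sketch of the sliding-window trimming described above."""
    # Discard content from the beginning of the context window...
    kept = turns[-max_turns:] if max_turns > 0 else []
    # ...then advance until the result begins at a USER-role turn.
    while kept and kept[0]["role"] != "user":
        kept.pop(0)
    # Prefix turns (and system instructions) always stay at the front.
    return prefix_turns + kept
```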
- class google.ai.generativelanguage_v1beta.types.Corpus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA
Corpusis a collection ofDocuments. A project can create up to 5 corpora.- name¶
Immutable. Identifier. The
Corpusresource name. The ID (name excluding the “corpora/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be derived fromdisplay_namealong with a 12 character random suffix. Example:corpora/my-awesome-corpora-123a456b789c- Type
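The ID rules above (at most 40 characters, lowercase alphanumeric or dashes, no leading or trailing dash) translate into a small client-side check; the regex below is an illustration derived from this description, not an official validator.

```python
import re

# Derived from the rules above: 1-40 chars, lowercase alphanumeric or
# dashes, with no leading or trailing dash. Illustration only.
_CORPUS_ID = re.compile(r"[a-z0-9]([a-z0-9-]{0,38}[a-z0-9])?")

def is_valid_corpus_id(corpus_id: str) -> bool:
    return _CORPUS_ID.fullmatch(corpus_id) is not None
```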
- display_name¶
Optional. The human-readable display name for the
Corpus. The display name must be no more than 512 characters in length, including spaces. Example: “Docs on Semantic Retriever”.- Type
- create_time¶
Output only. The Timestamp of when the
Corpuswas created.
- update_time¶
Output only. The Timestamp of when the
Corpuswas last updated.
- class google.ai.generativelanguage_v1beta.types.CountMessageTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageCounts the number of tokens in the
promptsent to a model.Models may tokenize text differently, so each model may return a different
token_count.- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the
ListModelsmethod.Format:
models/{model}- Type
- prompt¶
Required. The prompt, whose token count is to be returned.
- class google.ai.generativelanguage_v1beta.types.CountMessageTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA response from
CountMessageTokens.It returns the model’s
token_countfor theprompt.
- class google.ai.generativelanguage_v1beta.types.CountTextTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageCounts the number of tokens in the
promptsent to a model.Models may tokenize text differently, so each model may return a different
token_count.- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the
ListModelsmethod.Format:
models/{model}- Type
- prompt¶
Required. The free-form input text given to the model as a prompt.
- class google.ai.generativelanguage_v1beta.types.CountTextTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA response from
CountTextTokens.It returns the model’s
token_countfor theprompt.
- class google.ai.generativelanguage_v1beta.types.CountTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageCounts the number of tokens in the
promptsent to a model.Models may tokenize text differently, so each model may return a different
token_count.- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the
ListModelsmethod.Format:
models/{model}- Type
- contents¶
Optional. The input given to the model as a prompt. This field is ignored when
generate_content_requestis set.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
- generate_content_request¶
Optional. The overall input given to the
Model. This includes the prompt as well as other model-steering information such as system instructions and/or function declarations for function calling.Model/Contents andgenerate_content_requests are mutually exclusive. You can send either aModel+Contents or agenerate_content_request, but never both.
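Since contents and generate_content_request are mutually exclusive, a client can guard against malformed requests before sending them. This sketch uses a plain dict as a stand-in for the CountTokensRequest message and is illustrative only.

```python
def check_count_tokens_request(request: dict) -> None:
    """Raise if both mutually exclusive inputs are present (sketch only)."""
    has_contents = bool(request.get("contents"))
    has_gcr = request.get("generate_content_request") is not None
    if has_contents and has_gcr:
        raise ValueError(
            "Set either 'contents' or 'generate_content_request', never both."
        )
```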
- class google.ai.generativelanguage_v1beta.types.CountTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA response from
CountTokens.It returns the model’s
token_countfor theprompt.- total_tokens¶
The number of tokens that the
Modeltokenizes thepromptinto. Always non-negative.- Type
- cached_content_token_count¶
Number of tokens in the cached part of the prompt (the cached content).
- Type
- prompt_tokens_details¶
Output only. List of modalities that were processed in the request input.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
- cache_tokens_details¶
Output only. List of modalities that were processed in the cached content.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
- class google.ai.generativelanguage_v1beta.types.CreateCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to create CachedContent.
- cached_content¶
Required. The cached content to create.
- class google.ai.generativelanguage_v1beta.types.CreateChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to create a
Chunk.- parent¶
Required. The name of the
Documentwhere thisChunkwill be created. Example:corpora/my-corpus-123/documents/the-doc-abc- Type
- chunk¶
Required. The
Chunkto create.
- class google.ai.generativelanguage_v1beta.types.CreateCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to create a
Corpus.- corpus¶
Required. The
Corpusto create.
- class google.ai.generativelanguage_v1beta.types.CreateDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to create a
Document.- parent¶
Required. The name of the
Corpuswhere thisDocumentwill be created. Example:corpora/my-corpus-123- Type
- document¶
Required. The
Documentto create.
- class google.ai.generativelanguage_v1beta.types.CreateFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for
CreateFile.- file¶
Optional. Metadata for the file to create.
- class google.ai.generativelanguage_v1beta.types.CreateFileResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse for
CreateFile.- file¶
Metadata for the created file.
- class google.ai.generativelanguage_v1beta.types.CreatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to create a
Permission.- parent¶
Required. The parent resource of the
Permission. Formats:tunedModels/{tuned_model}corpora/{corpus}- Type
- permission¶
Required. The permission to create.
- class google.ai.generativelanguage_v1beta.types.CreateTunedModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata about the state and progress of creating a tuned model, returned from the long-running operation.
- snapshots¶
Metrics collected during tuning.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TuningSnapshot]
- class google.ai.generativelanguage_v1beta.types.CreateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to create a TunedModel.
- tuned_model_id¶
Optional. The unique id for the tuned model, if specified. This value can be up to 40 characters; the first character must be a letter, and the last can be a letter or a number. The id must match the regular expression:
[a-z]([a-z0-9-]{0,38}[a-z0-9])?.This field is a member of oneof
_tuned_model_id.- Type
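The regular expression quoted above can be applied client-side before issuing the request; the helper below is a hypothetical convenience, not part of the library.

```python
import re

# The pattern stated above: starts with a letter, ends with a letter or
# digit, up to 40 characters total. Illustration only.
_TUNED_MODEL_ID = re.compile(r"[a-z]([a-z0-9-]{0,38}[a-z0-9])?")

def is_valid_tuned_model_id(candidate: str) -> bool:
    return _TUNED_MODEL_ID.fullmatch(candidate) is not None
```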
- tuned_model¶
Required. The tuned model to create.
- class google.ai.generativelanguage_v1beta.types.CustomMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageUser provided metadata stored as key-value pairs.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- string_value¶
The string value of the metadata to store.
This field is a member of oneof
value.- Type
- string_list_value¶
The StringList value of the metadata to store.
This field is a member of oneof
value.
- class google.ai.generativelanguage_v1beta.types.Dataset(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageDataset for training or validation.
- class google.ai.generativelanguage_v1beta.types.DeleteCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete CachedContent.
- class google.ai.generativelanguage_v1beta.types.DeleteChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete a
Chunk.
- class google.ai.generativelanguage_v1beta.types.DeleteCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete a
Corpus.
- class google.ai.generativelanguage_v1beta.types.DeleteDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete a
Document.- name¶
Required. The resource name of the
Documentto delete. Example:corpora/my-corpus-123/documents/the-doc-abc- Type
- class google.ai.generativelanguage_v1beta.types.DeleteFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for
DeleteFile.
- class google.ai.generativelanguage_v1beta.types.DeletePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete the
Permission.
- class google.ai.generativelanguage_v1beta.types.DeleteTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to delete a TunedModel.
- class google.ai.generativelanguage_v1beta.types.Document(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA
Documentis a collection ofChunks. ACorpuscan have a maximum of 10,000Documents.- name¶
Immutable. Identifier. The
Documentresource name. The ID (name excluding the “corpora/*/documents/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be derived fromdisplay_namealong with a 12 character random suffix. Example:corpora/{corpus_id}/documents/my-awesome-doc-123a456b789c- Type
- display_name¶
Optional. The human-readable display name for the
Document. The display name must be no more than 512 characters in length, including spaces. Example: “Semantic Retriever Documentation”.- Type
- custom_metadata¶
Optional. User provided custom metadata stored as key-value pairs used for querying. A
Documentcan have a maximum of 20CustomMetadata.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CustomMetadata]
- update_time¶
Output only. The Timestamp of when the
Documentwas last updated.
- create_time¶
Output only. The Timestamp of when the
Documentwas created.
- class google.ai.generativelanguage_v1beta.types.DownloadFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for
DownloadFile.
- class google.ai.generativelanguage_v1beta.types.DownloadFileResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse for
DownloadFile.
- class google.ai.generativelanguage_v1beta.types.DynamicRetrievalConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageDescribes the options to customize dynamic retrieval.
- mode¶
The mode of the predictor to be used in dynamic retrieval.
- dynamic_threshold¶
The threshold to be used in dynamic retrieval. If not set, a system default value is used.
This field is a member of oneof
_dynamic_threshold.- Type
- class Mode(value)[source]¶
Bases:
proto.enums.EnumThe mode of the predictor to be used in dynamic retrieval.
- Values:
- MODE_UNSPECIFIED (0):
Always trigger retrieval.
- MODE_DYNAMIC (1):
Run retrieval only when system decides it is necessary.
- class google.ai.generativelanguage_v1beta.types.EmbedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest containing the
Contentfor the model to embed.- model¶
Required. The model’s resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the
ListModelsmethod.Format:
models/{model}- Type
- content¶
Required. The content to embed. Only the
parts.textfields will be counted.
- task_type¶
Optional. The task type for which the embeddings will be used. Not supported on earlier models (
models/embedding-001).This field is a member of oneof
_task_type.
- title¶
Optional. An optional title for the text. Only applicable when TaskType is
RETRIEVAL_DOCUMENT.Note: Specifying a
titleforRETRIEVAL_DOCUMENTprovides better quality embeddings for retrieval.This field is a member of oneof
_title.- Type
- output_dimensionality¶
Optional. A reduced dimension for the output embedding. If set, excess values in the output embedding are truncated from the end. Supported only by newer models (since 2024). You cannot set this value when using the earlier model (
models/embedding-001).This field is a member of oneof
_output_dimensionality.- Type
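Truncation from the end, as described above, amounts to keeping only the first N values of the embedding vector. The helper below is an illustrative client-side equivalent, not what the service does internally.

```python
def truncate_embedding(values, output_dimensionality=None):
    """Sketch of 'truncated from the end': keep the first N values."""
    if output_dimensionality is None:
        return list(values)
    return list(values)[:output_dimensionality]
```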
- class google.ai.generativelanguage_v1beta.types.EmbedContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe response to an
EmbedContentRequest.- embedding¶
Output only. The embedding generated from the input content.
- class google.ai.generativelanguage_v1beta.types.EmbedTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to get a text embedding from the model.
- class google.ai.generativelanguage_v1beta.types.EmbedTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe response to a EmbedTextRequest.
- class google.ai.generativelanguage_v1beta.types.Embedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA list of floats representing the embedding.
- class google.ai.generativelanguage_v1beta.types.Example(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageAn input/output example used to instruct the Model.
It demonstrates how the model should respond or format its response.
- input¶
Required. An example of an input
Messagefrom the user.
- output¶
Required. An example of what the model should output given the input.
- class google.ai.generativelanguage_v1beta.types.ExecutableCode(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageCode generated by the model that is meant to be executed, and the result returned to the model.
Only generated when using the
CodeExecutiontool, in which the code will be automatically executed, and a correspondingCodeExecutionResultwill also be generated.- language¶
Required. Programming language of the
code.
- class Language(value)[source]¶
Bases:
proto.enums.EnumSupported programming languages for the generated code.
- Values:
- LANGUAGE_UNSPECIFIED (0):
Unspecified language. This value should not be used.
- PYTHON (1):
Python >= 3.10, with numpy and simpy available.
- class google.ai.generativelanguage_v1beta.types.File(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA file uploaded to the API.
- name¶
Immutable. Identifier. The
Fileresource name. The ID (name excluding the “files/” prefix) can contain up to 40 characters that are lowercase alphanumeric or dashes (-). The ID cannot start or end with a dash. If the name is empty on create, a unique name will be generated. Example:files/123-456- Type
- display_name¶
Optional. The human-readable display name for the
File. The display name must be no more than 512 characters in length, including spaces. Example: “Welcome Image”.- Type
- create_time¶
Output only. The timestamp of when the
Filewas created.
- update_time¶
Output only. The timestamp of when the
Filewas last updated.
- expiration_time¶
Output only. The timestamp of when the
Filewill be deleted. Only set if theFileis scheduled to expire.
- state¶
Output only. Processing state of the File.
- source¶
Source of the File.
- error¶
Output only. Error status if File processing failed.
- Type
google.rpc.status_pb2.Status
- class Source(value)[source]¶
Bases:
proto.enums.Enum- Values:
- SOURCE_UNSPECIFIED (0):
Used if source is not specified.
- UPLOADED (1):
Indicates the file is uploaded by the user.
- GENERATED (2):
Indicates the file is generated by Google.
- REGISTERED (3):
Indicates the file is registered, i.e. is a Google Cloud Storage file.
- class State(value)[source]¶
Bases:
proto.enums.EnumStates for the lifecycle of a File.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is used if the state is omitted.
- PROCESSING (1):
File is being processed and cannot be used for inference yet.
- ACTIVE (2):
File is processed and available for inference.
- FAILED (10):
File failed processing.
- class google.ai.generativelanguage_v1beta.types.FileData(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageURI based data.
- class google.ai.generativelanguage_v1beta.types.FunctionCall(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA predicted
FunctionCallreturned from the model that contains a string representing theFunctionDeclaration.namewith the arguments and their values.- id¶
Optional. The unique id of the function call. If populated, the client is expected to execute the
function_calland return the response with the matchingid.- Type
- name¶
Required. The name of the function to call. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
- Type
- class google.ai.generativelanguage_v1beta.types.FunctionCallingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageConfiguration for specifying function calling behavior.
- mode¶
Optional. Specifies the mode in which function calling should execute. If unspecified, the default value will be set to AUTO.
- allowed_function_names¶
Optional. A set of function names that, when provided, limits the functions the model will call.
This should only be set when the Mode is ANY or VALIDATED. Function names should match [FunctionDeclaration.name]. When set, the model will predict a function call only from the allowed function names.
- Type
MutableSequence[str]
- class Mode(value)[source]¶
Bases:
proto.enums.EnumDefines the execution behavior for function calling by defining the execution mode.
- Values:
- MODE_UNSPECIFIED (0):
Unspecified function calling mode. This value should not be used.
- AUTO (1):
Default model behavior, model decides to predict either a function call or a natural language response.
- ANY (2):
The model is constrained to always predict a function call. If “allowed_function_names” is set, the predicted function call will be limited to one of “allowed_function_names”; otherwise it will be one of the provided “function_declarations”.
- NONE (3):
The model will not predict any function call; behavior is the same as when no function declarations are passed.
- VALIDATED (4):
The model decides to predict either a function call or a natural language response, but validates function calls with constrained decoding. If “allowed_function_names” is set, the predicted function call will be limited to one of “allowed_function_names”; otherwise it will be one of the provided “function_declarations”.
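The mode descriptions above can be summarized as a small dispatch over the declared functions. This pure-Python sketch (hypothetical names, no library calls) shows how allowed_function_names narrows the candidate set in ANY and VALIDATED modes.

```python
def candidate_functions(mode, declared, allowed_function_names=None):
    """Sketch of the candidate set implied by each mode above."""
    if mode == "NONE":
        return []  # No function call will be predicted.
    if mode in ("ANY", "VALIDATED") and allowed_function_names:
        return [name for name in declared if name in allowed_function_names]
    # AUTO (and ANY/VALIDATED without a restriction): any declared function.
    return list(declared)
```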
- class google.ai.generativelanguage_v1beta.types.FunctionDeclaration(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageStructured representation of a function declaration as defined by the OpenAPI 3.03 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a
Toolby the model and executed by the client.- name¶
Required. The name of the function. Must be a-z, A-Z, 0-9, or contain underscores, colons, dots, and dashes, with a maximum length of 64.
- Type
- parameters¶
Optional. Describes the parameters to this function. Reflects the Open API 3.03 Parameter Object string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter.
This field is a member of oneof
_parameters.
- parameters_json_schema¶
Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example:
{ "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] }
This field is mutually exclusive with
parameters.This field is a member of oneof
_parameters_json_schema.
- response¶
Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
This field is a member of oneof
_response.
- response_json_schema¶
Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function.
This field is mutually exclusive with
response.This field is a member of oneof
_response_json_schema.
- behavior¶
Optional. Specifies the function Behavior. Currently only supported by the BidiGenerateContent method.
- class Behavior(value)[source]¶
Bases:
proto.enums.EnumDefines the function behavior. Defaults to
BLOCKING.- Values:
- UNSPECIFIED (0):
This value is unused.
- BLOCKING (1):
If set, the system will wait to receive the function response before continuing the conversation.
- NON_BLOCKING (2):
If set, the system will not wait to receive the function response. Instead, it will attempt to handle function responses as they become available while maintaining the conversation between the user and the model.
- class google.ai.generativelanguage_v1beta.types.FunctionResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe result output from a
FunctionCallthat contains a string representing theFunctionDeclaration.nameand a structured JSON object containing any output from the function; this output is used as context to the model. It should contain the result of aFunctionCallmade based on a model prediction.- id¶
Optional. The id of the function call this response is for. Populated by the client to match the corresponding function call
id.- Type
- name¶
Required. The name of the function to call. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
- Type
- response¶
Required. The function response in JSON object format. Callers can use any keys of their choice that fit the function’s syntax to return the function output, e.g. “output”, “result”, etc. In particular, if the function call failed to execute, the response can have an “error” key to return error details to the model.
- parts¶
Optional. Ordered
Partsthat constitute a function response. Parts may have different IANA MIME types.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.FunctionResponsePart]
- will_continue¶
Optional. Signals that the function call continues and that more responses will be returned, turning the function call into a generator. Only applicable to NON_BLOCKING function calls; ignored otherwise. If set to false, future responses will not be considered. It is allowed to return an empty
responsewithwill_continue=Falseto signal that the function call is finished. This may still trigger the model generation. To avoid triggering the generation and finish the function call, additionally setschedulingtoSILENT.- Type
- scheduling¶
Optional. Specifies how the response should be scheduled in the conversation. Only applicable to NON_BLOCKING function calls; ignored otherwise. Defaults to WHEN_IDLE.
This field is a member of oneof
_scheduling.
- class Scheduling(value)[source]¶
Bases:
proto.enums.EnumSpecifies how the response should be scheduled in the conversation.
- Values:
- SCHEDULING_UNSPECIFIED (0):
This value is unused.
- SILENT (1):
Only add the result to the conversation context, do not interrupt or trigger generation.
- WHEN_IDLE (2):
Add the result to the conversation context, and prompt to generate output without interrupting ongoing generation.
- INTERRUPT (3):
Add the result to the conversation context, interrupt ongoing generation and prompt to generate output.
- class google.ai.generativelanguage_v1beta.types.FunctionResponseBlob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRaw media bytes for function response.
Text should not be sent as raw bytes, use the ‘FunctionResponse.response’ field.
- mime_type¶
The IANA standard MIME type of the source data. Examples:
image/png
image/jpeg
If an unsupported MIME type is provided, an error will be returned. For a complete list of supported types, see Supported file formats.
- Type
- class google.ai.generativelanguage_v1beta.types.FunctionResponsePart(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA datatype containing media that is part of a
FunctionResponsemessage.A
FunctionResponsePartconsists of data which has an associated datatype. AFunctionResponsePartcan only contain one of the accepted types inFunctionResponsePart.data.A
FunctionResponsePartmust have a fixed IANA MIME type identifying the type and subtype of the media if theinline_datafield is filled with raw bytes.
- class google.ai.generativelanguage_v1beta.types.GenerateAnswerRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to generate a grounded answer from the
Model.This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- inline_passages¶
Passages provided inline with the request.
This field is a member of oneof
grounding_source.
- semantic_retriever¶
Content retrieved from resources created via the Semantic Retriever API.
This field is a member of oneof
grounding_source.
- model¶
Required. The name of the
Modelto use for generating the grounded response.Format:
model=models/{model}.- Type
- contents¶
Required. The content of the current conversation with the
Model. For single-turn queries, this is a single question to answer. For multi-turn queries, this is a repeated field that contains conversation history and the lastContentin the list containing the question.Note:
GenerateAnsweronly supports queries in English.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
- answer_style¶
Required. Style in which answers should be returned.
- safety_settings¶
Optional. A list of unique
SafetySettinginstances for blocking unsafe content.This will be enforced on the
GenerateAnswerRequest.contentsandGenerateAnswerResponse.candidate. There should not be more than one setting for eachSafetyCategorytype. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for eachSafetyCategoryspecified in the safety_settings. If there is noSafetySettingfor a givenSafetyCategoryprovided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]
- temperature¶
Optional. Controls the randomness of the output.
Values can range from 0.0 to 1.0, inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model. A low temperature (~0.2) is usually recommended for Attributed-Question-Answering use cases.
This field is a member of oneof
_temperature.- Type
- class AnswerStyle(value)[source]¶
Bases:
proto.enums.EnumStyle for grounded answers.
- Values:
- ANSWER_STYLE_UNSPECIFIED (0):
Unspecified answer style.
- ABSTRACTIVE (1):
Succinct but abstract style.
- EXTRACTIVE (2):
Very brief and extractive style.
- VERBOSE (3):
Verbose style including extra details. The response may be formatted as a sentence, paragraph, multiple paragraphs, or bullet points, etc.
- class google.ai.generativelanguage_v1beta.types.GenerateAnswerResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from the model for a grounded answer.
- answer¶
Candidate answer from the model.
Note: The model always attempts to provide a grounded answer, even when the answer is unlikely to be answerable from the given passages. In that case, a low-quality or ungrounded answer may be provided, along with a low
answerable_probability.
- answerable_probability¶
Output only. The model’s estimate of the probability that its answer is correct and grounded in the input passages.
A low
answerable_probabilityindicates that the answer might not be grounded in the sources.When
answerable_probabilityis low, you may want to:Display a message to the effect of “We couldn’t answer that question” to the user.
Fall back to a general-purpose LLM that answers the question from world knowledge. The threshold and nature of such fallbacks will depend on individual use cases.
0.5is a good starting threshold.
This field is a member of oneof
_answerable_probability.- Type
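The guidance above (show a fallback message or route to a general-purpose model when answerable_probability is low) might be wired up as follows; the 0.5 starting threshold comes from the text, while the helper and fallback are hypothetical.

```python
def choose_answer(answer, answerable_probability, fallback, threshold=0.5):
    """Return the grounded answer, or a fallback when grounding is doubtful."""
    # Per the guidance above, 0.5 is a good starting threshold.
    if answerable_probability < threshold:
        return fallback()
    return answer
```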
- input_feedback¶
Output only. Feedback related to the input data used to answer the question, as opposed to the model-generated response to the question.
The input data can be one or more of the following:
Question specified by the last entry in GenerateAnswerRequest.content
Conversation history specified by the other entries in GenerateAnswerRequest.content
Grounding sources (GenerateAnswerRequest.semantic_retriever or GenerateAnswerRequest.inline_passages)
This field is a member of oneof
_input_feedback.
- class InputFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageFeedback related to the input data used to answer the question, as opposed to the model-generated response to the question.
- block_reason¶
Optional. If set, the input was blocked and no candidates are returned. Rephrase the input.
This field is a member of oneof
_block_reason.
- safety_ratings¶
Ratings for safety of the input. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]
- class BlockReason(value)[source]¶
Bases:
proto.enums.EnumSpecifies the reason the input was blocked.
- Values:
- BLOCK_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- SAFETY (1):
Input was blocked due to safety reasons. Inspect
safety_ratingsto understand which safety category blocked it.- OTHER (2):
Input was blocked due to other reasons.
- class google.ai.generativelanguage_v1beta.types.GenerateContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to generate a completion from the model.
- model¶
Required. The name of the
Model to use for generating the completion. Format:
models/{model}.- Type
- system_instruction¶
Optional. Developer set system instruction(s). Currently, text only.
This field is a member of oneof
_system_instruction.
- contents¶
Required. The content of the current conversation with the model.
For single-turn queries, this is a single instance. For multi-turn queries like chat, this is a repeated field that contains the conversation history and the latest request.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Content]
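As a sketch of the single-turn versus multi-turn distinction above, the REST/JSON shape of a `contents` field is shown below with plain dicts. The `role`/`parts` field names follow the Content message; the example text is illustrative.

```python
# Sketch: the JSON shape of a multi-turn `contents` field, alternating
# user and model turns with the newest request last.
contents = [
    {"role": "user",  "parts": [{"text": "What is the capital of France?"}]},
    {"role": "model", "parts": [{"text": "The capital of France is Paris."}]},
    {"role": "user",  "parts": [{"text": "And its population?"}]},  # latest request
]

# For a single-turn query, `contents` holds exactly one user entry.
single_turn = [{"role": "user", "parts": [{"text": "Hello"}]}]
```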
- tools¶
Optional. A list of
Tools the Model may use to generate the next response.A
Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the Model. Supported Tools are Function and code_execution. Refer to the Function calling and the Code execution guides to learn more.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Tool]
- tool_config¶
Optional. Tool configuration for any
Toolspecified in the request. Refer to the Function calling guide for a usage example.
- safety_settings¶
Optional. A list of unique
SafetySetting instances for blocking unsafe content. This will be enforced on the
GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safety_settings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_CIVIC_INTEGRITY are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]
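Since the API requires at most one setting per SafetyCategory, a request can be checked for duplicates before sending. This is a sketch using the JSON shape of the settings; the category and threshold enum names come from the API, but the validation itself is an illustrative client-side step, not part of the library.

```python
# Sketch: validating that safety_settings has at most one entry per
# HarmCategory before sending the request (the API rejects duplicates).
safety_settings = [
    {"category": "HARM_CATEGORY_HATE_SPEECH",       "threshold": "BLOCK_LOW_AND_ABOVE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
]

categories = [s["category"] for s in safety_settings]
assert len(categories) == len(set(categories)), "duplicate SafetyCategory"
```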
- class google.ai.generativelanguage_v1beta.types.GenerateContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from the model supporting multiple candidate responses.
Safety ratings and content filtering are reported for both the prompt in
GenerateContentResponse.prompt_feedback and for each candidate in finish_reason and in safety_ratings. The API:
Returns either all requested candidates or none of them
Returns no candidates at all only if there was something wrong with the prompt (check prompt_feedback)
Reports feedback on each candidate in finish_reason and safety_ratings.
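These guarantees suggest a response-handling pattern like the following minimal sketch, which uses plain dicts shaped like the REST response rather than the proto classes; the function name is illustrative.

```python
# Sketch: if no candidates came back, the prompt itself was blocked, so
# inspect prompt_feedback; otherwise inspect each candidate's finish_reason.
def summarize_response(response):
    if not response.get("candidates"):
        reason = response.get("prompt_feedback", {}).get("block_reason")
        return f"blocked: {reason or 'unknown'}"
    c = response["candidates"][0]
    return f"finish_reason={c['finish_reason']}"

print(summarize_response({"candidates": [{"finish_reason": "STOP"}]}))
print(summarize_response({"prompt_feedback": {"block_reason": "SAFETY"}}))
```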
- candidates¶
Candidate responses from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Candidate]
- prompt_feedback¶
Returns the prompt’s feedback related to the content filters.
- usage_metadata¶
Output only. Metadata on the generation requests’ token usage.
- class PromptFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA set of feedback metadata for the prompt specified in
GenerateContentRequest.content.- block_reason¶
Optional. If set, the prompt was blocked and no candidates are returned. Rephrase the prompt.
- safety_ratings¶
Ratings for safety of the prompt. There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]
- class BlockReason(value)[source]¶
Bases:
proto.enums.EnumSpecifies the reason why the prompt was blocked.
- Values:
- BLOCK_REASON_UNSPECIFIED (0):
Default value. This value is unused.
- SAFETY (1):
Prompt was blocked due to safety reasons. Inspect
safety_ratings to understand which safety category blocked it.
- OTHER (2):
Prompt was blocked due to unknown reasons.
- BLOCKLIST (3):
Prompt was blocked because it contains terms included in the terminology blocklist.
- PROHIBITED_CONTENT (4):
Prompt was blocked due to prohibited content.
- IMAGE_SAFETY (5):
Candidates blocked due to unsafe image generation content.
- class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata on the generation request’s token usage.
- prompt_token_count¶
Number of tokens in the prompt. When
cached_content is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.- Type
- cached_content_token_count¶
Number of tokens in the cached part of the prompt (the cached content).
- Type
- candidates_token_count¶
Total number of tokens across all the generated response candidates.
- Type
- total_token_count¶
Total token count for the generation request (prompt + response candidates).
- Type
- prompt_tokens_details¶
Output only. List of modalities that were processed in the request input.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
- cache_tokens_details¶
Output only. List of modalities of the cached content in the request input.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
- candidates_tokens_details¶
Output only. List of modalities that were returned in the response.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
- tool_use_prompt_tokens_details¶
Output only. List of modalities that were processed for tool-use request inputs.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
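The token-count fields above satisfy two identities worth checking: the total is the prompt plus the candidates, and the cached portion is a subset of the prompt count. The numbers below are illustrative, not real API output.

```python
# Sketch: the token-count relationships implied by the UsageMetadata fields.
usage = {
    "prompt_token_count": 120,         # includes the cached portion
    "cached_content_token_count": 80,  # subset of the prompt count
    "candidates_token_count": 45,
    "total_token_count": 165,          # prompt + response candidates
}
assert usage["total_token_count"] == (
    usage["prompt_token_count"] + usage["candidates_token_count"]
)
assert usage["cached_content_token_count"] <= usage["prompt_token_count"]
```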
- class google.ai.generativelanguage_v1beta.types.GenerateMessageRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to generate a message response from the model.
- prompt¶
Required. The structured textual input given to the model as a prompt. Given a prompt, the model will return what it predicts is the next message in the discussion.
- temperature¶
Optional. Controls the randomness of the output.
Values can range over
[0.0, 1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model.
This field is a member of oneof
_temperature.- Type
- candidate_count¶
Optional. The number of generated response messages to return.
This value must be between
[1, 8], inclusive. If unset, this will default to 1.
This field is a member of oneof
_candidate_count.- Type
- class google.ai.generativelanguage_v1beta.types.GenerateMessageResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe response from the model.
This includes candidate messages and conversation history in the form of chronologically-ordered messages.
- candidates¶
Candidate response messages from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Message]
- messages¶
The conversation history used by the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Message]
- filters¶
A set of content filtering metadata for the prompt and response text.
This indicates which
SafetyCategory(s) blocked a candidate from this response, the lowest HarmProbability that triggered a block, and the HarmThreshold setting for that category.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ContentFilter]
- class google.ai.generativelanguage_v1beta.types.GenerateTextRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to generate a text completion response from the model.
- model¶
Required. The name of the
Model or TunedModel to use for generating the completion. Examples: models/text-bison-001, tunedModels/sentence-translator-u3b7m- Type
- prompt¶
Required. The free-form input text given to the model as a prompt. Given a prompt, the model will generate a TextCompletion response it predicts as the completion of the input text.
- temperature¶
Optional. Controls the randomness of the output. Note: The default value varies by model, see the
Model.temperature attribute of the Model returned from the getModel function.
Values can range from [0.0, 1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model.
This field is a member of oneof
_temperature.- Type
- candidate_count¶
Optional. Number of generated responses to return.
This value must be between [1, 8], inclusive. If unset, this will default to 1.
This field is a member of oneof
_candidate_count.- Type
- max_output_tokens¶
Optional. The maximum number of tokens to include in a candidate.
If unset, this will default to output_token_limit specified in the
Model specification.
This field is a member of oneof
_max_output_tokens.- Type
- top_p¶
Optional. The maximum cumulative probability of tokens to consider when sampling.
The model uses combined Top-k and nucleus sampling.
Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on the cumulative probability.
Note: The default value varies by model, see the
Model.top_p attribute of the Model returned from the getModel function.
This field is a member of oneof
_top_p.- Type
- top_k¶
Optional. The maximum number of tokens to consider when sampling.
The model uses combined Top-k and nucleus sampling.
Top-k sampling considers the set of
top_k most probable tokens. Defaults to 40.
Note: The default value varies by model, see the
Model.top_k attribute of the Model returned from the getModel function.
This field is a member of oneof
_top_k.- Type
- safety_settings¶
Optional. A list of unique
SafetySetting instances for blocking unsafe content that will be enforced on the
GenerateTextRequest.prompt and GenerateTextResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any prompts and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safety_settings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_DEROGATORY, HARM_CATEGORY_TOXICITY, HARM_CATEGORY_VIOLENCE, HARM_CATEGORY_SEXUAL, HARM_CATEGORY_MEDICAL, HARM_CATEGORY_DANGEROUS are supported in the text service.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetySetting]
- class google.ai.generativelanguage_v1beta.types.GenerateTextResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe response from the model, including candidate completions.
- candidates¶
Candidate responses from the model.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TextCompletion]
- filters¶
A set of content filtering metadata for the prompt and response text.
This indicates which
SafetyCategory(s) blocked a candidate from this response, the lowest HarmProbability that triggered a block, and the HarmThreshold setting for that category. This indicates the smallest change to the SafetySettings that would be necessary to unblock at least 1 response.
The blocking is configured by the
SafetySettings in the request (or the default SafetySettings of the API).- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ContentFilter]
- safety_feedback¶
Returns any safety feedback related to content filtering.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyFeedback]
- class google.ai.generativelanguage_v1beta.types.GenerationConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageConfiguration options for model generation and outputs. Not all parameters are configurable for every model.
- candidate_count¶
Optional. Number of generated responses to return. If unset, this will default to 1. Please note that this doesn’t work for previous-generation models (Gemini 1.0 family).
This field is a member of oneof
_candidate_count.- Type
- stop_sequences¶
Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a
stop_sequence. The stop sequence will not be included as part of the response.- Type
MutableSequence[str]
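The stop-sequence behavior above (stop at the first appearance, exclude the sequence itself) can be sketched in plain Python. This is an illustration of the semantics, not the service's implementation.

```python
# Sketch: truncate output at the earliest occurrence of any stop sequence;
# the stop sequence itself is not included in the result.
def apply_stop_sequences(text, stop_sequences):
    cut = len(text)
    for s in stop_sequences:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

print(apply_stop_sequences("Answer: 42 END extra", ["END"]))
```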
- max_output_tokens¶
Optional. The maximum number of tokens to include in a response candidate.
Note: The default value varies by model, see the
Model.output_token_limit attribute of the Model returned from the getModel function.
This field is a member of oneof
_max_output_tokens.- Type
- temperature¶
Optional. Controls the randomness of the output.
Note: The default value varies by model, see the
Model.temperature attribute of the Model returned from the getModel function.
Values can range from [0.0, 2.0].
This field is a member of oneof
_temperature.- Type
- top_p¶
Optional. The maximum cumulative probability of tokens to consider when sampling.
The model uses combined Top-k and Top-p (nucleus) sampling.
Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits the number of tokens based on the cumulative probability.
Note: The default value varies by
Model and is specified by the Model.top_p attribute returned from the getModel function. An empty top_k attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow setting top_k on requests.
This field is a member of oneof
_top_p.- Type
- top_k¶
Optional. The maximum number of tokens to consider when sampling.
Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of
top_k most probable tokens. Models running with nucleus sampling don’t allow a top_k setting.
Note: The default value varies by
Model and is specified by the Model.top_k attribute returned from the getModel function. An empty top_k attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow setting top_k on requests.
This field is a member of oneof
_top_k.- Type
- seed¶
Optional. Seed used in decoding. If not set, the request uses a randomly generated seed.
This field is a member of oneof
_seed.- Type
- response_mime_type¶
Optional. MIME type of the generated candidate text. Supported MIME types are:
text/plain: (default) Text output.
application/json: JSON response in the response candidates.
text/x.enum: ENUM as a string response in the response candidates.
Refer to the docs for a list of all supported text MIME types.- Type
- response_schema¶
Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives or arrays.
If set, a compatible
response_mime_type must also be set. Compatible MIME types:
application/json: Schema for JSON response.
Refer to the JSON text generation guide for more details.
- response_json_schema¶
Optional. Output schema of the generated response. This is an alternative to
response_schema that accepts JSON Schema.
If set,
response_schema must be omitted, but response_mime_type is required.
$id, $defs, $ref, $anchor, type, format, title, description, enum (for strings and numbers), items, prefixItems, minItems, maxItems, minimum, maximum, anyOf, oneOf (interpreted the same as anyOf), properties, additionalProperties, required
The non-standard
propertyOrdering property may also be set.
Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If
$ref is set on a sub-schema, no other properties, except those starting with a $, may be set.
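A response_json_schema restricted to the supported keywords described above, paired with the required response_mime_type, might look like this sketch. The schema content (recipe fields) is invented for illustration.

```python
# Sketch: a response_json_schema using only supported JSON Schema keywords,
# paired with the response_mime_type the field requires.
generation_config = {
    "response_mime_type": "application/json",
    "response_json_schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Recipe name"},
            "steps": {"type": "array", "items": {"type": "string"}, "minItems": 1},
        },
        "required": ["title", "steps"],
        "propertyOrdering": ["title", "steps"],  # non-standard but accepted
    },
}
```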
- response_json_schema_ordered¶
Optional. An internal detail. Use
responseJsonSchema rather than this field.
- presence_penalty¶
Optional. Presence penalty applied to the next token’s logprobs if the token has already been seen in the response.
This penalty is binary (on/off) and not dependent on the number of times the token is used (after the first). Use [frequency_penalty][google.ai.generativelanguage.v1beta.GenerationConfig.frequency_penalty] for a penalty that increases with each use.
A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary.
A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary.
This field is a member of oneof
_presence_penalty.- Type
- frequency_penalty¶
Optional. Frequency penalty applied to the next token’s logprobs, multiplied by the number of times each token has been seen in the response so far.
A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: the more a token is used, the more difficult it is for the model to use that token again, increasing the vocabulary of responses.
Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the [max_output_tokens][google.ai.generativelanguage.v1beta.GenerationConfig.max_output_tokens] limit.
This field is a member of oneof
_frequency_penalty.- Type
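The contrast between the two penalties above can be sketched as arithmetic on a token's logprob. This is an illustration of the described semantics, not the service's internal formula; the function name and values are assumptions.

```python
# Sketch: presence_penalty is a one-time (binary) subtraction once a token
# has appeared; frequency_penalty scales with the token's usage count.
def penalized_logprob(logprob, count, presence_penalty=0.0, frequency_penalty=0.0):
    if count > 0:
        logprob -= presence_penalty        # applied once, regardless of count
    logprob -= frequency_penalty * count   # grows with every use
    return logprob

# A token already seen 3 times:
print(penalized_logprob(-1.0, 3, presence_penalty=0.5, frequency_penalty=0.2))
```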
- response_logprobs¶
Optional. If true, export the logprobs results in the response.
This field is a member of oneof
_response_logprobs.- Type
- logprobs¶
Optional. Only valid if [response_logprobs=True][google.ai.generativelanguage.v1beta.GenerationConfig.response_logprobs]. This sets the number of top logprobs to return at each decoding step in the [Candidate.logprobs_result][google.ai.generativelanguage.v1beta.Candidate.logprobs_result]. The number must be in the range of [0, 20].
This field is a member of oneof
_logprobs.- Type
- enable_enhanced_civic_answers¶
Optional. Enables enhanced civic answers. It may not be available for all models.
This field is a member of oneof
_enable_enhanced_civic_answers.- Type
- response_modalities¶
Optional. The requested modalities of the response. Represents the set of modalities that the model can return, and should be expected in the response. This is an exact match to the modalities of the response.
A model may have multiple combinations of supported modalities. If the requested modalities do not match any of the supported combinations, an error will be returned.
An empty list is equivalent to requesting only text.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GenerationConfig.Modality]
- speech_config¶
Optional. The speech generation config.
This field is a member of oneof
_speech_config.
- thinking_config¶
Optional. Config for thinking features. An error will be returned if this field is set for models that don’t support thinking.
This field is a member of oneof
_thinking_config.
- image_config¶
Optional. Config for image generation. An error will be returned if this field is set for models that don’t support these config options.
This field is a member of oneof
_image_config.
- media_resolution¶
Optional. If specified, the requested media resolution will be used.
This field is a member of oneof
_media_resolution.
- class MediaResolution(value)[source]¶
Bases:
proto.enums.EnumMedia resolution for the input media.
- Values:
- MEDIA_RESOLUTION_UNSPECIFIED (0):
Media resolution has not been set.
- MEDIA_RESOLUTION_LOW (1):
Media resolution set to low (64 tokens).
- MEDIA_RESOLUTION_MEDIUM (2):
Media resolution set to medium (256 tokens).
- MEDIA_RESOLUTION_HIGH (3):
Media resolution set to high (zoomed reframing with 256 tokens).
- class Modality(value)[source]¶
Bases:
proto.enums.EnumSupported modalities of the response.
- Values:
- MODALITY_UNSPECIFIED (0):
Default value.
- TEXT (1):
Indicates the model should return text.
- IMAGE (2):
Indicates the model should return images.
- AUDIO (3):
Indicates the model should return audio.
- class google.ai.generativelanguage_v1beta.types.GetCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to read CachedContent.
- class google.ai.generativelanguage_v1beta.types.GetChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for getting information about a specific
Chunk.
- class google.ai.generativelanguage_v1beta.types.GetCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for getting information about a specific
Corpus.
- class google.ai.generativelanguage_v1beta.types.GetDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for getting information about a specific
Document.
- class google.ai.generativelanguage_v1beta.types.GetFileRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for
GetFile.
- class google.ai.generativelanguage_v1beta.types.GetModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for getting information about a specific Model.
- class google.ai.generativelanguage_v1beta.types.GetPermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for getting information about a specific
Permission.
- class google.ai.generativelanguage_v1beta.types.GetTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for getting information about a specific Model.
- class google.ai.generativelanguage_v1beta.types.GoAway(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA notice that the server will soon disconnect.
- time_left¶
The remaining time before the connection will be terminated as ABORTED. This duration will never be less than a model-specific minimum, which will be specified together with the rate limits for the model.
- class google.ai.generativelanguage_v1beta.types.GoogleSearchRetrieval(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageTool to retrieve public web data for grounding, powered by Google.
- dynamic_retrieval_config¶
Specifies the dynamic retrieval configuration for the given source.
- class google.ai.generativelanguage_v1beta.types.GroundingAttribution(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageAttribution for a source that contributed to an answer.
- source_id¶
Output only. Identifier for the source contributing to this attribution.
- content¶
Grounding source content that makes up this attribution.
- class google.ai.generativelanguage_v1beta.types.GroundingChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageGrounding chunk.
- class google.ai.generativelanguage_v1beta.types.GroundingMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata returned to client when grounding is enabled.
- search_entry_point¶
Optional. Google Search entry point for follow-up web searches.
This field is a member of oneof
_search_entry_point.
- grounding_chunks¶
List of supporting references retrieved from the specified grounding source.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingChunk]
- grounding_supports¶
List of grounding supports.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingSupport]
- class google.ai.generativelanguage_v1beta.types.GroundingPassage(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessagePassage included inline with a grounding configuration.
- content¶
Content of the passage.
- class google.ai.generativelanguage_v1beta.types.GroundingPassages(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA repeated list of passages.
- passages¶
List of passages.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.GroundingPassage]
- class google.ai.generativelanguage_v1beta.types.GroundingSupport(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageGrounding support.
- class google.ai.generativelanguage_v1beta.types.HarmCategory(value)[source]¶
Bases:
proto.enums.EnumThe category of a rating.
These categories cover various kinds of harms that developers may wish to adjust.
- Values:
- HARM_CATEGORY_UNSPECIFIED (0):
Category is unspecified.
- HARM_CATEGORY_DEROGATORY (1):
PaLM - Negative or harmful comments targeting identity and/or protected attribute.
- HARM_CATEGORY_TOXICITY (2):
PaLM - Content that is rude, disrespectful, or profane.
- HARM_CATEGORY_VIOLENCE (3):
PaLM - Describes scenarios depicting violence against an individual or group, or general descriptions of gore.
- HARM_CATEGORY_SEXUAL (4):
PaLM - Contains references to sexual acts or other lewd content.
- HARM_CATEGORY_MEDICAL (5):
PaLM - Promotes unchecked medical advice.
- HARM_CATEGORY_DANGEROUS (6):
PaLM - Dangerous content that promotes, facilitates, or encourages harmful acts.
- HARM_CATEGORY_HARASSMENT (7):
Gemini - Harassment content.
- HARM_CATEGORY_HATE_SPEECH (8):
Gemini - Hate speech and content.
- HARM_CATEGORY_SEXUALLY_EXPLICIT (9):
Gemini - Sexually explicit content.
- HARM_CATEGORY_DANGEROUS_CONTENT (10):
Gemini - Dangerous content.
- HARM_CATEGORY_CIVIC_INTEGRITY (11):
Gemini - Content that may be used to harm civic integrity. DEPRECATED: use enable_enhanced_civic_answers instead.
- class google.ai.generativelanguage_v1beta.types.Hyperparameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageHyperparameters controlling the tuning process. Read more at https://ai.google.dev/docs/model_tuning_guidance
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- learning_rate¶
Optional. Immutable. The learning rate hyperparameter for tuning. If not set, a default of 0.001 or 0.0002 will be calculated based on the number of training examples.
This field is a member of oneof
learning_rate_option.- Type
- learning_rate_multiplier¶
Optional. Immutable. The learning rate multiplier is used to calculate a final learning_rate based on the default (recommended) value. Actual learning rate := learning_rate_multiplier * default learning rate. The default learning rate is dependent on the base model and dataset size. If not set, a default of 1.0 will be used.
This field is a member of oneof
learning_rate_option.- Type
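The multiplier arithmetic above is a single formula. The 0.001 default below is one of the documented possibilities; the actual default depends on the base model and dataset size.

```python
# Sketch: actual learning rate = learning_rate_multiplier * default rate.
# The 0.001 default here is illustrative; the real default is model- and
# dataset-dependent.
default_learning_rate = 0.001
learning_rate_multiplier = 2.0
actual_learning_rate = learning_rate_multiplier * default_learning_rate
print(actual_learning_rate)  # 0.002
```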
- class google.ai.generativelanguage_v1beta.types.ImageConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageConfig for image generation features.
- class google.ai.generativelanguage_v1beta.types.ListCachedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to list CachedContents.
- page_size¶
Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.
- Type
- class google.ai.generativelanguage_v1beta.types.ListCachedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse with CachedContents list.
- cached_contents¶
List of cached contents.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.CachedContent]
- class google.ai.generativelanguage_v1beta.types.ListChunksRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for listing
Chunks.- parent¶
Required. The name of the
Document containing Chunks. Example: corpora/my-corpus-123/documents/the-doc-abc- Type
- page_size¶
Optional. The maximum number of
Chunks to return (per page). The service may return fewer Chunks.
If unspecified, at most 10
Chunks will be returned. The maximum size limit is 100 Chunks per page.- Type
- page_token¶
Optional. A page token, received from a previous
ListChunks call.
Provide the
next_page_token returned in the response as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to
ListChunks must match the call that provided the page token.- Type
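The page_token protocol above is the standard list-pagination loop. The sketch below uses a stub `fetch_page` function with dict-shaped pages in place of a real ListChunks RPC; everything except the next_page_token contract is illustrative.

```python
# Sketch of the page_token loop: keep requesting with the returned
# next_page_token until the service omits it.
def list_all(fetch_page):
    items, token = [], None
    while True:
        page = fetch_page(page_token=token)
        items.extend(page["chunks"])
        token = page.get("next_page_token")
        if not token:
            return items

# Stubbed two-page "service" for illustration:
pages = {None: {"chunks": [1, 2], "next_page_token": "t2"},
         "t2": {"chunks": [3]}}
print(list_all(lambda page_token: pages[page_token]))  # [1, 2, 3]
```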
- class google.ai.generativelanguage_v1beta.types.ListChunksResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from
ListChunks containing a paginated list of Chunks. The Chunks are sorted by ascending chunk.create_time.- chunks¶
The returned
Chunks.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Chunk]
- class google.ai.generativelanguage_v1beta.types.ListCorporaRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for listing
Corpora.- page_size¶
Optional. The maximum number of
Corpora to return (per page). The service may return fewer Corpora.
If unspecified, at most 10
Corpora will be returned. The maximum size limit is 20 Corpora per page.- Type
- page_token¶
Optional. A page token, received from a previous
ListCorpora call.
Provide the
next_page_token returned in the response as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to
ListCorpora must match the call that provided the page token.- Type
- class google.ai.generativelanguage_v1beta.types.ListCorporaResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from
ListCorpora containing a paginated list of Corpora. The results are sorted by ascending corpus.create_time.- corpora¶
The returned corpora.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Corpus]
- class google.ai.generativelanguage_v1beta.types.ListDocumentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest for listing
Documents.- parent¶
Required. The name of the
Corpus containing Documents. Example: corpora/my-corpus-123- Type
- page_size¶
Optional. The maximum number of
Documents to return (per page). The service may return fewer Documents.
If unspecified, at most 10
Documents will be returned. The maximum size limit is 20 Documents per page.- Type
- page_token¶
Optional. A page token, received from a previous
ListDocuments call.
Provide the
next_page_token returned in the response as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to
ListDocuments must match the call that provided the page token.- Type
- class google.ai.generativelanguage_v1beta.types.ListDocumentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from
ListDocuments containing a paginated list of Documents. The Documents are sorted by ascending document.create_time.- documents¶
The returned
Documents.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Document]
- class google.ai.generativelanguage_v1beta.types.ListFilesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for ListFiles.
- page_size¶
Optional. Maximum number of Files to return per page. If unspecified, defaults to 10. Maximum page_size is 100.
- Type
- class google.ai.generativelanguage_v1beta.types.ListFilesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response for ListFiles.
- files¶
The list of Files.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.File]
- class google.ai.generativelanguage_v1beta.types.ListModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing all Models.
- page_size¶
The maximum number of Models to return (per page).
If unspecified, 50 models will be returned per page. This method returns at most 1000 models per page, even if you pass a larger page_size.
- Type
- class google.ai.generativelanguage_v1beta.types.ListModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListModels containing a paginated list of Models.
- models¶
The returned Models.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Model]
- class google.ai.generativelanguage_v1beta.types.ListPermissionsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing permissions.
- parent¶
Required. The parent resource of the permissions. Formats:
tunedModels/{tuned_model}
corpora/{corpus}
- Type
- page_size¶
Optional. The maximum number of Permissions to return (per page). The service may return fewer permissions.
If unspecified, at most 10 permissions will be returned. This method returns at most 1000 permissions per page, even if you pass a larger page_size.
- Type
- page_token¶
Optional. A page token, received from a previous ListPermissions call.
Provide the page_token returned by one request as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to ListPermissions must match the call that provided the page token.
- Type
- class google.ai.generativelanguage_v1beta.types.ListPermissionsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListPermissions containing a paginated list of permissions.
- permissions¶
Returned permissions.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Permission]
- class google.ai.generativelanguage_v1beta.types.ListTunedModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for listing TunedModels.
- page_size¶
Optional. The maximum number of TunedModels to return (per page). The service may return fewer tuned models.
If unspecified, at most 10 tuned models will be returned. This method returns at most 1000 models per page, even if you pass a larger page_size.
- Type
- page_token¶
Optional. A page token, received from a previous ListTunedModels call.
Provide the page_token returned by one request as an argument to the next request to retrieve the next page.
When paginating, all other parameters provided to ListTunedModels must match the call that provided the page token.
- Type
- filter¶
Optional. A filter is a full text search over the tuned model’s description and display name. By default, results will not include tuned models shared with everyone.
Additional operators:
owner:me
writers:me
readers:me
readers:everyone
Examples:
“owner:me” returns all tuned models to which the caller has the owner role
“readers:me” returns all tuned models to which the caller has the reader role
“readers:everyone” returns all tuned models that are shared with everyone
- Type
- class google.ai.generativelanguage_v1beta.types.ListTunedModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from ListTunedModels containing a paginated list of TunedModels.
- tuned_models¶
The returned Models.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TunedModel]
- class google.ai.generativelanguage_v1beta.types.LogprobsResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Logprobs result.
- log_probability_sum¶
Sum of log probabilities for all tokens.
This field is a member of oneof _log_probability_sum.
- Type
- top_candidates¶
Length = total number of decoding steps.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.TopCandidates]
- chosen_candidates¶
Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.Candidate]
- class Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Candidate for the logprobs token and score.
- class TopCandidates(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Candidates with top log probabilities at each decoding step.
- candidates¶
Sorted by log probability in descending order.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.LogprobsResult.Candidate]
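Reading the two parallel lists can be sketched with plain dicts shaped like LogprobsResult (the tokens and numbers here are made up for illustration; the real message uses proto types, not dicts):

```python
import math

# Hypothetical payload shaped like LogprobsResult: one chosen candidate
# per decoding step, plus the alternatives considered at that step.
logprobs = {
    "chosen_candidates": [
        {"token": "Hello", "log_probability": -0.05},
        {"token": "!", "log_probability": -1.20},
    ],
    "top_candidates": [
        {"candidates": [{"token": "Hello", "log_probability": -0.05},
                        {"token": "Hi", "log_probability": -3.10}]},
        {"candidates": [{"token": ".", "log_probability": -0.90},
                        {"token": "!", "log_probability": -1.20}]},
    ],
}

def chosen_in_top(result):
    """For each decoding step, was the chosen token among top_candidates?"""
    flags = []
    for step, chosen in enumerate(result["chosen_candidates"]):
        alts = result["top_candidates"][step]["candidates"]
        flags.append(any(c["token"] == chosen["token"] for c in alts))
    return flags

def total_log_probability(result):
    """Sum over steps; mirrors what log_probability_sum reports."""
    return sum(c["log_probability"] for c in result["chosen_candidates"])
```

Note the documented caveat: at step 1 above the chosen token "!" appears in top_candidates, but the message guarantees only equal lengths, not membership.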
- class google.ai.generativelanguage_v1beta.types.Media(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A proto message encapsulating various types of media.
- class google.ai.generativelanguage_v1beta.types.Message(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The base unit of structured text.
A Message includes an author and the content of the Message.
The author is used to tag messages when they are fed to the model as text.
- author¶
Optional. The author of this Message.
This serves as a key for tagging the content of this Message when it is fed to the model as text.
The author can be any alphanumeric string.
- Type
- citation_metadata¶
Output only. Citation information for model-generated content in this Message.
If this Message was generated as output from the model, this field may be populated with attribution information for any text included in the content. This field is used only on output.
This field is a member of oneof _citation_metadata.
- class google.ai.generativelanguage_v1beta.types.MessagePrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
All of the structured input text passed to the model as a prompt.
A MessagePrompt contains a structured set of fields that provide context for the conversation, examples of user input/model output message pairs that prime the model to respond in different ways, and the conversation history or list of messages representing the alternating turns of the conversation between the user and the model.
- context¶
Optional. Text that should be provided to the model first to ground the response.
If not empty, this context will be given to the model first before the examples and messages. When using a context be sure to provide it with every request to maintain continuity.
This field can be a description of your prompt to the model to help provide context and guide the responses. Examples: “Translate the phrase from English to French.” or “Given a statement, classify the sentiment as happy, sad or neutral.”
Anything included in this field will take precedence over message history if the total input size exceeds the model’s input_token_limit and the input request is truncated.
- Type
- examples¶
Optional. Examples of what the model should generate.
This includes both user input and the response that the model should emulate.
These examples are treated identically to conversation messages except that they take precedence over the history in messages: If the total input size exceeds the model’s input_token_limit the input will be truncated. Items will be dropped from messages before examples.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Example]
- messages¶
Required. A snapshot of the recent conversation history sorted chronologically.
Turns alternate between two authors.
If the total input size exceeds the model’s input_token_limit the input will be truncated: The oldest items will be dropped from messages.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Message]
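The truncation precedence described above (context kept first, then examples, then the newest messages) can be sketched as follows. The helper and token counts are hypothetical; the service performs this truncation server-side:

```python
# Hypothetical token-count helper illustrating the documented drop order:
# context is always kept, the oldest messages are dropped first, and
# examples are dropped only after messages are exhausted.
def truncate_prompt(context_tokens, example_tokens, message_tokens, limit):
    budget = limit - context_tokens
    kept_messages = list(message_tokens)
    while kept_messages and sum(example_tokens) + sum(kept_messages) > budget:
        kept_messages.pop(0)  # oldest message goes first
    kept_examples = list(example_tokens)
    while kept_examples and sum(kept_examples) + sum(kept_messages) > budget:
        kept_examples.pop()
    return kept_examples, kept_messages
```

With a limit of 100, a 10-token context, two 20-token examples, and three 30-token messages, the two oldest messages are dropped while both examples survive.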
- class google.ai.generativelanguage_v1beta.types.MetadataFilter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
User-provided filter to limit retrieval based on Chunk or Document level metadata values. Example (genre = drama OR genre = action): key = “document.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]
- conditions¶
Required. The Conditions for the given key that will trigger this filter. Multiple Conditions are joined by logical ORs.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Condition]
- class google.ai.generativelanguage_v1beta.types.Modality(value)[source]¶
Bases:
proto.enums.Enum
Content Part modality.
- Values:
- MODALITY_UNSPECIFIED (0):
Unspecified modality.
- TEXT (1):
Plain text.
- IMAGE (2):
Image.
- VIDEO (3):
Video.
- AUDIO (4):
Audio.
- DOCUMENT (5):
Document, e.g. PDF.
- class google.ai.generativelanguage_v1beta.types.ModalityTokenCount(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Represents token counting info for a single modality.
- modality¶
The modality associated with this token count.
- class google.ai.generativelanguage_v1beta.types.Model(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Information about a Generative Language Model.
- name¶
Required. The resource name of the Model. Refer to Model variants for all allowed values.
Format: models/{model} with a {model} naming convention of: “{base_model_id}-{version}”
Examples:
models/gemini-1.5-flash-001
- Type
- base_model_id¶
Required. The name of the base model; pass this to the generation request.
Examples:
gemini-1.5-flash
- Type
- version¶
Required. The version number of the model.
This represents the major version (1.0 or 1.5).
- Type
- display_name¶
The human-readable name of the model. E.g. “Gemini 1.5 Flash”. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- Type
- supported_generation_methods¶
The model’s supported generation methods.
The corresponding API method names are defined as camel case strings, such as generateMessage and generateContent.
- Type
MutableSequence[str]
- temperature¶
Controls the randomness of the output.
Values can range over [0.0, max_temperature], inclusive. A higher value will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model. This value specifies the default used by the backend when calling the model.
This field is a member of oneof _temperature.
- Type
- max_temperature¶
The maximum temperature this model can use.
This field is a member of oneof _max_temperature.
- Type
- top_p¶
For Nucleus sampling.
Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p. This value specifies the default used by the backend when calling the model.
This field is a member of oneof _top_p.
- Type
- top_k¶
For Top-k sampling.
Top-k sampling considers the set of top_k most probable tokens. This value specifies the default used by the backend when calling the model. If empty, it indicates the model doesn’t use top-k sampling, and top_k isn’t allowed as a generation parameter.
This field is a member of oneof _top_k.
- Type
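How the three sampling knobs compose can be illustrated with a toy distribution. This is a plain-Python sketch of the standard temperature/top-k/top-p interaction, not the service's actual decoder:

```python
import math

# Toy sketch: temperature rescales logits, top_k keeps the k most probable
# tokens, then top_p keeps the smallest nucleus whose probability mass
# reaches the threshold. Logit values are made up for illustration.
def candidate_set(logits, temperature=1.0, top_k=None, top_p=1.0):
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]
    kept, total = [], 0.0
    for tok, p in ranked:  # smallest set with probability sum >= top_p
        kept.append(tok)
        total += p
        if total >= top_p:
            break
    return kept
```

Lowering temperature concentrates mass on the most probable token, so the same top_p cutoff admits fewer candidates.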
- class google.ai.generativelanguage_v1beta.types.MultiSpeakerVoiceConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The configuration for the multi-speaker setup.
- speaker_voice_configs¶
Required. All the enabled speaker voices.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SpeakerVoiceConfig]
- class google.ai.generativelanguage_v1beta.types.Part(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A datatype containing media that is part of a multi-part Content message.
A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data.
A Part must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data field is filled with raw bytes.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- function_call¶
A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name with the arguments and their values.
This field is a member of oneof data.
- function_response¶
The result output of a FunctionCall that contains a string representing the FunctionDeclaration.name and a structured JSON object containing any output from the function; it is used as context to the model.
This field is a member of oneof data.
- executable_code¶
Code generated by the model that is meant to be executed.
This field is a member of oneof data.
- code_execution_result¶
Result of executing the ExecutableCode.
This field is a member of oneof data.
- video_metadata¶
Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
This field is a member of oneof metadata.
- thought_signature¶
Optional. An opaque signature for the thought so it can be reused in subsequent requests.
- Type
- part_metadata¶
Custom metadata associated with the Part. Agents using genai.Part as the content representation may need to keep track of additional information. For example, it can be the name of the file/source from which the Part originates or a way to multiplex multiple Part streams.
- class google.ai.generativelanguage_v1beta.types.Permission(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The Permission resource grants a user, group, or the rest of the world access to a PaLM API resource (e.g. a tuned model or corpus).
A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant permissions that the role contains.
There are three concentric roles. Each role is a superset of the previous role’s permitted operations:
reader can use the resource (e.g. tuned model, corpus) for inference
writer has reader’s permissions and additionally can edit and share
owner has writer’s permissions and additionally can delete
- name¶
Output only. Identifier. The permission name. A unique name will be generated on create. Examples: tunedModels/{tuned_model}/permissions/{permission} corpora/{corpus}/permissions/{permission}
- Type
- grantee_type¶
Optional. Immutable. The type of the grantee.
This field is a member of oneof _grantee_type.
- email_address¶
Optional. Immutable. The email address of the user or group to which this permission refers. Field is not set when the permission’s grantee type is EVERYONE.
This field is a member of oneof _email_address.
- Type
- class GranteeType(value)[source]¶
Bases:
proto.enums.Enum
Defines types of the grantee of this permission.
- Values:
- GRANTEE_TYPE_UNSPECIFIED (0):
The default value. This value is unused.
- USER (1):
Represents a user. When set, you must provide email_address for the user.
- GROUP (2):
Represents a group. When set, you must provide email_address for the group.
- EVERYONE (3):
Represents access to everyone. No extra information is required.
- class Role(value)[source]¶
Bases:
proto.enums.Enum
Defines the role granted by this permission.
- Values:
- ROLE_UNSPECIFIED (0):
The default value. This value is unused.
- OWNER (1):
Owner can use, update, share and delete the resource.
- WRITER (2):
Writer can use, update and share the resource.
- READER (3):
Reader can use the resource.
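The concentric-role property described above can be sketched as a subset check; the operation names below paraphrase the value descriptions and are not API identifiers:

```python
# Sketch of the concentric roles: each role includes the permitted
# operations of the role below it ("use" stands in for inference access).
ROLE_OPS = {
    "READER": {"use"},
    "WRITER": {"use", "update", "share"},
    "OWNER": {"use", "update", "share", "delete"},
}

def allows(role, op):
    """Is the operation permitted for a grantee holding this role?"""
    return op in ROLE_OPS.get(role, set())
```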
- class google.ai.generativelanguage_v1beta.types.PrebuiltVoiceConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The configuration for the prebuilt speaker to use.
- class google.ai.generativelanguage_v1beta.types.PredictLongRunningGeneratedVideoResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Veo response.
- generated_samples¶
The generated samples.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Media]
- class google.ai.generativelanguage_v1beta.types.PredictLongRunningMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata for PredictLongRunning long running operations.
- class google.ai.generativelanguage_v1beta.types.PredictLongRunningRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [PredictionService.PredictLongRunning].
- instances¶
Required. The instances that are the input to the prediction call.
- Type
MutableSequence[google.protobuf.struct_pb2.Value]
- parameters¶
Optional. The parameters that govern the prediction call.
- class google.ai.generativelanguage_v1beta.types.PredictLongRunningResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for [PredictionService.PredictLongRunning].
- class google.ai.generativelanguage_v1beta.types.PredictRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [PredictionService.Predict][google.ai.generativelanguage.v1beta.PredictionService.Predict].
- instances¶
Required. The instances that are the input to the prediction call.
- Type
MutableSequence[google.protobuf.struct_pb2.Value]
- parameters¶
Optional. The parameters that govern the prediction call.
- class google.ai.generativelanguage_v1beta.types.PredictResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for [PredictionService.Predict].
- predictions¶
The outputs of the prediction call.
- Type
MutableSequence[google.protobuf.struct_pb2.Value]
- class google.ai.generativelanguage_v1beta.types.QueryCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for querying a Corpus.
- metadata_filters¶
Optional. Filter for Chunk and Document metadata. Each MetadataFilter object should correspond to a unique key. Multiple MetadataFilter objects are joined by logical “AND”s.
Example query at document level: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)
MetadataFilter object list: metadata_filters = [ {key = “document.custom_metadata.year” conditions = [{int_value = 2020, operation = GREATER_EQUAL}, {int_value = 2010, operation = LESS}]}, {key = “document.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]}]
Example query at chunk level for a numeric range of values: (year > 2015 AND year <= 2020)
MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2015, operation = GREATER}]}, {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = LESS_EQUAL}]}]
Note: “AND”s for the same key are only supported for numeric values. String values only support “OR”s for the same key.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]
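The combination semantics above, OR within one MetadataFilter and AND across filters, can be sketched with plain dicts. This is a hypothetical client-side evaluator; the real filtering happens server-side against proto Condition objects:

```python
# Hypothetical evaluator for the documented semantics: conditions within
# one MetadataFilter are OR'ed, separate MetadataFilter objects are AND'ed.
OPS = {
    "EQUAL": lambda a, b: a == b,
    "GREATER": lambda a, b: a > b,
    "GREATER_EQUAL": lambda a, b: a >= b,
    "LESS": lambda a, b: a < b,
    "LESS_EQUAL": lambda a, b: a <= b,
}

def matches(metadata, metadata_filters):
    return all(                                    # AND across filters
        any(OPS[c["operation"]](metadata.get(f["key"]), c["value"])
            for c in f["conditions"])              # OR within one filter
        for f in metadata_filters)

# (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)
filters = [
    {"key": "document.custom_metadata.year",
     "conditions": [{"value": 2020, "operation": "GREATER_EQUAL"},
                    {"value": 2010, "operation": "LESS"}]},
    {"key": "document.custom_metadata.genre",
     "conditions": [{"value": "drama", "operation": "EQUAL"},
                    {"value": "action", "operation": "EQUAL"}]},
]
```

A 2022 drama matches both filters; a 2015 drama fails the year filter (neither >= 2020 nor < 2010) and is excluded.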
- class google.ai.generativelanguage_v1beta.types.QueryCorpusResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from QueryCorpus containing a list of relevant chunks.
- relevant_chunks¶
The relevant chunks.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.RelevantChunk]
- class google.ai.generativelanguage_v1beta.types.QueryDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request for querying a Document.
- name¶
Required. The name of the Document to query. Example: corpora/my-corpus-123/documents/the-doc-abc
- Type
- results_count¶
Optional. The maximum number of Chunks to return. The service may return fewer Chunks.
If unspecified, at most 10 Chunks will be returned. The maximum specified result count is 100.
- Type
- metadata_filters¶
Optional. Filter for Chunk metadata. Each MetadataFilter object should correspond to a unique key. Multiple MetadataFilter objects are joined by logical “AND”s.
Note: Document-level filtering is not supported for this request because a Document name is already specified.
Example query: (year >= 2020 OR year < 2010) AND (genre = drama OR genre = action)
MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = GREATER_EQUAL}, {int_value = 2010, operation = LESS}]}, {key = “chunk.custom_metadata.genre” conditions = [{string_value = “drama”, operation = EQUAL}, {string_value = “action”, operation = EQUAL}]}]
Example query for a numeric range of values: (year > 2015 AND year <= 2020)
MetadataFilter object list: metadata_filters = [ {key = “chunk.custom_metadata.year” conditions = [{int_value = 2015, operation = GREATER}]}, {key = “chunk.custom_metadata.year” conditions = [{int_value = 2020, operation = LESS_EQUAL}]}]
Note: “AND”s for the same key are only supported for numeric values. String values only support “OR”s for the same key.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]
- class google.ai.generativelanguage_v1beta.types.QueryDocumentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response from QueryDocument containing a list of relevant chunks.
- relevant_chunks¶
The returned relevant chunks.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.RelevantChunk]
- class google.ai.generativelanguage_v1beta.types.RealtimeInputConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configures the realtime input behavior in BidiGenerateContent.
- automatic_activity_detection¶
Optional. If not set, automatic activity detection is enabled by default. If automatic voice detection is disabled, the client must send activity signals.
- activity_handling¶
Optional. Defines what effect activity has.
This field is a member of oneof _activity_handling.
- turn_coverage¶
Optional. Defines which input is included in the user’s turn.
This field is a member of oneof _turn_coverage.
- class ActivityHandling(value)[source]¶
Bases:
proto.enums.Enum
The different ways of handling user activity.
- Values:
- ACTIVITY_HANDLING_UNSPECIFIED (0):
If unspecified, the default behavior is START_OF_ACTIVITY_INTERRUPTS.
- START_OF_ACTIVITY_INTERRUPTS (1):
Start of activity will interrupt the model’s response (also called “barge in”). The model’s current response will be cut off at the moment of the interruption. This is the default behavior.
- NO_INTERRUPTION (2):
The model’s response will not be interrupted.
- class AutomaticActivityDetection(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configures automatic detection of activity.
- disabled¶
Optional. If enabled (the default), detected voice and text input count as activity. If disabled, the client must send activity signals.
This field is a member of oneof _disabled.
- Type
- start_of_speech_sensitivity¶
Optional. Determines how likely speech is to be detected.
This field is a member of oneof _start_of_speech_sensitivity.
- prefix_padding_ms¶
Optional. The required duration of detected speech before start-of-speech is committed. The lower this value, the more sensitive the start-of-speech detection is and shorter speech can be recognized. However, this also increases the probability of false positives.
This field is a member of oneof _prefix_padding_ms.
- Type
- end_of_speech_sensitivity¶
Optional. Determines how likely it is that detected speech has ended.
This field is a member of oneof _end_of_speech_sensitivity.
- silence_duration_ms¶
Optional. The required duration of detected non-speech (e.g. silence) before end-of-speech is committed. The larger this value, the longer speech gaps can be without interrupting the user’s activity but this will increase the model’s latency.
This field is a member of oneof _silence_duration_ms.
- Type
- class EndSensitivity(value)[source]¶
Bases:
proto.enums.Enum
Determines how end of speech is detected.
- Values:
- END_SENSITIVITY_UNSPECIFIED (0):
The default is END_SENSITIVITY_HIGH.
- END_SENSITIVITY_HIGH (1):
Automatic detection ends speech more often.
- END_SENSITIVITY_LOW (2):
Automatic detection ends speech less often.
- class StartSensitivity(value)[source]¶
Bases:
proto.enums.Enum
Determines how start of speech is detected.
- Values:
- START_SENSITIVITY_UNSPECIFIED (0):
The default is START_SENSITIVITY_HIGH.
- START_SENSITIVITY_HIGH (1):
Automatic detection will detect the start of speech more often.
- START_SENSITIVITY_LOW (2):
Automatic detection will detect the start of speech less often.
- class TurnCoverage(value)[source]¶
Bases:
proto.enums.Enum
Options about which input is included in the user’s turn.
- Values:
- TURN_COVERAGE_UNSPECIFIED (0):
If unspecified, the default behavior is TURN_INCLUDES_ONLY_ACTIVITY.
- TURN_INCLUDES_ONLY_ACTIVITY (1):
The user’s turn only includes activity since the last turn, excluding inactivity (e.g. silence on the audio stream). This is the default behavior.
- TURN_INCLUDES_ALL_INPUT (2):
The user’s turn includes all realtime input since the last turn, including inactivity (e.g. silence on the audio stream).
- class google.ai.generativelanguage_v1beta.types.RelevantChunk(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The information for a chunk relevant to a query.
- chunk¶
Chunk associated with the query.
- document¶
Document associated with the chunk.
- class google.ai.generativelanguage_v1beta.types.RetrievalMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata related to retrieval in the grounding flow.
- google_search_dynamic_retrieval_score¶
Optional. Score indicating how likely information from Google Search could help answer the prompt. The score is in the range [0, 1], where 0 is the least likely and 1 is the most likely. This score is only populated when Google Search grounding and dynamic retrieval are enabled. It is compared to the threshold to determine whether to trigger Google Search.
- Type
- class google.ai.generativelanguage_v1beta.types.SafetyFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Safety feedback for an entire request.
This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.
- rating¶
Safety rating evaluated from content.
- setting¶
Safety settings applied to the request.
- class google.ai.generativelanguage_v1beta.types.SafetyRating(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Safety rating for a piece of content.
The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.
- category¶
Required. The category for this rating.
- probability¶
Required. The probability of harm for this content.
- class HarmProbability(value)[source]¶
Bases:
proto.enums.Enum
The probability that a piece of content is harmful.
The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.
- Values:
- HARM_PROBABILITY_UNSPECIFIED (0):
Probability is unspecified.
- NEGLIGIBLE (1):
Content has a negligible chance of being unsafe.
- LOW (2):
Content has a low chance of being unsafe.
- MEDIUM (3):
Content has a medium chance of being unsafe.
- HIGH (4):
Content has a high chance of being unsafe.
- class google.ai.generativelanguage_v1beta.types.SafetySetting(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Safety setting, affecting the safety-blocking behavior.
Passing a safety setting for a category changes the allowed probability that content is blocked.
- category¶
Required. The category for this setting.
- threshold¶
Required. Controls the probability threshold at which harm is blocked.
- class HarmBlockThreshold(value)[source]¶
Bases:
proto.enums.Enum
Block at and beyond a specified harm probability.
- Values:
- HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):
Threshold is unspecified.
- BLOCK_LOW_AND_ABOVE (1):
Content with NEGLIGIBLE will be allowed.
- BLOCK_MEDIUM_AND_ABOVE (2):
Content with NEGLIGIBLE and LOW will be allowed.
- BLOCK_ONLY_HIGH (3):
Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed.
- BLOCK_NONE (4):
All content will be allowed.
- OFF (5):
Turn off the safety filter.
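The “block at and beyond” semantics can be sketched by ranking the HarmProbability values. This is a hypothetical client-side evaluator; the actual blocking decision happens server-side:

```python
# A rating is blocked when its probability meets or exceeds the
# configured threshold. Rank 5 is a sentinel no probability reaches.
PROB_RANK = {"NEGLIGIBLE": 1, "LOW": 2, "MEDIUM": 3, "HIGH": 4}
THRESHOLD_MIN_BLOCKED = {
    "BLOCK_LOW_AND_ABOVE": 2,
    "BLOCK_MEDIUM_AND_ABOVE": 3,
    "BLOCK_ONLY_HIGH": 4,
    "BLOCK_NONE": 5,   # nothing is blocked
    "OFF": 5,          # filter disabled
}

def is_blocked(probability, threshold):
    return PROB_RANK[probability] >= THRESHOLD_MIN_BLOCKED[threshold]
```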
- class google.ai.generativelanguage_v1beta.types.Schema(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The Schema object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an OpenAPI 3.0 schema object.
- type_¶
Required. Data type.
- format_¶
Optional. The format of the data. Any value is allowed, but most do not trigger any special functionality.
- Type
- description¶
Optional. A brief description of the parameter. This could contain examples of use. Parameter description may be formatted as Markdown.
- Type
- enum¶
Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:[“EAST”, “NORTH”, “SOUTH”, “WEST”]}
- Type
MutableSequence[str]
- properties¶
Optional. Properties of Type.OBJECT.
- Type
MutableMapping[str, google.ai.generativelanguage_v1beta.types.Schema]
- minimum¶
Optional. Minimum value of Type.INTEGER and Type.NUMBER.
This field is a member of oneof _minimum.
- Type
- maximum¶
Optional. Maximum value of Type.INTEGER and Type.NUMBER.
This field is a member of oneof _maximum.
- Type
- pattern¶
Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
- Type
- example¶
Optional. Example of the object. Will only be populated when the object is the root.
- any_of¶
Optional. The value should be validated against any (one or more) of the subschemas in the list.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.Schema]
- property_ordering¶
Optional. The order of the properties. Not a standard field in the OpenAPI spec. Used to determine the order of the properties in the response.
- Type
MutableSequence[str]
- default¶
Optional. Default value of the field. Per JSON Schema, this field is intended for documentation generators and doesn’t affect validation. Thus it’s included here and ignored so that developers who send schemas with a default field don’t get unknown-field errors.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
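As a sketch, here is a Schema for a hypothetical weather-lookup parameter object, written as the equivalent JSON structure (field names follow the attributes documented above; the parameter itself is made up):

```python
# Hypothetical Schema for a weather-lookup function's parameters,
# expressed as the JSON equivalent of the proto fields above.
weather_schema = {
    "type": "OBJECT",
    "properties": {
        "city": {"type": "STRING", "description": "City name."},
        "unit": {
            "type": "STRING",
            "format": "enum",
            "enum": ["CELSIUS", "FAHRENHEIT"],
        },
        "days": {"type": "INTEGER", "minimum": 1, "maximum": 14},
    },
    # Not standard OpenAPI; controls property order in responses.
    "property_ordering": ["city", "unit", "days"],
}
```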
- class google.ai.generativelanguage_v1beta.types.SearchEntryPoint(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Google Search entry point.
- rendered_content¶
Optional. Web content snippet that can be embedded in a web page or an app webview.
- Type
- class google.ai.generativelanguage_v1beta.types.Segment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageSegment of the content.
- start_index¶
Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.
- Type
- end_index¶
Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.
- Type
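Because start_index is inclusive and end_index is exclusive, both measured in bytes from the start of the Part, a segment can be sliced out of a Part's text as follows (the offsets here are hypothetical; real segments are returned by the service):

```python
text = "Grounded answers cite sources."
data = text.encode("utf-8")

# Hypothetical segment: byte offsets into the Part, inclusive start, exclusive end.
segment = {"start_index": 0, "end_index": 8}

snippet = data[segment["start_index"]:segment["end_index"]].decode("utf-8")
print(snippet)  # Grounded
```

Note the slice is over the UTF-8 bytes, not Python characters; for non-ASCII text the two differ.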
- class google.ai.generativelanguage_v1beta.types.SemanticRetrieverConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageConfiguration for retrieving grounding content from a
CorpusorDocumentcreated using the Semantic Retriever API.- source¶
Required. Name of the resource for retrieval. Example:
corpora/123orcorpora/123/documents/abc.- Type
- query¶
Required. Query to use for matching
Chunks in the given resource by similarity.
- metadata_filters¶
Optional. Filters for selecting
Documents and/orChunks from the resource.- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.MetadataFilter]
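As an illustrative sketch (plain dicts in the JSON form; the corpus name and query text are assumptions), a retriever configuration for a GenerateAnswerRequest might look like:

```python
semantic_retriever_config = {
    # Required: the Corpus or Document resource to retrieve from.
    "source": "corpora/123",
    # Required: query content used to match Chunks by similarity.
    "query": {"parts": [{"text": "How does semantic retrieval work?"}]},
    # Optional: filters over Document/Chunk custom metadata.
    "metadata_filters": [],
}

print(semantic_retriever_config["source"])  # corpora/123
```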
- class google.ai.generativelanguage_v1beta.types.SessionResumptionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageSession resumption configuration.
This message is included in the session configuration as
BidiGenerateContentSetup.session_resumption. If configured, the server will sendSessionResumptionUpdatemessages.
- class google.ai.generativelanguage_v1beta.types.SessionResumptionUpdate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageUpdate of the session resumption state.
Only sent if
BidiGenerateContentSetup.session_resumptionwas set.- new_handle¶
New handle that represents a state that can be resumed. Empty if
resumable=false.- Type
- resumable¶
True if the current session can be resumed at this point.
Resumption is not possible at some points in the session, for example while the model is executing function calls or generating output. Resuming the session (using a previous session token) in such a state will result in some data loss. In these cases,
new_handlewill be empty andresumablewill be false.- Type
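Per the semantics above, a client should only save new_handle when resumable is true. A hedged sketch of that bookkeeping, with the update messages as plain dicts:

```python
def track_resumption(updates):
    """Keep the most recent handle from updates where resumption is possible."""
    latest = None
    for update in updates:
        if update.get("resumable") and update.get("new_handle"):
            latest = update["new_handle"]
    return latest

updates = [
    {"resumable": True, "new_handle": "h1"},
    {"resumable": False, "new_handle": ""},   # e.g. mid function call
    {"resumable": True, "new_handle": "h2"},
]
print(track_resumption(updates))  # h2
```

If the connection later drops, the saved handle is what the client would present to resume the session.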
- class google.ai.generativelanguage_v1beta.types.SpeakerVoiceConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe configuration for a single speaker in a multi speaker setup.
- voice_config¶
Required. The configuration for the voice to use.
- class google.ai.generativelanguage_v1beta.types.SpeechConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe speech generation config.
- voice_config¶
The configuration in case of single-voice output.
- multi_speaker_voice_config¶
Optional. The configuration for the multi-speaker setup. It is mutually exclusive with the voice_config field.
- language_code¶
Optional. Language code (in BCP 47 format, e.g. “en-US”) for speech synthesis.
Valid values are: de-DE, en-AU, en-GB, en-IN, en-US, es-US, fr-FR, hi-IN, pt-BR, ar-XA, es-ES, fr-CA, id-ID, it-IT, ja-JP, tr-TR, vi-VN, bn-IN, gu-IN, kn-IN, ml-IN, mr-IN, ta-IN, te-IN, nl-NL, ko-KR, cmn-CN, pl-PL, ru-RU, and th-TH.
- Type
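A sketch of a single-voice configuration in JSON form. The nested prebuilt_voice_config shape and the voice name "Kore" are assumptions for illustration; what the message above guarantees is that voice_config and multi_speaker_voice_config are mutually exclusive:

```python
speech_config = {
    "voice_config": {"prebuilt_voice_config": {"voice_name": "Kore"}},
    "language_code": "en-US",  # BCP 47 code from the supported list above
}

# voice_config and multi_speaker_voice_config are mutually exclusive,
# so a single-voice config never carries the multi-speaker field.
assert "multi_speaker_voice_config" not in speech_config
print(speech_config["language_code"])  # en-US
```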
- class google.ai.generativelanguage_v1beta.types.StringList(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageUser provided string values assigned to a single metadata key.
- class google.ai.generativelanguage_v1beta.types.TaskType(value)[source]¶
Bases:
proto.enums.EnumType of task for which the embedding will be used.
- Values:
- TASK_TYPE_UNSPECIFIED (0):
Unset value, which will default to one of the other enum values.
- RETRIEVAL_QUERY (1):
Specifies the given text is a query in a search/retrieval setting.
- RETRIEVAL_DOCUMENT (2):
Specifies the given text is a document from the corpus being searched.
- SEMANTIC_SIMILARITY (3):
Specifies the given text will be used for Semantic Textual Similarity (STS).
- CLASSIFICATION (4):
Specifies that the given text will be classified.
- CLUSTERING (5):
Specifies that the embeddings will be used for clustering.
- QUESTION_ANSWERING (6):
Specifies that the given text will be used for question answering.
- FACT_VERIFICATION (7):
Specifies that the given text will be used for fact verification.
- CODE_RETRIEVAL_QUERY (8):
Specifies that the given text will be used for code retrieval.
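The task type travels with an embedding request so the model can optimize the embedding for its downstream use. A sketch in JSON form (the model name and request shape are assumptions based on the embedding API, not part of this enum):

```python
embed_request = {
    "model": "models/text-embedding-004",
    "content": {"parts": [{"text": "How do I reset my password?"}]},
    # Query-side embedding for a search/retrieval setting:
    "task_type": "RETRIEVAL_QUERY",
}

# The corpus documents being searched would instead use:
doc_task_type = "RETRIEVAL_DOCUMENT"
print(embed_request["task_type"], doc_task_type)
```

Pairing RETRIEVAL_QUERY on the query side with RETRIEVAL_DOCUMENT on the corpus side keeps the two embedding spaces aligned for retrieval.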
- class google.ai.generativelanguage_v1beta.types.TextCompletion(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageOutput text returned from a model.
- safety_ratings¶
Ratings for the safety of a response.
There is at most one rating per category.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.SafetyRating]
- class google.ai.generativelanguage_v1beta.types.TextPrompt(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageText given to the model as a prompt.
The Model will use this TextPrompt to Generate a text completion.
- class google.ai.generativelanguage_v1beta.types.ThinkingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageConfig for thinking features.
- class google.ai.generativelanguage_v1beta.types.Tool(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageTool details that the model may use to generate response.
A
Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model.
- function_declarations¶
Optional. A list of
FunctionDeclarations available to the model that can be used for function calling. The model or system does not execute the function. Instead, the defined function may be returned as a [FunctionCall][google.ai.generativelanguage.v1beta.Part.function_call] with arguments to the client side for execution. The model may decide to call a subset of these functions by populating [FunctionCall][google.ai.generativelanguage.v1beta.Part.function_call] in the response. The next conversation turn may contain a [FunctionResponse][google.ai.generativelanguage.v1beta.Part.function_response] with the [Content.role][google.ai.generativelanguage.v1beta.Content.role] set to "function", providing generation context for the next model turn.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.FunctionDeclaration]
- google_search_retrieval¶
Optional. Retrieval tool that is powered by Google search.
- code_execution¶
Optional. Enables the model to execute code as part of generation.
- google_search¶
Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
- computer_use¶
Optional. Tool to support the model interacting directly with the computer. If enabled, it automatically populates computer-use specific Function Declarations.
- url_context¶
Optional. Tool to support URL context retrieval.
- class ComputerUse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageComputer Use tool type.
- environment¶
Required. The environment being operated.
- excluded_predefined_functions¶
Optional. By default, predefined functions are included in the final model call. Some of them can be explicitly excluded from being automatically included. This can serve two purposes:
- Using a more restricted / different action space.
- Improving the definitions / instructions of predefined functions.
- Type
MutableSequence[str]
- class Environment(value)[source]¶
Bases:
proto.enums.EnumRepresents the environment being operated, such as a web browser.
- Values:
- ENVIRONMENT_UNSPECIFIED (0):
Defaults to browser.
- ENVIRONMENT_BROWSER (1):
Operates in a web browser.
- class GoogleSearch(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageGoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
- time_range_filter¶
Optional. Filter search results to a specific time range. If customers set a start time, they must set an end time (and vice versa).
- Type
google.type.interval_pb2.Interval
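Putting the pieces together, a tools payload declaring one callable function in JSON form. The function name, description, and parameter schema are illustrative assumptions, not part of the API:

```python
tools = [
    {
        "function_declarations": [
            {
                "name": "get_weather",  # hypothetical client-side function
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "OBJECT",
                    "properties": {
                        "city": {"type": "STRING"},
                    },
                    "required": ["city"],
                },
            }
        ]
    }
]

# The model may respond with a FunctionCall part naming get_weather;
# the client executes it and replies with a FunctionResponse part.
print(tools[0]["function_declarations"][0]["name"])  # get_weather
```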
- class google.ai.generativelanguage_v1beta.types.ToolConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe Tool configuration containing parameters for specifying
Tooluse in the request.- function_calling_config¶
Optional. Function calling config.
- class google.ai.generativelanguage_v1beta.types.TransferOwnershipRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to transfer the ownership of the tuned model.
- name¶
Required. The resource name of the tuned model to transfer ownership of.
Format:
tunedModels/my-model-id- Type
- class google.ai.generativelanguage_v1beta.types.TransferOwnershipResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageResponse from
TransferOwnership.
- class google.ai.generativelanguage_v1beta.types.TunedModel(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA fine-tuned model created using ModelService.CreateTunedModel.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- tuned_model_source¶
Optional. TunedModel to use as the starting point for training the new model.
This field is a member of oneof
source_model.
- base_model¶
Immutable. The name of the
Modelto tune. Example:models/gemini-1.5-flash-001This field is a member of oneof
source_model.- Type
- name¶
Output only. The tuned model name. A unique name will be generated on create. Example:
tunedModels/az2mb0bpw6i. If display_name is set on create, the id portion of the name will be set by concatenating the words of the display_name with hyphens and adding a random portion for uniqueness. Example:
display_name =
Sentence Translatorname =
tunedModels/sentence-translator-u3b7m
- Type
- display_name¶
Optional. The name to display for this model in user interfaces. The display name must be up to 40 characters including spaces.
- Type
- temperature¶
Optional. Controls the randomness of the output.
Values can range over
[0.0, 1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model. If unset, this value defaults to the one used by the base model when the model was created.
This field is a member of oneof
_temperature.- Type
- top_p¶
Optional. For Nucleus sampling.
Nucleus sampling considers the smallest set of tokens whose probability sum is at least
top_p. If unset, this value defaults to the one used by the base model when the model was created.
This field is a member of oneof
_top_p.- Type
- top_k¶
Optional. For Top-k sampling.
Top-k sampling considers the set of
top_k most probable tokens. If unset, this value defaults to the one used by the base model when the model was created.
This field is a member of oneof
_top_k.- Type
- state¶
Output only. The state of the tuned model.
- create_time¶
Output only. The timestamp when this model was created.
- update_time¶
Output only. The timestamp when this model was updated.
- tuning_task¶
Required. The tuning task that creates the tuned model.
- reader_project_numbers¶
Optional. List of project numbers that have read access to the tuned model.
- Type
MutableSequence[int]
- class State(value)[source]¶
Bases:
proto.enums.EnumThe state of the tuned model.
- Values:
- STATE_UNSPECIFIED (0):
The default value. This value is unused.
- CREATING (1):
The model is being created.
- ACTIVE (2):
The model is ready to be used.
- FAILED (3):
The model failed to be created.
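The naming rule described for the name field above (display_name words joined with hyphens plus a random portion for uniqueness) can be sketched as follows; derive_model_id is a hypothetical helper written for illustration, not part of the library:

```python
import random
import re
import string

def derive_model_id(display_name):
    """Sketch of the documented rule: hyphenate the display_name's words
    and append a random portion for uniqueness."""
    slug = "-".join(re.findall(r"[A-Za-z0-9]+", display_name)).lower()
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=7))
    return f"tunedModels/{slug}-{suffix}"

print(derive_model_id("Sentence Translator"))
# e.g. tunedModels/sentence-translator-u3b7m (suffix varies per call)
```

The actual id generation happens server-side on create; this only mirrors the documented shape of the result.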
- class google.ai.generativelanguage_v1beta.types.TunedModelSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageTuned model as a source for training a new model.
- tuned_model¶
Immutable. The name of the
TunedModelto use as the starting point for training the new model. Example:tunedModels/my-tuned-model- Type
- class google.ai.generativelanguage_v1beta.types.TuningExample(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA single example for tuning.
- class google.ai.generativelanguage_v1beta.types.TuningExamples(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageA set of tuning examples. Can be training or validation data.
- examples¶
The examples. Example input can be for text or discuss, but all examples in a set must be of the same type.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TuningExample]
- class google.ai.generativelanguage_v1beta.types.TuningSnapshot(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRecord for a single tuning step.
- compute_time¶
Output only. The timestamp when this metric was computed.
- class google.ai.generativelanguage_v1beta.types.TuningTask(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageTuning tasks that create tuned models.
- start_time¶
Output only. The timestamp when tuning this model started.
- complete_time¶
Output only. The timestamp when tuning this model completed.
- snapshots¶
Output only. Metrics collected during tuning.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.TuningSnapshot]
- training_data¶
Required. Input only. Immutable. The model training data.
- hyperparameters¶
Immutable. Hyperparameters controlling the tuning process. If not provided, default values will be used.
- class google.ai.generativelanguage_v1beta.types.Type(value)[source]¶
Bases:
proto.enums.EnumType contains the list of OpenAPI data types as defined by https://spec.openapis.org/oas/v3.0.3#data-types
- Values:
- TYPE_UNSPECIFIED (0):
Not specified, should not be used.
- STRING (1):
String type.
- NUMBER (2):
Number type.
- INTEGER (3):
Integer type.
- BOOLEAN (4):
Boolean type.
- ARRAY (5):
Array type.
- OBJECT (6):
Object type.
- NULL (7):
Null type.
- class google.ai.generativelanguage_v1beta.types.UpdateCachedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update CachedContent.
- cached_content¶
Required. The content cache entry to update
- update_mask¶
The list of fields to update.
- class google.ai.generativelanguage_v1beta.types.UpdateChunkRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update a
Chunk.- chunk¶
Required. The
Chunkto update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating
custom_metadataanddata.
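Since only custom_metadata and data may currently be updated, an update request in JSON form might look like this (the chunk name and the string_value data shape are assumptions for illustration):

```python
update_chunk_request = {
    "chunk": {
        "name": "corpora/my-corpus-123/documents/the-doc-abc/chunks/chunk-1",
        "data": {"string_value": "Revised chunk text."},
    },
    # update_mask limits the write to the listed fields;
    # only custom_metadata and data are currently updatable.
    "update_mask": "data",
}

print(update_chunk_request["update_mask"])  # data
```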
- class google.ai.generativelanguage_v1beta.types.UpdateCorpusRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update a
Corpus.- corpus¶
Required. The
Corpusto update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating
display_name.
- class google.ai.generativelanguage_v1beta.types.UpdateDocumentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update a
Document.- document¶
Required. The
Documentto update.
- update_mask¶
Required. The list of fields to update. Currently, this only supports updating
display_nameandcustom_metadata.
- class google.ai.generativelanguage_v1beta.types.UpdatePermissionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update the
Permission.- permission¶
Required. The permission to update.
The permission’s
namefield is used to identify the permission to update.
- update_mask¶
Required. The list of fields to update. Accepted ones:
role (
Permission.rolefield)
- class google.ai.generativelanguage_v1beta.types.UpdateTunedModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRequest to update a TunedModel.
- tuned_model¶
Required. The tuned model to update.
- update_mask¶
Optional. The list of fields to update.
- class google.ai.generativelanguage_v1beta.types.UrlContext(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageTool to support URL context retrieval.
- class google.ai.generativelanguage_v1beta.types.UrlContextMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata related to url context retrieval tool.
- url_metadata¶
List of url context.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.UrlMetadata]
- class google.ai.generativelanguage_v1beta.types.UrlMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageContext of a single URL retrieval.
- url_retrieval_status¶
Status of the URL retrieval.
- class UrlRetrievalStatus(value)[source]¶
Bases:
proto.enums.EnumStatus of the url retrieval.
- Values:
- URL_RETRIEVAL_STATUS_UNSPECIFIED (0):
Default value. This value is unused.
- URL_RETRIEVAL_STATUS_SUCCESS (1):
URL retrieval succeeded.
- URL_RETRIEVAL_STATUS_ERROR (2):
URL retrieval failed due to an error.
- URL_RETRIEVAL_STATUS_PAYWALL (3):
URL retrieval failed because the content is behind a paywall.
- URL_RETRIEVAL_STATUS_UNSAFE (4):
URL retrieval failed because the content is unsafe.
- class google.ai.generativelanguage_v1beta.types.UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageUsage metadata about response(s).
- prompt_token_count¶
Output only. Number of tokens in the prompt. When
cached_content is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.- Type
- cached_content_token_count¶
Number of tokens in the cached part of the prompt (the cached content).
- Type
- response_token_count¶
Output only. Total number of tokens across all the generated response candidates.
- Type
- total_token_count¶
Output only. Total token count for the generation request (prompt + response candidates).
- Type
- prompt_tokens_details¶
Output only. List of modalities that were processed in the request input.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
- cache_tokens_details¶
Output only. List of modalities of the cached content in the request input.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
- response_tokens_details¶
Output only. List of modalities that were returned in the response.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
- tool_use_prompt_tokens_details¶
Output only. List of modalities that were processed for tool-use request inputs.
- Type
MutableSequence[google.ai.generativelanguage_v1beta.types.ModalityTokenCount]
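Per the field descriptions above, cached tokens are part of the prompt count (not added on top of it), and the total covers prompt plus response candidates. A sketch with made-up numbers:

```python
usage = {  # hypothetical values for illustration
    "prompt_token_count": 120,        # includes the cached content
    "cached_content_token_count": 80,
    "response_token_count": 45,
    "total_token_count": 165,
}

# Tokens actually billed/processed beyond the cache hit:
non_cached_prompt = usage["prompt_token_count"] - usage["cached_content_token_count"]
print(non_cached_prompt)  # 40

# total = prompt + response candidates, per the field docs above.
assert usage["total_token_count"] == (
    usage["prompt_token_count"] + usage["response_token_count"]
)
```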
- class google.ai.generativelanguage_v1beta.types.Video(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageRepresentation of a video.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- class google.ai.generativelanguage_v1beta.types.VideoFileMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata for a video
File.- video_duration¶
Duration of the video.
- class google.ai.generativelanguage_v1beta.types.VideoMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageMetadata describes the input video content.
- start_offset¶
Optional. The start offset of the video.
- end_offset¶
Optional. The end offset of the video.
- class google.ai.generativelanguage_v1beta.types.VoiceConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.MessageThe configuration for the voice to use.