
Types for Google AI Generative Language v1 API

class google.ai.generativelanguage_v1.types.BatchEmbedContentsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Batch request to get embeddings from the model for a list of prompts.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

requests

Required. Embed requests for the batch. The model in each of these requests must match the model specified in BatchEmbedContentsRequest.model.

Type

MutableSequence[google.ai.generativelanguage_v1.types.EmbedContentRequest]

class google.ai.generativelanguage_v1.types.BatchEmbedContentsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response to a BatchEmbedContentsRequest.

embeddings

Output only. The embeddings for each request, in the same order as provided in the batch request.

Type

MutableSequence[google.ai.generativelanguage_v1.types.ContentEmbedding]
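The sketch below shows one way to issue a batched embedding call with these types; the model name and the API-key client options are illustrative placeholders, not part of this reference:

    from google.ai import generativelanguage_v1 as glm

    # Illustrative auth: an API key passed via client_options; adjust for your setup.
    client = glm.GenerativeServiceClient(client_options={"api_key": "YOUR_API_KEY"})

    model = "models/text-embedding-004"  # placeholder model name
    request = glm.BatchEmbedContentsRequest(
        model=model,
        requests=[
            glm.EmbedContentRequest(
                model=model,  # must match BatchEmbedContentsRequest.model
                content=glm.Content(parts=[glm.Part(text=text)]),
            )
            for text in ["first prompt", "second prompt"]
        ],
    )

    response = client.batch_embed_contents(request=request)
    # Embeddings are returned in the same order as the requests.
    for embedding in response.embeddings:
        print(len(embedding.values))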

class google.ai.generativelanguage_v1.types.Blob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Raw media bytes.

Text should not be sent as raw bytes, use the ‘text’ field.

mime_type

The IANA standard MIME type of the source data. Examples:

  • image/png

  • image/jpeg

If an unsupported MIME type is provided, an error will be returned. For a complete list of supported types, see Supported file formats.

Type

str

data

Raw bytes for media formats.

Type

bytes
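A minimal sketch of constructing a Blob for inline media, assuming a local PNG file (the filename is hypothetical):

    from google.ai import generativelanguage_v1 as glm

    # Hypothetical local file; any supported image format works the same way.
    with open("photo.png", "rb") as f:
        image_bytes = f.read()

    blob = glm.Blob(mime_type="image/png", data=image_bytes)

    # A Blob travels inside a Part via its inline_data field.
    part = glm.Part(inline_data=blob)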

class google.ai.generativelanguage_v1.types.Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response candidate generated from the model.

index

Output only. Index of the candidate in the list of response candidates.

This field is a member of oneof _index.

Type

int

content

Output only. Generated content returned from the model.

Type

google.ai.generativelanguage_v1.types.Content

finish_reason

Optional. Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.

Type

google.ai.generativelanguage_v1.types.Candidate.FinishReason

safety_ratings

List of ratings for the safety of a response candidate. There is at most one rating per category.

Type

MutableSequence[google.ai.generativelanguage_v1.types.SafetyRating]

citation_metadata

Output only. Citation information for model-generated candidate.

This field may be populated with recitation information for any text included in the content. These are passages that are “recited” from copyrighted material in the foundational LLM’s training data.

Type

google.ai.generativelanguage_v1.types.CitationMetadata

token_count

Output only. Token count for this candidate.

Type

int

avg_logprobs

Output only. Average log probability score of the candidate.

Type

float

logprobs_result

Output only. Log-likelihood scores for the response tokens and top tokens.

Type

google.ai.generativelanguage_v1.types.LogprobsResult

class FinishReason(value)[source]

Bases: proto.enums.Enum

Defines the reason why the model stopped generating tokens.

Values:
FINISH_REASON_UNSPECIFIED (0):

Default value. This value is unused.

STOP (1):

Natural stop point of the model or provided stop sequence.

MAX_TOKENS (2):

The maximum number of tokens as specified in the request was reached.

SAFETY (3):

The response candidate content was flagged for safety reasons.

RECITATION (4):

The response candidate content was flagged for recitation reasons.

LANGUAGE (6):

The response candidate content was flagged for using an unsupported language.

OTHER (5):

Unknown reason.

BLOCKLIST (7):

Token generation stopped because the content contains forbidden terms.

PROHIBITED_CONTENT (8):

Token generation stopped for potentially containing prohibited content.

SPII (9):

Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).

MALFORMED_FUNCTION_CALL (10):

The function call generated by the model is invalid.
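A sketch of inspecting a candidate’s finish_reason; the helper name explain_stop is illustrative:

    from google.ai import generativelanguage_v1 as glm

    FinishReason = glm.Candidate.FinishReason

    def explain_stop(candidate: glm.Candidate) -> str:
        """Map a candidate's finish_reason to a short note (illustrative helper)."""
        if candidate.finish_reason == FinishReason.STOP:
            return "completed normally"
        if candidate.finish_reason == FinishReason.MAX_TOKENS:
            return "hit the max_output_tokens limit"
        if candidate.finish_reason in (FinishReason.SAFETY, FinishReason.RECITATION):
            return "filtered; inspect safety_ratings / citation_metadata"
        return f"stopped: {candidate.finish_reason.name}"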

class google.ai.generativelanguage_v1.types.CitationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A collection of source attributions for a piece of content.

citation_sources

Citations to sources for a specific response.

Type

MutableSequence[google.ai.generativelanguage_v1.types.CitationSource]

class google.ai.generativelanguage_v1.types.CitationSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A citation to a source for a portion of a specific response.

start_index

Optional. Start of segment of the response that is attributed to this source.

Index indicates the start of the segment, measured in bytes.

This field is a member of oneof _start_index.

Type

int

end_index

Optional. End of the attributed segment, exclusive.

This field is a member of oneof _end_index.

Type

int

uri

Optional. URI that is attributed as a source for a portion of the text.

This field is a member of oneof _uri.

Type

str

license_

Optional. License for the GitHub project that is attributed as a source for segment.

License info is required for code citations.

This field is a member of oneof _license.

Type

str

class google.ai.generativelanguage_v1.types.Content(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The base structured datatype containing multi-part content of a message.

A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.

parts

Ordered Parts that constitute a single message. Parts may have different MIME types.

Type

MutableSequence[google.ai.generativelanguage_v1.types.Part]

role

Optional. The producer of the content. Must be either ‘user’ or ‘model’. Useful to set for multi-turn conversations, otherwise can be left blank or unset.

Type

str
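As a sketch, a multi-turn conversation history built from these types might look like the following (the text is placeholder content):

    from google.ai import generativelanguage_v1 as glm

    # Roles alternate between 'user' and 'model' in multi-turn conversations;
    # role may be left unset for single-turn prompts.
    history = [
        glm.Content(role="user", parts=[glm.Part(text="What is a proto oneof?")]),
        glm.Content(role="model", parts=[glm.Part(text="A set of mutually exclusive fields.")]),
        glm.Content(role="user", parts=[glm.Part(text="Show an example.")]),
    ]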

class google.ai.generativelanguage_v1.types.ContentEmbedding(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A list of floats representing an embedding.

values

The embedding values.

Type

MutableSequence[float]

class google.ai.generativelanguage_v1.types.CountTokensRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so each model may return a different token_count.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

contents

Optional. The input given to the model as a prompt. This field is ignored when generate_content_request is set.

Type

MutableSequence[google.ai.generativelanguage_v1.types.Content]

generate_content_request

Optional. The overall input given to the Model. This includes the prompt as well as other model-steering information, such as system instructions and/or function declarations for function calling. model/contents and generate_content_request are mutually exclusive: you can send either model + contents or a generate_content_request, but never both.

Type

google.ai.generativelanguage_v1.types.GenerateContentRequest

class google.ai.generativelanguage_v1.types.CountTokensResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response from CountTokens.

It returns the model’s token_count for the prompt.

total_tokens

The number of tokens that the Model tokenizes the prompt into. Always non-negative.

Type

int
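A minimal token-counting sketch; the model name and API key are placeholders:

    from google.ai import generativelanguage_v1 as glm

    client = glm.GenerativeServiceClient(client_options={"api_key": "YOUR_API_KEY"})

    response = client.count_tokens(
        request=glm.CountTokensRequest(
            model="models/gemini-1.5-flash",  # placeholder model name
            contents=[glm.Content(parts=[glm.Part(text="How many tokens is this?")])],
        )
    )
    print(response.total_tokens)  # always non-negative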

class google.ai.generativelanguage_v1.types.EmbedContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request containing the Content for the model to embed.

model

Required. The model’s resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

content

Required. The content to embed. Only the parts.text fields will be counted.

Type

google.ai.generativelanguage_v1.types.Content

task_type

Optional. The task type for which the embeddings will be used. Can only be set for models/embedding-001.

This field is a member of oneof _task_type.

Type

google.ai.generativelanguage_v1.types.TaskType

title

Optional. An optional title for the text. Only applicable when TaskType is RETRIEVAL_DOCUMENT.

Note: Specifying a title for RETRIEVAL_DOCUMENT provides better quality embeddings for retrieval.

This field is a member of oneof _title.

Type

str

output_dimensionality

Optional. A reduced dimension for the output embedding. If set, excess values in the output embedding are truncated from the end. Supported by newer models since 2024 only; you cannot set this value when using the earlier model (models/embedding-001).

This field is a member of oneof _output_dimensionality.

Type

int

class google.ai.generativelanguage_v1.types.EmbedContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The response to an EmbedContentRequest.

embedding

Output only. The embedding generated from the input content.

Type

google.ai.generativelanguage_v1.types.ContentEmbedding
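A single-content embedding sketch using these request/response types (model name and API key are placeholders):

    from google.ai import generativelanguage_v1 as glm

    client = glm.GenerativeServiceClient(client_options={"api_key": "YOUR_API_KEY"})

    response = client.embed_content(
        request=glm.EmbedContentRequest(
            model="models/text-embedding-004",  # placeholder model name
            content=glm.Content(parts=[glm.Part(text="hello world")]),
            task_type=glm.TaskType.SEMANTIC_SIMILARITY,
        )
    )
    vector = list(response.embedding.values)  # MutableSequence[float]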

class google.ai.generativelanguage_v1.types.GenerateContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to generate a completion from the model.

model

Required. The name of the Model to use for generating the completion.

Format: name=models/{model}.

Type

str

contents

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries like chat, this is a repeated field that contains the conversation history and the latest request.

Type

MutableSequence[google.ai.generativelanguage_v1.types.Content]

safety_settings

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings.

This list overrides the default settings for each SafetyCategory specified in safety_settings. If there is no SafetySetting for a given SafetyCategory in the list, the API will use the default safety setting for that category.

Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, and HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings, and to the Safety guidance to learn how to incorporate safety considerations into your AI applications.

Type

MutableSequence[google.ai.generativelanguage_v1.types.SafetySetting]

generation_config

Optional. Configuration options for model generation and outputs.

This field is a member of oneof _generation_config.

Type

google.ai.generativelanguage_v1.types.GenerationConfig
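A sketch assembling a full GenerateContentRequest with optional safety settings and a generation config (model name, prompt, and API key are placeholders):

    from google.ai import generativelanguage_v1 as glm

    request = glm.GenerateContentRequest(
        model="models/gemini-1.5-flash",  # placeholder model name
        contents=[glm.Content(role="user", parts=[glm.Part(text="Write a haiku.")])],
        safety_settings=[
            glm.SafetySetting(
                category=glm.HarmCategory.HARM_CATEGORY_HARASSMENT,
                threshold=glm.SafetySetting.HarmBlockThreshold.BLOCK_ONLY_HIGH,
            )
        ],
        generation_config=glm.GenerationConfig(temperature=0.7, max_output_tokens=128),
    )

    client = glm.GenerativeServiceClient(client_options={"api_key": "YOUR_API_KEY"})
    response = client.generate_content(request=request)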

class google.ai.generativelanguage_v1.types.GenerateContentResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from the model supporting multiple candidate responses.

Safety ratings and content filtering are reported for both the prompt in GenerateContentResponse.prompt_feedback and for each candidate in finish_reason and in safety_ratings. The API:

  • Returns either all requested candidates or none of them

  • Returns no candidates at all only if there was something wrong with the prompt (check prompt_feedback)

  • Reports feedback on each candidate in finish_reason and safety_ratings.

candidates

Candidate responses from the model.

Type

MutableSequence[google.ai.generativelanguage_v1.types.Candidate]

prompt_feedback

Returns the prompt’s feedback related to the content filters.

Type

google.ai.generativelanguage_v1.types.GenerateContentResponse.PromptFeedback

usage_metadata

Output only. Metadata on the generation requests’ token usage.

Type

google.ai.generativelanguage_v1.types.GenerateContentResponse.UsageMetadata

class PromptFeedback(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A set of feedback metadata for the prompt specified in GenerateContentRequest.contents.

block_reason

Optional. If set, the prompt was blocked and no candidates are returned. Rephrase the prompt.

Type

google.ai.generativelanguage_v1.types.GenerateContentResponse.PromptFeedback.BlockReason

safety_ratings

Ratings for safety of the prompt. There is at most one rating per category.

Type

MutableSequence[google.ai.generativelanguage_v1.types.SafetyRating]

class BlockReason(value)[source]

Bases: proto.enums.Enum

Specifies the reason why the prompt was blocked.

Values:
BLOCK_REASON_UNSPECIFIED (0):

Default value. This value is unused.

SAFETY (1):

Prompt was blocked due to safety reasons. Inspect safety_ratings to understand which safety category blocked it.

OTHER (2):

Prompt was blocked due to unknown reasons.

BLOCKLIST (3):

Prompt was blocked because it contains terms from the terminology blocklist.

PROHIBITED_CONTENT (4):

Prompt was blocked due to prohibited content.

class UsageMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata on the generation request’s token usage.

prompt_token_count

Number of tokens in the prompt. When cached_content is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.

Type

int

candidates_token_count

Total number of tokens across all the generated response candidates.

Type

int

total_token_count

Total token count for the generation request (prompt + response candidates).

Type

int
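A small sketch that reads the usage counters off a response (the helper name log_usage is illustrative):

    from google.ai import generativelanguage_v1 as glm

    def log_usage(response: glm.GenerateContentResponse) -> None:
        usage = response.usage_metadata
        # total_token_count = prompt tokens + tokens across all response candidates.
        print(
            f"prompt={usage.prompt_token_count} "
            f"candidates={usage.candidates_token_count} "
            f"total={usage.total_token_count}"
        )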

class google.ai.generativelanguage_v1.types.GenerationConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Configuration options for model generation and outputs. Not all parameters are configurable for every model.

candidate_count

Optional. Number of generated responses to return. Currently, this value can only be set to 1. If unset, this will default to 1.

This field is a member of oneof _candidate_count.

Type

int

stop_sequences

Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.

Type

MutableSequence[str]

max_output_tokens

Optional. The maximum number of tokens to include in a response candidate.

Note: The default value varies by model, see the Model.output_token_limit attribute of the Model returned from the getModel function.

This field is a member of oneof _max_output_tokens.

Type

int

temperature

Optional. Controls the randomness of the output.

Note: The default value varies by model, see the Model.temperature attribute of the Model returned from the getModel function.

Values can range from [0.0, 2.0].

This field is a member of oneof _temperature.

Type

float

top_p

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and Top-p (nucleus) sampling.

Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits the number of tokens based on the cumulative probability.

Note: The default value varies by Model and is specified by the Model.top_p attribute returned from the getModel function. An empty top_k attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow setting top_k on requests.

This field is a member of oneof _top_p.

Type

float

top_k

Optional. The maximum number of tokens to consider when sampling.

Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of top_k most probable tokens. Models running with nucleus sampling don’t allow top_k setting.

Note: The default value varies by Model and is specified by the Model.top_k attribute returned from the getModel function. An empty top_k attribute indicates that the model doesn’t apply top-k sampling and doesn’t allow setting top_k on requests.

This field is a member of oneof _top_k.

Type

int

presence_penalty

Optional. Presence penalty applied to the next token’s logprobs if the token has already been seen in the response.

This penalty is binary on/off and not dependent on the number of times the token is used (after the first). Use [frequency_penalty][google.ai.generativelanguage.v1.GenerationConfig.frequency_penalty] for a penalty that increases with each use.

A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary.

A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary.

This field is a member of oneof _presence_penalty.

Type

float

frequency_penalty

Optional. Frequency penalty applied to the next token’s logprobs, multiplied by the number of times each token has been seen in the response so far.

A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times each token has been used: the more a token is used, the more difficult it is for the model to use that token again, increasing the vocabulary of responses.

Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the [max_output_tokens][google.ai.generativelanguage.v1.GenerationConfig.max_output_tokens] limit: “…the the the the the…”.

This field is a member of oneof _frequency_penalty.

Type

float

response_logprobs

Optional. If true, export the logprobs results in the response.

This field is a member of oneof _response_logprobs.

Type

bool

logprobs

Optional. Only valid if [response_logprobs=True][google.ai.generativelanguage.v1.GenerationConfig.response_logprobs]. This sets the number of top logprobs to return at each decoding step in the [Candidate.logprobs_result][google.ai.generativelanguage.v1.Candidate.logprobs_result].

This field is a member of oneof _logprobs.

Type

int
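A sketch of a fully populated GenerationConfig; all values are illustrative, and not every parameter is supported by every model:

    from google.ai import generativelanguage_v1 as glm

    config = glm.GenerationConfig(
        candidate_count=1,        # currently the only supported value
        temperature=0.2,          # lower values give less random output
        top_p=0.95,
        max_output_tokens=256,
        stop_sequences=["\n\n"],  # up to 5; the matched sequence is not returned
        response_logprobs=True,   # also populate Candidate.logprobs_result
        logprobs=3,               # top logprobs to return per decoding step
    )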

class google.ai.generativelanguage_v1.types.GetModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for getting information about a specific Model.

name

Required. The resource name of the model.

This name should match a model name returned by the ListModels method.

Format: models/{model}

Type

str

class google.ai.generativelanguage_v1.types.HarmCategory(value)[source]

Bases: proto.enums.Enum

The category of a rating.

These categories cover various kinds of harms that developers may wish to adjust.

Values:
HARM_CATEGORY_UNSPECIFIED (0):

Category is unspecified.

HARM_CATEGORY_DEROGATORY (1):

PaLM - Negative or harmful comments targeting identity and/or protected attribute.

HARM_CATEGORY_TOXICITY (2):

PaLM - Content that is rude, disrespectful, or profane.

HARM_CATEGORY_VIOLENCE (3):

PaLM - Describes scenarios depicting violence against an individual or group, or general descriptions of gore.

HARM_CATEGORY_SEXUAL (4):

PaLM - Contains references to sexual acts or other lewd content.

HARM_CATEGORY_MEDICAL (5):

PaLM - Promotes unchecked medical advice.

HARM_CATEGORY_DANGEROUS (6):

PaLM - Dangerous content that promotes, facilitates, or encourages harmful acts.

HARM_CATEGORY_HARASSMENT (7):

Gemini - Harassment content.

HARM_CATEGORY_HATE_SPEECH (8):

Gemini - Hate speech and content.

HARM_CATEGORY_SEXUALLY_EXPLICIT (9):

Gemini - Sexually explicit content.

HARM_CATEGORY_DANGEROUS_CONTENT (10):

Gemini - Dangerous content.

HARM_CATEGORY_CIVIC_INTEGRITY (11):

Gemini - Content that may be used to harm civic integrity.

class google.ai.generativelanguage_v1.types.ListModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request for listing all Models.

page_size

The maximum number of Models to return (per page).

If unspecified, 50 models will be returned per page. This method returns at most 1000 models per page, even if you pass a larger page_size.

Type

int

page_token

A page token, received from a previous ListModels call.

Provide the page_token returned by one request as an argument to the next request to retrieve the next page.

When paginating, all other parameters provided to ListModels must match the call that provided the page token.

Type

str

class google.ai.generativelanguage_v1.types.ListModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Response from ListModels containing a paginated list of Models.

models

The returned Models.

Type

MutableSequence[google.ai.generativelanguage_v1.types.Model]

next_page_token

A token, which can be sent as page_token to retrieve the next page.

If this field is omitted, there are no more pages.

Type

str
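A paging sketch; the client’s returned pager follows next_page_token automatically, so explicit page_token handling is usually unnecessary (the API key is a placeholder):

    from google.ai import generativelanguage_v1 as glm

    client = glm.ModelServiceClient(client_options={"api_key": "YOUR_API_KEY"})

    # Iterating the pager transparently fetches subsequent pages.
    for model in client.list_models(request=glm.ListModelsRequest(page_size=50)):
        print(model.name)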

class google.ai.generativelanguage_v1.types.LogprobsResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Logprobs result.

top_candidates

Length = total number of decoding steps.

Type

MutableSequence[google.ai.generativelanguage_v1.types.LogprobsResult.TopCandidates]

chosen_candidates

Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.

Type

MutableSequence[google.ai.generativelanguage_v1.types.LogprobsResult.Candidate]

class Candidate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Candidate for the logprobs token and score.

token

The candidate’s token string value.

This field is a member of oneof _token.

Type

str

token_id

The candidate’s token id value.

This field is a member of oneof _token_id.

Type

int

log_probability

The candidate’s log probability.

This field is a member of oneof _log_probability.

Type

float

class TopCandidates(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Candidates with top log probabilities at each decoding step.

candidates

Sorted by log probability in descending order.

Type

MutableSequence[google.ai.generativelanguage_v1.types.LogprobsResult.Candidate]

class google.ai.generativelanguage_v1.types.Model(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Information about a Generative Language Model.

name

Required. The resource name of the Model. Refer to Model variants for all allowed values.

Format: models/{model} with a {model} naming convention of:

  • “{base_model_id}-{version}”

Examples:

  • models/gemini-1.5-flash-001

Type

str

base_model_id

Required. The name of the base model; pass this to the generation request.

Examples:

  • gemini-1.5-flash

Type

str

version

Required. The version number of the model.

This represents the major version (1.0 or 1.5).

Type

str

display_name

The human-readable name of the model. E.g. “Gemini 1.5 Flash”. The name can be up to 128 characters long and can consist of any UTF-8 characters.

Type

str

description

A short description of the model.

Type

str

input_token_limit

Maximum number of input tokens allowed for this model.

Type

int

output_token_limit

Maximum number of output tokens available for this model.

Type

int

supported_generation_methods

The model’s supported generation methods.

The corresponding API method names are defined as camel case strings, such as generateMessage and generateContent.

Type

MutableSequence[str]

temperature

Controls the randomness of the output.

Values can range over [0.0, max_temperature], inclusive. A higher value will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model. This value specifies the default to be used by the backend when calling the model.

This field is a member of oneof _temperature.

Type

float

max_temperature

The maximum temperature this model can use.

This field is a member of oneof _max_temperature.

Type

float

top_p

For Nucleus sampling.

Nucleus sampling considers the smallest set of tokens whose probability sum is at least top_p. This value specifies the default to be used by the backend when calling the model.

This field is a member of oneof _top_p.

Type

float

top_k

For Top-k sampling.

Top-k sampling considers the set of top_k most probable tokens. This value specifies the default to be used by the backend when calling the model. If empty, the model doesn’t use top-k sampling, and top_k isn’t allowed as a generation parameter.

This field is a member of oneof _top_k.

Type

int
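A sketch reading a Model’s limits and capabilities via GetModelRequest (model name and API key are placeholders):

    from google.ai import generativelanguage_v1 as glm

    client = glm.ModelServiceClient(client_options={"api_key": "YOUR_API_KEY"})

    model = client.get_model(request=glm.GetModelRequest(name="models/gemini-1.5-flash"))
    print(model.display_name, model.input_token_limit, model.output_token_limit)
    if "generateContent" in model.supported_generation_methods:
        print("supports generateContent")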

class google.ai.generativelanguage_v1.types.Part(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A datatype containing media that is part of a multi-part Content message.

A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data.

A Part must have a fixed IANA MIME type identifying the type and subtype of the media if the inline_data field is filled with raw bytes.

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

text

Inline text.

This field is a member of oneof data.

Type

str

inline_data

Inline media bytes.

This field is a member of oneof data.

Type

google.ai.generativelanguage_v1.types.Blob
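A short sketch of the data oneof in action, assuming proto-plus’s usual clear-on-assign behavior described above:

    from google.ai import generativelanguage_v1 as glm

    part = glm.Part(text="hello")

    # Part.data is a oneof: assigning inline_data clears the text member.
    part.inline_data = glm.Blob(mime_type="image/png", data=b"\x89PNG...")
    assert part.text == ""  # the previously set oneof member is reset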

class google.ai.generativelanguage_v1.types.SafetyRating(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Safety rating for a piece of content.

The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.

category

Required. The category for this rating.

Type

google.ai.generativelanguage_v1.types.HarmCategory

probability

Required. The probability of harm for this content.

Type

google.ai.generativelanguage_v1.types.SafetyRating.HarmProbability

blocked

Was this content blocked because of this rating?

Type

bool

class HarmProbability(value)[source]

Bases: proto.enums.Enum

The probability that a piece of content is harmful.

The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.

Values:
HARM_PROBABILITY_UNSPECIFIED (0):

Probability is unspecified.

NEGLIGIBLE (1):

Content has a negligible chance of being unsafe.

LOW (2):

Content has a low chance of being unsafe.

MEDIUM (3):

Content has a medium chance of being unsafe.

HIGH (4):

Content has a high chance of being unsafe.

class google.ai.generativelanguage_v1.types.SafetySetting(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Safety setting, affecting the safety-blocking behavior.

Passing a safety setting for a category changes the allowed probability that content is blocked.

category

Required. The category for this setting.

Type

google.ai.generativelanguage_v1.types.HarmCategory

threshold

Required. Controls the probability threshold at which harm is blocked.

Type

google.ai.generativelanguage_v1.types.SafetySetting.HarmBlockThreshold

class HarmBlockThreshold(value)[source]

Bases: proto.enums.Enum

Block at and beyond a specified harm probability.

Values:
HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):

Threshold is unspecified.

BLOCK_LOW_AND_ABOVE (1):

Content with NEGLIGIBLE will be allowed.

BLOCK_MEDIUM_AND_ABOVE (2):

Content with NEGLIGIBLE and LOW will be allowed.

BLOCK_ONLY_HIGH (3):

Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed.

BLOCK_NONE (4):

All content will be allowed.

OFF (5):

Turn off the safety filter.
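An illustrative pair of safety settings using this threshold enum; categories not listed keep the API’s defaults:

    from google.ai import generativelanguage_v1 as glm

    # At most one setting per category.
    safety_settings = [
        glm.SafetySetting(
            category=glm.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
            threshold=glm.SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        ),
        glm.SafetySetting(
            category=glm.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
            threshold=glm.SafetySetting.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        ),
    ]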

class google.ai.generativelanguage_v1.types.TaskType(value)[source]

Bases: proto.enums.Enum

Type of task for which the embedding will be used.

Values:
TASK_TYPE_UNSPECIFIED (0):

Unset value, which will default to one of the other enum values.

RETRIEVAL_QUERY (1):

Specifies the given text is a query in a search/retrieval setting.

RETRIEVAL_DOCUMENT (2):

Specifies the given text is a document from the corpus being searched.

SEMANTIC_SIMILARITY (3):

Specifies the given text will be used for semantic textual similarity (STS).

CLASSIFICATION (4):

Specifies that the given text will be classified.

CLUSTERING (5):

Specifies that the embeddings will be used for clustering.

QUESTION_ANSWERING (6):

Specifies that the given text will be used for question answering.

FACT_VERIFICATION (7):

Specifies that the given text will be used for fact verification.
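A document-embedding sketch using this enum; per the EmbedContentRequest notes above, RETRIEVAL_DOCUMENT may carry an optional title (model name and text are placeholders):

    from google.ai import generativelanguage_v1 as glm

    request = glm.EmbedContentRequest(
        model="models/text-embedding-004",  # placeholder model name
        content=glm.Content(parts=[glm.Part(text="Full document text...")]),
        task_type=glm.TaskType.RETRIEVAL_DOCUMENT,
        title="Quarterly report",  # improves retrieval embedding quality
    )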