Namespace Google.Apis.Vision.v1.Data
Classes
AddProductToProductSetRequest
Request message for the AddProductToProductSet method.
AnnotateFileRequest
A request to annotate a single file, for example a PDF, TIFF, or GIF file.
AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
AnnotateImageRequest
Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features, and with context information.
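As a sketch of the wire format, a BatchAnnotateImagesRequest body carrying one AnnotateImageRequest looks roughly like the following; the bucket path, feature choices, and language hint are illustrative, not part of this reference.

```python
# Illustrative JSON shape of a batch request holding a single
# AnnotateImageRequest: an image, requested features, and context.
request_body = {
    "requests": [
        {
            "image": {"source": {"imageUri": "gs://my-bucket/photo.jpg"}},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 10},
                {"type": "TEXT_DETECTION"},
            ],
            "imageContext": {"languageHints": ["en"]},
        }
    ]
}
```

Each entry in "requests" produces one AnnotateImageResponse in the corresponding batch response.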
AnnotateImageResponse
Response to an image annotation request.
AsyncAnnotateFileRequest
An offline file annotation request.
AsyncAnnotateFileResponse
The response for a single offline file annotation request.
AsyncBatchAnnotateFilesRequest
Multiple async file annotation requests are batched into a single service call.
AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
AsyncBatchAnnotateImagesRequest
Request for async image annotation for a list of images.
AsyncBatchAnnotateImagesResponse
Response to an async batch image annotation request.
BatchAnnotateFilesRequest
A list of requests to annotate files using the BatchAnnotateFiles API.
BatchAnnotateFilesResponse
A list of file annotation responses.
BatchAnnotateImagesRequest
Multiple image annotation requests are batched into a single service call.
BatchAnnotateImagesResponse
Response to a batch image annotation request.
BatchOperationMetadata
Metadata for the batch operations such as the current state. This is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
Block
Logical element on the page.
BoundingPoly
A bounding polygon for the detected image annotation.
CancelOperationRequest
The request message for Operations.CancelOperation.
Color
Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of java.awt.Color in Java; it can also be trivially provided to UIColor's +colorWithRed:green:blue:alpha method in iOS; and, with just a little work, it can be easily formatted into a CSS rgba() string in JavaScript. This reference page doesn't have information about the absolute color space that should be used to interpret the RGB value (for example, sRGB, Adobe RGB, DCI-P3, and BT.2020). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5.

Example (Java):

```java
import com.google.type.Color;

// ...
public static java.awt.Color fromProto(Color protocolor) {
  float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f;
  return new java.awt.Color(
      protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha);
}

public static Color toProto(java.awt.Color color) {
  float red = (float) color.getRed();
  float green = (float) color.getGreen();
  float blue = (float) color.getBlue();
  float denominator = 255.0f;
  Color.Builder resultBuilder =
      Color.newBuilder()
          .setRed(red / denominator)
          .setGreen(green / denominator)
          .setBlue(blue / denominator);
  int alpha = color.getAlpha();
  if (alpha != 255) {
    resultBuilder.setAlpha(
        FloatValue.newBuilder()
            .setValue(((float) alpha) / denominator)
            .build());
  }
  return resultBuilder.build();
}
// ...
```

Example (iOS / Obj-C):

```objectivec
// ...
static UIColor* fromProto(Color* protocolor) {
  float red = [protocolor red];
  float green = [protocolor green];
  float blue = [protocolor blue];
  FloatValue* alpha_wrapper = [protocolor alpha];
  float alpha = 1.0;
  if (alpha_wrapper != nil) {
    alpha = [alpha_wrapper value];
  }
  return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

static Color* toProto(UIColor* color) {
  CGFloat red, green, blue, alpha;
  if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
    return nil;
  }
  Color* result = [[Color alloc] init];
  [result setRed:red];
  [result setGreen:green];
  [result setBlue:blue];
  if (alpha <= 0.9999) {
    [result setAlpha:floatWrapperWithValue(alpha)];
  }
  [result autorelease];
  return result;
}
// ...
```

Example (JavaScript):

```javascript
// ...
var protoToCssColor = function(rgb_color) {
  var redFrac = rgb_color.red || 0.0;
  var greenFrac = rgb_color.green || 0.0;
  var blueFrac = rgb_color.blue || 0.0;
  var red = Math.floor(redFrac * 255);
  var green = Math.floor(greenFrac * 255);
  var blue = Math.floor(blueFrac * 255);
  if (!('alpha' in rgb_color)) {
    return rgbToCssColor(red, green, blue);
  }
  var alphaFrac = rgb_color.alpha.value || 0.0;
  var rgbParams = [red, green, blue].join(',');
  return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
};

var rgbToCssColor = function(red, green, blue) {
  var rgbNumber = new Number((red << 16) | (green << 8) | blue);
  var hexString = rgbNumber.toString(16);
  var missingZeros = 6 - hexString.length;
  var resultBuilder = ['#'];
  for (var i = 0; i < missingZeros; i++) {
    resultBuilder.push('0');
  }
  resultBuilder.push(hexString);
  return resultBuilder.join('');
};
// ...
```
ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
CropHint
Single crop hint that is used to generate a new crop when serving an image.
CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
CropHintsParams
Parameters for crop hints annotation request.
DetectedBreak
Detected start or end of a structural component.
DetectedLanguage
Detected language for a structural component.
DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
Empty
A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
EntityAnnotation
Set of detected entity features.
FaceAnnotation
A face annotation object contains the results of face detection.
Feature
The type of Google Cloud Vision API detection to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.
GcsDestination
The Google Cloud Storage location where the output will be written to.
GcsSource
The Google Cloud Storage location where the input will be read from.
GoogleCloudVisionV1p1beta1AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
GoogleCloudVisionV1p1beta1AnnotateImageResponse
Response to an image annotation request.
GoogleCloudVisionV1p1beta1AsyncAnnotateFileResponse
The response for a single offline file annotation request.
GoogleCloudVisionV1p1beta1AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
GoogleCloudVisionV1p1beta1Block
Logical element on the page.
GoogleCloudVisionV1p1beta1BoundingPoly
A bounding polygon for the detected image annotation.
GoogleCloudVisionV1p1beta1ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
GoogleCloudVisionV1p1beta1CropHint
Single crop hint that is used to generate a new crop when serving an image.
GoogleCloudVisionV1p1beta1CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
GoogleCloudVisionV1p1beta1DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
GoogleCloudVisionV1p1beta1EntityAnnotation
Set of detected entity features.
GoogleCloudVisionV1p1beta1FaceAnnotation
A face annotation object contains the results of face detection.
GoogleCloudVisionV1p1beta1FaceAnnotationLandmark
A face-specific landmark (for example, a face feature). Landmark positions may fall outside the bounds of the image if the face is near one or more edges of the image. Therefore it is NOT guaranteed that 0 <= x < width or 0 <= y < height.
GoogleCloudVisionV1p1beta1GcsDestination
The Google Cloud Storage location where the output will be written to.
GoogleCloudVisionV1p1beta1GcsSource
The Google Cloud Storage location where the input will be read from.
GoogleCloudVisionV1p1beta1ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
GoogleCloudVisionV1p1beta1ImageProperties
Stores image properties, such as dominant colors.
GoogleCloudVisionV1p1beta1InputConfig
The desired input location and metadata.
GoogleCloudVisionV1p1beta1LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
GoogleCloudVisionV1p1beta1LocationInfo
Detected entity location information.
GoogleCloudVisionV1p1beta1NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
GoogleCloudVisionV1p1beta1OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
GoogleCloudVisionV1p1beta1OutputConfig
The desired output location and metadata.
GoogleCloudVisionV1p1beta1Page
Detected page from OCR.
GoogleCloudVisionV1p1beta1Paragraph
Structural unit of text representing a number of words in a certain order.
GoogleCloudVisionV1p1beta1Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
GoogleCloudVisionV1p1beta1Product
A Product contains ReferenceImages.
GoogleCloudVisionV1p1beta1ProductKeyValue
A product label represented as a key-value pair.
GoogleCloudVisionV1p1beta1ProductSearchResults
Results for a product search request.
GoogleCloudVisionV1p1beta1ProductSearchResultsGroupedResult
Information about the products similar to a single product in a query image.
GoogleCloudVisionV1p1beta1ProductSearchResultsObjectAnnotation
Prediction for what the object in the bounding box is.
GoogleCloudVisionV1p1beta1ProductSearchResultsResult
Information about a product.
GoogleCloudVisionV1p1beta1Property
A Property consists of a user-supplied name/value pair.
GoogleCloudVisionV1p1beta1SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
GoogleCloudVisionV1p1beta1Symbol
A single symbol representation.
GoogleCloudVisionV1p1beta1TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
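A minimal sketch of walking that hierarchy, assuming the annotation has been parsed into plain dictionaries (field names follow the JSON representation; the sample fragment is made up):

```python
def collect_text(text_annotation):
    """Reassemble text by walking Page -> Block -> Paragraph -> Word -> Symbol."""
    words = []
    for page in text_annotation.get("pages", []):
        for block in page.get("blocks", []):
            for paragraph in block.get("paragraphs", []):
                for word in paragraph.get("words", []):
                    words.append("".join(s["text"] for s in word.get("symbols", [])))
    return " ".join(words)

# Made-up annotation fragment illustrating the nesting.
annotation = {
    "pages": [{"blocks": [{"paragraphs": [{"words": [
        {"symbols": [{"text": c} for c in "Hello"]},
        {"symbols": [{"text": c} for c in "world"]},
    ]}]}]}],
}
```

A real response additionally carries a property object (detected languages, breaks) on each level of this nesting.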
GoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak
Detected start or end of a structural component.
GoogleCloudVisionV1p1beta1TextAnnotationDetectedLanguage
Detected language for a structural component.
GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
Additional information detected on the structural component.
GoogleCloudVisionV1p1beta1Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
GoogleCloudVisionV1p1beta1WebDetection
Relevant information for the image from the Internet.
GoogleCloudVisionV1p1beta1WebDetectionWebEntity
Entity deduced from similar images on the Internet.
GoogleCloudVisionV1p1beta1WebDetectionWebImage
Metadata for online images.
GoogleCloudVisionV1p1beta1WebDetectionWebLabel
Label to provide extra metadata for the web detection.
GoogleCloudVisionV1p1beta1WebDetectionWebPage
Metadata for web pages.
GoogleCloudVisionV1p1beta1Word
A word representation.
GoogleCloudVisionV1p2beta1AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
GoogleCloudVisionV1p2beta1AnnotateImageResponse
Response to an image annotation request.
GoogleCloudVisionV1p2beta1AsyncAnnotateFileResponse
The response for a single offline file annotation request.
GoogleCloudVisionV1p2beta1AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
GoogleCloudVisionV1p2beta1Block
Logical element on the page.
GoogleCloudVisionV1p2beta1BoundingPoly
A bounding polygon for the detected image annotation.
GoogleCloudVisionV1p2beta1ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
GoogleCloudVisionV1p2beta1CropHint
Single crop hint that is used to generate a new crop when serving an image.
GoogleCloudVisionV1p2beta1CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
GoogleCloudVisionV1p2beta1DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
GoogleCloudVisionV1p2beta1EntityAnnotation
Set of detected entity features.
GoogleCloudVisionV1p2beta1FaceAnnotation
A face annotation object contains the results of face detection.
GoogleCloudVisionV1p2beta1FaceAnnotationLandmark
A face-specific landmark (for example, a face feature). Landmark positions may fall outside the bounds of the image if the face is near one or more edges of the image. Therefore it is NOT guaranteed that 0 <= x < width or 0 <= y < height.
GoogleCloudVisionV1p2beta1GcsDestination
The Google Cloud Storage location where the output will be written to.
GoogleCloudVisionV1p2beta1GcsSource
The Google Cloud Storage location where the input will be read from.
GoogleCloudVisionV1p2beta1ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
GoogleCloudVisionV1p2beta1ImageProperties
Stores image properties, such as dominant colors.
GoogleCloudVisionV1p2beta1InputConfig
The desired input location and metadata.
GoogleCloudVisionV1p2beta1LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
GoogleCloudVisionV1p2beta1LocationInfo
Detected entity location information.
GoogleCloudVisionV1p2beta1NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
GoogleCloudVisionV1p2beta1OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
GoogleCloudVisionV1p2beta1OutputConfig
The desired output location and metadata.
GoogleCloudVisionV1p2beta1Page
Detected page from OCR.
GoogleCloudVisionV1p2beta1Paragraph
Structural unit of text representing a number of words in a certain order.
GoogleCloudVisionV1p2beta1Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
GoogleCloudVisionV1p2beta1Product
A Product contains ReferenceImages.
GoogleCloudVisionV1p2beta1ProductKeyValue
A product label represented as a key-value pair.
GoogleCloudVisionV1p2beta1ProductSearchResults
Results for a product search request.
GoogleCloudVisionV1p2beta1ProductSearchResultsGroupedResult
Information about the products similar to a single product in a query image.
GoogleCloudVisionV1p2beta1ProductSearchResultsObjectAnnotation
Prediction for what the object in the bounding box is.
GoogleCloudVisionV1p2beta1ProductSearchResultsResult
Information about a product.
GoogleCloudVisionV1p2beta1Property
A Property consists of a user-supplied name/value pair.
GoogleCloudVisionV1p2beta1SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
GoogleCloudVisionV1p2beta1Symbol
A single symbol representation.
GoogleCloudVisionV1p2beta1TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
GoogleCloudVisionV1p2beta1TextAnnotationDetectedBreak
Detected start or end of a structural component.
GoogleCloudVisionV1p2beta1TextAnnotationDetectedLanguage
Detected language for a structural component.
GoogleCloudVisionV1p2beta1TextAnnotationTextProperty
Additional information detected on the structural component.
GoogleCloudVisionV1p2beta1Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
GoogleCloudVisionV1p2beta1WebDetection
Relevant information for the image from the Internet.
GoogleCloudVisionV1p2beta1WebDetectionWebEntity
Entity deduced from similar images on the Internet.
GoogleCloudVisionV1p2beta1WebDetectionWebImage
Metadata for online images.
GoogleCloudVisionV1p2beta1WebDetectionWebLabel
Label to provide extra metadata for the web detection.
GoogleCloudVisionV1p2beta1WebDetectionWebPage
Metadata for web pages.
GoogleCloudVisionV1p2beta1Word
A word representation.
GoogleCloudVisionV1p3beta1AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
GoogleCloudVisionV1p3beta1AnnotateImageResponse
Response to an image annotation request.
GoogleCloudVisionV1p3beta1AsyncAnnotateFileResponse
The response for a single offline file annotation request.
GoogleCloudVisionV1p3beta1AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
GoogleCloudVisionV1p3beta1BatchOperationMetadata
Metadata for the batch operations such as the current state. This is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
GoogleCloudVisionV1p3beta1Block
Logical element on the page.
GoogleCloudVisionV1p3beta1BoundingPoly
A bounding polygon for the detected image annotation.
GoogleCloudVisionV1p3beta1ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
GoogleCloudVisionV1p3beta1CropHint
Single crop hint that is used to generate a new crop when serving an image.
GoogleCloudVisionV1p3beta1CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
GoogleCloudVisionV1p3beta1DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
GoogleCloudVisionV1p3beta1EntityAnnotation
Set of detected entity features.
GoogleCloudVisionV1p3beta1FaceAnnotation
A face annotation object contains the results of face detection.
GoogleCloudVisionV1p3beta1FaceAnnotationLandmark
A face-specific landmark (for example, a face feature). Landmark positions may fall outside the bounds of the image if the face is near one or more edges of the image. Therefore it is NOT guaranteed that 0 <= x < width or 0 <= y < height.
GoogleCloudVisionV1p3beta1GcsDestination
The Google Cloud Storage location where the output will be written to.
GoogleCloudVisionV1p3beta1GcsSource
The Google Cloud Storage location where the input will be read from.
GoogleCloudVisionV1p3beta1ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
GoogleCloudVisionV1p3beta1ImageProperties
Stores image properties, such as dominant colors.
GoogleCloudVisionV1p3beta1ImportProductSetsResponse
Response message for the ImportProductSets method. This message is returned by the google.longrunning.Operations.GetOperation method in the returned google.longrunning.Operation.response field.
GoogleCloudVisionV1p3beta1InputConfig
The desired input location and metadata.
GoogleCloudVisionV1p3beta1LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
GoogleCloudVisionV1p3beta1LocationInfo
Detected entity location information.
GoogleCloudVisionV1p3beta1NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
GoogleCloudVisionV1p3beta1OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
GoogleCloudVisionV1p3beta1OutputConfig
The desired output location and metadata.
GoogleCloudVisionV1p3beta1Page
Detected page from OCR.
GoogleCloudVisionV1p3beta1Paragraph
Structural unit of text representing a number of words in a certain order.
GoogleCloudVisionV1p3beta1Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
GoogleCloudVisionV1p3beta1Product
A Product contains ReferenceImages.
GoogleCloudVisionV1p3beta1ProductKeyValue
A product label represented as a key-value pair.
GoogleCloudVisionV1p3beta1ProductSearchResults
Results for a product search request.
GoogleCloudVisionV1p3beta1ProductSearchResultsGroupedResult
Information about the products similar to a single product in a query image.
GoogleCloudVisionV1p3beta1ProductSearchResultsObjectAnnotation
Prediction for what the object in the bounding box is.
GoogleCloudVisionV1p3beta1ProductSearchResultsResult
Information about a product.
GoogleCloudVisionV1p3beta1Property
A Property consists of a user-supplied name/value pair.
GoogleCloudVisionV1p3beta1ReferenceImage
A ReferenceImage represents a product image and its associated metadata, such as bounding boxes.
GoogleCloudVisionV1p3beta1SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
GoogleCloudVisionV1p3beta1Symbol
A single symbol representation.
GoogleCloudVisionV1p3beta1TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
GoogleCloudVisionV1p3beta1TextAnnotationDetectedBreak
Detected start or end of a structural component.
GoogleCloudVisionV1p3beta1TextAnnotationDetectedLanguage
Detected language for a structural component.
GoogleCloudVisionV1p3beta1TextAnnotationTextProperty
Additional information detected on the structural component.
GoogleCloudVisionV1p3beta1Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
GoogleCloudVisionV1p3beta1WebDetection
Relevant information for the image from the Internet.
GoogleCloudVisionV1p3beta1WebDetectionWebEntity
Entity deduced from similar images on the Internet.
GoogleCloudVisionV1p3beta1WebDetectionWebImage
Metadata for online images.
GoogleCloudVisionV1p3beta1WebDetectionWebLabel
Label to provide extra metadata for the web detection.
GoogleCloudVisionV1p3beta1WebDetectionWebPage
Metadata for web pages.
GoogleCloudVisionV1p3beta1Word
A word representation.
GoogleCloudVisionV1p4beta1AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
GoogleCloudVisionV1p4beta1AnnotateImageResponse
Response to an image annotation request.
GoogleCloudVisionV1p4beta1AsyncAnnotateFileResponse
The response for a single offline file annotation request.
GoogleCloudVisionV1p4beta1AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
GoogleCloudVisionV1p4beta1AsyncBatchAnnotateImagesResponse
Response to an async batch image annotation request.
GoogleCloudVisionV1p4beta1BatchAnnotateFilesResponse
A list of file annotation responses.
GoogleCloudVisionV1p4beta1BatchOperationMetadata
Metadata for the batch operations such as the current state. This is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
GoogleCloudVisionV1p4beta1Block
Logical element on the page.
GoogleCloudVisionV1p4beta1BoundingPoly
A bounding polygon for the detected image annotation.
GoogleCloudVisionV1p4beta1Celebrity
A Celebrity is a group of Faces with an identity.
GoogleCloudVisionV1p4beta1ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
GoogleCloudVisionV1p4beta1CropHint
Single crop hint that is used to generate a new crop when serving an image.
GoogleCloudVisionV1p4beta1CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
GoogleCloudVisionV1p4beta1DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
GoogleCloudVisionV1p4beta1EntityAnnotation
Set of detected entity features.
GoogleCloudVisionV1p4beta1FaceAnnotation
A face annotation object contains the results of face detection.
GoogleCloudVisionV1p4beta1FaceAnnotationLandmark
A face-specific landmark (for example, a face feature). Landmark positions may fall outside the bounds of the image if the face is near one or more edges of the image. Therefore it is NOT guaranteed that 0 <= x < width or 0 <= y < height.
GoogleCloudVisionV1p4beta1FaceRecognitionResult
Information about a face's identity.
GoogleCloudVisionV1p4beta1GcsDestination
The Google Cloud Storage location where the output will be written to.
GoogleCloudVisionV1p4beta1GcsSource
The Google Cloud Storage location where the input will be read from.
GoogleCloudVisionV1p4beta1ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
GoogleCloudVisionV1p4beta1ImageProperties
Stores image properties, such as dominant colors.
GoogleCloudVisionV1p4beta1ImportProductSetsResponse
Response message for the ImportProductSets method. This message is returned by the google.longrunning.Operations.GetOperation method in the returned google.longrunning.Operation.response field.
GoogleCloudVisionV1p4beta1InputConfig
The desired input location and metadata.
GoogleCloudVisionV1p4beta1LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
GoogleCloudVisionV1p4beta1LocationInfo
Detected entity location information.
GoogleCloudVisionV1p4beta1NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
GoogleCloudVisionV1p4beta1OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
GoogleCloudVisionV1p4beta1OutputConfig
The desired output location and metadata.
GoogleCloudVisionV1p4beta1Page
Detected page from OCR.
GoogleCloudVisionV1p4beta1Paragraph
Structural unit of text representing a number of words in a certain order.
GoogleCloudVisionV1p4beta1Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
GoogleCloudVisionV1p4beta1Product
A Product contains ReferenceImages.
GoogleCloudVisionV1p4beta1ProductKeyValue
A product label represented as a key-value pair.
GoogleCloudVisionV1p4beta1ProductSearchResults
Results for a product search request.
GoogleCloudVisionV1p4beta1ProductSearchResultsGroupedResult
Information about the products similar to a single product in a query image.
GoogleCloudVisionV1p4beta1ProductSearchResultsObjectAnnotation
Prediction for what the object in the bounding box is.
GoogleCloudVisionV1p4beta1ProductSearchResultsResult
Information about a product.
GoogleCloudVisionV1p4beta1Property
A Property consists of a user-supplied name/value pair.
GoogleCloudVisionV1p4beta1ReferenceImage
A ReferenceImage represents a product image and its associated metadata, such as bounding boxes.
GoogleCloudVisionV1p4beta1SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
GoogleCloudVisionV1p4beta1Symbol
A single symbol representation.
GoogleCloudVisionV1p4beta1TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
GoogleCloudVisionV1p4beta1TextAnnotationDetectedBreak
Detected start or end of a structural component.
GoogleCloudVisionV1p4beta1TextAnnotationDetectedLanguage
Detected language for a structural component.
GoogleCloudVisionV1p4beta1TextAnnotationTextProperty
Additional information detected on the structural component.
GoogleCloudVisionV1p4beta1Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
GoogleCloudVisionV1p4beta1WebDetection
Relevant information for the image from the Internet.
GoogleCloudVisionV1p4beta1WebDetectionWebEntity
Entity deduced from similar images on the Internet.
GoogleCloudVisionV1p4beta1WebDetectionWebImage
Metadata for online images.
GoogleCloudVisionV1p4beta1WebDetectionWebLabel
Label to provide extra metadata for the web detection.
GoogleCloudVisionV1p4beta1WebDetectionWebPage
Metadata for web pages.
GoogleCloudVisionV1p4beta1Word
A word representation.
GroupedResult
Information about the products similar to a single product in a query image.
Image
Client image to perform Google Cloud Vision API tasks over.
ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
ImageContext
Image context and/or feature-specific parameters.
ImageProperties
Stores image properties, such as dominant colors.
ImageSource
External image source (Google Cloud Storage or web URL image location).
ImportProductSetsGcsSource
The Google Cloud Storage location of a CSV file that preserves a list of ImportProductSetRequests, one per line.
ImportProductSetsInputConfig
The input content for the ImportProductSets method.
ImportProductSetsRequest
Request message for the ImportProductSets method.
ImportProductSetsResponse
Response message for the ImportProductSets method. This message is returned by the google.longrunning.Operations.GetOperation method in the returned google.longrunning.Operation.response field.
InputConfig
The desired input location and metadata.
KeyValue
A product label represented as a key-value pair.
Landmark
A face-specific landmark (for example, a face feature). Landmark positions may fall outside the bounds of the image if the face is near one or more edges of the image. Therefore it is NOT guaranteed that 0 <= x < width or 0 <= y < height.
LatLng
An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
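The normalized ranges mentioned above are latitude in [-90, 90] and longitude in [-180, 180]; a small validation sketch (the helper name is mine, not part of the API):

```python
def is_valid_lat_lng(latitude, longitude):
    """Check a latitude/longitude pair against the normalized WGS84 ranges."""
    return -90.0 <= latitude <= 90.0 and -180.0 <= longitude <= 180.0
```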
LatLongRect
Rectangle determined by min and max LatLng pairs.
ListOperationsResponse
The response message for Operations.ListOperations.
ListProductSetsResponse
Response message for the ListProductSets method.
ListProductsInProductSetResponse
Response message for the ListProductsInProductSet method.
ListProductsResponse
Response message for the ListProducts method.
ListReferenceImagesResponse
Response message for the ListReferenceImages method.
LocalizedObjectAnnotation
Set of detected objects with bounding boxes.
LocationInfo
Detected entity location information.
NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
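Because normalized coordinates are fractions of the image dimensions, converting a NormalizedVertex back to pixel coordinates is a single multiplication. A sketch over the JSON representation (field names x and y match the message; the helper is mine):

```python
def to_pixel(normalized_vertex, width, height):
    """Scale a NormalizedVertex ({"x": ..., "y": ...} in [0, 1]) to pixels."""
    return (normalized_vertex.get("x", 0.0) * width,
            normalized_vertex.get("y", 0.0) * height)
```

Note that absent fields default to 0 in proto JSON, hence the .get(..., 0.0) defaults.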
ObjectAnnotation
Prediction for what the object in the bounding box is.
Operation
This resource represents a long-running operation that is the result of a network API call.
OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
OutputConfig
The desired output location and metadata.
Page
Detected page from OCR.
Paragraph
Structural unit of text representing a number of words in a certain order.
Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
Product
A Product contains ReferenceImages.
ProductSearchParams
Parameters for a product search request.
ProductSearchResults
Results for a product search request.
ProductSet
A ProductSet contains Products. A ProductSet can contain a maximum of 1 million reference images. If the limit is exceeded, periodic indexing will fail.
ProductSetPurgeConfig
Config to control which ProductSet contains the Products to be deleted.
Property
A Property consists of a user-supplied name/value pair.
PurgeProductsRequest
Request message for the PurgeProducts method.
ReferenceImage
A ReferenceImage represents a product image and its associated metadata, such as bounding boxes.
RemoveProductFromProductSetRequest
Request message for the RemoveProductFromProductSet method.
Result
Information about a product.
SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
Status
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
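In the JSON representation, those three pieces of data look like this (the message text and detail entry are illustrative; 3 is the canonical gRPC code for INVALID_ARGUMENT):

```python
# Illustrative google.rpc.Status payload; each details entry carries a
# @type URL identifying its packed message type.
status = {
    "code": 3,  # canonical gRPC error code INVALID_ARGUMENT
    "message": "Image URI is malformed.",
    "details": [
        {
            "@type": "type.googleapis.com/google.rpc.BadRequest",
            "fieldViolations": [{"field": "image.source.imageUri"}],
        }
    ],
}
```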
Symbol
A single symbol representation.
TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
TextDetectionParams
Parameters for text detections. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features.
TextProperty
Additional information detected on the structural component.
Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
WebDetection
Relevant information for the image from the Internet.
WebDetectionParams
Parameters for web detection request.
WebEntity
Entity deduced from similar images on the Internet.
WebImage
Metadata for online images.
WebLabel
Label to provide extra metadata for the web detection.
WebPage
Metadata for web pages.
Word
A word representation.