public interface FaceAnnotationOrBuilder extends MessageOrBuilder
Modifier and Type | Method and Description
---|---
`Likelihood` | `getAngerLikelihood()` Anger likelihood.
`int` | `getAngerLikelihoodValue()` Anger likelihood.
`Likelihood` | `getBlurredLikelihood()` Blurred likelihood.
`int` | `getBlurredLikelihoodValue()` Blurred likelihood.
`BoundingPoly` | `getBoundingPoly()` The bounding polygon around the face.
`BoundingPolyOrBuilder` | `getBoundingPolyOrBuilder()` The bounding polygon around the face.
`float` | `getDetectionConfidence()` Detection confidence.
`BoundingPoly` | `getFdBoundingPoly()` The `fd_bounding_poly` bounding polygon is tighter than the `boundingPoly`, and encloses only the skin part of the face.
`BoundingPolyOrBuilder` | `getFdBoundingPolyOrBuilder()` The `fd_bounding_poly` bounding polygon is tighter than the `boundingPoly`, and encloses only the skin part of the face.
`Likelihood` | `getHeadwearLikelihood()` Headwear likelihood.
`int` | `getHeadwearLikelihoodValue()` Headwear likelihood.
`Likelihood` | `getJoyLikelihood()` Joy likelihood.
`int` | `getJoyLikelihoodValue()` Joy likelihood.
`float` | `getLandmarkingConfidence()` Face landmarking confidence.
`FaceAnnotation.Landmark` | `getLandmarks(int index)` Detected face landmarks.
`int` | `getLandmarksCount()` Detected face landmarks.
`List<FaceAnnotation.Landmark>` | `getLandmarksList()` Detected face landmarks.
`FaceAnnotation.LandmarkOrBuilder` | `getLandmarksOrBuilder(int index)` Detected face landmarks.
`List<? extends FaceAnnotation.LandmarkOrBuilder>` | `getLandmarksOrBuilderList()` Detected face landmarks.
`float` | `getPanAngle()` Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image.
`float` | `getRollAngle()` Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face.
`Likelihood` | `getSorrowLikelihood()` Sorrow likelihood.
`int` | `getSorrowLikelihoodValue()` Sorrow likelihood.
`Likelihood` | `getSurpriseLikelihood()` Surprise likelihood.
`int` | `getSurpriseLikelihoodValue()` Surprise likelihood.
`float` | `getTiltAngle()` Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane.
`Likelihood` | `getUnderExposedLikelihood()` Under-exposed likelihood.
`int` | `getUnderExposedLikelihoodValue()` Under-exposed likelihood.
`boolean` | `hasBoundingPoly()` The bounding polygon around the face.
`boolean` | `hasFdBoundingPoly()` The `fd_bounding_poly` bounding polygon is tighter than the `boundingPoly`, and encloses only the skin part of the face.
Methods inherited from interface com.google.protobuf.MessageOrBuilder: findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof

Methods inherited from interface com.google.protobuf.MessageLiteOrBuilder: isInitialized
boolean hasBoundingPoly()
The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in `ImageParams`. The bounding box is computed to "frame" the face in accordance with human expectations. It is based on the landmarker results. Note that one or more x and/or y coordinates may not be generated in the `BoundingPoly` (the polygon will be unbounded) if only a partial face appears in the image to be annotated.
.google.cloud.vision.v1p1beta1.BoundingPoly bounding_poly = 1;
BoundingPoly getBoundingPoly()
The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in `ImageParams`. The bounding box is computed to "frame" the face in accordance with human expectations. It is based on the landmarker results. Note that one or more x and/or y coordinates may not be generated in the `BoundingPoly` (the polygon will be unbounded) if only a partial face appears in the image to be annotated.
.google.cloud.vision.v1p1beta1.BoundingPoly bounding_poly = 1;
BoundingPolyOrBuilder getBoundingPolyOrBuilder()
The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in `ImageParams`. The bounding box is computed to "frame" the face in accordance with human expectations. It is based on the landmarker results. Note that one or more x and/or y coordinates may not be generated in the `BoundingPoly` (the polygon will be unbounded) if only a partial face appears in the image to be annotated.
.google.cloud.vision.v1p1beta1.BoundingPoly bounding_poly = 1;
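A minimal sketch of reading this field; the `Vertex` accessors (`getVerticesList()`, `getX()`, `getY()`) are assumed from the `BoundingPoly` message generated in the same package, not documented here:

```java
import com.google.cloud.vision.v1p1beta1.BoundingPoly;
import com.google.cloud.vision.v1p1beta1.FaceAnnotation;
import com.google.cloud.vision.v1p1beta1.Vertex;

final class BoundingPolyExample {
  // Guard with hasBoundingPoly(): the polygon may be unset or unbounded when
  // only a partial face appears in the annotated image.
  static void printFacePoly(FaceAnnotation face) {
    if (!face.hasBoundingPoly()) {
      return;
    }
    BoundingPoly poly = face.getBoundingPoly();
    for (Vertex vertex : poly.getVerticesList()) { // assumed BoundingPoly accessor
      System.out.printf("(%d, %d)%n", vertex.getX(), vertex.getY());
    }
  }
}
```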
boolean hasFdBoundingPoly()
The `fd_bounding_poly` bounding polygon is tighter than the `boundingPoly`, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the `fd` (face detection) prefix.
.google.cloud.vision.v1p1beta1.BoundingPoly fd_bounding_poly = 2;
BoundingPoly getFdBoundingPoly()
The `fd_bounding_poly` bounding polygon is tighter than the `boundingPoly`, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the `fd` (face detection) prefix.
.google.cloud.vision.v1p1beta1.BoundingPoly fd_bounding_poly = 2;
BoundingPolyOrBuilder getFdBoundingPolyOrBuilder()
The `fd_bounding_poly` bounding polygon is tighter than the `boundingPoly`, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the `fd` (face detection) prefix.
.google.cloud.vision.v1p1beta1.BoundingPoly fd_bounding_poly = 2;
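A sketch comparing the two polygons when both are set; `getVerticesCount()` is assumed from the generated repeated-field accessors on `BoundingPoly`:

```java
import com.google.cloud.vision.v1p1beta1.FaceAnnotation;

final class FdBoundingPolyExample {
  // fd_bounding_poly is the tighter, skin-only region from the initial face
  // detection, while bounding_poly frames the whole face.
  static void comparePolygons(FaceAnnotation face) {
    if (face.hasBoundingPoly() && face.hasFdBoundingPoly()) {
      System.out.printf("boundingPoly: %d vertices, fdBoundingPoly: %d vertices%n",
          face.getBoundingPoly().getVerticesCount(),   // assumed accessor
          face.getFdBoundingPoly().getVerticesCount());
    }
  }
}
```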
List<FaceAnnotation.Landmark> getLandmarksList()
Detected face landmarks.
repeated .google.cloud.vision.v1p1beta1.FaceAnnotation.Landmark landmarks = 3;
FaceAnnotation.Landmark getLandmarks(int index)
Detected face landmarks.
repeated .google.cloud.vision.v1p1beta1.FaceAnnotation.Landmark landmarks = 3;
int getLandmarksCount()
Detected face landmarks.
repeated .google.cloud.vision.v1p1beta1.FaceAnnotation.Landmark landmarks = 3;
List<? extends FaceAnnotation.LandmarkOrBuilder> getLandmarksOrBuilderList()
Detected face landmarks.
repeated .google.cloud.vision.v1p1beta1.FaceAnnotation.Landmark landmarks = 3;
FaceAnnotation.LandmarkOrBuilder getLandmarksOrBuilder(int index)
Detected face landmarks.
repeated .google.cloud.vision.v1p1beta1.FaceAnnotation.Landmark landmarks = 3;
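A sketch of index-based iteration over the repeated field; `Landmark.getType()` and `Landmark.getPosition()` (with float `getX()`/`getY()`/`getZ()`) are assumed from the standard `FaceAnnotation.Landmark` message, not documented here:

```java
import com.google.cloud.vision.v1p1beta1.FaceAnnotation;
import com.google.cloud.vision.v1p1beta1.FaceAnnotation.Landmark;

final class LandmarksExample {
  // getLandmarksCount()/getLandmarks(int) avoid materializing the full list;
  // getLandmarksList() would work equally well for enhanced-for iteration.
  static void printLandmarks(FaceAnnotation face) {
    for (int i = 0; i < face.getLandmarksCount(); i++) {
      Landmark landmark = face.getLandmarks(i);
      System.out.printf("%s at (%.1f, %.1f, %.1f)%n",
          landmark.getType(),              // assumed Landmark accessor
          landmark.getPosition().getX(),   // assumed Landmark accessor
          landmark.getPosition().getY(),
          landmark.getPosition().getZ());
    }
  }
}
```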
float getRollAngle()
Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
float roll_angle = 4;
float getPanAngle()
Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
float pan_angle = 5;
float getTiltAngle()
Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
float tilt_angle = 6;
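A sketch of a rough frontal-pose check built from the three angle getters; the 15-degree cutoff is only illustrative and not part of the API:

```java
import com.google.cloud.vision.v1p1beta1.FaceAnnotation;

final class PoseExample {
  // All three angles are in degrees with range [-180, 180]; values near zero
  // mean the face is roughly level and pointing at the camera.
  static boolean isRoughlyFrontal(FaceAnnotation face) {
    return Math.abs(face.getRollAngle()) < 15f
        && Math.abs(face.getPanAngle()) < 15f
        && Math.abs(face.getTiltAngle()) < 15f;
  }
}
```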
float getDetectionConfidence()
Detection confidence. Range [0, 1].
float detection_confidence = 7;
float getLandmarkingConfidence()
Face landmarking confidence. Range [0, 1].
float landmarking_confidence = 8;
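A sketch that filters a list of annotations by detection confidence; the 0.75 cutoff is arbitrary:

```java
import com.google.cloud.vision.v1p1beta1.FaceAnnotation;
import java.util.List;
import java.util.stream.Collectors;

final class ConfidenceExample {
  // Keep only faces whose detection confidence clears an illustrative 0.75
  // cutoff; both confidence fields are documented as lying in [0, 1].
  static List<FaceAnnotation> confidentFaces(List<FaceAnnotation> faces) {
    return faces.stream()
        .filter(face -> face.getDetectionConfidence() >= 0.75f)
        .collect(Collectors.toList());
  }
}
```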
int getJoyLikelihoodValue()
Joy likelihood.
.google.cloud.vision.v1p1beta1.Likelihood joy_likelihood = 9;
Likelihood getJoyLikelihood()
Joy likelihood.
.google.cloud.vision.v1p1beta1.Likelihood joy_likelihood = 9;
int getSorrowLikelihoodValue()
Sorrow likelihood.
.google.cloud.vision.v1p1beta1.Likelihood sorrow_likelihood = 10;
Likelihood getSorrowLikelihood()
Sorrow likelihood.
.google.cloud.vision.v1p1beta1.Likelihood sorrow_likelihood = 10;
int getAngerLikelihoodValue()
Anger likelihood.
.google.cloud.vision.v1p1beta1.Likelihood anger_likelihood = 11;
Likelihood getAngerLikelihood()
Anger likelihood.
.google.cloud.vision.v1p1beta1.Likelihood anger_likelihood = 11;
int getSurpriseLikelihoodValue()
Surprise likelihood.
.google.cloud.vision.v1p1beta1.Likelihood surprise_likelihood = 12;
Likelihood getSurpriseLikelihood()
Surprise likelihood.
.google.cloud.vision.v1p1beta1.Likelihood surprise_likelihood = 12;
int getUnderExposedLikelihoodValue()
Under-exposed likelihood.
.google.cloud.vision.v1p1beta1.Likelihood under_exposed_likelihood = 13;
Likelihood getUnderExposedLikelihood()
Under-exposed likelihood.
.google.cloud.vision.v1p1beta1.Likelihood under_exposed_likelihood = 13;
int getBlurredLikelihoodValue()
Blurred likelihood.
.google.cloud.vision.v1p1beta1.Likelihood blurred_likelihood = 14;
Likelihood getBlurredLikelihood()
Blurred likelihood.
.google.cloud.vision.v1p1beta1.Likelihood blurred_likelihood = 14;
int getHeadwearLikelihoodValue()
Headwear likelihood.
.google.cloud.vision.v1p1beta1.Likelihood headwear_likelihood = 15;
Likelihood getHeadwearLikelihood()
Headwear likelihood.
.google.cloud.vision.v1p1beta1.Likelihood headwear_likelihood = 15;
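A sketch of when the raw-value getters are useful next to the enum getters; the `Likelihood` constants (`LIKELY`, `VERY_LIKELY`, `UNRECOGNIZED`) follow standard protobuf enum generation and are assumed rather than documented here:

```java
import com.google.cloud.vision.v1p1beta1.FaceAnnotation;
import com.google.cloud.vision.v1p1beta1.Likelihood;

final class LikelihoodExample {
  // The enum getter resolves the wire value to a Likelihood constant; the
  // *Value() getter returns the raw integer, which is the only way to inspect
  // a value this client version does not know (the enum getter then reports
  // UNRECOGNIZED, as with any protobuf-generated enum).
  static boolean looksJoyful(FaceAnnotation face) {
    Likelihood joy = face.getJoyLikelihood();
    if (joy == Likelihood.UNRECOGNIZED) {
      System.out.println("Unknown joy_likelihood value: " + face.getJoyLikelihoodValue());
      return false;
    }
    return joy == Likelihood.LIKELY || joy == Likelihood.VERY_LIKELY;
  }
}
```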