Package | Description |
---|---|
com.google.api.services.language.v2.model | |
Modifier and Type | Method and Description |
---|---|
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.clone()` |
XPSConfidenceMetricsEntry | `XPSTextExtractionEvaluationMetrics.getBestF1ConfidenceMetrics()` Values are at the highest F1 score on the precision-recall curve. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.set(String fieldName, Object value)` |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setConfidenceThreshold(Float confidenceThreshold)` Metrics are computed under the assumption that the model never returns predictions with a score lower than this value. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setF1Score(Float f1Score)` The harmonic mean of recall and precision. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setF1ScoreAt1(Float f1ScoreAt1)` The harmonic mean of recall_at1 and precision_at1. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setFalseNegativeCount(Long falseNegativeCount)` The number of ground truth labels that are not matched by a model-created label. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setFalsePositiveCount(Long falsePositiveCount)` The number of model-created labels that do not match a ground truth label. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setFalsePositiveRate(Float falsePositiveRate)` The false positive rate for the given confidence threshold. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setFalsePositiveRateAt1(Float falsePositiveRateAt1)` The false positive rate when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setPositionThreshold(Integer positionThreshold)` Metrics are computed under the assumption that the model always returns at most this many predictions (ordered by score, descending), all of which must still meet the confidence_threshold. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setPrecision(Float precision)` Precision for the given confidence threshold. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setPrecisionAt1(Float precisionAt1)` The precision when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setRecall(Float recall)` Recall (true positive rate) for the given confidence threshold. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setRecallAt1(Float recallAt1)` The recall (true positive rate) when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setTrueNegativeCount(Long trueNegativeCount)` The number of labels that the model did not create and that, had they been created, would not have matched a ground truth label. |
XPSConfidenceMetricsEntry | `XPSConfidenceMetricsEntry.setTruePositiveCount(Long truePositiveCount)` The number of model-created labels that match a ground truth label. |
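The per-threshold fields above all derive from the four count fields. As a minimal, self-contained sketch (not the library's code), this is how precision, recall, f1_score, and false_positive_rate relate to the true/false positive/negative counts of one XPSConfidenceMetricsEntry:

```java
public class ConfidenceMetrics {
    // precision: fraction of model-created labels that match a ground truth label.
    static float precision(long tp, long fp) {
        return tp + fp == 0 ? 0f : (float) tp / (tp + fp);
    }

    // recall (true positive rate): fraction of ground truth labels the model matched.
    static float recall(long tp, long fn) {
        return tp + fn == 0 ? 0f : (float) tp / (tp + fn);
    }

    // f1_score: the harmonic mean of recall and precision.
    static float f1Score(float precision, float recall) {
        return precision + recall == 0f ? 0f : 2f * precision * recall / (precision + recall);
    }

    // false_positive_rate: fraction of actual negatives that the model labeled positive.
    static float falsePositiveRate(long fp, long tn) {
        return fp + tn == 0 ? 0f : (float) fp / (fp + tn);
    }

    public static void main(String[] args) {
        // Example counts at one confidence threshold.
        long tp = 80, fp = 10, fn = 20, tn = 90;
        float p = precision(tp, fp);   // 80 / 90
        float r = recall(tp, fn);      // 80 / 100
        System.out.printf("precision=%.4f recall=%.4f f1=%.4f fpr=%.4f%n",
                p, r, f1Score(p, r), falsePositiveRate(fp, tn));
    }
}
```

The guard clauses avoid division by zero at extreme thresholds, where one of the count pairs can be empty; the generated model class itself just stores whatever values the service computed.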
Modifier and Type | Method and Description |
---|---|
List&lt;XPSConfidenceMetricsEntry&gt; | `XPSClassificationEvaluationMetrics.getConfidenceMetricsEntries()` Metrics that have confidence thresholds. |
List&lt;XPSConfidenceMetricsEntry&gt; | `XPSTextExtractionEvaluationMetrics.getConfidenceMetricsEntries()` If the enclosing EvaluationMetrics.label is empty, confidence_metrics_entries is an evaluation of the entire model across all labels. |
Map&lt;String,XPSConfidenceMetricsEntry&gt; | `XPSTextExtractionEvaluationMetrics.getPerLabelConfidenceMetrics()` Only recall, precision, and f1_score will be set. |
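getPerLabelConfidenceMetrics() returns a map from label name to an entry in which only recall, precision, and f1_score are populated. A minimal sketch of iterating such a map, using a hypothetical LabelMetrics stand-in rather than the generated XPSConfidenceMetricsEntry class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PerLabelReport {
    // Hypothetical stand-in holding the only fields the per-label map populates:
    // recall, precision, and f1_score.
    record LabelMetrics(float recall, float precision, float f1Score) {}

    // Format one report line per label, in the map's iteration order.
    static String report(Map<String, LabelMetrics> perLabel) {
        StringBuilder sb = new StringBuilder();
        perLabel.forEach((label, m) -> sb.append(String.format(
                "%s: recall=%.2f precision=%.2f f1=%.2f%n",
                label, m.recall(), m.precision(), m.f1Score())));
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, LabelMetrics> perLabel = new LinkedHashMap<>();
        perLabel.put("PERSON", new LabelMetrics(0.91f, 0.88f, 0.89f));
        perLabel.put("LOCATION", new LabelMetrics(0.84f, 0.90f, 0.87f));
        System.out.print(report(perLabel));
    }
}
```

With the real class the same loop would read the three populated getters (e.g. getRecall()) and ignore the count and threshold fields, which the service leaves unset in this map.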
Modifier and Type | Method and Description |
---|---|
XPSTextExtractionEvaluationMetrics | `XPSTextExtractionEvaluationMetrics.setBestF1ConfidenceMetrics(XPSConfidenceMetricsEntry bestF1ConfidenceMetrics)` Values are at the highest F1 score on the precision-recall curve. |
Modifier and Type | Method and Description |
---|---|
XPSClassificationEvaluationMetrics | `XPSClassificationEvaluationMetrics.setConfidenceMetricsEntries(List<XPSConfidenceMetricsEntry> confidenceMetricsEntries)` Metrics that have confidence thresholds. |
XPSTextExtractionEvaluationMetrics | `XPSTextExtractionEvaluationMetrics.setConfidenceMetricsEntries(List<XPSConfidenceMetricsEntry> confidenceMetricsEntries)` If the enclosing EvaluationMetrics.label is empty, confidence_metrics_entries is an evaluation of the entire model across all labels. |
XPSTextExtractionEvaluationMetrics | `XPSTextExtractionEvaluationMetrics.setPerLabelConfidenceMetrics(Map<String,XPSConfidenceMetricsEntry> perLabelConfidenceMetrics)` Only recall, precision, and f1_score will be set. |
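getBestF1ConfidenceMetrics() holds the entry from the precision-recall curve with the highest F1 score. A minimal sketch of that selection over a list of per-threshold entries, using a hypothetical Entry stand-in rather than the generated model class:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class BestF1 {
    // Hypothetical stand-in for the threshold and F1 fields of XPSConfidenceMetricsEntry.
    record Entry(float confidenceThreshold, float f1Score) {}

    // Return the entry with the highest f1_score on the curve.
    static Entry bestByF1(List<Entry> curve) {
        return curve.stream()
                .max(Comparator.comparingDouble(Entry::f1Score))
                .orElseThrow();
    }

    public static void main(String[] args) {
        // One entry per confidence threshold, as confidence_metrics_entries provides.
        List<Entry> curve = Arrays.asList(
                new Entry(0.1f, 0.70f),
                new Entry(0.5f, 0.84f),
                new Entry(0.9f, 0.61f));
        System.out.println(bestByF1(curve).confidenceThreshold()); // prints 0.5
    }
}
```

With the real classes, the service precomputes this and exposes it directly via setBestF1ConfidenceMetrics/getBestF1ConfidenceMetrics, so clients do not need to scan the curve themselves.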
Copyright © 2011–2025 Google. All rights reserved.