Class: Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
- lib/google/apis/aiplatform_v1beta1/classes.rb,
lib/google/apis/aiplatform_v1beta1/representations.rb
Instance Attribute Summary
-
#confidence_threshold ⇒ Float
Metrics are computed with an assumption that the Model never returns predictions with score lower than this value.
-
#confusion_matrix ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsConfusionMatrix
Confusion matrix of the evaluation for this confidence_threshold.
-
#f1_score ⇒ Float
The harmonic mean of recall and precision.
-
#f1_score_at1 ⇒ Float
The harmonic mean of recallAt1 and precisionAt1.
-
#f1_score_macro ⇒ Float
Macro-averaged F1 Score.
-
#f1_score_micro ⇒ Float
Micro-averaged F1 Score.
-
#false_negative_count ⇒ Fixnum
The number of ground truth labels that are not matched by a Model created label.
-
#false_positive_count ⇒ Fixnum
The number of Model created labels that do not match a ground truth label.
-
#false_positive_rate ⇒ Float
False Positive Rate for the given confidence threshold.
-
#false_positive_rate_at1 ⇒ Float
The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
-
#max_predictions ⇒ Fixnum
Metrics are computed with an assumption that the Model always returns at most this many predictions (ordered by their score, descending), but they all still need to meet the confidenceThreshold.
-
#precision ⇒ Float
Precision for the given confidence threshold.
-
#precision_at1 ⇒ Float
The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
-
#recall ⇒ Float
Recall (True Positive Rate) for the given confidence threshold.
-
#recall_at1 ⇒ Float
The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
-
#true_negative_count ⇒ Fixnum
The number of labels that the Model did not create and that, had they been created, would not have matched a ground truth label.
-
#true_positive_count ⇒ Fixnum
The number of Model created labels that match a ground truth label.
Instance Method Summary
-
#initialize(**args) ⇒ GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
constructor
A new instance of GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
Returns a new instance of GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics.
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25459

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#confidence_threshold ⇒ Float
Metrics are computed with an assumption that the Model never returns
predictions with score lower than this value.
Corresponds to the JSON property confidenceThreshold
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25368

def confidence_threshold
  @confidence_threshold
end
#confusion_matrix ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsConfusionMatrix
Confusion matrix of the evaluation for this confidence_threshold.
Corresponds to the JSON property confusionMatrix
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25373

def confusion_matrix
  @confusion_matrix
end
#f1_score ⇒ Float
The harmonic mean of recall and precision. For summary metrics, it computes
the micro-averaged F1 score.
Corresponds to the JSON property f1Score
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25379

def f1_score
  @f1_score
end
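The relationship stated above can be checked by hand: a minimal sketch (illustrative values only, not part of the gem) recomputing the F1 score as the harmonic mean of precision and recall.

```ruby
# Hypothetical values standing in for the `precision` and `recall`
# attributes returned on this object.
precision = 0.8
recall = 0.6

# Harmonic mean of precision and recall, as described for f1_score.
f1 = 2 * precision * recall / (precision + recall)
# f1 ≈ 0.686
```

The same formula applies to f1_score_at1, using precisionAt1 and recallAt1 instead.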
#f1_score_at1 ⇒ Float
The harmonic mean of recallAt1 and precisionAt1.
Corresponds to the JSON property f1ScoreAt1
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25384

def f1_score_at1
  @f1_score_at1
end
#f1_score_macro ⇒ Float
Macro-averaged F1 Score.
Corresponds to the JSON property f1ScoreMacro
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25389

def f1_score_macro
  @f1_score_macro
end
#f1_score_micro ⇒ Float
Micro-averaged F1 Score.
Corresponds to the JSON property f1ScoreMicro
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25394

def f1_score_micro
  @f1_score_micro
end
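The macro/micro distinction can be made concrete with a small sketch (hypothetical per-class counts, not gem code): macro averaging computes F1 per class and then averages, while micro averaging pools the counts first.

```ruby
# Hypothetical per-class (true positive, false positive, false negative)
# counts for two classes.
counts = [[8, 2, 2], [1, 1, 3]]

# F1 from raw counts: harmonic mean of precision and recall.
f1 = lambda do |tp, fp, fn|
  p = tp.to_f / (tp + fp)
  r = tp.to_f / (tp + fn)
  2 * p * r / (p + r)
end

# Macro-averaged: mean of the per-class F1 scores.
macro = counts.map { |tp, fp, fn| f1.call(tp, fp, fn) }.sum / counts.size

# Micro-averaged: pool the counts across classes, then compute one F1.
tp, fp, fn = counts.transpose.map(&:sum)
micro = f1.call(tp, fp, fn)
```

With these counts the rare, poorly-predicted second class drags the macro score well below the micro score, which is why the two summary fields can differ substantially on imbalanced data.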
#false_negative_count ⇒ Fixnum
The number of ground truth labels that are not matched by a Model created
label.
Corresponds to the JSON property falseNegativeCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25400

def false_negative_count
  @false_negative_count
end
#false_positive_count ⇒ Fixnum
The number of Model created labels that do not match a ground truth label.
Corresponds to the JSON property falsePositiveCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25405

def false_positive_count
  @false_positive_count
end
#false_positive_rate ⇒ Float
False Positive Rate for the given confidence threshold.
Corresponds to the JSON property falsePositiveRate
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25410

def false_positive_rate
  @false_positive_rate
end
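The rate fields relate to the count fields on this same object in the standard way; a sketch with hypothetical counts, assuming the usual definition FPR = FP / (FP + TN):

```ruby
# Hypothetical values for the falsePositiveCount and trueNegativeCount
# attributes at one confidence threshold.
false_positive_count = 5
true_negative_count  = 95

# False positive rate: fraction of actual negatives flagged as positive.
false_positive_rate =
  false_positive_count.to_f / (false_positive_count + true_negative_count)
# => 0.05
```

Precision (TP / (TP + FP)) and recall (TP / (TP + FN)) follow the same pattern from truePositiveCount, falsePositiveCount, and falseNegativeCount.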
#false_positive_rate_at1 ⇒ Float
The False Positive Rate when only considering the label that has the highest
prediction score and not below the confidence threshold for each DataItem.
Corresponds to the JSON property falsePositiveRateAt1
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25416

def false_positive_rate_at1
  @false_positive_rate_at1
end
#max_predictions ⇒ Fixnum
Metrics are computed with an assumption that the Model always returns at most
this many predictions (ordered by their score, descending), but they all
still need to meet the confidenceThreshold.
Corresponds to the JSON property maxPredictions
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25423

def max_predictions
  @max_predictions
end
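How max_predictions and confidence_threshold interact can be sketched as follows (hypothetical scores; the selection logic is an illustration of the description above, not gem code): predictions are cut to the top max_predictions by score, and anything scoring below the threshold is dropped.

```ruby
# Hypothetical per-DataItem prediction scores, and the two assumptions
# under which the metrics on this object are computed.
confidence_threshold = 0.5
max_predictions = 2
scores = [0.9, 0.4, 0.7, 0.6]

# Keep at most max_predictions (highest scores first), and require each
# kept prediction to meet the confidence threshold.
kept = scores.sort.reverse
             .first(max_predictions)
             .select { |s| s >= confidence_threshold }
# => [0.9, 0.7]
```

The "At1" variants of the metrics correspond to the special case max_predictions = 1: only the single highest-scoring label per DataItem is considered, and only if it clears the threshold.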
#precision ⇒ Float
Precision for the given confidence threshold.
Corresponds to the JSON property precision
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25428

def precision
  @precision
end
#precision_at1 ⇒ Float
The precision when only considering the label that has the highest prediction
score and not below the confidence threshold for each DataItem.
Corresponds to the JSON property precisionAt1
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25434

def precision_at1
  @precision_at1
end
#recall ⇒ Float
Recall (True Positive Rate) for the given confidence threshold.
Corresponds to the JSON property recall
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25439

def recall
  @recall
end
#recall_at1 ⇒ Float
The Recall (True Positive Rate) when only considering the label that has the
highest prediction score and not below the confidence threshold for each
DataItem.
Corresponds to the JSON property recallAt1
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25446

def recall_at1
  @recall_at1
end
#true_negative_count ⇒ Fixnum
The number of labels that the Model did not create and that, had they been
created, would not have matched a ground truth label.
Corresponds to the JSON property trueNegativeCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25452

def true_negative_count
  @true_negative_count
end
#true_positive_count ⇒ Fixnum
The number of Model created labels that match a ground truth label.
Corresponds to the JSON property truePositiveCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25457

def true_positive_count
  @true_positive_count
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25464

def update!(**args)
  @confidence_threshold = args[:confidence_threshold] if args.key?(:confidence_threshold)
  @confusion_matrix = args[:confusion_matrix] if args.key?(:confusion_matrix)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @f1_score_at1 = args[:f1_score_at1] if args.key?(:f1_score_at1)
  @f1_score_macro = args[:f1_score_macro] if args.key?(:f1_score_macro)
  @f1_score_micro = args[:f1_score_micro] if args.key?(:f1_score_micro)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @false_positive_rate = args[:false_positive_rate] if args.key?(:false_positive_rate)
  @false_positive_rate_at1 = args[:false_positive_rate_at1] if args.key?(:false_positive_rate_at1)
  @max_predictions = args[:max_predictions] if args.key?(:max_predictions)
  @precision = args[:precision] if args.key?(:precision)
  @precision_at1 = args[:precision_at1] if args.key?(:precision_at1)
  @recall = args[:recall] if args.key?(:recall)
  @recall_at1 = args[:recall_at1] if args.key?(:recall_at1)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end
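The constructor and update! semantics can be shown with a self-contained sketch (a stand-in class, not the gem itself) of the Core::Hashable-style pattern above: only keys present in args are copied, so an update with a subset of keys leaves the other attributes untouched.

```ruby
# Minimal stand-in mimicking this class's initialize/update! pattern.
class ConfidenceMetricsSketch
  attr_reader :precision, :recall

  def initialize(**args)
    update!(**args)
  end

  # Copy only the keys actually passed; other attributes are preserved.
  def update!(**args)
    @precision = args[:precision] if args.key?(:precision)
    @recall = args[:recall] if args.key?(:recall)
  end
end

m = ConfidenceMetricsSketch.new(precision: 0.9)
m.update!(recall: 0.8)  # partial update; precision stays 0.9
```

This is why `args.key?` is used rather than a truthiness check: an explicitly passed nil would still overwrite the attribute, while an omitted key never does.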