Class: Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/aiplatform_v1/classes.rb,
lib/google/apis/aiplatform_v1/representations.rb

Overview

Metrics for general pairwise text generation evaluation results.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Returns a new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics.



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22486

def initialize(**args)
   update!(**args)
end
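As a quick usage sketch, the snake_case attribute names documented below can be passed as keyword arguments to the constructor; the values here are hypothetical, not output from a real evaluation:

require 'google/apis/aiplatform_v1'

# Shorthand for the (long) class name; hypothetical metric values for illustration only.
klass = Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics
metrics = klass.new(model_win_rate: 0.62, baseline_model_win_rate: 0.38, accuracy: 0.81)
metrics.model_win_rate # => 0.62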

Instance Attribute Details

#accuracy ⇒ Float

Fraction of cases where the autorater agreed with the human raters. Corresponds to the JSON property accuracy

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22414

def accuracy
  @accuracy
end
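Assuming accuracy is the plain agreement rate between the autorater and the human raters, it follows from the four confusion-matrix counts documented below; a minimal sketch with made-up counts:

tp, tn, fp, fn = 40, 35, 15, 10                  # hypothetical true/false positive/negative counts
accuracy = (tp + tn).to_f / (tp + tn + fp + fn)  # => 0.75, cases where autorater and humans agreed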

#baseline_model_win_rate ⇒ Float

Percentage of time the autorater decided the baseline model had the better response. Corresponds to the JSON property baselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22420

def baseline_model_win_rate
  @baseline_model_win_rate
end

#cohens_kappa ⇒ Float

A measurement of agreement between the autorater and human raters that takes the likelihood of random agreement into account. Corresponds to the JSON property cohensKappa

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22426

def cohens_kappa
  @cohens_kappa
end
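A common way to compute Cohen's kappa from the 2x2 agreement table formed by the counts below is sketched here; that the service uses exactly this formulation is an assumption:

tp, tn, fp, fn = 40, 35, 15, 10                                       # hypothetical counts
n = (tp + tn + fp + fn).to_f
observed = (tp + tn) / n                                              # observed agreement rate
# Expected agreement if autorater and human labels were independent.
expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
cohens_kappa = (observed - expected) / (1 - expected)                 # => 0.5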

#f1_score ⇒ Float

Harmonic mean of precision and recall. Corresponds to the JSON property f1Score

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22431

def f1_score
  @f1_score
end
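The harmonic mean referenced above is the standard F1 definition; a minimal sketch with hypothetical precision and recall values:

precision, recall = 0.73, 0.8                               # hypothetical values
f1_score = 2 * precision * recall / (precision + recall)   # => ~0.763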

#false_negative_count ⇒ Fixnum

Number of examples where the autorater chose the baseline model, but humans preferred the model. Corresponds to the JSON property falseNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22437

def false_negative_count
  @false_negative_count
end

#false_positive_count ⇒ Fixnum

Number of examples where the autorater chose the model, but humans preferred the baseline model. Corresponds to the JSON property falsePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22443

def false_positive_count
  @false_positive_count
end

#human_preference_baseline_model_win_rate ⇒ Float

Percentage of time humans decided the baseline model had the better response. Corresponds to the JSON property humanPreferenceBaselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22448

def human_preference_baseline_model_win_rate
  @human_preference_baseline_model_win_rate
end

#human_preference_model_win_rate ⇒ Float

Percentage of time humans decided the model had the better response. Corresponds to the JSON property humanPreferenceModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22453

def human_preference_model_win_rate
  @human_preference_model_win_rate
end

#model_win_rate ⇒ Float

Percentage of time the autorater decided the model had the better response. Corresponds to the JSON property modelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22458

def model_win_rate
  @model_win_rate
end
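Assuming ties are not counted, the autorater and human win rates for the model line up with the same counts: the autorater preferred the model in the true-positive and false-positive cases, and humans preferred it in the true-positive and false-negative cases. A sketch under that assumption:

tp, tn, fp, fn = 40, 35, 15, 10                        # hypothetical counts
n = (tp + tn + fp + fn).to_f
model_win_rate = (tp + fp) / n                         # autorater chose the model
human_preference_model_win_rate = (tp + fn) / n        # humans chose the model
baseline_model_win_rate = 1 - model_win_rate           # assumes no ties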

#precision ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the autorater thought the model had a better response. True positives divided by all positives. Corresponds to the JSON property precision

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22465

def precision
  @precision
end

#recall ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the humans thought the model had a better response. Corresponds to the JSON property recall

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22472

def recall
  @recall
end
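Under the count definitions above, precision and recall reduce to the usual formulas; the same hypothetical counts again:

tp, fp, fn = 40, 15, 10          # hypothetical counts
precision = tp.to_f / (tp + fp)  # => ~0.727, of all autorater "model wins", how many humans confirmed
recall    = tp.to_f / (tp + fn)  # => 0.8, of all human "model wins", how many the autorater also chose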

#true_negative_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the worse response. Corresponds to the JSON property trueNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22478

def true_negative_count
  @true_negative_count
end

#true_positive_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the better response. Corresponds to the JSON property truePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22484

def true_positive_count
  @true_positive_count
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22491

def update!(**args)
  @accuracy = args[:accuracy] if args.key?(:accuracy)
  @baseline_model_win_rate = args[:baseline_model_win_rate] if args.key?(:baseline_model_win_rate)
  @cohens_kappa = args[:cohens_kappa] if args.key?(:cohens_kappa)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @human_preference_baseline_model_win_rate = args[:human_preference_baseline_model_win_rate] if args.key?(:human_preference_baseline_model_win_rate)
  @human_preference_model_win_rate = args[:human_preference_model_win_rate] if args.key?(:human_preference_model_win_rate)
  @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  @precision = args[:precision] if args.key?(:precision)
  @recall = args[:recall] if args.key?(:recall)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end
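Because update! only overwrites keys that are present in args, it can patch individual metrics on an existing instance; reusing the hypothetical metrics object from the constructor example above:

metrics.update!(model_win_rate: 0.7)   # other attributes are left unchanged
metrics.model_win_rate                 # => 0.7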