Class: Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/aiplatform_v1/classes.rb,
lib/google/apis/aiplatform_v1/representations.rb

Overview

Metrics for general pairwise text generation evaluation results.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Returns a new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics.



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23084

def initialize(**args)
   update!(**args)
end
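The constructor simply forwards its keyword arguments to `update!`, which copies only the keys actually passed. A minimal sketch of that pattern, using a hypothetical class name and just two of this class's attributes (not the google-apis-aiplatform_v1 gem itself):

```ruby
# Sketch of the keyword-splat initializer pattern used by this class:
# every metric arrives as a keyword argument, and update! assigns only
# the keys that were actually provided.
class PairwiseMetricsSketch
  attr_accessor :accuracy, :model_win_rate

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @accuracy = args[:accuracy] if args.key?(:accuracy)
    @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  end
end

metrics = PairwiseMetricsSketch.new(accuracy: 0.82, model_win_rate: 0.55)
```

Because assignment is guarded by `args.key?`, calling `update!` with a subset of keys leaves the other attributes untouched.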

Instance Attribute Details

#accuracy ⇒ Float

Fraction of cases where the autorater agreed with the human raters. Corresponds to the JSON property accuracy

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23012

def accuracy
  @accuracy
end

#baseline_model_win_rate ⇒ Float

Percentage of time the autorater decided the baseline model had the better response. Corresponds to the JSON property baselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23018

def baseline_model_win_rate
  @baseline_model_win_rate
end

#cohens_kappa ⇒ Float

A measurement of agreement between the autorater and human raters that takes the likelihood of random agreement into account. Corresponds to the JSON property cohensKappa

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23024

def cohens_kappa
  @cohens_kappa
end
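Cohen's kappa can be reconstructed from the four agreement counts this class exposes (`true_positive_count`, `false_positive_count`, `false_negative_count`, `true_negative_count`). The service computes the metric server-side; the helper below is illustrative arithmetic only, with `tp`/`fp`/`fn`/`tn` mirroring those attributes:

```ruby
# Illustrative only: Cohen's kappa for a 2x2 autorater-vs-human agreement
# table, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e is the agreement expected by chance from the marginals.
def cohens_kappa_from_counts(tp:, fp:, fn:, tn:)
  n = (tp + fp + fn + tn).to_f
  observed = (tp + tn) / n                      # raw agreement rate
  expected = ((tp + fp) * (tp + fn) +           # chance agreement from
              (fn + tn) * (fp + tn)) / (n * n)  # the marginal products
  (observed - expected) / (1 - expected)
end
```

With perfectly balanced marginals (e.g. tp = tn = 40, fp = fn = 10), observed agreement is 0.8, chance agreement is 0.5, and kappa is 0.6.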

#f1_score ⇒ Float

Harmonic mean of precision and recall. Corresponds to the JSON property f1Score

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23029

def f1_score
  @f1_score
end
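The "harmonic mean of precision and recall" in the `f1_score` description, written out as plain arithmetic (the API returns the value precomputed):

```ruby
# F1 = harmonic mean of precision and recall:
# 2 * p * r / (p + r). Purely illustrative.
def f1(precision, recall)
  2 * precision * recall / (precision + recall)
end
```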

#false_negative_count ⇒ Fixnum

Number of examples where the autorater chose the baseline model, but humans preferred the model. Corresponds to the JSON property falseNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23035

def false_negative_count
  @false_negative_count
end

#false_positive_count ⇒ Fixnum

Number of examples where the autorater chose the model, but humans preferred the baseline model. Corresponds to the JSON property falsePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23041

def false_positive_count
  @false_positive_count
end

#human_preference_baseline_model_win_rate ⇒ Float

Percentage of time humans decided the baseline model had the better response. Corresponds to the JSON property humanPreferenceBaselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23046

def human_preference_baseline_model_win_rate
  @human_preference_baseline_model_win_rate
end

#human_preference_model_win_rate ⇒ Float

Percentage of time humans decided the model had the better response. Corresponds to the JSON property humanPreferenceModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23051

def human_preference_model_win_rate
  @human_preference_model_win_rate
end

#model_win_rate ⇒ Float

Percentage of time the autorater decided the model had the better response. Corresponds to the JSON property modelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23056

def model_win_rate
  @model_win_rate
end

#precision ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response, out of all cases where the autorater thought the model had a better response. True positives divided by all positives. Corresponds to the JSON property precision

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23063

def precision
  @precision
end

#recall ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the humans thought the model had a better response. Corresponds to the JSON property recall

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23070

def recall
  @recall
end
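Both precision and recall follow from the confusion counts exposed on this object (`true_positive_count` etc.): precision divides true positives by everything the autorater marked positive, and recall divides them by everything the humans marked positive. The API returns both precomputed; this helper just spells out the standard definitions:

```ruby
# Illustrative only: precision and recall from the pairwise confusion
# counts. tp = both chose the model, fp = autorater chose the model but
# humans preferred the baseline, fn = humans chose the model but the
# autorater preferred the baseline.
def precision_recall(tp:, fp:, fn:)
  precision = tp.to_f / (tp + fp)  # of the autorater's "model wins", how many humans confirmed
  recall    = tp.to_f / (tp + fn)  # of the humans' "model wins", how many the autorater caught
  [precision, recall]
end
```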

#true_negative_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the worse response. Corresponds to the JSON property trueNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23076

def true_negative_count
  @true_negative_count
end

#true_positive_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the better response. Corresponds to the JSON property truePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23082

def true_positive_count
  @true_positive_count
end

Instance Method Details

#update!(**args) ⇒ Object

Updates properties of this object.



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23089

def update!(**args)
  @accuracy = args[:accuracy] if args.key?(:accuracy)
  @baseline_model_win_rate = args[:baseline_model_win_rate] if args.key?(:baseline_model_win_rate)
  @cohens_kappa = args[:cohens_kappa] if args.key?(:cohens_kappa)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @human_preference_baseline_model_win_rate = args[:human_preference_baseline_model_win_rate] if args.key?(:human_preference_baseline_model_win_rate)
  @human_preference_model_win_rate = args[:human_preference_model_win_rate] if args.key?(:human_preference_model_win_rate)
  @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  @precision = args[:precision] if args.key?(:precision)
  @recall = args[:recall] if args.key?(:recall)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end