Class: Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/aiplatform_v1beta1/classes.rb
  - lib/google/apis/aiplatform_v1beta1/representations.rb
Overview
Metrics for general pairwise text generation evaluation results.
Instance Attribute Summary
-
#accuracy ⇒ Float
Fraction of cases where the autorater agreed with the human raters.
-
#baseline_model_win_rate ⇒ Float
Percentage of time the autorater decided the baseline model had the better response.
-
#cohens_kappa ⇒ Float
A measurement of agreement between the autorater and human raters that takes the likelihood of random agreement into account.
-
#f1_score ⇒ Float
Harmonic mean of precision and recall.
-
#false_negative_count ⇒ Fixnum
Number of examples where the autorater chose the baseline model, but humans preferred the model.
-
#false_positive_count ⇒ Fixnum
Number of examples where the autorater chose the model, but humans preferred the baseline model.
-
#human_preference_baseline_model_win_rate ⇒ Float
Percentage of time humans decided the baseline model had the better response.
-
#human_preference_model_win_rate ⇒ Float
Percentage of time humans decided the model had the better response.
-
#model_win_rate ⇒ Float
Percentage of time the autorater decided the model had the better response.
-
#precision ⇒ Float
Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the autorater thought the model had a better response.
-
#recall ⇒ Float
Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the humans thought the model had a better response.
-
#true_negative_count ⇒ Fixnum
Number of examples where both the autorater and humans decided that the model had the worse response.
-
#true_positive_count ⇒ Fixnum
Number of examples where both the autorater and humans decided that the model had the better response.
Instance Method Summary
-
#initialize(**args) ⇒ GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics
constructor
A new instance of GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics
Returns a new instance of GoogleCloudAiplatformV1beta1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics.
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25951

def initialize(**args)
  update!(**args)
end
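The constructor simply forwards its keyword arguments to update!, so only the properties you pass are assigned. A minimal sketch of that keyword-splat pattern, using a hypothetical stand-in class rather than the real gem class:

```ruby
# Stand-in illustrating the initialize/update! pattern above.
# PairwiseMetricsSketch and its two attributes are hypothetical names,
# not part of the google-apis-aiplatform_v1beta1 gem.
class PairwiseMetricsSketch
  attr_accessor :accuracy, :model_win_rate

  def initialize(**args)
    update!(**args)
  end

  # Assigns only the properties present in args, mirroring update! below.
  def update!(**args)
    @accuracy = args[:accuracy] if args.key?(:accuracy)
    @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  end
end

m = PairwiseMetricsSketch.new(accuracy: 0.82)
m.accuracy        # => 0.82
m.model_win_rate  # => nil -- not supplied, so never assigned
```

Because assignment is guarded by args.key?, omitted properties stay nil rather than being overwritten, which is why update! can be called repeatedly to patch individual fields.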
Instance Attribute Details
#accuracy ⇒ Float
Fraction of cases where the autorater agreed with the human raters.
Corresponds to the JSON property accuracy
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25879

def accuracy
  @accuracy
end
#baseline_model_win_rate ⇒ Float
Percentage of time the autorater decided the baseline model had the better
response.
Corresponds to the JSON property baselineModelWinRate
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25885

def baseline_model_win_rate
  @baseline_model_win_rate
end
#cohens_kappa ⇒ Float
A measurement of agreement between the autorater and human raters that takes
the likelihood of random agreement into account.
Corresponds to the JSON property cohensKappa
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25891

def cohens_kappa
  @cohens_kappa
end
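Cohen's kappa can be reconstructed from the four agreement counts on this class. The sketch below uses the standard kappa formula (observed agreement corrected for chance agreement); the helper name and the formula are assumptions, not code from the gem:

```ruby
# Cohen's kappa from the pairwise confusion counts (standard formula, assumed):
#   tp, tn -- autorater and humans agree (model better / baseline better)
#   fp, fn -- autorater and humans disagree
def cohens_kappa(tp:, fp:, fn:, tn:)
  n = (tp + fp + fn + tn).to_f
  observed = (tp + tn) / n
  # Chance agreement: both raters pick the model, plus both pick the baseline.
  expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
  (observed - expected) / (1 - expected)
end

cohens_kappa(tp: 40, fp: 10, fn: 10, tn: 40)  # => 0.6
```

A kappa of 0 means the autorater agrees with humans no more often than chance would predict, which is why it is a stricter signal than raw accuracy.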
#f1_score ⇒ Float
Harmonic mean of precision and recall.
Corresponds to the JSON property f1Score
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25896

def f1_score
  @f1_score
end
#false_negative_count ⇒ Fixnum
Number of examples where the autorater chose the baseline model, but humans
preferred the model.
Corresponds to the JSON property falseNegativeCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25902

def false_negative_count
  @false_negative_count
end
#false_positive_count ⇒ Fixnum
Number of examples where the autorater chose the model, but humans preferred
the baseline model.
Corresponds to the JSON property falsePositiveCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25908

def false_positive_count
  @false_positive_count
end
#human_preference_baseline_model_win_rate ⇒ Float
Percentage of time humans decided the baseline model had the better response.
Corresponds to the JSON property humanPreferenceBaselineModelWinRate
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25913

def human_preference_baseline_model_win_rate
  @human_preference_baseline_model_win_rate
end
#human_preference_model_win_rate ⇒ Float
Percentage of time humans decided the model had the better response.
Corresponds to the JSON property humanPreferenceModelWinRate
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25918

def human_preference_model_win_rate
  @human_preference_model_win_rate
end
#model_win_rate ⇒ Float
Percentage of time the autorater decided the model had the better response.
Corresponds to the JSON property modelWinRate
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25923

def model_win_rate
  @model_win_rate
end
#precision ⇒ Float
Fraction of cases where the autorater and humans thought the model had a
better response out of all cases where the autorater thought the model had a
better response: true positives divided by all autorater positives.
Corresponds to the JSON property precision
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25930

def precision
  @precision
end
#recall ⇒ Float
Fraction of cases where the autorater and humans thought the model had a
better response out of all cases where the humans thought the model had a
better response.
Corresponds to the JSON property recall
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25937

def recall
  @recall
end
#true_negative_count ⇒ Fixnum
Number of examples where both the autorater and humans decided that the model
had the worse response.
Corresponds to the JSON property trueNegativeCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25943

def true_negative_count
  @true_negative_count
end
#true_positive_count ⇒ Fixnum
Number of examples where both the autorater and humans decided that the model
had the better response.
Corresponds to the JSON property truePositiveCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25949

def true_positive_count
  @true_positive_count
end
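The precision, recall, and f1_score attributes are derived from the count attributes above. A short sketch of that relationship, using the standard definitions (the helper name and sample counts are hypothetical, not gem code):

```ruby
# Derives precision, recall, and their harmonic mean (f1) from the
# pairwise counts documented above (standard definitions, assumed):
#   tp -- autorater and humans both preferred the model
#   fp -- autorater preferred the model, humans preferred the baseline
#   fn -- autorater preferred the baseline, humans preferred the model
def derived_metrics(tp:, fp:, fn:)
  precision = tp.to_f / (tp + fp)  # denominator: all autorater positives
  recall    = tp.to_f / (tp + fn)  # denominator: all human positives
  f1 = 2 * precision * recall / (precision + recall)
  { precision: precision, recall: recall, f1_score: f1 }
end

derived_metrics(tp: 30, fp: 10, fn: 20)
# precision 0.75, recall 0.6, f1_score roughly 0.667
```

Note that true_negative_count does not enter precision or recall; it only affects accuracy and cohens_kappa.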
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.

# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 25956

def update!(**args)
  @accuracy = args[:accuracy] if args.key?(:accuracy)
  @baseline_model_win_rate = args[:baseline_model_win_rate] if args.key?(:baseline_model_win_rate)
  @cohens_kappa = args[:cohens_kappa] if args.key?(:cohens_kappa)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @human_preference_baseline_model_win_rate = args[:human_preference_baseline_model_win_rate] if args.key?(:human_preference_baseline_model_win_rate)
  @human_preference_model_win_rate = args[:human_preference_model_win_rate] if args.key?(:human_preference_model_win_rate)
  @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  @precision = args[:precision] if args.key?(:precision)
  @recall = args[:recall] if args.key?(:recall)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end