Class: Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics
- Inherits: Object
- Extended by: Protobuf::MessageExts::ClassMethods
- Includes: Protobuf::MessageExts
- Defined in: proto_docs/google/cloud/automl/v1beta1/classification.rb
Overview
Model evaluation metrics for classification problems. Note: For Video Classification, these metrics only describe the quality of the Video Classification predictions of "segment_classification" type.
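This type is a plain protobuf message, so it can be constructed and inspected like any other. A minimal sketch, with made-up field values (in practice the message arrives populated, output only, inside a ModelEvaluation):

require "google/cloud/automl/v1beta1"

# Illustrative values only; real metrics are produced by the service.
metrics = Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics.new(
  au_prc:             0.91,
  au_roc:             0.95,
  log_loss:           0.32,
  annotation_spec_id: ["1234567890", "2345678901"]
)

puts metrics.au_prc   # => 0.91
puts metrics.to_json  # serializes like any protobuf message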
Defined Under Namespace
Classes: ConfidenceMetricsEntry, ConfusionMatrix
Instance Attribute Summary
- #annotation_spec_id ⇒ ::Array<::String> (Output only.)
- #au_prc ⇒ ::Float (Output only.)
- #au_roc ⇒ ::Float (Output only.)
- #base_au_prc ⇒ ::Float (Deprecated. This field is deprecated and may be removed in the next major version update.)
- #confidence_metrics_entry ⇒ ::Array<::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry> (Output only.)
- #confusion_matrix ⇒ ::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfusionMatrix (Output only.)
- #log_loss ⇒ ::Float (Output only.)
Instance Attribute Details
#annotation_spec_id ⇒ ::Array<::String>
Returns Output only. The annotation spec IDs used for this evaluation.
#au_prc ⇒ ::Float
Returns Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.
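The curve behind this value is built from the confidence_metrics_entry points documented below. For intuition, a rough trapezoidal-rule sketch of such an area computation (illustrative only; the service computes au_prc itself, with micro-averaging across labels):

# Approximate the area under a curve given as [x, y] points, here
# [recall, precision] pairs. Illustrative data, not API output.
def area_under_curve(points)
  points.sort_by(&:first).each_cons(2).sum do |(x0, y0), (x1, y1)|
    (x1 - x0) * (y0 + y1) / 2.0
  end
end

puts area_under_curve([[0.0, 1.0], [0.5, 0.9], [1.0, 0.6]])  # => 0.85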
#au_roc ⇒ ::Float
Returns Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.
#base_au_prc ⇒ ::Float
This field is deprecated and may be removed in the next major version update.
Returns Output only. The Area Under Precision-Recall Curve metric based on priors. Micro-averaged for the overall evaluation.
#confidence_metrics_entry ⇒ ::Array<::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry>
Returns Output only. Metrics for each confidence_threshold in 0.00, 0.05, 0.10, ..., 0.95, 0.96, 0.97, 0.98, 0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics, are derived from them. Confidence metrics entries may also be supplied for additional values of position_threshold, but no aggregated metrics are computed from those.
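A common use of these entries is threshold selection. A hedged sketch that keeps only the standard aggregated curve (position_threshold equal to the INT32 maximum) and picks the confidence threshold with the best F1 score; `metrics` is assumed to be a populated ClassificationEvaluationMetrics:

INT32_MAX_VALUE = 2_147_483_647

# Restrict to the standard curve entries, then maximize F1.
curve = metrics.confidence_metrics_entry.select do |entry|
  entry.position_threshold == INT32_MAX_VALUE
end
best = curve.max_by(&:f1_score)

puts format("threshold %.2f: precision=%.3f recall=%.3f f1=%.3f",
            best.confidence_threshold, best.precision, best.recall,
            best.f1_score)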
#confusion_matrix ⇒ ::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfusionMatrix
Returns Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for per-label evaluation.
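Given the row/column semantics spelled out in the source below (`row[i].example_count[j]` counts examples with ground truth `annotation_spec_id[i]` predicted as `annotation_spec_id[j]`), a small sketch that prints the matrix; `metrics` is again assumed populated:

cm = metrics.confusion_matrix

# Prefer IDs as labels; fall back to display names when the ID list
# is not populated (see the field descriptions in the source below).
labels = cm.annotation_spec_id.empty? ? cm.display_name.to_a : cm.annotation_spec_id.to_a

puts "ground truth \\ predicted: #{labels.join(', ')}"
cm.row.each_with_index do |row, i|
  puts "#{labels[i]}: #{row.example_count.to_a.join(' ')}"
end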
#log_loss ⇒ ::Float
Returns Output only. The Log Loss metric.
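For intuition about what this value measures, a reference computation of multiclass log loss (illustrative data; the service computes the metric itself):

# truths[i] is the index of example i's true label; probs[i][k] is the
# predicted probability for class k. Probabilities are clamped away from
# zero so the logarithm stays finite.
def log_loss(truths, probs, eps: 1e-15)
  n = truths.size.to_f
  -truths.each_with_index.sum { |t, i| Math.log(probs[i][t].clamp(eps, 1.0)) } / n
end

puts log_loss([0, 1], [[0.9, 0.1], [0.2, 0.8]])  # => ~0.1643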
Source, proto_docs/google/cloud/automl/v1beta1/classification.rb lines 113-219 (shown once here; it applies to all of the attributes above):

# File 'proto_docs/google/cloud/automl/v1beta1/classification.rb', line 113

class ClassificationEvaluationMetrics
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

  # Metrics for a single confidence threshold.
  # @!attribute [rw] confidence_threshold
  #   @return [::Float]
  #     Output only. Metrics are computed with an assumption that the model
  #     never returns predictions with score lower than this value.
  # @!attribute [rw] position_threshold
  #   @return [::Integer]
  #     Output only. Metrics are computed with an assumption that the model
  #     always returns at most this many predictions (ordered by their score,
  #     descendingly), but they all still need to meet the confidence_threshold.
  # @!attribute [rw] recall
  #   @return [::Float]
  #     Output only. Recall (True Positive Rate) for the given confidence
  #     threshold.
  # @!attribute [rw] precision
  #   @return [::Float]
  #     Output only. Precision for the given confidence threshold.
  # @!attribute [rw] false_positive_rate
  #   @return [::Float]
  #     Output only. False Positive Rate for the given confidence threshold.
  # @!attribute [rw] f1_score
  #   @return [::Float]
  #     Output only. The harmonic mean of recall and precision.
  # @!attribute [rw] recall_at1
  #   @return [::Float]
  #     Output only. The Recall (True Positive Rate) when only considering the
  #     label that has the highest prediction score and not below the
  #     confidence threshold for each example.
  # @!attribute [rw] precision_at1
  #   @return [::Float]
  #     Output only. The precision when only considering the label that has
  #     the highest prediction score and not below the confidence threshold
  #     for each example.
  # @!attribute [rw] false_positive_rate_at1
  #   @return [::Float]
  #     Output only. The False Positive Rate when only considering the label
  #     that has the highest prediction score and not below the confidence
  #     threshold for each example.
  # @!attribute [rw] f1_score_at1
  #   @return [::Float]
  #     Output only. The harmonic mean of {recall_at1} and {precision_at1}.
  # @!attribute [rw] true_positive_count
  #   @return [::Integer]
  #     Output only. The number of model created labels that match a ground
  #     truth label.
  # @!attribute [rw] false_positive_count
  #   @return [::Integer]
  #     Output only. The number of model created labels that do not match a
  #     ground truth label.
  # @!attribute [rw] false_negative_count
  #   @return [::Integer]
  #     Output only. The number of ground truth labels that are not matched
  #     by a model created label.
  # @!attribute [rw] true_negative_count
  #   @return [::Integer]
  #     Output only. The number of labels that were not created by the model,
  #     but if they would, they would not match a ground truth label.
  class ConfidenceMetricsEntry
    include ::Google::Protobuf::MessageExts
    extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Confusion matrix of the model running the classification.
  # @!attribute [rw] annotation_spec_id
  #   @return [::Array<::String>]
  #     Output only. IDs of the annotation specs used in the confusion matrix.
  #     For Tables CLASSIFICATION
  #
  #     [prediction_type][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type]
  #     only list of [annotation_spec_display_name-s][] is populated.
  # @!attribute [rw] display_name
  #   @return [::Array<::String>]
  #     Output only. Display name of the annotation specs used in the
  #     confusion matrix, as they were at the moment of the evaluation. For
  #     Tables CLASSIFICATION
  #
  #     [prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type],
  #     distinct values of the target column at the moment of the model
  #     evaluation are populated here.
  # @!attribute [rw] row
  #   @return [::Array<::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfusionMatrix::Row>]
  #     Output only. Rows in the confusion matrix. The number of rows is equal
  #     to the size of `annotation_spec_id`.
  #     `row[i].example_count[j]` is the number of examples that have ground
  #     truth of the `annotation_spec_id[i]` and are predicted as
  #     `annotation_spec_id[j]` by the model being evaluated.
  class ConfusionMatrix
    include ::Google::Protobuf::MessageExts
    extend ::Google::Protobuf::MessageExts::ClassMethods

    # Output only. A row in the confusion matrix.
    # @!attribute [rw] example_count
    #   @return [::Array<::Integer>]
    #     Output only. Value of the specific cell in the confusion matrix.
    #     The number of values each row has (i.e. the length of the row) is
    #     equal to the length of the `annotation_spec_id` field or, if that
    #     one is not populated, length of the {display_name} field.
    class Row
      include ::Google::Protobuf::MessageExts
      extend ::Google::Protobuf::MessageExts::ClassMethods
    end
  end
end