Labelling Quality Test

The Labelling Quality Test highlights data points with a higher probability of labelling errors. By setting a threshold on the provided mistake score metric, you can identify and rectify labelling inaccuracies.

Execute Test:

The following code snippet performs a Labelling Quality Test on a specified dataset to evaluate how accurately it has been labelled.

rules = LQRules()
rules.add(metric="mistake_score", label=["ALL"], metric_threshold=0.72)
edge_case_detection = labelling_quality_test(
    test_session=test_session,
    dataset_name="image_classification_lq_train",
    test_name="Labeling Quality Test",
    type="labelling_consistency",
    output_type="image_classification",
    mistake_score_col_name="MistakeScore",
    embedding_col_name="ImageVectorsM1",
    rules=rules,
)
test_session.add(edge_case_detection)
test_session.run()
  • LQRules(): Initialises the set of rules for the Labelling Quality Test.

  • rules.add(): Adds a rule that defines when a data point fails the labelling quality check.

    • metric: Specifies the metric to evaluate, here "mistake_score", which estimates how likely each data point is to be mislabelled.

    • label: Specifies the label(s) the rule applies to; "ALL" includes every label. (A hypothetical per-label variant is sketched after this list.)

    • metric_threshold: The threshold applied to the metric. Here, data points whose mistake score exceeds 0.72 fail the test.

  • labelling_quality_test(): Sets up the labelling quality test for the image classification model.

    • dataset_name: The name of the dataset for the labelling quality test.

    • test_name: A unique identifier for this test.

    • type: Specifies the type of quality test, here "labelling_consistency", focusing on how consistent the labelling is.

    • output_type: Indicates the type of model output being evaluated, which is "image_classification" in this case.

    • mistake_score_col_name: The column in the dataset that contains the mistake score for each labelled data point.

    • embedding_col_name: The column containing embedding vectors of images, used for analyses that require understanding the semantic space of the images.

    • rules: The set of labelling quality rules defined at the beginning.

  • test_session.add(): Registers the labelling quality test with the session.

  • test_session.run(): Starts the execution of all tests in the session, including your labelling quality test.
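
Rules are not limited to a single catch-all threshold. As a purely hypothetical variant (the class names and the stricter threshold below are illustrative, and the sketch assumes LQRules accepts multiple add() calls, which the snippet above does not demonstrate), you could tighten the check for classes that annotators often confuse:

rules = LQRules()
# Hypothetical: stricter rule for two frequently confused classes
rules.add(metric="mistake_score", label=["cat", "dog"], metric_threshold=0.5)
# Catch-all rule for the remaining labels
rules.add(metric="mistake_score", label=["ALL"], metric_threshold=0.72)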

Analysing Test Results

Understanding Mistake Score

  • Mistake Score Metric: A quantitative measure of how likely each data point in your dataset is to be mislabelled. The sketch below illustrates one common way such a score can be derived.
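
The platform computes the Mistake Score for you, and its exact formula is not documented here. Purely as an illustration, one common approach is one minus the model's predicted probability for the assigned label: low self-confidence in a label suggests a possible mislabel. The sketch below uses made-up probabilities and is not the platform's implementation.

import numpy as np

# Hypothetical softmax outputs for 4 data points over 3 classes
probs = np.array([
    [0.90, 0.05, 0.05],
    [0.20, 0.70, 0.10],
    [0.34, 0.33, 0.33],
    [0.10, 0.15, 0.75],
])
assigned = np.array([0, 0, 1, 2])  # labels given by annotators

# Self-confidence-style score: 1 - P(assigned label); higher = more suspicious
mistake_score = 1.0 - probs[np.arange(len(probs)), assigned]
print(mistake_score)            # [0.1  0.8  0.67 0.25]
print(mistake_score > 0.72)     # only the second point fails a 0.72 threshold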

Test Overview

  • Pie Chart Overview: Shows the proportion of labels that passed or failed based on the Mistake Score threshold.
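
To reproduce this overview outside the dashboard, a minimal matplotlib sketch (assuming you have already exported the pass/fail counts, which are made up here) could look like:

import matplotlib.pyplot as plt

counts = {"Passed": 412, "Failed": 88}   # hypothetical counts from the test results
plt.pie(list(counts.values()), labels=list(counts.keys()), autopct="%1.1f%%")
plt.title("Labelling Quality: pass/fail at Mistake Score threshold 0.72")
plt.show()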

Mistake Score Distribution

  • Bar Graph Visualisation: Displays the class-wise average Mistake Score of failed labels and the number of failed data points per class, as sketched below.
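
A similar sketch for the class-wise view, again with made-up numbers:

import matplotlib.pyplot as plt

classes = ["cat", "dog", "bird"]        # hypothetical class names
avg_scores = [0.81, 0.77, 0.85]         # average Mistake Score of failed points
fail_counts = [30, 41, 17]              # number of failed points per class

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(classes, avg_scores)
ax1.set_title("Avg Mistake Score (failed)")
ax2.bar(classes, fail_counts)
ax2.set_title("Failed points per class")
plt.tight_layout()
plt.show()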

Interpreting Results

  • Passed Data Points: Data points whose Mistake Score is at or below the threshold, indicating the labelling is likely accurate.

  • Failed Data Points: Data points whose Mistake Score exceeds the threshold, suggesting potential labelling inaccuracies. The filter sketch below shows the same split on an exported results table.
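
On an exported results table, the pass/fail split is a one-line filter. The sketch below assumes the results are available as a CSV with the MistakeScore column named in the test configuration; the file name and export step are hypothetical, as this guide does not document an export API.

import pandas as pd

df = pd.read_csv("lq_test_results.csv")   # hypothetical export of the test results
failed = df[df["MistakeScore"] > 0.72]    # potential labelling errors
passed = df[df["MistakeScore"] <= 0.72]
print(f"{len(failed)} of {len(df)} data points need review")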

Visualisation and Assessment

  • Visualising Annotations: Arranges images by descending Mistake Score for label assessment.
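
The same descending ordering is easy to replicate for a manual review queue, reusing the hypothetical DataFrame from the previous sketch (the column names here are illustrative):

# Review the most suspicious labels first
review_queue = failed.sort_values("MistakeScore", ascending=False)
print(review_queue[["image_id", "label", "MistakeScore"]].head(10))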

Image View

  • In-Depth Analysis: Analyse Mistake Scores for each label in an image, with interactive features for annotations and original image viewing.

  • Information Card: Provides details like Mistake Score, threshold, and confidence score for each label.

By following these steps, you can effectively utilise the Labelling Quality Test to identify and address labelling inaccuracies in your datasets, enhancing the overall quality and reliability of your models.
