Labelling Quality Test

The Labelling Quality Test helps identify potential labelling errors in datasets.

The test highlights data points with a higher probability of labelling errors via the mistake score metric. By setting a threshold on this score, you can identify and rectify labelling inaccuracies.
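
For intuition, here is a minimal sketch of the thresholding idea (pandas assumed; the DataFrame and column names are illustrative, not the platform's actual schema):

import pandas as pd

# Hypothetical per-label mistake scores.
df = pd.DataFrame({
    "label_id": [1, 2, 3, 4],
    "mistake_score": [0.12, 0.67, 0.48, 0.91],
})

THRESHOLD = 0.5  # same role as metric_threshold in the test below

# Labels scoring above the threshold are flagged for review.
suspect = df[df["mistake_score"] > THRESHOLD]
print(suspect)  # label_ids 2 and 4 exceed the threshold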

Execute Test:

The following code snippet runs a Labelling Quality Test on a specified dataset to evaluate how accurately it has been labelled.

from raga import *  # RagaAI SDK: provides LQRules, labelling_quality_test, TestSession

# Define the rule: flag any label whose mistake score exceeds 0.5.
rules = LQRules()
rules.add(metric="mistake_score", label=["ALL"], metric_threshold=0.5)

# Configure the labelling quality test against the target dataset.
edge_case_detection = labelling_quality_test(test_session=test_session,
                                             dataset_name="labeling_all_the_instance_mar_18",
                                             test_name="Labeling Quality Test",
                                             type="labelling_quality",
                                             output_type="instance_segmentation",
                                             mistake_score_col_name="MistakeScores",
                                             gt="GT",
                                             rules=rules)

# Register the test with the session and execute it.
test_session.add(edge_case_detection)
test_session.run()

  • LQRules(): Initialises the labelling quality rules.

  • rules.add(): Adds a new rule for assessing labelling quality:

    • metric: The performance metric to evaluate, "mistake_score" in this case.

    • label: Specifies the label(s) these metrics apply to. ["ALL"] means all labels in the dataset.

    • metric_threshold: The threshold for the mistake score, above which the label is considered incorrect.

  • labelling_quality_test(): Prepares the labelling quality test with the following parameters:

    • test_session: The current session linked to your project.

    • dataset_name: The name of the dataset to be evaluated.

    • test_name: A descriptive name for this test run.

    • type: The type of test, "labelling_quality" here, which focuses on how consistently the labelling is done.

    • output_type: The type of output expected, "instance_segmentation" in this context.

    • mistake_score_col_name: The column in the dataset that contains the mistake scores.

    • gt: The column in the dataset that contains the ground-truth annotations.

    • rules: The previously defined rules for the test.

  • test_session.add(): Registers the labelling quality test with the session.

  • test_session.run(): Starts the execution of all tests added to the session, including this labelling quality test.

By following the steps outlined above, you have successfully set up a Labelling Quality Test in RagaAI.

Analysing Test Results

Understanding Mistake Score

  • Mistake Score Metric: A quantitative measure indicating the likelihood of labelling errors in your dataset.

Test Overview

  • Pie Chart Overview: Shows the proportion of labels that passed or failed based on the Mistake Score threshold.

Mistake Score Distribution

  • Bar Graph Visualisation: Displays the average Mistake Score of failed labels for each class, along with the number of failed data points per class.

Interpreting Results

  • Passed Data Points: Labels whose Mistake Score is at or below the threshold, indicating accurate labelling.

  • Failed Data Points: Labels whose Mistake Score exceeds the threshold, suggesting potential labelling inaccuracies; the sketch below shows how this split can be reproduced outside the dashboard.
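
The overview and distribution views above can be approximated offline for your own analysis. A minimal sketch, assuming per-label scores and classes exported to a pandas DataFrame (the schema and column names are hypothetical, not the platform's export format):

import pandas as pd

THRESHOLD = 0.5  # same role as metric_threshold in the rule above

# Hypothetical export of per-label results.
df = pd.DataFrame({
    "class": ["car", "car", "person", "person", "bus"],
    "mistake_score": [0.2, 0.8, 0.6, 0.3, 0.9],
})
df["failed"] = df["mistake_score"] > THRESHOLD

# Overview: proportion of labels that passed vs failed (cf. the pie chart).
print(df["failed"].value_counts(normalize=True))

# Distribution: average Mistake Score and count of failed labels per class
# (cf. the bar graph).
failed = df[df["failed"]]
print(failed.groupby("class")["mistake_score"].agg(["mean", "count"]))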

Visualisation and Assessment

  • Visualising Annotations: Arranges images in descending order of Mistake Score so the most suspect labels can be reviewed first.

Image View

  • In-Depth Analysis: Analyse Mistake Scores for each label in an image, with interactive features for annotations and original image viewing.

  • Information Card: Provides details like Mistake Score, threshold, area percentage, and confidence score for each label.

Note: Mistake Scores are not calculated for annotations covering less than 5% of an image.
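
To check whether an annotation clears this cutoff, a minimal sketch using a hypothetical binary segmentation mask (NumPy assumed; the 5% rule itself is applied by the platform, not by this code):

import numpy as np

def annotation_area_pct(mask: np.ndarray) -> float:
    # Percentage of the image covered by a binary annotation mask.
    return 100.0 * mask.sum() / mask.size

mask = np.zeros((480, 640), dtype=bool)
mask[100:140, 200:260] = True  # a 40 x 60 pixel region

# ~0.78% coverage: below the 5% cutoff, so no Mistake Score would be computed.
print(f"{annotation_area_pct(mask):.2f}%")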

By following these steps, you can effectively utilise the Labelling Quality Test to identify and address labelling inaccuracies in your datasets, enhancing the overall quality and reliability of your models.
