Image Classification
This page provides examples of how RagaAI's Testing Platform can add value to teams building Image Classification models. It is a companion piece to the Product Demo available on the RagaAI Platform.
The Image Classification Project on the sample workspace is an example of how the RagaAI Testing Platform can help with the following tasks -
Data Quality Checks before training a new model
Model Quality Checks to identify performance gaps and perform regression analysis
The RagaAI Testing Platform is designed to add science to the art of detecting AI issues, performing root cause analysis, and providing actionable recommendations. This is done through an automated suite of tests on the platform.
An overview of all tests for the sample project is available here -
1. Failure Mode Analysis
Goal - Identify specific image scenarios where the Image Classification model underperforms, despite overall acceptable performance metrics.
Methodology - RagaAI automatically detects scenarios within the dataset and brings any model vulnerabilities in such scenarios to the fore.
Insight - In this case, we see that the model struggles with specific types of images in a cluster. Analysing the images within the failing cluster can reveal common characteristics that lead to misclassification. This helps understand the model's limitations and potential biases.
Impact - Early identification of failure modes allows for targeted interventions, such as collecting more data for the under-represented cluster to improve model training or fine-tuning the model with specific emphasis on the failing cluster.
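The clustering step above can be sketched as follows. This is a minimal illustration, not the platform's actual algorithm: it assumes image embeddings (here simulated with random vectors, with one shifted group standing in for an under-represented scenario) and a per-image correctness flag, then groups the embeddings and compares each cluster's accuracy against the overall accuracy to surface failing clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for image embeddings (e.g. from a CNN backbone). In practice these
# come from your model; here one group is shifted to mimic a distinct scenario.
embeddings = rng.normal(size=(300, 8))
embeddings[:50] += 8.0

# Hypothetical per-image correctness: the shifted scenario performs much worse.
correct = rng.random(300) > 0.1
correct[:50] = rng.random(50) > 0.6

# Group the dataset into candidate scenarios.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embeddings)

# A cluster whose accuracy falls well below the overall accuracy is a
# candidate failure mode, even if the aggregate metric looks acceptable.
overall = correct.mean()
per_cluster = {c: correct[clusters == c].mean() for c in np.unique(clusters)}
failing = [c for c, acc in per_cluster.items() if acc < overall - 0.1]
```

Inspecting the images inside `failing` clusters is then the manual step described above: looking for shared characteristics that explain the misclassifications.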
For more details, please refer to the detailed Failure Mode analysis documentation.
2. Labelling Quality Test
Goal - Identify mislabelled images within the dataset to ensure high-quality training data for our image classification model.
Methodology - RagaAI analyses each labelled image, calculates a "Mistake Score" based on various factors, and flags images whose score exceeds a user-defined threshold.
Insight - In this case, we see that by analysing the flagged images, you can discover specific inconsistencies in the labelling, such as misidentified objects or missing labels.
Impact - This reduces the risk of model bias caused by inconsistent or inaccurate labelling, and saves time and resources by catching labelling errors before the model is deployed.
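The scoring-and-thresholding flow can be sketched as below. The exact factors behind RagaAI's Mistake Score are not specified here, so this sketch uses one common heuristic as a stand-in: one minus the model's confidence in the assigned label. The softmax outputs and labels are hypothetical example data.

```python
import numpy as np

# Hypothetical softmax outputs for 6 images over 3 classes, plus the labels
# assigned by annotators. In practice these come from a trained model.
probs = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.85, 0.05],
    [0.02, 0.03, 0.95],
    [0.70, 0.20, 0.10],
    [0.05, 0.90, 0.05],   # model strongly disagrees with the label below
    [0.85, 0.10, 0.05],
])
labels = np.array([0, 1, 2, 0, 0, 0])

# Stand-in mistake score: 1 minus the model's confidence in the assigned
# label. A high score suggests the label may be wrong.
mistake_score = 1.0 - probs[np.arange(len(labels)), labels]

# User-defined threshold: images above it are flagged for human review.
threshold = 0.5
flagged = np.where(mistake_score > threshold)[0]
```

Here image 4 is flagged (`mistake_score` of 0.95), matching the workflow described above: flagged images go to a reviewer, who corrects the label or confirms it before training.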
For more details, please refer to the detailed Labelling Quality Test documentation.