Semantic Segmentation
This page provides examples of how RagaAI's Testing Platform can add value to teams building Semantic Segmentation models. It is a companion piece to the Product Demo available on the RagaAI Platform.
The Semantic Segmentation - Cityscapes Project on the sample workspace is an example of how the RagaAI Testing Platform can help with the following tasks -
Data Quality Checks before training a new model
Model Quality Checks to identify performance gaps and perform regression analysis
The RagaAI Testing Platform is designed to add science to the art of detecting AI issues, performing root cause analysis and providing actionable recommendations. This is done as an automated suite of tests on the platform.
An overview of all tests for the sample project is available here -
Data Drift Detection
Goal - Identify scenarios in the field data that are drastically different (out-of-distribution) with respect to the training dataset. The AI model is prone to generating erroneous predictions on such datapoints.
Methodology - RagaAI automatically detects OOD datapoints using embeddings from the RagaAI DNA technology (a simplified sketch of this idea follows the test summary below).
Insight - For this case, we see that the platform correctly identifies data drift for nighttime scenarios, given that the model has only been trained on daytime scenarios.
Impact - This automated test helps users assess whether the data in the production setting has shifted and the model needs to be retrained.
For more details, please refer to the detailed data drift documentation.
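The exact mechanics of RagaAI DNA are not reproduced here; the snippet below is only a minimal sketch of embedding-based OOD flagging, assuming precomputed image embeddings for the training and field sets are available as NumPy arrays. The kNN-distance score, the value of k and the percentile threshold are illustrative choices, not the platform's parameters.

```python
import numpy as np

def pairwise_dist(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Euclidean distance matrix between rows of a and rows of b."""
    sq = (a ** 2).sum(1)[:, None] + (b ** 2).sum(1)[None, :] - 2.0 * a @ b.T
    return np.sqrt(np.clip(sq, 0.0, None))

def knn_score(query: np.ndarray, reference: np.ndarray, k: int = 5) -> np.ndarray:
    """Mean distance from each query embedding to its k nearest reference embeddings."""
    d = np.sort(pairwise_dist(query, reference), axis=1)
    return d[:, :k].mean(axis=1)

def flag_ood(train_emb: np.ndarray, field_emb: np.ndarray,
             k: int = 5, percentile: float = 99.0):
    """Flag field datapoints whose kNN distance to the training embeddings
    exceeds the given percentile of the training set's own kNN distances."""
    # Baseline from the training set itself; drop the zero self-distance column.
    self_d = np.sort(pairwise_dist(train_emb, train_emb), axis=1)[:, 1:k + 1].mean(axis=1)
    threshold = np.percentile(self_d, percentile)
    scores = knn_score(field_emb, train_emb, k=k)
    return scores > threshold, scores, threshold

# Toy usage: daytime-like training embeddings vs. field data containing a
# shifted "nighttime" cluster that should be flagged as drifted.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 64))
field = np.vstack([
    rng.normal(0.0, 1.0, size=(80, 64)),   # in-distribution field images
    rng.normal(6.0, 1.0, size=(20, 64)),   # drifted (nighttime-like) images
])
is_ood, scores, thr = flag_ood(train, field)
print(f"flagged {int(is_ood.sum())}/{len(field)} field images as OOD (threshold={thr:.2f})")
```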
Failure Mode Analysis
Goal - Identify scenarios where the model performs poorly on the test dataset after training or re-training.
Methodology - RagaAI automatically detects scenarios within the dataset and brings any model vulnerabilities in such scenarios to the fore (a simplified, clustering-based sketch follows the test summary below).
Insight - In this case, we see that the model performs poorly on nighttime scenarios even though the aggregate performance is above the threshold.
Impact - This test helps users identify 90% of the vulnerabilities within a model's Operational Design Domain (ODD) early in the model development lifecycle.
For more details, please refer to the detailed failure mode analysis documentation.
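The platform's scenario discovery is automated; as a rough approximation of the idea, the sketch below clusters image embeddings and reports per-cluster mean IoU, assuming scikit-learn is available and per-image mIoU scores have already been computed. The cluster count and the 0.6 threshold are hypothetical values chosen for illustration, not RagaAI defaults.

```python
import numpy as np
from sklearn.cluster import KMeans

def failure_mode_report(embeddings: np.ndarray, per_image_miou: np.ndarray,
                        n_clusters: int = 5, threshold: float = 0.6):
    """Cluster images by embedding and flag clusters whose mean IoU falls
    below `threshold`, even if the aggregate score looks healthy."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    report = []
    for c in range(n_clusters):
        mask = labels == c
        cluster_miou = float(per_image_miou[mask].mean())
        report.append({
            "cluster": c,
            "size": int(mask.sum()),
            "mean_iou": round(cluster_miou, 3),
            "below_threshold": cluster_miou < threshold,
        })
    # Worst-performing scenarios first.
    return sorted(report, key=lambda r: r["mean_iou"])

# Toy usage: a small "nighttime" cluster drags its own score down while the
# aggregate mIoU still looks acceptable.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0, 1, (180, 32)), rng.normal(5, 1, (20, 32))])
miou = np.concatenate([rng.uniform(0.7, 0.9, 180), rng.uniform(0.2, 0.4, 20)])
print(f"aggregate mIoU = {miou.mean():.2f}")
for row in failure_mode_report(emb, miou):
    print(row)
```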
Labelling Quality Test
Goal - Assess and enhance the accuracy of labelled data to improve model performance.
Methodology - RagaAI employs advanced algorithms to detect inconsistencies within annotations and highlight labelling errors (a simplified consistency check is sketched after the test summary below).
Insight - In this case, we see that the road labels are poorly annotated.
Impact - Enhance model training by providing high-quality labelled data, reducing biases and errors.
For more details, please refer to the detailed Labelling Quality Test documentation.
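One common way to surface annotation inconsistencies is to compare each label mask against a reasonably trained model's prediction and flag statistical outliers; the sketch below illustrates that idea for a single class (e.g. road) and is not RagaAI's actual algorithm. The per-class IoU proxy and the z-score threshold are assumptions made for this example.

```python
import numpy as np

def per_class_iou(gt_mask: np.ndarray, pred_mask: np.ndarray, class_id: int) -> float:
    """IoU between the annotation and the model prediction for one class on one image."""
    gt, pred = gt_mask == class_id, pred_mask == class_id
    union = np.logical_or(gt, pred).sum()
    if union == 0:
        return float("nan")  # class absent in both: not informative
    return np.logical_and(gt, pred).sum() / union

def flag_suspect_labels(gt_masks, pred_masks, class_id: int, z_thresh: float = 2.0):
    """Flag images whose annotation/prediction agreement for `class_id` is an
    unusually low outlier relative to the rest of the dataset."""
    ious = np.array([per_class_iou(g, p, class_id) for g, p in zip(gt_masks, pred_masks)])
    valid = ~np.isnan(ious)
    mean, std = ious[valid].mean(), ious[valid].std() + 1e-8
    suspect = valid & (ious < mean - z_thresh * std)
    return np.where(suspect)[0], ious

# Toy usage with random 2-class masks; in practice these would be the
# Cityscapes annotations and the model's predicted segmentation maps.
rng = np.random.default_rng(2)
gts = [rng.integers(0, 2, (64, 128)) for _ in range(50)]
preds = [np.where(rng.random((64, 128)) < 0.95, g, 1 - g) for g in gts]
preds[7] = rng.integers(0, 2, (64, 128))  # simulate one badly annotated image
idx, ious = flag_suspect_labels(gts, preds, class_id=1)
print("suspect image indices:", idx)
```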