Super Resolution
This page provides examples of how RagaAI's Testing Platform can add value to teams building Super Resolution models. It is a companion piece to the Product Demo available on the RagaAI Platform.
The Super Resolution Project on the sample workspace is an example of how the RagaAI Testing Platform can help with the following tasks -
Data Quality Checks before training a new model
Model Quality Checks to identify performance gaps and perform regression analysis
End-to-end pipeline level tests beyond AI models
The RagaAI Testing Platform is designed to add science to the art of detecting AI issues, performing root cause analysis, and providing actionable recommendations. This is delivered as an automated suite of tests on the platform.
An overview of all tests for the sample project is available here -
Active Learning Test:
Goal - Identify datapoints in the dataset that will add maximum value to the model training / re-training process.
Methodology - RagaAI quantifies the information value of each datapoint in the dataset, helping to optimise data diversity while removing similar datapoints.
Insight - For this case, we see that the platform helps select 103 datapoints to annotate out of a dataset of 800 while capturing dataset diversity and avoiding similar datapoints.
Impact - This technique often helps users reduce data annotation and training costs by ~8x, as roughly 95% of a dataset's value can typically be realised by training on 8x fewer samples.
For more details, please refer to the detailed active learning documentation.
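To make the selection idea concrete, here is a minimal sketch, assuming image embeddings from any feature extractor and a greedy k-centre (farthest-point) heuristic as the diversity criterion. The embedding size, budget, and selection strategy are illustrative assumptions, not the platform's internal implementation.

```python
import numpy as np

def select_diverse_samples(features: np.ndarray, budget: int) -> list[int]:
    """Greedy k-centre (farthest-point) selection: repeatedly pick the sample
    farthest from everything chosen so far, so the subset spans the feature space."""
    selected = [0]  # seed with an arbitrary first sample
    # distance of every point to its nearest selected point
    min_dist = np.linalg.norm(features - features[0], axis=1)
    while len(selected) < budget:
        idx = int(np.argmax(min_dist))           # most "novel" remaining point
        selected.append(idx)
        new_dist = np.linalg.norm(features - features[idx], axis=1)
        min_dist = np.minimum(min_dist, new_dist)
    return selected

# Illustrative usage: pick 103 of 800 image embeddings for annotation.
embeddings = np.random.rand(800, 512)            # placeholder features
to_annotate = select_diverse_samples(embeddings, budget=103)
print(f"{len(to_annotate)} samples selected for annotation")
```

A greedy farthest-point pass like this favours coverage of the feature space over density, which is why near-identical datapoints rarely make it into the selected subset.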
Semantic Similarity Test:
Goal - Assess the fidelity of super-resolution reconstructions by measuring the semantic similarity between high-resolution and low-resolution images.
Methodology - Compares semantic representations of high-resolution and low-resolution images using a similarity metric, flagging those with scores below a threshold as containing potentially impactful artefacts.
Insight - The test provides insights into the model's ability to preserve semantic content during upscaling. Images with low similarity scores are likely candidates for further investigation, potentially revealing problematic artefacts or distortions introduced by the model. RagaAI highlights successful and unsuccessful cases, offering a clear picture of the model's strengths and weaknesses in terms of semantic preservation.
Impact - This technique helps improve model performance and reliability by addressing issues that could affect user trust and application effectiveness.
For more details, please refer to the detailed Semantic Similarity Test.
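A minimal sketch of how such a check could be scripted is shown below, assuming precomputed embeddings for each image pair and cosine similarity as the metric. The 0.85 threshold and the helper names are illustrative assumptions, not the platform's API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_low_similarity(hr_embeddings, sr_embeddings, threshold: float = 0.85) -> list[int]:
    """Return indices of image pairs whose semantic similarity falls below the
    threshold, i.e. candidates for closer artefact inspection."""
    return [
        i
        for i, (hr, sr) in enumerate(zip(hr_embeddings, sr_embeddings))
        if cosine_similarity(hr, sr) < threshold
    ]

# Illustrative usage with placeholder embeddings from any feature extractor.
hr_emb = np.random.rand(10, 512)
sr_emb = np.random.rand(10, 512)
print("flagged pairs:", flag_low_similarity(hr_emb, sr_emb))
```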
Outlier Detection Test:
Goal - The primary goal of the test is to detect and flag images in your dataset that deviate significantly from the expected distribution. These outliers can negatively impact model performance, leading to inaccurate predictions or reduced generalisability.
Methodology - The RagaAI platform extracts image features, calculates distance metrics (e.g. Mahalanobis distance) to the centre of the data distribution, and flags outliers.
Insight - In this case, we can see the presence of potentially anomalous data points that may negatively impact model performance if not addressed. The Distance Metric value provides a quantitative measure of each anomaly's deviation from the expected distribution.
Impact - By identifying and analysing outliers, developers can improve model robustness by addressing potential biases or blind spots in the training data. We can also filter out anomalous data points during training, leading to more accurate and generalisable models.
For more details, please refer to the detailed Outlier Detection Test.
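For intuition, the sketch below computes Mahalanobis distances from each sample's feature vector to the dataset centre and flags samples beyond a distance threshold. The threshold value and the use of a pseudo-inverse for the covariance are assumptions for illustration, not the platform's exact procedure.

```python
import numpy as np

def mahalanobis_outliers(features: np.ndarray, threshold: float = 3.0):
    """Flag samples whose Mahalanobis distance to the dataset centre exceeds
    the threshold. Returns (outlier indices, per-sample distances)."""
    mean = features.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(features, rowvar=False))  # pseudo-inverse for numerical stability
    diffs = features - mean
    # squared Mahalanobis distance for every sample at once
    d2 = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)
    distances = np.sqrt(np.clip(d2, 0.0, None))
    return np.where(distances > threshold)[0], distances

# Illustrative usage on placeholder image features.
features = np.random.randn(800, 64)
outliers, dists = mahalanobis_outliers(features, threshold=3.0)
print(f"{len(outliers)} potential outliers flagged")
```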
Near Duplicates Test:
Goal - Identify and remove near-duplicate images from your super-resolution training dataset to improve data efficiency and model performance.
Methodology - The platform automatically identifies pairs of near-duplicate images based on their hash similarity.
Insight - Removing near duplicates helps mitigate overfitting, where the model memorises redundant data instead of learning generalisable features. This can lead to improved performance on unseen data.
Impact - Using the Near Duplicates Test reduces training time and resource consumption by eliminating redundant data. It improves model generalisation and robustness by preventing overfitting on near-duplicate examples.
For more details, please refer to the detailed Near Duplicates Detection Test.
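A hash-based duplicate check of this kind can be approximated in a few lines of Python; the sketch below uses a simple average hash and a Hamming-distance cutoff. The hash size and cutoff are illustrative assumptions and will differ from the platform's own hashing.

```python
from itertools import combinations

import numpy as np
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> np.ndarray:
    """Simple perceptual hash: downscale to a small greyscale grid and
    threshold each pixel against the mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def find_near_duplicates(paths, max_hamming: int = 5):
    """Return image pairs whose hashes differ in at most max_hamming bits."""
    hashes = {p: average_hash(p) for p in paths}
    return [
        (a, b)
        for a, b in combinations(paths, 2)
        if int(np.count_nonzero(hashes[a] != hashes[b])) <= max_hamming
    ]

# Illustrative usage over a set of training images (paths are placeholders).
# duplicates = find_near_duplicates(["img_001.png", "img_002.png", "img_003.png"])
```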