Model Comparison Test
The Model Comparison Test results provide insights into the performance of multiple machine learning models across various evaluation metrics. The test enables users to compare the effectiveness and robustness of different models, facilitating informed decision-making in model selection and optimisation.
Execute Test
The following steps configure and run a Model Comparison Test on a specified dataset within the RagaAI environment.
Configure Test Parameters:
Initialise the Model Comparison Test with the model_ab_test() function. Add specific rules for comparison using the rules.add() function with the following parameters (a sketch follows this list):
metric: Specify the metric to be used for comparison, such as "precision_diff_all".
IoU: Set the IoU (Intersection over Union) threshold, if applicable.
_class: Specify the class or label to which the rule applies, using "ALL" for all classes.
threshold: Define the threshold for the metric, indicating the level of difference required to flag a significant change.
conf_threshold: Set the confidence threshold for model predictions, if applicable.
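As an illustration, here is a minimal sketch of the rule configuration. The ModelABTestRules class name and its import path are assumptions about the RagaAI Python SDK, and the parameter values are placeholders; only the rules.add() parameters listed above come from this page.

```python
# Minimal sketch of rule configuration (ModelABTestRules and its import
# path are assumptions; the parameter values below are placeholders).
from raga import ModelABTestRules

rules = ModelABTestRules()
rules.add(
    metric="precision_diff_all",  # metric used for the comparison
    IoU=0.5,                      # Intersection over Union threshold
    _class="ALL",                 # apply the rule to all classes
    threshold=0.2,                # performance difference that flags a significant change
    conf_threshold=0.5,           # confidence threshold for model predictions
)
```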
Provide the following parameters to model_ab_test() (a sketch follows this list):
test_session: Define the test session containing project details and authentication credentials.
dataset_name: Specify the name of the dataset to be used for comparison.
test_name: Name the test run to identify it later.
modelA: Specify the first model to be compared (e.g., "ModelA").
modelB: Specify the second model to be compared (e.g., "ModelB").
type: Specify the type of test, such as "labelled".
gt: Provide the ground truth data against which model inferences will be compared.
rules: Define the rules or metrics to be used for comparison.
aggregation_level: Specify the level of aggregation for comparison, if applicable (e.g., "Weather").
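A minimal sketch of the initialisation step follows. The TestSession constructor arguments are assumptions based on the "project details and authentication credentials" mentioned above, and all names, keys, and values are placeholders; confirm the exact model_ab_test() signature against the SDK reference.

```python
# Minimal sketch of initialising the Model Comparison Test (assumed
# signatures; project, dataset, and credential values are placeholders).
from raga import TestSession, model_ab_test

test_session = TestSession(
    project_name="my_project",     # hypothetical project name
    run_name="model_ab_run_1",     # hypothetical run name
    access_key="RAGA_ACCESS_KEY",  # placeholder credentials
    secret_key="RAGA_SECRET_KEY",
)

model_comparison = model_ab_test(
    test_session,
    dataset_name="my_dataset",      # dataset used for the comparison
    test_name="ModelA-vs-ModelB",   # name to identify this test run
    modelA="ModelA",                # first model under comparison
    modelB="ModelB",                # second model under comparison
    type="labelled",                # compare both models against ground truth
    gt="GT",                        # ground truth identifier (placeholder)
    rules=rules,                    # rules object built in the previous step
    aggregation_level=["Weather"],  # optional metadata to aggregate results by
)
```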
Add Test to Session:
Use the test_session.add() function to register the Model Comparison Test with the test session.
Run Test:
Use the test_session.run() function to start the execution of all tests added to the session, including the Model Comparison Test.
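Combining the last two steps, using the objects created above (test_session.add() is assumed to accept the object returned by model_ab_test(), as implied by the step descriptions):

```python
# Register the configured comparison with the session, then execute
# every test that has been added to it.
test_session.add(model_comparison)
test_session.run()
```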
By following these steps, you can effectively compare the performance of different machine learning models using the Model Comparison Test.
Analysing Test Results
Metadata Configuration
Navigate to the config table: Find the combinations of different metadata values that form the test scenarios.
Identify Underperforming Scenarios: Identify the scenarios where the performance difference between the two models is highest, based on the chosen performance metric.
Visualising Data
Grid View: Access the grid view to see data points within the selected clusters.
Data Filtering: Use this feature to focus on specific subsets of your dataset that meet certain conditions, helping to extract meaningful patterns and trends.
Navigating and Interpreting Results
Directly Look at Problematic Clusters: Users can quickly identify clusters responsible for underperformance and assess their impact on the overall model.
In-Depth Analysis: Dive deeper into specific clusters or data points to understand the root causes of underperformance.
Data Analysis
Switch to Analysis Tab: To get a detailed performance report, go to the Analysis tab.
View Performance Metrics: Examine metrics and detections on a temporal chart.
Confusion Matrix: The class-based confusion matrix in Failure Mode Analysis provides a detailed breakdown of performance for each class.
Practical Tips
Set Realistic Thresholds: Choose thresholds that reflect the expected performance of your model.
Leverage Visual Tools: Make full use of RagaAI’s visualisation capabilities to gain insights that might not be apparent from raw data alone.
By following these steps, users can efficiently leverage the Model Comparison Test to gain a comprehensive understanding of their models' relative performance, identify key areas for improvement, and make data-driven decisions to enhance model accuracy and reliability.