Label Drift
Detect shifts in label distributions over time by comparing a reference dataset against an evaluation dataset, helping ensure annotation consistency in evolving datasets.
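The two rules configured below rely on standard distribution-comparison statistics. As a minimal sketch, independent of the RagaAI SDK and using hypothetical class counts, this is roughly what js_divergence and chi_squared_test measure between a reference and an evaluation label distribution:

import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import chisquare

# Hypothetical per-class label counts in each dataset.
ref_counts = np.array([500, 300, 200])    # reference (e.g. training) labels
eval_counts = np.array([350, 420, 230])   # evaluation (e.g. validation) labels

# Normalize counts to probability distributions.
ref_p = ref_counts / ref_counts.sum()
eval_p = eval_counts / eval_counts.sum()

# SciPy returns the Jensen-Shannon *distance*; squaring yields the divergence.
js_div = jensenshannon(ref_p, eval_p) ** 2
print(f"JS divergence: {js_div:.4f}")

# Chi-squared test: compare observed evaluation counts against the counts
# expected if the evaluation set followed the reference distribution.
expected = ref_p * eval_counts.sum()
stat, p_value = chisquare(eval_counts, f_exp=expected)
print(f"chi-squared statistic: {stat:.2f}, p-value: {p_value:.4f}")

How the platform applies the 0.10 metric_threshold to each metric (raw statistic versus p-value for the chi-squared test) is defined by the SDK; the sketch above only illustrates the underlying statistics.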
Execute Test:
from raga import *  # RagaAI SDK imports: TestSession, LDTRules, label_drift_test

rules = LDTRules()
rules.add(metric="js_divergence", label=["ALL"], metric_threshold=0.10)
rules.add(metric="chi_squared_test", label=["ALL"], metric_threshold=0.10)

run_name = "Label Drift v1"

test_session = TestSession(project_name="Instance Segmentation",
                           run_name=run_name,
                           access_key="YOUR_ACCESS_KEY",    # replace with your project credentials
                           secret_key="YOUR_SECRET_KEY",
                           host="https://backend.platform.raga.ai")
ref_dataset_name = "training_dataset"
eval_dataset_name = "validation_dataset"

distribution_test = label_drift_test(test_session=test_session,
                                     referenceDataset=ref_dataset_name,
                                     evalDataset=eval_dataset_name,
                                     test_name=run_name,
                                     type="label_drift",
                                     output_type="semantic_segmentation",
                                     gt="GT",
                                     rules=rules)
test_session.add(distribution_test)
test_session.run()

Interpreting Test Results for Label Drift
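Once test_session.run() completes, the results appear on the RagaAI platform under the configured project_name and run_name, where they can be explored in the following views: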

Bar Chart Comparison (illustrated by the sketch after this list)
Data Grid Views
Image View
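As an illustration of what the Bar Chart Comparison conveys, here is a minimal matplotlib sketch (not part of the RagaAI SDK, with hypothetical class names and frequencies) that plots per-class label frequencies of the reference and evaluation datasets side by side:

import numpy as np
import matplotlib.pyplot as plt

classes = ["car", "person", "bicycle"]     # hypothetical label classes
ref_freq = np.array([0.50, 0.30, 0.20])    # reference label frequencies
eval_freq = np.array([0.35, 0.42, 0.23])   # evaluation label frequencies

x = np.arange(len(classes))
width = 0.35
plt.bar(x - width / 2, ref_freq, width, label="reference")
plt.bar(x + width / 2, eval_freq, width, label="evaluation")
plt.xticks(x, classes)
plt.ylabel("Label frequency")
plt.title("Label distribution: reference vs. evaluation")
plt.legend()
plt.show()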
