Event Detection
RagaAI Terminology:
Project: In RagaAI Prism, a project is the highest level of abstraction for an AI task/application. It encompasses all related test suites, datasets, and models. A project should be created at the start of a distinct AI evaluation task or use case, typically at the beginning of a new AI development or testing cycle. This abstraction helps organise and manage multiple tests and configurations under a single entity, ensuring clarity and structure as the project progresses.
Run: In RagaAI Prism, a run refers to the execution of a specific test suite on a defined dataset and model configuration. If the same dataset and model configuration are used for multiple evaluations, the run name should remain consistent so that these iterations/tests can be tracked together. A run serves as a record of how the application performs under the given conditions and provides results for comparison and analysis.
Confidence Threshold: The confidence threshold sets a baseline score that a predicted bounding box must exceed to be considered a true positive. Predictions with scores below this threshold are ignored; if such a prediction corresponds to an actual object in the scene, that object is counted as a false negative.
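As a rough illustration of how a confidence threshold filters detections, consider the minimal sketch below. This is not RagaAI Prism's internal implementation; the detection structure and the 0.5 threshold are assumptions for illustration only.

```python
# Minimal sketch: filtering predicted boxes by a confidence threshold.
# The detection format and the 0.5 threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.5

predictions = [
    {"label": "car", "score": 0.92, "box": (10, 10, 120, 80)},
    {"label": "car", "score": 0.31, "box": (200, 40, 260, 90)},    # below threshold
    {"label": "person", "score": 0.74, "box": (300, 20, 340, 110)},
]

# Only predictions at or above the threshold are kept for matching against
# ground truth; discarded predictions that overlap real objects end up
# counted as false negatives.
kept = [p for p in predictions if p["score"] >= CONFIDENCE_THRESHOLD]

for p in kept:
    print(p["label"], p["score"])
```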
Frame IoU Threshold: The Frame IoU threshold is the minimum Intersection over Union (IoU) a predicted bounding box must achieve against a ground truth bounding box to be counted as a true positive detection. IoU is calculated as the ratio of the area of overlap between the predicted and ground truth bounding boxes to the area of their union.
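The IoU calculation itself can be written in a few lines. The sketch below assumes axis-aligned boxes in (x1, y1, x2, y2) format and uses a hypothetical threshold of 0.5; it is a worked example, not RagaAI Prism's code.

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Overlap rectangle (zero width/height if the boxes do not intersect).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

FRAME_IOU_THRESHOLD = 0.5  # assumed example value

predicted = (10, 10, 110, 110)
ground_truth = (20, 20, 120, 120)

score = iou(predicted, ground_truth)
print(f"IoU = {score:.3f}, true positive: {score >= FRAME_IOU_THRESHOLD}")
```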
Metric Threshold: The metric threshold defines the pass or fail criteria for evaluating AI model performance during A/B testing. It specifies the acceptable bounds for the supported metrics; results within the bounds are marked Pass, and results outside them are marked Fail.
Note: the supported metrics are difference count and difference percentage, as illustrated in the sketch below.
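The following sketch shows one way a metric threshold check might be applied during an A/B comparison. The metric names, bound values, and helper function are hypothetical assumptions for illustration, not RagaAI Prism's API.

```python
# Hypothetical sketch of pass/fail evaluation against metric thresholds.
# Metric names and bounds are illustrative assumptions.

def evaluate(metrics: dict, thresholds: dict) -> dict:
    """Mark each metric Pass if its value is within the configured upper bound."""
    results = {}
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is None:
            continue
        results[name] = "Pass" if value <= limit else "Fail"
    return results

# Example: comparing detections from model A vs model B on one dataset.
ab_metrics = {
    "difference_count": 12,        # number of detections that differ
    "difference_percentage": 4.8,  # percentage of detections that differ
}
metric_thresholds = {
    "difference_count": 20,        # assumed acceptable upper bound
    "difference_percentage": 5.0,  # assumed acceptable upper bound
}

print(evaluate(ab_metrics, metric_thresholds))
# -> {'difference_count': 'Pass', 'difference_percentage': 'Pass'}
```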
What happens if you try to create a run before inferences are generated?
If you attempt to create a run before inferences are generated in RagaAI Prism, the job is placed in a queue. The system does not display results until the required inferences are completed, so the run's output becomes available only after the inference process has finished successfully.