Instance segmentation results improvement
If your Intersection over Union, Precision, Recall, and Average Precision values are lower than expected or undefined (marked in the table as a dash “-”), don’t worry! In most cases, there are solutions to improve your results.
What to expect from this page?
- Case 1 - An image or a region of interest contains annotated and labeled instances, but the app prediction doesn’t overlap with them.
- Case 2 - An image or a region of interest contains no annotated and labeled instances, and the app doesn’t predict anything.
- Case 3 - An image or a region of interest contains annotated and labeled instances, but the app doesn’t predict anything (Recall equals 0.00 or is unexpectedly low).
- Case 4 - An image or a region of interest contains no annotated and labeled instances, but the app predicts something (Precision equals 0.00 or is unexpectedly low).
- Related articles
Case 1 - An image or a region of interest contains annotated and labeled instances, but the app prediction doesn’t overlap with them.
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | 0.00 | 0.00 | 0.00 | 0.00 | n |
| Meaning | there is no true-positive (green) predicted instance | there is no true-positive (green) predicted instance; all predicted instances are false-positive (red) | there is no true-positive (green) predicted instance; all annotated instances are false-negative (blue) | there is no true-positive (green) predicted instance, so the AP curve stays at zero | there are n (any number of) predictions in the image(s) or the ROI(s), but none of them correspond to annotations, so all of them are false-positive (red) |
What does this mean?
This is a valid case in which the model is unable to identify the correct instances. It is usually caused by insufficient training data preparation, i.e. incomplete or incorrect labeling and/or annotation.
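As a reminder, IoU relates the overlapping area of a predicted instance and an annotated instance to their combined area; without any overlap it is exactly 0.00, so no prediction can count as a true positive. The following is a minimal illustrative sketch, assuming for illustration only that both instances are available as binary NumPy masks (this is not how IKOSA exposes them):

```python
import numpy as np

def mask_iou(prediction: np.ndarray, annotation: np.ndarray) -> float:
    """Intersection over Union of two binary instance masks."""
    intersection = np.logical_and(prediction, annotation).sum()
    union = np.logical_or(prediction, annotation).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

# Two instances that do not overlap at all -> IoU = 0.0,
# so the predicted instance cannot count as a true positive.
pred = np.zeros((8, 8), dtype=bool); pred[0:3, 0:3] = True
anno = np.zeros((8, 8), dtype=bool); anno[5:8, 5:8] = True
print(mask_iou(pred, anno))  # 0.0
```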
What can you do?
- Check whether all instances in your training and validation images/ROIs were annotated and labeled correctly. Wrong annotations and mislabeling negatively affect the application.
- If this issue occurs on single images/ROIs only, add more annotated and labeled images/ROIs that look similar to the ones with wrong predictions to your training set.
Case 2 - An image or a region of interest contains no annotated and labeled instances, and the app doesn’t predict anything.
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | - | - | - | - | 0 |
| Meaning | without annotated and predicted instances, the IoU value cannot be estimated | without predicted instances, the Precision value cannot be estimated | without annotated instances, the Recall value cannot be estimated | without defined values for Precision and Recall, the AP curve cannot be generated | without predicted instances, the number of false-positive instances is by definition zero |
What does this mean?
This is a valid case where the application does not recognize anything in an image/ROI without annotations.
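The dashes in the table simply mark divisions by zero: Precision = TP / (TP + FP) has no value when nothing was predicted, and Recall = TP / (TP + FN) has no value when nothing was annotated; since Average Precision is derived from the Precision-Recall curve, it is also undefined here. A minimal, purely illustrative sketch of that logic (the function names are only examples, not IKOSA internals):

```python
def precision(tp: int, fp: int):
    """Precision is undefined ("-") when there are no predicted instances."""
    return tp / (tp + fp) if (tp + fp) > 0 else None

def recall(tp: int, fn: int):
    """Recall is undefined ("-") when there are no annotated instances."""
    return tp / (tp + fn) if (tp + fn) > 0 else None

# Case 2: no annotations and no predictions -> both metrics stay undefined.
print(precision(tp=0, fp=0), recall(tp=0, fn=0))  # None None
```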
What can you do?
If this is an unexpected situation:
- Add more annotated and labeled images/ROIs to your validation set.
- Select a manual data split between the training and the validation images/ROIs and increase the number of validation images.
Case 3 - An image or a region of interest contains annotated and labeled instances, but the app doesn’t predict anything (Recall equals 0.00 or is unexpectedly low).
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | 0.00 | - | 0.00 | - | 0 |
| Meaning | without predicted instances, there is no intersecting instance | without predicted instances, the Precision value cannot be estimated | without predicted instances, the number of true-positive (green) instances is zero; all annotated instances are false-negative (blue) | without a defined value for Precision, the AP curve cannot be generated | without predicted instances, the number of false-positive instances is by definition zero |
What does this mean?
This is a valid case where the application cannot recognize the targeted instances.
If the Recall value is greater than 0.00 but still unexpectedly low, your trained model performs poorly at recognizing your annotated instances.
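Before adding data, it can help to break a low Recall down per label to see which labels are actually driving it. The sketch below is purely illustrative and assumes you have already counted matched (true-positive) and missed (false-negative) annotated instances per label yourself; the label names and counts are made up:

```python
# Hypothetical per-label counts of matched and missed annotated instances.
counts = {
    "nucleus": {"tp": 42, "fn": 3},   # well recognized
    "vesicle": {"tp": 2,  "fn": 18},  # poorly recognized -> needs more training data
}

for label, c in counts.items():
    label_recall = c["tp"] / (c["tp"] + c["fn"])
    print(f"{label}: recall = {label_recall:.2f}")
```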
What can you do?
- Add more images/ROIs with annotated and labeled instances of the poorly recognized labels to your training set.
- If the annotations are correct and complete, include new images/ROIs containing annotated and labeled instances in the training set.
- Check whether all instances in your validation images/ROIs were annotated and labeled correctly. If not, annotate and label all missing instances and correct any inaccuracies in your validation set.
Case 4 - An image or a region of interest contains no annotated and labeled instances, but the app predicts something (Precision equals 0.00 or is unexpectedly low).
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | 0.00 | 0.00 | - | - | n |
| Meaning | without annotated instances, there is no intersecting instance | without annotated instances, there is no true-positive (green) predicted instance; all predicted instances are false-positive (red) | without annotated instances, the Recall value cannot be estimated | without a defined value for Recall, the AP curve cannot be generated | there are n (any number of) predictions in the image or the ROI, but none of them correspond to annotations, so all of them are false-positive (red) |
What does this mean?
This is a false-positive prediction by your trained application, meaning it predicts instances that shouldn’t be predicted.
If the Precision value is greater than 0.00 but still unexpectedly low, your trained application seems to predict instances in your validation images that have not been annotated.
This can either be because:
- the app has made false-positive predictions, or
- not all instances are annotated and labeled on your validation images.
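To tell these two causes apart, it helps to look at the false-positive instances directly: a prediction counts as false-positive when its best IoU with any annotation stays below the matching threshold. The sketch below is a simplified, hypothetical illustration of that matching step; the mask format and the 0.5 threshold are assumptions for illustration, not IKOSA internals:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union of two binary instance masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum()) / union if union > 0 else 0.0

def false_positive_predictions(predictions, annotations, iou_threshold=0.5):
    """Return predictions whose best IoU with any annotation stays below the threshold."""
    return [
        pred for pred in predictions
        if max((mask_iou(pred, anno) for anno in annotations), default=0.0) < iou_threshold
    ]

# With no annotations at all (Case 4), every prediction is returned,
# i.e. all n predictions are false-positive.
```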
What can you do?
- Check whether all instances in your validation images/ROIs were annotated and labeled correctly:
  - If not, annotate and label all missing instances and correct any inaccuracies in your validation set.
  - If yes, add background images/ROIs to the training set.
- Add images/ROIs containing instances with a similar appearance to the false-positive predictions to the training set.
Having successfully resolved all frustrating false predictions, you can now use the full potential of your trained applications!
If you still have questions regarding your application training, feel free to send us an email at support@ikosa.ai. Copy-paste your training ID into the subject line of your email.
Related articles
Copyright 2024 KOLAIDO GmbH. IKOSA® is a registered EU trademark.