Instance segmentation results improvement
If your Intersection over Union, Precision, Recall, and Average Precision values are lower than expected or undefined (marked in the table as a dash “-”), don’t worry! In most cases, there are solutions to improve your results.
What to expect from this page?
- Case 1 - An image or a region of interest contains annotated and labeled instances, but the app prediction doesn’t overlap with them.
- Case 2 - An image or a region of interest contains no annotated and labeled instances, and the app doesn’t predict anything.
- Case 3 - An image or a region of interest contains annotated and labeled instances, but the app doesn’t predict anything (Recall equals 0.00 or is unexpectedly low).
- Case 4 - An image or a region of interest contains no annotated and labeled instances, but the app predicts something (Precision equals 0.00 or is unexpectedly low).
- Related articles
Case 1 - An image or a region of interest contains annotated and labeled instances, but the app prediction doesn’t overlap with them.
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | 0.00 | 0.00 | 0.00 | 0.00 | n |
| Meaning | There is no true-positive (green) predicted instance. | There is no true-positive (green) predicted instance; all predicted instances are false positives (red). | There is no true-positive (green) predicted instance; all annotated instances are false negatives (blue). | There is no true-positive (green) predicted instance, so the AP curve stays at zero. | There are n (n - number) predictions in the image(s) or the ROI(s), but none of them correspond to annotations, so all of them are false positives (red). |
What does this mean?
This is a valid case in which the model is unable to identify the correct instances. It could be due to insufficient training data preparation, i.e., incomplete or incorrect labeling and/or annotation.
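To see why all four metrics collapse to zero in this case, consider how instances are typically matched by IoU. The sketch below is illustrative only (it is not the app’s actual evaluation code); the mask shapes, the 0.5 IoU threshold, and all function names are assumptions.

```python
# Minimal sketch of IoU-based instance matching, assuming boolean masks
# and a 0.5 IoU matching threshold (both are illustrative assumptions).
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union of two boolean masks."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else 0.0

def evaluate(pred_masks, gt_masks, iou_threshold=0.5):
    """Greedily match predictions to annotations and count TP/FP/FN."""
    matched_gt = set()
    tp = 0
    for pred in pred_masks:
        for i, gt in enumerate(gt_masks):
            if i not in matched_gt and iou(pred, gt) >= iou_threshold:
                matched_gt.add(i)
                tp += 1
                break
    fp = len(pred_masks) - tp  # predictions with no matching annotation (red)
    fn = len(gt_masks) - tp    # annotations with no matching prediction (blue)
    precision = tp / (tp + fp) if (tp + fp) > 0 else None  # undefined ("-") if nothing was predicted
    recall = tp / (tp + fn) if (tp + fn) > 0 else None     # undefined ("-") if nothing was annotated
    return precision, recall, fp, fn

# Case 1: a prediction exists, but it does not overlap the annotation.
gt = [np.zeros((8, 8), dtype=bool)]
gt[0][0:3, 0:3] = True      # annotated instance in the top-left corner
pred = [np.zeros((8, 8), dtype=bool)]
pred[0][5:8, 5:8] = True    # predicted instance in the bottom-right corner

print(evaluate(pred, gt))   # (0.0, 0.0, 1, 1): Precision 0 %, Recall 0 %, 1 FP, 1 FN
```

Because no prediction reaches the IoU threshold, there are no true positives, so Precision, Recall, and consequently the Average Precision curve all stay at zero, while every prediction is counted as a false positive.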
What can you do?
Check whether all instances in your training and validation images/ROIs were annotated and labeled correctly. Wrong annotations and mislabeling negatively affect the application's performance.
If this issue occurs only on individual images/ROIs, add more annotated and labeled images/ROIs that look similar to the ones with wrong predictions to your training set.