Improve your instance segmentation performance by learning about typical scenarios in which your application fails to deliver the expected outcomes.
We take a look at four special cases in which your instance segmentation application underperforms and offer practical solutions to each issue.
What to expect from this page?
📑 Case 1 - An image or an ROI contains manual annotations, and there is a prediction without any overlap.
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | 0.00 | 0.00 | 0.00 | 0.00 | n |
| Meaning | there is no true-positive (green) predicted area | there is no true-positive (green) predicted instance; all predicted instances are false-positive (red) | there is no true-positive (green) predicted instance; all annotated instances are false-negative (blue) | there is no true-positive (green) predicted instance, so the AP curve stays at zero | there are n predictions in the image(s) or ROI(s), but none of them corresponds to an annotation, so all of them are false-positive (red) |
What does this mean?
This is a valid case in which the model is unable to identify the correct instances, which could be due to insufficient training data preparation, e.g. incomplete or incorrect annotations.
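To see why every metric drops to zero here, consider a minimal Python sketch (using NumPy, with hypothetical binary masks) of the situation: one annotated instance and one prediction that do not overlap.

```python
import numpy as np

def mask_iou(gt: np.ndarray, pred: np.ndarray) -> float:
    """Intersection over Union of two binary instance masks."""
    intersection = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    return float(intersection) / union if union > 0 else 0.0

# Hypothetical 10x10 image: one manual annotation, one prediction elsewhere.
gt = np.zeros((10, 10), dtype=bool)
gt[1:4, 1:4] = True       # annotated instance (ground truth)

pred = np.zeros((10, 10), dtype=bool)
pred[6:9, 6:9] = True     # predicted instance without any overlap

print(mask_iou(gt, pred))  # 0.0 -> below any IoU matching threshold

# With an IoU of 0, the prediction cannot be matched to the annotation:
# it counts as a false positive (red), the annotation as a false
# negative (blue), and Precision, Recall and AP all remain at 0.
```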
What can you do?
Check whether all instances/objects have been annotated and labeled correctly, as mislabeling will confuse the model. Check both the training and the validation images.
If this issue occurs only on individual images or ROI(s), try to add more annotated images or ROI(s) with a similar appearance to your training data.
📑 Case 2 - An image or an ROI contains no manual annotations, and there is no prediction.
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | - | - | - | - | 0 |
| Meaning | without an annotated ground truth area and a predicted area, the IoU value cannot be estimated | without predicted instances, the Precision value cannot be estimated | without annotated ground truth instances, the Recall value cannot be estimated | without defined values for Precision and Recall, no AP curve can be generated | without predicted instances, the number of false-positive instances is by definition zero |
What does this mean?
This is a valid case: the image or ROI contains no annotated instances, the model correctly predicts nothing, and therefore no metrics can be computed.
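The dashes in the table reflect a division by zero: Precision is TP / (TP + FP) and Recall is TP / (TP + FN), and with neither predictions nor annotations both denominators are zero. A minimal sketch, assuming plain instance counts, might guard these cases as follows:

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Return (precision, recall); None marks a value that cannot be estimated."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else None  # no predictions -> undefined
    recall = tp / (tp + fn) if (tp + fn) > 0 else None      # no annotations -> undefined
    return precision, recall

# Case 2: no annotations and no predictions in the image or ROI.
print(precision_recall(tp=0, fp=0, fn=0))  # (None, None) -> shown as "-" above
```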
What can you do?
In practice, you want to find out how well your application performs on images showing actual object instances.
If your validation set does not contain enough images with annotations, add more annotated images to it.
Select a manual split between the training and validation images and increase the number of validation images with annotations.
📑 Case 3 - An image or an ROI contains manual annotations, but there is no prediction (Recall equals 0.00 or is unexpectedly low).
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | 0.00 | - | 0.00 | - | 0 |
| Meaning | without a predicted area, there is no intersecting area | without predicted instances, the Precision value cannot be estimated | without predicted instances, the number of true-positive (green) instances is zero; all annotated instances are false-negative (blue) | without a defined value for Precision, no AP curve can be generated | without predicted instances, the number of false-positive instances is by definition zero |
What does this mean?
This is a valid case, suggesting that your model is not yet able to sufficiently recognize your objects of interest.
If the Recall value is greater than 0.00 but still unexpectedly low, your trained application is missing a substantial share of your annotated instances.
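As a rough illustration (assuming predictions and annotations have already been matched at a fixed IoU threshold), Recall only rewards annotated instances that were actually found, so every missed instance lowers it directly:

```python
def recall(tp: int, fn: int) -> float:
    """Fraction of annotated instances that were matched by a prediction."""
    return tp / (tp + fn)

print(recall(tp=0, fn=5))   # Case 3: 5 annotated instances, none found -> 0.0
print(recall(tp=2, fn=8))   # unexpectedly low: only 2 of 10 instances found -> 0.2
```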
What can you do?
First, check whether the annotations on your validation images are correct and complete (i.e. whether all target instances have been annotated). If they are not, you may also have omitted annotations in your training images. This can cause the low Recall value, as the application will mimic your annotation behavior and omit instances as well.
In addition, you can add more annotated regions with the label of interest to the training set to improve performance.
If the annotations are correct and complete, you can improve your application by adding new annotated images or ROI(s) to the training set. You will get the best improvement by adding images or ROI(s) containing instances with a similar appearance to the regions in the validation images that were not correctly classified by the trained model.
📑 Case 4 - An image or an ROI contains no manual annotations, but there is a prediction (Precision equals 0.00 or is unexpectedly low).
Your results
| | Intersection over Union (IoU) | Precision [%] | Recall [%] | Average Precision (AP) | Number of False Positives (#FP) |
|---|---|---|---|---|---|
| Value | 0.00 | 0.00 | - | - | n |
| Meaning | without an annotated ground truth area, there is no intersecting area | without annotated ground truth instances, there is no true-positive (green) predicted instance; all predicted instances are false-positive (red) | without annotated ground truth instances, the Recall value cannot be estimated | without a defined value for Recall, no AP curve can be generated | there are n predictions in the image or the ROI, but none of them corresponds to a ground truth annotation, so all of them are false-positive (red) |
What does this mean?
This means your trained application has made false-positive predictions.
If the Precision value is greater than 0.00 but still unexpectedly low, your trained model predicts instances in your validation images that have not been annotated. This can be because the application made genuine false-positive predictions, or because the annotations in your validation images do not include all target instances.
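Conversely (again assuming matching at a fixed IoU threshold has already been done), Precision is pulled down by every unmatched prediction, which is why n false positives on an unannotated image or ROI yield a value of 0.00:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of predicted instances that match an annotation."""
    return tp / (tp + fp)

print(precision(tp=0, fp=3))   # Case 4: 3 predictions, no annotations -> 0.0
print(precision(tp=6, fp=4))   # unexpectedly low: 4 of 10 predictions are false positives -> 0.6
```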
What can you do?
Verify whether there really are no instances of the target labels present in the image(s) or ROI(s). Try to add images or ROI(s) with a similar appearance to the ones containing false-positive predictions.
Have a look at the annotations in the validation images where the application has performed a segmentation task and check whether the annotations are complete. If the annotations are correct and complete, you can improve your trained model by adding new images or ROI(s) without any instances to the training set.
You will get the best improvement by adding images or ROI(s) containing regions with a similar appearance to the areas that were falsely detected by the model in the validation images.
Having resolved these false-prediction issues, you can now use your trained applications to their full potential!
If this article has helped you, share it with a colleague!
If you still have questions regarding your application training, feel free to send us an email at support@ikosa.ai and paste your training ID into the subject line.