
Instance segmentation results improvement

If your Intersection over Union, Precision, Recall, and Average Precision values are lower than expected or undefined (marked in the table as a dash “-”), don’t worry! In most cases, there are solutions to improve your results.
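These metrics follow from instance-level matching between predictions and annotations. The sketch below (Python, not the app’s actual evaluation code) shows one common way to compute them: predictions are greedily matched to annotations at an assumed IoU threshold of 0.5, and Precision/Recall are left undefined (the dash in the table) when their denominators are zero.

```python
import numpy as np

def instance_iou(pred_mask, gt_mask):
    """IoU between two boolean instance masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0

def evaluate_instances(pred_masks, gt_masks, iou_threshold=0.5):
    """Greedily match predictions to annotations and derive the metrics.
    A prediction is a true positive (green) if it overlaps an unmatched
    annotation with IoU >= iou_threshold; otherwise it is a false
    positive (red). Unmatched annotations are false negatives (blue)."""
    matched_gt, tp = set(), 0
    for pred in pred_masks:
        best_iou, best_j = 0.0, None
        for j, gt in enumerate(gt_masks):
            if j in matched_gt:
                continue
            iou = instance_iou(pred, gt)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= iou_threshold:
            matched_gt.add(best_j)
            tp += 1
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - len(matched_gt)
    # Undefined (shown as a dash "-" in the table) when the denominator is zero.
    precision = tp / (tp + fp) if tp + fp > 0 else None
    recall = tp / (tp + fn) if tp + fn > 0 else None
    return tp, fp, fn, precision, recall

# Case 1 in miniature: one annotation, one prediction, zero overlap.
gt = np.zeros((8, 8), bool); gt[:3, :3] = True
pred = np.zeros((8, 8), bool); pred[5:, 5:] = True
print(evaluate_instances([pred], [gt]))  # (0, 1, 1, 0.0, 0.0)
```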

What to expect from this page?

Case 1 - An image or a region of interest contains annotated and labeled instances, but the app’s predictions don’t overlap with them.

Your results

  • Intersection over Union (IoU): 0.00
  • Precision [%]: 0.00
  • Recall [%]: 0.00
  • Average Precision (AP): 0.00
  • Number of False Positives (#FP): n

Meaning

  • IoU = 0.00: there is no true-positive (green) predicted instance.
  • Precision = 0.00: there is no true-positive (green) predicted instance; all predicted instances are false-positive (red).
  • Recall = 0.00: there is no true-positive (green) predicted instance; all annotated instances are false-negative (blue).
  • AP = 0.00: there is no true-positive (green) predicted instance, so the AP curve stays at zero.
  • #FP = n: there are n (n - number) predictions in the image(s) or ROI(s), but none of them correspond to annotations, so all of them are false-positive (red).
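To see why AP collapses rather than merely dropping: AP is the area under the precision-recall curve, and that curve only rises when a prediction is a true positive. A simplified step-wise AP (a sketch, not the app’s exact implementation):

```python
def average_precision(is_tp, num_gt):
    """is_tp: booleans for predictions sorted by descending confidence.
    num_gt: number of annotated instances. Simplified step-wise AP."""
    tp = fp = 0
    ap = prev_recall = 0.0
    for hit in is_tp:
        tp += hit
        fp += not hit
        recall = tp / num_gt
        ap += (tp / (tp + fp)) * (recall - prev_recall)
        prev_recall = recall
    return ap

# Case 1: every prediction is a false positive, so recall never moves
# off zero and the AP curve stays at zero.
print(average_precision([False, False, False], num_gt=2))  # 0.0
```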

What does this mean?

This is a valid result: the model is unable to identify any correct instances. It is often due to insufficient training data preparation, i.e., incomplete or incorrect labeling and/or annotation.

What can you do?

  • Check that all instances in your training and validation images/ROIs were annotated and labeled correctly; wrong annotations and mislabeling degrade the app’s predictions. (A sketch of an automated screen for incomplete label masks follows this list.)

  • If this issue occurs only on individual images/ROIs, add more annotated and labeled images/ROIs that look similar to the mispredicted ones to your training set.
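If you want to screen a large training set before retraining, a small script can flag label masks that look incomplete. The helper below is hypothetical: it assumes your annotations can be exported as integer label masks (0 = background, 1..N = instance IDs); adapt it to your own export format.

```python
import numpy as np

def annotation_report(label_masks, min_area=10):
    """Flag images/ROIs whose label masks look suspicious: no instances
    at all, or instances smaller than min_area pixels (often stray
    clicks or incomplete annotations)."""
    issues = {}
    for name, mask in label_masks.items():
        ids, counts = np.unique(mask[mask > 0], return_counts=True)
        problems = []
        if ids.size == 0:
            problems.append("no annotated instances")
        problems += [f"instance {i}: only {c} px"
                     for i, c in zip(ids, counts) if c < min_area]
        if problems:
            issues[name] = problems
    return issues

# Example: an empty mask and one with a 2-pixel instance are both flagged.
masks = {"roi_a": np.zeros((16, 16), int),
         "roi_b": np.pad(np.array([[1, 1]]), ((7, 7), (7, 7)))}
print(annotation_report(masks))
```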

Case 2 - An image or a region of interest contains no annotated and labeled instances, and the app doesn’t predict anything.

Your results