
Improve instance segmentation performance by learning about four special cases where your application fails to deliver the expected outcomes.

We take a look at four special cases where your instance segmentation application is underperforming and offer practical solutions to the occurring issues. If your Intersection over Union, Precision, Recall, and Average Precision values are lower than expected or undefined (marked in the tables as a dash “-”), don’t worry! In most cases, there are solutions to improve your results.

What to expect from this page?

  • Case 1 - The image/ROI contains annotated and labeled instances, but the app prediction doesn’t overlap with them.

  • Case 2 - The image/ROI contains no annotated and labeled instances, and the app doesn’t predict anything.

  • Case 3 - The image/ROI contains annotated and labeled instances, but the app doesn’t predict anything.

  • Case 4 - The image/ROI contains no annotated and labeled instances, but the app predicts something.
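Before diving into the cases, it helps to see where the zeros and dashes in the result tables come from. Precision, Recall, and IoU are ratios of instance or pixel counts, and each becomes undefined when its denominator is zero. The following minimal Python sketch is our own illustration of that arithmetic (the function names are made up for this page; this is not the IKOSA implementation):

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Precision and Recall in percent from instance counts.

    tp: true-positive predicted instances (green)
    fp: false-positive predicted instances (red)
    fn: false-negative annotated instances (blue)
    Returns None where a value is undefined (shown as "-" in the tables).
    """
    precision = 100.0 * tp / (tp + fp) if tp + fp > 0 else None  # undefined without predictions
    recall = 100.0 * tp / (tp + fn) if tp + fn > 0 else None     # undefined without annotations
    return precision, recall

def mask_iou(pred, gt):
    """IoU of two boolean masks: |intersection| / |union|.

    Returns None when both masks are empty (no annotated and no
    predicted pixels), because the union is then empty as well.
    """
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return None
    return float(np.logical_and(pred, gt).sum()) / float(union)
```

The four cases below correspond to different combinations of these counts.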

📑 Case 1 - An image or a region of interest contains annotated and labeled instances, but the app prediction doesn’t overlap with them.

Your results

| Metric | Your result | Meaning |
|---|---|---|
| Intersection over Union (IoU) | 0.00 | There is no true-positive (green) predicted instance. |
| Precision [%] | 0.00 | There is no true-positive (green) predicted instance; all predicted instances are false-positive (red). |
| Recall [%] | 0.00 | There is no true-positive (green) predicted instance; all annotated instances are false-negative (blue). |
| Average Precision (AP) | 0.00 | Without a true-positive (green) predicted instance, the AP curve stays at zero. |
| Number of False Positives (#FP) | n | There are n predictions in the image(s)/ROI(s), but none of them corresponds to an annotation, so all of them are false-positive (red). |

What does this mean?

This is a valid case, where the model is unable to identify correct instances. It could be due to insufficient training data preparation, i.e. incomplete or incorrect labeling and/or annotation.
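As a quick illustration, plugging this case’s counts into the sketch from the introduction (the counts of 5 predictions and 3 annotated instances are hypothetical):

```python
# Case 1: predictions and annotations exist, but nothing overlaps -> TP = 0.
precision, recall = precision_recall(tp=0, fp=5, fn=3)
print(precision, recall)  # 0.0 0.0 -> IoU and AP are 0.00 as well, #FP = 5
```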

What can you do?

  • Check if all instances in your training and validation images/ROIs were annotated and labeled correctly. Wrong annotations and mislabeling negatively affect the application.

  • If this issue occurs on single images/ROIs only, try to add annotated and labeled images/ROIs with a similar appearance to the ones with wrong predictions to your training set.

📑 Case 2 - An image or a region of interest contains no annotated and labeled instances, and the app doesn’t predict anything.

Your results

| Metric | Your result | Meaning |
|---|---|---|
| Intersection over Union (IoU) | - | Without annotated and predicted instances, the IoU value cannot be estimated. |
| Precision [%] | - | Without predicted instances, the Precision value cannot be estimated. |
| Recall [%] | - | Without annotated instances, the Recall value cannot be estimated. |
| Average Precision (AP) | - | Without defined values for Precision and Recall, the AP curve cannot be generated. |
| Number of False Positives (#FP) | 0 | Without predicted instances, the number of false-positive instances is per definition zero. |

What does this mean?

This is a valid case, where the application does not recognize anything in an image/ROI without annotations.
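Plugging this case into the sketch from the introduction shows why every metric is reported as a dash:

```python
# Case 2: no annotations and no predictions -> every denominator is zero.
precision, recall = precision_recall(tp=0, fp=0, fn=0)
print(precision, recall)  # None None -> reported as "-", and #FP = 0
```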

What can you do?

If this is an unexpected situation:

  • Add more annotated and labeled images/ROIs to your validation set.

    • Select a manual data split between the training and the validation images/ROIs and increase the number of validation images.

📑 Case 3 - An image or a region of interest contains annotated and labeled instances, but the app doesn’t predict anything (Recall equals 0.00 or is unexpectedly low).

Your results

| Metric | Your result | Meaning |
|---|---|---|
| Intersection over Union (IoU) | 0.00 | Without predicted instances, there is no intersecting area. |
| Precision [%] | - | Without predicted instances, the Precision value cannot be estimated. |
| Recall [%] | 0.00 | Without predicted instances, the number of true-positive (green) instances is zero; all annotated instances are false-negative (blue). |
| Average Precision (AP) | - | Without a defined value for Precision, the AP curve cannot be generated. |
| Number of False Positives (#FP) | 0 | Without predicted instances, the number of false-positive instances is per definition zero. |

What does this mean?

This is a valid case, where the application is not yet able to sufficiently recognize the targeted instances.

If the Recall value is greater than 0.00 but still unexpectedly low, your trained model performs poorly at recognizing your annotated instances.
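With the sketch from the introduction (assuming a hypothetical 3 annotated instances):

```python
# Case 3: annotated instances exist, but nothing is predicted.
precision, recall = precision_recall(tp=0, fp=0, fn=3)
print(precision, recall)  # None 0.0 -> Precision "-", Recall 0.00, #FP = 0
```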

What can you do?

  • First, check if the annotations on your validation images/ROIs are correct and complete (i.e. whether all target instances have been annotated and labeled). If not, annotate and label all missing areas and correct any inaccuracies in your validation set. Omitted annotations in your training images can likewise cause a low Recall value, as the application mimics your annotation behavior and omits instances.

  • If the annotations are correct and complete, add more images/ROIs containing annotated and labeled areas of the labels that were poorly recognized to your training set. You will get the optimal improvement by adding images/ROIs containing instances with a similar appearance to the ones in the validation images that were not correctly recognized by the application.

📑 Case 4 - An image or a region of interest contains no annotated and labeled instances, but the app predicts something (Precision equals 0.00 or is unexpectedly low).

Your results

| Metric | Your result | Meaning |
|---|---|---|
| Intersection over Union (IoU) | 0.00 | Without annotated instances, there is no intersecting area. |
| Precision [%] | 0.00 | Without annotated instances, there is no true-positive (green) predicted instance; all predicted instances are false-positive (red). |
| Recall [%] | - | Without annotated instances, the Recall value cannot be estimated. |
| Average Precision (AP) | - | Without a defined value for Recall, the AP curve cannot be generated. |
| Number of False Positives (#FP) | n | There are n predictions in the image/ROI, but none of them corresponds to an annotation, so all of them are false-positive (red). |

What does this mean?

These are false-positive predictions made by your trained application, meaning it predicts areas that shouldn’t be predicted.

If the Precision value is greater than 0.00 but still unexpectedly low, your trained application predicts areas in your validation images that have not been annotated. The sketch after the list below illustrates how these values arise.

This can either be because:

  • the app has made false-positive predictions, or

  • not all areas are annotated and labeled on your validation images.
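The same sketch with this case’s counts (a hypothetical 4 predictions):

```python
# Case 4: predictions exist, but there are no annotated instances.
precision, recall = precision_recall(tp=0, fp=4, fn=0)
print(precision, recall)  # 0.0 None -> Precision 0.00, Recall "-", #FP = 4
```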

What can you do?

  • Verify that no instances of the target labels are actually present in the image(s)/ROI(s).

  • Check if all areas in your validation images/ROIs were annotated and labeled correctly:

    • If not, annotate and label all missing areas and correct any inaccuracies in your validation set.

    • If yes, add background images/ROIs (i.e. images/ROIs without any target instances) to the training set.

  • You will get the optimal improvement by adding images/ROIs containing areas with a similar appearance to the false-positive predictions to the training set.

Having successfully alleviated all frustrating false predictions, you can now use the potential of your trained application to full advantage! Share this article with a colleague if it has helped you!


If you still have questions regarding your application training, feel free to send us an email at support@ikosa.ai. Copy-paste your training ID into the subject line of your email.
