...

Intersection over Union (IoU): 0.00
Without predicted instances, there is no intersecting instance.

Precision [%]: -
Without predicted instances, the Precision value cannot be estimated.

Recall [%]: 0.00
Without predicted instances, the number of true-positive (GREEN) instances is zero; all annotated instances are false-negative (BLUE).

Average Precision (AP): -
Without a defined value for Precision, the AP curve cannot be generated.

Number of False Positives (#FP): 0
Without predicted instances, the number of false-positive instances is by definition zero.
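The degenerate values above can be sketched in a few lines of Python. This is an illustrative helper, not the evaluation tool's actual implementation; `None` stands in for the "-" ("cannot be estimated") entries in the table.

```python
# Minimal sketch (assumed helper, not the product's implementation):
# how Precision, Recall and #FP degenerate when an image contains
# annotated instances but no predicted instances.

def detection_metrics(num_tp, num_fp, num_fn):
    """Return (precision, recall, num_fp); None means 'cannot be estimated'."""
    num_pred = num_tp + num_fp   # predicted instances
    num_ann = num_tp + num_fn    # annotated instances
    precision = num_tp / num_pred if num_pred else None
    recall = num_tp / num_ann if num_ann else None
    return precision, recall, num_fp

# No predicted instances, 5 annotated instances (all false-negative):
print(detection_metrics(num_tp=0, num_fp=0, num_fn=5))
# -> (None, 0.0, 0)
```

With zero predictions the precision denominator is zero, so Precision (and therefore the AP curve) is undefined, while Recall evaluates cleanly to 0.00.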

...

  • Add more images/ROIs containing annotated and labeled instances of the labels that were poorly recognized to your training set.

  • If the annotations are correct and complete, include new images/ROIs containing annotated and labeled instances in the training set.

  • Check if all instances in your validation images/ROIs were annotated and labeled correctly. If not, annotate and label all missing instances and correct any inaccuracies in your validation set.

...

Intersection over Union (IoU): 0.00
Without annotated instances, there is no intersecting instance.

Precision [%]: 0.00
Without annotated instances, there is no true-positive (GREEN) predicted instance; all predicted instances are false-positive (RED).

Recall [%]: -
Without annotated instances, the Recall value cannot be estimated.

Average Precision (AP): -
Without a defined value for Recall, the AP curve cannot be generated.

Number of False Positives (#FP): n
There are n predictions (n: any number) in the image or the ROI, but none of them corresponds to an annotation, so all of them are false-positive (RED).
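The same pattern can be sketched from the matching side. The snippet below assumes a simple greedy IoU matching rule between predicted and annotated boxes; the evaluation tool's actual matching may differ, and `iou` and `count_tp_fp_fn` are illustrative helpers, not part of the product.

```python
# Minimal sketch (assumed greedy matching rule, not the product's
# implementation): with no annotated instances, every prediction
# fails to match and is counted as a false positive.

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def count_tp_fp_fn(predictions, annotations, iou_threshold=0.5):
    """Greedily match each prediction to an unused annotation by IoU."""
    unmatched = list(annotations)
    tp = fp = 0
    for pred in predictions:
        match = next((a for a in unmatched if iou(pred, a) >= iou_threshold), None)
        if match is not None:
            unmatched.remove(match)
            tp += 1
        else:
            fp += 1
    return tp, fp, len(unmatched)   # (TP, FP, FN)

# n = 2 predictions, no annotations: every prediction is false-positive.
print(count_tp_fp_fn([(0, 0, 10, 10), (20, 20, 30, 30)], []))
# -> (0, 2, 0)
```

With an empty annotation list, TP is zero, so Precision is 0.00, Recall is undefined (no annotated instances to recall), and #FP equals the number of predictions n.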

...

This is a false-positive prediction by your trained application, meaning it predicts instances that shouldn't be predicted.

If the Precision value is greater than 0.00 but still unexpectedly low, your trained application seems to predict instances in your validation images that have not been annotated.

...

  • the app has made false-positive predictions, or

  • not all instances are annotated and labeled on your validation images.

What can you do?

  • Check if all instances in your validation images/ROIs were annotated and labeled correctly:

    • If not, annotate and label all missing instances and correct any inaccuracies in your validation set.

    • If yes, add background images/ROIs to the training set.

  • Add images/ROIs containing instances with an appearance similar to the false-positive predictions to the training set.

...