Improve your understanding of quantitative results in semantic segmentation.

Summary:

  • Dice Coefficient: The closer the value is to 1,

  • Precision [%]: The closer the value is to 100%,

  • Recall [%]: The closer the value is to 100%,

  • Specificity [%]: The closer the value is to 100%,

the better your trained application performs.

What to expect from this page?

Basic theory

Metrics for semantic segmentation are calculated per label based on the so-called confusion matrix. This means that each pixel of an image or ROI is classified into one of four categories according to the result of the application training (the ‘prediction’ or ‘predicted areas’).

Each label has its own confusion matrix, which contains the pixel counts obtained by comparing the annotation (ground truth) with the prediction: true positives, false positives, true negatives, and false negatives. These counts are used to evaluate the overall correctness of the app’s performance.

All metrics are computed unweighted for each ROI and/or image, and as a total for each label in a bottom-up fashion; they are not aggregations such as averages or medians.
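To make the per-label counting concrete, here is a minimal Python sketch (not the IKOSA implementation; the boolean-mask representation of annotation and prediction is our assumption) that derives the four confusion-matrix counts for one label:

import numpy as np

def confusion_counts(annotation: np.ndarray, prediction: np.ndarray) -> dict:
    """Count TP/FP/FN/TN pixels for one label from two equal-shape masks."""
    ann = annotation.astype(bool)   # ground truth: pixels annotated with the label
    pred = prediction.astype(bool)  # prediction: pixels the app assigned to the label
    return {
        "TP": int(np.count_nonzero(ann & pred)),    # annotated and predicted
        "FP": int(np.count_nonzero(~ann & pred)),   # predicted, not annotated
        "FN": int(np.count_nonzero(ann & ~pred)),   # annotated, not predicted
        "TN": int(np.count_nonzero(~ann & ~pred)),  # neither annotated nor predicted
    }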

Please note: In the results of the application training presented in the PDF report and the visualizations, we do not include true-negative predicted areas (to avoid visual confusion).

In the correctness section of the demo validation visualization:

True-positive areas

Areas where the application’s prediction matches your annotated and labeled areas. In other words, the application predicts these image/ROI areas correctly.

False-positive areas

Areas that have not been annotated and labeled, yet are predicted by your application. In other words, the application predicts areas of the image/ROI that should not be predicted.

True-negative areas

Areas that have not been annotated, labeled, or predicted. In other words, areas that contain no true-positive, false-positive, or false-negative pixels.

False-negative areas

Areas that have been annotated and labeled, yet were not predicted by your application.

Performance metrics for images with annotations

Section 2, Training results, of the downloaded PDF report includes tables with quantitative results for all validation images and ROIs (if they were used) for all available labels.

These tables include performance metrics such as the Dice Coefficient, Precision, Recall, and Specificity, both as a total value across all images and for each image separately. These measures are all based on the confusion matrix described above.

In the sections below, we will describe how to understand and interpret these metrics.

Dice Coefficient

What is it?

The Dice Coefficient is a measure of the concordance between the results of your trained app’s prediction and your annotations (‘the ground truth’). It is calculated using the following formula:

Dice = (2 × |X ∩ Y|) / (|X| + |Y|)

where:

X - the annotation (ground truth)

Y - the result of your application (the prediction).

The Dice Coefficient ranges from 0 to 1, where values at the ends of this spectrum signify the following:

0 - there is no overlap between the result and the annotation

1 - the result is equal to the annotation (ground truth).

Please note: The closer the value is to 1, the better your trained application performs.

Why can it be undefined?

Important: If the image does not contain any annotation or prediction areas at all, the Dice Coefficient will be reported as an undefined value (marked in the table as a dash “-”).
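In confusion-matrix terms, |X ∩ Y| equals TP and |X| + |Y| equals 2·TP + FP + FN, so Dice = 2·TP / (2·TP + FP + FN). A minimal Python sketch of this calculation, including the undefined case (the function name and the None-for-undefined convention are our own):

def dice_coefficient(tp: int, fp: int, fn: int) -> float | None:
    """Dice = 2*TP / (2*TP + FP + FN); None when the image holds
    neither annotation nor prediction (the undefined "-" case)."""
    denominator = 2 * tp + fp + fn
    if denominator == 0:
        return None  # reported as a dash "-" in the tables
    return 2 * tp / denominator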

Precision

What is it?

Precision (positive predictive value) shows the percentage of the predicted area that overlaps with the annotation. It is calculated by the following formula:

Precision [%] = TP / (TP + FP) × 100

Please note: The closer the value is to 100%, the better your trained application performs.

Why can it be undefined?

Important: If the image does not contain any prediction area at all, the Precision will be reported as an undefined value (marked in the table as a dash “-”).
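A minimal sketch of the same calculation, assuming the confusion-matrix counts from above (the helper name is ours):

def precision_percent(tp: int, fp: int) -> float | None:
    """Precision [%] = TP / (TP + FP) * 100; None when nothing was
    predicted at all (the undefined "-" case)."""
    if tp + fp == 0:
        return None  # no prediction area on the image
    return 100.0 * tp / (tp + fp)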

Recall

What is it?

Recall (sensitivity, true positive rate) shows the percentage of the annotation area that is predicted by the trained app. It is calculated by the following formula:

Recall [%] = TP / (TP + FN) × 100

Please note: The closer the value is to 100%, the better your trained application performs.

Why can it be undefined?

Important: If the image does not contain any annotations at all, the Recall will be reported as an undefined value (marked in the table as a dash “-”).
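The corresponding sketch, again assuming the confusion-matrix counts from above:

def recall_percent(tp: int, fn: int) -> float | None:
    """Recall [%] = TP / (TP + FN) * 100; None when the image carries
    no annotations (the undefined "-" case)."""
    if tp + fn == 0:
        return None  # no annotation (ground truth) on the image
    return 100.0 * tp / (tp + fn)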

Specificity

What is it?

Specificity (true negative rate) shows the percentage of the empty (non-annotated) area that is correctly left unpredicted by the application. It is calculated by the following formula:

Specificity [%] = TN / (TN + FP) × 100

Please note: The closer the value is to 100%, the better your trained application performs.

Why can it be undefined?

Important: If the image does not contain any background at all, the Specificity will be reported as an undefined value (marked in the table as a dash “-”).
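And the matching sketch, under the same assumptions as the helpers above:

def specificity_percent(tn: int, fp: int) -> float | None:
    """Specificity [%] = TN / (TN + FP) * 100; None when the image has
    no background at all (the undefined "-" case)."""
    if tn + fp == 0:
        return None  # every pixel is annotated; no background exists
    return 100.0 * tn / (tn + fp)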

Performance metrics for images without annotations

Important:

If an image doesn’t contain any annotations, the Dice Coefficient and Precision are 0 by definition, and Recall cannot be calculated (it is undefined), because neither true-positive nor annotation (ground truth) areas exist. The only meaningful metric in this case is therefore Specificity.

From the sections above, we know that a Dice Coefficient or Precision of 0 normally indicates poor performance. However, if there are no annotations on an image, these metrics equal 0 regardless of how well the application performs, so they carry no information about its quality.

For this reason, the Dice Coefficient and Precision are not calculated for images without annotations and are marked in the tables with a dash "-".
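Using the sketch functions from the sections above with invented pixel counts (purely hypothetical, for illustration only), the behavior on an unannotated image looks like this:

# Image without annotations: TP = 0 and FN = 0 by definition.
tp, fp, fn, tn = 0, 120, 0, 99_880  # hypothetical pixel counts

print(dice_coefficient(tp, fp, fn))   # 0.0  -> shown as "-" for unannotated images
print(precision_percent(tp, fp))      # 0.0  -> shown as "-" for unannotated images
print(recall_percent(tp, fn))         # None -> undefined, shown as "-"
print(specificity_percent(tn, fp))    # 99.88 -> the metric that matters here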

Now that you have mastered the handling of quantitative results, you are perfectly equipped to gain solid insights from your analysis outputs.


If you still have questions regarding your application training, feel free to send us an email at support@ikosa.ai. Copy-paste your training ID into the subject line of your email.

