Semantic segmentation - Dice Coefficient, Precision, Recall and Specificity

Improve your understanding of quantitative results in semantic segmentation.

Summary:

  • Dice Coefficient: The closer the value is to 1,

  • Precision [%]: The closer the value is to 100%,

  • Recall [%]: The closer the value is to 100%,

  • Specificity [%]: The closer the value is to 100%,

the better your trained application performs.

What to expect from this page?

Basic theory

Metrics for semantic segmentation are calculated per label based on the so-called confusion matrix. This means that each pixel of an image or ROI is classified into one of four categories by comparing the annotation with the result of the application training (the ‘prediction’ or ‘predicted areas’).

The confusion matrix is unique for each label and contains the pixel counts obtained by comparing the annotation (ground truth) with the prediction, i.e. true positives, false positives, true negatives, and false negatives. These counts are used to evaluate the overall performance of the application.

All metrics are computed unweighted for each ROI and/or image, and as a total per label in a bottom-up fashion; they do not represent aggregations such as averages or medians.

Please note: In the results of the application training presented in the PDF report and the visualizations, we do not include true-negative predicted areas (to avoid visual confusion).
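To illustrate how such a per-label confusion matrix is obtained, here is a minimal Python/NumPy sketch (hypothetical code, not IKOSA's internal implementation) that counts the four pixel categories for one label by comparing a binary annotation mask with a binary prediction mask:

    import numpy as np

    def confusion_counts(ground_truth, prediction):
        """Count TP, FP, TN and FN pixels for one label.

        Both arguments are boolean NumPy arrays of the same shape:
        True where a pixel carries the label, False otherwise.
        """
        gt = ground_truth.astype(bool)
        pred = prediction.astype(bool)
        tp = np.count_nonzero(gt & pred)     # annotated and predicted
        fp = np.count_nonzero(~gt & pred)    # predicted but not annotated
        fn = np.count_nonzero(gt & ~pred)    # annotated but not predicted
        tn = np.count_nonzero(~gt & ~pred)   # neither annotated nor predicted
        return tp, fp, tn, fn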

Demo validation visualization

In the correctness section of the demo validation visualization:

True-positive areas are shown in green.

Areas where the application prediction matches your annotated and labeled areas. In other words, it predicts image/ROI areas correctly.

False-positive areas are shown in red.

Areas that have not been annotated and labeled yet were predicted by your application. In other words, the application predicts areas of the image/ROI that should not be predicted.

True-negative areas are shown in grey.

Areas that have been neither annotated and labeled nor predicted. In other words, these are the areas that are not true-positive, false-positive, or false-negative areas.

False-negative areas are shown in blue.

Areas that have been annotated and labeled yet were not predicted by your application.
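The colour coding of the correctness section can be reproduced with a sketch like the one below (hypothetical Python/NumPy code, not the IKOSA rendering code); it assigns each pixel of a correctness map the colour of its category:

    import numpy as np

    # Hypothetical RGB colours matching the legend above.
    GREEN, RED, BLUE, GREY = (0, 255, 0), (255, 0, 0), (0, 0, 255), (128, 128, 128)

    def correctness_map(ground_truth, prediction):
        """Return an RGB image showing TP (green), FP (red),
        FN (blue) and TN (grey) areas for one label."""
        gt = ground_truth.astype(bool)
        pred = prediction.astype(bool)
        rgb = np.zeros(gt.shape + (3,), dtype=np.uint8)
        rgb[gt & pred] = GREEN     # true positive: annotated and predicted
        rgb[~gt & pred] = RED      # false positive: predicted only
        rgb[gt & ~pred] = BLUE     # false negative: annotated only
        rgb[~gt & ~pred] = GREY    # true negative: neither
        return rgb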

Performance metrics for images with annotations

Section 2, Training results, of the downloaded PDF report includes tables with quantitative results for all validation images and ROIs (if ROIs were used) for all available labels.

These tables include performance metrics such as Dice Coefficient, Precision, Recall, and Specificity as a total value across all images as well as for each image separately. These measures are all based on:

  • the calculated total predicted area,

  • the true positive predicted area,

  • the false positive predicted area,

  • the true negative predicted area, and

  • the annotated area.

In the sections below, we will describe how to understand and interpret these metrics.
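Before going through the individual metrics, here is a minimal Python sketch (hypothetical code, assuming you already have the four pixel counts for a label, e.g. from the confusion-matrix sketch above) showing how these areas translate into the four metrics and why a metric is undefined whenever its denominator is zero:

    def segmentation_metrics(tp, fp, tn, fn):
        """Compute Dice, Precision, Recall and Specificity from pixel counts.

        Returns None for any metric whose denominator is zero (undefined).
        """
        def ratio(numerator, denominator):
            return numerator / denominator if denominator else None

        # Dice = 2|X ∩ Y| / (|X| + |Y|), with |X| = tp + fn and |Y| = tp + fp
        dice = ratio(2 * tp, 2 * tp + fp + fn)
        precision = ratio(100 * tp, tp + fp)      # percent
        recall = ratio(100 * tp, tp + fn)         # percent
        specificity = ratio(100 * tn, tn + fp)    # percent
        return {"Dice": dice, "Precision": precision,
                "Recall": recall, "Specificity": specificity}

With hypothetical counts, segmentation_metrics(tp=800, fp=100, tn=8900, fn=200) gives a Dice Coefficient of about 0.84, a Precision of about 88.9%, a Recall of 80% and a Specificity of about 98.9%.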

Dice Coefficient

What is it?

The Dice Coefficient is a measure of the concordance between the results of your trained app’s prediction and your annotations ('the ground truth'). It is calculated using the following formula:

Dice Coefficient = 2 × |X ∩ Y| / (|X| + |Y|)

X - the annotation (ground truth)

Y - the result of your application (prediction)

|X ∩ Y| - the overlap between annotation and prediction (the true-positive area)

 

The Dice Coefficient ranges from 0 to 1, where values at the ends of this spectrum signify the following:

0 - there is no overlap between the result and annotation

1 - the result is equal to the annotation (ground truth).

 

Please note: The closer the value is to 1, the better your trained application performs.
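As a purely hypothetical example: if the annotation of a label covers 1,000 pixels, the prediction covers 900 pixels, and 800 pixels overlap, then Dice Coefficient = (2 × 800) / (1,000 + 900) = 1,600 / 1,900 ≈ 0.84.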

Why can it be undefined?

The Dice Coefficient is undefined when its denominator is zero, i.e. when the label is present neither in the annotation nor in the prediction of the image/ROI (|X| + |Y| = 0).

Precision

What is it?

Precision (positive predictive value) shows the percentage of the predicted area that overlaps with the annotation and is calculated by the following formula:

Precision [%] = TP / (TP + FP) × 100

TP - the true-positive predicted area

FP - the false-positive predicted area (TP + FP is the total predicted area)
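Continuing the hypothetical example from the Dice Coefficient section (900 predicted pixels, of which 800 overlap the annotation): Precision = 800 / 900 × 100% ≈ 88.9%.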

Why can it be undefined?

Precision is undefined when the total predicted area is zero, i.e. when the application does not predict any pixels of this label in the image/ROI.

Recall

What is it?

Recall (sensitivity, true positive rate) shows the percentage of the annotation area that is predicted by the trained app and is calculated by the formula below:

Recall [%] = TP / (TP + FN) × 100

TP - the true-positive predicted area

FN - the false-negative area (TP + FN is the annotated area)
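Continuing the same hypothetical example (1,000 annotated pixels, of which 800 are predicted): Recall = 800 / 1,000 × 100% = 80%.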

Why can it be undefined?

Recall is undefined when the annotated area is zero, i.e. when the label is not annotated in the image/ROI.

Specificity

What is it?

Specificity (true negative rate) shows the percentage of the empty (non-annotated) area that is correctly left unpredicted by the application and is calculated by the formula below:

Specificity [%] = TN / (TN + FP) × 100

TN - the true-negative area

FP - the false-positive predicted area (TN + FP is the non-annotated area)
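Continuing the same hypothetical example in an ROI of 10,000 pixels (9,000 non-annotated pixels, of which 100 are falsely predicted and 8,900 are correctly left unpredicted): Specificity = 8,900 / 9,000 × 100% ≈ 98.9%.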

Why can it be undefined?

Specificity is undefined when the non-annotated area is zero, i.e. when the entire image/ROI is annotated with this label.

Performance metrics for images without annotations

 

Now that you know how to interpret the quantitative results of your application training, you are well equipped to gain solid insights from your analysis outputs.


If you still have questions regarding your application training, feel free to send us an email at support@ikosa.ai. Copy-paste your training ID into the subject line of your email.


