2.0.1 Instance Segmentation App

| Application Name | IKOSA AI Instance Segmentation App |
|---|---|
| Version | 2.0.1 |
| Documentation Version | 06.09.2023 - 1 |
| Input Image(s) | 2D (standard and/or WSI) / Time Series / z-Stack / Multichannel Images; RGB or Grayscale (8/16 bit) |
| Input Parameter(s) | Max. outgrowth (0 - 99 pixels, per class); optional inclusion ROIs |
| Keywords | - |
| Short Description | Instance segmentation of objects that resemble ones annotated and labeled in the training data of this application. Extraction of morphometric parameters and intensity/distance/density measures of objects and corresponding outgrowth areas. |
| References / Literature | - |
IKOSA AI Instance Segmentation App
Your IKOSA AI application was created using our training for instance segmentation. All information presented in this document is applicable to your trained application.
Application description
This application automatically segments instances of objects that resemble ones previously annotated and labeled in the training data of this application. The application supports multiple classes; for each class, an outgrowth region can be defined separately. Object count, morphometric feature measurements, densities, and distances between instances are calculated. Color channel intensities (Red/Green/Blue or Gray values) are measured for the object, outgrowth, and combined areas. This analysis can also be performed on time-lapse recordings (Time Series), z-stacks, or multichannel images.
In the following, the prerequisites for an accurate analysis are outlined and the output of the application is described.
Input data requirements
Input image(s)
Input for this application is the following image data:
| Image type | Color channels | Color depth (per channel) | Size (px) | Resolution (μm/px) |
|---|---|---|---|---|
| 2D (standard and/or WSI), Time Series, Multichannel, z-stack* (check image format) | 3 (RGB) or 1 (Grayscale) | 8 bit or 16 bit (Gray) | WSI formats: arbitrary; standard images: max. 20,000 x 20,000 | arbitrary |

| Image content | Arbitrary |
|---|---|
| Additional requirements | None |
*Please note: Z-stack images cannot be uploaded into IKOSA but can still be analyzed via the IKOSA Prisma API.
Important:
For all images, the following requirements apply:
The illumination must be constant throughout the image(s).
The sample must be in focus, i.e. there must be no blurry regions in the image(s).
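The size, channel, and bit-depth requirements above can be checked before upload. The following is a minimal sketch of such a pre-check; the function and its messages are hypothetical (not part of any IKOSA API), and only the numeric limits come from this document:

```python
# Hypothetical pre-upload check mirroring the input requirements above;
# the limits are taken from this document, the helper itself is not an IKOSA API.
MAX_SIDE_PX = 20_000          # standard (non-WSI) images: max. 20,000 x 20,000
ALLOWED_CHANNELS = {1, 3}     # 1 (Grayscale) or 3 (RGB)
ALLOWED_BIT_DEPTHS = {8, 16}  # bits per channel

def check_standard_image(width: int, height: int, channels: int, bit_depth: int) -> list[str]:
    """Return a list of requirement violations (empty list = image accepted)."""
    problems = []
    if width > MAX_SIDE_PX or height > MAX_SIDE_PX:
        problems.append(f"image {width}x{height} exceeds {MAX_SIDE_PX} px per side")
    if channels not in ALLOWED_CHANNELS:
        problems.append(f"{channels} channels unsupported (need 1 grayscale or 3 RGB)")
    if bit_depth not in ALLOWED_BIT_DEPTHS:
        problems.append(f"{bit_depth}-bit depth unsupported (need 8 or 16 bit)")
    return problems
```

Note that the per-side limit applies only to standard images; WSI formats have no size restriction.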
Input parameter(s)
Required input parameters:
Max. outgrowth: For each class (trained label) a maximum outgrowth in the range of 0 - 99 pixels can be defined.
As an optional parameter, a single or multiple regions of interest (ROIs) can be defined in which the analysis should be performed (‘inclusion ROIs’).
Please note: Parameters that were set during training may also affect prediction with the deployed application. More information can be found under How to set custom training parameter values?.
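The document does not specify how the outgrowth area is constructed; one common interpretation is morphological dilation of the object mask by the given number of pixels, with the object itself subtracted. The sketch below illustrates that assumed interpretation on a pixel-coordinate set (pure Python, 8-neighborhood):

```python
# Sketch of one possible reading of "max. outgrowth" of N pixels: N rounds of
# 8-neighborhood dilation minus the object mask. This is an assumption -- the
# app's actual outgrowth construction is not specified in this document.
def outgrowth_region(object_pixels: set[tuple[int, int]], max_outgrowth: int) -> set[tuple[int, int]]:
    """Return the outgrowth area: the dilated mask minus the object itself."""
    grown = set(object_pixels)
    for _ in range(max_outgrowth):
        grown |= {(x + dx, y + dy)
                  for (x, y) in grown
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    return grown - object_pixels

# A single pixel with max. outgrowth 1 gains its 8 neighbors as outgrowth area:
ring = outgrowth_region({(0, 0)}, 1)
```

With max. outgrowth 0 (the lower end of the allowed 0 - 99 range), the outgrowth area is empty and only the object area itself is measured.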
Description of output files and their content
Files
| No. | File format | Description |
|---|---|---|
| 1 | csv | results.csv A csv file containing the overall analysis results for the input image or all inclusion ROIs. |
| 2 | csv | results_<xx>_<class-name>.csv A csv file containing the analysis results for all detected objects of class number <xx> with class name <class-name> (in training data) in the input image or inclusion ROIs. |
| 3 | jpg | results_vis/<xx>_<class-name>_vis.jpg (2D image, no ROI), or results_vis/<xx>_<class-name>_t<time-step>_z<z-layer>_c<channel>.jpg (for time series, z-stack, or multichannel image, no ROI), or results_vis/<xx>_<class-name>_<roi-id>.jpg (2D image, ROI <roi-id>), or results_vis/<xx>_<class-name>_t<time-step>_z<z-layer>_c<channel>_<roi-id>.jpg (for time series, z-stack, or multichannel image, ROI <roi-id>): A visualization of the analysis result for a specific time step (of a time series), z-layer (of a z-stack), or channel (of a multichannel image) for either the whole image (if no inclusion ROIs were selected for analysis) or each individual inclusion ROI, for each class number <xx> with class name <class-name> (in training data). Each visualization includes two parts:
Please note: These files are only created if qualitative result visualization was requested when submitting the analysis job. |
| 4 | json | annotation_results.json A json file that includes geometries of the objects detected in the input image or inclusion ROIs. |
| 5 | json | roiMeta.json A json file containing all information regarding the ROIs defined for the analysis job to ensure reproducibility. The file is empty if no ROIs were defined for analysis. |
| 6 | jpg | rois_visualization.jpg or t<time-step>_z<z-layer>_c<channel>_rois_visualization.jpg An overview visualization to show locations of all analyzed ROIs for the 2D image or time step <time-step> of a time series, z-layer <z-layer> of a z-stack, or channel <channel> of a multichannel image. Please note: This file is only created if inclusion ROIs were defined for analysis. |
| 7 | json | jobResultBundleMeta.json A json file containing all information regarding the analysis job (application name and version, project, etc.) to ensure reproducibility. Please note: This file is only included if bundled or merged analysis jobs are downloaded. |
Please note:
In the case of inclusion ROIs that are partially outside of the image, the ROIs are cropped to the areas that lie inside the image.
In the case of inclusion ROIs that are completely outside of the image, no analysis is performed. However, they are still listed in the corresponding results files.
A <roi-id> is generated automatically by the application corresponding to the creation date of a ROI. The location of a ROI within an image with its specific <roi-id> can be seen in the file “rois_visualization.jpg.” ROIs that are completely outside of the image are not shown in this file.
All visualizations are downscaled to 25 megapixels (MP), if the original image or inclusion ROI is larger than 25 MP.
Segmented objects with a size/area below a certain threshold are discarded. The threshold is defined as 8 * (downscaleFactor^2). So for a downscaleFactor = 1 (default), objects with a size/area < 8 pixels are discarded and only objects with a size/area >= 8 pixels are provided in the results. For more information regarding the downscaleFactor, see How to set custom training parameter values? | Decreasing Image Resolution Downscale Factor. Please note: There can still be objects with smaller sizes/areas in the results if they lie on the border of the analysed ROI, since the total object size (including the parts outside of the ROI) is taken into account for this thresholding.
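The size-threshold rule above can be written out directly. This is a minimal sketch of the stated formula, not the app's implementation; the function names are hypothetical:

```python
# The document states that objects below 8 * downscaleFactor^2 pixels are
# discarded; these helpers only restate that rule (names are hypothetical).
def min_object_area(downscale_factor: int = 1) -> int:
    """Smallest object size/area (in pixels) kept in the results."""
    return 8 * downscale_factor ** 2

def filter_objects(areas_px2: list[int], downscale_factor: int = 1) -> list[int]:
    """Keep only objects whose total area meets the threshold."""
    threshold = min_object_area(downscale_factor)
    return [a for a in areas_px2 if a >= threshold]
```

For the default downscaleFactor = 1 the threshold is 8 pixels; for a downscaleFactor of 2 it rises to 32 pixels. As noted above, the real filter uses the total object size including parts outside the analysed ROI, which this sketch does not model.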
Content
results.csv
Single csv-file
If one or more time steps (of a Time Series), z-layers (of a z-Stack), or channels (of a multichannel image) were specified, the results in a specific row refer to the time step/z-layer/channel specified in the corresponding column.
If one or more ROIs were specified, the results in a specific row refer to the ROI specified in the corresponding columns, otherwise (empty ROI columns) the results refer to the whole image.
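The row semantics above (ROI rows vs. whole-image rows with empty ROI columns) can be handled with the standard library's csv module. The excerpt below is hypothetical and shows only a few of the columns listed in the table that follows; the actual delimiter and header spelling should be checked against a real results.csv:

```python
import csv
import io

# Hypothetical two-row excerpt of results.csv (only a few columns shown).
# Per the description above, rows with an empty roi_id refer to the whole image.
sample = """t,z,c,roi_id,roi_name,class,total_num_objects
1,1,1,ROI-01,central,cell nuclei,788
1,1,1,,,cell nuclei,1031
"""

whole_image, per_roi = [], []
for row in csv.DictReader(io.StringIO(sample)):
    (per_roi if row["roi_id"] else whole_image).append(row)
```

Splitting the rows this way keeps per-ROI statistics separate from whole-image statistics before any aggregation.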
Column NO. | Column name | Examples | Value range | Description |
|---|---|---|---|---|
1 | t | 3 | 1 - | Time step, i.e. the position of the image in the time series. |
2 | z | 5 | 1 - | z-layer, i.e. the position of the layer in the z-stack. |
3 | c | 2 | 1 - | Channel, i.e. the position of the channel in the multichannel image. |
4 | roi_id | ROI-03 | ROI-01 - | <roi-id> starting from “ROI-01”. Empty, if no inclusion ROI is specified and the whole image was analyzed. |
5 | roi_name | “central” | text | Custom text to identify the ROI. Empty, if no inclusion ROI is specified and the whole image was analyzed. |
6 | roi_size [Px^2] | 1212212 | 1 - | Size of the ROI that was analyzed in pixels^2. The size of the whole image is given if no inclusion ROI is specified and the whole image was analyzed. |
7 | bit depth [Bit] | 8 | 8, 16 | Bit/color depth of each channel of the image |
8 | class | “cell nuclei” | text | Name of the trained label (<class-name>). |
9 | object_max_outgrowth [Px] | 2 | 0 - 99 | Maximum outgrowth in pixels. |
10 | total_num_objects | 788 | 1 - | Total number of objects which are detected in the image/ROI. |
11 | on_border_num_objects | 12 | 0 - total_num_objects | Number of objects which are detected in the image/ROI and touch the border of the image/ROI. |
12 | holes_num_objects | 3 | 0 - total_num_objects | Number of objects which are detected in the image/ROI and have holes (>=1 pixel). |
13 | recombined_num_objects | 4 | 0 - total_num_objects | Number of objects which are detected at tile borders of image (tile size: 2048x2048 pixels) and are recombined. |
14 | density_num_objects [1/Px^2] | 0.0007514953 | 0 - 1 | Total number of objects divided by area of image/ROI. |
15 | total_area_objects [Px^2] | 101863 | 8 - #pixels | Total area of objects which are detected in the image/ROI. |
16 | density_area_objects | 0.0971441268 | 0 - 1 | Total area of objects divided by area of image/ROI. |
17 | total_area_outgrowth [Px^2] | 125326 | 0 - #pixels | Total area of object’s outgrowth which are detected in the image/ROI. |
18 | density_area_outgrowth | 0.11952018737 | 0 - 1 | Total area of outgrowth divided by area of image/ROI. |
19 | total_area_objects_incl_outgrowth [Px^2] | 227189 | 8 - #pixels | Total area of objects including outgrowth which are detected in the image/ROI. |
20 | density_area_objects_incl_outgrowth | 0.2166643142 | 0 - 1 | Total area of objects including outgrowth divided by area of image/ROI |
21 | mean_area_objects [Px^2] | 14.324523 | 8 - #pixels | Mean area of detected objects in Pixels^2. |
22 | stddev_area_objects [Px^2] | 1.213133 | 0 - #pixels | Standard deviation of area of detected objects in Pixels^2. |
23 | median_area_objects [Px^2] | 13.983252 | 8 - # pixels | Median area of detected objects in Pixels^2. |
24 | mean_area_outgrowth [Px^2] | 21.363824 | 0 - #pixels | Mean area of outgrowth in Pixels^2. |
25 | stddev_area_outgrowth [Px^2] | 2.573912 | 0 - #pixels | Standard deviation of areas of outgrowth in Pixels^2. |
26 | median_area_outgrowth [Px^2] | 20.874623 | 0 - #pixels | Median area of outgrowth in Pixels^2. |
27 | mean_area_objects_incl_outgrowth [Px^2] | 35.836274 | 8 - #pixels | Mean area of detected objects including outgrowth in Pixels^2. |
28 | stddev_area_objects_incl_outgrowth [Px^2] | 2.309003 | 0 - #pixels | Standard deviation of area of detected objects including outgrowth in Pixels^2. |
29 | median_area_objects_incl_outgrowth [Px^2] | 35.473222 | 8 - #pixels | Median area of detected objects including outgrowth in Pixels^2. |
30 | mean_distance [Px] | 13.5016475693 | 0 - | Mean distance (center-to-center) between detected objects in Pixels. The distance is calculated by the nearest-neighbor distance to other objects of the same type. |
31 | stddev_distance [Px] | 5.5485792042 | 0 - | Standard deviation of distances between detected objects in Pixels. The distance is calculated by the nearest-neighbor distance to other objects of the same type. |
32 | median_distance [Px] | 12.529964086 | 0 - | Median distance between detected objects in Pixels. The distance is calculated by the nearest-neighbor distance to other objects of the same type. |
33 | mean_bbox_area [Px^2] | 202.03426395 | 8 - | Mean area of bounding box of detected objects in Pixels^2. |
34 | mean_area_ratio [%] | 0.0124492385 | 0 - 100 | Mean area of detected object as percentage of overall image area or ROI area inside the image. |
35 | mean_perimeter [Px] | 40.434010152 | 8 - | Mean perimeter of detected objects in Pixels. |
36 | mean_circularity | 0.92114213197 | 0 - ~1 | Mean circularity factor of detected objects. Circularity = 4*pi*area/(perimeter^2). The circularity of a circle is 1. Note: Detected objects with a circularity of infinity are ignored when calculating mean_circularity. |
37 | mean_circularity_ISO | 0.85114213197 | 0 - ~1 | Mean circularity factor of detected objects calculated by the ISO definition. |
38 | mean_solidity | 0.9310659898 | 0 - 1 | Mean solidity value of all detected objects. |
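Two of the measures in the table are given with explicit definitions: circularity (column 36) and the area densities (columns 14, 16, 18, 20). The sketch below re-derives them from those definitions to make the formulas concrete; the app computes these values itself, so this is illustrative only:

```python
import math

# Re-computing two reported measures from their definitions in the table above
# (illustrative only -- the app produces these values in results.csv).
def circularity(area: float, perimeter: float) -> float:
    """Circularity = 4*pi*area / perimeter^2; equals 1 for an ideal circle."""
    return 4 * math.pi * area / perimeter ** 2

def density_area(total_area_objects: float, roi_size: float) -> float:
    """density_area_objects = total object area divided by image/ROI area."""
    return total_area_objects / roi_size

# Sanity check of the formula: for an ideal circle of radius r,
# area = pi*r^2 and perimeter = 2*pi*r, so circularity = 1.
r = 10.0
c = circularity(math.pi * r ** 2, 2 * math.pi * r)
```

Less circular (elongated or ragged) shapes have a larger perimeter for the same area and therefore a circularity below 1, which is why the value range in the table is 0 - ~1.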
Copyright 2024 KOLAIDO GmbH. IKOSA® is a registered EU trademark.