All information presented in this document is applicable to your trained application.
If you want to learn more about the preparation and process of application training, please refer to our section Application training with IKOSA AI.
Application Name | IKOSA AI Semantic Segmentation App |
Version | 5.1.1 |
Documentation Version | 30.01.2024 - 1 |
Input Image(s) | 2D (standard and/or WSI) / Time Series / z-Stack / Multichannel Images; RGB or Grayscale (8/16 bit, 16-bit support only if trained with IKOSA AI Trainer version >= 2.3.0) |
Input Parameter(s) | Regions of interest (optional) |
Keywords | - |
Short Description | Segmentation of regions/objects that resemble the appearance of regions/objects annotated and labeled in the training data for this application. |
References / Literature |
IKOSA AI Semantic Segmentation App
Your IKOSA AI application was created using our training for semantic segmentation. All information presented in this document is applicable to your trained application.
Application description
This application automatically segments regions/objects in images that resemble the appearance of the regions/objects annotated and labeled in the training data of this application. The application supports multiple classes and also performs instance separation in post-processing. The areas of the different classes are measured, and the number and morphological parameters of the objects belonging to the different classes are calculated. This analysis can also be performed on time-lapse recordings (Time Series), z-stacks, or multichannel images, uploaded as 8-bit multipage TIFF files.
In the following, the prerequisites for an accurate analysis are outlined and the output of the application is described.
Input data requirements
Input image(s)
Input for this application is the following image data:
Image type | Color channels | Color depth (per channel) | Size (px) | Resolution (μm/px) |
---|---|---|---|---|
2D (standard and/or WSI), Time Series, Multichannel, Z-stack* (see Check image format) | 3 (RGB) or 1 (Grayscale) | 8 Bit or 16 Bit (Gray; 16-bit support only if trained with IKOSA AI Trainer version >= 2.3.0) | WSI formats: arbitrary; standard images: max. 25,000 x 25,000 | arbitrary |
Image content | Arbitrary |
Additional requirements | None |
*Please note: Z-stack images cannot be uploaded into IKOSA but can still be analyzed via the IKOSA Prisma API.
Important:
For all images, the following requirements apply:
The illumination must be constant throughout the image(s).
The sample must be in focus, i.e. no blurry regions in image(s).
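The size and bit-depth limits above can be pre-checked before upload. The following is a minimal sketch only (not part of IKOSA), assuming Pillow is available and using a hypothetical file name; constant illumination and focus cannot be verified this way and should be checked visually.

```python
from PIL import Image  # Pillow; assumed to be installed, not part of IKOSA

MAX_SIDE = 25_000  # documented maximum side length for standard (non-WSI) images
# Pillow modes roughly corresponding to the documented inputs:
# "RGB" = 3-channel 8-bit, "L" = 8-bit grayscale, "I;16" = 16-bit grayscale
SUPPORTED_MODES = {"RGB", "L", "I;16"}


def check_standard_image(path):
    """Return a list of problems found; an empty list means the image looks acceptable."""
    problems = []
    with Image.open(path) as img:
        width, height = img.size
        if width > MAX_SIDE or height > MAX_SIDE:
            problems.append(f"{width}x{height} px exceeds {MAX_SIDE}x{MAX_SIDE}")
        if img.mode not in SUPPORTED_MODES:
            problems.append(f"mode {img.mode!r} is not RGB or 8/16-bit grayscale")
    return problems


for issue in check_standard_image("example_image.tif"):  # hypothetical file name
    print("WARNING:", issue)
```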
Input parameter(s)
No additional input parameters are required for this application.
As an optional parameter, one or more regions of interest (ROIs) can be defined in which the analysis should be performed (‘inclusion ROIs’).
Please note: Parameters that were set during training may also affect prediction with the deployed application. More information can be found under How to set custom training parameter values?.
Description of output files and their content
Files
No. | File format | Description |
---|---|---|
1 | csv | results.csv: A csv file containing the overall analysis results for the input image or all inclusion ROIs. |
2 | csv | results_<xx>_<class-name>.csv: A csv file containing the analysis results for all detected objects of class number <xx> with label name <class-name> (in training data) in the input image or inclusion ROIs. |
3 | jpg | results_vis/<xx>_<class-name>_vis.jpg (2D image, no ROI), or results_vis/<xx>_<class-name>_t<time-step>_z<z-layer>_c<channel>.jpg (time series, z-stack, or multichannel image, no ROI), or results_vis/<xx>_<class-name>_<roi-id>.jpg (2D image, ROI <roi-id>), or results_vis/<xx>_<class-name>_t<time-step>_z<z-layer>_c<channel>_<roi-id>.jpg (time series, z-stack, or multichannel image, ROI <roi-id>): A visualization of the analysis result for a specific time step (of a time series), z-layer (of a z-stack), or channel (of a multichannel image), for either the whole image (if no inclusion ROIs were selected for analysis) or each individual inclusion ROI, for each class number <xx> with label name <class-name> (in training data). Each visualization includes two parts:
Please note: These files are only created if qualitative result visualization was requested when submitting the analysis job. |
4 | json | annotation_results.json: A JSON file containing the segmented regions. Positions are measured from the upper left corner (1,1) of the image. |
5 | json | roiMeta.json: A json file containing all information regarding the ROIs defined for the analysis job, to ensure reproducibility. Information regarding image and analysis dimensions is also provided. |
6 | jpg | rois_visualization.jpg, or t<time-step>_z<z-layer>_c<channel>_rois_visualization.jpg: An overview visualization showing the locations of all analyzed ROIs for the 2D image, or for time step <time-step> of a time series, z-layer <z-layer> of a z-stack, or channel <channel> of a multichannel image. Please note: This file is only created if inclusion ROIs were defined for analysis. |
7 | json | jobResultBundleMeta.json: A json file containing all information regarding the analysis job (application name and version, project, etc.) to ensure reproducibility. Please note: This file is only included if bundled or merged analysis jobs are downloaded. |
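Because the file names above follow fixed patterns, a downloaded result bundle can be sorted programmatically. Below is a minimal sketch (not part of the application), assuming the bundle has been extracted to a local folder named results_download, which is a hypothetical path.

```python
from pathlib import Path

bundle = Path("results_download")  # hypothetical folder containing an extracted result bundle

overall_csv    = bundle / "results.csv"                          # file 1: overall results
per_class_csvs = sorted(bundle.glob("results_*_*.csv"))          # file 2: per-class object tables
visualizations = sorted(bundle.glob("results_vis/*.jpg"))        # file 3: per-class visualizations
annotations    = bundle / "annotation_results.json"              # file 4: segmented regions
roi_meta       = bundle / "roiMeta.json"                         # file 5: ROI metadata
roi_overviews  = sorted(bundle.glob("*rois_visualization.jpg"))  # file 6: ROI overview image(s)

print("overall results present:", overall_csv.exists())
print("per-class CSV files:", [p.name for p in per_class_csvs])
print("visualization images:", len(visualizations))
print("ROI metadata present:", roi_meta.exists())
```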
Please note:
In the case of inclusion ROIs that are partially outside of the image, the ROIs are cropped to the areas that lie inside the image.
In the case of inclusion ROIs that are completely outside of the image, no analysis is performed. However, they are still listed in the corresponding results files.
A <roi-id> is generated automatically by the application corresponding to the creation date of a ROI. The location of a ROI within an image with its specific <roi-id> can be seen in the file “rois_visualization.jpg.” ROIs that are completely outside of the image are not shown in this file.
All visualizations are downscaled to 25 megapixels (MP), if the original image or inclusion ROI is larger than 25 MP.
Segmented regions/objects with a size/area below a certain threshold are discarded. The threshold is defined as 8*(downscaleFactor^2). So for a downscaleFactor of 2 (default), regions/objects with a size/area < 32 pixels are discarded and only objects with a size/area >= 32 pixels are provided in the results. For more information regarding the downscaleFactor, see https://kmlvision.atlassian.net/wiki/spaces/KB/pages/3627712526/How+to+set+custom+training+parameter+values#Decreasing-Image-Resolution---Downscale-Factor. Please note: There can still be regions/objects with smaller sizes/areas in the results if they are on the border of the analysed ROI, since the total object size (including the parts outside of the ROI) is taken into account for this thresholding.
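For reference, the minimum object size implied by this threshold can be computed directly from the downscaleFactor; a minimal sketch (the value 4 below is only a hypothetical alternative to the default of 2):

```python
def min_object_size(downscale_factor=2):
    """Minimum object size/area in pixels; smaller segmented regions/objects are discarded."""
    return 8 * downscale_factor ** 2


print(min_object_size(2))  # 32 -> with the default downscaleFactor, objects < 32 px are discarded
print(min_object_size(4))  # 128 -> hypothetical value, assuming a downscaleFactor of 4 was used
```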
Content
results.csv
Single csv-file
If one or more time steps (of a Time Series), z-layers (of a z-Stack), or channels (of a multichannel image) were specified, the results in a specific row refer to the time step/z-layer/channel specified in the corresponding column.
If one or more ROIs were specified, the results in a specific row refer to the ROI specified in the corresponding columns, otherwise (empty ROI columns) the results refer to the whole image.
Column NO. | Column name | Examples | Value range | Description |
---|---|---|---|---|
1 | t | 3 | 1 - | Time step, i.e. the position of the image in the time series. |
2 | z | 5 | 1 - | z-layer, i.e. the position of the layer in the z-stack. |
3 | c | 2 | 1 - | Channel, i.e. the position of the channel in the multichannel image. |
4 | roi_id | ROI-03 | ROI-01 - | <roi-id> starting from “ROI-01”. Empty if no inclusion ROI is specified and the whole image was analyzed. |
5 | roi_name | “central” | text | Custom text to identify the ROI. Empty if no inclusion ROI is specified and the whole image was analyzed. |
6 | roi_size [Px^2] | 1212212 | 1 - | Size of the ROI that was analyzed in pixels^2. The size of the whole image is given if no inclusion ROI is specified and the whole image was analyzed. |
7 | <class-name>_total_num_objects | 3796 | 0 - | Total number of detected objects of class <class-name> in ROI or image. |
8 | <class-name>_on_border_num_objects | 12 | 0 - total_num_objects | Number of objects of class <class-name> which are detected in the image/ROI and touch the border of the image/ROI. |
9 | <class-name>_holes_num_objects | 3 | 0 - total_num_objects | Number of objects of class <class-name> which are detected in the image/ROI and have holes (>=1 pixel). |
10 | <class-name>_recombined_num_objects | 4 | 0 - total_num_objects | Number of objects of class <class-name> which are detected at tile borders of image (tile size: 2048x2048 pixels) and are recombined. |
11 | <class-name>_total_area [Px^2] | 122438 | 0 - no. of image px | Total area covered by detected objects of class <class-name> in Pixels^2. |
12 | <class-name>_total_area [%] | 3.66 | 0 - 100 | Total area covered by detected objects of class <class-name> as percentage of overall image area or ROI area inside the image. |
... | Similar to columns 7-12 with further classes. |
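For downstream processing, results.csv can be loaded with standard tools. A minimal sketch using pandas; the class name “nuclei” in the column names is a hypothetical example standing in for the label names from your training data.

```python
import pandas as pd

df = pd.read_csv("results.csv")

# Rows with empty ROI columns refer to the whole image; the others refer to individual inclusion ROIs.
whole_image = df[df["roi_id"].isna()]
per_roi = df[df["roi_id"].notna()]

# Example: object counts and area fraction for a hypothetical class labeled "nuclei", per ROI.
print(per_roi[["roi_id", "nuclei_total_num_objects", "nuclei_total_area [%]"]])
```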
results_<xx>_<class-name>.csv
Single or multiple csv-file(s)
If one or more time steps (of a Time Series), z-layers (of a z-Stack), or channels (of a multichannel image) were specified, the results in a specific row refer to the time step/z-layer/channel specified in the corresponding column.
If one or more ROIs were specified, the results in a specific row refer to the ROI specified in the corresponding columns, otherwise (empty ROI columns) the results refer to the whole image.
Column NO. | Column name | Examples | Value range | Description |
---|---|---|---|---|
1 | t | 3 | 1 - | Time step, i.e. the position of the image in the time series. |
2 | z | 5 | 1 - | z-layer, i.e. the position of the layer in the z-stack. |
3 | c | 2 | 1 - | Channel, i.e. the position of the channel in the multichannel image. |
4 | roi_id | ROI-03 | ROI-01 - | <roi-id> starting from “ROI-01”. Empty if no inclusion ROI is specified and the whole image was analyzed. |
5 | roi_name | “central” | text | Custom text to identify the ROI. Empty if no inclusion ROI is specified and the whole image was analyzed. |
6 | roi_size [Px^2] | 1212212 | 1 - | Size of the ROI that was analyzed in pixels^2. The size of the whole image is given if no inclusion ROI is specified and the whole image was analyzed. |
7 | object_id | 5 | 1 - | ID of object corresponding to id in visualization of ROI or image. More information regarding numbering can be found in Numbering (IDs) of objects in analysis results. |
8 | is_on_border | “True” | “True” or “False” | Boolean indicator to show if object is touching the border of the image/ROI. |
9 | has_holes | “False” | “True” or “False” | Boolean indicator to show if object has holes (>=1 pixel). |
10 | is_recombined | “False” | “True” or “False” | Boolean indicator to show if object was recombined because it was detected at tile borders of image (tile size: 2048x2048 pixels). |
11 | area [Px^2] | 132 | 1 - | Area of detected object in Pixels^2. |
12 | bbox_area [Px^2] | 175 | 1 - | Area of bounding box of detected object in Pixels^2. |
13 | area_ratio [%] | 0.4 | 0 - 100 | Area of detected object as percentage of overall image area or ROI area inside the image. |
14 | perimeter [Px] | 12.3 | 0 - | Perimeter of detected object in Pixels. |
15 | circularity | 0.91 | 0 - | Circularity factor of detected object; circularity = 4*pi*area/(perimeter^2). The circularity of a circle is 1. |
16 | circularity_ISO | 0.65 | 0 - ~1 | Circularity of detected object calculated by ISO definition. |
17 | solidity | 0.98 | 0 - 1 | Ratio of pixels in the region to pixels of the convex hull image. |
18 | eccentricity | 0.96 | 0 - 1 | Eccentricity of the ellipse that has the same second-moments as the region. The eccentricity is the ratio of the focal distance (distance between focal points) over the major axis length. When it is 0, the ellipse becomes a circle. |
19 | equivalent_diameter [Px] | 10.3 | 0 - | Diameter of a circle having the same area as the detected object. |
20 | extent | 0.73 | 0 - 1 | Ratio of pixels in the region to pixels in the total bounding box. |
21 | minor_axis_length [Px] | 6 | 1 - | The length of the minor axis of the ellipse that has the same normalized second central moments as the region. |
22 | major_axis_length [Px] | 12 | 1 - | The length of the major axis of the ellipse that has the same normalized second central moments as the region. |
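The morphological columns are related to each other; for instance, circularity can be recomputed from the area and perimeter columns and compared with the reported value. A minimal sketch using pandas (the file name is a placeholder for one of your results_<xx>_<class-name>.csv files; small deviations can occur because the CSV values are rounded):

```python
import math

import pandas as pd

# Placeholder file name; use one of your results_<xx>_<class-name>.csv files.
objects = pd.read_csv("results_01_example-class.csv")
objects = objects[objects["perimeter [Px]"] > 0]  # avoid division by zero for degenerate objects

# circularity = 4*pi*area / perimeter^2, as defined in the table above
recomputed = 4 * math.pi * objects["area [Px^2]"] / objects["perimeter [Px]"] ** 2
print((recomputed - objects["circularity"]).abs().max())  # expected to be close to 0
```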
Error information
More information about errors can be found in the Application Error Documentation.
Contact
If you have any questions about IKOSA AI and the applications you can create with it, please refer to the Application Training with IKOSA AI section in the Knowledge Base.
You can also always contact our team at support@kmlvision.com for any further clarifications.
Feel free to book a 30-minute meeting to speak with us about IKOSA and the apps!