Trailer: Bio-image Analysis with Python#

In the following chapters we will dive into image analysis, machine learning and bio-statistics using Python. This first notebook serves as a trailer of what we will be doing.

Python notebooks typically start by importing the Python libraries the notebook will use. This allows the reader to check first whether all those libraries are installed before going through the whole notebook.

import numpy as np
from skimage.io import imread, imshow
import pyclesperanto_prototype as cle
from cellpose import models, io
from skimage import measure
import pandas as pd
import apoc
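
If an ImportError occurs here, the corresponding library needs to be installed first. As a quick sanity check, a minimal sketch like the following could print the versions of some of the imported libraries:

# print library versions to verify the installation
import skimage
print("numpy", np.__version__)
print("scikit-image", skimage.__version__)
print("pandas", pd.__version__)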

Working with image data#

We start by loading the image data of interest. In this example we load an image showing a zebrafish eye, courtesy of Mauricio Rocha Martins, Norden lab, MPI CBG Dresden.

# open an image file
multichannel_image = imread("../../data/zfish_eye.tif")
print("Image size", multichannel_image.shape)
Image size (1024, 1024, 3)

From the three image channels, we extract one and visualize it.

# extract a channel
single_channel_image = multichannel_image[:,:,0]

# visualizing an image
cle.imshow(single_channel_image, color_map='Greys_r')
../_images/trailer_5_0.png

When setting up image analysis routines, we often do not work on the whole image; instead, we crop out interesting regions to concentrate on.

cropped_image = single_channel_image[200:600, 500:900]

cle.imshow(cropped_image, color_map='Greys_r')
../_images/trailer_7_0.png

Image filtering#

A common step when working with fluorescence microscopy images is subtracting background intensity, e.g. resulting from out-of-focus light. This can improve image segmentation results further down in the workflow.

# subtract background using a top-hat filter
background_subtracted_image = cle.top_hat_box(cropped_image, radius_x=10, radius_y=10)

cle.imshow(background_subtracted_image, color_map='Greys_r')
../_images/trailer_9_0.png
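
The top-hat filter is only one way to remove background. As an alternative sketch, one could estimate the background by strongly blurring the image with a Gaussian filter and subtracting that estimate; the sigma values below are assumptions and would need to be tuned to the size of the nuclei.

# alternative: subtract a Gaussian-blurred background estimate
background_estimate = cle.gaussian_blur(cropped_image, sigma_x=10, sigma_y=10)
alternative_background_subtracted = cle.subtract_images(cropped_image, background_estimate)

cle.imshow(alternative_background_subtracted, color_map='Greys_r')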

Image segmentation#

A huge number of algorithms exist for segmenting the nuclei in the given image. Modern deep-learning approaches are gaining more and more attention because, once trained, deep-learning-based models are very easy to use. For example, there are models for segmenting nuclei using the CellPose algorithm. The result is a label image in which all nuclei are visualized in different colors.

# load cellpose model
model = models.Cellpose(gpu=False, model_type='nuclei')

# apply model
channels = [0,0] # This means we are processing single channel greyscale images.
label_image, flows, styles, diams = model.eval(cropped_image, diameter=None, channels=channels)

# show result
cle.imshow(label_image, labels=True)
../_images/trailer_11_0.png
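
Because the nuclei are stored in a label image, we can also count them. The line below assumes that the labels are numbered consecutively starting at 1, so the maximum label value equals the number of nuclei.

# count segmented nuclei
number_of_nuclei = int(label_image.max())
print("Number of nuclei:", number_of_nuclei)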

Measurements and feature extraction#

After the image is segmented, we can measure properties of the individual objects. Those properties are typically descriptive statistical parameters, called features. For example, when we measure the area and the mean intensity of each object, we extract those two features.

statistics = measure.regionprops_table(label_image, 
                                       intensity_image=cropped_image,
                                       properties=('area', 'mean_intensity'))

Working with tables#

The statistics object created above is a Python data structure, a dictionary of measurement vectors, which is not the most intuitive to look at. Hence, we convert it into a table. Data scientists often call such tables DataFrames; they are provided by the pandas library.

# show table
dataframe = pd.DataFrame(statistics)
dataframe
    area  mean_intensity
0    364    35866.392857
1    891    33722.372615
2    356    24384.025281
3    898    37399.743875
4    822    35182.845499
..   ...             ...
94   981    46618.353721
95   718    42108.785515
96   620    43798.804839
97   641    47469.379095
98   216    43791.847222

99 rows × 2 columns
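
Such a table is often saved to disk so that the measurements can be re-used without re-running the segmentation. A minimal sketch, using a hypothetical file name:

# save the measurements as a CSV file
dataframe.to_csv("zfish_eye_nuclei_measurements.csv", index=False)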

Descriptive statistics#

Taking this table as a starting point, we can use statistics to get an overview of the measured data.

mean_area = np.mean(dataframe['area'])
stddev_area = np.std(dataframe['area'])

print("Mean nucleus area is", mean_area, "+-", stddev_area, "pixels")
Mean nucleus area is 625.9191919191919 +- 241.2726223974426 pixels
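
pandas can also summarize all columns at once: the describe() method returns count, mean, standard deviation, minimum, quartiles and maximum for every measurement column.

dataframe.describe()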

Spatial statistics#

When studying how cells form tissues, the relationships of cells to their local environment, i.e. their neighboring cells, are highly relevant. We can visualize such relationships, for example, in a distance mesh, where the distances between centroids of neighboring nuclei are color-coded.

distance_mesh = cle.draw_distance_mesh_between_proximal_labels(
                    label_image, 
                    maximum_distance=50)

# for visualization purposes we make the lines a bit thicker
distance_mesh = cle.maximum_box(distance_mesh, radius_x=2, radius_y=2)

cle.imshow(distance_mesh, 
           colorbar=True, 
           color_map='jet')
../_images/trailer_19_0.png

Visualization#

Measurements such as distances between neighbors can also be summarized for each individual cell. For example, we can color-code the average distance of each cell to its n nearest neighbors and, with that, highlight dense areas. Such a visualization is referred to as a parametric map image.

average_distance_map = cle.average_distance_of_n_nearest_neighbors_map(
                            label_image,
                            n=6)

cle.imshow(average_distance_map, 
           colorbar=True, 
           color_map='jet', 
           max_display_intensity=50)
../_images/trailer_21_0.png

We can also take measurements from the table above and create a visualization in which those values are color-coded.

# We prepend a 0 to the list. It represents the
# measurement of the background, which has label 0.
area = [0] + dataframe['area'].tolist()

print(area)
[0, 364, 891, 356, 898, 822, 1087, 640, 283, 554, 1173, 701, 888, 665, 1381, 704, 824, 406, 248, 270, 716, 726, 596, 146, 677, 119, 423, 642, 939, 853, 822, 545, 523, 657, 551, 656, 598, 159, 686, 641, 904, 480, 55, 1076, 783, 471, 676, 556, 553, 315, 767, 317, 591, 583, 675, 678, 459, 764, 836, 841, 783, 958, 681, 761, 190, 812, 396, 401, 706, 686, 259, 407, 882, 674, 535, 428, 582, 661, 665, 643, 681, 765, 454, 707, 655, 377, 156, 1111, 689, 646, 633, 482, 553, 841, 720, 981, 718, 620, 641, 216]
parametric_image = cle.replace_intensities(label_image, area)

cle.imshow(parametric_image, colorbar=True)
../_images/trailer_24_0.png

Classification#

To better understand the internal structure of tissues, but also to correct for artifacts in image processing workflows, we can classify cells, for example according to their size and shape.

object_classifier = apoc.ObjectClassifier('../../data/blobs_classifier.cl')
classification_image = object_classifier.predict(label_image, cropped_image)

cle.imshow(classification_image, labels=True)
../_images/trailer_26_0.png
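
As a quick sanity check of the classification result, we can count how many pixels were assigned to each class. The sketch below assumes the classification image can be pulled back from GPU memory as a numpy array with cle.pull; class 0 corresponds to the background.

# count pixels per class
class_ids, pixel_counts = np.unique(cle.pull(classification_image), return_counts=True)
print(dict(zip(class_ids.astype(int).tolist(), pixel_counts.tolist())))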