clEsperanto is a project between multiple bio-image analysis ecosystems aiming at removing language barriers. It is based on OpenCL, an open standard for programming graphics processing units (GPUs, and more), and its Python wrapper pyopencl. Under the hood, it uses processing kernels originating from the clij project.

GPU Initialization

We’ll start with initializing the GPU and checking which GPUs are installed:

import pyclesperanto_prototype as cle
import matplotlib.pyplot as plt

# list available devices
cle.available_device_names()
['NVIDIA GeForce RTX 3050 Ti Laptop GPU', 'gfx1035']
# select a specific device using only a part of its name
cle.select_device('gfx1035')
<gfx1035 on Platform: AMD Accelerated Parallel Processing (2 refs)>
# check which device is used right now
cle.get_device()
<gfx1035 on Platform: AMD Accelerated Parallel Processing (2 refs)>

Processing images

For loading image data, we use scikit-image as usual:

from skimage.io import imread, imshow

image = imread("../../data/blobs.tif")
imshow(image)
<matplotlib.image.AxesImage at 0x1c1be677640>

The cle gateway has all methods you need; it does not have sub-packages:

# noise removal
blurred = cle.gaussian_blur(image, sigma_x=1, sigma_y=1)
blurred
cle image: shape (254, 256), size 254.0 kB
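Conceptually, a Gaussian blur convolves the image with a Gaussian kernel; because the kernel is separable, rows and columns can be filtered independently. The following is a minimal CPU sketch of that idea in plain NumPy, not pyclesperanto’s actual GPU kernel; the toy image and the 3-sigma truncation are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    # truncate the kernel at 3 sigma, a common default
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur_cpu(image, sigma):
    # a Gaussian is separable: blur along rows, then along columns
    k = gaussian_kernel_1d(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

noisy = np.zeros((9, 9))
noisy[4, 4] = 1.0  # a single bright pixel
smooth = gaussian_blur_cpu(noisy, sigma=1)
print(smooth.shape)  # (9, 9); total intensity stays 1.0
```

Because the normalized kernel sums to one, the blur spreads the bright pixel out without changing the total intensity.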
# binarization
binary = cle.threshold_otsu(blurred)
binary
cle image: shape (254, 256), size 63.5 kB
# labeling
labels = cle.connected_components_labeling_box(binary)
labels
cle image: shape (254, 256), size 254.0 kB
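Connected-components labeling assigns every connected group of foreground pixels its own integer label; “box” refers to the 8-connected neighborhood in 2D. A small breadth-first-search sketch in plain Python/NumPy, illustrating the idea rather than the GPU algorithm:

```python
import numpy as np
from collections import deque

def label_components(binary):
    # "box" neighborhood: all 8 surrounding pixels count as connected
    labels = np.zeros(binary.shape, dtype=np.uint32)
    current = 0
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                # found an unlabeled foreground pixel: start a new object
                current += 1
                labels[y, x] = current
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and binary[ny, nx] and labels[ny, nx] == 0:
                                labels[ny, nx] = current
                                queue.append((ny, nx))
    return labels

binary = np.array([[1, 1, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 1]], dtype=bool)
labels = label_components(binary)
print(labels.max())  # two separate objects -> 2
```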
# visualize results
imshow(labels)
C:\Users\haase\mambaforge\envs\bio39\lib\site-packages\skimage\io\_plugins\ UserWarning: Low image data range; displaying image with stretched contrast.
  lo, hi, cmap = _get_display_range(image)
<matplotlib.image.AxesImage at 0x1c1be77e910>

cle also comes with an imshow function that allows, for example, showing label images more conveniently:

cle.imshow(labels, labels=True)

One can also determine label edges and blend them over the image.

label_edges = cle.detect_label_edges(labels) * labels

cle.imshow(image, continue_drawing=True, color_map="Greys_r")
cle.imshow(label_edges, labels=True, alpha=0.5)
C:\Users\haase\mambaforge\envs\bio39\lib\site-packages\pyclesperanto_prototype\_tier9\ UserWarning: The imshow parameter color_map is deprecated. Use colormap instead.
  warnings.warn("The imshow parameter color_map is deprecated. Use colormap instead.")
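Label-edge detection marks pixels that have at least one neighbor carrying a different label. A hedged NumPy equivalent that compares each pixel with its four direct neighbors (a simplification of detect_label_edges, shown on a tiny made-up label image):

```python
import numpy as np

def detect_label_edges_cpu(labels):
    # a pixel is an edge pixel if any 4-neighbor has a different label
    edges = np.zeros(labels.shape, dtype=bool)
    edges[:-1, :] |= labels[:-1, :] != labels[1:, :]   # compare with pixel below
    edges[1:, :]  |= labels[1:, :]  != labels[:-1, :]  # compare with pixel above
    edges[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # compare with right neighbor
    edges[:, 1:]  |= labels[:, 1:]  != labels[:, :-1]  # compare with left neighbor
    return edges

labels = np.array([[1, 1, 0],
                   [1, 1, 0],
                   [0, 0, 2]])
edges = detect_label_edges_cpu(labels)

# multiplying with the label image restores the label value on edge pixels,
# analogous to `label_edges = cle.detect_label_edges(labels) * labels` above
label_edges = edges * labels
print(label_edges)
```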

To show multiple images side by side, it may make sense to increase the figure size and combine multiple sub-plots:

fig, axs = plt.subplots(1, 2, figsize=(12,12))

# left plot
cle.imshow(image, color_map="Greys_r", plot=axs[0])

# right plot
cle.imshow(image, alpha=0.5, continue_drawing=True, color_map="Greys_r", plot=axs[1])
cle.imshow(label_edges, labels=True, alpha=0.5, plot=axs[1])

Some of these operations, e.g. voronoi_otsu_labeling, are in fact shortcuts that combine a number of operations, such as Gaussian blur, Otsu-thresholding and Voronoi-labeling, to go from a raw image to a label image directly:

labels = cle.voronoi_otsu_labeling(image, spot_sigma=3.5, outline_sigma=1)
labels
cle image: shape (254, 256), size 254.0 kB

Also, just a reminder: read the documentation of methods you haven’t used before:

print(cle.voronoi_otsu_labeling.__doc__)

Labels objects directly from grey-value images.

    The two sigma parameters allow tuning the segmentation result. Under the hood,
    this filter applies two Gaussian blurs, spot detection, Otsu-thresholding [2] and Voronoi-labeling [3]. The
    thresholded binary image is flooded using the Voronoi tesselation approach starting from the found local maxima.

    * This operation assumes input images are isotropic.

    Parameters
    ----------
    source : Image
        Input grey-value image
    label_image_destination : Image, optional
        Output image
    spot_sigma : float, optional
        controls how close detected cells can be
    outline_sigma : float, optional
        controls how precise segmented objects are outlined.

    Examples
    --------
    >>> import pyclesperanto_prototype as cle
    >>> cle.voronoi_otsu_labeling(source, label_image_destination, 10, 2)

    References
    ----------
    .. [1]
    .. [2]
    .. [3]


In pyclesperanto, images are handled in the random access memory (RAM) of your GPU. If you want to use other libraries that process images in CPU memory, the image must be transferred back. Usually, this happens transparently for the user, e.g. when using scikit-image for measuring region properties:

from skimage.measure import regionprops

statistics = regionprops(labels)

import numpy as np
np.mean([s.area for s in statistics])
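The same mean area can also be computed directly from a label image with NumPy: the pixel count per label is just a bincount over the flattened image. A small sketch with a made-up toy label image standing in for the blobs segmentation:

```python
import numpy as np

# toy label image: background 0 and three labeled objects
labels = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 2],
                   [3, 0, 2, 2]])

# bincount counts how often each label value occurs;
# index 0 is the background and is skipped
areas = np.bincount(labels.ravel())[1:]
print(areas)         # [3 3 1] pixels per object
print(areas.mean())  # mean object area
```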

If you want to explicitly convert your image, e.g. into a numpy array, you can do it like this:

np.asarray(labels)
array([[ 0,  0,  0, ..., 62, 62, 62],
       [ 0,  0,  0, ..., 62, 62, 62],
       [ 0,  0,  0, ..., 62, 62, 62],
       [ 0,  0,  0, ...,  0,  0,  0],
       [ 0,  0,  0, ...,  0,  0,  0],
       [ 0,  0,  0, ...,  0,  0,  0]], dtype=uint32)

Memory management

In Jupyter notebooks, variables are kept alive as long as the notebook kernel is running. Thus, your GPU memory may fill up. If you don’t need an image anymore, remove it from memory using del. It will then be removed from GPU memory thanks to pyopencl magic.

del image
del blurred
del binary
del labels

Napari integration

For processing 3D image data, it is recommended to use a 3D viewer such as napari:

import napari

# start viewer
viewer = napari.Viewer(ndisplay=3)

# load image
from skimage.io import imread
image = imread("../../data/Lund_000500_resampled-cropped.tif")

# add image to the viewer
viewer.add_image(image)
<Image layer 'image' at 0x1c1bf3506d0>

In napari, you can change the view to see different layers side-by-side.

viewer.grid.enabled = True

You can also turn the camera:

viewer.camera.angles = (10, -30, -30)

Next, we process the image and put the segmented image into napari as a new layer.

background_subtracted = cle.top_hat_box(image, radius_x=10, radius_y=10, radius_z=10)

labels = cle.voronoi_otsu_labeling(background_subtracted, spot_sigma=2, outline_sigma=1)

# show the label image in the viewer
viewer.add_labels(labels)
<Labels layer 'labels' at 0x1c1d626d310>