# Processing timelapse data
This notebook demonstrates how to process timelapse data frame-by-frame.
```python
from skimage.io import imread, imsave
import pyclesperanto_prototype as cle
import numpy as np
```
First, we define where the data we want to process is located and where the results should be saved.
```python
input_file = "../../data/CalibZAPWfixed_000154_max.tif"
output_file = "../../data/CalibZAPWfixed_000154_max_labels.tif"
```
Next, we open the dataset and see what image dimensions it has.
```python
timelapse = imread(input_file)
timelapse.shape
```

```
(100, 235, 389)
```
If it is not obvious which dimension is the time dimension, it is recommended to slice the dataset in different directions.
```python
cle.imshow(timelapse[:,:,150])
```
![../_images/442b86f3596c139e62135602bdaac281bfc4ff584cf7186cb7c43320f2364a8e.png](../_images/442b86f3596c139e62135602bdaac281bfc4ff584cf7186cb7c43320f2364a8e.png)
```python
cle.imshow(timelapse[50,:,:])
```
![../_images/ec9eeb7a9de6d9f80660d6ccb54ce388278033abab502f313884091c6179f125.png](../_images/ec9eeb7a9de6d9f80660d6ccb54ce388278033abab502f313884091c6179f125.png)
Obviously, the time dimension is the first dimension (index 0).
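If the time dimension were not the first axis, it could be moved to the front before processing so that `timelapse[t]` yields a single 2D timepoint. This is a minimal sketch using NumPy's `moveaxis`; the array here is a synthetic stand-in, not the dataset from this notebook:

```python
import numpy as np

# hypothetical stack where time happens to be the LAST axis: (y, x, t)
stack_t_last = np.zeros((235, 389, 100))

# move the time axis (here axis -1) to the front: (t, y, x)
stack_t_first = np.moveaxis(stack_t_last, -1, 0)

print(stack_t_first.shape)
```

`moveaxis` returns a view, so this reordering does not copy the image data.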
Next, we define the image processing workflow we want to apply to our dataset. It is recommended to wrap it in a function so that we can reuse it later without copying and pasting everything.
```python
def process_image(image,
                  # define default parameters for the procedure
                  background_subtraction_radius=10,
                  spot_sigma=1,
                  outline_sigma=1):
    """Segment nuclei in an image and return labels"""
    # pre-process image
    background_subtracted = cle.top_hat_box(image,
                                            radius_x=background_subtraction_radius,
                                            radius_y=background_subtraction_radius)

    # segment nuclei
    labels = cle.voronoi_otsu_labeling(background_subtracted,
                                       spot_sigma=spot_sigma,
                                       outline_sigma=outline_sigma)

    return labels
```
```python
# Try out the function
single_timepoint = timelapse[50]

segmented = process_image(single_timepoint)

# Visualize result
cle.imshow(segmented, labels=True)
```
![../_images/ce59c91bb9c93b4882f2dc7c145ab7e13a8b4d46c58793967833f7388be507d1.png](../_images/ce59c91bb9c93b4882f2dc7c145ab7e13a8b4d46c58793967833f7388be507d1.png)
Now that the function works on a single timepoint, we can write a for-loop that goes through the timelapse, applies the procedure to a couple of images, and visualizes the results. Note: we step through the timelapse in increments of 10 images to get an overview.
```python
max_t = timelapse.shape[0]

for t in range(0, max_t, 10):
    label_image = process_image(timelapse[t])
    cle.imshow(label_image, labels=True)
```
![../_images/51c6a9e598304f16847f3f1dfa098cc73ac8ee7cc392437e3ff8dca75bab9b5d.png](../_images/51c6a9e598304f16847f3f1dfa098cc73ac8ee7cc392437e3ff8dca75bab9b5d.png)
![../_images/ab3bcb4e330a3a2eb539f33bf994b3f49b2f28fbe65f8a9536b59154eb9d7af7.png](../_images/ab3bcb4e330a3a2eb539f33bf994b3f49b2f28fbe65f8a9536b59154eb9d7af7.png)
![../_images/c7b5a1ab97a72cff10faab3b10775ab52170e3cdec6c3679bf095e3199fae549.png](../_images/c7b5a1ab97a72cff10faab3b10775ab52170e3cdec6c3679bf095e3199fae549.png)
![../_images/eaeba58f06f14bcd20ed9acc8f1d120e95a15aafe5a33046c09671057ec8b456.png](../_images/eaeba58f06f14bcd20ed9acc8f1d120e95a15aafe5a33046c09671057ec8b456.png)
![../_images/da2a4535ff6354b5b969b5aa928862976ae89f852f8210a45173e5e9a1d57bed.png](../_images/da2a4535ff6354b5b969b5aa928862976ae89f852f8210a45173e5e9a1d57bed.png)
![../_images/ce59c91bb9c93b4882f2dc7c145ab7e13a8b4d46c58793967833f7388be507d1.png](../_images/ce59c91bb9c93b4882f2dc7c145ab7e13a8b4d46c58793967833f7388be507d1.png)
![../_images/782db570f2761bb79105d0bd8d09f1c006db76309e0f73198f731951b7a890f7.png](../_images/782db570f2761bb79105d0bd8d09f1c006db76309e0f73198f731951b7a890f7.png)
![../_images/a67f022642078a4ad8366a1d480c9de82cc69d9b6d359394eecfca0f5a4a310b.png](../_images/a67f022642078a4ad8366a1d480c9de82cc69d9b6d359394eecfca0f5a4a310b.png)
![../_images/cb912bd877a26a7d486345389c8be2a466864e5802172087952024f04f799539.png](../_images/cb912bd877a26a7d486345389c8be2a466864e5802172087952024f04f799539.png)
![../_images/3d0b815c7f235394fb18a80db92db9d3d309ca81049f0be54e4d677eef4d11c8.png](../_images/3d0b815c7f235394fb18a80db92db9d3d309ca81049f0be54e4d677eef4d11c8.png)
When we are convinced that the procedure works, we can apply it to the whole timelapse, collect the results in a list, and save them as a stack to disk.
```python
label_timelapse = []

for t in range(0, max_t):
    label_image = process_image(timelapse[t])
    label_timelapse.append(label_image)

# convert list of 2D images to 3D stack
np_stack = np.asarray(label_timelapse)

# save result to disk
imsave(output_file, np_stack)
```
```
C:\Users\rober\AppData\Local\Temp\ipykernel_27924\219181406.py:10: UserWarning: ../../data/CalibZAPWfixed_000154_max_labels.tif is a low contrast image
  imsave(output_file, np_stack)
```

This warning is harmless: label images typically have low contrast because most pixel values are small integers. It can be silenced by passing `check_contrast=False` to `imsave`.
Just to be sure that everything worked nicely, we reopen the dataset and print its dimensionality. It should be identical to that of the original timelapse dataset.
```python
result = imread(output_file)
result.shape
```

```
(100, 235, 389)
```

```python
timelapse.shape
```

```
(100, 235, 389)
```
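As an additional sanity check, we could count the segmented objects per timepoint. Since `voronoi_otsu_labeling` assigns consecutive integer labels starting at 1, the maximum pixel value of a frame equals its object count. This sketch demonstrates the idea on a small synthetic label stack standing in for the result saved above:

```python
import numpy as np

# synthetic stand-in for a label timelapse: 3 frames of 10x10 pixels
label_stack = np.zeros((3, 10, 10), dtype=np.uint32)
label_stack[0, :2, 0] = [1, 2]           # frame 0 contains labels 1 and 2
label_stack[1, :5, 0] = [1, 2, 3, 4, 5]  # frame 1 contains labels 1..5
# frame 2 stays empty (background only)

# labels are consecutive integers starting at 1, so the per-frame
# maximum equals the number of objects in that frame
counts = [int(frame.max()) for frame in label_stack]
print(counts)  # [2, 5, 0]
```

Plotting such counts over time is a quick way to spot timepoints where the segmentation failed.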