Quick start with micro-sam

This notebook shows the very basics necessary to segment an image using micro-sam.

Installation

You can install micro-sam in a conda environment like this. If you have never worked with conda environments before, consider reading this blog post first.

mamba install -y -q -c conda-forge micro_sam

For result visualization we use stackview, which can be installed using pip.

pip install stackview

First we import the required libraries.

from micro_sam.automatic_segmentation import get_predictor_and_segmenter, automatic_instance_segmentation
from skimage.data import cells3d
import stackview

We load an example 2D image from the scikit-image library: a single z-plane of the membrane channel of the cells3d dataset.

image = cells3d()[30,0]

stackview.insight(image)
shape: (256, 256)
dtype: uint16
size: 128.0 kB
min: 277
max: 44092
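The indexing `[30, 0]` works because `cells3d()` returns a 4D array laid out as (plane, channel, row, column); it selects z-plane 30 of channel 0. A minimal sketch of that indexing, using a stand-in array with the same shape instead of downloading the dataset:

```python
import numpy as np

# Stand-in for cells3d(): a 4D stack shaped (plane, channel, row, column)
volume = np.zeros((60, 2, 256, 256), dtype=np.uint16)

# [30, 0] selects z-plane 30 of channel 0 (the membrane channel in cells3d)
image = volume[30, 0]
print(image.shape)  # (256, 256)
```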

Loading a pre-trained micro-sam model and applying it to an image takes just two lines of Python code:

# Load model
predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm")

# Apply model
label_image = automatic_instance_segmentation(predictor=predictor, segmenter=segmenter, input_path=image)

# Visualize result
stackview.insight(label_image)
Compute Image Embeddings 2D: 100%|███████████████████████████████████████████████████████| 1/1 [00:00<00:00,  1.50it/s]
Initialize instance segmentation with decoder: 100%|█████████████████████████████████████| 1/1 [00:00<00:00,  5.14it/s]
shape: (256, 256)
dtype: int32
size: 256.0 kB
min: 1
max: 39
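The result is a label image: background is 0 and each segmented cell carries its own integer label, so the maximum label (39 above) is the number of detected instances. A minimal sketch of how such a label image can be analyzed with plain NumPy, using a small hypothetical label image in place of the micro-sam result:

```python
import numpy as np

# Hypothetical label image standing in for the micro-sam result:
# two objects labeled 1 and 2 on a zero background
label_image = np.zeros((8, 8), dtype=np.int32)
label_image[1:3, 1:3] = 1
label_image[5:7, 5:7] = 2

n_cells = int(label_image.max())              # labels run from 1..n_cells
areas = np.bincount(label_image.ravel())[1:]  # pixel count per label, skipping background 0
print(n_cells, areas.tolist())  # 2 [4, 4]
```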

We can also quickly show the result using an animated curtain.

stackview.animate_curtain(image, label_image)
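If stackview's interactive widgets are not available, for example outside a Jupyter notebook, a static overlay rendered with matplotlib is one alternative. This sketch uses stand-in arrays for `image` and `label_image`; the file name `overlay.png` is an arbitrary choice:

```python
import os

import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Stand-in data in place of the notebook's image and label_image
image = np.random.rand(256, 256)
label_image = (image > 0.5).astype(np.int32)

fig, ax = plt.subplots()
ax.imshow(image, cmap="gray")
# Mask out the background (label 0) so only labeled regions are tinted
ax.imshow(np.ma.masked_equal(label_image, 0), cmap="tab20", alpha=0.5)
ax.axis("off")
fig.savefig("overlay.png")
saved = os.path.exists("overlay.png")
```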