Quick start with micro-sam#
This notebook shows the very basics necessary to segment an image using micro-sam.
Installation#
You can install micro-sam into a conda environment like this. If you have never worked with conda environments before, consider reading this blog post first.
mamba install -y -q -c conda-forge micro_sam
For result visualization we use stackview, which can be installed using pip.
pip install stackview
First we import the required libraries.
from micro_sam.automatic_segmentation import get_predictor_and_segmenter, automatic_instance_segmentation
from skimage.data import cells3d
import stackview
We load a 2D slice of an example image from the scikit-image library.
image = cells3d()[30, 0]  # z-plane 30, channel 0 (membranes)
stackview.insight(image)
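The indexing above works because `cells3d()` returns a 4D stack of shape `(z, channel, y, x)`; `[30, 0]` selects z-plane 30 of the membrane channel, leaving a plain 2D image. A minimal sketch of the slicing, using a dummy array of the same shape so it runs without downloading the dataset:

```python
import numpy as np

# Dummy array with the same shape as cells3d(): (z, channel, y, x).
stack = np.zeros((60, 2, 256, 256), dtype=np.uint16)

# [30, 0] picks z-plane 30 of channel 0, yielding a single 2D image.
image = stack[30, 0]
print(image.shape)  # → (256, 256)
```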
Loading a pre-trained micro-sam model and applying it to an image takes just two lines of Python code:
# Load model
predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm")
# Apply model
label_image = automatic_instance_segmentation(predictor=predictor, segmenter=segmenter, input_path=image)
# Visualize result
stackview.insight(label_image)
Compute Image Embeddings 2D: 100%|███████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.50it/s]
Initialize instance segmentation with decoder: 100%|█████████████████████████████████████| 1/1 [00:00<00:00, 5.14it/s]
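The returned label image assigns a distinct integer ID to each segmented instance, with 0 marking the background, so counting the segmented cells is a single numpy call. A sketch with a small dummy label image standing in for the real result:

```python
import numpy as np

# Dummy label image in place of the real segmentation result:
# 0 is background, each positive integer marks one instance.
label_image = np.array([[0, 1, 1],
                        [2, 2, 0],
                        [0, 3, 3]])

ids = np.unique(label_image)
n_instances = len(ids[ids != 0])  # drop the background ID
print(n_instances)  # → 3
```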
We can also quickly show the result using an animated curtain.
stackview.animate_curtain(image, label_image)
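If you want to keep the segmentation for later analysis, you can write the label image to disk, for example as a 16-bit TIFF via `skimage.io.imsave` (saving is not part of this notebook's workflow; the file name and dummy data below are assumptions for illustration):

```python
import numpy as np
from skimage.io import imsave

# Dummy label image standing in for the segmentation result.
label_image = np.zeros((64, 64), dtype=np.uint16)
label_image[10:20, 10:20] = 1

# Save as 16-bit TIFF so the integer label IDs are preserved exactly;
# check_contrast=False suppresses the low-contrast warning for label data.
imsave("segmentation.tif", label_image, check_contrast=False)
```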