Prompting bio-image analysis tasks using LangChain#
In this notebook we demonstrate how to prompt ChatGPT via LangChain to execute bio-image analysis tasks.
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.tools import tool
from skimage.io import imread
from napari_segment_blobs_and_things_with_membranes import voronoi_otsu_labeling
import stackview
To accomplish this, we need image storage. To keep it simple, we use a dictionary.
image_storage = {}
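The idea is that every tool reads from and writes to this shared dictionary, keyed by the image name. A minimal sketch of the pattern, with hypothetical function and file names:

```python
# hypothetical shared store, keyed by filename
store = {}

def save(name, data):
    """Store data under the given name."""
    store[name] = data
    return "stored " + name

def load(name):
    """Retrieve previously stored data."""
    return store[name]

save("blobs.tif", [[0, 1], [1, 0]])
print(load("blobs.tif"))  # [[0, 1], [1, 0]]
```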
To demonstrate bio-image analysis in plain English, we define common bio-image analysis functions for loading images, segmenting and counting objects, and showing results.
tools = []
@tools.append
@tool
def load_image(filename: str):
    """Useful for loading an image file and storing it."""
    print("loading", filename)
    image = imread(filename)
    image_storage[filename] = image
    return "The image is now stored as " + filename
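As a side note, the `@tools.append` line works because decorators are just function calls: `list.append` stores the (already `tool`-wrapped) function in the list and rebinds the name to `None`, which we never use afterwards. A minimal sketch of this registration pattern, with hypothetical names:

```python
registry = []

@registry.append
def shout(text):
    """Upper-cases the given text."""
    return text.upper()

# list.append returns None, so the name is rebound to None ...
print(shout)  # None

# ... but the function itself is safely stored in the list:
print(registry[0]("hello"))  # HELLO
```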
@tools.append
@tool
def segment_bright_objects(image_name):
    """Useful for segmenting bright objects in an image that has been loaded and stored before."""
    print("segmenting", image_name)
    image = image_storage[image_name]
    label_image = voronoi_otsu_labeling(image, spot_sigma=4)
    label_image_name = "segmented_" + image_name
    image_storage[label_image_name] = label_image
    return "The segmented image has been stored as " + label_image_name
@tools.append
@tool
def show_image(image_name):
    """Useful for showing an image that has been loaded and stored before."""
    print("showing", image_name)
    image = image_storage[image_name]
    display(stackview.insight(image))
    return "The image " + image_name + " is shown above."
@tools.append
@tool
def count_objects(image_name):
    """Useful for counting objects in a segmented image that has been loaded and stored before."""
    label_image = image_storage[image_name]
    num_labels = label_image.max()
    print("counting labels in", image_name, ":", num_labels)
    return f"The label image {image_name} contains {num_labels} labels."
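A brief aside on why `label_image.max()` counts objects: `voronoi_otsu_labeling` produces a label image with consecutive integer labels 1..N on a background of 0, so the maximum label equals the number of objects. A minimal sketch with a hypothetical label image:

```python
import numpy as np

# hypothetical 2D label image: background is 0, three objects labeled 1..3
label_image = np.array([
    [0, 1, 1, 0],
    [0, 0, 2, 2],
    [3, 3, 0, 0],
])

# with consecutive labels, the maximum label equals the object count
num_labels = int(label_image.max())
print(num_labels)  # 3
```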
We create some memory and a large language model based on OpenAI's ChatGPT.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(temperature=0)
Given the list of tools, the large language model and the memory, we can create an agent.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory
)
This agent can then respond to prompts.
agent.run("Please load the image ../../data/blobs.tif and show it.")
loading ../../data/blobs.tif
showing ../../data/blobs.tif
'The image ../../data/blobs.tif is shown above.'
agent.run("Please segment the image ../../data/blobs.tif .")
segmenting ../../data/blobs.tif
'The segmented image has been stored as segmented_../../data/blobs.tif'
agent.run("Please show the segmented ../../data/blobs.tif image.")
showing segmented_../../data/blobs.tif
'The segmented image ../../data/blobs.tif is shown above.'
agent.run("How many objects are there in the segmented ../../data/blobs.tif image?")
counting labels in segmented_../../data/blobs.tif : 64
'The segmented ../../data/blobs.tif image contains 64 objects.'
Chaining operations#
We can also chain these operations in a single sentence, and the agent
will figure out on its own how to do this.
# empty the image storage and start from scratch
image_storage = {}
agent.run("""
Please load the image ../../data/blobs.tif,
segment bright objects in it,
count them and
show the segmentation result.
""")
loading ../../data/blobs.tif
segmenting ../../data/blobs.tif
counting labels in segmented_../../data/blobs.tif : 64
showing segmented_../../data/blobs.tif
'The segmented image has been shown.'
agent.run("How many objects were there?")
counting labels in segmented_../../data/blobs.tif : 64
'The segmented image contains 64 objects.'