Tracing memory consumption

When setting up complex workflows, it might make sense to take a look at memory consumption. In interactive environments, the Windows Task Manager can be used to see how busy GPU memory is. That might be cumbersome for scripting. When using an NVIDIA GPU, the following procedure can be used to debug workflow memory consumption.

import numpy as np
import pyclesperanto_prototype as cle

# selecting a device returns it; in a notebook, the returned device is displayed
cle.select_device()

<NVIDIA GeForce RTX 3050 Ti Laptop GPU on Platform: NVIDIA CUDA (1 refs)>

For monitoring memory consumption, one can use nvidia-smi, a command-line tool that prints how much memory of a given GPU is currently allocated by any application:

!nvidia-smi --query-gpu=memory.used --format=csv
memory.used [MiB]
178 MiB
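For scripted monitoring, the same query can be issued from Python and its output parsed. The sketch below assumes the CSV format shown above; the helper names parse_memory_used and gpu_memory_used_mib are made up for this example.

```python
import subprocess

def parse_memory_used(csv_text):
    """Parse the output of
    `nvidia-smi --query-gpu=memory.used --format=csv`
    and return the used memory in MiB as an integer."""
    lines = csv_text.strip().splitlines()
    # first line is the header "memory.used [MiB]",
    # second line is a value such as "178 MiB"
    value_line = lines[1]
    return int(value_line.split()[0])

def gpu_memory_used_mib():
    """Query the used memory of the first GPU via nvidia-smi."""
    output = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv"],
        text=True)
    return parse_memory_used(output)
```

Calling gpu_memory_used_mib() before and after a workflow step then allows logging memory usage without leaving Python.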

If we then run an operation on the GPU and check memory consumption again, we should see an increase. Note that cle.gaussian_blur implicitly pushes the numpy input image to GPU memory before processing it.

image = np.random.random((1024, 1024, 100))

blurred = cle.gaussian_blur(image)
!nvidia-smi --query-gpu=memory.used --format=csv
memory.used [MiB]
580 MiB

Python's del command removes a variable and releases its reference to the underlying memory. Note: The memory behind the variable may not be freed immediately, depending on how busy the system is at the moment.

del blurred
!nvidia-smi --query-gpu=memory.used --format=csv
memory.used [MiB]
180 MiB
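As a sketch of why freeing can be delayed: del only removes one name; the underlying object is released once no reference to it remains (and, for GPU buffers, once the driver actually reclaims the memory). The Buffer class below is a hypothetical stand-in for a GPU buffer, used here to observe object lifetime with weakref:

```python
import weakref

class Buffer:
    """Stand-in for a GPU buffer object (illustration only)."""
    pass

buf = Buffer()
alias = buf                 # a second reference to the same object
probe = weakref.ref(buf)    # weak reference: does not keep the object alive

del buf                     # removes the name 'buf' only
print(probe() is not None)  # the object survives, 'alias' still refers to it

del alias                   # last reference gone; CPython frees the object
print(probe() is None)      # the weak reference is now dead
```

In CPython, reference counting frees the object as soon as the last reference disappears; for GPU memory reported by nvidia-smi, the actual release additionally depends on the driver, which is why the reading may lag behind the del statement.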