I am continuing to develop a microscopy image analysis pipeline (for virology) that works with 2048x2048 .tiff files, and I am investigating options for visualizing the whole dataset (~2500 tiled images). Bokeh and Datashader seem ideal, since it is important that I can manage polygons and image data simultaneously at multiple scales, and rapidly visualize the polygons I'm creating upstream (advice welcome!).
In the Landsat example you're able to use xarray and rasterio to do exactly what I want from a tiled GeoTIFF file. You note that dask is useful for serving data to Datashader, but I can only find examples for point data in dask DataFrames…
I have tried combining multiple rasterize(hv.RGB((xs, ys, r, g, b, a))) elements loaded into xarrays by rasterio (ignoring the geographic information), but this quickly fails to render everything when zooming, so I think I'm missing the point…
Is there a way to use something similar to rasterio to combine multiple high resolution images together before rasterization with datashader?
(e.g. the code below, though it does not seem to leverage the parallel power of dask:

import dask.array as da  # block() lives in dask.array, not datashader

data = [[image[0, ...], image[1, ...]],
        [image[2, ...], image[3, ...]]]
combined_images = da.block(data))
Alternatively, could I just use a RangeXY stream to load only the image coordinates/resolution currently in view?
Or perhaps I need to figure out a way to re-save my data as a tiled TIFF?
I really like bokeh/holoviews/datashader because it is highly compatible with JupyterLab, which is ideal for helping users understand what is going on under the hood. I've investigated napari with image pyramids, but it does not run seamlessly in Jupyter, and I still face issues downsampling the larger images I've created with dask blocks.
Any advice or guidance will be greatly appreciated.
Thank you in advance. Love your work!