Batched Ingestion of Biomedical Images

```python
import tiledb
from tiledb.cloud.bioimg.ingestion import ingest
```
You can run this tutorial only on TileDB Cloud. However, TileDB Cloud has a free tier, so we strongly recommend that you sign up and run everything there; no installation or deployment is required.
Efficiently handling massive collections of biomedical images, like those from high-resolution microscopy or large-scale medical scans, poses a significant challenge. This tutorial focuses on optimizing the ingestion of these images by using a technique called batched ingestion with task graphs in a cluster environment.
Imagine you’re a pathologist analyzing hundreds of tissue slides. Instead of examining each slide individually, you might group similar ones together for a more streamlined workflow. That’s essentially what batching does for data ingestion. It groups images into “batches” for more efficient processing.
Why is batching crucial for biomedical images?
- Reduced overhead: Processing images in batches minimizes the per-file cost of individual transfers and computations, leading to faster processing times.
- Optimized resources: Batching helps you tailor the image load to your cluster’s capacity, maximizing resource use and preventing bottlenecks.
- Increased throughput: By processing multiple images concurrently within a batch, you can significantly increase how many images your pipeline can handle.
This tutorial guides you through implementing batched ingestion for your biomedical images by using task graphs in a cluster environment. You will explore how to build task graphs that efficiently handle batches of images, distribute work across multiple runners, and optimize your ingestion pipeline.
Setup
While you can run this tutorial locally, note that it relies on remote resources to run correctly.
You must create a REST API token and create an environment variable named $TILEDB_REST_TOKEN set to the value of your generated token. However, this is not necessary when running this notebook inside a TileDB workspace, where the API token is generated and configured for you automatically.
As a first step, you need to import the TileDB client library.
Next, log in to TileDB.
Cloud client configuration
To configure the cloud client, you need to set up the following parameters:
- Namespace: The namespace where the DAG will run.
- Access Credentials Name: The Access Credentials Name (ACN) registered in TileDB (for example, an AWS ARN-type credential).
- Resources: The compute resources (such as CPU and memory) to allocate to each node in the task graph.
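The parameters above can be collected in a plain dictionary before ingestion. The namespace, ACN, and resource values below are hypothetical placeholders; replace them with your own:

```python
# Hypothetical values -- substitute your own namespace, registered ACN,
# and node sizing appropriate for your cluster.
cloud_config = {
    "namespace": "my-namespace",
    "acn": "my-access-credentials-name",
    "resources": {"cpu": "8", "memory": "16Gi"},
}
```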
Ingestion storage paths
To ingest your data from S3, you need to declare one or more input files by providing either a source path on S3 or a list of S3 absolute paths.
While S3 storage appears to have folders, it uses a key-based object structure and has no notion of a directory like your local filesystem. To simulate a folder, ensure your source path ends with a trailing slash (/). This distinguishes between objects and “folders” within S3.
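For example, the input can be declared either as a single "folder" prefix (with the trailing slash that marks it as a folder-like prefix) or as an explicit list of absolute S3 paths. The bucket and object names below are hypothetical:

```python
# Option 1: a "folder" prefix -- note the trailing slash, which tells
# the ingestion code to treat this key as a folder-like prefix.
source = "s3://my-bucket/images/"

# Option 2: an explicit list of absolute S3 object paths.
sources = [
    "s3://my-bucket/images/slide_001.svs",
    "s3://my-bucket/images/slide_002.svs",
]
```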
Batched ingestion
TileDB-BioImaging’s ingestion function offers extensive customization through a wide range of parameters, and the TileDB team is constantly adding more. For a comprehensive overview of available options and their usage, refer to the TileDB-BioImaging API documentation.
The ingestion function uses a num_of_batches parameter to optimize performance. This parameter enables parallel processing of your data by splitting your dataset into smaller batches and distributing them across multiple runners. This approach significantly enhances ingestion speed and efficiency, especially for large datasets.
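To build intuition for what num_of_batches does, here is a small, self-contained sketch of one way a file list can be partitioned into batches (a round-robin split; the actual partitioning strategy used internally may differ):

```python
def split_into_batches(paths, num_of_batches):
    """Distribute paths round-robin into num_of_batches groups,
    illustrating how batched ingestion partitions work across runners."""
    batches = [[] for _ in range(num_of_batches)]
    for i, path in enumerate(paths):
        batches[i % num_of_batches].append(path)
    return batches

# Hypothetical file names for illustration.
paths = [f"s3://my-bucket/images/slide_{i:03d}.svs" for i in range(10)]
batches = split_into_batches(paths, 4)
# 10 files across 4 batches -> batch sizes [3, 3, 2, 2]
```

Each batch then becomes one node of the task graph, so the runners process batches concurrently instead of handling files one at a time.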
After setting all the parameters, you can run the ingestion with your preferred task-graph name.