Bulk Ingestion Tutorial
Overview
Performing large-scale ingestion of tens or more H5AD files into TileDB-SOMA requires a different process than single-dataset ingestion. Changes are necessary to complete the operation with minimal runtime and to ensure that the result accurately reflects the source data.
This tutorial assumes you are familiar with the Single-Cell Data Ingestion tutorial, and builds upon that knowledge. If you have not yet read that tutorial, do so first.
The SOMA ingestion API is compatible with a wide variety of distributed and parallel computing frameworks. This tutorial uses the Python concurrent.futures API, which provides a multiprocessing interface (for more information, refer to the Python concurrent.futures documentation).
TileDB-SOMA versions 1.16.2 and above include support for the features described in this document.
Summary of approach
The recommended approach is as follows:
- Preparation:
- Identify your experiment location—that is, where you will store the TileDB-SOMA data.
- Identify your source H5AD files, and ensure they share a common schema (for example, obs column names and data types).
- If it does not yet exist, create the TileDB-SOMA experiment.
- Create an ingestion registration map, which you can think of as an ingestion plan built from the source H5ADs and the target experiment. This step will scan all H5ADs, collecting information about their shape and data types.
- Prepare the TileDB-SOMA experiment schema as shown by the registration map.
- Read and ingest all H5AD files.
This process creates a new SOMA experiment from H5AD files, or appends more H5AD files to an existing SOMA experiment (by skipping step 2).
Steps 1-4 must run sequentially. Step 5 may run in parallel, as the registration map has all information required by each concurrent ingestion worker.
Step 1: Preparation
As documented in the Single-Cell Data Ingestion tutorial, decide on the TileDB-SOMA experiment location (a URI or file path) and the location of all source H5AD files.
If you are using a distributed-computing framework, ensure that the source H5AD files are accessible to every worker node.
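For example, a minimal setup might look like the following sketch. The experiment_uri, h5ad_paths, and soma_context names are illustrative (the remaining steps in this tutorial assume them), and the bucket and file paths are placeholders:

import tiledbsoma

# Location of the TileDB-SOMA experiment (a local path or a remote URI).
experiment_uri = "s3://my-bucket/my-experiment"  # placeholder URI

# Source H5AD files; every worker node must be able to read these paths.
h5ad_paths = [
    "/data/dataset-0.h5ad",  # placeholder paths
    "/data/dataset-1.h5ad",
    "/data/dataset-2.h5ad",
]

# A SOMATileDBContext carries storage configuration, for example, the S3 region.
soma_context = tiledbsoma.SOMATileDBContext()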
Step 2: Create the TileDB-SOMA experiment
One way to perform this step is to use the tiledbsoma.io.from_h5ad() (or tiledbsoma.io.from_anndata()) function in schema_only mode. When called in this mode, the function creates any necessary elements in the TileDB-SOMA experiment. If you are ingesting more H5AD files that require new TileDB-SOMA measurements or X layers, it will also create those.
For example:
tiledbsoma.io.from_h5ad(
    experiment_uri,
    "/path/to/data.h5ad",
    measurement_name="RNA",
    obs_id_name="obs_id",
    var_id_name="var_id",
    ingest_mode="schema_only",
    context=soma_context,
)
You may also specify other optional arguments, such as the X layer name; visit the tiledbsoma.io.from_h5ad documentation for more details.
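If the experiment may already exist (for example, when appending, as described in the summary), a sketch like the following runs the schema_only pass only when needed. It assumes tiledbsoma.Experiment.exists() is available in your TileDB-SOMA version, and uses the first source file as a representative schema template:

import tiledbsoma
import tiledbsoma.io

# Create the experiment only if it does not already exist.
if not tiledbsoma.Experiment.exists(experiment_uri, context=soma_context):
    tiledbsoma.io.from_h5ad(
        experiment_uri,
        h5ad_paths[0],  # any source file with the common schema will do
        measurement_name="RNA",
        obs_id_name="obs_id",
        var_id_name="var_id",
        ingest_mode="schema_only",
        context=soma_context,
    )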
Step 3: Create the registration map
The registration map is a summary of all datasets for parallel workers to ingest, and includes all the information they need to ingest H5AD files independently. This step must occur after you create the experiment.
Example:
registration_mapping = tiledbsoma.io.register_h5ads(
    experiment_uri,
    h5ad_paths,
    measurement_name=args.measurement_name,
    obs_field_name="obs_id",
    var_field_name="var_id",
    context=soma_context,
)
Creating a registration map entails scanning all H5ADs to find information affecting the experiment schema (such as shape and data types). The tiledbsoma.io.register_h5ads() function has an optional use_multiprocessing argument, which offers a performance benefit on hosts with sufficient CPU and memory resources.
Example:
registration_mapping = tiledbsoma.io.register_h5ads(
    experiment_uri,
    h5ad_paths,
    measurement_name=args.measurement_name,
    obs_field_name="obs_id",
    var_field_name="var_id",
    context=soma_context,
    use_multiprocessing=True,  # performance improvement when reading H5AD files
)
Step 4: Prepare the experiment
Once the experiment and registration map are available, use prepare_experiment()
to evolve the dataframe and array schemas to reflect the pending ingestion. For example, the shape of the X
matrices in the experiment need resizing, and the dictionary (categorical) columns in the obs
and var
dataframes need updating. The prepare_experiment()
method of the registration map performs these steps.
Example:
registration_mapping.prepare_experiment(experiment_uri, context=soma_context)
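After preparation, you can optionally sanity-check the evolved schema before ingesting. The following sketch assumes a measurement named RNA with an X layer named data:

with tiledbsoma.Experiment.open(experiment_uri, context=soma_context) as exp:
    # The X matrix should now be sized to hold all registered datasets.
    print(exp.ms["RNA"].X["data"].shape)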
Step 5: Ingest all H5AD/AnnData files
Now that you have created and prepared the experiment, you can start ingesting all your H5AD files. This step can execute serially across all H5ADs, or concurrently using a multiprocessing or distributed-computing framework. For each dataset, a worker should call tiledbsoma.io.from_h5ad() or tiledbsoma.io.from_anndata(), supplying the registration map as an argument to guide the dataset ingestion.
For example, an individual worker can call the following:
tiledbsoma.io.from_h5ad(
    experiment_uri,
    "/path/to/dataset.h5ad",
    measurement_name=measurement_name,
    obs_id_name=obs_id_name,
    var_id_name=var_id_name,
    X_layer_name=X_layer_name,
    uns_keys=(),
    registration_mapping=registration_mapping,
    context=soma_context,
)
Because the registration map is a large data structure, it is inefficient to send it in full to each worker. For example, if you use the Python concurrent.futures.ProcessPoolExecutor class, a multiprocessing framework, each worker task would require a copy of the complete registration map (which may be gigabytes in size). To solve this problem, the registration map has a helper method, .subset_for_h5ad(), which subsets it to just the information required for a single dataset:
subset_registration_mapping = registration_mapping.subset_for_h5ad(
    "path/to/dataset.h5ad"
)

tiledbsoma.io.from_h5ad(
    experiment_uri,
    "/path/to/dataset.h5ad",
    measurement_name=measurement_name,
    obs_id_name=obs_id_name,
    var_id_name=var_id_name,
    X_layer_name=X_layer_name,
    uns_keys=(),
    registration_mapping=subset_registration_mapping,  # call with the dataset-specific subset
    context=soma_context,
)
Putting this together with a concurrent.futures.ProcessPoolExecutor, your code can look similar to the following:
import tiledbsoma
import tiledbsoma.io

from concurrent.futures import ProcessPoolExecutor
from itertools import repeat


def worker_fn(
    experiment_uri,
    h5ad_path,
    measurement_name,
    obs_id_name,
    var_id_name,
    registration_mapping,
):
    # Each worker process builds its own context;
    # configure as needed, for example, the S3 region.
    context = tiledbsoma.SOMATileDBContext()
    tiledbsoma.io.from_h5ad(
        experiment_uri,
        h5ad_path,
        measurement_name=measurement_name,
        obs_id_name=obs_id_name,
        var_id_name=var_id_name,
        registration_mapping=registration_mapping,
        context=context,
    )


with ProcessPoolExecutor(max_workers=4) as executor:
    results = list(
        executor.map(
            worker_fn,
            repeat(experiment_uri),
            h5ad_paths,
            repeat(measurement_name),
            repeat("obs_id"),
            repeat("var_id"),
            # A generator of dataset-specific subsets, so each task is sent
            # only the small per-dataset registration map.
            (
                registration_mapping.subset_for_h5ad(h5ad_path)
                for h5ad_path in h5ad_paths
            ),
        )
    )
All parts together
The TileDB-SOMA repository contains a demonstration script putting this all together.
Other considerations
Considerations related to total memory usage:
- Creating the registration map with tiledbsoma.io.register_anndatas() or tiledbsoma.io.register_h5ads() will consume memory proportional to the total number of “obs” values (also known as n_obs) in the combined H5ADs and experiment. You need approximately 100-200 bytes per observation to store the registration map on disk, and approximately 2x-3x that (approximately 200-600 bytes per observation) in memory to create it (so that TileDB-SOMA can read every H5AD file and build the map). For example, a one-million-cell ingestion would require about 500 MiB of RAM in the registration process, resulting in an approximately 100-200 MiB data structure. See the estimation sketch after this list.
- As noted earlier, it is expensive to send the full registration map to each worker; use the subset_for_h5ad() or subset_for_anndata() methods to reduce the parameter size.
- Each worker will need enough memory to load its H5AD file into memory and then write it to the SOMA experiment. Using too many workers per host can result in an out-of-memory condition. The total memory required depends on the number of workers on each host and the per-worker memory needed to load each AnnData file.
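As a sketch of the budgeting described in the first bullet, you can total n_obs across the source files (reading each H5AD in backed mode keeps the scan cheap) and apply the approximate 200-600 bytes-per-observation in-memory figure:

import anndata

# Total n_obs across all source files; backed mode avoids loading X into memory.
total_n_obs = sum(
    anndata.read_h5ad(path, backed="r").n_obs for path in h5ad_paths
)

# Upper end of the in-memory estimate to build the registration map
# (approximately 200-600 bytes per observation, per the bullet above).
est_build_mib = total_n_obs * 600 / 2**20
print(f"~{total_n_obs} observations; budget up to ~{est_build_mib:.0f} MiB")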
If all your data is in H5AD files suitable for ingestion as-is, then the tiledbsoma.io.register_h5ads() and tiledbsoma.io.from_h5ad() functions are the most convenient to use. If your workflow requires modifying each AnnData object in memory immediately before ingestion, tiledbsoma.io.register_anndatas() and tiledbsoma.io.from_anndata() are available. These functions operate on AnnData objects already loaded into memory and, as a result, require care not to exhaust host memory.
tiledbsoma.io.register_anndatas() will accept an iterable producing AnnData objects (that is, Iterable[AnnData]), so you can lazily open each AnnData with a Python generator. tiledbsoma.io.from_anndata() will accept an AnnData opened in “backed” mode.
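For example, a lazy-loading generator might look like the following sketch. The fix_metadata() helper is hypothetical, standing in for whatever per-dataset modification your workflow needs, and the keyword arguments mirror the register_h5ads() call shown earlier:

import anndata
import tiledbsoma.io

def modified_anndatas(paths):
    # Yield one AnnData at a time so only a single dataset is in memory.
    for path in paths:
        ad = anndata.read_h5ad(path)
        fix_metadata(ad)  # hypothetical per-dataset cleanup step
        yield ad

registration_mapping = tiledbsoma.io.register_anndatas(
    experiment_uri,
    modified_anndatas(h5ad_paths),
    measurement_name="RNA",
    obs_field_name="obs_id",
    var_field_name="var_id",
    context=soma_context,
)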