Bulk Ingestion Tutorial

life sciences · single cell (soma) · tutorials · python · ingestion · ETL
How to perform large-scale ingestion of H5AD files into a TileDB-SOMA Experiment

Overview

Performing large-scale ingestion of tens or more H5AD files into TileDB-SOMA requires a different process than single-dataset ingestion. The changes are necessary to complete the operation with minimal runtime and to ensure that the result accurately reflects the source data.

This tutorial assumes you are familiar with the Single-Cell Data Ingestion tutorial and builds upon that knowledge. If you have not yet read it, do so first.

The SOMA ingestion API is compatible with a wide variety of distributed and parallel computing frameworks. This tutorial uses the Python concurrent.futures API, whose ProcessPoolExecutor provides multiprocessing (for more information, refer to the Python concurrent.futures documentation).

Note

TileDB-SOMA versions 1.16.2 and above include support for the features described in this tutorial.

Summary of approach

The recommended approach is as follows:

  1. Preparation:
    • Identify your experiment location—that is, where you will store the TileDB-SOMA data.
    • Identify your source H5AD files, and ensure they share a common schema (for example, consistent obs column names and data types).
  2. If it does not yet exist, create the TileDB-SOMA experiment.
  3. Create an ingestion registration map, which you can think of as an ingestion plan built from the source H5ADs and the target experiment. This step will scan all H5ADs, collecting information about their shape and data types.
  4. Prepare the TileDB-SOMA experiment schema as shown by the registration map.
  5. Read and ingest all H5AD files.

This process creates a new SOMA experiment from H5AD files, or appends more H5AD files to an existing SOMA experiment (by skipping step 2).

Steps 1-4 must run sequentially. Step 5 may run in parallel, because the registration map contains all the information each concurrent ingestion worker needs.

Step 1: Preparation

As documented in the Single-Cell Data Ingestion tutorial, decide the TileDB-SOMA experiment location (a URL or file path), and the location of all source H5ADs.

If you are using a distributed-computing framework, ensure that the source H5AD files are accessible to every worker node.
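The later snippets in this tutorial assume an experiment URI, a list of source H5AD paths, and a soma_context. As a minimal sketch, with a hypothetical bucket and file names:

```python
# Hypothetical locations; replace the bucket and file names with your own.
experiment_uri = "s3://my-bucket/soma/my-experiment"
h5ad_paths = [
    f"s3://my-bucket/h5ads/sample-{i:03d}.h5ad" for i in range(1, 26)
]
```

A soma_context created with tiledbsoma.SOMATileDBContext() (configured as needed, for example, with the S3 region of your bucket) is assumed alongside these.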

Step 2: Create the TileDB-SOMA experiment

One way to perform this step is to call the tiledbsoma.io.from_h5ad() function (or tiledbsoma.io.from_anndata()) with ingest_mode="schema_only". When called in this mode, the function creates any necessary elements in the TileDB-SOMA experiment without writing data. If you are ingesting more H5AD files that require new TileDB-SOMA measurements or X layers, it will also create those.

For example:

tiledbsoma.io.from_h5ad(
    experiment_uri,
    "/path/to/data.h5ad",
    measurement_name="RNA",
    obs_id_name="obs_id",
    var_id_name="var_id",
    ingest_mode="schema_only",
    context=soma_context,
)

You may also specify other optional arguments, such as the X layer name—visit the tiledbsoma.io.from_h5ad documentation for more details.

Step 3: Create the registration map

The registration map is a summary of all datasets for parallel workers to ingest, and includes all the information they need to ingest H5AD files independently. This step must occur after you create the experiment.

Example:

registration_mapping = tiledbsoma.io.register_h5ads(
    experiment_uri,
    h5ad_paths,
    measurement_name=args.measurement_name,
    obs_field_name="obs_id",
    var_field_name="var_id",
    context=soma_context,
)

Creating a registration map entails scanning all H5ADs to find information affecting the experiment schema (shape, among others). The tiledbsoma.io.register_h5ads() function has an optional use_multiprocessing argument, which will offer some performance benefit when used on hosts with enough CPU and memory resources.

Example:

registration_mapping = tiledbsoma.io.register_h5ads(
    experiment_uri,
    h5ad_paths,
    measurement_name=args.measurement_name,
    obs_field_name="obs_id",
    var_field_name="var_id",
    context=soma_context,
    use_multiprocessing=True,  # performance improvement when reading H5AD files
)

Step 4: Prepare the experiment

Once the experiment and registration map are available, use the registration map's prepare_experiment() method to evolve the dataframe and array schemas to reflect the pending ingestion. For example, the X matrices in the experiment need resizing, and the dictionary (categorical) columns in the obs and var dataframes need updating.

Example:

registration_mapping.prepare_experiment(experiment_uri, context=soma_context)

Step 5: Ingest all H5AD/AnnData files

Now that you have created and prepared the experiment, you can start ingesting all your H5AD files. This step can execute serially across all H5ADs, or concurrently with a multiprocessing or distributed-computing framework. For each dataset, a worker should call tiledbsoma.io.from_h5ad() or tiledbsoma.io.from_anndata(), supplying the registration map as an argument to guide the ingestion.

For example, an individual worker can call the following:

tiledbsoma.io.from_h5ad(
    experiment_uri,
    "/path/to/dataset.h5ad",
    measurement_name=measurement_name,
    obs_id_name=obs_id_name,
    var_id_name=var_id_name,
    X_layer_name=X_layer_name,
    uns_keys=(),
    registration_mapping=registration_mapping,
    context=soma_context,
)

Because the registration map is a large data structure, it is inefficient to send it to each worker. For example, if you use the Python concurrent.futures.ProcessPoolExecutor class, each worker task would require a copy of the complete registration map (which may be gigabytes in size). To solve this problem, the registration map has a helper method, .subset_for_h5ad(), which subsets it to just the information required for a single dataset:

subset_registration_mapping = registration_mapping.subset_for_h5ad(
    "path/to/dataset.h5ad"
)
tiledbsoma.io.from_h5ad(
    experiment_uri,
    "/path/to/dataset.h5ad",
    measurement_name=measurement_name,
    obs_id_name=obs_id_name,
    var_id_name=var_id_name,
    X_layer_name=X_layer_name,
    uns_keys=(),
    registration_mapping=subset_registration_mapping,  # call with the dataset-specific subset
    context=soma_context,
)

Combining this with a concurrent.futures.ProcessPoolExecutor, your code may look similar to the following:

from concurrent.futures import ProcessPoolExecutor
from itertools import repeat

import tiledbsoma


def worker_fn(
    experiment_uri,
    h5ad_path,
    measurement_name,
    obs_id_name,
    var_id_name,
    registration_mapping,
):
    context = (
        tiledbsoma.SOMATileDBContext()
    )  # configure as needed, for example, S3 region
    tiledbsoma.io.from_h5ad(
        experiment_uri,
        h5ad_path,
        measurement_name=measurement_name,
        obs_id_name=obs_id_name,
        var_id_name=var_id_name,
        registration_mapping=registration_mapping,
        context=context,
    )


with ProcessPoolExecutor(max_workers=4) as executor:
    results = list(
        executor.map(
            worker_fn,
            repeat(experiment_uri),
            h5ad_paths,
            repeat(measurement_name),
            repeat("obs_id"),
            repeat("var_id"),
            (
                registration_mapping.subset_for_h5ad(h5ad_path)
                for h5ad_path in h5ad_paths
            ),
        )
    )

All parts together

The TileDB-SOMA repository contains a demonstration script putting this all together:

ingest_h5ads.py

Other considerations

Considerations related to total memory usage:

  1. Creating the registration map with tiledbsoma.io.register_anndatas() or tiledbsoma.io.register_h5ads() consumes memory proportional to the total number of obs values (also known as n_obs) across the combined H5ADs and experiment. The registration map itself requires approximately 100-200 bytes per observation on disk, and building it (which requires TileDB-SOMA to read every H5AD file) takes approximately 2x-3x that (200-600 bytes per observation) in memory. For example, a one-million-cell ingestion would require about 500 MiB of RAM in the registration process, resulting in an approximately 100-200 MiB data structure.
  2. As noted earlier, it is expensive to send the registration map to each worker, and you should use the subset_for_h5ad() or subset_for_anndata() methods to reduce the parameter size.
  3. Each worker will need enough memory to load the H5AD file into memory, and then write it to the SOMA experiment. Using too many workers per host can result in an out-of-memory condition. The total memory required depends on the number of workers on each host and the per-worker memory consumption required to load each AnnData file.
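As a rough check on item 1, the per-observation guideline translates directly into a memory range; the helper below is illustrative only and not part of the TileDB-SOMA API:

```python
def registration_memory_range(n_obs, low_bytes=200, high_bytes=600):
    """Rough in-memory range (in bytes) for building the registration
    map, per the 200-600 bytes/observation guideline above."""
    return n_obs * low_bytes, n_obs * high_bytes

low, high = registration_memory_range(1_000_000)
# For one million cells: 200 MB to 600 MB, consistent with the
# roughly 500 MiB figure quoted above.
```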

If all your data is in H5AD files suitable for ingestion, then the tiledbsoma.io.register_h5ads() and tiledbsoma.io.from_h5ad() methods are the most convenient to use. If your workflow requires modifying each AnnData object before ingestion, and you prefer to do that inline, tiledbsoma.io.register_anndatas() and tiledbsoma.io.from_anndata() are available. These functions operate on AnnData objects already loaded into memory and, as a result, require care not to exhaust host memory.

  • tiledbsoma.io.register_anndatas() will accept an iterable producing AnnData (that is, Iterable[AnnData]), so you can lazy-open each AnnData with a Python generator.
  • tiledbsoma.io.from_anndata() will accept an AnnData opened in “backed” mode.
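For the first bullet, a generator keeps only one AnnData resident at a time during registration. A minimal sketch, assuming the anndata package is installed and that each file fits in memory individually:

```python
def iter_anndatas(h5ad_paths):
    # Import lazily and read one file per iteration, so only the
    # AnnData currently being registered is held in memory.
    import anndata

    for path in h5ad_paths:
        yield anndata.read_h5ad(path)
```

You could then pass iter_anndatas(h5ad_paths) to tiledbsoma.io.register_anndatas() in place of a fully materialized list of AnnData objects.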