Add New Measurements to an Existing SOMA Experiment

Tags: life sciences, single cell (soma), tutorials, python, updates
Learn how to add new measurements to an existing SOMA experiment to organize datasets that share the same observations but differ in their features.
Warning

This feature is currently limited to Python and requires TileDB-SOMA version 1.16.2 or later.
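A quick way to confirm your environment meets this requirement is to print the installed version (a minimal check; tiledbsoma exposes its version string as __version__):

import tiledbsoma

# Print the installed TileDB-SOMA version; it must be 1.16.2 or later
print(tiledbsoma.__version__)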

Introduction

TileDB-SOMA supports storing multiple measurements from the same set of observations (typically cells) within a single experiment. This makes it possible to manage multi-modal datasets or any scenario where you have a different set (or a different number) of features measured across the same observations. This tutorial shows the recommended workflow for adding a new SOMA Measurement to an existing SOMA Experiment.

As a practical example, you will process the same single-cell RNA-seq dataset with two different methods for selecting highly variable genes. Each method will store its output as a separate Measurement within the same Experiment.

Setup

Import the necessary libraries. Then load the standard pbmc3k dataset and simulate two different processing outputs.

import tempfile

import scanpy as sc
import tiledbsoma
import tiledbsoma.io

Define a URI for the SOMA Experiment.

experiment_uri = tempfile.mkdtemp(prefix="multi_aligner_experiment_")
experiment_uri
'/var/folders/nr/1dsl0n155wj7wv083km8t1540000gn/T/multi_aligner_experiment_c_w6k0w0'

Load and preprocess the base dataset.

adata_base = sc.datasets.pbmc3k()

# Preprocess
adata_base.var["mt"] = adata_base.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata_base, qc_vars=["mt"], log1p=True, inplace=True)
sc.pp.filter_genes(adata_base, min_cells=3)

# Ensure obs index has a name
adata_base.obs.index.name = "cell_barcode"

adata_base
AnnData object with n_obs × n_vars = 2700 × 13714
    obs: 'n_genes_by_counts', 'log1p_n_genes_by_counts', 'total_counts', 'log1p_total_counts', 'pct_counts_in_top_50_genes', 'pct_counts_in_top_100_genes', 'pct_counts_in_top_200_genes', 'pct_counts_in_top_500_genes', 'total_counts_mt', 'log1p_total_counts_mt', 'pct_counts_mt'
    var: 'gene_ids', 'mt', 'n_cells_by_counts', 'mean_counts', 'log1p_mean_counts', 'pct_dropout_by_counts', 'total_counts', 'log1p_total_counts', 'n_cells'

Select a specific set of highly variable genes for approach A.

sc.pp.highly_variable_genes(
    adata_base, n_top_genes=1000, subset=False, flavor="cell_ranger"
)

adata_a = adata_base[:, adata_base.var.highly_variable].copy()

# Ensure var index has a name
adata_a.var.index.name = "feature_id_aligner_a"

measurement_name_a = "ApproachA_RNA"
print(f"'{measurement_name_a}' AnnData shape: {adata_a.shape}")
'ApproachA_RNA' AnnData shape: (2700, 1000)

Select a different set of highly variable genes for approach B.

sc.pp.highly_variable_genes(
    adata_base,
    n_top_genes=1500,
    subset=False,
    flavor="seurat_v3",
)

adata_b = adata_base[:, adata_base.var.highly_variable].copy()

# Ensure var index has a name
adata_b.var.index.name = "feature_id_aligner_b"

measurement_name_b = "ApproachB_RNA"
print(f"'{measurement_name_b}' AnnData shape: {adata_b.shape}")
'ApproachB_RNA' AnnData shape: (2700, 1500)

Create the initial Experiment

Create the SOMA Experiment and ingest the output from Approach A.

tiledbsoma.io.from_anndata(
    experiment_uri,
    adata_a,
    measurement_name=measurement_name_a,
)
print(f"Initial Experiment created at: {experiment_uri}")
Initial Experiment created at: /var/folders/nr/1dsl0n155wj7wv083km8t1540000gn/T/multi_aligner_experiment_c_w6k0w0

Now, you have an experiment containing a single measurement (ApproachA_RNA). The next steps focus on adding the ApproachB_RNA data as a new, distinct measurement.
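Before moving on, you can optionally confirm what the Experiment contains. This is a minimal sanity check (exp.ms behaves like a mapping, so its keys are the measurement names):

with tiledbsoma.Experiment.open(experiment_uri) as exp:
    # Expect exactly one measurement at this point: ApproachA_RNA
    print(list(exp.ms.keys()))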

Define the new Measurement’s schema on disk

This is a critical step. Before ingesting data for ApproachB_RNA, define its SOMA structure (including its unique var DataFrame and X array) on disk.

Use tiledbsoma.io.from_anndata() with ingest_mode='schema_only', referencing adata_b.

tiledbsoma.io.from_anndata(
    experiment_uri,
    adata_b,
    measurement_name=measurement_name_b,
    ingest_mode="schema_only",
    # Specify the .var index for the new measurement
    var_id_name=adata_b.var.index.name,
)
'file:///var/folders/nr/1dsl0n155wj7wv083km8t1540000gn/T/multi_aligner_experiment_c_w6k0w0'

This creates the necessary SOMA arrays (experiment.ms['ApproachB_RNA'].var and experiment.ms['ApproachB_RNA'].X['data']) with their schemas derived from adata_b, but without writing the actual data yet.
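To verify that the new measurement is cataloged but still empty, you can check its stored-value count. This is a quick sketch; nnz reports the number of values stored in a sparse array, so it should be zero after a schema-only ingest:

with tiledbsoma.Experiment.open(experiment_uri) as exp:
    # Both measurements are cataloged, but ApproachB_RNA holds no data yet
    print(list(exp.ms.keys()))
    print(exp.ms[measurement_name_b].X["data"].nnz)  # expect 0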

Register the AnnData for the new Measurement

Create a registration_mapping for adata_b. This maps the cell barcodes in adata_b.obs to the existing soma_joinids in experiment.obs (since the cells are the same as in ApproachA_RNA). It also establishes mappings for the new set of features in adata_b.var.

registration_mapping = tiledbsoma.io.register_anndatas(
    experiment_uri,
    [adata_b],
    measurement_name=measurement_name_b,
    # Maps to existing experiment.obs joinids
    obs_field_name=adata_b.obs.index.name,
    # For the new set of features in this measurement
    var_field_name=adata_b.var.index.name,
)

Prepare the Experiment for ingestion

With the registration_mapping created (linking the new AnnData’s obs and var to the SOMA Experiment), call registration_mapping.prepare_experiment(experiment_uri).

This function updates the SOMA experiment’s structure before writing any new data. It performs two main operations based on all registered AnnData objects (in this case, just adata_b for the new measurement):

  1. Dimension Resizing: It checks if the existing SOMA arrays need to be resized to accommodate the dimensions of the new data.
  2. Enum Schema Evolution: For any categorical columns in your adata_b.obs or adata_b.var (which SOMA stores as enumerations), prepare_experiment updates the SOMA array schemas. It adds any new categories present in adata_b to the existing SOMA enumerations.

This step is important for data integrity and ensures the safety of future data writes, even if multiple processes write data concurrently.

After this step, the SOMA experiment (including the schema-defined ApproachB_RNA measurement) is fully prepared to receive the actual data.

registration_mapping.prepare_experiment(experiment_uri)

Ingest data into the new Measurement

Now that you have created and prepared the experiment, you can ingest the actual data from adata_b into the new measurement.

tiledbsoma.io.from_anndata(
    experiment_uri,
    anndata=adata_b,
    measurement_name=measurement_name_b,
    obs_id_name=adata_b.obs.index.name,
    var_id_name=adata_b.var.index.name,
    registration_mapping=registration_mapping,
)
'file:///var/folders/nr/1dsl0n155wj7wv083km8t1540000gn/T/multi_aligner_experiment_c_w6k0w0'

Inspect the updated SOMA Experiment

Open the SOMA experiment again (this time in read mode, 'r', which is the default) and verify that the new measurement is present and contains the expected data.

with tiledbsoma.Experiment.open(experiment_uri) as exp:  # Open in read mode
    assert measurement_name_a in exp.ms
    print(f"Found measurement: '{measurement_name_a}'")

    ms_a = exp.ms[measurement_name_a]
    X_array = ms_a.X["data"]
    print(f"'{measurement_name_a}' .X['data'] shape: {X_array.shape}")

    assert measurement_name_b in exp.ms
    print(f"Found measurement: '{measurement_name_b}'")

    ms_b = exp.ms[measurement_name_b]
    X_array = ms_b.X["data"]
    print(f"'{measurement_name_b}' .X['data'] shape: {X_array.shape}")
Found measurement: 'ApproachA_RNA'
'ApproachA_RNA' .X['data'] shape: (2700, 1000)
Found measurement: 'ApproachB_RNA'
'ApproachB_RNA' .X['data'] shape: (2700, 1500)
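Beyond shapes, you can also read a small slice of the matrix to spot-check the ingested values. This is a minimal sketch using the sparse-array read API; the slice bounds are arbitrary, and SOMA slices are inclusive of both endpoints:

with tiledbsoma.Experiment.open(experiment_uri) as exp:
    X_b = exp.ms[measurement_name_b].X["data"]
    # Read rows 0 through 4 (all columns) and concatenate into a pyarrow.Table
    tbl = X_b.read((slice(0, 4),)).tables().concat()
    print(tbl.to_pandas().head())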

Conclusion

Adding new measurements to an existing SOMA experiment is a powerful way to organize datasets that share the same observations but differ in their features or modalities. By following the workflow in this tutorial (define the schema on disk, register the AnnData, prepare the experiment, then ingest the data), you can extend an existing SOMA Experiment safely and correctly.
