Single-Cell Data Ingestion

Tags: life sciences, single cell (soma), tutorials, python, r, ingestion
Learn how to ingest single-cell data with TileDB-SOMA.

Overview

The first step in using TileDB-SOMA to manage single-cell genomics data is ingestion: converting the original data, typically stored in formats such as CSV or HDF5, into TileDB’s on-disk storage format.

In this tutorial, you will learn how to ingest existing datasets stored in commonly used single-cell formats into SOMA experiments using TileDB-SOMA’s Python and R APIs.

To make the conversion process as painless as possible, TileDB-SOMA provides a set of high-level functions that can ingest data from popular single-cell genomics packages, such as Seurat and AnnData, into SOMA experiments. These functions automatically handle the conversion of the data into TileDB’s storage format, storing each component of the original dataset into a separate TileDB array organized following the SOMA data model.

You will use the pbmc3k dataset from the Seurat package as an example. This dataset contains 2,700 peripheral blood mononuclear cells (PBMCs) from a healthy donor, and is commonly used as a benchmark dataset in the single-cell genomics community.

To accommodate the diverse tools and preferences in the computational biology community, this tutorial provides separate instructions for using TileDB-SOMA’s Python and R APIs. These sections will cater to the specific needs and common data formats prevalent in each ecosystem, ensuring that all users can effectively integrate TileDB-SOMA into their workflows.

Ingestion locations

All that’s needed to perform an ingestion is the dataset itself and a URI pointing to the location where the SOMA experiment will be created. The rest is handled by the ingestion functions. The URI can point to a local directory, an S3 bucket, or a TileDB Cloud URI. The latter is a special URI in the form tiledb://<namespace>/s3://<bucket>/<experiment_name>, where <namespace> is the TileDB Cloud account name, <bucket> is the S3 bucket, and <experiment_name> is the name of the SOMA experiment. Using this URI, the ingestion function will:

  1. Create the new SOMA experiment at s3://<bucket>/<experiment_name>.
  2. Register the new SOMA experiment in <namespace>’s TileDB Cloud data catalog.

Registering the experiment with TileDB Cloud allows you (or anyone with whom you share the experiment) to securely access the data using the short URI tiledb://<namespace>/<experiment_name>. Using this URI also allows TileDB Cloud to authenticate requests, enforce access control policies, and log all queries and operations.

Tip

Interested to learn more? See the TileDB Cloud URIs foundation page for more details.
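
As a concrete illustration of the two URI forms, the following sketch composes them in Python from placeholder names (the account and bucket below are hypothetical and are not used elsewhere in this tutorial):

# Placeholder names for illustration only.
namespace = "my-account"
bucket = "s3://my-bucket"
experiment_name = "soma-exp-pbmc3k"

# Creation URI: tells TileDB Cloud where on S3 to write the experiment
# and under which account to catalog it.
creation_uri = f"tiledb://{namespace}/{bucket}/{experiment_name}"
print(creation_uri)  # tiledb://my-account/s3://my-bucket/soma-exp-pbmc3k

# Short URI: how the registered experiment is accessed afterward.
access_uri = f"tiledb://{namespace}/{experiment_name}"
print(access_uri)    # tiledb://my-account/soma-exp-pbmc3k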

Prerequisites

While you can run this tutorial locally, it relies on remote resources to run correctly.

You must create a REST API token and set an environment variable named $TILEDB_REST_TOKEN to the value of your generated token. This step is not necessary when running on TileDB Cloud, where the REST API token is generated and configured for you automatically.
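
For example, when running locally, you can authenticate the TileDB Cloud client explicitly with that token. The snippet below is a minimal Python sketch that assumes $TILEDB_REST_TOKEN is already set in your environment:

import os

import tiledb.cloud

# Authenticate the TileDB Cloud client with the REST API token created earlier.
# This is unnecessary inside TileDB Cloud notebooks, where credentials are preconfigured.
tiledb.cloud.login(token=os.environ["TILEDB_REST_TOKEN"])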

To proceed with this tutorial, you will need to update the following variables to correspond to your TileDB Cloud namespace and destination S3 bucket:

Python:

import os

TILEDB_NAMESPACE = os.environ["TILEDB_ACCOUNT"]
S3_BUCKET = os.environ["S3_BUCKET"]
EXPERIMENT_NAME = "soma-exp-pbmc3k"

R:

TILEDB_NAMESPACE <- "tiledb-inc"
S3_BUCKET <- "s3://tiledb-inc-demo-data/examples/notebooks/soma"
EXPERIMENT_NAME <- "soma-exp-pbmc3k"

R

In the R ecosystem, the most commonly used in-memory formats for representing single-cell genomics data come from the Seurat package and Bioconductor’s SummarizedExperiment and SingleCellExperiment packages. TileDB-SOMA’s R API supports ingesting data from all of these formats into SOMA experiments.

Setup

To get started, you will need to load tiledbsoma and a few other packages to complete this tutorial.

library(tiledb)
library(tiledbsoma)
suppressPackageStartupMessages(library(Seurat))

show_package_versions()
tiledbsoma:    1.11.4
tiledb-r:      0.27.0
tiledb core:   2.23.1
libtiledbsoma: 2.23.1
R:             R version 4.3.3 (2024-02-29)
OS:            Debian GNU/Linux 11 (bullseye)

For the purposes of this tutorial, a serialized Seurat object containing the pbmc3k dataset has been made available in a TileDB Cloud filestore. The following code snippet downloads the file and loads the Seurat object.

rds_uri <- "tiledb://TileDB-Inc/scanpy_pbmc3k_processed_rds"
rds_path <- file.path(tempdir(), "pbmc3k_processed.rds")

if (!file.exists(rds_path)) {
  if (!tiledb_filestore_uri_export(rds_path, rds_uri)) {
    stop("Failed to export RDS file from TileDB Cloud")
  }
}

pbmc3k <- readRDS(rds_path)
pbmc3k
An object of class Seurat 
1838 features across 2638 samples within 1 assay 
Active assay: RNA (1838 features, 0 variable features)
 2 layers present: counts, data
 4 dimensional reductions calculated: umap, tsne, draw_graph_fr, pca

Dataset inspection

Inspecting the pbmc3k object reveals that in addition to the RNA assay data, it also contains 4 dimensional reductions, as well as the following graphs:

Graphs(pbmc3k)
  1. 'connectivities'
  2. 'distances'

All of these components can be ingested into a SOMA experiment by passing the Seurat object to write_soma().

Ingest

The uri argument specifies the location where the SOMA experiment will be created. In this case, you’re using a TileDB Cloud URI, but it could also be a local file path or an S3 bucket.

EXPERIMENT_URI <- sprintf("tiledb://%s/%s/%s", TILEDB_NAMESPACE, S3_BUCKET, EXPERIMENT_NAME)

EXPERIMENT_URI
'tiledb://tiledb-inc/s3://tiledb-inc-demo-data/examples/notebooks/soma/soma-exp-pbmc3k'

Now pass the Seurat object to write_soma() to ingest the dataset into a new SOMA experiment at the specified URI.

write_soma(pbmc3k, uri = EXPERIMENT_URI)
'tiledb://tiledb-inc/s3://tiledb-inc-demo-data/examples/notebooks/soma/soma-exp-pbmc3k'

Python

For Python users, AnnData is the predominant format for representing single-cell genomics data. The tiledbsoma Python package provides functions for ingesting both in-memory AnnData objects and H5AD files, the HDF5-based format used to store AnnData objects on disk.

Tip

When ingesting data from an H5AD file, tiledbsoma leverages AnnData’s backed mode to load and ingest X data in a more memory-efficient manner.
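
If your data already lives in an H5AD file on local disk, you can also hand the file path directly to tiledbsoma.io.from_h5ad() instead of loading it into memory first. A minimal sketch, using a hypothetical local file and output location:

import tiledbsoma.io

# Sketch only: ingest straight from an H5AD file, letting AnnData's backed mode
# stream the X matrix instead of loading it fully into memory.
# The input path and output URI below are placeholders.
tiledbsoma.io.from_h5ad(
    experiment_uri="soma-exp-pbmc3k",    # local path, S3 URI, or TileDB Cloud URI
    input_path="pbmc3k_processed.h5ad",  # existing H5AD file on disk
    measurement_name="RNA",
)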

Setup

Import the tiledbsoma Python package, as well as a few other packages you’ll use in this tutorial:

import anndata as ad
import tiledb
import tiledb.cloud
import tiledbsoma
import tiledbsoma.io

tiledbsoma.show_package_versions()

The tiledb Python package provides access to TileDB’s virtual filesystem (VFS), which allows for interacting with data on local disk, S3, and TileDB Cloud using the same API.

cfg = tiledb.Config({"vfs.s3.no_sign_request": True})
vfs = tiledb.VFS(config=cfg)

Using TileDB’s VFS, you can read the H5AD directly from S3 and load it into memory using the AnnData package:

H5AD_URI = "s3://tiledb-inc-demo-data/singlecell/h5ad/pbmc3k_processed.h5ad"

with vfs.open(H5AD_URI) as h5ad:
    adata = ad.read_h5ad(h5ad)

Dataset inspection

Inspecting the adata object, you will notice that in addition to the expression data, cell-level annotations in obs, and feature-level annotations in var, it also contains analysis results in obsm, varm, obsp, and uns. All of these components can be ingested into a SOMA experiment by passing the AnnData object to tiledbsoma.io.from_anndata().

adata
AnnData object with n_obs × n_vars = 2638 × 1838
    obs: 'n_genes', 'percent_mito', 'n_counts', 'louvain'
    var: 'n_cells'
    uns: 'draw_graph', 'louvain', 'louvain_colors', 'neighbors', 'pca', 'rank_genes_groups'
    obsm: 'X_draw_graph_fr', 'X_pca', 'X_tsne', 'X_umap'
    varm: 'PCs'
    obsp: 'connectivities', 'distances'

Ingest

The experiment_uri argument is a URI that points to the location where the SOMA experiment will be created. In this case, you’re using a TileDB Cloud URI, but it could also be a local file path or an S3 bucket.

EXPERIMENT_URI = f"tiledb://{TILEDB_NAMESPACE}/{S3_BUCKET}/{EXPERIMENT_NAME}"
EXPERIMENT_URI
'tiledb://tiledb-inc/s3://tiledb-inc-demo-data/examples/notebooks/soma/soma-exp-pbmc3k'

Now pass the AnnData object to tiledbsoma.io.from_anndata() to ingest the dataset into a new SOMA experiment at the specified URI.

tiledbsoma.io.from_anndata(
    experiment_uri=EXPERIMENT_URI, measurement_name="RNA", anndata=adata
)
'tiledb://tiledb-inc/s3://tiledb-inc-demo-data/examples/notebooks/soma/soma-exp-pbmc3k'
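
As an optional sanity check (not part of the original workflow), you can open the newly created experiment read-only and inspect what was ingested. This sketch assumes the variables defined earlier and uses the short TileDB Cloud URI:

# Optional verification: open the experiment and list its contents.
with tiledbsoma.Experiment.open(f"tiledb://{TILEDB_NAMESPACE}/{EXPERIMENT_NAME}") as exp:
    print(exp.obs.schema)                # cell-level annotations (Arrow schema)
    print(list(exp.ms.keys()))           # measurements, e.g. ['RNA']
    print(list(exp.ms["RNA"].X.keys()))  # X layers within the RNA measurement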

Summary

Whether you used Python or R, the SOMA experiment now exists at s3://tiledb-inc-demo-data/examples/notebooks/soma/soma-exp-pbmc3k, which is a collection of TileDB groups and arrays. Each array contains a different component of the original dataset, organized according to the SOMA data model. This new experiment can be accessed using the short URI tiledb://tiledb-inc/soma-exp-pbmc3k.

In this tutorial, you learned how to ingest single-cell genomics data from popular formats into SOMA experiments using TileDB-SOMA’s Python and R APIs. The next tutorial will cover how to access and query the data stored in these SOMA experiments.

Clean-up

Now that you have successfully ingested the pbmc3k dataset into a SOMA experiment, you can clean up by deleting it from S3 and deregistering it from TileDB Cloud.

Python:

tiledb.cloud.asset.delete(EXPERIMENT_URI, recursive=True)

R:

grp <- tiledb_group(EXPERIMENT_URI, "READ")
grp <- tiledb_group_close(grp)

grp <- tiledb_group_open(grp, "MODIFY_EXCLUSIVE")
tiledb_group_delete(grp, EXPERIMENT_URI, recursive = TRUE)