import os
import tiledb
import tiledb.cloud
import tiledb.cloud.vcf
import tiledbvcf
# Print library versions
print("TileDB core version: {}".format(tiledb.libtiledb.version()))
print("TileDB-Py version: {}".format(tiledb.version()))
print("TileDB-VCF version: {}".format(tiledbvcf.version))
print("TileDB-Cloud-Py version: {}".format(tiledb.cloud.version.version))
# You should set the appropriate environment variables with your keys.
# Get the keys from the environment variables.
tiledb_token = os.environ["TILEDB_REST_TOKEN"]
# or use your username and password (not recommended)
# tiledb_username = os.environ["TILEDB_USERNAME"]
# tiledb_password = os.environ["TILEDB_PASSWORD"]
# Log into TileDB Cloud
tiledb.cloud.login(token=tiledb_token)
# or use your username and password (not recommended)
# tiledb.cloud.login(username=tiledb_username, password=tiledb_password)
# Set the TileDB-VCF dataset URI
= "scalable-ingestion"
vcf_name = tiledb.cloud.user_profile()
user_profile = user_profile.default_s3_path.rstrip("/")
s3_bucket = user_profile.username
tiledb_account = os.path.join("tiledb://", tiledb_account, s3_bucket, vcf_name)
vcf_uri
# Delete the dataset if it exists
if tiledb.object_type(vcf_uri, ctx=tiledb.cloud.Ctx()):
    tiledb.cloud.asset.delete(vcf_uri, recursive=True)
Scalable Ingestion
You can run this tutorial in two ways:
- Locally on your machine.
- On TileDB Cloud.
However, since TileDB Cloud has a free tier, we strongly recommend that you sign up and run everything there, as that requires no installation or deployment.
This tutorial demonstrates a powerful, scalable solution provided by TileDB Cloud to ingest VCF data into a TileDB-VCF dataset, handling dataset sizes from a few samples to biobank scale.
You will ingest a subset of the publicly available 1000 Genomes Phase 3 Reanalysis with DRAGEN dataset, which is managed by Illumina and hosted on AWS Data Exchange. The ingested TileDB-VCF dataset will be stored on Amazon S3 and registered on TileDB Cloud.
Programmatic ingestion
TileDB-VCF batch ingestion is initiated by calling a single Python method in the tiledb.cloud Python package and is recommended for the following use cases:
- Running ingestion in larger bioinformatics or data processing pipelines.
- Recording ingestion parameters in a script or notebook for documentation and reproducibility.
- Accessing advanced options that are not available in one-click ingestion.
Import the necessary libraries, load the appropriate environment variables, set the URIs used throughout the tutorial, and delete any previously created VCF datasets with the same name. If you are running this from a local notebook, visit the Tutorials: Basic TileDB Cloud section for more information on how to set your TileDB Cloud credentials in a configuration object (you can omit this step inside a TileDB Cloud notebook).
Now, start the ingestion using the parameters defined above. The search_uri and pattern arguments are configured to recursively search for VCF files in the public 1000 Genomes bucket. The remaining arguments are configured to reduce the ingestion time for this tutorial.
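As a minimal sketch, the call could look like the following. It assumes the tiledb.cloud.vcf.ingest entry point with dataset_uri, search_uri, and pattern keyword arguments; the S3 search path and file pattern below are placeholders that you should replace with the actual locations of the DRAGEN reanalysis VCFs, and any extra arguments used to limit the ingestion size are omitted.
# Minimal sketch of the batch ingestion call; the search path and file
# pattern are placeholders, not the real 1000 Genomes DRAGEN locations.
tiledb.cloud.vcf.ingest(
    dataset_uri=vcf_uri,
    # Placeholder S3 path for the public DRAGEN reanalysis VCFs
    search_uri="s3://<1000-genomes-dragen-bucket>/",
    # Match compressed VCF files found recursively under search_uri
    pattern="*.vcf.gz",
)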
The ingestion progress can be tracked in the task graph log on TileDB Cloud by following the batch->ingest_vcf, vcf-filter-uris, vcf-populate-manifest, vcf-filter-samples, and vcf-ingest-samples task graphs.
After the TileDB-VCF dataset is ingested and registered on TileDB Cloud, test reading variants from all samples in the region chr21:10000000-12000000.
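The following is a minimal sketch of such a read using the tiledbvcf Python API. The attribute names are common TileDB-VCF attributes and may need adjusting for your dataset; depending on your environment, you may also need to pass your TileDB Cloud credentials through the dataset configuration.
# Open the ingested dataset for reading; credentials may need to be
# supplied via tiledbvcf.ReadConfig(tiledb_config=...) depending on setup.
ds = tiledbvcf.Dataset(vcf_uri, mode="r")
# Slice a 2 Mbp region on chr21 across all samples
df = ds.read(
    attrs=["sample_name", "contig", "pos_start", "fmt_GT"],
    regions=["chr21:10000000-12000000"],
)
print(df.head())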
Finally, clean up by deleting the dataset from TileDB Cloud.
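This mirrors the cleanup step in the setup code at the top of this tutorial:
# Delete the dataset and remove its registration from TileDB Cloud
tiledb.cloud.asset.delete(vcf_uri, recursive=True)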
Ingestion from the UI console
TileDB Cloud provides a method to ingest a batch of VCFs into a TileDB-VCF dataset directly from its UI console. Visit Catalog: Genomics (VCF) for more details.