Demonstration of basic usage of TileDB-VCF on Amazon S3.
How to run this tutorial
You can run this tutorial in two ways:

- Locally on your machine.
- On TileDB Cloud.
Because TileDB Cloud has a free tier, we strongly recommend that you sign up and run everything there, as that requires no installations or deployment.
This tutorial shows the basic usage of TileDB-VCF on Amazon S3. It assumes you have already created an AWS account, an S3 bucket, and the credentials required to access the bucket. For more details on TileDB's S3 usage, as well as information about how to use the underlying core TileDB engine with other object stores, visit the Advanced Backends section.
For TileDB to access S3 buckets, it needs to know the S3 region and your secret keys, which you pass in a TileDB configuration object. It is good practice not to expose private credentials in notebooks. A more secure approach is to store your keys in environment variables and have your code read those variables, which is what this tutorial does in the following code snippet.
```python
import os

import tiledb

# You should set the appropriate environment variables with your keys.
# Get the keys from the environment variables.
aws_access_key_id = os.environ["AWS_ACCESS_KEY_ID"]
aws_secret_access_key = os.environ["AWS_SECRET_ACCESS_KEY"]

# Get the bucket and region from environment variables
s3_bucket = os.environ["S3_BUCKET"]
s3_region = os.environ["S3_REGION"]

# Create read and write configuration objects with the AWS region and keys.
# This initialization can be performed only once.
# The read config uses unsigned requests, because the demo data is public.
read_cfg = tiledb.Config(
    {
        "vfs.s3.region": s3_region,
        "vfs.s3.no_sign_request": True,
    }
)
write_cfg = tiledb.Config(
    {
        "vfs.s3.aws_access_key_id": aws_access_key_id,
        "vfs.s3.aws_secret_access_key": aws_secret_access_key,
        "vfs.s3.region": s3_region,
    }
)
```
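The environment variables read above must be set before launching Python. As a sketch, you could export them in your shell first; the values below are placeholders, so substitute your own credentials, bucket name, and region:

```shell
# Placeholder values -- replace with your own AWS credentials, bucket, and region.
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export S3_BUCKET="your-bucket-name"
export S3_REGION="us-east-1"
```

Alternatively, any secrets manager or notebook credential store that populates the same environment variables works equally well.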
The rest of the tutorial is almost identical to the Tutorials: Basic Ingestion section; after setting up your AWS keys as shown above, you can create, write, and read any TileDB-VCF dataset in the same manner.
First, import the necessary libraries, set the TileDB-VCF dataset URI (i.e., its path, which in this tutorial will be on Amazon S3), and delete any previously created datasets with the same name.
```python
import tiledbvcf

# Set the dataset URI inside your bucket (the dataset name used here is an
# example; pick any prefix you like).
vcf_uri = f"s3://{s3_bucket}/my_vcf_dataset"

# Delete any previously created dataset with the same name.
vfs = tiledb.VFS(config=write_cfg)
if vfs.is_dir(vcf_uri):
    vfs.remove_dir(vcf_uri)

# URIs of the demo VCF samples to ingest
vcf_bucket = "s3://tiledb-inc-demo-data/examples/notebooks/vcfs/1kg-dragen"
samples_to_ingest = [
    "HG00096_chr21.gvcf.gz",
    "HG00097_chr21.gvcf.gz",
    "HG00099_chr21.gvcf.gz",
    "HG00100_chr21.gvcf.gz",
    "HG00101_chr21.gvcf.gz",
]
sample_uris = [f"{vcf_bucket}/{s}" for s in samples_to_ingest]
sample_uris
```
The following block may take a long time if you run it from your local machine over a poor internet connection. For best performance, it is highly recommended that you run it from a TileDB Cloud notebook.
```python
# Open a VCF dataset in write mode.
# Notice that you need to pass the configuration object you created above.
ds = tiledbvcf.Dataset(uri=vcf_uri, mode="w", tiledb_config=write_cfg)

# Create an empty VCF dataset
ds.create_dataset()

# Ingest the samples
ds.ingest_samples(sample_uris=sample_uris)
```
The VCF dataset now lives under the prefix specified in vcf_uri, which behaves much like a subfolder on your local storage.
```python
# Open the dataset in read mode.
# Notice that you need to pass the configuration object you created above.
ds = tiledbvcf.Dataset(uri=vcf_uri, mode="r", tiledb_config=read_cfg)

# Read a chromosome region, and subset on samples and attributes
df = ds.read(
    regions=["chr21:8220186-8405573"],
    samples=["HG00096", "HG00097"],
    attrs=["sample_name", "contig", "pos_start", "pos_end", "alleles", "fmt_GT"],
)
df
```
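Because `ds.read` returns a pandas DataFrame, you can refine the result further with ordinary pandas operations. A minimal sketch, using hypothetical rows that mimic the columns requested above (the positions and matches are illustrative, not real query results):

```python
import pandas as pd

# Hypothetical rows shaped like the result of the read above.
df = pd.DataFrame(
    {
        "sample_name": ["HG00096", "HG00097", "HG00096"],
        "contig": ["chr21", "chr21", "chr21"],
        "pos_start": [8220186, 8250000, 8400000],
        "pos_end": [8220190, 8250004, 8400004],
    }
)

# Keep only records starting within a narrower window of chr21.
window = df[(df["pos_start"] >= 8240000) & (df["pos_start"] <= 8300000)]
print(window["sample_name"].tolist())  # ['HG00097']
```

This pattern lets you issue one coarse genomic-region query against the dataset and then slice, group, or join the resulting DataFrame entirely in memory.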