Google Cloud Storage
After setting up TileDB to work with Google Cloud Storage (GCS), your TileDB programs will function properly without any API change! Instead of using local file system paths when creating and accessing groups, arrays, and VFS files, use URIs that start with gcs://. For instance, if you wish to create (and subsequently write and read) an array on GCS, use the URI gcs://<your-bucket>/<your-array-name> for the array name.
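As an illustration, a minimal Python sketch (assuming GCS access is already configured as described below, and using hypothetical bucket and array names) could look like this:

```python
import numpy as np
import tiledb

# hypothetical bucket and array names; replace with your own
uri = "gcs://my-bucket/my-array"

# create a small dense array on GCS exactly as you would locally,
# then write it and read it back
a = np.arange(10)
schema = tiledb.schema_like(a)
tiledb.Array.create(uri, schema)

with tiledb.DenseArray(uri, mode="w") as A:
    A[:] = a

with tiledb.DenseArray(uri) as A:
    print(A[:])
```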
Configuration
This section explains the TileDB configuration parameters you can tweak for GCS.
Application default credentials
TileDB supports authenticating to Google Cloud using Application Default Credentials. Authentication happens automatically if your application is running on Google Cloud, or in your local environment if you have authenticated with the gcloud auth application-default login command. In other cases, you can set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to a credentials file, such as a user-provided service account key.
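As a sketch, pointing TileDB at Application Default Credentials from Python might look like the following (the key file path and array URI are placeholders):

```python
import os
import tiledb

# placeholder path to a user-provided service account key file
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"

# no credential-related TileDB config is needed; the GCS backend
# picks up Application Default Credentials automatically
with tiledb.DenseArray("gcs://my-bucket/my-array") as A:
    print(A[:])
```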
Manually provided credentials
For more control, you can manually specify strings with the content of credentials files as a config option. TileDB supports the following types of credentials:
| Parameter | Description |
|---|---|
| "vfs.gcs.service_account_key" | JSON string with a user-provided service account key |
| "vfs.gcs.workload_identity_configuration" | JSON string with the configuration to obtain workload identity credentials |
If any of the above options are specified, Application Default Credentials will not be considered. If multiple options are specified, the one earlier in the table will be used.
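For example, a minimal Python sketch that passes the contents of a service account key file (the file path is a placeholder) might be:

```python
import tiledb

# placeholder path; the config option takes the JSON *contents*
# of the key file as a string, not the file path itself
with open("/path/to/service-account-key.json") as f:
    key_json = f.read()

config = tiledb.Config()
config["vfs.gcs.service_account_key"] = key_json

# arrays opened through this context authenticate with the key above
ctx = tiledb.Ctx(config=config)
```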
Service account impersonation
You can connect to Google Cloud while impersonating a service account by setting the "vfs.gcs.impersonate_service_account" config option to either the name of a single service account, or a comma-separated sequence of service accounts for delegated impersonation. The impersonation is performed using the credentials configured by one of the methods above.
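A minimal Python sketch (the service account names are placeholders) might look like this:

```python
import tiledb

config = tiledb.Config()

# impersonate a single service account (placeholder name)
config["vfs.gcs.impersonate_service_account"] = "target-sa@my-project.iam.gserviceaccount.com"

# or, for delegated impersonation, provide a comma-separated chain:
# config["vfs.gcs.impersonate_service_account"] = (
#     "delegate-sa@my-project.iam.gserviceaccount.com,"
#     "target-sa@my-project.iam.gserviceaccount.com"
# )

ctx = tiledb.Ctx(config=config)
```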
Additional configuration
The following config options are additionally available:
| Parameter | Description |
|---|---|
| "vfs.gcs.project_id" | The name of the project in which to create new buckets. Not required unless you are going to use the VFS to create buckets. |
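For instance, a Python sketch that creates a bucket through the VFS (the project and bucket names are placeholders) could look like this:

```python
import tiledb

config = tiledb.Config()
# required only because we create a bucket below; placeholder project name
config["vfs.gcs.project_id"] = "my-gcp-project"

ctx = tiledb.Ctx(config=config)
vfs = tiledb.VFS(ctx=ctx)

# create a new GCS bucket via the VFS (placeholder bucket name)
vfs.create_bucket("gcs://my-new-bucket")
```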
Physical organization
So far, you have learned that TileDB stores arrays and groups as directories. Like other object stores, GCS has no concept of a directory. However, GCS uses the / character in object URIs, which allows the same conceptual organization as a directory hierarchy in local storage. At a physical level, TileDB stores on GCS all the files it would create locally as objects. For instance, for array gcs://bucket/path/to/array, TileDB creates the array schema object gcs://bucket/path/to/array/schema/__<timestamp>_<timestamp>_<uuid>, along with other files and objects. Since GCS has no concept of a directory, nothing distinctive persists on GCS for directories (for example, gcs://bucket/path/to/array/meta/ does not exist as an object).
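You can inspect this physical layout yourself with the VFS. A short Python sketch (with placeholder bucket and array names) that lists the objects under an array prefix:

```python
import tiledb

vfs = tiledb.VFS()

# list the objects TileDB created under the array prefix
# (placeholder bucket/array path)
for obj in vfs.ls("gcs://bucket/path/to/array"):
    print(obj)
```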
Performance
TileDB writes the various fragment files as append-only objects using the insert object API of the Google Cloud C++ SDK. In addition to enabling appends, this API renders the TileDB writes to GCS particularly amenable to optimizations via parallelization. Since TileDB updates arrays only by writing (appending to) new files (i.e., it never updates a file in-place), TileDB does not need to download entire objects, update them, and re-upload them to GCS. This leads to excellent write performance.
TileDB reads utilize the range GET request API of the GCS SDK, which retrieves only the requested (contiguous) bytes from a file/object, rather than downloading the entire file from the cloud. This results in extremely fast subarray reads, especially because of the array tiling. Recall that a tile (which groups cell values that are stored contiguously in the file) is the atomic unit of I/O. The range GET API enables reading each tile from GCS in a single request. Finally, TileDB performs all reads in parallel using multiple threads, which is a tunable configuration parameter.
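The degree of read parallelism is controlled through the TileDB configuration. As a sketch, assuming a recent TileDB version where the I/O thread pool is controlled by "sm.io_concurrency_level" (the value below is illustrative, not a recommendation):

```python
import tiledb

config = tiledb.Config()
# number of threads TileDB uses for parallel I/O (assumed parameter name;
# the value 8 is purely illustrative)
config["sm.io_concurrency_level"] = "8"

ctx = tiledb.Ctx(config=config)
```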
S3 compatibility API
While TileDB provides a native GCS backend implementation using the Google Cloud C++ SDK, it is also possible to access GCS through its S3 compatibility API using the TileDB S3 backend. Doing so requires setting several configuration parameters:
| Parameter | Value |
|---|---|
| "vfs.s3.endpoint_override" | "storage.googleapis.com" |
| "vfs.s3.region" | "auto" |
| "vfs.s3.aws_access_key_id", "vfs.s3.aws_secret_access_key" | Override here, or set as usual using AWS settings or environment variables. |
vfs.s3.use_multipart_upload=true may work with recent GCS updates, but has not yet been tested or evaluated by the TileDB team.
Full example for GCS via S3 compatibility in Python:
```python
import tiledb
import numpy as np
import sys

# update this with your array URI on GCS
uri = "s3://your-bucket/array-path"

# read credentials from 'creds.nogit' file in current
# directory, newline separated:
# "key\nsecret"
creds_path = "creds.nogit"
key, secret = [x.strip() for x in open(creds_path).readlines()]

# gcs config
config = tiledb.Config()
config["vfs.s3.endpoint_override"] = "storage.googleapis.com"
config["vfs.s3.aws_access_key_id"] = key
config["vfs.s3.aws_secret_access_key"] = secret
config["vfs.s3.region"] = "auto"
config["vfs.s3.use_multipart_upload"] = "false"

# context
ctx = tiledb.Ctx(config=config)

# create sample array if it does not exist
vfs = tiledb.VFS(ctx=ctx)
if not vfs.is_dir(uri):
    print("trying to write: ", uri)
    a = np.arange(5)
    schema = tiledb.schema_like(a, ctx=ctx)
    tiledb.Array.create(uri, schema)
    with tiledb.DenseArray(uri, "w", ctx=ctx) as T:
        T[:] = a

print("reading back from: ", uri)
with tiledb.DenseArray(uri, ctx=ctx) as t:
    print(t[:])
```