Nullable Attributes

Learn how to work with arrays that contain nullable attributes.

Two main types of nullable attributes exist in TileDB:

  • Fixed-length, nullable attributes
  • Variable-length, nullable attributes

Fixed-length, nullable attributes

Both the Python and R APIs support fixed-length, nullable attributes. The array you'll build in this tutorial is sparse, but the same concepts apply to dense arrays.

To get started, import the necessary libraries, set the array URI (that is, its path, which in this tutorial will be on local storage), and delete any previously created arrays with the same name.

  • Python
  • R
# Import necessary libraries
import os.path
import shutil

import numpy as np
import tiledb

# Set array URI
array_uri = os.path.expanduser("~/fixed_length_nullable_python")

# Delete array if it already exists
if os.path.exists(array_uri):
    shutil.rmtree(array_uri)
# Import necessary libraries
library(tiledb)

# Set array URI
sparse_array <- path.expand("~/fixed_length_nullable_r")

# Delete array if it already exists
if (file.exists(sparse_array)) {
  unlink(sparse_array, recursive = TRUE)
}

Next, create the array by specifying its schema.

  • Python
  • R
# Create the two dimensions
d1 = tiledb.Dim(name="d1", domain=(1, 4), tile=2, dtype=np.int32)
d2 = tiledb.Dim(name="d2", domain=(1, 4), tile=2, dtype=np.int32)

# Create a domain using the two dimensions
dom = tiledb.Domain(d1, d2)
# Order of the dimensions matters when slicing subarrays.
# Remember to give priority to more selective dimensions to
# maximize the pruning power during slicing.

# Create an attribute
a = tiledb.Attr(name="a", dtype=np.float64, nullable=True)

# Create the array schema with `sparse=True`.
# Set `cell_order` to 'row-major' (default) or 'C', 'col-major' or 'F', or 'hilbert'.
# Set `tile_order` to 'row-major' (default) or 'C', 'col-major' or 'F'.
sch = tiledb.ArraySchema(domain=dom, sparse=True, attrs=[a])

# Create the array on disk (it will initially be empty)
tiledb.Array.create(array_uri, sch)
# Create the two dimensions
d1 <- tiledb_dim("d1", c(1L, 4L), 2L, "INT32")
d2 <- tiledb_dim("d2", c(1L, 4L), 2L, "INT32")

# Create a domain using the two dimensions
dom <- tiledb_domain(dims = c(d1, d2))

# Create an attribute
a <- tiledb_attr("a", type = "FLOAT64", nullable = TRUE)

# Create the array schema with `sparse = TRUE`
sch <- tiledb_array_schema(dom, a, sparse = TRUE)

# Create the array on disk (it will initially be empty)
arr <- tiledb_array_create(sparse_array, sch)
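
As an optional sanity check, you can load the schema back from disk and confirm the attribute was registered as nullable. This is a minimal Python sketch, not part of the original tutorial:

# Optional check: load the schema back from disk and
# verify that attribute "a" is nullable
import tiledb

sch = tiledb.ArraySchema.load(array_uri)
print(sch)                        # full schema, including the attribute's nullable flag
print(sch.attr("a").isnullable)   # True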

Populate the TileDB array with a set of 1D input arrays: one for the coordinates of each dimension, and one for the attribute values. TileDB sparse arrays expect the coordinate (COO) format.

  • Python
  • R
# Prepare some data in numpy arrays
d1_data = np.array([1, 2, 3, 4], dtype=np.int32)
d2_data = np.array([2, 1, 3, 4], dtype=np.int32)
a_data = np.array(
    [1.1, 2.2, None, 4.4],
    dtype="O",
)

# Open the array in write mode and write the data in COO format
with tiledb.open(array_uri, "w") as A:
    A[d1_data, d2_data] = {"a": a_data}
# Prepare some data in an array
d1_data <- c(1, 2, 3, 4)
d2_data <- c(2, 1, 3, 4)
a_data <- c(1.1, 2.2, NA, 4.4)

# Open the array for writing and write data to the array
arr <- tiledb_array(
  uri = sparse_array,
  query_type = "WRITE",
  return_as = "data.frame"
)
arr[d1_data, d2_data] <- a_data

# Close the array
invisible(tiledb_array_close(arr))

The array is a folder in the path specified in array_uri. You can learn about the different contents of the array folder in other sections of the Academy.

~/fixed_length_nullable_python
├── __commits
│   └── __1739368327679_1739368327679_51fb1c9a2caa9db13e987a223f066c77_22.wrt
├── __fragment_meta
├── __fragments
│   └── __1739368327679_1739368327679_51fb1c9a2caa9db13e987a223f066c77_22
│       ├── __fragment_metadata.tdb
│       ├── a0.tdb
│       ├── a0_validity.tdb
│       ├── d0.tdb
│       └── d1.tdb
├── __labels
├── __meta
└── __schema
    ├── __1739368327666_1739368327666_000000028e0fbc74cc1ea0cf0e43ae11
    └── __enumerations

9 directories, 7 files

Note the a0_validity.tdb file inside the fragment folder: it stores the validity (null) values for the nullable attribute a.
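
If you don't have the tree command available, a minimal Python sketch like the following (not part of the tutorial, standard library only) walks the array folder and prints a similar listing:

# Print the contents of the array folder (illustrative helper)
import os

for root, dirs, files in os.walk(array_uri):
    rel = os.path.relpath(root, array_uri)
    depth = 0 if rel == "." else rel.count(os.sep) + 1
    indent = "  " * depth
    print(f"{indent}{os.path.basename(root)}/")
    for f in sorted(files):
        print(f"{indent}  {f}")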

Read the data by using the slicing methods supported in TileDB.

  • Python
  • R
# Open the array in read mode
A = tiledb.open(array_uri, "r")

# Show the entire array
print("Entire array: ")
print(A[:])
print("\n")

print("Entire array as a data frame:")
print(A.df[:])
print("\n")

# Remember to close the array
A.close()
Entire array: 
OrderedDict({'a': masked_array(data=[1.1, 2.2, --, 4.4],
             mask=[False, False,  True, False],
       fill_value=1e+20), 'd1': array([1, 2, 3, 4], dtype=int32), 'd2': array([2, 1, 3, 4], dtype=int32)})


Entire array as a data frame:
   d1  d2    a
0   1   2  1.1
1   2   1  2.2
2   3   3  NaN
3   4   4  4.4

# Open the array in read mode
invisible(tiledb_array_open(arr, type = "READ"))

# Show the entire array
cat("Entire array:\n")
print(arr[])

# Close the array
invisible(tiledb_array_close(arr))
Entire array:
  d1 d2   a
1  2  1 2.2
2  1  2 1.1
3  3  3  NA
4  4  4 4.4
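
Because TileDB-Py returns the nullable attribute as a NumPy masked array (see the Python output above), you can inspect which cells are null with the standard masked-array API. A minimal Python follow-on sketch, assuming the array written above:

# Inspect null cells through the NumPy masked-array API
import numpy as np
import tiledb

with tiledb.open(array_uri, "r") as A:
    a = A[:]["a"]                  # masked array for the nullable attribute "a"
    print(np.ma.getmaskarray(a))   # True marks the null cell
    print(a.compressed())          # only the non-null values: 1.1, 2.2, 4.4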

Variable-length, nullable attributes

As highlighted in Variable-Length Attributes, variable-length attributes exist in two forms:

  1. Attributes that accept variable-length lists of basic datatypes.
  2. Attributes that accept variable-length string values.

The same concept applies to variable-length, nullable attributes.

Nullable, variable-length list attributes

Writing variable-length attribute values to an array involves passing three buffers to TileDB: one for the variable-length cell values, one for the starting offset of each value in the first buffer, and one for the cell validity values. At the time of writing, the high-level Python and R APIs do not yet support nullable variable-length list attributes of numerical datatypes, so no runnable example is available yet; a sketch of the three-buffer layout follows the figure below.

  • Python
  • R
# Variable-length, nullable (numerical) attributes are not yet supported
## Variable-length nullable (numerical) attributes are not yet supported

Conceptually, such a write would produce the following sparse fragment:

[Figure: A 4x4 sparse fragment with a null cell at coordinates (3, 3).]
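
Although the high-level APIs don't yet expose this, the three-buffer layout itself is easy to illustrate. The following Python sketch is illustrative only (it is not a TileDB API call, and the cell values are hypothetical): it flattens four variable-length integer cells, one of them null, into the value, offset, and validity buffers described above.

# Illustrate the three buffers used for variable-length, nullable attributes
import numpy as np

cells = [[1, 2], [3], None, [4, 5, 6]]  # hypothetical variable-length cells; the third is null

# Buffer 1: all non-null cell values, flattened
values = np.concatenate([np.asarray(c, dtype=np.int64) for c in cells if c is not None])
# values -> [1 2 3 4 5 6]

# Buffer 2: the starting offset of each cell within `values`
offsets = []
pos = 0
for c in cells:
    offsets.append(pos)
    pos += len(c) if c is not None else 0
offsets = np.asarray(offsets, dtype=np.uint64)
# offsets -> [0 2 3 3]

# Buffer 3: the validity flag of each cell (0 marks a null cell)
validity = np.asarray([int(c is not None) for c in cells], dtype=np.uint8)
# validity -> [1 1 0 1]

print(values, offsets, validity)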

Nullable, variable-length string attributes

First, import the necessary libraries, set the array URI (that is, its path, which in this tutorial will be on local storage), and delete any previously created arrays with the same name.

  • Python
  • R
# Import necessary libraries
import os.path
import shutil

import numpy as np
import tiledb

# Set array URI
array_uri = os.path.expanduser("~/var_length_null_string_py")

# Delete array if it already exists
if os.path.exists(array_uri):
    shutil.rmtree(array_uri)
library(tiledb)

# Set array URI
array_uri <- path.expand("~/var_length_attributes_string_r")

# Delete array if it already exists
if (file.exists(array_uri)) {
  unlink(array_uri, recursive = TRUE)
}

Next, create a 2D sparse array by specifying its schema (this applies to dense arrays as well). Notice how the attribute is declared so that it accepts variable-length strings.

  • Python
  • R
# Create the two dimensions
d1 = tiledb.Dim(name="d1", domain=(0, 3), tile=2, dtype=np.int32)
d2 = tiledb.Dim(name="d2", domain=(0, 3), tile=2, dtype=np.int32)

# Create a domain using the two dimensions
dom = tiledb.Domain(d1, d2)

# Create a string attribute by setting dtype=np.bytes_.
# This attribute will accept variable-length strings.
a = tiledb.Attr(name="a", dtype=np.bytes_, nullable=True)

# Create the array schema with `sparse=True`
sch = tiledb.ArraySchema(domain=dom, sparse=True, attrs=[a])

# Create the array on disk (it will initially be empty)
tiledb.Array.create(array_uri, sch)
# Create the two dimensions
d1_str <- tiledb_dim("d1", c(0L, 3L), 2L, "INT32")
d2_str <- tiledb_dim("d2", c(0L, 3L), 2L, "INT32")

# Create a domain using the two dimensions
dom <- tiledb_domain(dims = c(d1_str, d2_str))

# Create string attribute a
a <- tiledb_attr("a", type = "ASCII", ncells = NA, nullable = TRUE)

# Create the array schema, setting `sparse = TRUE`
sch <- tiledb_array_schema(dom, a, sparse = TRUE)

# Create the array on disk (it will initially be empty)
arr <- tiledb_array_create(array_uri, sch)

Populate the array in COO format, using 1D NumPy arrays in Python and vectors in R.

  • Python
  • R
import numpy as np
import tiledb

# Set the coordinates
d1_data = np.array([1, 2, 3, 3, 0])
d2_data = np.array([2, 1, 3, 2, 0])

# Set the string attribute values
a_data = np.array(["aa", "", "Ccc", "d", None], dtype="O")

# Write the data to the array
with tiledb.open(array_uri, "w") as A:
    A[d1_data, d2_data] = a_data
# Set the coordinates
d1_data <- c(1L, 2L, 3L, 3L, 0L)
d2_data <- c(2L, 1L, 3L, 2L, 0L)

# Set the string attribute values
a_data <- c("aa", "", "Ccc", "d", NA)

# Write the data to the array
arr <- tiledb_array(uri = array_uri, query_type = "WRITE", return_as = "data.frame")
arr[d1_data, d2_data] <- a_data

Read the entire array and observe the returned variable-length strings.

  • Python
  • R
# Variable-length arrays may be sliced as usual in Python.
# The API handles unpacking and type conversion, and returns
# a NumPy object array-of-arrays.

# Read all array data
with tiledb.open(array_uri) as A:
    print(A[:]["a"])
    print(A.df[:])
[-- b'aa' b'' b'd' b'Ccc']
   d1  d2       a
0   0   0    None
1   1   2   b'aa'
2   2   1     b''
3   3   2    b'd'
4   3   3  b'Ccc'
# Variable-length arrays may be sliced as usual in R
# The API handles unpacking and type conversion, and returns
# an array of arrays.

# Read all array data
print(arr[])
  d1 d2    a
1  0  0 <NA>
2  2  1     
3  1  2   aa
4  3  2    d
5  3  3  Ccc
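
Because the dataframe view returns None (Python) or NA (R) for the null cell, standard dataframe operations can locate or drop it. A minimal Python sketch, assuming the array written above:

# Find and drop null cells using pandas on the dataframe view
import tiledb

with tiledb.open(array_uri) as A:
    df = A.df[:]
    print(df["a"].isna())        # True marks the null cell at coordinates (0, 0)
    print(df[df["a"].notna()])   # keep only the non-null rows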

Finally, clean up by deleting the array.

  • Python
  • R
# Delete the array
if os.path.exists(array_uri):
    shutil.rmtree(array_uri)
if (dir.exists(array_uri)) {
  unlink(array_uri, recursive = TRUE)
}