
Use fromDataFrame() to Create Arrays with TileDB-R

This tutorial shows how to use the TileDB-R API's fromDataFrame() function to create a TileDB array from a data.frame object. It applies to both dense and sparse arrays.

For the complete API reference of this function, visit fromDataFrame() in the TileDB-R API docs.

First, import the necessary libraries and set the array URI (that is, its path, which in this tutorial will be on local storage).

  • R
# Import necessary libraries
library(tiledb)

# Set array URIs
(sparse_array_uri <- tempfile("fromDataFrame_sparse_r"))
(dense_array_uri <- tempfile("fromDataFrame_dense_r"))

Define the dataframes you'll use in this tutorial: the coords_sparse dataframe for creating sparse arrays with fromDataFrame(), and the coords_dense dataframe for creating dense arrays.

  • R
# Create dense coordinates
(mat <- matrix(1L:16L, nrow = 4L))
(coords_dense <- reshape2::melt(
  mat,
  varnames = c("d1", "d2"),
  value.name = "a"
))

# Create sparse data frame
(mat2 <- Matrix::sparseMatrix(
  i = c(3L, 1L, 4L, 3L, 1L, 2L),
  j = c(1L, 2L, 2L, 3L, 4L, 4L),
  x = c(4L, 1L, 6L, 5L, 2L, 3L),
  repr = "T"
))
(coords_sparse <- data.frame(d1 = mat2@i, d2 = mat2@j, a = mat2@x))

Use the fromDataFrame() function to create a sparse array from the coords_sparse dataframe. Since dataframes in R have no concept of sparsity, fromDataFrame() by default creates a sparse TileDB array from the dataframe you pass as an argument.

At a minimum, fromDataFrame() needs the data.frame object and the array URI.

  • R
# Create a sparse array from the `coords_sparse` dataframe
# At a minimum, you must pass the dataframe object and array URI
# The `col_index` argument is optional and specifies the columns
# to use as the dimensions of the array
fromDataFrame(coords_sparse, sparse_array_uri, col_index = c("d1", "d2"))
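For contrast, the following sketch shows the truly minimal call, with no col_index argument. In that case, fromDataFrame() indexes the rows with an automatically generated integer dimension (the generated dimension's name is an implementation detail of TileDB-R; the df and minimal_uri names below are illustrative):

```r
# Minimal call sketch: only a dataframe and an array URI are required.
# Without `col_index`, TileDB-R indexes the rows with an automatically
# generated integer dimension.
library(tiledb)

df <- data.frame(d1 = c(2L, 0L, 3L), d2 = c(0L, 1L, 1L), a = c(4L, 1L, 6L))
minimal_uri <- tempfile("fromDataFrame_minimal_r")
fromDataFrame(df, minimal_uri)

# Inspect the generated schema to see the auto-created row dimension.
minimal_arr <- tiledb_array(minimal_uri, query_type = "READ", return_as = "data.frame")
schema(minimal_arr)
```

Passing col_index, as in the rest of this tutorial, is usually preferable because it makes your own columns the dimensions you can slice on.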

Now that you've created the array, read its schema:

  • R
arr <- tiledb_array(sparse_array_uri, query_type = "READ", return_as = "data.frame")
# Print the schema of the array
schema(arr)
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)

Now read its data.

  • R
# Read the data from the array
arr[]
A data.frame: 6 x 3
d1 d2 a
<int> <int> <int>
2 0 4
0 1 1
3 1 6
2 2 5
0 3 2
1 3 3

By default, sparse arrays created through fromDataFrame() allow duplicate values. Try adding a new cell value at coordinates [3, 1]:

  • R
# Reopen the array for writing and write the duplicate data
arr <- tiledb_array_close(arr)
arr <- tiledb_array_open(arr, type = "WRITE")
arr[3, 1] <- 2

# Reopen the array for reading
arr <- tiledb_array_close(arr)
arr <- tiledb_array_open(arr, type = "READ")
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  2  0 4
2  0  1 1
3  3  1 6
4  2  2 5
5  0  3 2
6  1  3 3
7  3  1 2

The array now returns 7 rows instead of 6. You can also set the allows_dups argument to FALSE to prevent TileDB from adding duplicates during writes.

When you disable duplicates, writing to a cell in an array that already has a value will overwrite the existing cell value.

  • R
arr <- tiledb_array_close(arr)
if (file.exists(sparse_array_uri)) {
  unlink(sparse_array_uri, recursive = TRUE)
}
fromDataFrame(
  coords_sparse,
  sparse_array_uri,
  col_index = c("d1", "d2"),
  allows_dups = FALSE
)
arr <- tiledb_array(sparse_array_uri, query_type = "WRITE", return_as = "data.frame")
arr[3, 1] <- 2
arr <- tiledb_array_close(arr)
arr <- tiledb_array_open(arr, type = "READ")
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=FALSE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  2  0 4
2  0  1 1
3  3  1 2
4  2  2 5
5  0  3 2
6  1  3 3

You can specify the cell and tile order of an array by using the cell_order and tile_order arguments. The default is COL_MAJOR order for both.

  • R
arr <- tiledb_array_close(arr)
if (file.exists(sparse_array_uri)) {
  unlink(sparse_array_uri, recursive = TRUE)
}
fromDataFrame(
  coords_sparse,
  sparse_array_uri,
  col_index = c("d1", "d2"),
  cell_order = "ROW_MAJOR",
  tile_order = "COL_MAJOR"
)
arr <- tiledb_array(
  sparse_array_uri,
  query_type = "READ",
  return_as = "data.frame"
)
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="ROW_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  0  1 1
2  0  3 2
3  1  3 3
4  2  0 4
5  2  2 5
6  3  1 6

You can apply filters or compression to the array by using the filter argument. The filters you can apply depend on the data type of the attribute.

  • R
arr <- tiledb_array_close(arr)
if (file.exists(sparse_array_uri)) {
  unlink(sparse_array_uri, recursive = TRUE)
}
fromDataFrame(
  coords_sparse,
  sparse_array_uri,
  col_index = c("d1", "d2"),
  filter = c("ZSTD", "GZIP")
)
arr <- tiledb_array(
  sparse_array_uri,
  query_type = "READ",
  return_as = "data.frame"
)
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1), tiledb_filter_set_option(tiledb_filter("GZIP"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  2  0 4
2  0  1 1
3  3  1 6
4  2  2 5
5  0  3 2
6  1  3 3

The default capacity of arrays you create with fromDataFrame() is 10,000 cells. You can change this with the capacity argument:

  • R
arr <- tiledb_array_close(arr)
if (file.exists(sparse_array_uri)) {
  unlink(sparse_array_uri, recursive = TRUE)
}
fromDataFrame(
  coords_sparse,
  sparse_array_uri,
  col_index = c("d1", "d2"),
  capacity = 3L
)
arr <- tiledb_array(
  sparse_array_uri,
  query_type = "READ",
  return_as = "data.frame"
)
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=3, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  2  0 4
2  0  1 1
3  3  1 6
4  2  2 5
5  0  3 2
6  1  3 3

You can set the array domain by using the tile_domain argument. By default, the domain of each dimension is the minimum and maximum of the corresponding dataframe column.

  • R
arr <- tiledb_array_close(arr)
if (file.exists(sparse_array_uri)) {
  unlink(sparse_array_uri, recursive = TRUE)
}
fromDataFrame(
  coords_sparse,
  sparse_array_uri,
  col_index = c("d1", "d2"),
  tile_domain = list(
    d1 = c(0L, 4L),
    d2 = c(0L, 5L)
  )
)
arr <- tiledb_array(
  sparse_array_uri,
  query_type = "READ",
  return_as = "data.frame"
)
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,4L), tile=5L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,5L), tile=6L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  2  0 4
2  0  1 1
3  3  1 6
4  2  2 5
5  0  3 2
6  1  3 3

The tile_extent argument controls the tile extent of the array's dimensions.

  • R
arr <- tiledb_array_close(arr)
if (file.exists(sparse_array_uri)) {
  unlink(sparse_array_uri, recursive = TRUE)
}
fromDataFrame(
  coords_sparse,
  sparse_array_uri,
  col_index = c("d1", "d2"),
  tile_extent = 2L
)
arr <- tiledb_array(
  sparse_array_uri,
  query_type = "READ",
  return_as = "data.frame"
)
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=2L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=2L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  0  1 1
2  2  0 4
3  3  1 6
4  0  3 2
5  1  3 3
6  2  2 5

Now, create a dense array from the coords_dense dataframe. Here, you’ll set mode to "schema_only" to create the array schema without writing any data. This is useful when you want to create an empty array and write data to it later.

Recall from earlier that dataframes in R have no notion of sparsity, so you must set sparse = FALSE in fromDataFrame() to create a dense array instead of a sparse array.

  • R
# Create a dense array from the `coords_dense` dataframe
arr <- tiledb_array_close(arr)
if (file.exists(dense_array_uri)) {
  unlink(dense_array_uri, recursive = TRUE)
}
fromDataFrame(
  coords_dense,
  dense_array_uri,
  col_index = c("d1", "d2"),
  sparse = FALSE,
  mode = "schema_only"
)
arr <- tiledb_array(
  dense_array_uri,
  query_type = "WRITE",
  return_as = "data.frame"
)
# Write the dense data (the `a` column of `coords_dense` holds 1 through 16)
arr[] <- t(array(coords_dense$a, dim = c(4, 4)))
arr <- tiledb_array_close(arr)
arr <- tiledb_array_open(arr, type = "READ")
print(schema(arr))
print(arr[])
arr <- tiledb_array_close(arr)
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(1L,4L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(1L,4L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=FALSE, allows_dups=FALSE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
   d1 d2  a
1   1  1  1
2   2  1  5
3   3  1  9
4   4  1 13
5   1  2  2
6   2  2  6
7   3  2 10
8   4  2 14
9   1  3  3
10  2  3  7
11  3  3 11
12  4  3 15
13  1  4  4
14  2  4  8
15  3  4 12
16  4  4 16

You can also use the "append" mode with fromDataFrame() to add new data to an existing, already populated array. Try appending a row to the sparse array.

Note

Using the "append" mode with fromDataFrame() is supported only for sparse arrays.

  • R
append_df <- data.frame(
  d1 = 1L,
  d2 = 1L,
  a = 2L
)
fromDataFrame(
  append_df,
  sparse_array_uri,
  col_index = c("d1", "d2"),
  mode = "append"
)
arr <- tiledb_array(
  sparse_array_uri,
  query_type = "READ",
  return_as = "data.frame"
)
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=2L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=2L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  0  1 1
2  2  0 4
3  3  1 6
4  0  3 2
5  1  3 3
6  2  2 5
7  1  1 2

You can set filters and compression for specific dimensions and attributes with the filter_list argument:

  • R
arr <- tiledb_array_close(arr)
if (file.exists(sparse_array_uri)) {
  unlink(sparse_array_uri, recursive = TRUE)
}
fromDataFrame(
  coords_sparse,
  sparse_array_uri,
  col_index = c("d1", "d2"),
  filter_list = list(
    d1 = "GZIP",
    d2 = "ZSTD",
    a = "GZIP"
  )
)
arr <- tiledb_array(
  sparse_array_uri,
  query_type = "READ",
  return_as = "data.frame"
)
print(schema(arr))
print(arr[])
tiledb_array_schema(
    domain=tiledb_domain(c(
        tiledb_dim(name="d1", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("GZIP"),"COMPRESSION_LEVEL",-1)))),
        tiledb_dim(name="d2", domain=c(0L,3L), tile=4L, type="INT32", filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))))
    )),
    attrs=c(
        tiledb_attr(name="a", type="INT32", ncells=1, nullable=FALSE, filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("GZIP"),"COMPRESSION_LEVEL",-1))))
    ),
    cell_order="COL_MAJOR", tile_order="COL_MAJOR", capacity=10000, sparse=TRUE, allows_dups=TRUE,
    coords_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    offsets_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("ZSTD"),"COMPRESSION_LEVEL",-1))),
    validity_filter_list=tiledb_filter_list(c(tiledb_filter_set_option(tiledb_filter("RLE"),"COMPRESSION_LEVEL",-1)))
)
  d1 d2 a
1  2  0 4
2  0  1 1
3  3  1 6
4  2  2 5
5  0  3 2
6  1  3 3

Finally, clean up by deleting both arrays.

  • R
# Clean up the arrays
if (file.exists(sparse_array_uri)) {
  unlink(sparse_array_uri, recursive = TRUE)
}
if (file.exists(dense_array_uri)) {
  unlink(dense_array_uri, recursive = TRUE)
}