Structure › Arrays › Tutorials › Basics › Aggregates

Compute Aggregate Queries

TileDB supports fast and parallel aggregation of results without the need to compute them externally.
How to run this tutorial

You can run this tutorial in two ways:

  1. Locally on your machine.
  2. On TileDB Cloud.

However, because TileDB Cloud offers a free tier, we strongly recommend that you sign up and run everything there, since it requires no installation or deployment.

This tutorial explains how to compute aggregate queries (for example, min, max, count, and sum) on TileDB arrays. Dense and sparse arrays behave similarly, but each is covered in its own subsection. For more information on aggregates, visit the Key Concepts: Aggregates section.
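Conceptually, an aggregate query reduces over the cells that a slice and an optional condition select, instead of returning the cells themselves. The following plain-Python sketch illustrates the idea only; it is not the TileDB API, and the `aggregate` helper is hypothetical:

```python
# (d1, d2) -> a1 value, mirroring the 4x4 dense example used later in this
# tutorial (values 1..16 laid out row-major).
data = {(i, j): (i - 1) * 4 + j for i in range(1, 5) for j in range(1, 5)}


def aggregate(data, op, cond=lambda v: True):
    """Reduce over the values that satisfy `cond` (a query condition)."""
    vals = [v for v in data.values() if cond(v)]
    ops = {"count": len, "sum": sum, "min": min, "max": max}
    return ops[op](vals)


print(aggregate(data, "sum"))                    # 136
print(aggregate(data, "sum", lambda v: v >= 5))  # 126
```

TileDB performs this reduction inside the storage engine, in parallel, so only the scalar result crosses the API boundary.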

Aggregates in dense arrays

First, import the necessary libraries, set the array URI (that is, its path, which in this tutorial will be on local storage), and delete any previously created arrays with the same name.

  • Python
  • R
# Import necessary libraries
import os.path
import shutil

import numpy as np
import tiledb

# Set array URI
array_uri = os.path.expanduser("~/aggregates_dense")

# Delete array if it already exists
if os.path.exists(array_uri):
    shutil.rmtree(array_uri)
# Import necessary libraries
library(tiledb)

# Set array URI
array_uri <- path.expand("~/aggregates_dense_r")

# Delete array if it already exists
if (file.exists(array_uri)) {
  unlink(array_uri, recursive = TRUE)
}

Next, create the array by specifying its schema.

  • Python
  • R
# Create the two dimensions
d1 = tiledb.Dim(name="d1", domain=(1, 4), tile=2, dtype=np.int32)
d2 = tiledb.Dim(name="d2", domain=(1, 4), tile=2, dtype=np.int32)

# Create a domain using the two dimensions
dom = tiledb.Domain(d1, d2)

# Create two attributes
a1 = tiledb.Attr(name="a1", dtype=np.int32)
a2 = tiledb.Attr(name="a2", dtype=np.float32)

# Create the array schema, setting `sparse=False` to indicate a dense array
sch = tiledb.ArraySchema(domain=dom, sparse=False, attrs=[a1, a2])

# Create the array on disk (it will initially be empty)
tiledb.Array.create(array_uri, sch)
# Create the two dimensions
d1 <- tiledb_dim("d1", c(1L, 4L), 2L, "INT32")
d2 <- tiledb_dim("d2", c(1L, 4L), 2L, "INT32")

# Create a domain using the two dimensions
dom <- tiledb_domain(dims = c(d1, d2))
# Order of the dimensions matters when slicing subarrays.
# Remember to give priority to more selective dimensions to
# maximize the pruning power during slicing.

# Create two attributes
a1 <- tiledb_attr("a1", type = "INT32")
a2 <- tiledb_attr("a2", type = "FLOAT64")

# Create the array schema, setting `sparse = FALSE` to indicate a dense array
sch <- tiledb_array_schema(dom, c(a1, a2), sparse = FALSE)

# Create the array on disk (it will initially be empty)
arr <- tiledb_array_create(array_uri, sch)

Populate the TileDB array with two 2D input arrays (since you have two attributes).

  • Python
  • R
# Prepare some data in NumPy arrays
a1_data = np.array(
    [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]], dtype=np.int32
)
a2_data = np.array(
    [
        [1.1, 2.2, 3.3, 4.4],
        [5.5, 6.6, 7.7, 8.8],
        [9.9, 10.10, 11.11, 12.12],
        [13.13, 14.14, 15.15, 16.16],
    ],
    dtype=np.float32,
)

# Write data to the array
with tiledb.open(array_uri, "w") as A:
    A[:] = {"a1": a1_data, "a2": a2_data}
# Prepare some data in two arrays, one for each attribute
a1_data <- t(array(1:16, dim = c(4, 4)))

a2_data <- t(array(
  c(
    1.1, 2.2, 3.3, 4.4,
    5.5, 6.6, 7.7, 8.8,
    9.9, 10.10, 11.11, 12.12,
    13.13, 14.14, 15.15, 16.16
  ),
  dim = c(4L, 4L)
))
# Open the array for writing and write data to the array
arr <- tiledb_array(
  uri = array_uri,
  query_type = "WRITE"
)
arr[] <- list(
  a1 = a1_data,
  a2 = a2_data
)

# Close the array
arr <- tiledb_array_close(arr)

Read the data back, invoking aggregate queries in any of the following ways:

  • Python
  • R
# Open the array in read mode
A = tiledb.open(array_uri, "r")

print("Entire array:")
print(A[:])
print("\n")

# Various aggregates
print("--- Various aggregates ---\n")
print("Count in entire array for both attributes:", A.query().agg("count")[:])
print("Count in entire array for 'a1':", A.query().agg({"a1": "count"})[:])
print("Count in entire array for 'a2':", A.query().agg({"a2": "count"})[:])
print("Max for 'a1':", A.query().agg({"a1": "max"})[:])
print("Max for 'a2':", A.query().agg({"a2": "max"})[:])
print(
    "Max, min, count and sum for 'a1':",
    A.query().agg({"a1": ["max", "min", "count", "sum"]})[:],
)
print(
    "Max, min, count and sum for 'a2':",
    A.query().agg({"a2": ["max", "min", "count", "sum"]})[:],
)
print(
    "Max and min for 'a1', sum for 'a2':",
    A.query().agg({"a1": ["max", "min"], "a2": ["sum"]})[:],
)
print("\n")

# Aggregates work with slicing
print("--- Aggregates with slicing ---\n")
print("Slice [1:3),[2:3):", A.query(attrs=["a1", "a2"], dims=["d1", "d2"])[1:3, 2:3])
print("Sum for 'a1' and 'a2' in slice [1,3), [2,3):", A.query().agg("sum")[1:3, 2:3])
print(
    "Sum for multi-range, rows 1, 2 and 4, cols 1-3:",
    A.query().agg("sum").multi_index[[slice(1, 2), 4], slice(1, 3)],
)
print("\n")

# Aggregates work with query conditions
print("--- Aggregates with query conditions ---\n")
q = A.query(cond="a1 >= 5")
print("Sum with condition 'a1 >= 5':", q.agg("sum")[:])

# Remember to close the array
A.close()
Entire array:
OrderedDict({'a1': array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]], dtype=int32), 'a2': array([[ 1.1 ,  2.2 ,  3.3 ,  4.4 ],
       [ 5.5 ,  6.6 ,  7.7 ,  8.8 ],
       [ 9.9 , 10.1 , 11.11, 12.12],
       [13.13, 14.14, 15.15, 16.16]], dtype=float32)})


--- Various aggregates ---

Count in entire array for both attributes: {'a1': {'count': 16}, 'a2': {'count': 16}}
Count in entire array for 'a1': 16
Count in entire array for 'a2': 16
Max for 'a1': 16
Max for 'a2': 16.15999984741211
Max, min, count and sum for 'a1': {'max': 16, 'min': 1, 'count': 16, 'sum': 136}
Max, min, count and sum for 'a2': {'max': 16.15999984741211, 'min': 1.100000023841858, 'count': 16, 'sum': 141.40999948978424}
Max and min for 'a1', sum for 'a2': {'a1': {'max': 16, 'min': 1}, 'a2': {'sum': 141.40999948978424}}


--- Aggregates with slicing ---

Slice [1:3),[2:3): OrderedDict({'d1': array([[1],
       [2]], dtype=int32), 'd2': array([[2],
       [2]], dtype=int32), 'a1': array([[2],
       [6]], dtype=int32), 'a2': array([[2.2],
       [6.6]], dtype=float32)})
Sum for 'a1' and 'a2' in slice [1,3), [2,3): {'a1': {'sum': 8}, 'a2': {'sum': 8.799999952316284}}
Sum for multi-range, rows 1, 2 and 4, cols 1-3: {'a1': {'sum': 66}, 'a2': {'sum': 68.81999981403351}}


--- Aggregates with query conditions ---

Sum with condition 'a1 >= 5': {'a1': {'sum': 126}, 'a2': {'sum': 130.40999937057495}}
# Aggregate function wrapper
aggregate_attributes <- function(
    array,
    attribute,
    operation,
    subarray = c(1L, 4L, 1L, 4L)) {
  qry <- tiledb_query(array, "READ")
  qry <- tiledb_query_set_subarray(qry, subarray)

  # tiledb_query_apply_aggregate works for both dense and sparse arrays
  result <- tiledb_query_apply_aggregate(
    qry,
    attribute,
    operation,
    nullable = FALSE
  )
  # cat() returns NULL, so print only via cat() itself (wrapping it in
  # print() would append a spurious NULL to each line)
  if (missing(subarray)) {
    cat(paste0(operation, " for ", attribute, ": ", result, "\n"))
  } else {
    cat(paste0(
      operation, " for ", attribute,
      " with subarray (", paste(subarray, collapse = ", "), "): ",
      result, "\n"
    ))
  }
  array <- tiledb_array_close(array)
}

operations <- c("Count", "Min", "Max", "Mean", "Sum")
attributes <- c("a1", "a2")

for (operation in operations) {
  for (attribute in attributes) {
    aggregate_attributes(arr, attribute, operation)
  }
}

# You can run aggregates against subarrays
subarray <- c(1L, 3L, 1L, 2L)

for (operation in operations) {
  for (attribute in attributes) {
    aggregate_attributes(arr, attribute, operation, subarray)
  }
}

# You can also run aggregates on queries with query conditions
qc <- parse_query_condition(a1 >= 5)

for (attribute in attributes) {
  qry <- tiledb_query(arr, "READ")
  qry <- tiledb_query_set_condition(qry, qc)
  qry <- tiledb_query_set_subarray(qry, c(1L, 4L, 1L, 4L))
  result <- tiledb_query_apply_aggregate(
    qry,
    attribute,
    "Sum",
    nullable = FALSE
  )
  cat(paste0("Sum of ", attribute, " with condition 'a1 >= 5': ", result, "\n"))
}
Count for a1: 16
Count for a2: 16
Min for a1: 1
Min for a2: 1.1
Max for a1: 16
Max for a2: 16.16
Mean for a1: 8.5
Mean for a2: 8.838125
Sum for a1: 136
Sum for a2: 141.41
Count for a1 with subarray (1, 3, 1, 2): 6
Count for a2 with subarray (1, 3, 1, 2): 6
Min for a1 with subarray (1, 3, 1, 2): 1
Min for a2 with subarray (1, 3, 1, 2): 1.1
Max for a1 with subarray (1, 3, 1, 2): 10
Max for a2 with subarray (1, 3, 1, 2): 10.1
Mean for a1 with subarray (1, 3, 1, 2): 5.5
Mean for a2 with subarray (1, 3, 1, 2): 5.9
Sum for a1 with subarray (1, 3, 1, 2): 33
Sum for a2 with subarray (1, 3, 1, 2): 35.4
Sum of a1 with condition 'a1 >= 5': 126
Sum of a2 with condition 'a1 >= 5': 130.41
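As a sanity check, the aggregate values reported above follow directly from the 4×4 input data; a quick recomputation in plain Python, independent of TileDB:

```python
# a1 holds 1..16 row-major, as written earlier in this tutorial
a1 = list(range(1, 17))

print(len(a1), min(a1), max(a1), sum(a1))  # 16 1 16 136
print(sum(a1) / len(a1))                   # 8.5 (the 'Mean' reported by R)

# The conditional sum with a1 >= 5 drops cells 1..4:
print(sum(v for v in a1 if v >= 5))        # 126
```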

Clean up at the end by deleting the array.

  • Python
  • R
# Delete the array
if os.path.exists(array_uri):
    shutil.rmtree(array_uri)
if (file.exists(array_uri)) {
  unlink(array_uri, recursive = TRUE)
}

Aggregates in sparse arrays

First, import the necessary libraries, set the array URI (that is, its path, which in this tutorial will be on local storage), and delete any previously created arrays with the same name.

  • Python
  • R
# Import necessary libraries
import os.path
import shutil

import numpy as np
import tiledb

# Set array URI
array_uri = os.path.expanduser("~/aggregates_sparse")

# Delete array if it already exists
if os.path.exists(array_uri):
    shutil.rmtree(array_uri)
# Import necessary libraries
library(tiledb)

# Set array URI
array_uri <- path.expand("~/aggregates_sparse_r")

# Delete array if it already exists
if (file.exists(array_uri)) {
  unlink(array_uri, recursive = TRUE)
}

Next, create the array by specifying its schema.

  • Python
  • R
# Create the two dimensions
d1 = tiledb.Dim(name="d1", domain=(0, 3), tile=2, dtype=np.int32)
d2 = tiledb.Dim(name="d2", domain=(0, 3), tile=2, dtype=np.int32)

# Create a domain using the two dimensions
dom = tiledb.Domain(d1, d2)

# Create two attributes
a1 = tiledb.Attr(name="a1", dtype=np.int32)
a2 = tiledb.Attr(name="a2", dtype=np.float32)

# Create the array schema with `sparse=True`
sch = tiledb.ArraySchema(domain=dom, sparse=True, attrs=[a1, a2])

# Create the array on disk (it will initially be empty)
tiledb.Array.create(array_uri, sch)
# Create the two dimensions
d1 <- tiledb_dim("d1", c(0L, 3L), 2L, "INT32")
d2 <- tiledb_dim("d2", c(0L, 3L), 2L, "INT32")

# Create a domain using the two dimensions
dom <- tiledb_domain(dims = c(d1, d2))

# Create two attributes
a1 <- tiledb_attr("a1", type = "INT32")
a2 <- tiledb_attr("a2", type = "FLOAT32")

# Create the array schema with `sparse = TRUE`
sch <- tiledb_array_schema(dom, c(a1, a2), sparse = TRUE)

# Create the array on disk (it will initially be empty)
arr <- tiledb_array_create(array_uri, sch)

Populate the sparse array by writing each non-empty cell's coordinates together with its attribute values, in COO (coordinate) format.

  • Python
  • R
# Prepare some data in NumPy arrays
d1_data = np.array([2, 0, 3, 2, 0, 1], dtype=np.int32)
d2_data = np.array([0, 1, 1, 2, 3, 3], dtype=np.int32)
a1_data = np.array([4, 1, 7, 5, 2, 3], dtype=np.int32)
a2_data = np.array([4.4, 1.1, 7.7, 5.5, 2.2, 3.3], dtype=np.float32)

# Open the array in write mode and write the data in COO format
with tiledb.open(array_uri, "w") as A:
    A[d1_data, d2_data] = {"a1": a1_data, "a2": a2_data}
# Prepare the coordinate and attribute data in vectors
d1_data <- c(2L, 0L, 3L, 2L, 0L, 1L)
d2_data <- c(0L, 1L, 1L, 2L, 3L, 3L)
a1_data <- c(4L, 1L, 7L, 5L, 2L, 3L)
a2_data <- c(4.4, 1.1, 7.7, 5.5, 2.2, 3.3)

# Open the array for writing and write data to the array
arr <- tiledb_array(
  uri = array_uri,
  query_type = "WRITE",
  return_as = "data.frame"
)
arr[] <- data.frame(
  d1 = d1_data,
  d2 = d2_data,
  a1 = a1_data,
  a2 = a2_data
)

# Close the array
invisible(tiledb_array_close(arr))
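The COO layout above pairs each coordinate tuple with its attribute values; a plain-Python picture of the same data (an illustration only, not the TileDB API):

```python
# Each non-empty cell is a (d1, d2) coordinate paired with its (a1, a2) values.
d1 = [2, 0, 3, 2, 0, 1]
d2 = [0, 1, 1, 2, 3, 3]
a1 = [4, 1, 7, 5, 2, 3]
a2 = [4.4, 1.1, 7.7, 5.5, 2.2, 3.3]

cells = {coord: vals for coord, vals in zip(zip(d1, d2), zip(a1, a2))}

print(cells[(2, 0)])  # (4, 4.4)
print(len(cells))     # 6 non-empty cells in the 4x4 domain
```

Only these six cells are materialized on disk; the rest of the 4×4 domain stays empty, which is what makes the array sparse.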

Read the data back, invoking aggregate queries in any of the following ways:

  • Python
  • R
# Open the array in read mode
A = tiledb.open(array_uri, "r")

print("Entire array:")
print(A[:])
print("\n")

# Various aggregates
print("--- Various aggregates ---\n")
print("Count in entire array for both attributes:", A.query().agg("count")[:])
print("Count in entire array for 'a1':", A.query().agg({"a1": "count"})[:])
print("Count in entire array for 'a2':", A.query().agg({"a2": "count"})[:])
print("Max for 'a1':", A.query().agg({"a1": "max"})[:])
print("Max for 'a2':", A.query().agg({"a2": "max"})[:])
print(
    "Max, min, count and sum for 'a1':",
    A.query().agg({"a1": ["max", "min", "count", "sum"]})[:],
)
print(
    "Max, min, count and sum for 'a2':",
    A.query().agg({"a2": ["max", "min", "count", "sum"]})[:],
)
print(
    "Max and min for 'a1', sum for 'a2':",
    A.query().agg({"a1": ["max", "min"], "a2": ["sum"]})[:],
)
print("\n")

# Aggregates work with slicing
print("--- Aggregates with slicing ---\n")
print("Slice [0:3),[0:3):", A.query(attrs=["a1", "a2"], dims=["d1", "d2"])[0:3, 0:3])
print("Sum for 'a1' and 'a2' in slice [0,3), [0,3):", A.query().agg("sum")[0:3, 0:3])
print(
    "Sum for multi-range, rows 0, 1 and 3, cols 0-2:",
    A.query().agg("sum").multi_index[[slice(0, 1), 3], slice(0, 2)],
)
print("\n")

# Aggregates work with query conditions
print("--- Aggregates with query conditions ---\n")
q = A.query(cond="a1 >= 5")
print("Sum with condition 'a1 >= 5':", q.agg("sum")[:])

# Remember to close the array
A.close()
Entire array:
OrderedDict({'a1': array([1, 2, 3, 4, 7, 5], dtype=int32), 'a2': array([1.1, 2.2, 3.3, 4.4, 7.7, 5.5], dtype=float32), 'd1': array([0, 0, 1, 2, 3, 2], dtype=int32), 'd2': array([1, 3, 3, 0, 1, 2], dtype=int32)})


--- Various aggregates ---

Count in entire array for both attributes: {'a1': {'count': 6}, 'a2': {'count': 6}}
Count in entire array for 'a1': 6
Count in entire array for 'a2': 6
Max for 'a1': 7
Max for 'a2': 7.699999809265137
Max, min, count and sum for 'a1': {'max': 7, 'min': 1, 'count': 6, 'sum': 22}
Max, min, count and sum for 'a2': {'max': 7.699999809265137, 'min': 1.100000023841858, 'count': 6, 'sum': 24.199999928474426}
Max and min for 'a1', sum for 'a2': {'a1': {'max': 7, 'min': 1}, 'a2': {'sum': 24.199999928474426}}


--- Aggregates with slicing ---

Slice [0:3),[0:3): OrderedDict({'a1': array([1, 4, 5], dtype=int32), 'a2': array([1.1, 4.4, 5.5], dtype=float32), 'd1': array([0, 2, 2], dtype=int32), 'd2': array([1, 0, 2], dtype=int32)})
Sum for 'a1' and 'a2' in slice [0,3), [0,3): {'a1': {'sum': 10}, 'a2': {'sum': 11.00000011920929}}
Sum for multi-range, rows 0, 1 and 3, cols 0-2: {'a1': {'sum': 8}, 'a2': {'sum': 8.799999833106995}}


--- Aggregates with query conditions ---

Sum with condition 'a1 >= 5': {'a1': {'sum': 12}, 'a2': {'sum': 13.199999809265137}}
# tiledb_query_apply_aggregate() works for both dense and sparse arrays.
# For sparse arrays, you can also use tiledb_array_apply_aggregate().
aggregate_attributes_arr <- function(
    array,
    attribute,
    operation) {
  library(tiledb)
  result <- tiledb_array_apply_aggregate(
    array,
    attribute,
    operation,
    nullable = FALSE
  )
  print(paste0(operation, " of ", attribute, ": ", result))
}

operations <- c("Count", "Min", "Max", "Mean", "Sum")
attributes <- c("a1", "a2")

for (operation in operations) {
  for (attribute in attributes) {
    aggregate_attributes_arr(arr, attribute, operation)
  }
}
[1] "Count of a1: 6"
[1] "Count of a2: 6"
[1] "Min of a1: 1"
[1] "Min of a2: 1.10000002384186"
[1] "Max of a1: 7"
[1] "Max of a2: 7.69999980926514"
[1] "Mean of a1: 3.66666666666667"
[1] "Mean of a2: 4.0333333214124"
[1] "Sum of a1: 22"
[1] "Sum of a2: 24.1999999284744"
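Again, the reported aggregates follow directly from the six a1 values written to the sparse array; a quick recomputation in plain Python, independent of TileDB:

```python
# a1 values written to the sparse array above
a1 = [4, 1, 7, 5, 2, 3]

print(len(a1), min(a1), max(a1), sum(a1))  # 6 1 7 22

# The conditional sum with a1 >= 5 keeps only 7 and 5:
print(sum(v for v in a1 if v >= 5))        # 12
```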

Clean up at the end by deleting the array.

  • Python
  • R
# Delete the array
if os.path.exists(array_uri):
    shutil.rmtree(array_uri)
if (file.exists(array_uri)) {
  unlink(array_uri, recursive = TRUE)
}