Vector Search Performance

Learn about the trade-offs between speed and accuracy when using TileDB-Vector-Search.

This section explores the performance of the TileDB-Vector-Search indexing algorithms. Before reading this tutorial, it is recommended that you read Key Concepts: Algorithms.

Vector Search trade-offs

In approximate nearest neighbor search, performance varies greatly depending on the indexing algorithm and the parameters used during ingestion and querying. Each algorithm's performance is governed by the trade-offs you make between:

  1. The ingestion speed.
  2. The size of the index.
  3. The query speed.
  4. The query accuracy (how many of the true nearest neighbors are returned).

Below is a detailed guide to the trade-offs among the three indexing algorithms, but here is a quick summary. More stars indicate better performance (i.e., faster ingestion, a smaller index, faster queries, or higher accuracy).

|                 | FLAT   | IVF_FLAT | VAMANA |
|-----------------|--------|----------|--------|
| Ingestion speed | ⭐️⭐️⭐️ | ⭐️⭐️     | ⭐️     |
| Index size      | ⭐️⭐️⭐️ | ⭐️⭐️     | ⭐️     |
| Query speed     | ⭐️     | ⭐️⭐️⭐️   | ⭐️⭐️⭐️ |
| Query accuracy  | ⭐️⭐️⭐️ | ⭐️⭐️⭐️   | ⭐️⭐️⭐️ |

FLAT

FLAT is the simplest indexing algorithm. It is a brute-force search that trades speed for accuracy: every vector is compared against the query, ensuring that you get the exact nearest neighbors.
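Conceptually, a FLAT query is just an exhaustive scan that keeps the \(k\) closest vectors. Here is a pure-Python sketch of the idea (an illustration only, not the TileDB-Vector-Search implementation; the function name and data are made up):

```python
import math

def flat_knn(vectors, query, k):
    """Brute-force exact k-NN: score every vector, keep the k closest.

    Illustrative only -- the library's FLAT search is far more optimized.
    """
    # Compute the distance from the query to every stored vector.
    scored = [(math.dist(v, query), i) for i, v in enumerate(vectors)]
    scored.sort()  # ascending by distance to the query
    return [i for _, i in scored[:k]]

vectors = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (0.9, 1.1)]
print(flat_knn(vectors, (1.0, 1.0), k=2))  # → [1, 3]
```

Because every vector is scanned, the result is always exact, which is why FLAT's query accuracy is 100% regardless of tuning.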

Ingestion

FLAT ingestion can be tuned by configuring whether to run the ingestion locally on your machine, or distributed across multiple workers using TileDB task graphs. This is controlled by setting the mode parameter to Mode.LOCAL or Mode.BATCH. If you have a large dataset, you may want to use Mode.BATCH to distribute the ingestion across multiple workers. If you use Mode.BATCH, you can also configure ingestion in a few other ways:

  1. By specifying the workers parameter, which sets the number of distributed workers to use for vector ingestion. More workers make the ingestion faster but consume more resources; fewer workers make it slower but use fewer machines.

  2. By configuring how many vectors each worker processes. input_vectors_per_work_item specifies the number of vectors per ingestion work item, and max_tasks_per_stage specifies how many of these work items are processed in a single worker. With a high max_tasks_per_stage, each worker loops through several batches of input_vectors_per_work_item vectors, which can be faster than processing a single work item per worker because it amortizes worker startup overhead.

  3. By specifying ingest_resources, which configures the number of CPU cores and memory size of machines used in the TileDB task graph during ingestion.

These parameters only affect the ingestion speed, not the size of the index, the query speed, or the query accuracy.
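As a back-of-the-envelope illustration of how these two parameters split up a job, consider the hypothetical helper below. It assumes a simple ceiling-division scheduling model, which is an assumption for illustration, not the library's exact scheduler:

```python
import math

def ingestion_plan(num_vectors, input_vectors_per_work_item, max_tasks_per_stage):
    """Hypothetical sketch: estimate work items and workers for a batch
    ingestion under a simple ceiling-division model (illustrative only)."""
    # One work item per chunk of input_vectors_per_work_item vectors.
    work_items = math.ceil(num_vectors / input_vectors_per_work_item)
    # Each worker loops over up to max_tasks_per_stage work items,
    # amortizing its startup overhead across them.
    workers_needed = math.ceil(work_items / max_tasks_per_stage)
    return work_items, workers_needed

# 1M vectors, 100k vectors per work item, 5 work items per worker:
print(ingestion_plan(1_000_000, 100_000, 5))  # → (10, 2)
```

Under this model, raising max_tasks_per_stage reduces the number of workers spun up, trading parallelism for less startup overhead.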

Querying

FLAT querying can be tuned by specifying whether to run the query locally on your machine (with driver_mode as None or Mode.LOCAL) or on a single remote machine using a TileDB task graph (with driver_mode as Mode.REALTIME).

These parameters only affect the query speed, not the query accuracy. The query accuracy will always be 100%, because all vectors are searched to find the \(k\) nearest neighbors.

IVF_FLAT

IVF_FLAT provides a good balance between ingestion speed, index size, query speed, and query accuracy, and can be tuned further for your specific use case.

Ingestion

IVF_FLAT ingestion can be tuned in several ways:

  1. By specifying the number of partitions to generate during \(k\)-means clustering. If you create too few partitions, the search will be slow but accurate. If you create too many partitions, the search will be fast but less accurate. Finding a good balance is important for optimal performance.

  2. By specifying the training_sample_size, which controls the number of vectors to use for training the \(k\)-means clustering. If you provide a larger sample size, the clustering will be more accurate, but ingestion will be slower. If you provide a smaller sample size, the ingestion will be faster, but the clustering might be less accurate, leading to partition size imbalance and query inefficiencies.

  3. By configuring whether to run the ingestion locally on your machine, or distributed across multiple workers using TileDB task graphs. This is controlled by setting the mode parameter to Mode.LOCAL or Mode.BATCH. If you have a large dataset, you may want to use Mode.BATCH to distribute the ingestion across multiple workers. If you use Mode.BATCH, you can also configure ingestion in a few other ways:

    1. By specifying the workers parameter, which sets the number of distributed workers to use for vector ingestion. More workers make the ingestion faster but consume more resources; fewer workers make it slower but use fewer machines.

    2. By configuring how many vectors each worker processes. input_vectors_per_work_item specifies the number of vectors per ingestion work item, and max_tasks_per_stage specifies how many of these work items are processed in a single worker. With a high max_tasks_per_stage, each worker loops through several batches of input_vectors_per_work_item vectors, which can be faster than processing a single work item per worker because it amortizes worker startup overhead.

    3. By specifying ingest_resources, which configures the number of CPU cores and memory size of machines used in the TileDB task graph during ingestion. You can also control the resources used during several other parts of the \(k\)-means clustering with kmeans_resources, compute_new_centroids_resources, assign_points_and_partial_new_centroids_resources, write_centroids_resources, and partial_index_resources.

    4. If you are using training_sampling_policy=TrainingSamplingPolicy.RANDOM, you can control how these vectors are randomly sampled using input_vectors_per_work_item_during_sampling and max_sampling_tasks in the same way as input_vectors_per_work_item and max_tasks_per_stage. You can also control machine resources with random_sample_resources.

Querying

IVF_FLAT querying can be tuned in several ways:

  1. By specifying nprobe, which configures the number of partitions to search through. If you search too few partitions, the query will be fast but less accurate. If you search too many partitions, the search will be slow but accurate.

    • As an example, if you ingested 200 vectors and created an index with partitions = 10, each partition will contain 20 vectors. If you search for the \(k\) nearest neighbors of a single vector with nprobe = 2, the search will look through the 20 vectors in each of the 2 partitions closest to your query vector (40 vectors in total), and will be fast but potentially inaccurate. If you search with nprobe = 10, the search will look through the 20 vectors in each of the 10 partitions (all 200 vectors), and will be slow but 100% accurate.

    • As a rule of thumb, configuring nprobe to be the square root of partitions should result in accuracy close to 100%.

  2. By specifying whether to run the query locally on your machine or in the cloud with TileDB task graphs. For IVF_FLAT, you can configure two different steps of the query:

    1. When a query starts, TileDB-Vector-Search opens several arrays and does some processing to prepare the query. You can specify whether to run this part of the query locally on your machine (with driver_mode as None or Mode.LOCAL) or on a single remote machine using a TileDB task graph (with driver_mode as Mode.REALTIME).

    2. After the query is prepared, TileDB-Vector-Search runs the actual search. You can specify whether to run this part of the query locally (with mode as None or Mode.LOCAL) or on remote machine(s) using a TileDB task graph (with mode as Mode.REALTIME).

      1. If you select to run the search locally, it runs on the machine chosen by driver_mode. This means that if driver_mode is Mode.REALTIME and mode is Mode.LOCAL, the search will run on the remote machine that driver_mode launched.

      2. If you select to run the query on a remote machine, you can configure how many workers to use for the query execution with num_workers, and how many partitions to split the query into with num_partitions.

  3. By specifying the memory_budget, which controls how many vectors are loaded into memory during query execution. If you do not provide this, the entire index is loaded into main memory when the index is opened. This will result in faster query execution if you have enough memory on the machine for it. If you provide it, vectors will be loaded into memory during query execution. This will be slower, but may be required if you have a large index and not enough memory on the machine.
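The nprobe arithmetic from the example above can be sketched as follows. The helper is hypothetical and assumes roughly balanced partitions; it is illustrative only, not part of the library's API:

```python
import math

def vectors_scanned(num_vectors, partitions, nprobe):
    """Approximate number of vectors examined, assuming balanced partitions."""
    per_partition = num_vectors // partitions
    return per_partition * min(nprobe, partitions)

print(vectors_scanned(200, 10, nprobe=2))   # → 40: fast, possibly inaccurate
print(vectors_scanned(200, 10, nprobe=10))  # → 200: exhaustive, 100% accurate

# Rule of thumb: nprobe ≈ sqrt(partitions), i.e., 3 or 4 here.
print(round(math.sqrt(10)))  # → 3
```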

VAMANA

VAMANA has slower ingestion and a larger index size, but can provide slightly better query speed and accuracy than IVF_FLAT, and can also be tuned further for your specific use case.

Ingestion

VAMANA ingestion can be tuned in several ways:

  1. By specifying how to build the graph with l_build and r_max_degree. l_build controls how many neighbors are considered for each node during construction of the graph. Larger values will take more time to build but result in indices that provide higher recall for the same search complexity. r_max_degree controls the maximum degree for each node in the final graph. Larger values will result in larger indices, longer indexing times, and longer query times, but better accuracy.

  2. By configuring whether to run the ingestion locally on your machine, or on a single remote worker using a TileDB task graph. This is controlled by setting the mode parameter to Mode.LOCAL or Mode.BATCH. If you have a large dataset, you may want to use Mode.BATCH to run ingestion on a cloud machine that you can provision with enough CPU cores and memory. If you use Mode.BATCH, you can also configure ingestion in a few other ways:

    1. By configuring how many vectors are processed at a time. input_vectors_per_work_item specifies the number of vectors per ingestion work item, and max_tasks_per_stage specifies how many of these work items are processed in a single worker. With a high max_tasks_per_stage, the worker loops through several batches of input_vectors_per_work_item vectors, which can be faster than processing a single work item at a time because it amortizes worker startup overhead.

    2. By specifying ingest_resources, which configures the number of CPU cores and memory size of machines used in the TileDB task graph during ingestion.

Querying

VAMANA querying can be tuned in several ways:

  1. By specifying the number of neighbors to search in the graph with l_search. Larger values will result in slower query time, but higher accuracy.

  2. By specifying whether to run the query locally on your machine (with driver_mode as None or Mode.LOCAL) or on a single remote machine using a TileDB task graph (with driver_mode as Mode.REALTIME).
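To build intuition for how l_search trades query speed for accuracy, here is a toy greedy graph search in pure Python. This is a simplified illustration only, not TileDB-Vector-Search's VAMANA implementation: the graph is a plain nearest-neighbor graph, and all names, data, and parameters are made up. Note that the candidate list holds the results, so l_search should be at least \(k\).

```python
import heapq
import math
import random

def greedy_search(points, graph, query, k, l_search, start=0):
    """Best-first graph search keeping an l_search-sized candidate list.
    Larger l_search explores more of the graph: slower, but more accurate."""
    dist = lambda i: math.dist(points[i], query)
    visited = {start}
    frontier = [(dist(start), start)]   # min-heap of unexpanded nodes
    best = [(dist(start), start)]       # l_search closest nodes seen so far
    while frontier:
        d, node = heapq.heappop(frontier)
        # Stop once the frontier cannot improve the full candidate list.
        if len(best) >= l_search and d > best[-1][0]:
            break
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (dist(nbr), nbr))
                best.append((dist(nbr), nbr))
        best = sorted(best)[:l_search]  # trim to the l_search closest
    return [i for _, i in best[:k]], len(visited)

# Toy dataset and nearest-neighbor graph (r_max_degree edges per node).
random.seed(0)
points = [(random.random(), random.random()) for _ in range(100)]
r_max_degree = 4
graph = {
    i: sorted(range(100), key=lambda j: math.dist(points[i], points[j]))[1:r_max_degree + 1]
    for i in range(100)
}

query = (0.5, 0.5)
res_small, v_small = greedy_search(points, graph, query, k=3, l_search=3)
res_large, v_large = greedy_search(points, graph, query, k=3, l_search=20)
print(v_small, v_large)  # the larger l_search visits at least as many nodes
```

In the same spirit, a larger r_max_degree in this sketch would give each node more out-edges, mirroring why larger values mean bigger indices and longer build and query times but better accuracy.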
