Performance
This section explores the performance of the TileDB-Vector-Search indexing algorithms. Before reading this tutorial, it is recommended that you read Key Concepts: Algorithms.
Vector Search trade-offs
In approximate nearest neighbor search, performance varies greatly depending on the indexing algorithm and the parameters used during ingestion and querying. The performance of the algorithm is based on trade-offs you make between:
- The ingestion speed.
- The size of the index.
- The query speed.
- The query accuracy (how many of the true nearest neighbors are returned).
Below is a detailed guide to the trade-offs between the three indexing algorithms, but here is a quick summary. More stars indicate better performance (i.e., an index that is faster, has a smaller index size, and offers higher accuracy).
| | FLAT | IVF_FLAT | VAMANA |
|---|---|---|---|
| Ingestion speed | ⭐️⭐️⭐️ | ⭐️⭐️ | ⭐️ |
| Index size | ⭐️⭐️⭐️ | ⭐️⭐️ | ⭐️ |
| Query speed | ⭐️ | ⭐️⭐️⭐️ | ⭐️⭐️⭐️ |
| Query accuracy | ⭐️⭐️⭐️ | ⭐️⭐️⭐️ | ⭐️⭐️⭐️ |
FLAT
`FLAT` is the simplest indexing algorithm. It is a brute-force search that trades off speed for accuracy, ensuring that you get the exact nearest neighbors.
Ingestion
`FLAT` ingestion can be tuned by configuring whether to run the ingestion locally on your machine, or distributed across multiple workers using TileDB task graphs. This can be configured by setting the `mode` parameter to `Mode.LOCAL` or `Mode.BATCH`. If you have a large dataset, you may want to use `Mode.BATCH` to distribute the ingestion across multiple workers. If you use `Mode.BATCH`, you can also configure ingestion in a few other ways:
- By specifying the `workers` parameter, which specifies the number of distributed workers to use for vector ingestion. If you provide a larger number of workers, the ingestion will be faster but may consume more resources. If you provide a smaller number of workers, the ingestion will be slower but will use fewer machines.
- By configuring how many vectors are processed in a single worker. To do this, you can configure `input_vectors_per_work_item`, which specifies the number of vectors per ingestion work item. Then configure `max_tasks_per_stage`, which specifies how many of these work items are processed in a single worker. If you have a high `max_tasks_per_stage`, the worker will loop through and process the `input_vectors_per_work_item` vectors, which can be faster than just processing one group of `input_vectors_per_work_item` vectors at a time because of worker startup overhead.
- By specifying `ingest_resources`, which configures the number of CPU cores and memory size of machines used in the TileDB task graph during ingestion.
These parameters only affect the ingestion speed, not the size of the index, the query speed, or the query accuracy.
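As a concrete illustration, here is a minimal sketch of a distributed `FLAT` ingestion using the parameters described above. The import paths, the index URI, and the random input vectors are assumptions made for illustration, and exact function signatures may differ between TileDB-Vector-Search versions.

```python
import numpy as np
from tiledb.cloud.dag import Mode                  # assumed import path for the Mode enum
from tiledb.vector_search.ingestion import ingest  # assumed import path for ingest()

# Hypothetical destination URI; any writable TileDB URI would work here.
index_uri = "s3://my-bucket/indexes/flat_index"

# Toy dataset: 100,000 random 128-dimensional float32 vectors.
vectors = np.random.rand(100_000, 128).astype(np.float32)

# Distributed ingestion: Mode.BATCH fans the work out over TileDB task graph workers.
ingest(
    index_type="FLAT",
    index_uri=index_uri,
    input_vectors=vectors,
    mode=Mode.BATCH,                     # Mode.LOCAL would run everything on this machine
    workers=4,                           # number of distributed ingestion workers
    input_vectors_per_work_item=25_000,  # vectors per ingestion work item
    max_tasks_per_stage=2,               # work items processed by a single worker
)
```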
Querying
`FLAT` querying can be tuned by specifying whether to run the query locally on your machine (with `driver_mode` as `None` or `Mode.LOCAL`) or on a single remote machine using a TileDB task graph (with `driver_mode` as `Mode.REALTIME`).
These parameters only affect the query speed, not the query accuracy. The query accuracy will be 100%, as all vectors will be searched to find the \(k\) nearest neighbors.
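A minimal query sketch follows, assuming the index from the ingestion sketch above already exists. The `FlatIndex` import path, the index URI, and the random query vector are assumptions for illustration; the `driver_mode` keyword follows the parameter described in this section, but exact signatures may vary across versions.

```python
import numpy as np
from tiledb.cloud.dag import Mode                       # assumed import path for the Mode enum
from tiledb.vector_search.flat_index import FlatIndex   # assumed import path

index_uri = "s3://my-bucket/indexes/flat_index"  # hypothetical URI from the ingestion sketch

# Open the FLAT index and search for the 10 exact nearest neighbors of one query vector.
index = FlatIndex(uri=index_uri)
queries = np.random.rand(1, 128).astype(np.float32)

# driver_mode=None (or Mode.LOCAL) runs the query on this machine;
# driver_mode=Mode.REALTIME would run it on a single remote machine via a TileDB task graph.
distances, ids = index.query(queries, k=10, driver_mode=None)
```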
IVF_FLAT
`IVF_FLAT` provides a good balance between ingestion speed, index size, query speed, and query accuracy, and can be tuned further for your specific use case.
Ingestion
`IVF_FLAT` ingestion can be tuned in several ways:
- By specifying the number of `partitions` to generate during \(k\)-means clustering. If you create too few partitions, the search will be slow but accurate. If you create too many partitions, the search will be fast but less accurate. Finding a good balance is important for optimal performance.
- By specifying the `training_sample_size`, which controls the number of vectors to use for training the \(k\)-means clustering. If you provide a larger sample size, the clustering will be more accurate, but ingestion will be slower. If you provide a smaller sample size, the ingestion will be faster, but the clustering might be less accurate, leading to partition size imbalance and query inefficiencies.
- By configuring whether to run the ingestion locally on your machine, or distributed across multiple workers using TileDB task graphs. This can be configured by setting the `mode` parameter to `Mode.LOCAL` or `Mode.BATCH`. If you have a large dataset, you may want to use `Mode.BATCH` to distribute the ingestion across multiple workers. If you use `Mode.BATCH`, you can also configure ingestion in a few other ways:
    - By specifying the `workers` parameter, which specifies the number of distributed workers to use for vector ingestion. If you provide a larger number of workers, the ingestion will be faster but may consume more resources. If you provide a smaller number of workers, the ingestion will be slower but will use fewer machines.
    - By configuring how many vectors are processed in a single worker. To do this, you can configure `input_vectors_per_work_item`, which specifies the number of vectors per ingestion work item. Then configure `max_tasks_per_stage`, which specifies how many of these work items are processed in a single worker. If you have a high `max_tasks_per_stage`, the worker will loop through and process the `input_vectors_per_work_item` vectors, which can be faster than just processing one group of `input_vectors_per_work_item` vectors at a time because of worker startup overhead.
    - By specifying `ingest_resources`, which configures the number of CPU cores and memory size of machines used in the TileDB task graph during ingestion. You can also control the resources used during several other parts of the \(k\)-means clustering with `kmeans_resources`, `compute_new_centroids_resources`, `assign_points_and_partial_new_centroids_resources`, `write_centroids_resources`, and `partial_index_resources`.
    - If you are using `training_sampling_policy=TrainingSamplingPolicy.RANDOM`, you can control how these vectors are randomly sampled using `input_vectors_per_work_item_during_sampling` and `max_sampling_tasks` in the same way as `input_vectors_per_work_item` and `max_tasks_per_stage`. You can also control the machine resources with `random_sample_resources`.
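Here is a minimal sketch of an `IVF_FLAT` ingestion combining the main parameters above. The import paths, index URI, random vectors, and specific parameter values are assumptions for illustration only; tune `partitions` and `training_sample_size` to your dataset.

```python
import numpy as np
from tiledb.cloud.dag import Mode                  # assumed import path for the Mode enum
from tiledb.vector_search.ingestion import ingest  # assumed import path for ingest()

index_uri = "s3://my-bucket/indexes/ivf_flat_index"  # hypothetical destination URI
vectors = np.random.rand(100_000, 128).astype(np.float32)

ingest(
    index_type="IVF_FLAT",
    index_uri=index_uri,
    input_vectors=vectors,
    partitions=100,                # number of k-means partitions to generate
    training_sample_size=10_000,   # vectors used to train the k-means clustering
    mode=Mode.BATCH,               # distribute ingestion over TileDB task graph workers
    workers=8,                     # number of distributed ingestion workers
    input_vectors_per_work_item=25_000,
    max_tasks_per_stage=2,
)
```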
Querying
`IVF_FLAT` querying can be tuned in several ways:
- By specifying `nprobe`, which configures the number of `partitions` to search through. If you search too few `partitions`, the query will be fast but less accurate. If you search too many `partitions`, the search will be slow but accurate.
    - As an example, if you ingested 200 vectors and created an index with `partitions = 10`, then each partition will contain 20 vectors. If you search for the \(k\) nearest neighbors of a single vector and set `nprobe = 2`, the search will look through each of the 20 vectors in the closest 2 partitions to your query vector (so 40 vectors in total), and the search will be fast but potentially inaccurate. If you search with `nprobe = 10`, the search will look through each of the 20 vectors in all 10 partitions (so all 200 vectors in total), and the search will be slow but 100% accurate.
    - As a rule of thumb, configuring `nprobe` to be the square root of `partitions` should result in accuracy close to 100%.
- By specifying whether to run the query locally on your machine or in the cloud with TileDB task graphs. For `IVF_FLAT`, you can configure two different steps of the query:
    - When a query starts, TileDB-Vector-Search opens several arrays and does some processing to prepare the query. You can specify whether to run this part of the query locally on your machine (with `driver_mode` as `None` or `Mode.LOCAL`) or on a single remote machine using a TileDB task graph (with `driver_mode` as `Mode.REALTIME`).
    - After the query is prepared, TileDB-Vector-Search runs the actual search. You can specify whether to run this part of the query locally (with `mode` as `None` or `Mode.LOCAL`) or on remote machine(s) using a TileDB task graph (with `mode` as `Mode.REALTIME`).
    - If you select to run the search locally, TileDB-Vector-Search will run it on the machine specified by `driver_mode`. This means that if you have `driver_mode` as `Mode.REALTIME` and `mode` as `Mode.LOCAL`, the query will run on the machine specified by `driver_mode`.
    - If you select to run the query on remote machines, you can configure how many workers to use for the query execution with `num_workers`, and how many partitions to split the query into with `num_partitions`.
- By specifying the `memory_budget`, which controls how many vectors are loaded into memory during query execution. If you do not provide this, the entire index is loaded into main memory when the index is opened, which results in faster query execution if the machine has enough memory for it. If you do provide it, vectors are loaded into memory during query execution, which is slower but may be required if you have a large index and not enough memory on the machine.
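The sketch below ties these query parameters together, assuming the `IVF_FLAT` index from the ingestion sketch above. The `IVFFlatIndex` import path, the index URI, and the specific values are illustrative assumptions; passing `memory_budget` when opening the index follows the description above, but exact signatures may vary across versions.

```python
import numpy as np
from tiledb.cloud.dag import Mode                              # assumed import path for the Mode enum
from tiledb.vector_search.ivf_flat_index import IVFFlatIndex   # assumed import path

index_uri = "s3://my-bucket/indexes/ivf_flat_index"  # hypothetical URI from the ingestion sketch

# memory_budget caps how many vectors are held in memory during query execution;
# omit it to load the entire index into main memory when the index is opened.
index = IVFFlatIndex(uri=index_uri, memory_budget=10_000)

queries = np.random.rand(1, 128).astype(np.float32)

# With partitions=100 at ingestion time, nprobe=10 (the square root of partitions)
# follows the rule of thumb for accuracy close to 100%.
distances, ids = index.query(
    queries,
    k=10,
    nprobe=10,
    driver_mode=Mode.LOCAL,  # prepare the query on this machine
    mode=Mode.LOCAL,         # run the search on this machine as well
)
```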
VAMANA
`VAMANA` has slower ingestion speed and a larger index size, but can provide slightly improved query speed and accuracy as compared to `IVF_FLAT`, and can also be tuned further for your specific use case.
Ingestion
`VAMANA` ingestion can be tuned in several ways:
- By specifying how to build the graph with `l_build` and `r_max_degree`. `l_build` controls how many neighbors are considered for each node during construction of the graph. Larger values will take more time to build but result in indices that provide higher recall for the same search complexity. `r_max_degree` controls the maximum degree for each node in the final graph. Larger values will result in larger indices, longer indexing times, and longer query times, but better accuracy.
- By configuring whether to run the ingestion locally on your machine, or on a single remote worker using a TileDB task graph. This can be configured by setting the `mode` parameter to `Mode.LOCAL` or `Mode.BATCH`. If you have a large dataset, you may want to use `Mode.BATCH` to run ingestion on a cloud machine that you can provision with enough CPU cores and memory. If you use `Mode.BATCH`, you can also configure ingestion in a few other ways:
    - By configuring how many vectors are processed in a single worker. To do this, you can configure `input_vectors_per_work_item`, which specifies the number of vectors per ingestion work item. Then configure `max_tasks_per_stage`, which specifies how many of these work items are processed in a single worker. If you have a high `max_tasks_per_stage`, the worker will loop through and process the `input_vectors_per_work_item` vectors, which can be faster than just processing one group of `input_vectors_per_work_item` vectors at a time because of worker startup overhead.
    - By specifying `ingest_resources`, which configures the number of CPU cores and memory size of machines used in the TileDB task graph during ingestion.
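A minimal `VAMANA` ingestion sketch follows, using the graph-construction parameters described above. The import paths, index URI, random vectors, and the chosen `l_build` and `r_max_degree` values are illustrative assumptions rather than recommended settings.

```python
import numpy as np
from tiledb.cloud.dag import Mode                  # assumed import path for the Mode enum
from tiledb.vector_search.ingestion import ingest  # assumed import path for ingest()

index_uri = "s3://my-bucket/indexes/vamana_index"  # hypothetical destination URI
vectors = np.random.rand(100_000, 128).astype(np.float32)

ingest(
    index_type="VAMANA",
    index_uri=index_uri,
    input_vectors=vectors,
    l_build=100,        # neighbors considered per node while constructing the graph
    r_max_degree=64,    # maximum degree of each node in the final graph
    mode=Mode.BATCH,    # run ingestion on a single remote worker via a TileDB task graph
)
```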
Querying
`VAMANA` querying can be tuned in several ways:

- By specifying the number of neighbors to search in the graph with `l_search`. Larger values will result in slower query times, but higher accuracy.
- By specifying whether to run the query locally on your machine (with `driver_mode` as `None` or `Mode.LOCAL`) or on a single remote machine using a TileDB task graph (with `driver_mode` as `Mode.REALTIME`).
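A minimal `VAMANA` query sketch, assuming the index from the ingestion sketch above. The `VamanaIndex` import path, the index URI, and the `l_search` value are assumptions for illustration; exact signatures may differ between versions.

```python
import numpy as np
from tiledb.vector_search.vamana_index import VamanaIndex  # assumed import path

index_uri = "s3://my-bucket/indexes/vamana_index"  # hypothetical URI from the ingestion sketch

index = VamanaIndex(uri=index_uri)
queries = np.random.rand(1, 128).astype(np.float32)

# Larger l_search values explore more of the graph: slower queries, higher accuracy.
distances, ids = index.query(
    queries,
    k=10,
    l_search=100,
    driver_mode=None,  # None or Mode.LOCAL runs locally; Mode.REALTIME uses a remote machine
)
```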