File Search

This tutorial demonstrates how to create a vector index over a collection of PDF files and search them using an English phrase.
How to run this tutorial

We recommend running this tutorial, as well as the other tutorials in the Tutorials section, inside TileDB Cloud. This lets you experiment quickly while avoiding installation, deployment, and configuration hassles. Sign up for the free tier, spin up a TileDB Cloud notebook with a Python kernel, and follow the tutorial instructions. If you wish to learn how to run tutorials locally on your machine, read the Tutorials: Running Locally tutorial.

In this tutorial, you will learn how to load large collections of PDF files into a TileDB-Vector-Search index, and query them using an English phrase.

Setup

To run this tutorial, you need an OpenAI API key (a minimal example of providing the key appears after the install command below). In addition, if you wish to use your local machine instead of a TileDB Cloud notebook, you will need to install the following:

  • Pip
pip install langchain==0.0.331 langchain_community openai==0.28.1 tiktoken pymupdf pillow
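LangChain's OpenAIEmbeddings client reads the key from the OPENAI_API_KEY environment variable. One minimal way to provide it inside the notebook, assuming you prefer an interactive prompt over hard-coding the key, is:

import getpass
import os

# Prompt for the key interactively so it never appears in the notebook source
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")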

Start by importing the necessary libraries, setting the URIs you will use throughout the tutorial, and cleaning up any previously generated data.

import os
import shutil

os.environ["TOKENIZERS_PARALLELISM"] = "true"
import warnings

warnings.filterwarnings("ignore")
from io import BytesIO

import fitz
import numpy as np
import tiledb
import tiledb.vector_search as vs
from langchain.document_loaders.generic import GenericLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders.parsers.pdf import PyMuPDFParser
from PIL import Image

# URIs you will use in this tutorial
input_files_uri = "random_invoices"
index_uri = "file_index"
metadata_array_uri = "chunk_metadata"

# Clean up past data
if os.path.exists(input_files_uri):
    shutil.rmtree(input_files_uri)
if os.path.exists(index_uri):
    shutil.rmtree(index_uri)
if os.path.exists(metadata_array_uri):
    shutil.rmtree(metadata_array_uri)

Download some synthetically generated invoice PDFs.

os.mkdir(input_files_uri)
!aws s3 cp --no-sign-request --recursive s3://tiledb-inc-demo-data/examples/notebooks/genai/random_invoices {input_files_uri}

Ingestion

You will now ingest the invoice PDFs. Ingestion performs the following steps:

  • File parsing, text extraction, and text splitting into chunks.
  • Text embedding generation using open source embedding models or OpenAI API calls.
  • Vector indexing of embeddings.

Extract text from the PDF documents and split it into text chunks using LangChain utilities.

# Parse documents and split them into text chunks
loader = GenericLoader.from_filesystem(
    input_files_uri, glob="**/*", suffixes=[".pdf"], parser=PyMuPDFParser()
)
documents = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
print(f"Number of raw documents loaded: {len(documents)}")
documents = splitter.split_documents(documents)
texts = [d.page_content for d in documents]
print(f"Number of document chunks: {len(texts)}")
Number of raw documents loaded: 971
Number of document chunks: 971

Create a TileDB array to store the text chunks and their metadata.

# Create metadata array
external_ids_dim = tiledb.Dim(
    name="external_id",
    domain=(0, np.iinfo(np.dtype("uint64")).max - 10000),
    tile=10000,
    dtype=np.dtype(np.uint64),
)
external_ids_dom = tiledb.Domain(external_ids_dim)
text_attr = tiledb.Attr(name="text", dtype=str)
file_path_attr = tiledb.Attr(name="file_path", dtype=str)
page_attr = tiledb.Attr(name="page", dtype=np.int32)
attrs = [text_attr, file_path_attr, page_attr]
schema = tiledb.ArraySchema(
    domain=external_ids_dom,
    sparse=True,
    attrs=attrs,
)
tiledb.Array.create(metadata_array_uri, schema)

Next, store the text of the chunks along with their metadata in the TileDB array.

# Add document chunk metadata to metadata array
size = len(documents)
text_metadata = np.empty(size, dtype="O")
file_paths = np.empty(size, dtype="O")
pages = np.zeros(size, dtype=np.int32)
external_ids = np.zeros(size, dtype=np.uint64)

for i in range(size):
    pages[i] = int(documents[i].metadata["page"])
    file_paths[i] = documents[i].metadata["file_path"]
    text_metadata[i] = texts[i]
    external_ids[i] = i

with tiledb.open(metadata_array_uri, "w") as metadata_array:
    metadata_array[external_ids] = {
        "text": text_metadata,
        "file_path": file_paths,
        "page": pages,
    }

You can now generate text embeddings and index them using an IVF_FLAT index.

# NOTE: You need to set the OPENAI_API_KEY environment variable for this to work.

# Generate embeddings for each document chunk
embedding = OpenAIEmbeddings()
text_embeddings = embedding.embed_documents(texts)

# Index document chunks using a TileDB IVF_FLAT index
vs.ingest(
    index_type="IVF_FLAT",
    index_uri=index_uri,
    input_vectors=np.array(text_embeddings).astype(np.float32),
)
print(
    f"Number of vector embeddings stored in TileDB-Vector-Search: {len(text_embeddings)}"
)
Number of vector embeddings stored in TileDB-Vector-Search: 971

Search

Open the vector search index:

index = vs.IVFFlatIndex(uri=index_uri)

Search for texts related to “Internet purchase”.

query = "Internet purchase"
k = 2

query_embeddings = embedding.embed_documents([query])
d, result_ids = index.query(
    queries=np.array(query_embeddings).astype(np.float32), k=k, nprobe=index.partitions
)
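The nprobe parameter controls how many IVF partitions the query scans. Setting nprobe=index.partitions, as above, searches every partition and therefore returns exact results; a smaller value trades some recall for speed. A sketch of a faster, approximate query (the value 4 is an arbitrary example, not a recommendation):

# Scan only a few partitions for a faster, approximate search
d_approx, approx_ids = index.query(
    queries=np.array(query_embeddings).astype(np.float32), k=k, nprobe=4
)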

Display the results by retrieving the relevant chunk metadata from the metadata TileDB array.

def showImage(file_path):
    # Render the first page of the PDF at the given path
    doc = fitz.open(file_path)
    page = doc.load_page(0)
    zoom = 1
    mat = fitz.Matrix(zoom, zoom)
    pix = page.get_pixmap(matrix=mat)
    image = Image.open(BytesIO(pix.tobytes(output="png", jpg_quality=95)))
    display(image)


with tiledb.open(metadata_array_uri) as metadata_array:
    result_metadata = metadata_array.multi_index[result_ids[0]]
    for i in range(k):
        path = result_metadata["file_path"][i]
        print(f"File path: {path}")
        page = result_metadata["page"][i]
        print(f"Page: {page}")
        showImage(result_metadata["file_path"][i])
File path: random_invoices/Griffin-Lewis_invoice.pdf
Page: 0

File path: random_invoices/Flynn Ltd_invoice.pdf
Page: 0
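The metadata array also stores the text of each chunk, and the query returned the vector distances in d. If you also want to inspect those, a small extension of the loop above (using the same variable names) could look like this:

with tiledb.open(metadata_array_uri) as metadata_array:
    result_metadata = metadata_array.multi_index[result_ids[0]]
    for i in range(k):
        # Distance of the i-th result from the query embedding
        print(f"Distance: {d[0][i]}")
        # Text of the matching chunk stored in the metadata array
        print(result_metadata["text"][i])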

Clean up

Delete the data you created in this tutorial.

# Clean up
if os.path.exists(input_files_uri):
    shutil.rmtree(input_files_uri)
if os.path.exists(index_uri):
    shutil.rmtree(index_uri)
if os.path.exists(metadata_array_uri):
    shutil.rmtree(metadata_array_uri)