Protein Search

Learn how to use TileDB-Vector-Search to perform similarity search on a protein dataset.
How to run this tutorial

We recommend running this tutorial, as well as the other tutorials in the Tutorials section, inside TileDB Cloud, so you can experiment quickly without any installation, deployment, or configuration hassle. Sign up for the free tier, spin up a TileDB Cloud notebook with a Python kernel, and follow the tutorial instructions. If you wish to learn how to run tutorials locally on your machine, read the Tutorials: Running Locally tutorial.

This tutorial shows how you can use TileDB-Vector-Search to search for similar proteins within a protein dataset.

Dataset

You will use the Swiss-Prot dataset from UniProtKB, which includes 570k manually annotated proteins with information extracted from the literature and from curator-evaluated computational analysis.

Embeddings

Protein embeddings encode the functional and structural properties of a protein, derived from its amino-acid sequence. Generating such embeddings is computationally expensive, but once computed, they can be used for different tasks, such as sequence similarity search, sequence clustering, and sequence classification.
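As a toy illustration of how embeddings enable similarity search, the cosine similarity between two embedding vectors can be computed directly with NumPy. The vectors below are random stand-ins, not real protein embeddings:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for two 1024-dimensional per-protein embeddings
emb_a = rng.normal(size=1024).astype(np.float32)
emb_b = rng.normal(size=1024).astype(np.float32)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means identical direction; values near 0 mean unrelated vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


print(cosine_similarity(emb_a, emb_a))  # a vector is maximally similar to itself
print(cosine_similarity(emb_a, emb_b))
```

Proteins whose embeddings point in similar directions tend to share functional or structural properties, which is exactly what the vector index built later in this tutorial exploits.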

UniProt provides raw embeddings (per-protein and per-residue, generated with the ProtT5 model) for the Swiss-Prot dataset.

The embeddings were generated with the bio_embeddings tool, using the prottrans_t5_xl_u50 model.

The embeddings can also be generated with the publicly available Hugging Face model ProtT5-XL-UniRef50, using the following code snippet:

import re

import torch
from transformers import T5EncoderModel, T5Tokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = T5EncoderModel.from_pretrained("Rostlab/prot_t5_xl_uniref50").to(device)
tokenizer = T5Tokenizer.from_pretrained("Rostlab/prot_t5_xl_uniref50")
sequence_examples = ["PRTEINO", "SEQWENCE"]
# Replace all rare/ambiguous amino acids with X and introduce whitespace between all amino acids
sequence_examples = [" ".join(list(re.sub(r"[UZOB]", "X", sequence))) for sequence in sequence_examples]

# Tokenize sequences and pad up to the longest sequence in the batch
ids = tokenizer.batch_encode_plus(sequence_examples, add_special_tokens=True, padding="longest")
input_ids = torch.tensor(ids["input_ids"]).to(device)
attention_mask = torch.tensor(ids["attention_mask"]).to(device)

# Generate embeddings
with torch.no_grad():
    embedding_repr = model(input_ids=input_ids, attention_mask=attention_mask)

# Extract embeddings for the first ([0,:]) sequence in the batch, removing padded & special tokens ([0,:7])
emb_0 = embedding_repr.last_hidden_state[0, :7]  # shape (7 x 1024)
print(f"Shape of per-residue embedding of first sequence: {emb_0.shape}")
# Do the same for the second ([1,:]) sequence, taking its different length into account ([1,:8])
emb_1 = embedding_repr.last_hidden_state[1, :8]  # shape (8 x 1024)

# Derive a single representation (per-protein embedding) for the whole protein
emb_0_per_protein = emb_0.mean(dim=0)  # shape (1024)
print(f"Shape of per-protein embedding of first sequence: {emb_0_per_protein.shape}")

This tutorial uses the pre-computed per-protein embeddings for the Homo sapiens subset of the Swiss-Prot dataset.

Setup

If you are running this tutorial locally, you will additionally need to install the following:

 pip install icn3dpy h5py

Start by importing the necessary libraries and defining URI variables for the different assets:

import os
import random
import shutil

import h5py
import icn3dpy
import numpy as np
import tiledb
import tiledb.vector_search as vs

input_dir = "protein-data"
swiss_prot_uri = "swiss-prot-data"
index_uri = "swiss-prot-index"

Download dataset

Download the Swiss-Prot dataset and the per-protein embeddings of the Homo Sapiens part of the dataset.

!rm -rf {input_dir}
!mkdir {input_dir}
!wget -P {input_dir} https://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/embeddings/UP000005640_9606/per-protein.h5
!wget -P {input_dir} https://ftp.uniprot.org/pub/databases/uniprot/knowledgebase/complete/uniprot_sprot.fasta.gz
!cd {input_dir} && gunzip uniprot_sprot.fasta.gz

You can now parse the FASTA-encoded Swiss-Prot dataset and load it into a TileDB array for convenient retrieval of protein data.
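For reference, a FASTA record consists of a header line starting with `>` (in the UniProtKB `sp|<accession>|<entry name> <description>` form), followed by one or more lines of amino-acid sequence. The header below is taken from a Swiss-Prot entry that appears later in this tutorial; the sequence lines are elided:

```
>sp|Q8N1N4|K2C78_HUMAN Keratin, type II cytoskeletal 78 OS=Homo sapiens OX=9606 GN=KRT78 PE=1 SV=2
...amino-acid sequence lines...
```

The utility function below extracts the accession (here `Q8N1N4`) from the second `|`-delimited field of the header and concatenates the sequence lines.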

# Utility function to load a FASTA file into a TileDB array
def fasta_to_tiledb(fasta_path, tiledb_array_uri):
    max_size = 1000000
    sequences = np.empty(max_size, dtype="O")
    metadata = np.empty(max_size, dtype="O")
    uniprot_ids = np.empty(max_size, dtype="O")
    prot_ids = np.empty(max_size, dtype="O")
    ids = np.zeros(max_size, dtype=np.uint64)
    prot_id = -1
    with open(fasta_path, "r") as fasta_f:
        for line in fasta_f:
            if line.startswith(">"):
                prot_id += 1
                prot_metadata = line.replace(">", "").strip()
                prot_metadata = (
                    prot_metadata.replace("/", "_").replace(".", "_").split(" ", 1)
                )
                uniprot_id = prot_metadata[0]
                sequences[prot_id] = ""
                uniprot_ids[prot_id] = uniprot_id
                prot_ids[prot_id] = uniprot_id.split("|")[1]
                ids[prot_id] = abs(hash(uniprot_id.split("|")[1]))  # NOTE: hash() is salted per process
                metadata[prot_id] = prot_metadata[1]
            else:
                sequences[prot_id] += (
                    "".join(line.split()).upper().replace("-", "")
                )  # drop gaps and cast to upper-case
    prot_id += 1
    with tiledb.open(tiledb_array_uri, mode="w") as swiss_prot_array:
        swiss_prot_array[ids[0:prot_id]] = {
            "sequence": sequences[0:prot_id],
            "metadata": metadata[0:prot_id],
            "uniprot_id": uniprot_ids[0:prot_id],
            "prot_id": prot_ids[0:prot_id],
        }


# Delete the array if it exists
if os.path.isdir(swiss_prot_uri):
    shutil.rmtree(swiss_prot_uri)

# Create TileDB array
dim = tiledb.Dim(name="id", domain=(0, np.iinfo(np.uint64).max - 1), dtype=np.uint64)
dom = tiledb.Domain(dim)
sequence_attr = tiledb.Attr(name="sequence", dtype=str)
metadata_attr = tiledb.Attr(name="metadata", dtype=str)
uniprot_id_attr = tiledb.Attr(name="uniprot_id", dtype=str)
prot_id_attr = tiledb.Attr(name="prot_id", dtype=str)
schema = tiledb.ArraySchema(
    domain=dom,
    sparse=True,
    attrs=[sequence_attr, metadata_attr, uniprot_id_attr, prot_id_attr],
)
tiledb.Array.create(swiss_prot_uri, schema)

# Load FASTA file to TileDB array
fasta_to_tiledb(f"{input_dir}/uniprot_sprot.fasta", swiss_prot_uri)
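Note that the array keys above are derived with Python's built-in `hash()`, which is salted per process (unless `PYTHONHASHSEED` is set), so ingestion, indexing, and querying must all happen in the same Python session for the IDs to line up. A deterministic alternative, shown here as a sketch rather than part of the tutorial's code, is to derive a 64-bit ID from a cryptographic digest:

```python
import hashlib

import numpy as np


def stable_id(accession: str) -> np.uint64:
    # First 8 bytes of the SHA-1 digest, interpreted as an unsigned 64-bit
    # integer; stable across processes and machines, unlike Python's hash().
    digest = hashlib.sha1(accession.encode("utf-8")).digest()[:8]
    return np.uint64(int.from_bytes(digest, "big"))


print(stable_id("Q8N1N4"))  # same value on every run
```

If you adopt an approach like this, use it consistently in both `fasta_to_tiledb` and the embedding-indexing step below so the external IDs match.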

Index

First, let's read the protein embeddings from the downloaded H5 file.

with h5py.File(f"{input_dir}/per-protein.h5", "r") as file:
    size = len(file.items())
    external_ids = np.zeros(size, dtype=np.uint64)
    embeddings = np.zeros((size, 1024), dtype=np.float32)
    i = 0
    for sequence_id, embedding in file.items():
        external_ids[i] = abs(hash(sequence_id))  # must match the IDs computed during FASTA ingestion
        embeddings[i] = np.array(embedding)
        i += 1

Index the protein embeddings using an IVF_FLAT index.

if os.path.isdir(index_uri):
    shutil.rmtree(index_uri)

index = vs.ingest(
    index_type="IVF_FLAT",
    index_uri=index_uri,
    input_vectors=embeddings,
    external_ids=external_ids,
)
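To build intuition for what an IVF_FLAT index does, here is a toy NumPy sketch (for illustration only; this is not TileDB's implementation): the vectors are partitioned into clusters around k-means centroids, and at query time only the `nprobe` partitions whose centroids are closest to the query are scanned exhaustively:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(1000, 8)).astype(np.float32)  # toy stand-ins for embeddings

# 1. Partition the vectors with a few steps of k-means (IVF training).
n_list = 16  # number of partitions
centroids = vectors[rng.choice(len(vectors), n_list, replace=False)]
for _ in range(5):
    # Assign every vector to its nearest centroid
    assignments = ((vectors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)
    # Move each centroid to the mean of its members
    for c in range(n_list):
        members = vectors[assignments == c]
        if len(members):
            centroids[c] = members.mean(axis=0)
# Final assignment against the trained centroids
assignments = ((vectors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)


# 2. At query time, scan exhaustively ("FLAT") only the nprobe closest partitions.
def ivf_query(q, k=4, nprobe=3):
    probe = np.argsort(((centroids - q) ** 2).sum(-1))[:nprobe]
    candidates = np.where(np.isin(assignments, probe))[0]
    dists = ((vectors[candidates] - q) ** 2).sum(-1)
    order = np.argsort(dists)[:k]
    return dists[order], candidates[order]


# A vector from the dataset is always its own nearest neighbor (distance 0)
d, i = ivf_query(vectors[0])
```

Scanning only `nprobe` of the `n_list` partitions trades a small amount of accuracy for a large reduction in work, which is the same trade-off the `nprobe` parameter controls in the real query below.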

Protein similarity search

Pick a query protein

Start by picking a random protein and displaying its 3D structure:

# Open the Swiss-Prot vector index
index = vs.IVFFlatIndex(uri=index_uri)

# Pick a random protein from Swiss-Prot
random_prot_id = random.randrange(size)
with tiledb.open(swiss_prot_uri) as swiss_prot_array:
    random_prot_data = swiss_prot_array[external_ids[random_prot_id]]
print(
    f"Query protein: {random_prot_data['prot_id'][0]} metadata: {random_prot_data['metadata']}"
)
view = icn3dpy.view(q=f"mmdbafid={random_prot_data['prot_id'][0]}")
display(view)
Query protein: Q8N1N4 metadata: ['Keratin, type II cytoskeletal 78 OS=Homo sapiens OX=9606 GN=KRT78 PE=1 SV=2']


Similarity search

Now search for similar proteins in the index:

# Search for similar proteins in Swiss-Prot.
# `d` holds the distances and `i` the external IDs of the k nearest neighbors;
# `nprobe` sets how many index partitions are scanned (higher = more accurate, slower).
d, i = index.query(np.array([embeddings[random_prot_id]]), k=4, nprobe=30)

Display results

Finally, display the results along with their 3D structure:

# Display the results
with tiledb.open(swiss_prot_uri) as swiss_prot_array:
    for similar_prot_id in i[0][1:]:  # skip the first result, which is the query protein itself
        similar_prot_data = swiss_prot_array[similar_prot_id]
        print(
            f"Similar protein: {similar_prot_data['prot_id'][0]} metadata: {similar_prot_data['metadata']}"
        )
        view = icn3dpy.view(q=f"mmdbafid={similar_prot_data['prot_id'][0]}")
        display(view)
Similar protein: Q14CN4 metadata: ['Keratin, type II cytoskeletal 72 OS=Homo sapiens OX=9606 GN=KRT72 PE=1 SV=2']

Similar protein: Q86Y46 metadata: ['Keratin, type II cytoskeletal 73 OS=Homo sapiens OX=9606 GN=KRT73 PE=1 SV=1']

Similar protein: Q3SY84 metadata: ['Keratin, type II cytoskeletal 71 OS=Homo sapiens OX=9606 GN=KRT71 PE=1 SV=3']


Clean up

Clean up all the generated data.

# Clean up past data
if os.path.exists(input_dir):
    shutil.rmtree(input_dir)
if os.path.exists(swiss_prot_uri):
    shutil.rmtree(swiss_prot_uri)
if os.path.exists(index_uri):
    shutil.rmtree(index_uri)