On this page

  • Setup
  • Vanilla ChatGPT 3.5 Turbo
  • RAG ChatGPT 3.5 Turbo
  • Clean up
  • What’s next?

RAG LLM

Tags: ai/ml, vector search, tutorials, retrieval-augmented generation
Learn how to augment LLMs with vector search using TileDB-Vector-Search and LangChain.
How to run this tutorial

We recommend running this tutorial, like the other tutorials in the Tutorials section, inside TileDB Cloud. This allows you to experiment quickly while avoiding all the installation, deployment, and configuration hassles. Sign up for the free tier, spin up a TileDB Cloud notebook with a Python kernel, and follow the tutorial instructions. If you wish to run tutorials locally on your machine, read the Tutorials: Running Locally tutorial.

One of the limitations of LLMs is that their knowledge extends only to the data used during their training. Public training datasets are missing the private and proprietary information required for enterprise applications. They are also missing information about the world and events that occurred after the dataset was created. This problem affects all types of LLMs, including public models, proprietary models, and even those deployed and used locally (e.g., in sensitive enterprise applications).

In this tutorial, you will use TileDB-Vector-Search to allow the gpt-3.5-turbo model to answer questions about LangChain. Most ChatGPT models have limited knowledge of the world after 2021, their training data cutoff date, and LangChain was created and became popular after 2021. Although this tutorial uses ChatGPT 3.5, the example can be easily extended to other LLMs. You will augment gpt-3.5-turbo with TileDB-Vector-Search via LangChain, one of the most popular large language model (LLM) application development frameworks, which integrates with the TileDB-Vector-Search library. This approach is called retrieval-augmented generation (RAG).

If you wish to learn more about RAG LLMs, visit the Introduction section.

Setup

To run this tutorial, you will need an OpenAI API key. In addition, if you wish to use your local machine instead of a TileDB Cloud notebook, install the required packages with conda:

conda install -c conda-forge langchain==0.0.331 openai==0.28.1 tiktoken

Or with pip:

pip install langchain==0.0.331 openai==0.28.1 tiktoken

Start by importing the necessary libraries, setting the URIs you will use throughout the tutorial, and cleaning up any previously generated data.

import os
import shutil

from langchain.chains import ConversationalRetrievalChain, ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.txt import TextParser
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter
from langchain.vectorstores.tiledb import TileDB

# URIs to be used throughout the tutorial
langchain_repo_uri = "langchain"
index_uri = "langchain_doc_index"

# Clean up
if os.path.exists(langchain_repo_uri):
    shutil.rmtree(langchain_repo_uri)
if os.path.exists(index_uri):
    shutil.rmtree(index_uri)

Vanilla ChatGPT 3.5 Turbo

Initialize ChatGPT 3.5 Turbo.

# NOTE: Make sure you set the OPENAI_API_KEY environment
# variable with your OpenAI API key.

# Initialize chatgpt
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chatgpt = ConversationChain(llm=llm)

Ask ChatGPT a question about LangChain. Note that ChatGPT incorrectly describes LangChain as a language learning platform based on the blockchain, and not as a framework for developing applications powered by LLMs.

question = "What is langchain?"
print(f"User: {question}")
print(f"AI: {chatgpt.run(question)}")
User: What is langchain?
AI: Langchain is a decentralized platform that uses blockchain technology to create a secure and transparent system for language learning. It allows users to connect with language tutors, access learning materials, and track their progress in real-time. The platform also uses smart contracts to ensure fair transactions between users and tutors.

RAG ChatGPT 3.5 Turbo

Now, you will use LangChain’s documentation to augment ChatGPT so that it can correctly answer the question about the project.

First, clone the LangChain repo from GitHub:

!git clone https://github.com/langchain-ai/langchain.git
Cloning into 'langchain'...
remote: Enumerating objects: 202474, done.
remote: Counting objects: 100% (15425/15425), done.
remote: Compressing objects: 100% (1401/1401), done.
remote: Total 202474 (delta 14572), reused 14225 (delta 14024), pack-reused 187049 (from 1)
Receiving objects: 100% (202474/202474), 283.53 MiB | 48.00 MiB/s, done.
Resolving deltas: 100% (151831/151831), done.
Updating files: 100% (7795/7795), done.

Next, parse the documents in the repo.

# Parse markdown documents and split them into text chunks
documentation_path = "./langchain/docs"
loader = GenericLoader.from_filesystem(
    documentation_path, glob="**/*", suffixes=[".mdx"], parser=TextParser()
)
splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.MARKDOWN, chunk_size=1000, chunk_overlap=100
)
documents = loader.load()
print(f"Number of raw documents loaded: {len(documents)}")
documents = splitter.split_documents(documents)
documents = [d for d in documents if len(d.page_content) > 5]
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
print(f"Number of document chunks: {len(texts)}")
Number of raw documents loaded: 299
Number of document chunks: 1268
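To build intuition for the `chunk_size` and `chunk_overlap` parameters, here is a simplified, stdlib-only sketch of fixed-window splitting. LangChain's `RecursiveCharacterTextSplitter` is more sophisticated (it prefers to break on Markdown separators such as headings and newlines), so treat this only as an illustration; the function name is hypothetical.

```python
def split_text(text, chunk_size=1000, chunk_overlap=100):
    # Naive fixed-window splitter: each chunk starts chunk_size - chunk_overlap
    # characters after the previous one, so consecutive chunks share
    # chunk_overlap characters. Assumes chunk_overlap < chunk_size.
    step = chunk_size - chunk_overlap
    return [
        text[i:i + chunk_size]
        for i in range(0, max(len(text) - chunk_overlap, 1), step)
    ]

text = "".join(str(i % 10) for i in range(2500))
chunks = split_text(text, chunk_size=1000, chunk_overlap=100)
print([len(c) for c in chunks])  # [1000, 1000, 700]
```

The overlap exists so that a sentence falling on a chunk boundary still appears whole in at least one chunk, which improves retrieval quality.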

Generate vector embeddings from the document chunks; you will use these embeddings to create a vector index in the next step.

# Generate embeddings for each document chunk
embedding = OpenAIEmbeddings()
text_embeddings = embedding.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
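Each embedding is a list of floats, and "similar meaning" translates to "nearby vectors." As intuition for what the vector index will do with these pairs, here is a stdlib-only sketch of exact (flat) similarity search; the function names are illustrative, not part of any library.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def brute_force_top_k(query, vectors, k=3):
    # Exact search: score every stored vector, keep the k most similar.
    order = sorted(
        range(len(vectors)),
        key=lambda i: cosine_similarity(query, vectors[i]),
        reverse=True,
    )
    return order[:k]

# Toy 2-D "embeddings": vectors pointing in similar directions score higher.
docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]]
print(brute_force_top_k([1.0, 0.05], docs, k=2))  # [0, 1]
```

Real embeddings have hundreds or thousands of dimensions, which is why production systems use indexes rather than scanning every vector.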

Create a vector index on the generated embeddings, using TileDB-Vector-Search:

# Index document chunks using a TileDB IVF_FLAT index
db = TileDB.from_embeddings(
    text_embedding_pairs,
    embedding,
    index_uri=index_uri,
    index_type="IVF_FLAT",
    metadatas=metadatas,
    allow_dangerous_deserialization=True,
)
print(
    f"Number of vector embeddings stored in TileDB-Vector-Search: {len(text_embeddings)}"
)
Number of vector embeddings stored in TileDB-Vector-Search: 1268
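An IVF_FLAT index partitions the vectors into clusters and, at query time, scans only the partitions whose centroids are closest to the query. The stdlib-only toy below sketches that idea under simplifying assumptions; it is not TileDB's implementation (which trains centroids properly with k-means and distributes the partition scans), and all names are hypothetical.

```python
import math
import random

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_ivf(vectors, n_partitions, seed=0):
    # Crude partitioning: pick random vectors as centroids, then assign
    # every vector to its nearest centroid (an inverted list per partition).
    random.seed(seed)
    centroids = random.sample(vectors, n_partitions)
    inverted_lists = [[] for _ in centroids]
    for idx, v in enumerate(vectors):
        nearest = min(range(n_partitions), key=lambda c: l2(v, centroids[c]))
        inverted_lists[nearest].append(idx)
    return centroids, inverted_lists

def ivf_query(query, vectors, centroids, inverted_lists, nprobe=1, k=3):
    # Scan only the nprobe partitions whose centroids are closest to the query.
    probe = sorted(range(len(centroids)), key=lambda c: l2(query, centroids[c]))[:nprobe]
    candidates = [i for c in probe for i in inverted_lists[c]]
    return sorted(candidates, key=lambda i: l2(query, vectors[i]))[:k]

vectors = [[float(i), float(i)] for i in range(10)]
centroids, lists = build_ivf(vectors, n_partitions=2)
print(ivf_query([0.2, 0.2], vectors, centroids, lists, nprobe=2, k=2))  # [0, 1]
```

The trade-off is speed versus recall: small `nprobe` values skip most partitions and are fast, but may miss true neighbors that fell into an unprobed partition.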

Now, ask the same question, augmenting ChatGPT with the vector index you created.

db = TileDB.load(
    index_uri=index_uri, embedding=embedding, allow_dangerous_deserialization=True
)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
retriever = db.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 5},
)

# Chatgpt augmented with our vector index
rag_chatgpt = ConversationalRetrievalChain.from_llm(llm, retriever=retriever)

question = "What is langchain?"
print(f"User: {question}")
print(f"AI: {rag_chatgpt.run({'question': question, 'chat_history': ''})}\n")
User: What is langchain?
AI: LangChain is a platform that implements the latest research in Natural Language Processing (NLP). It allows users to build AI applications, chatbots, and agents using advanced language models like OpenAI, Google Gemini Pro, and LLAMA2. LangChain is used in various industries like retail, ed-tech, and more, and it offers a range of online courses and tutorials to help users get started with building AI applications.

You can see that ChatGPT successfully responds with meaningful information about the LangChain project.
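Under the hood, the chain roughly embeds the question, retrieves the top-k most similar chunks, and stuffs them into the prompt sent to the LLM. The sketch below approximates that prompt assembly step; LangChain's actual prompt template differs, and `build_rag_prompt` is a hypothetical name.

```python
def build_rag_prompt(question, retrieved_chunks):
    # Concatenate the retrieved chunks as context ahead of the question,
    # so the LLM answers from the supplied documents rather than from
    # its (possibly outdated) training data.
    context = "\n\n".join(retrieved_chunks)
    return (
        "Use the following pieces of context to answer the question at the end.\n"
        "If you don't know the answer, just say that you don't know.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Helpful Answer:"
    )

chunks = [
    "LangChain is a framework for developing applications powered by LLMs.",
    "It integrates with vector stores such as TileDB-Vector-Search.",
]
print(build_rag_prompt("What is langchain?", chunks))
```

Because the retrieved chunks dominate the prompt, the quality of the answer depends directly on the quality of the retrieval, which is why the chunking and indexing steps above matter.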

Clean up

Clean up the generated data.

# Clean up
if os.path.exists(langchain_repo_uri):
    shutil.rmtree(langchain_repo_uri)
if os.path.exists(index_uri):
    shutil.rmtree(index_uri)

What’s next?

Now that you know how to augment an LLM with vector search, learn how to augment it with conversation history as well by reading the Tutorials: LLM Memory tutorial.
