
Ingest Sparse Data into a Machine Learning Model

Tags: tutorials, ai/ml, machine learning (ml), ingestion
Learn how to ingest and perform basic ML operations on the MovieLens 100k sparse dataset.

A sparse dataset is one in which a significant portion of the values are zero.

  • Most elements in the data matrix or array are zero, so the fraction of meaningful, non-zero values (the density) is small.
  • Sparse datasets are common in many domains, such as natural language processing (where most words never appear in a given document), recommendation systems (where users interact with only a small subset of items), and certain scientific applications (where measurements are missing or irrelevant for many observations).
  • Sparse matrix and sparse tensor formats store only the non-zero values along with their indices, which saves memory and computational resources compared to dense representations; the sketch after this list illustrates the idea.
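
As a minimal illustration (using SciPy, which is not otherwise part of this tutorial), a COO-format sparse matrix keeps only the non-zero values together with their coordinates:

import numpy as np
from scipy.sparse import coo_matrix

dense = np.array([[0, 0, 3], [4, 0, 0], [0, 0, 0]])
sparse = coo_matrix(dense)

# Only the two non-zero values and their (row, col) indices are stored.
print(sparse.row)   # [0 1]
print(sparse.col)   # [2 0]
print(sparse.data)  # [3 4]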

Sparse ingestion

For the sparse case, this tutorial will use the MovieLens 100k dataset.

Note

The MovieLens dataset is a well-known and widely used dataset in the field of recommendation systems and collaborative filtering research. It comprises various collections of movie ratings provided by users of the MovieLens website. These ratings are typically on a scale of 1 to 5, with 5 indicating the highest rating and 1 indicating the lowest. The MovieLens dataset comes in several versions, with the most commonly used ones being MovieLens 100k, MovieLens 1M, MovieLens 10M, MovieLens 20M, and MovieLens 25M, which denote the approximate number of ratings in each dataset. The sparsity of the MovieLens dataset depends on the specific version and how it is represented. In terms of ratings, the MovieLens dataset can be considered somewhat sparse, especially as the dataset size increases. This sparsity arises because not all users rate all movies, and the dataset typically contains many missing ratings.
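
To make the sparsity concrete: the standard ML-100k release contains 100,000 ratings from 943 users on 1,682 movies, so only 100,000 / (943 × 1,682) ≈ 6.3% of the user-movie matrix is populated; roughly 93.7% of its entries are missing.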

You can follow this tutorial with either PyTorch or TensorFlow. The download, transformation, and ingestion steps are identical for both frameworks; only the reader imports and the final dataloader code differ.

Import libraries

Start by importing the libraries used in this tutorial.

import os
import urllib.request

import numpy as np
import pandas as pd
import tiledb

# PyTorch reader
from tiledb.ml.readers.pytorch import PyTorchTileDBDataLoader
from tiledb.ml.readers.types import ArrayParams

# TensorFlow reader (use instead of the PyTorch imports above)
# from tiledb.ml.readers.tensorflow import ArrayParams, TensorflowTileDBDataset

Download the dataset

original_path = os.path.join(os.path.pardir, "data")
os.makedirs(original_path, exist_ok=True)  # make sure the target directory exists
filename = os.path.join(original_path, "movielens-ml-100k-u.data")
url = "https://files.grouplens.org/datasets/movielens/ml-100k/u.data"
urllib.request.urlretrieve(url, filename)

Transform to sparse dataset

# Load (user_id, item_id, rating) triples from the tab-separated file.
data = pd.read_csv(
    filename, sep="\t", usecols=[0, 1, 2], names=["user_id", "item_id", "rating"]
)
# One-hot encode users and items; each rating becomes a mostly-zero feature row.
data_one_hot = pd.get_dummies(data, columns=["user_id", "item_id"])
user_movie = data_one_hot[data_one_hot.columns.difference(["rating"])]
ratings = data["rating"]

Ingest in TileDB

def get_schema(data: np.ndarray, batch_size: int, sparse: bool) -> tiledb.ArraySchema:
    # One dimension per axis of the input; tile the first dimension by batch_size
    # so reads align with training batches.
    dims = [
        tiledb.Dim(
            name="dim_" + str(dim),
            domain=(0, data.shape[dim] - 1),
            tile=data.shape[dim] if dim > 0 else batch_size,
            dtype=np.int32,
        )
        for dim in range(data.ndim)
    ]
    # TileDB schema with a single float32 attribute holding the values.
    schema = tiledb.ArraySchema(
        domain=tiledb.Domain(*dims),
        sparse=sparse,
        attrs=[tiledb.Attr(name="features", dtype=np.float32)],
    )
    return schema


def ingest_in_tiledb(data: np.ndarray, batch_size: int, uri: str, sparse: bool):
    schema = get_schema(data, batch_size, sparse)
    # Create the (empty) array on disk.
    tiledb.Array.create(uri, schema)
    # Write only the non-zero cells for sparse arrays, everything for dense.
    with tiledb.open(uri, "w") as tiledb_array:
        idx = np.nonzero(data) if sparse else slice(None)
        tiledb_array[idx] = {"features": data[idx]}


data_dir = os.path.join(original_path, "readers", "sparse")
os.makedirs(data_dir, exist_ok=True)

# Ingest the one-hot user-movie features as a sparse array.
training_images = os.path.join(data_dir, "training_images")
ingest_in_tiledb(
    data=user_movie.to_numpy(), batch_size=64, uri=training_images, sparse=True
)

# Ingest the ratings (labels) as a dense array.
training_labels = os.path.join(data_dir, "training_labels")
ingest_in_tiledb(
    data=ratings.to_numpy(), batch_size=64, uri=training_labels, sparse=False
)

TileDB dataset

user_movie_array = tiledb.open(training_images)
ratings_array = tiledb.open(training_labels)
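
As a quick check that the writes landed (not part of the original tutorial output), you can inspect the non-empty domain of the sparse array, which reflects the coordinates actually written:

# The bounds of the written coordinates along each dimension.
print(user_movie_array.nonempty_domain())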

Array schemas

user_movie_array.schema

Domain
Name   Domain      Tile  Data Type  Is Var-Length  Filters
dim_0  (0, 99999)  64    int32      False          -
dim_1  (0, 2624)   2625  int32      False          -

Attributes
Name      Data Type  Is Var-Length  Is Nullable  Filters
features  float32    False          False        -

Cell Order: row-major
Tile Order: row-major
Capacity: 10000
Sparse: True
Allows Duplicates: False

ratings_array.schema

Domain
Name   Domain      Tile  Data Type  Is Var-Length  Filters
dim_0  (0, 99999)  64    int32      False          -

Attributes
Name      Data Type  Is Var-Length  Is Nullable  Filters
features  float32    False          False        -

Cell Order: row-major
Tile Order: row-major
Sparse: False


Dataloaders

TileDB offers an API with native dataloaders for all the ML frameworks with which TileDB integrates. After you store your data, you can use the API to create dataloaders in each framework that are later used as input to the model's training stage. The API takes two TileDB arrays as inputs: x, which holds the sample data, and y, which holds the label corresponding to each sample in x. The dataloader collates the two arrays into a single data object that can be used as input for training a model.

PyTorch

with tiledb.open(training_images) as x, tiledb.open(training_labels) as y:
    train_loader = PyTorchTileDBDataLoader(
        ArrayParams(x),
        ArrayParams(y),
        batch_size=128,
        num_workers=0,
        shuffle_buffer_size=256,
    )
    batch_imgs, batch_labels = next(iter(train_loader))
    print(f"Input Shape: {batch_imgs.shape}")
    print(f"Label Shape: {batch_labels.shape}")
Input Shape: torch.Size([128, 2625])
Label Shape: torch.Size([128])

TensorFlow

with tiledb.open(training_images) as x, tiledb.open(training_labels) as y:
    tiledb_dataset = TensorflowTileDBDataset(
        ArrayParams(array=x),
        ArrayParams(array=y),
    )
    batched_dataset = tiledb_dataset.batch(128)
    batch_imgs, batch_labels = next(batched_dataset.as_numpy_iterator())
    print(f"Input Shape: {batch_imgs.shape}")
    print(f"Label Shape: {batch_labels.shape}")
Input Shape: (128, 2625)
Label Shape: (128,)
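
The batches above can feed a training loop directly. The following is a minimal, hypothetical sketch (not part of the original tutorial) that fits a simple PyTorch linear regressor on the one-hot features; the model, optimizer, and loss are illustrative choices only:

import torch

# Hypothetical model: 2,625 one-hot features -> 1 predicted rating.
model = torch.nn.Linear(2625, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

with tiledb.open(training_images) as x, tiledb.open(training_labels) as y:
    train_loader = PyTorchTileDBDataLoader(
        ArrayParams(x),
        ArrayParams(y),
        batch_size=128,
        shuffle_buffer_size=256,
    )
    for batch_imgs, batch_labels in train_loader:
        # Sparse TileDB arrays may arrive as sparse COO tensors; densify for nn.Linear.
        if batch_imgs.is_sparse:
            batch_imgs = batch_imgs.to_dense()
        optimizer.zero_grad()
        preds = model(batch_imgs.float()).squeeze(-1)
        loss = loss_fn(preds, batch_labels.float())
        loss.backward()
        optimizer.step()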