CSV Ingestion Tutorial

This tutorial covers the various ways of ingesting CSV files into TileDB tables.
How to run this tutorial

We recommend running this tutorial, as well as the other tutorials in the Tutorials section, inside TileDB Cloud. By using TileDB Cloud, you can experiment while avoiding all the installation, deployment, and configuration hassles. Sign up for the free tier, spin up a TileDB Cloud notebook with a Python kernel, and follow the tutorial instructions. If you wish to learn how to run tutorials locally on your machine, read the Tutorials: Running Locally tutorial.

This tutorial shows you how to create tables and ingest data into them directly from a CSV file. You will first perform some basic setup steps, and then you will create two different types of tables: one represented as a 1D dense array, and one as a 2D sparse array. If you wish to understand their differences and their impact on performance, read the Tables Data Model section.

Setup

First, import the necessary libraries, set the URIs you will use in this tutorial, and delete any already-created tables with the same name.

import os
import shutil
import warnings

import tiledb

# Ignore warnings, including any emitted while importing tiledb.sql below
warnings.filterwarnings("ignore")
import numpy as np
import tiledb.sql

# Print library versions
print("TileDB core version: {}".format(tiledb.libtiledb.version()))
print("TileDB-Py version: {}".format(tiledb.version()))
print("TileDB-SQL version: {}".format(tiledb.sql.version))

# Set table dataset URIs, and the URI to an example CSV
dense_table_uri = "my_dense_table"
sparse_table_uri = "my_sparse_table"
example_csv_uri = (
    "s3://tiledb-inc-demo-data/examples/notebooks/nyc_yellow_tripdata/taxi_first_10.csv"
)

# Configure anonymous (unsigned) access to the public S3 bucket
cfg = tiledb.Config({"vfs.s3.no_sign_request": "true", "vfs.s3.region": "us-east-1"})
ctx = tiledb.Ctx(cfg)

# Clean up the tables if they already exist
if os.path.exists(dense_table_uri):
    shutil.rmtree(dense_table_uri)
if os.path.exists(sparse_table_uri):
    shutil.rmtree(sparse_table_uri)

You will use a small subset (the first 10 rows) of the New York City Taxi and Limousine Commission Trip Record Data dataset.
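If you want to peek at the raw CSV before ingesting it, you can read a few rows with pandas. This is a minimal sketch; it assumes the s3fs package is installed so pandas can access the public bucket anonymously.

import pandas as pd

# Preview the first few rows of the raw CSV (anonymous S3 access via s3fs)
preview = pd.read_csv(example_csv_uri, nrows=5, storage_options={"anon": True})
print(preview)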

Ingest CSV into a 1D dense array

You can ingest a CSV file into a TileDB table (which TileDB will create if it doesn't already exist) as follows. Observe that all you need is the source CSV file and the TileDB table URI. The parse_dates parameter forces certain CSV fields to be parsed as datetimes.

tiledb.from_csv(
    dense_table_uri,
    example_csv_uri,
    ctx=ctx,
    parse_dates=["tpep_dropoff_datetime", "tpep_pickup_datetime"],
)
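As a quick sanity check, you can verify that a TileDB array now exists at the target URI (a small sketch using the standard object-type accessor):

# Should print "array" for a successfully created table
print(tiledb.object_type(dense_table_uri, ctx=ctx))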

Prepare the table for reading using the Python API.

# Open the table in read mode
table = tiledb.open(dense_table_uri, mode="r", ctx=ctx)
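Alternatively, you can open the array as a context manager so the handle closes automatically when you are done (a minimal sketch):

# Context-manager form: the array handle is closed on exit
with tiledb.open(dense_table_uri, mode="r", ctx=ctx) as t:
    print(t.schema)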

Inspect the schema of the underlying array.

# Inspect the array schema
print(table.schema)
ArraySchema(
  domain=Domain(*[
    Dim(name='__tiledb_rows', domain=(0, 9), tile=10, dtype='uint64', filters=FilterList([ZstdFilter(level=-1), ])),
  ]),
  attrs=[
    Attr(name='VendorID', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='tpep_pickup_datetime', dtype='datetime64[ns]', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='tpep_dropoff_datetime', dtype='datetime64[ns]', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='passenger_count', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='trip_distance', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='RatecodeID', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='store_and_fwd_flag', dtype='<U0', var=True, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='PULocationID', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='DOLocationID', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='payment_type', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='fare_amount', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='extra', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='mta_tax', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='tip_amount', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='tolls_amount', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='improvement_surcharge', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='total_amount', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='congestion_surcharge', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
  ],
  cell_order='row-major',
  tile_order='row-major',
  sparse=False,
)

The created table is modeled as a 1D dense TileDB array: TileDB adds a __tiledb_rows dimension, and stores the CSV fields as attributes.
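Because the table is dense over the __tiledb_rows dimension, you can check which rows are populated via the array's non-empty domain (a small sketch using the standard TileDB-Py accessor):

# The non-empty domain reveals the populated row range, e.g., ((0, 9),)
print(table.nonempty_domain())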

Read data into a dataframe with the .df[] method:

# Read entire dataset into a pandas dataframe
df = table.df[:]  # Equivalent to: table.df[0:9]
df
VendorID tpep_pickup_datetime tpep_dropoff_datetime passenger_count trip_distance RatecodeID store_and_fwd_flag PULocationID DOLocationID payment_type fare_amount extra mta_tax tip_amount tolls_amount improvement_surcharge total_amount congestion_surcharge
0 1 2020-01-01 00:28:15 2020-01-01 00:33:03 1 1.20 1 N 238 239 1 6.00 3.0 0.5 1.47 0 0.3 11.27 2.5
1 1 2020-01-01 00:35:39 2020-01-01 00:43:04 1 1.20 1 N 239 238 1 7.00 3.0 0.5 1.50 0 0.3 12.30 2.5
2 1 2020-01-01 00:47:41 2020-01-01 00:53:52 1 0.60 1 N 238 238 1 6.00 3.0 0.5 1.00 0 0.3 10.80 2.5
3 1 2020-01-01 00:55:23 2020-01-01 01:00:14 1 0.80 1 N 238 151 1 5.50 0.5 0.5 1.36 0 0.3 8.16 0.0
4 2 2020-01-01 00:01:58 2020-01-01 00:04:16 1 0.00 1 N 193 193 2 3.50 0.5 0.5 0.00 0 0.3 4.80 0.0
5 2 2020-01-01 00:09:44 2020-01-01 00:10:37 1 0.03 1 N 7 193 2 2.50 0.5 0.5 0.00 0 0.3 3.80 0.0
6 2 2020-01-01 00:39:25 2020-01-01 00:39:29 1 0.00 1 N 193 193 1 2.50 0.5 0.5 0.01 0 0.3 3.81 0.0
7 2 2019-12-18 15:27:49 2019-12-18 15:28:59 1 0.00 5 N 193 193 1 0.01 0.0 0.0 0.00 0 0.3 2.81 2.5
8 2 2019-12-18 15:30:35 2019-12-18 15:31:35 4 0.00 1 N 193 193 1 2.50 0.5 0.5 0.00 0 0.3 6.30 2.5
9 1 2020-01-01 00:29:01 2020-01-01 00:40:28 2 0.70 1 N 246 48 1 8.00 3.0 0.5 2.35 0 0.3 14.15 2.5
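Since this is a dense array with an integer row dimension, .df[] also accepts row ranges, so you can read just a subset of the rows. This is a minimal sketch; the chosen range is arbitrary.

# Read only rows 2 through 5 (bounds are inclusive in TileDB slicing)
subset = table.df[2:5]
print(subset)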

Ingest CSV into a 2D sparse array

You will ingest the same CSV file, but this time into a 2D sparse array. Perform the ingestion as follows. Here are some differences from the 1D dense array case:

  • Set sparse to True.
  • Set the dimensions in index_dims.
  • Set allows_duplicates to True if you wish to allow rows that have the same values across the dimensions.
  • Set dim_filters and attr_filters if you wish to set your preferred compression filters, instead of using the defaults.
  • Set dtype if you wish to force a data type on certain CSV fields.
tiledb.from_csv(
    sparse_table_uri,
    example_csv_uri,
    ctx=ctx,
    sparse=True,
    index_dims=["tpep_pickup_datetime", "PULocationID"],
    allows_duplicates=True,
    dim_filters={
        "tpep_pickup_datetime": tiledb.FilterList([tiledb.GzipFilter(level=-1)])
    },
    attr_filters={"passenger_count": tiledb.FilterList([tiledb.GzipFilter(level=-1)])},
    dtype={"fare_amount": np.float32},
    parse_dates=["tpep_dropoff_datetime", "tpep_pickup_datetime"],
)

Prepare the table for reading using the Python API.

# Open the table in read mode
table = tiledb.open(sparse_table_uri, mode="r", ctx=ctx)

Inspect the schema of the underlying array.

# Inspect the array schema
print(table.schema)
ArraySchema(
  domain=Domain(*[
    Dim(name='tpep_pickup_datetime', domain=(numpy.datetime64('2019-12-18T15:27:49.000000000'), numpy.datetime64('2020-01-01T00:55:23.000000000')), tile=numpy.timedelta64(1000,'ns'), dtype='datetime64[ns]', filters=FilterList([GzipFilter(level=-1), ])),
    Dim(name='PULocationID', domain=(7, 246), tile=240, dtype='int64', filters=FilterList([ZstdFilter(level=-1), ])),
  ]),
  attrs=[
    Attr(name='VendorID', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='tpep_dropoff_datetime', dtype='datetime64[ns]', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='passenger_count', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([GzipFilter(level=-1), ])),
    Attr(name='trip_distance', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='RatecodeID', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='store_and_fwd_flag', dtype='<U0', var=True, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='DOLocationID', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='payment_type', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='fare_amount', dtype='float32', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='extra', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='mta_tax', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='tip_amount', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='tolls_amount', dtype='int64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='improvement_surcharge', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='total_amount', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
    Attr(name='congestion_surcharge', dtype='float64', var=False, nullable=False, enum_label=None, filters=FilterList([ZstdFilter(level=-1), ])),
  ],
  cell_order='row-major',
  tile_order='row-major',
  capacity=10000,
  sparse=True,
  allows_duplicates=True,
)

The created table is modeled as a 2D sparse TileDB array, as expected.
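You can also confirm the sparse-specific properties programmatically with the standard schema accessors (a small sketch):

schema = table.schema
print(schema.sparse)             # True: the table is a sparse array
print(schema.allows_duplicates)  # True: duplicate dimension values allowed
print(schema.domain.ndim)        # 2: indexed on two dimensions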

Read data into a dataframe with the .df[] method:

# Read entire dataset into a pandas dataframe
df = table.df[:]
df
VendorID tpep_dropoff_datetime passenger_count trip_distance RatecodeID store_and_fwd_flag DOLocationID payment_type fare_amount extra mta_tax tip_amount tolls_amount improvement_surcharge total_amount congestion_surcharge
tpep_pickup_datetime PULocationID
2019-12-18 15:27:49 193 2 2019-12-18 15:28:59 1 0.00 5 N 193 1 0.01 0.0 0.0 0.00 0 0.3 2.81 2.5
2019-12-18 15:30:35 193 2 2019-12-18 15:31:35 4 0.00 1 N 193 1 2.50 0.5 0.5 0.00 0 0.3 6.30 2.5
2020-01-01 00:01:58 193 2 2020-01-01 00:04:16 1 0.00 1 N 193 2 3.50 0.5 0.5 0.00 0 0.3 4.80 0.0
2020-01-01 00:09:44 7 2 2020-01-01 00:10:37 1 0.03 1 N 193 2 2.50 0.5 0.5 0.00 0 0.3 3.80 0.0
2020-01-01 00:28:15 238 1 2020-01-01 00:33:03 1 1.20 1 N 239 1 6.00 3.0 0.5 1.47 0 0.3 11.27 2.5
2020-01-01 00:29:01 246 1 2020-01-01 00:40:28 2 0.70 1 N 48 1 8.00 3.0 0.5 2.35 0 0.3 14.15 2.5
2020-01-01 00:35:39 239 1 2020-01-01 00:43:04 1 1.20 1 N 238 1 7.00 3.0 0.5 1.50 0 0.3 12.30 2.5
2020-01-01 00:39:25 193 2 2020-01-01 00:39:29 1 0.00 1 N 193 1 2.50 0.5 0.5 0.01 0 0.3 3.81 0.0
2020-01-01 00:47:41 238 1 2020-01-01 00:53:52 1 0.60 1 N 238 1 6.00 3.0 0.5 1.00 0 0.3 10.80 2.5
2020-01-01 00:55:23 238 1 2020-01-01 01:00:14 1 0.80 1 N 151 1 5.50 0.5 0.5 1.36 0 0.3 8.16 0.0
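Because the sparse array is indexed on tpep_pickup_datetime and PULocationID, you can slice directly on dimension values instead of reading everything. This is a minimal sketch; the ranges shown are arbitrary, and slice bounds are inclusive.

# Select trips picked up in 2020 at pickup location 193
df_slice = table.df[
    np.datetime64("2020-01-01") : np.datetime64("2020-12-31"), 193
]
print(df_slice)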

Clean up

Delete the created tables.

# Close the open table handle, then delete the created tables
table.close()
if os.path.exists(dense_table_uri):
    shutil.rmtree(dense_table_uri)
if os.path.exists(sparse_table_uri):
    shutil.rmtree(sparse_table_uri)