sm4rtm4art commented on code in PR #18051: URL: https://github.com/apache/datafusion/pull/18051#discussion_r2435980243
########## docs/source/user-guide/arrow-introduction.md: ##########

@@ -0,0 +1,301 @@

<!---
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing,
  software distributed under the License is distributed on an
  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
  KIND, either express or implied.  See the License for the
  specific language governing permissions and limitations
  under the License.
-->

# A Gentle Introduction to Arrow & RecordBatches (for DataFusion users)

```{contents}
:local:
:depth: 2
```

This guide helps DataFusion users understand Arrow and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues.

**Why Arrow is central to DataFusion**: Arrow provides the unified type system that makes DataFusion possible. When you query a CSV file, join it with a Parquet file, and aggregate results from JSON—it all works seamlessly because every data source is converted to Arrow's common representation. This unified type system, combined with Arrow's columnar format, enables DataFusion to execute efficient vectorized operations across any combination of data sources while benefiting from zero-copy data sharing between query operators.

## Why Columnar? The Arrow Advantage

Apache Arrow is an open **specification** that defines how analytical data should be organized in memory. Think of it as a blueprint that different systems agree to follow, not a database or programming language.

### Row-oriented vs Columnar Layout

Traditional databases often store data row by row:

```
Row 1: [id: 1, name: "Alice", age: 30]
Row 2: [id: 2, name: "Bob", age: 25]
Row 3: [id: 3, name: "Carol", age: 35]
```

Arrow organizes the same data by column:

```
Column "id":   [1, 2, 3]
Column "name": ["Alice", "Bob", "Carol"]
Column "age":  [30, 25, 35]
```

Visual comparison:

```
Traditional Row Storage:     Arrow Columnar Storage:
┌────┬──────┬─────┐          ┌─────────┬─────────┬──────────┐
│ id │ name │ age │          │   id    │  name   │   age    │
├────┼──────┼─────┤          ├─────────┼─────────┼──────────┤
│ 1  │ A    │ 30  │          │ [1,2,3] │ [A,B,C] │[30,25,35]│
│ 2  │ B    │ 25  │          └─────────┴─────────┴──────────┘
│ 3  │ C    │ 35  │               ↑         ↑          ↑
└────┴──────┴─────┘          Int32Array StringArray Int32Array
(read entire rows)           (process entire columns at once)
```

### Why This Matters

- **Vectorized Execution**: Process entire columns at once using SIMD instructions
- **Better Compression**: Similar values stored together compress more efficiently
- **Cache Efficiency**: Scanning specific columns doesn't load unnecessary data
- **Zero-Copy Data Sharing**: Systems can share Arrow data without conversion overhead

DataFusion, DuckDB, Polars, and Pandas all speak Arrow natively—they can exchange data without expensive serialization/deserialization steps.
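"Process entire columns at once" is not just a slogan: Arrow implementations ship compute kernels that operate on whole arrays. A minimal sketch, assuming the `arrow` crate as re-exported at `datafusion::arrow` (`sum` and `filter` are kernels in `arrow::compute`):

```rust
use datafusion::arrow::array::{Array, BooleanArray, Int32Array};
use datafusion::arrow::compute::{filter, sum};

fn main() {
    // One column of data, stored contiguously in memory.
    let ages = Int32Array::from(vec![30, 25, 35]);

    // `sum` is a vectorized aggregate kernel: a single call processes
    // the whole column rather than looping row by row in user code.
    assert_eq!(sum(&ages), Some(90));

    // `filter` selects rows by a boolean mask and returns a new,
    // immutable array; the input array is never modified.
    let mask = BooleanArray::from(vec![true, false, true]);
    let kept = filter(&ages, &mask).expect("filter kernel failed");
    assert_eq!(kept.len(), 2);
}
```

Because the column is a single contiguous buffer, kernels like these can run tight, cache-friendly loops that the compiler can auto-vectorize.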
## What is a RecordBatch? (And Why Batch?)

A **[`RecordBatch`]** represents a horizontal slice of a table—a collection of equal-length columnar arrays sharing the same schema.

### Why Not Process Entire Tables?

- **Memory Constraints**: A billion-row table might not fit in RAM
- **Pipeline Processing**: Start producing results before reading all data
- **Parallel Execution**: Different threads can process different batches

### Why Not Process Single Rows?

- **Lost Vectorization**: Can't use SIMD instructions on single values
- **Poor Cache Utilization**: Jumping between rows defeats CPU cache optimization
- **High Overhead**: Managing individual rows has significant bookkeeping costs

### RecordBatches: The Sweet Spot

RecordBatches typically contain thousands of rows—enough to benefit from vectorization but small enough to fit in memory. DataFusion streams these batches through operators, achieving both efficiency and scalability.

**Key Properties**:

- Arrays are immutable (create new batches to modify data)
- NULL values are tracked via efficient validity bitmaps
- Variable-length data (strings, lists) uses offset arrays for efficient access

## From files to Arrow

When you call [`read_csv`], [`read_parquet`], [`read_json`], or [`read_avro`], DataFusion decodes those formats into Arrow arrays and streams them to operators as RecordBatches.

The example below shows how to read data from different file formats. Each `read_*` method returns a [`DataFrame`] that represents a query plan. When you call [`.collect()`], DataFusion executes the plan and returns results as a `Vec<RecordBatch>`—the actual columnar data in Arrow format.

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    // Pick ONE of these per run (each returns a new DataFrame):
    let df = ctx.read_csv("data.csv", CsvReadOptions::new()).await?;
    // let df = ctx.read_parquet("data.parquet", ParquetReadOptions::default()).await?;
    // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature
    // let df = ctx.read_avro("data.avro", AvroReadOptions::default()).await?; // requires "avro" feature

    let batches = df
        .select(vec![col("id")])?
        .filter(col("id").gt(lit(10)))?
        .collect()
        .await?; // Vec<RecordBatch>

    println!("collected {} batches", batches.len());
    Ok(())
}
```

## Streaming Through the Engine

DataFusion processes queries as pull-based pipelines where operators request batches from their inputs. This streaming approach enables early result production, bounds memory usage (spilling to disk only when necessary), and naturally supports parallel execution across multiple CPU cores.

```
A user's query: SELECT name FROM 'data.parquet' WHERE id > 10

The DataFusion Pipeline:
┌─────────┐   ┌──────────┐   ┌──────────┐   ┌────────────┐   ┌─────────┐
│ Parquet │──▶│   Scan   │──▶│  Filter  │──▶│ Projection │──▶│ Results │
│  File   │   │ Operator │   │ Operator │   │  Operator  │   │         │
└─────────┘   └──────────┘   └──────────┘   └────────────┘   └─────────┘
              (reads data)    (id > 10)     (keeps "name" col)

   RecordBatch ──▶ RecordBatch ──▶ RecordBatch ──▶ RecordBatch
```

In this pipeline, [`RecordBatch`]es are the "packages" of columnar data that flow between the different stages of query execution. Each operator processes batches incrementally, enabling the system to produce results before reading the entire input.
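This pull-based flow is also visible in DataFusion's public `DataFrame` API: `execute_stream` yields batches one at a time instead of collecting them all. A minimal sketch, assuming the `futures` crate is available for its `StreamExt` trait and that a local `data.csv` exists:

```rust
use datafusion::prelude::*;
use futures::StreamExt;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();
    let df = ctx.read_csv("data.csv", CsvReadOptions::new()).await?;

    // Instead of buffering everything with collect(), pull batches
    // one at a time as the pipeline produces them.
    let mut stream = df.execute_stream().await?;
    while let Some(batch) = stream.next().await {
        let batch = batch?; // each item is a Result<RecordBatch>
        println!("got a batch with {} rows", batch.num_rows());
    }
    Ok(())
}
```

Memory stays bounded by the batch size rather than the result size, which is exactly the property the pipeline diagram above illustrates.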
## Minimal: build a RecordBatch in Rust

Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like [`Int32Array`] for numbers), defining a [`Schema`] that describes your columns, and assembling them into a [`RecordBatch`].
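The diff in this review breaks off before the example itself, so the PR's actual code is not reproduced here. A minimal sketch of those building blocks, assuming the `arrow` types re-exported at `datafusion::arrow`, might look like:

```rust
use std::sync::Arc;

use datafusion::arrow::array::{ArrayRef, Int32Array, StringArray};
use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::arrow::record_batch::RecordBatch;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A schema describes each column: name, type, and nullability.
    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int32, false),
        Field::new("name", DataType::Utf8, true),
    ]));

    // One typed array per column. `ArrayRef` is `Arc<dyn Array>`, so
    // the arrays are shared by reference from here on, not copied.
    let ids: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
    let names: ArrayRef = Arc::new(StringArray::from(vec![
        Some("Alice"),
        None, // NULL, recorded in the validity bitmap
        Some("Carol"),
    ]));

    // `try_new` validates that every column has the same length
    // and matches the declared schema.
    let batch = RecordBatch::try_new(schema, vec![ids, names])?;
    assert_eq!(batch.num_rows(), 3);
    assert_eq!(batch.num_columns(), 2);
    Ok(())
}
```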
You'll notice [`Arc`] ([Atomically Reference Counted](https://doc.rust-lang.org/std/sync/struct.Arc.html)) is used frequently—this is how Arrow enables efficient, zero-copy data sharing. Instead of copying data, different parts of the query engine can safely share read-only references to the same underlying memory. [`ArrayRef`] is simply a type alias for `Arc<dyn Array>`, representing a reference to any Arrow array type.

Review Comment:

You're absolutely right - Arc is a Rust implementation detail, not core to understanding Arrow conceptually. I included it because users will see Arc/ArrayRef in code examples, but I'm giving it too much emphasis. I'll either:

1. Move the Arc explanation to a small note: "Note: You'll see Arc in Rust code - it's how Rust safely shares data between threads"
2. Remove it entirely and let users learn about Arc when they actually need to write code

Which approach would you prefer?