This is an automated email from the ASF dual-hosted git repository.

xushiyan pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 85f312d63a30 docs: blog edits and docs update (#14350)
85f312d63a30 is described below

commit 85f312d63a30c21356bc40e157e90d232cc7f339
Author: Shiyan Xu <[email protected]>
AuthorDate: Tue Nov 25 03:09:58 2025 -0600

    docs: blog edits and docs update (#14350)
---
 .../blog/2025-11-25-apache-hudi-release-1-1-announcement.md  |  4 ++--
 website/docs/catalog_polaris.md                              |  2 +-
 website/docs/hudi_stack.md                                   | 12 ++++++------
 website/versioned_docs/version-1.1.0/catalog_polaris.md      |  2 +-
 website/versioned_docs/version-1.1.0/hudi_stack.md           | 12 ++++++------
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/website/blog/2025-11-25-apache-hudi-release-1-1-announcement.md 
b/website/blog/2025-11-25-apache-hudi-release-1-1-announcement.md
index 1c11d6f16951..5c41f1fcd6d6 100644
--- a/website/blog/2025-11-25-apache-hudi-release-1-1-announcement.md
+++ b/website/blog/2025-11-25-apache-hudi-release-1-1-announcement.md
@@ -211,7 +211,7 @@ Flink is a popular choice for real-time data pipelines, and 
Hudi 1.1 brings subs
 
 ### Flink 2.0 Support
 
-Hudi 1.1 provides full support for Flink 2.0, the first major Flink release in 
nine years. This brings disaggregated state storage (ForSt) that decouples 
state from compute for unlimited scalability, asynchronous state execution for 
improved resource utilization, adaptive broadcast joins for efficient query 
processing, and materialized tables for simplified stream-batch unification. 
Use the new `hudi-flink2.0-bundle:1.1.0` artifact to get started.
+Hudi 1.1 brings support for Flink 2.0, the first major Flink release in nine 
years. Flink 2.0 introduced disaggregated state storage (ForSt) that decouples 
state from compute for unlimited scalability, asynchronous state execution for 
improved resource utilization, adaptive broadcast join for efficient query 
processing, and materialized tables for simplified stream-batch unification. 
Use the new `hudi-flink2.0-bundle:1.1.0` artifact to get started.
 
 ### Engine-Native Record Support
 
@@ -223,7 +223,7 @@ The above shows a benchmark that inserted 500 million 
records with a schema of 1
 
 ### Buffer Sort
 
-For append-only tables, Hudi 1.1 introduces in-memory buffer sorting that 
pre-sorts records before flushing to Parquet. This delivers 15-30% better 
compression (via improved dictionary/run-length encoding) and faster queries 
through better min/max filtering. Enable with `write.buffer.sort.enabled=true` 
and specify sort keys via `write.buffer.sort.keys` (e.g., 
"timestamp,event_type"), ensuring sufficient task manager memory via 
`write.buffer.size` (default 128MB).
+For append-only tables, Hudi 1.1 introduces in-memory buffer sorting that 
pre-sorts records before flushing to Parquet. This delivers 15-30% better 
compression and faster queries through better min/max filtering. You can enable 
this feature with `write.buffer.sort.enabled=true` and specify sort keys via 
`write.buffer.sort.keys` (e.g., "timestamp,event_type"). You may also adjust 
the buffer size for sorting via `write.buffer.size` (default 1000 records).
 
 ## New Integration: Apache Polaris (Incubating)
 
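The buffer sort options named in the blog edit above can be sketched as a Flink SQL table definition. This is a hypothetical example, not part of the commit: the table name, columns, path, and `table.type` are placeholders, and only the `write.buffer.sort.*` and `write.buffer.size` keys come from the edited text.

```sql
-- Hypothetical Flink SQL DDL illustrating the buffer sort options above.
-- Table name, schema, path, and table.type are placeholders.
CREATE TABLE events_hudi (
  event_id STRING,
  event_type STRING,
  `timestamp` TIMESTAMP(3),
  payload STRING
) WITH (
  'connector' = 'hudi',
  'path' = 's3://my-bucket/events_hudi',
  'table.type' = 'COPY_ON_WRITE',
  -- pre-sort records in memory before flushing to Parquet
  'write.buffer.sort.enabled' = 'true',
  -- sort keys, as in the example from the blog text
  'write.buffer.sort.keys' = 'timestamp,event_type',
  -- buffer size for sorting (default 1000 records per the edited text)
  'write.buffer.size' = '5000'
);
```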
diff --git a/website/docs/catalog_polaris.md b/website/docs/catalog_polaris.md
index cab2df6988c0..10ff3986d296 100644
--- a/website/docs/catalog_polaris.md
+++ b/website/docs/catalog_polaris.md
@@ -7,7 +7,7 @@ keywords: [hudi, polaris, catalog, integration]
 ---
 
 :::warning Polaris Integration Status
-Hudi 1.1.0 added support for Apache Polaris catalog integration (see [PR 
#13558](https://github.com/apache/hudi/pull/13558)). However, **the 
corresponding changes on the Polaris side are still pending** and need to be 
merged and published in a Polaris release (refer to [this 
PR](https://github.com/apache/polaris/pull/1862)) before this integration will 
be fully functional.
+Hudi 1.1.0 added support for Apache Polaris catalog integration (see [PR 
#13558](https://github.com/apache/hudi/pull/13558)). However, a Polaris release 
that includes [this PR](https://github.com/apache/polaris/pull/1862) is required 
before this integration becomes available.
 :::
 
 ## Overview
diff --git a/website/docs/hudi_stack.md b/website/docs/hudi_stack.md
index e36deb5c173b..189c9840b727 100644
--- a/website/docs/hudi_stack.md
+++ b/website/docs/hudi_stack.md
@@ -44,10 +44,10 @@ Future updates aim to integrate diverse formats like 
unstructured data (e.g., JS
 Hudi's layout scheme encodes all changes to a Log File as a sequence of blocks 
(data, delete, rollback). By making data available in open file formats (such 
as Parquet/Avro), Hudi enables users to
 bring any compute engine for specific workloads.
 
-## Table Format
+## Native Table Format
 
 ![Table Format](/assets/images/blog/hudistack/table_format_1.png)
-<p align = "center">Hudi's Table format</p>
+<p align = "center">Hudi's Native Table format</p>
 
 Drawing an analogy to file formats, a table format simply concerns itself with 
how files are distributed within the table, partitioning schemes, schema, and metadata 
tracking changes. Hudi organizes files within a table or partition into
 File Groups. Updates are captured in log files tied to these File Groups, 
ensuring efficient merges. There are three major components related to Hudi’s 
table format.
@@ -63,13 +63,13 @@ the file-group is uniquely identified by the write that 
created its base file or
 It leverages a 
[SSTable](https://cassandra.apache.org/doc/stable/cassandra/architecture/storage-engine.html#sstables)
 based file format for quick, indexed key lookups,
 storing vital information like file paths, column statistics and schema. This 
approach streamlines operations by reducing the necessity for expensive cloud 
file listings.
 
-### Pluggable Table format
-
-Starting with Hudi 1.1, Hudi introduces a pluggable table format framework 
that extends Hudi's powerful storage engine capabilities beyond its native 
format to other table formats like Apache Iceberg and Delta Lake. This 
framework decouples Hudi's core capabilities—transaction management, indexing, 
concurrency control, and table services—from the specific storage format used 
for data files. Hudi provides native format support (configured via 
`hoodie.table.format=native` by default), whil [...]
-
 Hudi's approach of recording updates into Log Files is more efficient and 
involves lower merge overhead than systems like Hive ACID, where merging all 
delta records against
 all Base Files is required. Read more about the various table types in Hudi 
[table types documentation](table_types).
 
+## Pluggable Table Format
+
+Starting with release 1.1, Hudi introduces a pluggable table format framework 
that extends Hudi's powerful storage engine capabilities beyond its native 
format to other table formats like Apache Iceberg and Delta Lake. This 
framework decouples Hudi's core capabilities—transaction management, indexing, 
concurrency control, and table services—from the specific storage format used 
for data files. Hudi provides native format support (configured via 
`hoodie.table.format=native` by default), whil [...]
+
 ## Storage Engine
 
 The storage layer of Hudi comprises the core components that are responsible 
for the fundamental operations and services that enable Hudi to store, 
retrieve, and manage data
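The pluggable table format section above names `hoodie.table.format=native` as the default. A minimal sketch of selecting it at table creation time, assuming the property can be passed through as a table option (the table name, schema, and path are placeholders, not from the commit):

```sql
-- Hypothetical: choosing the table format at creation time.
-- 'native' is the documented default; other values depend on which
-- format plugins are available in the deployment.
CREATE TABLE trips_hudi (
  trip_id STRING,
  fare DOUBLE
) WITH (
  'connector' = 'hudi',
  'path' = 'file:///tmp/trips_hudi',
  'hoodie.table.format' = 'native'
);
```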
diff --git a/website/versioned_docs/version-1.1.0/catalog_polaris.md 
b/website/versioned_docs/version-1.1.0/catalog_polaris.md
index cab2df6988c0..10ff3986d296 100644
--- a/website/versioned_docs/version-1.1.0/catalog_polaris.md
+++ b/website/versioned_docs/version-1.1.0/catalog_polaris.md
@@ -7,7 +7,7 @@ keywords: [hudi, polaris, catalog, integration]
 ---
 
 :::warning Polaris Integration Status
-Hudi 1.1.0 added support for Apache Polaris catalog integration (see [PR 
#13558](https://github.com/apache/hudi/pull/13558)). However, **the 
corresponding changes on the Polaris side are still pending** and need to be 
merged and published in a Polaris release (refer to [this 
PR](https://github.com/apache/polaris/pull/1862)) before this integration will 
be fully functional.
+Hudi 1.1.0 added support for Apache Polaris catalog integration (see [PR 
#13558](https://github.com/apache/hudi/pull/13558)). However, a Polaris release 
that includes [this PR](https://github.com/apache/polaris/pull/1862) is required 
before this integration becomes available.
 :::
 
 ## Overview
diff --git a/website/versioned_docs/version-1.1.0/hudi_stack.md 
b/website/versioned_docs/version-1.1.0/hudi_stack.md
index e36deb5c173b..189c9840b727 100644
--- a/website/versioned_docs/version-1.1.0/hudi_stack.md
+++ b/website/versioned_docs/version-1.1.0/hudi_stack.md
@@ -44,10 +44,10 @@ Future updates aim to integrate diverse formats like 
unstructured data (e.g., JS
 Hudi's layout scheme encodes all changes to a Log File as a sequence of blocks 
(data, delete, rollback). By making data available in open file formats (such 
as Parquet/Avro), Hudi enables users to
 bring any compute engine for specific workloads.
 
-## Table Format
+## Native Table Format
 
 ![Table Format](/assets/images/blog/hudistack/table_format_1.png)
-<p align = "center">Hudi's Table format</p>
+<p align = "center">Hudi's Native Table format</p>
 
 Drawing an analogy to file formats, a table format simply concerns itself with 
how files are distributed within the table, partitioning schemes, schema, and metadata 
tracking changes. Hudi organizes files within a table or partition into
 File Groups. Updates are captured in log files tied to these File Groups, 
ensuring efficient merges. There are three major components related to Hudi’s 
table format.
@@ -63,13 +63,13 @@ the file-group is uniquely identified by the write that 
created its base file or
 It leverages a 
[SSTable](https://cassandra.apache.org/doc/stable/cassandra/architecture/storage-engine.html#sstables)
 based file format for quick, indexed key lookups,
 storing vital information like file paths, column statistics and schema. This 
approach streamlines operations by reducing the necessity for expensive cloud 
file listings.
 
-### Pluggable Table format
-
-Starting with Hudi 1.1, Hudi introduces a pluggable table format framework 
that extends Hudi's powerful storage engine capabilities beyond its native 
format to other table formats like Apache Iceberg and Delta Lake. This 
framework decouples Hudi's core capabilities—transaction management, indexing, 
concurrency control, and table services—from the specific storage format used 
for data files. Hudi provides native format support (configured via 
`hoodie.table.format=native` by default), whil [...]
-
 Hudi's approach of recording updates into Log Files is more efficient and 
involves lower merge overhead than systems like Hive ACID, where merging all 
delta records against
 all Base Files is required. Read more about the various table types in Hudi 
[table types documentation](table_types).
 
+## Pluggable Table Format
+
+Starting with release 1.1, Hudi introduces a pluggable table format framework 
that extends Hudi's powerful storage engine capabilities beyond its native 
format to other table formats like Apache Iceberg and Delta Lake. This 
framework decouples Hudi's core capabilities—transaction management, indexing, 
concurrency control, and table services—from the specific storage format used 
for data files. Hudi provides native format support (configured via 
`hoodie.table.format=native` by default), whil [...]
+
 ## Storage Engine
 
 The storage layer of Hudi comprises the core components that are responsible 
for the fundamental operations and services that enable Hudi to store, 
retrieve, and manage data
