kbuci commented on code in PR #12856:
URL: https://github.com/apache/hudi/pull/12856#discussion_r2023881827


##########
rfc/rfc-79/rfc-90.md:
##########
@@ -0,0 +1,310 @@
+<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE
+file distributed with this work for additional information regarding copyright 
ownership. The ASF licenses this file to
+You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with the 
License. You may obtain a copy of the License
+at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software 
distributed under the License is distributed on an "
+AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific
+language governing permissions and limitations under the License. -->
+
+# Add support for cancellable clustering table service plans
+
+## Proposers
+
+Krishen Bhan (kbuci)
+
+## Approvers
+
+Sivabalan Narayanan (nsivabalan)
+
+## Status
+
+In Progress
+
+JIRA: HUDI-7946
+
+## Abstract
+
+Clustering is a table service used to optimize the table/file layout in HUDI in order to speed up read queries.
+Currently, ingestion writers will abort if they attempt to write to the same data targeted by a pending clustering
+write. As a result, clustering table service plans can indirectly delay ingestion writes from updating a dataset with
+recent data. Furthermore, a clustering plan that isn't executed to completion for a long time (due to repeated
+failures, application misconfiguration, or insufficient resources) will degrade the read/write performance of a
+dataset by delaying clean, archival, and metadata table compaction. This is because currently HUDI clustering plans,
+upon being scheduled, must be executed to completion. This RFC proposes support for "cancellable" clustering plans.
+Such cancellable clustering plans will give HUDI an avenue to fully cancel a clustering plan and allow other table
+service and ingestion writers to proceed, avoiding starvation based on user needs.
+
+## Background
+
+### Current state of execution of table service operations in Hudi
+
+As of now, the table service operations `COMPACT` and `CLUSTER` are implicitly "immutable" plans by default, meaning
+that once a plan is scheduled, it will stay as a pending instant until a caller invokes the table service execute API
+on the table service instant and successfully completes it (referred to as "executing" a table service). Specifically,
+if an execution attempt fails after transitioning the instant to inflight, the next execution attempt will implicitly
+create and execute a rollback plan (which deletes all new instant/data files) but will keep the table service plan,
+and then re-attempt the table service. This process repeats until the instant is completed. The below visualization
+captures these transitions at a high level.
+
+![table service lifecycle 
(1)](https://github.com/user-attachments/assets/4a656bde-4046-4d37-9398-db96144207aa)
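+
+For illustration only, the following toy sketch (not Hudi code; all names are made up for this discussion) models the
+retry-until-complete behavior described above:
+
+```java
+// Toy model of the current behavior: once scheduled, a table service plan is retried
+// until it completes; a failed attempt is rolled back, but the plan itself is kept.
+import java.util.function.BooleanSupplier;
+
+final class ImmutablePlanRetryLoop {
+  static void runToCompletion(BooleanSupplier executeAttempt, Runnable rollbackPartialWrites) {
+    boolean completed = false;
+    boolean previousAttemptLeftInflight = false;
+    while (!completed) {
+      if (previousAttemptLeftInflight) {
+        // Delete partial data files from the failed attempt; the plan stays on the timeline.
+        rollbackPartialWrites.run();
+      }
+      completed = executeAttempt.getAsBoolean(); // false => the attempt failed and left the instant inflight
+      previousAttemptLeftInflight = !completed;
+    }
+  }
+}
+```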
+
+## Goals
+
+### (A) An ingestion job should be able to cancel and ignore any inflight 
cancellable clustering instants targeting the same data as the ingestion writer.
+
+The current requirement of HUDI needing to execute a clustering plan to completion forces ingestion writers to abort a
+commit if a conflicting table service plan is present. Because an ingestion writer typically determines the exact file
+groups it will be updating/replacing only after building a workload profile and performing record tagging, the writer
+may have already spent a lot of time and resources before realizing that it needs to abort. In the face of frequent
+table service plans or an old inflight plan, this will cause delays in adding recent upstream records to the dataset,
+as well as unnecessarily take away resources from other applications in the data lake (such as Spark executors in the
+case of the Spark engine). Making the clustering plan cancellable should avoid this situation by permitting an
+ingestion writer to request cancellation of all conflicting cancellable clustering plans and to ignore inflight plans
+that have already been requested for cancellation. The latter will ensure that ingestion writers can ignore any
+incomplete cancellable clustering instants that have been requested for cancellation but have not yet been aborted.
+
+### (B) A cancellable table service plan should be eligible for cancellation 
at any point before committing
+
+In conjunction with (A), any caller (the ingestion writer and potentially other users) should be able to request
+cancellation of an inflight cancellable clustering plan. We should not need any synchronous mechanism wherein the
+clustering plan of interest must be aborted and cleaned up completely before the ingestion writer can proceed.
+Instead, we should have a lightweight mechanism with which the ingestion writer makes a cancellation request and moves
+on to carry out its operation, assuming that the respective clustering plan will be aborted. This requirement is
+needed due to the presence of concurrent and async writers for clustering execution, as another worker should not need
+to wait (for the respective concurrent clustering worker to proceed with execution or fail) before confirming that its
+cancellation request will be honored. Once the request for cancellation succeeds, all interested entities (the
+ingestion writer, readers, and asynchronous clustering execution jobs) should assume the clustering plan is cancelled.
+
+## Design
+
+### Enabling a clustering plan to be cancellable
+
+To satisfy goal (A), a new config flag named "cancellable" can be added to a clustering plan. A writer that intends to
+schedule a cancellable table service plan can enable the flag in the serialized plan metadata. Any writer executing
+the plan can infer that the plan is cancellable, and when trying to commit the instant should abort if it detects that
+the instant has been requested for cancellation. As a future optimization, the cancellable clustering worker can
+continually poll during its execution to see if it has been requested for cancellation. On the other side, with the
+ingestion writer flow, the commit finalization logic for ingestion writers can be updated to ignore any inflight
+clustering plans if they are cancellable. For the purpose of this design proposal, consider the existing ingestion
+write flow as having three steps:
+
+1. Schedule itself on the timeline with a new instant time in a .requested file
+2. Process/record tag incoming records, build a workload profile, and write the updated/replaced file groups to an
+   "inflight" instant file on the timeline. Check for conflicts and abort if needed.
+3. Perform write conflict checks and commit the instant on the timeline
+
+The aforementioned changes to the ingestion and clustering flows will ensure that in the event of a conflicting
+ingestion and cancellable table service writer, the ingestion job will take precedence (and cause the cancellable
+table service instant to eventually be cancelled) as long as the cancellable clustering plan hasn't been completed
+before (2). If the cancellable table service has already been completed before (2), the ingestion job will see that a
+completed instant (a cancellable table service action) conflicts with its ongoing inflight write, and therefore it
+would not be legal to proceed. In such cases, the ingestion writer will have to abort itself instead of proceeding to
+completion. A rough sketch of this ingestion-side conflict handling follows below.
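+
+The following is a hypothetical sketch only (the types, method names, and exception choices are made up for this
+discussion and are not existing Hudi APIs; a Java record is used for brevity): a completed conflicting clustering
+instant still aborts the ingestion write, while an inflight *cancellable* plan is requested for cancellation (via the
+.hoodie/.cancel mechanism described below) and then ignored.
+
+```java
+import java.util.List;
+import java.util.function.Consumer;
+
+// Hypothetical ingestion-side conflict resolution for cancellable clustering plans.
+final class IngestionConflictResolution {
+  // Minimal view of a conflicting clustering plan (illustrative only).
+  record ConflictingPlan(String instantTime, boolean completed, boolean cancellable) {}
+
+  static void resolve(List<ConflictingPlan> plans, Consumer<String> requestCancel) {
+    for (ConflictingPlan plan : plans) {
+      if (plan.completed()) {
+        // A conflicting clustering instant already committed: the ingestion write must abort.
+        throw new IllegalStateException(
+            "Completed clustering instant " + plan.instantTime() + " conflicts with this write");
+      }
+      if (plan.cancellable()) {
+        // Lightweight, non-blocking: add the instant to .hoodie/.cancel and ignore the plan.
+        requestCancel.accept(plan.instantTime());
+      } else {
+        // Existing behavior for non-cancellable plans: abort the ingestion write.
+        throw new IllegalStateException(
+            "Non-cancellable clustering instant " + plan.instantTime() + " conflicts with this write");
+      }
+    }
+  }
+}
+```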
+
+### Adding a cancel action and aborted state for cancellable plans
+
+This proposed design will also involve adding a new instant state and internal 
hoodie metadata directory, by making the
+following changes:
+
+#### Cancel action
+
+* We are proposing to add a new .hoodie/.cancel folder, where each file corresponds to an instant time that a writer
+  requested for cancellation. As will be detailed below, a writer can cancel an inflight plan by adding the instant to
+  this directory, and execution of the table service will not allow an instant to be committed if it appears in this
+  /.cancel directory. The new /.cancel folder will enable goals (A) & (B) by allowing writers to permanently prevent
+  an ongoing cancellable table service write from committing by requesting cancellation, without needing to block/wait
+  for the table service writer. Once an instant is requested for cancellation (added to /.cancel) it cannot be revoked
+  (or "un-cancelled") - it must eventually be transitioned to the aborted state, as detailed below. To implement (A),
+  ingestion will be updated such that during write-conflict detection, it will create an entry in /.cancel for any
+  cancellable plans with a detected write conflict and will ignore any candidate inflight plans that have an entry in
+  /.cancel (a storage-level sketch follows this list).
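+
+At the storage level, a cancellation request could be as simple as creating an empty marker file named after the
+instant time. The sketch below is illustrative only (a real implementation would go through Hudi's storage/filesystem
+abstractions rather than java.nio, and the exact layout of .hoodie/.cancel is not finalized):
+
+```java
+import java.io.IOException;
+import java.nio.file.FileAlreadyExistsException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+// Illustrative helpers: a cancellation request is an empty marker file under .hoodie/.cancel.
+// Creating it is cheap and idempotent, so the requesting writer never waits on the clustering worker.
+final class CancelMarkers {
+  static void requestCancel(String basePath, String instantTime) throws IOException {
+    Path cancelDir = Paths.get(basePath, ".hoodie", ".cancel");
+    Files.createDirectories(cancelDir);
+    try {
+      Files.createFile(cancelDir.resolve(instantTime));  // once created, the request cannot be revoked
+    } catch (FileAlreadyExistsException ignored) {
+      // another writer already requested cancellation; nothing more to do
+    }
+  }
+
+  static boolean isCancellationRequested(String basePath, String instantTime) {
+    return Files.exists(Paths.get(basePath, ".hoodie", ".cancel", instantTime));
+  }
+}
+```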
+
+#### Aborted state
+
+* We are proposing to add an ".aborted" state type for cancellable table service plans. This state is terminal: with
+  this new addition, an instant can only be transitioned to .*commit or .aborted (not both), or be rolled back (for
+  ingestion writes). The new ".aborted" state will allow writers to infer whether a cancelled table service plan still
+  needs to have its partial data writes cleaned up from the dataset, or can be deleted from the active timeline during
+  archival (as the active timeline should not grow unbounded). Once an instant appears in the /.cancel folder, it can
+  and must eventually be transitioned to the .aborted state. To summarize, this new state will ensure that cancelled
+  instants are eventually "cleaned up" from the dataset and internal timeline metadata.
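+
+As a toy model (not Hudi code) of the intended rule, a cancellable clustering instant ends in exactly one terminal
+state, and only terminal instants are eligible to leave the active timeline via archival:
+
+```java
+// Toy model of the proposed lifecycle states for a cancellable clustering instant.
+enum CancellableClusteringState { REQUESTED, INFLIGHT, COMPLETED, ABORTED }
+
+final class CancellableClusteringStates {
+  // COMPLETED and ABORTED are mutually exclusive terminal states.
+  static boolean isTerminal(CancellableClusteringState state) {
+    return state == CancellableClusteringState.COMPLETED || state == CancellableClusteringState.ABORTED;
+  }
+
+  // Archival should only remove instants that have reached a terminal state.
+  static boolean eligibleForArchival(CancellableClusteringState state) {
+    return isTerminal(state);
+  }
+}
+```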
+
+### Handling cancellation of plans
+
+In order to ensure that other writers can indeed permanently cancel a cancellable clustering plan (such that it can no
+longer be executed), additional changes to the clustering table service flow will be needed as well, as proposed
+below. In addition to clustering being able to clean up/abort an instant, a user may want to set up a separate utility
+application to directly cancel/abort cancellable clustering instants, in order to manually ensure that clean/archival
+for a dataset progresses immediately, or to confirm that a cancellable table service plan will not be completed or
+attempted again. The two new cancel APIs in the below proposal provide a method to achieve this.
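+
+The exact signatures for these APIs are not finalized; the interface below is a hypothetical sketch of their intended
+shape and contract (names follow request_cancel / execute_abort from this RFC, rendered in Java style):
+
+```java
+// Hypothetical shape of the two new cancel APIs for cancellable clustering plans.
+public interface CancellableTableServiceClient {
+
+  /**
+   * Request cancellation of a cancellable clustering instant by adding it to .hoodie/.cancel.
+   * Non-blocking and irrevocable; does not wait for the clustering worker and does not clean
+   * up any partial data files by itself.
+   */
+  void requestCancel(String instantTime);
+
+  /**
+   * Abort an instant that has been requested for cancellation: delete its partial data files
+   * and transition it to the .aborted state. Intended to run within a transaction and only
+   * once the instant's heartbeat is no longer active.
+   */
+  void executeAbort(String instantTime);
+}
+```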
+
+#### Enabling clustering execution cancellation and automatic cleanup
+
+The clustering execution flow will be updated to check the /.cancel folder during a pre-commit check before completing
+the instant. If the instant is a target of /.cancel, then all of its data files will be deleted and the instant will
+be transitioned to .aborted. These checks will be performed within a transaction, to guard against callers cancelling
+an already-committed instant. In addition, in order to avoid the scenario of a writer executing an instant while its
+data files are being deleted by a concurrent caller cancelling & aborting the instant, the clustering execution flow
+will perform heartbeating. If an instant has an active heartbeat, it can be requested for cancellation (by adding an
+entry to /.cancel) but it cannot yet be cleaned up and transitioned to the .aborted state - this is sufficient to
+safely implement goal (B) with respect to concurrent workers.
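+
+For illustration, the pre-commit decision could look roughly like the sketch below. All hooks are injected so the
+sketch stays independent of Hudi's actual lock, heartbeat, and timeline APIs (the heartbeat check described above
+would additionally gate when a concurrent caller is allowed to abort the instant):
+
+```java
+import java.util.concurrent.locks.Lock;
+import java.util.function.BooleanSupplier;
+
+// Illustrative pre-commit check for a cancellable clustering writer: commit only if no
+// cancellation request exists; otherwise clean up partial files and abort the instant.
+final class CancellableClusteringCommitter {
+  static void finalizeInstant(Lock tableLock,
+                              BooleanSupplier cancelRequested,      // is the instant listed under .hoodie/.cancel?
+                              Runnable deletePartialDataFiles,
+                              Runnable transitionToAborted,
+                              Runnable commitClustering) {
+    tableLock.lock();  // "transaction": a concurrent cancel/abort cannot race with the commit decision below
+    try {
+      if (cancelRequested.getAsBoolean()) {
+        // Cancellation was requested at some point before commit: this instant must never complete.
+        deletePartialDataFiles.run();
+        transitionToAborted.run();
+      } else {
+        commitClustering.run();
+      }
+    } finally {
+      tableLock.unlock();
+    }
+  }
+}
+```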
+
+The below visualization shows the flow for cancellable table service plans (steps that are already in the existing
+table service flow are greyed out).
+
+![cancel table service lifecycle with lock 
(6)](https://github.com/user-attachments/assets/087aa35e-eb87-477d-88f9-9d0ab6649b04)
+
+
+Having this new .hoodie/.cancel folder (in addition to the .aborted state) is needed not only to allow any caller to
+forcibly block an instant from being committed, but also to prevent the need for table service workers to also perform
+write conflict detection (which ingestion already performs) or unnecessarily re-attempt execution of the instant if it
+has already been requested for cancellation but not successfully aborted yet. The below visualized scenario shows how
+a clustering attempt will "short circuit" in this manner by checking /.cancel to see if clustering execution should
+even proceed. This scenario also includes an example of concurrent writers to show how the transaction and
+heartbeating in the above proposed flow allow correct behavior even in the face of concurrent writers attempting to
+execute and/or cancel the instant.
+
+![cancel flow table service (1)](https://github.com/user-attachments/assets/f130f326-952f-49eb-bdbb-b0b34206f677)
+
+Aside from modifications to the clustering execution flow, a new pair of 
cancel APIs request_cancel and execute_abort

Review Comment:
   Oh, we only expect ingestion to call request_cancel. The only "built-in" usage of execute_abort would just be CLEAN, as mentioned in the `Optional feature: Ensure incomplete cancellable clustering plan eventually have their partial writes cleaned up by CLEAN` section. But let me update the RFC to clarify that ingestion will not call execute_abort.


