yuzhaojing commented on code in PR #4309: URL: https://github.com/apache/hudi/pull/4309#discussion_r867337630
########## rfc/rfc-43/rfc-43.md:
##########
@@ -0,0 +1,257 @@

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

# RFC-43: Implement Table Management Service for Hudi

## Proposers

- @yuzhaojing

## Approvers

- @vinothchandar
- @Raymond

## Status

JIRA: [https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)

## Abstract

Hudi tables need table management operations such as compaction and clustering. Currently, there are three ways to
schedule these jobs:

- Inline: the table management job and the writing job run in the same application and execute serially.

- Async: the table management job and the writing job run in the same application and execute asynchronously in
  parallel.

- Independent compaction/clustering job: an async compaction/clustering job runs in a separate application.

As the number of Hudi tables increases, the lack of management capabilities drives maintenance costs higher. This
proposal is to implement an independent compaction/clustering service to manage Hudi compaction/clustering jobs.

## Background

In the current implementation, a Hudi table that needs compaction/clustering has only three options:

1. Inline compaction/clustering: in this mode the table management job blocks the writing job.

2. Async compaction/clustering: in this mode the job executes asynchronously but shares resources with the Hudi
   writing job, which may affect the stability of the writing job and is not what users want.

3. An independent compaction/clustering job: this is a better way to schedule the job, since it executes asynchronously
   and does not share resources with the writing job, but it still has some problems:
    1. Users have to enable a lock provider so that there is no data loss. In particular, while compaction/clustering
       is being scheduled, no other writes should proceed concurrently, hence a lock is required.
    2. Users need to manually start an async compaction/clustering application, which means they have to maintain two
       jobs.
    3. As the number of Hudi jobs increases, there is no unified service to manage compaction/clustering jobs
       (monitoring, retries, history, etc.), which drives maintenance costs up.

With this effort, we want to provide an independent compaction/clustering service with the following abilities:

- Provide a pluggable execution interface that can adapt to multiple execution engines, such as Spark and Flink.

- Support failover by persisting compaction/clustering messages.

- Provide comprehensive metrics and reuse HoodieMetric to expose them externally.

- Provide automatic retry of failed compaction/clustering jobs.

## Implementation

### Client

Register with the service when the job starts.
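
A minimal sketch of what this client-side registration could look like; `TableServiceClient`, `RegisterRequest`, and their fields are hypothetical names used only for illustration and are not part of the existing Hudi codebase:

```java
// Hypothetical client-side API: the writing job registers itself with the
// table management service on start-up and later reports the table service
// instants (compaction/clustering plans) it has scheduled.
public interface TableServiceClient extends AutoCloseable {

  /** Called once when the writing job starts. */
  void register(RegisterRequest request);

  /** Called whenever a compaction or clustering instant is scheduled. */
  void submitTableService(String instantTime, String serviceType);

  /** Simple registration message that the request handler persists. */
  class RegisterRequest {
    public final String basePath;   // base path of the Hudi table being written
    public final String tableName;  // table name, used for grouping and monitoring
    public final String owner;      // owner of the writing job, used by scheduling rules

    public RegisterRequest(String basePath, String tableName, String owner) {
      this.basePath = basePath;
      this.tableName = tableName;
      this.owner = owner;
    }
  }
}
```

Under this sketch, the request handler described next would persist the `RegisterRequest` so the scheduler knows which tables it is managing.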

### Request Handler

Receives the client's request and saves the registration message in storage.

### Storage

#### Storage selection

**Requirements:** support single-row ACID transactions. Almost all write operations require this, such as operation
creation, status changes, and so on.

The candidates are:

**Hudi table**

pros:

- No external components are introduced or maintained.

cons:

- Each write to the Hudi table is a deltacommit, which further lowers the number of requests/sec that can be served.

**RDBMS**

pros:

- A database is well suited to storing structured data such as metadata.

- Can describe the relationships between many kinds of metadata.

cons:

- Introduces another system to maintain.

**File system**

pros:

- No external components are introduced or maintained.

cons:

- Not suitable for situations that require high performance.

- Extra work is needed to support the metadata organization.

**Key-value storage**

pros:

- Well suited to storing structured data such as metadata.

- An in-memory data store, so reads and writes are faster.

cons:

- Introduces another system to maintain.

- Storage capacity is a limitation.

Although the service's storage is pluggable, considering the common case of disk storage, good read/write performance,
and convenience of development, an RDBMS may be the better choice.

### Scheduler

- Periodically scan the storage and submit operation jobs according to user-specified rules, such as priority, queue,
  and owner.

Review Comment:
   We can trigger based on the commit event and configure a parameter for the maximum number of threads to control the
   maximum number of tasks submitted at the same time.
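
As a rough illustration of the event-driven alternative described in the comment above (not part of this PR), the sketch below reacts to commit events and uses a fixed-size pool whose size stands in for the proposed maximum-threads parameter; all class and method names are hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: a scheduler that reacts to commit events instead of
// periodically scanning storage. The fixed-size pool bounds how many
// compaction/clustering tasks can run at the same time.
public class EventDrivenScheduler implements AutoCloseable {

  private final ExecutorService workers;

  public EventDrivenScheduler(int maxConcurrentTasks) {
    // maxConcurrentTasks stands in for the proposed "maximum number of
    // threads" configuration parameter.
    this.workers = Executors.newFixedThreadPool(maxConcurrentTasks);
  }

  // Called by the request handler whenever a client reports a new commit
  // on a registered table.
  public void onCommitEvent(String tableBasePath, String instantTime) {
    workers.submit(() -> executeTableService(tableBasePath, instantTime));
  }

  private void executeTableService(String tableBasePath, String instantTime) {
    // Placeholder: launch the compaction/clustering job for this instant on
    // the configured engine (Spark or Flink) and record its status in storage.
    System.out.printf("Running table service for %s at %s%n", tableBasePath, instantTime);
  }

  @Override
  public void close() {
    workers.shutdown();
  }
}
```

With a bounded pool, commit events that arrive while all workers are busy simply queue up, so the number of concurrently running table service tasks never exceeds the configured limit.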
