This is an automated email from the ASF dual-hosted git repository.

leesf pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new e5cb883  fix typo (#1587)
e5cb883 is described below

commit e5cb883a8440d53e97f3837b583831eef7db2ae5
Author: wanglisheng81 <[email protected]>
AuthorDate: Wed May 6 19:16:28 2020 +0800

    fix typo (#1587)
---
 docs/_docs/1_3_use_cases.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/_docs/1_3_use_cases.md b/docs/_docs/1_3_use_cases.md
index 25b35bb..c6a5623 100644
--- a/docs/_docs/1_3_use_cases.md
+++ b/docs/_docs/1_3_use_cases.md
@@ -48,7 +48,7 @@ Unfortunately, in today's post-mobile & pre-IoT world, __late data from intermit
 In such cases, the only remedy to guarantee correctness is to [reprocess the last few hours](https://falcon.apache.org/FalconDocumentation.html#Handling_late_input_data) worth of data,
 over and over again each hour, which can significantly hurt the efficiency across the entire ecosystem. For e.g; imagine reprocessing TBs worth of data every hour across hundreds of workflows.
 
-Hudi comes to the rescue again, by providing a way to consume new data (including late data) from an upsteam Hudi table `HU` at a record granularity (not folders/partitions),
+Hudi comes to the rescue again, by providing a way to consume new data (including late data) from an upstream Hudi table `HU` at a record granularity (not folders/partitions),
 apply the processing logic, and efficiently update/reconcile late data with a downstream Hudi table `HD`. Here, `HU` and `HD` can be continuously scheduled at a much more frequent schedule
 like 15 mins, and providing an end-end latency of 30 mins at `HD`.
 
