[ 
https://issues.apache.org/jira/browse/BEAM-9434?focusedWorklogId=416288&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-416288
 ]

ASF GitHub Bot logged work on BEAM-9434:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Apr/20 23:43
            Start Date: 05/Apr/20 23:43
    Worklog Time Spent: 10m 
      Work Description: ecapoccia commented on issue #11037: [BEAM-9434] 
performance improvements reading many Avro files in S3
URL: https://github.com/apache/beam/pull/11037#issuecomment-609504865
 
 
   > Sorry about the long delay
   
   My turn to apologize, @lukecwik. I was busy with something else, as well as 
settling into this new life of working from home due to the virus.
   
   > It looks like the spark translation is copying the number of partitions 
from the upstream transform for the reshuffle translation and in your case this 
is likely 1
   
   Gotcha. And yes, I can confirm that the number of partitions is 1, and not 
only in my case. As a matter of fact, the number of partitions is calculated (in 
a bizarre way) only for the root RDD (Create), which contains only the pattern 
for the S3 files -- a string like `s3://my-bucket-name/*.avro`. From that point 
onwards it is copied all the way through, so with one pattern it is always one.
   This confirms my initial impression when I wrote:
   
   > The impression I have is that when the physical plan is created, there is 
only one task detected that is bound to do the entire reading on one executor
   
   I have changed the PR: I reverted the original change and now - after your 
analysis - I am setting the number of partitions in the reshuffle transform 
translator.
   I am using the value of Spark's default parallelism, which is already 
available in the Spark configuration options for Beam.
   So essentially, with this PR the Spark configuration 
`--conf spark.default.parallelism=10` replaces the hint I wrote initially.
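   For context, a pipeline jar would then be submitted roughly as follows (a 
sketch only: the jar name, main class, and pipeline arguments are placeholders, 
not taken from this PR):

   ```shell
   # Hypothetical invocation; jar, class, and bucket names are placeholders.
   spark-submit \
     --conf spark.default.parallelism=10 \
     --class com.example.ReadAvroPipeline \
     beam-pipeline-bundled.jar \
     --runner=SparkRunner \
     --input="s3://my-bucket-name/*.avro"
   ```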
   
   I have tested this PR with the same configuration as the initial one, and 
the performance is identical. I can now see all the executors and nodes 
processing a partition of the read, as one would expect. I also did a 
back-to-back run with vanilla Beam and can confirm the problem is still there.
   
   I consider this implementation superior to the first one. Let me have your 
opinions on it. Also paging @iemejia 
   
   I have seen the previous build failing; I think the failing tests were 
unrelated to the changes. Keen to see a new build with these code changes.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 416288)
    Time Spent: 3h  (was: 2h 50m)

> Performance improvements processing a large number of Avro files in S3+Spark
> ----------------------------------------------------------------------------
>
>                 Key: BEAM-9434
>                 URL: https://issues.apache.org/jira/browse/BEAM-9434
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-aws, sdk-java-core
>    Affects Versions: 2.19.0
>            Reporter: Emiliano Capoccia
>            Assignee: Emiliano Capoccia
>            Priority: Minor
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> There is a performance issue when processing a large number of small Avro 
> files in Spark on K8S (tens of thousands or more).
> The recommended way of reading a pattern of Avro files in Beam is by means of:
>  
> {code:java}
> PCollection<AvroGenClass> records = p.apply(AvroIO.read(AvroGenClass.class)
>     .from("s3://my-bucket/path-to/*.avro").withHintMatchesManyFiles());
> {code}
> However, in the case of many small files, the above results in the entire 
> reading taking place in a single task/node, which is considerably slow and 
> has scalability issues.
> The option of omitting the hint is not viable, as it results in too many 
> tasks being spawned, and the cluster being kept busy coordinating tiny 
> tasks, with high overhead.
> There are a few workarounds on the internet which mainly revolve around 
> compacting the input files before processing, so that a reduced number of 
> bulky files is processed in parallel.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
