[ 
https://issues.apache.org/jira/browse/KUDU-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liupengcheng updated KUDU-3054:
-------------------------------
    Description: 
We recently encountered an issue in kudu-spark that causes Spark SQL query 
failures:

```
Job aborted due to stage failure: Total size of serialized results of 942 tasks 
(2.0 GB) is bigger than spark.driver.maxResultSize (2.0 GB)
```

After some careful debugging, we found that the kudu.write_duration 
accumulator pushes each serialized Spark task result above 2 MB, so the total 
result size of the stage exceeds the limit.

However, this stage only reads a Kudu table and performs a shuffle exchange; 
it does not write to any Kudu table.

So I think this accumulator should be initialized lazily in KuduContext to 
avoid such issues.
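
As a rough sketch of the proposed fix (the class and member names below are 
illustrative, not the actual kudu-spark code, and a plain LongAccumulator 
stands in for the real write-duration histogram accumulator), a Scala 
`lazy val` defers creating and registering the accumulator until the first 
write:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.util.LongAccumulator

// Hypothetical sketch of lazy accumulator initialization in a
// KuduContext-like class. A LongAccumulator stands in for the real
// kudu.write_duration histogram accumulator.
class KuduContextSketch(@transient val sc: SparkContext) extends Serializable {

  // `lazy val` defers creation and registration until first access, so
  // jobs that only read Kudu tables never register this accumulator or
  // ship its updates back with their task results.
  @transient private lazy val writeDurationAcc: LongAccumulator = {
    val acc = new LongAccumulator
    sc.register(acc, "kudu.write_duration")
    acc
  }

  def recordWriteDuration(elapsedMs: Long): Unit = {
    // The first call here triggers the registration above.
    writeDurationAcc.add(elapsedMs)
  }
}
```

With eager initialization in the constructor, every job launched through the 
context carries the accumulator; with the lazy variant, only stages that 
actually write pay the per-task serialization cost of its updates.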

!https://issues.apache.org/jira/secure/attachment/12993451/durationHisto_large.png!

 

!https://phabricator.d.xiaomi.net/file/data/rf4vosfntshsqf645iwi/PHID-FILE-oaion5kpfutxoxjtjzrs/image.png!

!https://phabricator.d.xiaomi.net/file/data/7cirhhktqarpxsdyisek/PHID-FILE-drhuwotg3dr7hnmocdex/image.png!

  was:
We recently encountered an issue in kudu-spark that causes Spark SQL query 
failures:

```
Job aborted due to stage failure: Total size of serialized results of 942 tasks 
(2.0 GB) is bigger than spark.driver.maxResultSize (2.0 GB)
```

After some careful debugging, we found that the kudu.write_duration 
accumulator pushes each serialized Spark task result above 2 MB, so the total 
result size of the stage exceeds the limit.

However, this stage only reads a Kudu table and performs a shuffle exchange; 
it does not write to any Kudu table.

So I think this accumulator should be initialized lazily in KuduContext to 
avoid such issues.

!https://phabricator.d.xiaomi.net/file/data/fnnfiseg2cy4pzs4rlj7/PHID-FILE-6tj4q4w4hsvtba2xiaew/image.png!

 

!https://phabricator.d.xiaomi.net/file/data/rf4vosfntshsqf645iwi/PHID-FILE-oaion5kpfutxoxjtjzrs/image.png!

!https://phabricator.d.xiaomi.net/file/data/7cirhhktqarpxsdyisek/PHID-FILE-drhuwotg3dr7hnmocdex/image.png!


> Init kudu.write_duration accumulator lazily
> -------------------------------------------
>
>                 Key: KUDU-3054
>                 URL: https://issues.apache.org/jira/browse/KUDU-3054
>             Project: Kudu
>          Issue Type: Improvement
>          Components: spark
>    Affects Versions: 1.9.0
>            Reporter: liupengcheng
>            Priority: Major
>         Attachments: durationHisto_large.png, durationhisto.png, 
> read_kudu_and_shuffle.png
>
>
> We recently encountered an issue in kudu-spark that causes Spark SQL query 
> failures:
> ```
> Job aborted due to stage failure: Total size of serialized results of 942 
> tasks (2.0 GB) is bigger than spark.driver.maxResultSize (2.0 GB)
> ```
> After some careful debugging, we found that the kudu.write_duration 
> accumulator pushes each serialized Spark task result above 2 MB, so the 
> total result size of the stage exceeds the limit.
> However, this stage only reads a Kudu table and performs a shuffle exchange; 
> it does not write to any Kudu table.
> So I think this accumulator should be initialized lazily in KuduContext to 
> avoid such issues.
> !https://issues.apache.org/jira/secure/attachment/12993451/durationHisto_large.png!
>  
> !https://phabricator.d.xiaomi.net/file/data/rf4vosfntshsqf645iwi/PHID-FILE-oaion5kpfutxoxjtjzrs/image.png!
> !https://phabricator.d.xiaomi.net/file/data/7cirhhktqarpxsdyisek/PHID-FILE-drhuwotg3dr7hnmocdex/image.png!


