If there's no built-in local groupBy, you could do something like this:
df.groupBy("C1", "C2").agg(...).flatMap(x => x.groupBy("C1")).agg(...)
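The idea above is a two-level grouping: aggregate once per (C1, C2) pair, then regroup those partial results by C1 alone and aggregate again. As a minimal plain-Python sketch of that shape (the column names, sample rows, and sum aggregation are all hypothetical, stand-ins for whatever the Spark job would actually compute):

```python
from collections import defaultdict

# Hypothetical rows with columns C1, C2, and a value to aggregate.
rows = [
    ("a", "x", 1),
    ("a", "y", 2),
    ("a", "x", 3),
    ("b", "x", 4),
]

# First level: aggregate per (C1, C2), analogous to
# df.groupBy("C1", "C2").agg(...)
per_pair = defaultdict(int)
for c1, c2, v in rows:
    per_pair[(c1, c2)] += v

# Second level: regroup the per-pair partial results by C1 alone.
per_c1 = defaultdict(list)
for (c1, _c2), total in per_pair.items():
    per_c1[c1].append(total)

# Final aggregation over each C1 group.
result = {c1: sum(totals) for c1, totals in per_c1.items()}
print(result)  # {'a': 6, 'b': 4}
```

In Spark itself the second level would typically be another `groupBy("C1").agg(...)` on the first aggregation's output rather than a `flatMap`, since the first `agg` already yields one row per (C1, C2) pair.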
Thank you.
Daniel
On 29 Jan 2017, at 18:33, Mendelson, Assaf
<assaf.mendel...@rsa.com> wrote:
Hi,
Consider the following example:
df.groupby(C1,C2).agg(s
No
Thank you.
Daniel
On 20 Jan 2017, at 23:28, kant kodali
<kanth...@gmail.com> wrote:
Hi,
I am running Spark standalone with no storage. When I use spark-submit to
submit my job, I get the following exception, and I wonder if this is
something to worry about:
java.io.IOException: HAD