So, why not make a fake key and aggregate on it?
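A minimal sketch of the fake-key idea in plain Python, assuming the goal is to sum each consecutive pair of rows element-wise (the data and the pairwise grouping are taken from the question below; the variable names are illustrative):

```python
from collections import defaultdict

# Sample rows from the thread.
rows = [[12, 45], [14, 50], [10, 35], [11, 50]]

# Assign each row an index and derive a fake key: index // 2
# maps rows 0,1 -> key 0, rows 2,3 -> key 1, i.e. pairwise groups.
keyed = [(i // 2, row) for i, row in enumerate(rows)]

# Aggregate rows sharing the same fake key by element-wise sum.
agg = defaultdict(lambda: [0, 0])
for key, (a, b) in keyed:
    agg[key][0] += a
    agg[key][1] += b

result = [agg[k] for k in sorted(agg)]
# result == [[26, 95], [21, 85]]
```

With an actual RDD the same pattern can use zipWithIndex to generate the index and reduceByKey to aggregate, e.g. `rdd.zipWithIndex().map(lambda t: (t[1] // 2, t[0])).reduceByKey(lambda a, b: [a[0] + b[0], a[1] + b[1]])`.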

On Mon, Mar 28, 2016 at 6:21 PM, sujeet jog <sujeet....@gmail.com> wrote:

> Hi,
>
> I have a RDD  like this .
>
> [ 12, 45 ]
> [ 14, 50 ]
> [ 10, 35 ]
> [ 11, 50 ]
>
> I want to aggregate the values of the first two rows into one row, and subsequently the
> next two rows into another single row, and so on.
>
> I don't have a key to aggregate on, which the PySpark aggregate
> functions require. How can I achieve this?
>
>
>


-- 
Regards,
Alexander
