SQL context available as sqlContext.
Loading test.spark...
pairs: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[0] at
makeRDD at <console>:21
15/08/21 09:58:51 WARN SizeEstimator: Failed to check whether
UseCompressedOops is set; assuming yes
res0: Array[(Int, Int)] = Array((0,3), (1,50), (2,40))

Yong

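The test.spark script itself is not shown in this excerpt. A minimal version
that would produce the output above, as an assumed reconstruction:

// test.spark: assumed contents, inferred from the console output above
val pairs = sc.makeRDD(Seq((0,1),(0,2),(1,20),(1,30),(2,40)))
// reduceByKey returns a new RDD; take(3) is the action that triggers
// evaluation and yields res0: Array((0,3), (1,50), (2,40))
pairs.reduceByKey((x,y) => x+y).take(3)
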
--------------
Date: Fri, 21 Aug 2015 19:24:09 +0530
Subject: RE: Transformation not happening for reduceByKey or GroupByKey
From: jsatishchan...@gmail.com
To: abhis...@tetrationanalytics.com
CC: user@spark.apache.org
HI Abhishek,

I have even tried that but rdd2 is empty

Regards,
Satish

On Fri, Aug 21, 2015 at 6:47 PM, Abhishek R. Singh <
abhis...@tetrationanalytics.com> wrote:

> You had:
>
> > RDD.reduceByKey((x,y) => x+y)
> > RDD.take(3)
>
> Maybe try:
>
> > rdd2 = RDD.reduceByKey((x,y) => x+y)
> > rdd2.take(3)
You had:
> RDD.reduceByKey((x,y) => x+y)
> RDD.take(3)
Maybe try:
> rdd2 = RDD.reduceByKey((x,y) => x+y)
> rdd2.take(3)
-Abhishek-
On Aug 20, 2015, at 3:05 AM, satish chandra j wrote:
> HI All,
> I have data in RDD as mentioned below:
>
> RDD : Array[(Int, Int)] = Array((0,1), (0,2),(1,20),(1,30),(2,40))
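The suggestion above rests on a core Spark point: RDDs are immutable, and
transformations such as reduceByKey are lazy and return a new RDD rather
than modifying the one they are called on. Calling RDD.reduceByKey(...) and
discarding the result leaves the original RDD untouched, so a following
RDD.take(3) still shows the raw pairs. A minimal sketch (variable names
assumed):

val rdd = sc.makeRDD(Seq((0,1),(0,2),(1,20),(1,30),(2,40)))
rdd.reduceByKey((x,y) => x+y)   // result discarded; rdd itself is unchanged
rdd.take(3)                     // Array((0,1), (0,2), (1,20))

val rdd2 = rdd.reduceByKey((x,y) => x+y)   // keep the returned RDD
rdd2.take(3)                               // e.g. Array((0,3), (1,50), (2,40))
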
"dse spark" just wraps the "spark-shell" command; underneath it is just
invoking "spark-shell".

I don't know too much about the original problem though.

Yong

------
Date: Fri, 21 Aug 2015 18:19:49 +0800
Subject: Re: Transformation not happening for reduceByKey or GroupByKey
From: zjf...@gmail.com
To: jsatishchan...@gmail.com
CC: robin.e...@xense.co.uk; user@spark.apache.org

Hi Satish,

I don't see where Spark supports "-i", so I suspect it is provided by DSE.
In that case, it might be a bug in DSE.
On Fri, Aug 21, 2015 at 6:02 PM, satish chandra j
wrote:
> HI Robin,
> Yes, it is DSE but issue is related to Spark only
>
> Regards,
> Satish Chandra
>
> On Fri, Aug 21, 2015 at 3:06 PM, Robin East wrote:
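A note on the "-i" question: "-i <file>" is an option of the underlying
Scala REPL rather than of Spark's own launcher, and spark-shell passes it
through to the REPL, which is why a script can be preloaded this way (usage
sketch, script name assumed):

spark-shell -i test.spark

Whether "dse spark" forwards the option the same way is exactly what is
being questioned in this part of the thread.
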
HI Robin,
Yes, it is DSE but issue is related to Spark only
Regards,
Satish Chandra
On Fri, Aug 21, 2015 at 3:06 PM, Robin East wrote:
> Not sure, never used dse - it’s part of DataStax Enterprise right?
>
> On 21 Aug 2015, at 10:07, satish chandra j
> wrote:
>
> HI Robin,
> Yes, below mentioned piece of code works fine in Spark Shell, but the same,
> when placed in a script file and executed with -i, creates an empty RDD
Yes, DSE 4.7
Regards,
Satish Chandra
On Fri, Aug 21, 2015 at 3:06 PM, Robin East wrote:
> Not sure, never used dse - it’s part of DataStax Enterprise right?
>
> On 21 Aug 2015, at 10:07, satish chandra j
> wrote:
>
> HI Robin,
> Yes, below mentioned piece of code works fine in Spark Shell, but the same,
> when placed in a script file and executed with -i, creates an empty RDD
HI Robin,
Yes, below mentioned piece of code works fine in Spark Shell, but the same,
when placed in a script file and executed with -i, creates an
empty RDD

scala> val pairs = sc.makeRDD(Seq((0,1),(0,2),(1,20),(1,30),(2,40)))
pairs: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[77]
at makeRDD at <console>:21
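The same steps as a script file, with the result printed explicitly instead
of relying on the REPL echo (a sketch; the actual script file is not shown
in this excerpt):

// assumed contents of the script file, run with: dse spark -i <file>
val pairs = sc.makeRDD(Seq((0,1),(0,2),(1,20),(1,30),(2,40)))
val sums = pairs.reduceByKey((x,y) => x+y)
sums.collect().foreach(println)   // expect (0,3), (1,50), (2,40)
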
HI All,
Could anybody let me know what it is that I am missing here? It should work,
as it is a basic transformation.

Please let me know if any additional information is required.
Regards,
Satish
On Thu, Aug 20, 2015 at 3:35 PM, satish chandra j
wrote:
> HI All,
> I have data in RDD as mentioned below:
>
>
HI All,

I have data in RDD as mentioned below:

RDD : Array[(Int, Int)] = Array((0,1), (0,2),(1,20),(1,30),(2,40))

I am expecting output as Array((0,3),(1,50),(2,40)), just a sum function on
values for each key

Code:
RDD.reduceByKey((x,y) => x+y)
RDD.take(3)

Result in console:
RDD: org.apache.spark.rdd.RDD[(Int, Int)] = ...
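Since the subject line mentions GroupByKey as well: the same per-key sum can
be written with groupByKey, although reduceByKey is generally preferred
because it combines values on the map side before shuffling (a sketch using
the sample data from this thread):

val rdd = sc.makeRDD(Seq((0,1),(0,2),(1,20),(1,30),(2,40)))
// groupByKey shuffles every value; mapValues then sums each group
val sums = rdd.groupByKey().mapValues(_.sum)
sums.collect()   // e.g. Array((0,3), (1,50), (2,40))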