So is there a reason you want to shuffle Hadoop types rather than the Java
types?
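If you convert to plain Java types right as you read the data, the shuffle never sees the Writables at all. Roughly something like this (just a sketch; the path, app name, and variable names are placeholders):

import org.apache.hadoop.io.{BytesWritable, LongWritable}
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("seqfile-example"))

// Read the sequence file as (LongWritable, BytesWritable) pairs.
val raw = sc.sequenceFile("/path/to/input", classOf[LongWritable], classOf[BytesWritable])

// Copy each record into plain Java types before any shuffle.
// copyBytes() materializes the payload so Hadoop's reused Writable
// buffers are not shared between records.
val pairs = raw.map { case (k, v) => (k.get(), v.copyBytes()) }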
As for your specific question: with Kryo you also need to register your
serializers. Did you do that?
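In case it helps, the Writable classes can be registered straight on the SparkConf. A minimal sketch (the app name is just a placeholder):

import org.apache.hadoop.io.{BytesWritable, LongWritable}
import org.apache.spark.SparkConf

// Enable Kryo and register the Hadoop Writable classes that will be shuffled.
val conf = new SparkConf()
  .setAppName("writable-shuffle")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[LongWritable], classOf[BytesWritable]))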
On Sun, Dec 3, 2017 at 10:02 AM pradeepbaji wrote:
> Hi,
>
> Is there any recommended way of serializing Hadoop Writables in Spark?
> Here is my problem.
>
> Question 1:
> I have a pair RDD which is created by reading a
> SequenceFile[LongWritable, BytesWritable]:
> RDD[(LongWritable, BytesWritable)]
>
> I have these two settings set in the Spark conf:
> spark.serializer=org.apache.spark.serializer.KryoSerializer