From: Mich Talebzadeh
To: Praveen Devarao/India/IBM@IBMIN, "user @spark" <user@spark.apache.org>
Date: 26/04/2016 04:03 pm
Subject: Re: Splitting spark dstream into separate fields
Thanks Praveen.
With regard to the key/value pair: my Kafka producer takes the following rows as input
cat ${IN_FILE} | ${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list
rhes564:9092 --topic newtopic
That ${IN_FILE} is the source of the prices (1000 ...), as follows:
ID TIMESTAMP ...
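Rows in that shape can be split into named fields once they arrive in Spark. A minimal sketch, assuming comma-separated rows; `PriceRow` and `parseRow` are hypothetical names, and only the ID and TIMESTAMP columns are shown in the thread (any further columns are kept as-is in `rest`):

```scala
// Assumption: rows are comma-separated, e.g. "1,2016-04-26 16:03:00,...".
// PriceRow / parseRow are illustrative names, not from the thread.
case class PriceRow(id: String, timestamp: String, rest: Array[String])

def parseRow(line: String): PriceRow = {
  val cols = line.split(",")
  // First column is the ID, second the timestamp; anything else is kept raw.
  PriceRow(cols(0), cols(1), cols.drop(2))
}
```

In the streaming job this would be applied per record, e.g. `lines.map(parseRow)`.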
Hi Mich,
>>
val lines = dstream.map(_._2)
Does this map the record into components? Is that the correct
understanding of it?
<<
I am not sure what you are referring to when you say "record into
components". The above function is basically giving you the second
element (the value) of the (key, value) tuple.
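What `dstream.map(_._2)` does can be sketched on a plain Scala collection. The (key, value) shape below assumes the direct Kafka stream API, where each record arrives as a tuple and `_._2` selects the message value; the sample records are made up for illustration:

```scala
// Each record from a direct Kafka stream is a (key, value) tuple.
// Sample data below is invented purely to show the projection.
val records: Seq[(String, String)] = Seq(
  ("k1", "1,2016-04-26 16:03:00"),
  ("k2", "2,2016-04-26 16:03:01")
)

// Plain-collection equivalent of  val lines = dstream.map(_._2):
// keep only the value (the raw message string), dropping the Kafka key.
val lines = records.map(_._2)

// Splitting each line into separate fields is then a second map,
// here assuming comma-separated values.
val fields = lines.map(_.split(","))
```

The same two `map` steps apply unchanged on the DStream itself, since DStream transformations mirror the Scala collection API.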