Then apply a transformation over the DStream to pull out just the
required information. :)
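For example, something along these lines (a minimal sketch: the "@"-delimited
record layout follows the map shown later in this thread, and the output path
is a placeholder):

import org.apache.spark.api.java.function.Function;

// Keep only the unique message id and a timestamp instead of the full record.
JavaDStream<String> idAndTime = newinputStream.map(new Function<String, String>() {
    @Override
    public String call(String record) throws Exception {
        String msgId = record.split("@", 2)[0];           // id was prefixed as msgId@payload
        return msgId + "," + System.currentTimeMillis();  // log only id + timestamp
    }
});
idAndTime.dstream().saveAsTextFiles("hdfs:///logs/id-times", "txt");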
Thanks
Best Regards
On Tue, Jun 23, 2015 at 3:22 PM, anshu shukla wrote:
Thanks a lot!
Because I just want to log the timestamp and the unique message id, not the
full RDD.
On Tue, Jun 23, 2015 at 12:41 PM, Akhil Das wrote:
Why don't you do a normal .saveAsTextFiles?
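For instance (a two-line sketch; the output prefix and suffix are
placeholders):

// Each batch interval produces a directory <prefix>-<batchTime>.<suffix>,
// with one part file per partition.
newinputStream.dstream().saveAsTextFiles("hdfs:///output/messages", "txt");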
Thanks
Best Regards
On Mon, Jun 22, 2015 at 11:55 PM, anshu shukla wrote:
Thanks for the reply!!
Yes, it is fine if it writes on any machine of the cluster. Can you please
help me with how to do this? Previously I was writing using collect(), so
some of my tuples were missing while writing.
// previous logic that was just creating the file on the master -
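(The code itself is cut off in the archive. Purely for illustration, a
hypothetical reconstruction of that collect()-based pattern; spoutLog and its
write() method are assumptions based on the reply below:)

inputStream.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        // collect() pulls every record to the driver, so the file is only
        // ever written on the master, and records are lost if a batch fails.
        for (String line : rdd.collect()) {
            spoutLog.write(line);  // hypothetical plain (non-Spark) file writer
        }
        return null;
    }
});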
Is spoutLog just a non-Spark file writer? If you run that in the map call
on a cluster, it's going to be writing to the filesystem of the executor it's
running on. I'm not sure if that's what you intended.
On Mon, Jun 22, 2015 at 1:35 PM, anshu shukla wrote:
It runs perfectly in local mode but does not write to the file in cluster
mode. Any suggestions please?
// msgId is a long counter
JavaDStream<String> newinputStream = inputStream.map(new Function<String, String>() {
    @Override
    public String call(String v1) throws Exception {
        String s1 = msgId + "@" + v1;
        System.out.println(s1);  // prints on the executor's stdout, not the driver's
        return s1;
    }
});
Can't we write data to a text file in parallel, with multiple executors
running at the same time?
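(For reference, a sketch of the pattern that gives a parallel write; the
output path is a placeholder. saveAsTextFile runs on the executors, each
partition writing its own part-NNNNN file:)

newinputStream.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        if (!rdd.isEmpty()) {
            // Executed in parallel on the executors, one part file per partition.
            rdd.saveAsTextFile("hdfs:///output/batch-" + System.currentTimeMillis());
        }
        return null;
    }
});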
--
Thanks & Regards,
Anshu Shukla