foreachPartition(part => {
  part.foreach(item_row => {
    if (item_row("table_name") == "kismia.orders_paid") { ... } else if (...) { ... }
  })
})
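Roughly, the full shape would be something like the sketch below: one StreamingContext, one input DStream, and per-record dispatch by table name inside foreachRDD / foreachPartition. This is only a minimal illustration, not the original job: the socket source, the key=value record format and the handler names (handleOrdersPaid, handleDefault) are assumptions.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Minimal sketch: a single streaming job that routes each record to a
// use-case handler based on its table name. Source, record format and
// handlers are illustrative placeholders.
object MultiUseCaseStream {

  // Hypothetical per-use-case handlers
  def handleOrdersPaid(row: Map[String, String]): Unit =
    println(s"orders_paid: $row")

  def handleDefault(row: Map[String, String]): Unit =
    println(s"other table: $row")

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("multi-use-case-stream").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Stand-in source; in practice this could be Kafka, Kinesis, etc.
    val lines = ssc.socketTextStream("localhost", 9999)

    // Parse "k1=v1,k2=v2,..." lines into a Map, like item_row above
    val rows = lines.map { line =>
      line.split(",").map(_.split("=", 2)).collect {
        case Array(k, v) => k -> v
      }.toMap
    }

    rows.foreachRDD { rdd =>
      rdd.foreachPartition { partition =>
        partition.foreach { row =>
          // Dispatch each record to its use case by table name
          row.get("table_name") match {
            case Some("kismia.orders_paid") => handleOrdersPaid(row)
            case _                          => handleDefault(row)
          }
        }
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}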
2016-08-01 9:39 GMT+03:00 Sumit Khanna :

> Any ideas, guys? What are the best practices for processing multiple
> streams? I could trace a few Stack Overflow comments wherein they
> recommend a separate jar for each stream / use case. But that isn't
> really what I want; it's better if one or multiple Spark Streaming
> contexts can all be handled well within a single jar.
>
> Guys, please reply.
>
> Awaiting,
>
> Thanks,
>> ... the same, but this still isn't clearly demystified anywhere. I could
>> skim through something like
>>
>> http://stackoverflow.com/questions/29612726/how-do-you-setup-multiple-spark-streaming-jobs-with-different-batch-durations
>>
>> http://stackoverflow.com/questions/37006565/multiple-spark-streaming-contexts-on-one-worker
>>
>> Thanks in Advance!
>> Sumit
>>
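On the single-jar question above: only one StreamingContext can be active in a JVM at the same time, so the usual approach is to register every use case as its own DStream pipeline on that one context and ship everything in a single jar. Below is a minimal sketch of that idea; the socket sources, ports and transformations are made up for illustration, and since a StreamingContext has a single batch interval, a longer effective interval for one use case is approximated with window().

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Sketch: several independent use cases wired onto one StreamingContext,
// so everything runs from a single jar. Sources and processing are illustrative.
object SingleJarMultipleStreams {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("single-jar-multiple-streams").setMaster("local[4]")
    // Only one StreamingContext may be active per JVM; all use cases share it
    val ssc = new StreamingContext(conf, Seconds(5))

    // Use case 1: word counts from one source
    val ordersStream = ssc.socketTextStream("localhost", 9991)
    ordersStream.flatMap(_.split("\\s+"))
      .map(w => (w, 1))
      .reduceByKey(_ + _)
      .print()

    // Use case 2: a different computation, with a longer effective interval
    // emulated via windowing (the batch duration itself is fixed at 5s)
    val eventsStream = ssc.socketTextStream("localhost", 9992)
    eventsStream.window(Seconds(30), Seconds(30))
      .count()
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}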