Hi,
I have a cluster of 4 machines for Spark. I want my Spark app to run on 2
machines only, and the other 2 machines to be left for other Spark apps.
So my question is: can I restrict my app to run on those 2 machines only by
passing some IPs when setting up SparkConf, or by any other setting?
Thanks,
Sham
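As far as I know, standalone mode in this era has no per-IP placement option
in SparkConf. A minimal sketch of one workaround, assuming a second standalone
master (the URL "spark://node3:7077" is made up) whose workers run only on the
2 reserved machines:

    SparkConf conf = new SparkConf()
            .setAppName("restricted-app")
            // point the app at a master that only manages the 2 reserved machines
            .setMaster("spark://node3:7077")
            // optionally also cap total cores so the app cannot grow beyond them
            .set("spark.cores.max", "4");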
Hey Hareesh,
Thanks for the help, they were starving. I increased the cores and memory on
that machine, and now it is working fine.
Thanks again
On Tue, May 3, 2016 at 12:57 PM, Shams ul Haque wrote:
> No, I made a cluster of 2 machines. And after submission to the master, this
> app moves on
>> it doesn't process any data coming
>> from Kafka. And when I kill that app by pressing Ctrl + C on the terminal,
>> it starts processing all the data received from Kafka and then shuts down.
>>
>> I am trying to figure out why this is happening. Please help me if you
>> know anything.
>>
>> Thanks and regards
>> Shams ul Haque
>>
>
>
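That "starving" fix matches the usual receiver pattern: with the receiver-based
Kafka stream, each receiver pins one core full-time, so with too few cores the
batches only queue up and get drained at shutdown. A minimal sketch, with a
made-up app name and batch interval:

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    SparkConf conf = new SparkConf()
            .setAppName("kafka-streaming")
            // at least receivers + 1 cores, or no task slots are left for processing
            .setMaster("local[2]");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));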
Anyone have any idea? Or should I raise a bug for that?
Thanks,
Shams
On Fri, Mar 11, 2016 at 3:40 PM, Shams ul Haque wrote:
> Hi,
>
> I want to kill a Spark Streaming job gracefully, so that whatever Spark
> has picked up from Kafka gets processed. My Spark version is: 1.6.0
>
Hi,
I want to kill a Spark Streaming job gracefully, so that whatever Spark has
picked up from Kafka gets processed. My Spark version is: 1.6.0
When I tried killing a Spark Streaming job from the Spark UI, it didn't stop
the app completely. In the Spark UI the job moved to the COMPLETED section,
but in the log it continues running:
pastebin.com/0LjTWLfm
Thanks
Shams
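For what it's worth, the 1.6 API does have a graceful path. A minimal sketch,
assuming jssc is the app's JavaStreamingContext and conf its SparkConf:

    // either set this before start(), so an external kill (SIGTERM) stops gracefully
    conf.set("spark.streaming.stopGracefullyOnShutdown", "true");

    // or stop explicitly: finish processing everything already received, then exit
    jssc.stop(true, true); // stopSparkContext = true, stopGracefully = true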
On Thu, Mar 10, 2016 at 8:11 PM, Ted Yu wrote:
> Can you provide a bit more information ?
>
> Release of Spark
> command for submitting your app
> code snippet of your app
> pastebin of log
>
> Thanks
>
> On Thu, Mar 10, 2016 at
Hi,
I have developed a Spark realtime app and started Spark standalone on my
laptop. But when I try to submit that app to Spark, it always stays
in WAITING state and its cores are always zero.
I have set:
export SPARK_WORKER_CORES="2"
export SPARK_EXECUTOR_CORES="1"
in spark-env.sh, but still nothing happened.
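If I read the spark-env.sh template right, SPARK_EXECUTOR_CORES is only picked
up in YARN mode; on a standalone master the per-app request lives on the
SparkConf, and an app sits in WAITING whenever no worker can satisfy that
request. A minimal sketch with illustrative values:

    SparkConf conf = new SparkConf()
            .setAppName("realtime-app")
            // ask for no more cores than a worker actually offers
            .set("spark.cores.max", "1")
            // and no more memory than the worker has free
            .set("spark.executor.memory", "512m");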
Hi,
I want to implement streaming using a MongoDB tailable cursor. Please give me
a hint on how I can do this.
I think I have to extend some class and use its methods to do the stuff.
Thanks and regards
Shams ul Haque
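The class to extend is Spark Streaming's Receiver. A minimal sketch of the
skeleton, where the tailable-cursor wiring itself is left as hypothetical
comments since it depends on the Mongo driver in use:

    import org.apache.spark.storage.StorageLevel;
    import org.apache.spark.streaming.receiver.Receiver;

    public class MongoTailableReceiver extends Receiver<String> {

        public MongoTailableReceiver() {
            super(StorageLevel.MEMORY_AND_DISK_2());
        }

        @Override
        public void onStart() {
            // read the cursor off the main thread; Spark calls onStart() once
            new Thread(this::receive).start();
        }

        @Override
        public void onStop() {
            // close the tailable cursor / Mongo client here
        }

        private void receive() {
            while (!isStopped()) {
                // hypothetical: pull the next document from a tailable cursor on a
                // capped collection and hand it to Spark, e.g. store(doc.toJson());
            }
        }
    }

The receiver is then hooked in with jssc.receiverStream(new MongoTailableReceiver()).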
klaskowski/ |
> http://blog.jaceklaskowski.pl
> Mastering Spark https://jaceklaskowski.gitbooks.io/mastering-apache-spark/
> Follow me at https://twitter.com/jaceklaskowski
> Upvote at http://stackoverflow.com/users/1305344/jacek-laskowski
>
>
> On Tue, Dec 1, 2015 at 10:47 AM, Shams ul Haque wrote:
Hi All,
I have made 3 RDDs from 3 different datasets, all grouped by
CustomerID, in which 2 RDDs have values of Iterable type and one has a single
bean. All RDDs have an id of Long type as the CustomerId. Below are the models
for the 3 RDDs:
JavaPairRDD<Long, Iterable<...>>
JavaPairRDD<Long, Iterable<...>>
JavaPairRDD<Long, ...>
Now, I have to merge all three RDDs.
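One hedged way to do that merge (A, B and C below are hypothetical stand-ins
for the real bean types): cogroup joins all three pair RDDs on the Long
customer id in a single pass:

    import org.apache.spark.api.java.JavaPairRDD;
    import scala.Tuple3;

    // given the three grouped RDDs a, b, c keyed by customer id:
    public static <A, B, C> JavaPairRDD<Long, Tuple3<Iterable<A>, Iterable<B>, Iterable<C>>>
            merge(JavaPairRDD<Long, A> a, JavaPairRDD<Long, B> b, JavaPairRDD<Long, C> c) {
        return a.cogroup(b, c);
    }

Each customer id then maps to whatever each of the three RDDs had for it,
including empty Iterables where one dataset had nothing.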
Hi,
I have grouped all my customers in JavaPairRDD<Long, List<ProductBean>>
by their customerId (of Long type). That means every customerId has a List of
ProductBean.
Now I want to save all ProductBeans to the DB irrespective of customerId. I got
all the values by using the method:
JavaRDD<List<ProductBean>> values = custGroupRDD.values();
Now I want to
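Assuming the goal is to flatten the lists and write each bean to the DB, a
minimal sketch (Db is a hypothetical DAO, and this uses the 1.6 Java API where
flatMap takes an Iterable-returning function):

    JavaRDD<ProductBean> beans = custGroupRDD.values()
            .flatMap(list -> list); // each List<ProductBean> is already an Iterable

    beans.foreachPartition(iter -> {
        Db db = Db.connect(); // hypothetical: one connection per partition, not per bean
        while (iter.hasNext()) {
            db.save(iter.next());
        }
        db.close();
    });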