It might be a network issue. The error states that it failed to bind the server IP
address.
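If it is the usual bind failure on the driver side, one thing that sometimes helps is pinning the address the driver advertises, either by exporting SPARK_LOCAL_IP on the driver machine or via the configs below. A small sketch, assuming Spark 1.x config names and a placeholder host/port:

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder host/port; spark.driver.host must be an address that is
// actually reachable from the cluster, otherwise the bind/connect fails.
val conf = new SparkConf()
  .setAppName("bind-address-check")
  .set("spark.driver.host", "10.0.0.12")
  .set("spark.driver.port", "51000")
val sc = new SparkContext(conf)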
Chester
Sent from my iPhone
> On Jul 18, 2015, at 11:46 AM, Amjad ALSHABANI wrote:
>
> Does anybody have any idea about the error I'm having? I am really
> clueless... and would appreciate any idea :)
>
> Thanks i
I just implemented this in our application. The impersonation is done before
the job is submitted. In Spark on YARN (we are using yarn-cluster mode), it just
takes the current user from UserGroupInformation and submits to the YARN
resource manager.
If one uses kinit from the command line, the whole JVM
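A minimal sketch of that impersonation step (not the exact code from our application; the user name and the submit helper below are placeholders), assuming the submitting principal is allowed to proxy via the hadoop.proxyuser.* settings:

import java.security.PrivilegedExceptionAction
import org.apache.hadoop.security.UserGroupInformation

// Hypothetical stand-in for the application's actual YARN submission call.
def submitToYarn(): Unit = ???

// The kinit'd (or keytab) service principal that does the submitting.
val realUser = UserGroupInformation.getLoginUser
// The end user to impersonate; "alice" is just a placeholder.
val proxyUser = UserGroupInformation.createProxyUser("alice", realUser)

// Inside doAs, UserGroupInformation.getCurrentUser is the proxy user, so the
// Spark yarn client submits the application to the ResourceManager as "alice".
proxyUser.doAs(new PrivilegedExceptionAction[Unit] {
  override def run(): Unit = submitToYarn()
})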
They should be the same except that the package names are changed to avoid a
protobuf conflict. You can use them just like the other Akka jars.
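In an SBT build the shaded artifacts would be pulled in roughly like this (the coordinates are from memory and worth double-checking against Maven Central):

// build.sbt sketch: Spark's shaded Akka lives under org.spark-project.akka;
// the Akka API is unchanged, only the embedded protobuf packages are relocated.
libraryDependencies ++= Seq(
  "org.spark-project.akka" %% "akka-actor"  % "2.3.4-spark",
  "org.spark-project.akka" %% "akka-remote" % "2.3.4-spark"
)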
Chester
Sent from my iPhone
> On Oct 17, 2014, at 5:56 AM, "Ruebenacker, Oliver A"
> wrote:
>
>
> Hello,
>
> My SBT pulls in, among others, the followi
I am working on a PR that allows one to send the same Spark listener event
messages back to the application in yarn-cluster mode.
So far I have put this function in our application; our UI receives and
displays the same Spark job event messages such as progress, job start, job
completed, etc.
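The listener side looks roughly like the sketch below (forwardToUi is a hypothetical hook standing in for however the application ships the message back, e.g. an actor or an HTTP callback):

import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart, SparkListenerTaskEnd}

class UiEventListener(forwardToUi: String => Unit) extends SparkListener {
  override def onJobStart(e: SparkListenerJobStart): Unit =
    forwardToUi(s"job ${e.jobId} started with ${e.stageIds.size} stages")
  override def onTaskEnd(e: SparkListenerTaskEnd): Unit =
    forwardToUi(s"task finished in stage ${e.stageId}")
  override def onJobEnd(e: SparkListenerJobEnd): Unit =
    forwardToUi(s"job ${e.jobId} ended: ${e.jobResult}")
}

// Register on the driver; in yarn-cluster mode the driver runs inside the
// cluster, which is why these events need an extra hop back to the caller.
def register(sc: SparkContext): Unit =
  sc.addSparkListener(new UiEventListener(msg => println(msg)))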
Esse
Akka actors are managed by a thread pool, so the same actor can run on
different threads.
If you create the HiveContext in the actor, is it possible that you are
essentially creating different instances of HiveContext?
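A sketch of the pattern I mean, assuming a plain Spark 1.x HiveContext: create it once outside the actor and pass it in, so every message, whichever pool thread handles it, goes through the same instance:

import akka.actor.{Actor, ActorSystem, Props}
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

// The actor only holds a reference; it never constructs a HiveContext itself.
class QueryActor(hive: HiveContext) extends Actor {
  def receive = {
    case query: String => sender() ! hive.sql(query).count()
  }
}

object SharedHiveContextExample {
  def main(args: Array[String]): Unit = {
    val sc   = new SparkContext("local[2]", "hive-actor-demo") // placeholder master
    val hive = new HiveContext(sc)                             // created exactly once
    val system = ActorSystem("demo")
    system.actorOf(Props(new QueryActor(hive))) ! "SELECT 1"
  }
}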
Sent from my iPhone
> On Sep 17, 2014, at 10:14 PM, Du Li wrote:
>
> Thanks
Archit,
We are using yarn-cluster mode, and calling Spark via the Client class
directly from the servlet server. It works fine.
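The submission itself is roughly along these lines (sketched against the Spark 1.x org.apache.spark.deploy.yarn.Client; the exact signature and visibility of that class vary between releases, and the paths and class names below are placeholders):

import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf
import org.apache.spark.deploy.yarn.{Client, ClientArguments}

object ServletSubmit {
  def submit(): Unit = {
    // Arguments the yarn Client parses; values here are placeholders.
    val args = Array(
      "--jar",   "/path/to/app.jar",
      "--class", "com.example.Main",
      "--arg",   "some-argument")
    val sparkConf  = new SparkConf().setAppName("submitted-from-servlet")
    val hadoopConf = new Configuration()
    // Submits the application to the YARN ResourceManager.
    new Client(new ClientArguments(args, sparkConf), hadoopConf, sparkConf).run()
  }
}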
To establish a communication channel to give further requests:
it should be possible with yarn-client, but not with yarn-cluster. In
yarn-client mode, the Spark driver i
Since you are running in yarn-cluster mode and you are supplying the Spark
assembly jar file, there is no need to install Spark on each node. Is it
possible the two Spark jars have different versions?
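One quick way to check (a small sketch, not from the original thread) is to print the version the driver actually loaded and compare it with whatever is installed on the nodes:

import org.apache.spark.{SparkConf, SparkContext}

// sc.version reports the version of the Spark assembly the driver loaded;
// a mismatch with the jars on the cluster usually shows up right here.
val sc = new SparkContext(new SparkConf().setAppName("version-check"))
println(s"driver is running Spark ${sc.version}")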
Chester
Sent from my iPad
On Jul 16, 2014, at 22:49, cmti95035 wrote:
> Hi,
>
> I need some hel
In yarn-cluster mode, you can either have Spark on all the cluster nodes or
supply the Spark jar yourself. In the 2nd case, you don't need to install Spark
on the cluster at all, as you supply the Spark assembly as well as your app jar
together.
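For the second case, a minimal sketch, assuming Spark 1.x and a placeholder HDFS path: point spark.yarn.jar at the assembly and YARN ships it to the containers, so no per-node install is needed:

import org.apache.spark.SparkConf

// The assembly sits on HDFS (placeholder path); YARN localizes it for the
// containers along with the application jar submitted with the job.
val conf = new SparkConf()
  .setAppName("assembly-from-hdfs")
  .set("spark.yarn.jar", "hdfs:///user/spark/share/lib/spark-assembly.jar")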
I hope this makes it clear.
Chester
Sent from my iPhone