Another possible reason behind this may be that there are two versions of
Akka present on the classpath, interfering with each other (a quick classpath
check is sketched after the list below). This can happen in several ways:
1. Launching the Spark application with Scala brings in Akka from Scala, which
interferes with Spark's Akka
2. Multip
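If classpath interference is suspected, a quick check (an illustrative sketch,
not something from this thread) is to print every akka-actor entry the JVM
sees; more than one version showing up confirms the conflict:

    object AkkaClasspathCheck {
      // Print every classpath entry that looks like an akka-actor jar, so it
      // is easy to spot two different Akka versions being pulled in.
      def main(args: Array[String]): Unit = {
        System.getProperty("java.class.path")
          .split(java.io.File.pathSeparator)
          .filter(_.contains("akka-actor"))
          .foreach(println)
      }
    }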
Alan/TD,
We are facing this problem in a project that is going to production.
Was there any progress on this? Are we able to confirm that this is a
bug/limitation in the current streaming code? Or is there something wrong on
the user side?
Regards,
Rohit
Founder & CEO, Tuplejump, Inc.
This looks like a bug to me. It happens because we serialize the code that
starts the receiver and send it across, and since we have not registered the
Akka library's classes with Kryo, it does not work. I have not tried it
myself, but maybe including something like chill-akka (
https://github.com/xitrum-
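For reference, Spark's Kryo support registers extra classes through a
KryoRegistrator, so a fix along the lines TD describes would look roughly like
the sketch below. The class names and the list of Akka classes are assumptions
for illustration; registration alone may not be enough without serializers
such as those provided by chill-akka.

    import com.esotericsoftware.kryo.Kryo
    import org.apache.spark.SparkConf
    import org.apache.spark.serializer.KryoRegistrator

    // Hypothetical registrator: the Akka classes listed are guesses at what
    // travels with the serialized receiver, not a verified fix.
    class AkkaKryoRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo): Unit = {
        kryo.register(classOf[akka.actor.Props])
        kryo.register(classOf[akka.actor.ActorRef])
      }
    }

    object KryoConfSketch {
      // Wiring the registrator into the conf used by the streaming example.
      val sparkConf = new SparkConf()
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.kryo.registrator", classOf[AkkaKryoRegistrator].getName)
    }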
The stack trace was from running the ActorWordCount sample directly, without a
Spark cluster, so I guess the logs would be from both? I enabled more logging
and got this stack trace:
14/07/25 17:55:26 [INFO] SecurityManager: Changing view acls to: alan
14/07/25 17:55:26 [INFO] SecurityManager: Secu
Is this error on the executor or on the driver? Can you provide a larger
snippet of the logs: the driver logs and, if possible, the executor logs as
well?
TD
On Thu, Jul 24, 2014 at 10:28 PM, Alan Ngai wrote:
> bump. any ideas?
>
> On Jul 24, 2014, at 3:09 AM, Alan Ngai wrote:
>
> it looks like when you con
bump. any ideas?
On Jul 24, 2014, at 3:09 AM, Alan Ngai wrote:
> it looks like when you configure SparkConf to use the KryoSerializer in
> combination with an ActorReceiver, bad things happen. I modified the
> ActorWordCount example program from
>
> val sparkConf = new SparkConf(
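The quoted message is cut off here; as a rough reconstruction, assuming the
only change was switching the example's serializer to Kryo as described above,
the modified conf would have looked something like:

    import org.apache.spark.SparkConf

    object ModifiedActorWordCount {
      // Guessed reconstruction of the truncated line above: the stock example
      // conf with the serializer switched to Kryo.
      val sparkConf = new SparkConf()
        .setAppName("ActorWordCount")
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    }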