Ted, thanks for the reply.
Yeah, there were just three nodes with HDFS and Spark workers colocated.
There was actually one more node with the Spark master (standalone) and the
NameNode.
And I've added one more Spark worker node, which sees the whole HDFS just
fine, but doesn't have a colocated DataNode process.
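For reference, the relevant bits of the standalone setup look roughly like this in spark-defaults.conf (the hostname is a placeholder, and the parallelism value is just the one mentioned below):

```
# spark-defaults.conf (sketch; master-host is a placeholder)
spark.master               spark://master-host:7077
spark.default.parallelism  12
```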
bq. I haven't added one more HDFS node to a hadoop cluster
Does each of the three nodes colocate with an HDFS DataNode?
The absence of a 4th DataNode might have something to do with the partition
allocation.
Can you show your code snippet?
Thanks
On Sat, Mar 5, 2016 at 2:54 PM, Eugene Morozov
wrote:
Hi,
My cluster (standalone deployment), consisting of 3 worker nodes, was in the
middle of computations when I added one more worker node. I can see that the
new worker is registered with the master and that my job actually got one more
executor. I have configured default parallelism as 12 and thus I see tha