Hi all,
I see that the Spark server opens up random ports, especially on the workers.
Is there any way to fix these ports, or give a set of ports for the worker
to choose from?
cheers
--
Niranda
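For reference, a sketch of the spark-defaults.conf entries that pin these otherwise-random ports. Property names are from the Spark 1.x configuration docs; the port numbers here are arbitrary examples, and you should verify the property list against the docs for your version:

```
# Sketch: pinning Spark's network ports instead of letting them be random.
# Port values below are illustrative, not recommendations.
spark.driver.port        7078
spark.blockManager.port  7079
spark.executor.port      7080
# How many consecutive ports to try above the configured one if it is busy:
spark.port.maxRetries    16
```

With these set, the firewall only needs to allow the configured ports plus the retry range, rather than the full ephemeral range.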
Tom - sorry for the delay. If you try OpenJDK (on a smaller heap), do you
see the same problem? Would be great to isolate whether the problem is
related to J9 or not. In either case we should fix it though.
On Fri, Mar 13, 2015 at 9:33 AM, Tom Hubregtsen
wrote:
> I use the spark-submit script an
Hi, all:
Spark 1.3.0, Hadoop 2.2.0
I put the following params in the spark-defaults.conf
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.minExecutors 20
spark.dynamicAllocation.maxExecutors 300
spark.dynamicAllocation.executorIdleTimeout 300
spark.shuffle.service.enabled true
I
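As context for the settings above: on YARN, spark.shuffle.service.enabled also requires the external shuffle service to be registered as a NodeManager auxiliary service. A sketch of the yarn-site.xml fragment, assuming a YARN deployment with the Spark shuffle jar on the NodeManager classpath:

```xml
<!-- Sketch: register Spark's external shuffle service with each NodeManager,
     so executors can be removed by dynamic allocation without losing
     their shuffle files. -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```

NodeManagers need a restart after this change before dynamic allocation will work.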
I'm trying to figure out how I might be able to use Spark with a SOCKS proxy.
That is, my dream is to be able to write code in my IDE then run it without
much trouble on a remote cluster, accessible only via a SOCKS proxy between the
local development machine and the master node of the cluster.
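One common setup for this, sketched below with hypothetical host names: open a local SOCKS tunnel over SSH, then point the driver JVM at it via the standard JVM SOCKS system properties. Whether all of Spark's transport layers honor these properties varies by version, so treat this as a starting point rather than a guaranteed recipe:

```shell
# Sketch (host names are placeholders): a local SOCKS proxy on port 1080
# forwarding through a gateway that can reach the cluster.
ssh -N -D 1080 user@gateway-host

# Hand the proxy to the driver JVM using the standard socksProxy properties.
spark-submit \
  --conf "spark.driver.extraJavaOptions=-DsocksProxyHost=localhost -DsocksProxyPort=1080" \
  your-app.jar
```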
Let me give a quick summary. #4 got the majority vote, with CamelCase but
not UPPERCASE. The following is a minimal implementation that works
for both Scala and Java. In Python, we use string for enums. This
proposal is only for new public APIs. We are not going to change
existing ones. -Xiangrui
~~~
se
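The implementation in the original message was cut off in the archive. As an illustration only, here is a hedged Java sketch of one enum-like pattern consistent with the CamelCase decision described above; the names `Mode`, `Center`, and `Edge` are hypothetical and not from the original proposal:

```java
// Sketch (hypothetical names): an enum-like class with CamelCase values,
// callable identically from Scala and Java, per the decision above.
public class EnumSketch {
    public static final class Mode {
        private final String name;

        // Private constructor: the only instances are the named constants.
        private Mode(String name) { this.name = name; }

        @Override public String toString() { return name; }

        // CamelCase constants, not UPPERCASE.
        public static final Mode Center = new Mode("Center");
        public static final Mode Edge = new Mode("Edge");
    }

    public static void main(String[] args) {
        System.out.println(Mode.Center); // prints "Center"
    }
}
```

In Python, per the summary above, the corresponding values would simply be the strings "Center" and "Edge".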
ping?
On Sun, Mar 15, 2015 at 9:38 PM, David Hall wrote:
> snapshot is pushed. If you verify I'll publish the new artifacts.
>
> On Sun, Mar 15, 2015 at 1:14 AM, Yu Ishikawa wrote:
>
>> David Hall, the breeze creator, told me that it's a bug. So I made a
>> JIRA ticket about this issue.
Hi Alexander,
The stack trace is a little misleading here: all of the time is spent in
MemoryStore, but that's because MemoryStore is unrolling an iterator (note
the iterator.next() call) so that it can be stored in memory. Essentially
all of the computation for the tasks happens as part of that
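The effect described above is not Spark-specific; a minimal Python illustration (not Spark code) of why a profiler attributes a lazy producer's work to whoever materializes the iterator:

```python
def expensive_records(n):
    """Lazy producer: the work happens only when next() is called."""
    for i in range(n):
        yield i * i  # stand-in for per-record computation

def unroll(it):
    # Analogous to MemoryStore iterating a block so it can be cached:
    # this frame is where the producer's work actually executes, so a
    # stack trace sampled here points at the consumer, not the producer.
    return list(it)

block = unroll(expensive_records(5))
print(block)  # [0, 1, 4, 9, 16]
```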