> ...meaningless. If you want to also calculate CPU resources, you
> should choose DominantResourceCalculator.
>
> Thanks
> Jerry
>
> On Sat, Sep 9, 2017 at 6:54 AM, Xiaoye Sun wrote:
>
>> Hi,
>>
>> I am using Spark 1.6.1 and Yarn 2.7.4.
>> I want to submit a S
Hi,
I am using Spark 1.6.1 and Yarn 2.7.4.
I want to submit a Spark application to a Yarn cluster. However, I found
that the number of vcores assigned to a container/executor is always 1,
even if I set spark.executor.cores=2. I also found that the number of tasks an
executor runs concurrently is 2. So,
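Jerry's suggestion maps to a single scheduler property. A minimal sketch of the capacity-scheduler.xml change, assuming the cluster runs YARN's CapacityScheduler (the default in Hadoop 2.7):

    <!-- capacity-scheduler.xml: count vcores as well as memory, so the
         vcore number shown per container reflects spark.executor.cores -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>

With the default DefaultResourceCalculator, YARN allocates by memory only, so every container reports 1 vcore regardless of spark.executor.cores; the executor still runs 2 tasks concurrently because Spark schedules tasks against its own core count.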
you may need to check whether Spark can get the size of your table. If Spark
cannot get the table size, it won't do the broadcast.
On Sat, Jul 1, 2017 at 11:37 PM Paley Louie wrote:
> Thank you for your reply. I have tried to add a broadcast hint to the base
> table, but it just cannot be broadcast.
>
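For reference, a minimal sketch of the two knobs involved (Spark 1.6 API; it assumes an existing sqlContext, and the table names are made up):

    import org.apache.spark.sql.functions.broadcast

    // Explicit hint: wrap the small side in broadcast()
    val small  = sqlContext.table("small_table")   // hypothetical table
    val large  = sqlContext.table("large_table")   // hypothetical table
    val joined = large.join(broadcast(small), "key")

    // Automatic broadcast only fires when Spark knows the table size and it
    // is under this threshold (bytes); for Hive tables the size usually
    // comes from ANALYZE TABLE small_table COMPUTE STATISTICS
    sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold",
      (10 * 1024 * 1024).toString)

The hint marks the plan for broadcasting, but as the reply notes, automatic broadcasting depends on Spark having a size estimate for the table.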
Hi all,
I am running a Spark (v1.6.1) application using the ./bin/spark-submit
script. I made some changes to the HttpBroadcast module. However, the
Spark master program now hangs after the application finishes completely.
The ShutdownHook is supposed to be called at thi
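Hard to diagnose without seeing the HttpBroadcast changes, but the classic reason a JVM "finishes" yet never exits is a leftover non-daemon thread: JVM shutdown hooks only fire after the last non-daemon thread ends (or on System.exit). A self-contained sketch of the failure mode, plain Scala rather than Spark internals:

    object HangDemo {
      def main(args: Array[String]): Unit = {
        val t = new Thread(new Runnable {
          override def run(): Unit = while (true) Thread.sleep(1000)
        })
        t.setDaemon(false) // non-daemon: the JVM cannot exit while it runs
        t.start()
        println("main returned, but the process stays alive")
        // ...and any registered shutdown hook never gets the chance to run
      }
    }

If the modified broadcast module starts a server or timer thread, marking it as a daemon or stopping it in the module's shutdown path would let the hook fire.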
Hi,
I am using Spark 1.6.1, and I am looking at the Event Timeline on "Details
for Stage" Spark UI web page in detail.
I found that the "scheduler delay" on the event timeline is somehow
misrepresented, and I want to confirm whether my understanding is correct.
Here is the detailed description:
In Spark's co
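For concreteness, a rough reconstruction of how the 1.6 UI derives the value (the real computation lives in the StagePage code; parameter names here are illustrative). Scheduler delay is what is left of a task's wall-clock duration after the measured phases are subtracted:

    // Illustrative, after Spark 1.6's StagePage:
    def schedulerDelay(durationMs: Long,
                       executorRunTimeMs: Long,
                       resultSerializationTimeMs: Long,
                       executorDeserializeTimeMs: Long,
                       gettingResultTimeMs: Long): Long =
      math.max(0L, durationMs - executorRunTimeMs -
        resultSerializationTimeMs - executorDeserializeTimeMs -
        gettingResultTimeMs)

That residual definition is why the timeline can "misrepresent" it: result transfer and other unmeasured time get lumped into scheduler delay even when the scheduler itself is idle.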
Hi,
I am running some experiments with OnlineLDAOptimizer in Spark 1.6.1. My
Spark cluster has 30 machines.
However, I found that the scheduler delay at the job/stage "reduce at
LDAOptimizer.scala:452" is extremely large when the LDA model is large. The
delay could be tens of seconds.
Does anyone kn
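A sketch of the setup in question (MLlib API in 1.6; the k and batch values are illustrative). Online LDA aggregates sufficient statistics on the order of a k x vocabSize matrix back to the driver each iteration, so a large model inflates task-result transfer, and as the UI discussion above notes, that unattributed time is reported as scheduler delay on the reduce stage:

    import org.apache.spark.mllib.clustering.{LDA, OnlineLDAOptimizer}

    val lda = new LDA()
      .setK(100)                      // large k * vocabSize = large model
      .setOptimizer(new OnlineLDAOptimizer()
        .setMiniBatchFraction(0.05))  // smaller batches, cheaper iterations
    // val model = lda.run(corpus)    // corpus: RDD[(Long, Vector)]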
Hi,
Currently, I am running Spark with the standalone scheduler on 3 machines
in our cluster: one runs the Spark Master and the other two run Spark
Workers.
We run a machine learning application on this small-scale testbed. A
particular stage in my application is divided i
Hi all,
I am currently making some changes in Spark in my research project.
In my development, after an application has been submitted to the spark
master, the master needs to get the IP addresses of all the slaves used by
that application, so that the spark master is able to talk to the
slave ma
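One sketch from the application side rather than Master internals (the listener API exists since Spark 1.4; the class and variable names here are made up): every executor registration reports its host.

    import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorAdded}

    // Collects the host of each executor as it registers with the app
    class HostCollector extends SparkListener {
      val hosts = scala.collection.mutable.Set.empty[String]
      override def onExecutorAdded(e: SparkListenerExecutorAdded): Unit =
        hosts += e.executorInfo.executorHost
    }
    // sc.addSparkListener(new HostCollector())   // sc: the SparkContext

Inside the Master itself, the registered WorkerInfo objects carry the same host information.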
I don't see how Spark could affect CPU affinity.
>
> regards,
> --Jakob
>
> On Tue, Sep 13, 2016 at 7:56 PM, Xiaoye Sun wrote:
> > Hi,
> >
> > In my experiment, I pin one very important process to a fixed CPU, so the
> > performance of Spark task executi
Hi,
In my experiment, I pin one very important process to a fixed CPU, so the
performance of Spark task execution will be affected if the executors or
the worker use that CPU. I am wondering if it is possible to keep the Spark
executors from using a particular CPU.
I tried to 'taskset -p [cpumask]
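A sketch of the usual workaround, assuming Linux, where child processes inherit the parent's CPU affinity (the CPU list and master URL are illustrative; start-slave.sh is the Spark 1.6 script name):

    # Start the worker pinned away from CPU 0; every executor JVM it forks
    # inherits the 1-15 affinity, leaving CPU 0 to the pinned process.
    taskset -c 1-15 ./sbin/start-slave.sh spark://master:7077

Note that taskset -p on an already-running worker only changes that one process, not executors that were forked earlier, which may be why the attempt above did not stick.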
Hi all,
I am currently making some changes in Spark in my research project.
In my development, after an application has been submitted to the spark
master, I want to get the IP addresses of all the slaves used by that
application, so that the spark master is able to talk to the slave machines
thr
ml
>
> On Thu, Mar 3, 2016 at 6:00 AM, Jeff Zhang wrote:
>
>> The executor may fail to start. You need to check the executor logs; if
>> there's no executor log, then you need to check the node manager log.
>>
>> On Wed, Mar 2, 2016 at 4:26 PM, Xiaoye Sun wrote:
>
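A sketch of pulling those logs on YARN, assuming log aggregation is enabled (the application id below is made up; take the real one from the RM UI or the spark-submit output):

    yarn logs -applicationId application_1457000000000_0001

If aggregation is off, the stdout/stderr files live under the node manager's yarn.nodemanager.log-dirs on each node.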
Hi all,
I am very new to spark and yarn.
I am running a BroadcastTest example application using Spark 1.6.0 and
Hadoop/Yarn 2.7.1 in a 5-node cluster.
I configured my configuration files according to
https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
1. copy
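For reference, a minimal sketch of what that page asks for, beyond copying the shuffle-service jar onto the node managers (property names are from the linked docs; values illustrative):

    # spark-defaults.conf
    spark.dynamicAllocation.enabled  true
    spark.shuffle.service.enabled    true

    # yarn-site.xml: register the external shuffle service with each NM
    #   yarn.nodemanager.aux-services -> mapreduce_shuffle,spark_shuffle
    #   yarn.nodemanager.aux-services.spark_shuffle.class
    #       -> org.apache.spark.network.yarn.YarnShuffleService

A missing or misregistered shuffle service is the usual reason executors fail to start under dynamic allocation, which ties back to the executor-log advice in the reply above.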