hour.
We want to keep around the labels and the sample ids for the next iteration
(N+1) where we want to do a join with the new sample window to inherit the
labels of samples that existed in the previous (N) iteration.
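A minimal sketch of that label-inheritance join, using plain Python dicts rather than Spark DataFrames (all names here are illustrative, not from the original thread):

```python
# Sketch: carry labels from iteration N into iteration N+1 by joining
# the new sample window against the previous window's labels on sample id.

def inherit_labels(prev_labels, new_window):
    """prev_labels: {sample_id: label} kept from iteration N.
    new_window: iterable of sample ids in the N+1 window.
    Returns {sample_id: label} where samples unseen in N map to None."""
    return {sid: prev_labels.get(sid) for sid in new_window}

prev = {"s1": "anomaly", "s2": "normal", "s3": "normal"}
window = ["s2", "s3", "s4"]  # s4 is new and has no label yet
print(inherit_labels(prev, window))  # {'s2': 'normal', 's3': 'normal', 's4': None}
```

In Spark this would correspond to a left join of the new window on sample id, with unmatched rows getting a null label.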
--
Regards,
Ofer Eliassaf
Anyone? Please? Is this getting any priority?

On Tue, Sep 27, 2016 at 3:38 PM, Ofer Eliassaf
wrote:
> Is there any plan to support python spark running in "cluster mode" on a
> standalone deployment?
>
> There is this famous survey mentioning that more than 50% of the
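For context, this is the combination that fails today: submitting a Python application with cluster deploy mode against a standalone master is rejected at submit time (host name and file name below are illustrative):

```shell
# Illustrative: Python app + cluster deploy mode on a standalone master.
# spark-submit rejects this combination at submit time, reporting that
# cluster deploy mode is not supported for python applications on
# standalone clusters.
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  my_app.py
```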
back with the JIRA number once I've got it
>> created - will probably take a while before it lands in a Spark release
>> (since 2.1 has already branched) but better debugging information for
>> Python users is certainly important/useful.
>>
>> On Thu, Nov 24, 20
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>>
>
>
> --
> Cell : 425-233-8271
> Twitter: https://twitter.com/holdenkarau
>
--
Regards,
Ofer Eliassaf
applications will get the total
number of cores until a new application arrives...
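In standalone mode the default scheduler is FIFO: the first application grabs every available core unless it is capped with spark.cores.max, which is what lets later applications get a share. A simplified plain-Python sketch of that behavior (not Spark's actual scheduler code; names are illustrative):

```python
def allocate(total_cores, apps, cap=None):
    """FIFO core allocation as in Spark standalone's default scheduler,
    heavily simplified. Each app takes up to `cap` cores (think
    spark.cores.max); with no cap, the first app takes everything."""
    remaining = total_cores
    allocation = {}
    for app in apps:
        take = remaining if cap is None else min(cap, remaining)
        allocation[app] = take
        remaining -= take
    return allocation

print(allocate(8, ["app1", "app2"]))         # {'app1': 8, 'app2': 0}
print(allocate(8, ["app1", "app2"], cap=4))  # {'app1': 4, 'app2': 4}
```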
--
Regards,
Ofer Eliassaf
> Just want some ideas.
>
> Thank,
> Ben
>
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
--
Regards,
Ofer Eliassaf
> I start a cluster of 3? SPARK_WORKER_INSTANCES is the only
> way I see to start the standalone cluster, and the only place I see to
> define it is in spark-env.sh. The spark-submit options
> SPARK_EXECUTOR_INSTANCES and spark.executor.instances all relate to
> submitting the job.
>
>
>
> Any ideas?
>
> Thanks
>
> Assaf
>
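For reference, the spark-env.sh route being described would look roughly like this on each worker machine (values are illustrative):

```shell
# conf/spark-env.sh (illustrative values)
SPARK_WORKER_INSTANCES=3   # number of worker processes started per machine
SPARK_WORKER_CORES=2       # cores each worker may use
SPARK_WORKER_MEMORY=2g     # memory each worker may use
```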
--
Regards,
Ofer Eliassaf
availability in python spark.
Currently only the Yarn deployment supports it. Bringing in the huge Yarn
installation just for this feature is not fun at all.
Does someone have a time estimate for this?
--
Regards,
Ofer Eliassaf