Hi All,
does the Spark label expression really support "&&", "||", or even "!" for label-based
scheduling?
I tried them, but they do NOT work.
Best Regards,
Allen
You can use '&' for 'and', '|' for 'or', and '~' for 'not' when building DataFrame
boolean expressions.
example:
>>> df = sqlContext.range(10)
>>> df.where( (df.id==1) | ~(df.id==1))
DataFrame[id: bigint]
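For completeness, '&' works the same way for 'and'; a quick sketch against the same df
(which holds ids 0 through 9):
>>> df.where((df.id > 2) & (df.id < 5)).count()
2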
On Wed, Dec 16, 2015 at 4:32 PM, Allen Zhang wrote:
plus 1
On 2015-12-17 09:39:39, "Joseph Bradley" wrote:
+1
On Wed, Dec 16, 2015 at 5:26 PM, Reynold Xin wrote:
+1
On Wed, Dec 16, 2015 at 5:24 PM, Mark Hamstra wrote:
+1
On Wed, Dec 16, 2015 at 1:32 PM, Michael Armbrust
wrote:
Please vote on releasing the following candidate as Apache Spark ...
More detailed commands:
2. yarn rmadmin -replaceLabelsOnNode spark-dev:54321,foo;
yarn rmadmin -replaceLabelsOnNode sut-1:54321,bar;
yarn rmadmin -replaceLabelsOnNode sut-2:54321,bye;
yarn rmadmin -replaceLabelsOnNode sut-3:54321,foo;
At 2015-12-17 10:31:20, "Allen Zhang" wrote:
^_^ , Thanks Ted.
At 2015-12-18 03:38:46, "Ted Yu" wrote:
I consulted with a YARN developer; the notation presented in Allen's email is not
supported yet.
Only a single node label can be specified.
Cheers
On Wed, Dec 16, 2015 at 6:40 PM, Allen Zhang wrote:
plus dev
On 2015-12-22 15:15:59, "Allen Zhang" wrote:
Hi Reynold,
Is there any new API support for GPU computing in the new 2.0 version?
-Allen
On 2015-12-22 14:12:50, "Reynold Xin" wrote:
FYI I updated the master branch's Spark version to 2.0.0-SNAPSHOT.
On Tue
please start a new thread.
On Mon, Dec 21, 2015 at 11:18 PM, Allen Zhang wrote:
+1 (non-binding)
I have just built a new binary tarball and manually tested am.nodeLabelExpression and
executor.nodeLabelExpression; the results are as expected.
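For reference, a minimal sketch of such a manual test in PySpark, assuming a YARN cluster
whose nodes were labeled "foo" as in the earlier yarn rmadmin commands (the full config
names are spark.yarn.am.nodeLabelExpression and spark.yarn.executor.nodeLabelExpression;
app name and master are placeholders):
>>> from pyspark import SparkConf, SparkContext
>>> conf = (SparkConf()
...         .setAppName("node-label-test")
...         .setMaster("yarn-client")  # assumes a reachable YARN cluster
...         .set("spark.yarn.am.nodeLabelExpression", "foo")         # AM only on nodes labeled foo
...         .set("spark.yarn.executor.nodeLabelExpression", "foo"))  # executors only on labeled nodes
>>> sc = SparkContext(conf=conf)
>>> sc.parallelize(range(100)).sum()  # any small job, just to confirm containers are allocated
4950
If the label expressions are honored, the containers should land only on the labeled nodes.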
At 2015-12-23 21:44:08, "Iulian Dragoș" wrote:
+1 (non-binding)
Tested Mesos deployments (client and cluster-mode, fine-grained and
coarse-grained).
Try the -pl option in the mvn command, and append -am (also build the modules it depends on) or -amd (also build the modules that depend on it) for more options.
for instance:
mvn clean install -pl :spark-mllib_2.10 -DskipTests
At 2015-12-25 17:57:41, "salexln" wrote:
>One more question:
>Is there a way to build only MLlib from the command line?
Formatting issue, I think; go ahead.
At 2015-12-30 13:36:05, "Ted Yu" wrote:
Hi,
I noticed that there are a lot of checkstyle warnings in the following form:
To my knowledge, we use two spaces per indentation level. Not sure why all of a sudden
we have so many IndentationCheck warnings:
grep 'hil
Hi Kazuaki,
I am looking at http://kiszk.github.io/spark-gpu/ ; can you point me to the kick-start
scripts so I can give it a go?
To be more specific, what does *"off-loading"* mean? Does it aim to reduce the copy
overhead between CPU and GPU?
I am a newbie to GPUs; how can I specify how
+1,
we are currently using Python 2.7.2 in our production environment.
On 2016-01-05 18:11:45, "Meethu Mathew" wrote:
+1
We use Python 2.7
Regards,
Meethu Mathew
On Tue, Jan 5, 2016 at 12:47 PM, Reynold Xin wrote:
Does anybody here care about us dropping support for Python 2.6 in Spark
Hi Kazuaki,
JCuda is actually a wrapper over **pure** CUDA, and the 3.15x performance boost for
logistic regression shown on your wiki page seems slower than BIDMat-cublas or pure CUDA.
Could you elaborate on why you chose JCuda rather than JNI to call CUDA directly?
Regards,
Allen Zhang
Why not use Maven?
At 2016-02-25 21:55:49, "lgieron" wrote:
>The Spark projects generated by sbt eclipse plugin have incorrect dependent
>projects (as visible on Properties -> Java Build Path -> Projects tab). All
>dependent projects are missing the "_2.11" suffix (for example, it's
>"spark-
dev/change-scala-version.sh 2.10 may help you.
At 2016-02-25 21:55:49, "lgieron" wrote:
well, I am using IDEA to import the code base.
At 2016-02-25 22:13:11, "Łukasz Gieroń" wrote:
I've just checked, and "mvn eclipse:eclipse" generates incorrect projects as
well.
On Thu, Feb 25, 2016 at 3:04 PM, Allen Zhang wrote:
Why not use Maven?