Works perfectly! Thanks, Herman.
On Tue, 25 Aug 2020 at 12:03, Herman van Hovell <
her...@databricks.com> wrote:
> Hi Robert,
>
> Your Spark 3.0 code is missing the encoder that converts the Row to an
> InternalRow. It should look like this:
>
val enc: ExpressionEncoder[C] =
  Encoders.product[C].asInstanceOf[ExpressionEncoder[C]].resolveAndBind()
rowToCaseClass[C](r)
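For reference, a fuller self-contained version of that pattern for Spark 3.0 (a sketch: the case class C and the helper names deserializer/toInternal are illustrative, not from the original mail):

import org.apache.spark.sql.{Encoders, Row}
import org.apache.spark.sql.catalyst.{CatalystTypeConverters, InternalRow}
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

case class C(a: Int, b: String)

// Resolve and bind the encoder once, then reuse it for every Row.
val enc: ExpressionEncoder[C] =
  Encoders.product[C].asInstanceOf[ExpressionEncoder[C]].resolveAndBind()

// In Spark 3.0 decoding goes through a deserializer that expects an
// InternalRow, so an external Row must be converted first.
val deserializer = enc.createDeserializer()
val toInternal = CatalystTypeConverters.createToCatalystConverter(enc.schema)

def rowToCaseClass(r: Row): C =
  deserializer(toInternal(r).asInstanceOf[InternalRow])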
Cheers, Robert
Caused by: java.lang.RuntimeException: Error while decoding:
java.lang.ClassCastException:
org.apache.spark.sql.catalyst.expressions.GenericRow cannot be c
, I feel the only
realistic option is to test and support only LTS JDKs such as JDK 11
and future LTS releases. I would like to have a discussion on this in
the Spark community.
Thanks,
DB Tsai | Siri Open Source Technologies [not a contribution] |
Apple, Inc
--
Robert Stupp
@snazy
My colleagues and I built one for running Spark builds on CircleCI. The
images are at
https://hub.docker.com/r/palantirtechnologies/circle-spark-python/
(circle-spark-r if you want to build SparkR). Dockerfiles for those images
can be found at
https://github.com/palantir/spark/tree/master/dev/dock
-1 since https://issues.apache.org/jira/browse/SPARK-17213 is a correctness
regression from the 2.0 release. The commit that caused it is
776d183c82b424ef7c3cae30537d8afe9b9eee83.
Robert
From: Reynold Xin
Date: Tuesday, November 29, 2016 at 1:25 AM
To: "dev@spark.apache.org"
I am new to Spark SQL development. I have a JSON file with nested arrays.
I can extract/query these arrays. However, when I add an ORDER BY clause, I
get exceptions. Here are the steps:
1) val a = sparkSession.sql("SELECT Tables.TableName, Tables.TableType,
Tables.TableExecOrder, Tables.Columns
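The failing pattern can be reproduced with a small self-contained example (the JSON layout and column names below are assumptions for illustration, not the original file):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Assumed layout: each record carries a Tables struct holding arrays of
// per-column metadata.
val df = spark.read.json("tables.json")
df.createOrReplaceTempView("t")

// Flattening the nested array with explode before sorting avoids the
// analysis exceptions that ordering by a raw array element can trigger.
spark.sql(
  """SELECT Tables.TableName, col AS ColumnName
    |FROM t LATERAL VIEW explode(Tables.Columns) cols AS col
    |ORDER BY Tables.TableName""".stripMargin).show()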
SPARK-16991 (https://github.com/apache/spark/pull/14661) would be nice
Robert
From: Reynold Xin
Date: Monday, August 22, 2016 at 8:14 PM
To: "dev@spark.apache.org"
Subject: critical bugs to be fixed in Spark 2.0.1?
We should work on a 2.0.1 release soon, since we have fo
"single", "double")
df1.where(df1.col("single").cast("string").equalTo("1"))
What's the expected behaviour of type checking an unresolved expression?
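A minimal runnable reproduction (the data values are assumed for illustration):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df1 = Seq((1, 2.0), (3, 4.0)).toDF("single", "double")

// The filter below builds an unresolved expression tree; type checking
// only happens once the plan is analyzed.
df1.where(df1.col("single").cast("string").equalTo("1")).show()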
- Robert
nd it seems like it would
be a good idea to have a uniform testing environment between the PR
and the Spark package.
best,
Robert Dodier
ject which
was doing something similar.
Thanks in advance for any light you can shed on this problem.
Robert Dodier
very large range of outputs -- something like
-6*10^6 to -400, with a mean of about -3. If you look into it, let us
know what you find, I would be interested to hear about it.
best,
Robert Dodier
would want to print something else, all that matters is to
give some context so that the user can find the problem more quickly.
Hope this helps in some way.
Robert Dodier
PS.
diff --git a/mllib/src/main/scala/org/apache/spark/mllib/util/MLUtils.scala
b/mllib/src/main/scala/org/apache/spark/
Nicholas,
FWIW the --ip option seems to have been deprecated in commit d90d2af1,
but that was a pretty big commit in which lots of other stuff changed,
and there isn't any hint in the log message as to the reason for
changing --ip.
best,
Robert D
ged to disallow --ip without propagating that backwards into any scripts
that call it?
Hope this helps in some way.
Robert Dodier
ly simple) but it seems to do a lot more work than
just running that one test, and I still get the out-of-memory errors.
Aside from getting a machine with more memory (which is not out of the
question), are there any strategies for coping with out-of-memory
errors in Maven and/or sbt?
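One generic mitigation (a sketch, not specific to the Spark build; the heap sizes are illustrative) is to fork the test JVMs with a larger heap via build.sbt:

// build.sbt: run tests in a forked JVM so javaOptions take effect,
// and give that JVM more memory.
fork in Test := true
javaOptions in Test ++= Seq("-Xmx4g", "-XX:ReservedCodeCacheSize=512m")

The Maven-side equivalent is raising the JVM memory through MAVEN_OPTS before invoking mvn.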
Thanks in adva
a very high level, the API for this framework would specify
methods to compute conditional distributions, marginalizing
as necessary via MCMC. Other operations could include
computing the expected value of a variable or function.
All this is very reminiscent of BUGS, of course.
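A rough sketch of what such an API surface might look like (all names here are illustrative, not an actual proposal):

// Illustrative shape only: a variable in a probabilistic model.
trait RandomVariable[A]

trait ProbabilisticModel {
  // Draw MCMC samples from the conditional distribution of `target`
  // given `evidence`, marginalizing over all other variables.
  def conditional[A](target: RandomVariable[A],
                     evidence: Map[RandomVariable[_], Any],
                     numSamples: Int): Seq[A]

  // Estimate the expected value of a function of `target` from samples.
  def expectation[A](target: RandomVariable[A],
                     f: A => Double,
                     numSamples: Int): Double
}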
seful contribution. Thanks for your interest
and I look forward to your comments.
Robert Dodier
+1
I verified that the REPL jars published work fine with the Spark Kernel
project (can build/test against them).
Signed,
Chip Senkbeil
From: Krishna Sankar
To: Sean Owen
Cc: Patrick Wendell , "dev@spark.apache.org"
Date: 01/28/2015 02:52 PM
Subject: Re: [VOT
ns to connect to the
Spark Kernel without needing to implement the ZeroMQ protocol.
Signed,
Chip Senkbeil
From: Sam Bessalah
To: Robert C Senkbeil/Austin/IBM@IBMUS
Date: 12/12/2014 04:20 PM
Subject: Re: IBM open-sources Spark Kernel
Wow. Thanks. Can't wait to try this out.
We are happy to announce a developer preview of the Spark Kernel which
enables remote applications to dynamically interact with Spark. You can
think of the Spark Kernel as a remote Spark Shell that uses the IPython
notebook interface to provide a common entrypoint for any application. The
Spark
Hi there,
I wanted to ask whether anyone has successfully used Jython with the
pyspark library. I wasn't sure if the C extension support was needed for
pyspark itself or was just a bonus of using CPython.
There was a claim (
http://apache-spark-developers-list.1001551.n3.nabble.com/PySpar
I've created a new pull request, which can be found at
https://github.com/apache/spark/pull/1929. Since Spark is using Scala
2.10.3 and there is a known issue with Scala 2.10.x not supporting the :cp
command (https://issues.scala-lang.org/browse/SI-6502), the Spark shell
does not have the ability
Please remove me from the mailing list.
-----Original Message-----
From: CodingCat [mailto:g...@git.apache.org]
Sent: March 7, 2014, 7:38
To: dev@spark.apache.org
Subject: [GitHub] spark pull request: Fix #SPARK-1149 Bad partitioners can cause
Spa...
Github user CodingCat commented on a diff in the pull request:
ht