Yes, as[T] is lazy, as any transformation is, but in terms of data
processing, not schema. You seem to imply that as[T] is lazy in terms of
the schema, and I do not know of any other transformation that behaves
like this.
Your proposed solution works because the map transformation returns a
Dataset whose schema is derived from the encoder for T.
I think it's simply because as[T] is lazy. You will see the right schema if
you do `df.as[T].map(identity)`.
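A minimal sketch of the behaviour described above, assuming a local SparkSession and a hypothetical case class `Name` (both introduced here only for illustration; running it requires spark-sql on the classpath):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical type for illustration; only the `name` column is part of it.
case class Name(name: String)

object AsLazinessDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("as-laziness-demo")
      .getOrCreate()
    import spark.implicits._

    val df = Seq(("Alice", 1), ("Bob", 2)).toDF("name", "id")

    // as[Name] alone does not change the schema: the extra `id` column remains.
    df.as[Name].printSchema()

    // map(identity) forces deserialization into Name, so the resulting
    // schema is derived only from Name.
    df.as[Name].map(identity).printSchema()

    spark.stop()
  }
}
```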
On Tue, Jan 7, 2020 at 4:42 PM Enrico Minack wrote:
> Hi Devs,
>
> I'd like to propose a stricter version of as[T]. Given the interface def
> as[T](): Dataset[T], ...
Hi Devs,
I'd like to propose a stricter version of as[T]. Given the interface def
as[T](): Dataset[T], it is counter-intuitive that the schema of the
returned Dataset[T] is not agnostic to the schema of the originating
Dataset. The schema should always be derived only from T.
I am prop
Why doesn't visitCreateFileFormat support Hive STORED BY? It only supports
STORED AS.
I hit this when upgrading from Spark 1.6.2 to Spark 2.0.1.
So what I want to ask is: is there a plan to support Hive STORED BY, or will
it never be supported?
configureOutputJobProperties is quite important; is there any other method to
Yeah, WRT Options, maybe I'm thinking about it incorrectly or misrepresenting
it as relating to Encoders or to a pure Option encoder. The semantics I'm
thinking of are around the deserialization of a type T and lifting it into
Option[T] via the Option.apply function, which converts null to None.
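The Option.apply semantics described above can be illustrated in plain Scala (demo object name is made up):

```scala
// Option.apply converts null to None and wraps non-null values in Some.
object OptionApplyDemo {
  def main(args: Array[String]): Unit = {
    val absent: String = null
    assert(Option(absent) == None)           // null is lifted to None
    assert(Option("value") == Some("value")) // non-null is wrapped in Some
    println("Option.apply lifts null to None")
  }
}
```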
> ...to figure out how to write a proper Option[T]
> encoder, but it's not straightforward without deep spark-sql source
> knowledge. Is it unexpected for users to need to write their own Encoders
> with the availability of ExpressionEncoder.apply and the bean encoder
> method?
>
> As additional color for t
Are there any user or dev guides for writing Encoders? I'm trying to read
through the source code to figure out how to write a proper Option[T]
encoder, but it's not straightforward without deep spark-sql source
knowledge. Is it unexpected for users to need to write their own Encoders
with the availability of ExpressionEncoder.apply and the bean encoder
method?
Not that rdd.dependencies(n).rdd.asInstanceOf[RDD[T]] is terrible, but
rdd.parent[T](n) better captures the intent.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/104
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/104#issuecomment-37109170
Ok I merged this.
Not sure about Maven off the top of my head. All these build plugins are
pretty arcane to me.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/104#issuecomment-37104948
LGTM @rxin is there an equivalent thing to this in maven or no? Seems to me
like maybe this is sbt only.
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/104#issuecomment-37096375
Very cool, finally we have this!
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/104#issuecomment-37095430
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13070/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/104#issuecomment-37095429
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/104#issuecomment-37094470
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/104#issuecomment-37094472
Merged build started.
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/104
Update junitxml plugin to the latest version to avoid recompilation in
every SBT command.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/rxin/spar