res17: String =
{"type":"struct","fields":[{"name":"value","type":"integer","nullable":false,"metadata":{}}]}

But obviously Dataset[Row] is not internally Dataset[Row(value: Row)].
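(For context, a minimal spark-shell sketch that would produce a schema
string like the one above; the session and val names here are illustrative,
not from the original thread:)

    import spark.implicits._        // assumes a Spark SparkSession named `spark`

    val ds = Seq(1, 2, 3).toDS()    // Dataset[Int]; single column named "value"
    val df = ds.toDF()              // Dataset[Row] with the same schema
    df.schema.json                  // one non-nullable integer field "value", as above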

From: Reynold Xin [mailto:r...@databricks.com]
Sent: Friday, February 26, 2016 3:55 PM
To: Sun, Rui
Cc: Koert Kuipers; dev@spark.apache.org
Subject: Re: [discuss] DataFrame vs Dataset in Spark 2.0

The join and joinWith are just two different join semantics, and is not
about Dataset vs DataFrame.
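(A sketch of the two semantics, using hypothetical case classes: join
flattens both sides into one Row schema, while joinWith keeps each side
intact as a typed pair:)

    import org.apache.spark.sql.{Dataset, Row}
    import spark.implicits._        // assumes a SparkSession named `spark`

    case class Person(name: String, age: Int)
    case class Score(name: String, score: Double)

    val people = Seq(Person("a", 30)).toDS()
    val scores = Seq(Score("a", 9.5)).toDS()

    // Relational join: columns of both sides merged into a single Row.
    val flat: Dataset[Row] = people.join(scores, "name")

    // Typed join: each side preserved, result is a Dataset of pairs.
    val typed: Dataset[(Person, Score)] =
      people.joinWith(scores, people("name") === scores("name"))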

From: Reynold Xin [mailto:r...@databricks.com]
Sent: Friday, February 26, 2016 8:52 AM
To: Koert Kuipers
Cc: dev@spark.apache.org
Subject: Re: [discuss] DataFrame vs Dataset in Spark 2.0

Yes - and that's why source compatibility is broken.
Note that it is not just a "convenience" thing. Conceptually DataFrame is a
Dataset[Row], and for some developers it is more natural to think about
"DataFrame" rather than "Dataset[Row]".
If we were in C++, DataFrame would've been a type alias for Dataset[Row].
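(Concretely, the type-alias option amounts to something like the following
in the org.apache.spark.sql package object -- a sketch of the idea, not the
exact source:)

    package org.apache.spark

    package object sql {
      // DataFrame is purely a Scala-level alias for Dataset[Row].
      // The compiler resolves it away, so Java code only sees Dataset<Row>.
      type DataFrame = Dataset[Row]
    }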
Koert Kuipers wrote:
> since a type alias is purely a convenience thing for the scala compiler,
> does option 1 mean that the concept of DataFrame ceases to exist from a
> java perspective, and they will have to refer to Dataset?
On Thu, Feb 25, 2016 at 6:23 PM, Reynold Xin wrote:
> When we first introduced Dataset in 1.6 [...]

[...] DataFrame, as a way to isolate the 1000+ extra
lines to a Java compatibility layer/class?

--
From: Reynold Xin
To: "dev@spark.apache.org"
Sent: Thursday, February 25, 2016 4:23 PM
Subject: [discuss] DataFrame vs Dataset in Spark 2.0

When we first introduced Dataset in 1.6 as an experimental API, we wanted to
merge Dataset/DataFrame but couldn't because we didn't want to break the
pre-existing DataFrame API (e.g. map function should return Dataset, rather
than RDD). In Spark 2.0, one of the main API changes is to merge DataFrame
and Dataset.
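(To make the compatibility point concrete: in 1.x, map on a DataFrame
dropped you into the RDD world, while after the merge DataFrame is
Dataset[Row] and map stays typed. A sketch, assuming a Spark 2.x session
named `spark`:)

    import spark.implicits._        // brings in the Encoder[Int] that map needs

    val df = Seq(("a", 1), ("b", 2)).toDF("key", "value")

    // In 1.x this returned RDD[Int]; after the merge it returns Dataset[Int].
    val lengths = df.map(row => row.getString(0).length)
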
I vote for Option 1.
1) Since 2.0 is a major release, some API changes are expected.
2) It helps long-term code base maintenance, with short-term pain on the
Java side.
3) Not quite sure how much of the code base out there uses the Java
DataFrame APIs.
On Thu, Feb 25, 2016 at 3:23 PM, Reynold Xin wrote:
> When we first introduced Dataset in 1.6 as an experimental API, we wanted
> to merge Dataset/DataFrame but couldn't because we didn't want to break the
> pre-existing DataFrame API (e.g. map function should return Dataset, rather
> than RDD). In Spark 2.0, one of the main API changes is to merge DataFrame
> and Dataset.