Hi devs,
I found some descriptions that have not been updated in
https://spark.apache.org/developer-tools.html#profiling
I think `SPARK_JAVA_OPTS` is not used anymore.
I'm not sure who updates the website, though; this is just a heads-up.
Thanks,
--
---
Takeshi Yamamuro
True. Can you make a pull request against github.com/apache/spark-website? I
think users now probably have to add this to
spark.{executor|driver}.extraJavaOptions instead.
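For example, the YourKit agent flags that the page currently attaches via
SPARK_JAVA_OPTS would now be passed roughly like this (a sketch only; the
agent path, class, and jar names are placeholders):

  ./bin/spark-submit \
    --conf "spark.driver.extraJavaOptions=-agentpath:/path/to/libyjpagent.so=sampling" \
    --conf "spark.executor.extraJavaOptions=-agentpath:/path/to/libyjpagent.so=sampling" \
    --class com.example.MyApp myapp.jar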
On Wed, Sep 6, 2017 at 8:08 AM Takeshi Yamamuro
wrote:
> hi, devs,
>
> I found some descriptions were not updated in
> https://spark.apache.or
OK, I will. Thanks!
On Wed, Sep 6, 2017 at 4:19 PM, Sean Owen wrote:
> True, can you make a pull request vs github.com/apache/spark-website? I
> think users probably have to add this to spark.{executor|driver}.
> extraJavaOptions
>
> On Wed, Sep 6, 2017 at 8:08 AM Takeshi Yamamuro
> wrote:
>
>>
Hi all,
Thank you for voting and for the suggestions.
As Wenchen mentioned, and as we're also discussing on JIRA, we need to discuss
the size hint for the 0-parameter UDF.
But since I believe we have reached a consensus on the basic APIs except for
the size hint, I'd like to submit a PR based on the current proposal and
Thanks, I can do that. We're then in the funny position of having one
deprecated Kafka API, and one experimental one.
Is the Kafka 0.10 integration as stable as it is going to be, and worth
marking as such for 2.3.0?
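For context, the 0.10 integration in question is the DStream API in
spark-streaming-kafka-0-10; a minimal sketch, with the topic name, group id,
and bootstrap servers as placeholders:

  import org.apache.kafka.common.serialization.StringDeserializer
  import org.apache.spark.streaming.kafka010._

  val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> "localhost:9092",
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "example-group")
  // ssc is an existing StreamingContext
  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    LocationStrategies.PreferConsistent,
    ConsumerStrategies.Subscribe[String, String](Seq("topic1"), kafkaParams))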
On Tue, Sep 5, 2017 at 4:12 PM Cody Koeninger wrote:
> +1 to going ahead and g
I kind of doubt the Kafka 0.10 integration is going to change much at
all before the upgrade to 0.11.
On Wed, Sep 6, 2017 at 8:57 AM, Sean Owen wrote:
> Thanks, I can do that. We're then in the funny position of having one
> deprecated Kafka API, and one experimental one.
>
> Is the Kafka 0.10 int
Hi all,
I've submitted a PR for a basic data source v2, i.e., it only contains
features we already have in data source v1. We can discuss API details like
naming in that PR: https://github.com/apache/spark/pull/19136
In the meantime, let's keep this vote open and collect more feedback.
Thanks
I'm all for keeping this moving and not getting too far into the details
(like naming), but I think the substantial details should be clarified
first since they are in the proposal that's being voted on.
I would prefer moving the write side to a separate SPIP, too, since there
isn't much detail in
Hi all,
In the previous discussion, we decided to split the read and write path of
data source v2 into 2 SPIPs, and I'm sending this email to call a vote for
Data Source V2 read path only.
The full document of the Data Source API V2 is:
https://docs.google.com/document/d/1n_vUVbF4KD3gxTmkNEon5qdQ
Hi Ryan,
Yeah, I agree with you that we should discuss some substantial details during
the vote, and I've addressed your comments about the schema inference API in
my new PR; please take a look.
I've also called a new vote for the read path; please vote there. Thanks!
On Thu, Sep 7, 2017 at 7:55 AM, Ryan
adding my own +1 (binding)
On Thu, Sep 7, 2017 at 10:29 AM, Wenchen Fan wrote:
> Hi all,
>
> In the previous discussion, we decided to split the read and write path of
> data source v2 into 2 SPIPs, and I'm sending this email to call a vote for
> Data Source V2 read path only.
>
> The full docum
+1
Xiao
2017-09-06 19:37 GMT-07:00 Wenchen Fan :
> adding my own +1 (binding)
>
> On Thu, Sep 7, 2017 at 10:29 AM, Wenchen Fan wrote:
>
>> Hi all,
>>
>> In the previous discussion, we decided to split the read and write path
>> of data source v2 into 2 SPIPs, and I'm sending this email to call
+1
On Wed, Sep 6, 2017 at 8:53 PM, Xiao Li wrote:
> +1
>
> Xiao
>
> 2017-09-06 19:37 GMT-07:00 Wenchen Fan :
>
>> adding my own +1 (binding)
>>
>> On Thu, Sep 7, 2017 at 10:29 AM, Wenchen Fan wrote:
>>
>>> Hi all,
>>>
>>> In the previous discussion, we decided to split the read and write path
>
Hi,
when I use spark-shell to get the logical plan of a SQL query, an error
occurs:

scala> spark.sessionState
<console>:30: error: lazy value sessionState in class SparkSession cannot
be accessed in org.apache.spark.sql.SparkSession
       spark.sessionState
             ^

But if I use spark-submit to access the
+1 (non-binding)
> On Sep 6, 2017, at 7:29 PM, Wenchen Fan wrote:
>
> Hi all,
>
> In the previous discussion, we decided to split the read and write path of
> data source v2 into 2 SPIPs, and I'm sending this email to call a vote for
> Data Source V2 read path only.
>
> The full document of
Hi,
may I know which version of Spark you are using? In 2.2 I tried the
query below in spark-shell for viewing the logical plan, and it works
fine:

spark.sql("explain extended select * from table1")

You can use the above query to see the logical plan.
Thanks,
Sujith
On Thu, 7 Sep 2017 at
I use spark-2.1.1.
2017-09-07 14:00 GMT+08:00 sujith chacko :
> Hi,
> may I know which version of spark you are using, in 2.2 I tried with
> below query in spark-shell for viewing the logical plan and it's working
> fine
>
> spark.sql("explain extended select * from table1")
>
> The above qu
If your intention is just to view the logical plan in the spark-shell, then I
think you can follow the query I mentioned in the previous mail. In
Spark 2.1.0, sessionState is a private member which you cannot access.
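Another option, assuming a table named table1 exists: Dataset.queryExecution
is public, so you can reach the plans without touching sessionState:

  val df = spark.table("table1")
  df.queryExecution.logical        // parsed logical plan
  df.queryExecution.analyzed       // analyzed logical plan
  df.queryExecution.optimizedPlan  // optimized logical plan

or simply df.explain(true).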
Thanks.
On Thu, 7 Sep 2017 at 11:39 AM, ChenJun Zou wrote:
> spark-2.1.1 I use
thanks,
my mistake
2017-09-07 14:21 GMT+08:00 sujith chacko :
> If your intention is to just view the logical plan in spark shell then I
> think you can follow the query which I mentioned in previous mail. In
> spark 2.1.0 sessionState is a private member which you cannot access.
>
> Thanks.
>
I examined the code and found the lazy val was added recently, in 2.2.0.
2017-09-07 14:34 GMT+08:00 ChenJun Zou :
> thanks,
> my mistake
>
> 2017-09-07 14:21 GMT+08:00 sujith chacko :
>
>> If your intention is to just view the logical plan in spark shell then I
>> think you can follow the query whic