> So far, I have heard some ideas and attitudes that I consider to be
overly motivated by fear of unlikely occurrences.
> And I've heard some statements that disregard widely accepted principles
of inclusiveness at the Apache Software Foundation.
> But I suspect that there's more to the attitude of no
Hi,
I am using Spark SQL 2.3.3 to read a Hive table that is partitioned by
day, hour, platform, request_status, and is_sampled. The underlying data is
in Parquet format on HDFS.
Here is the SQL query to read just *one partition*.
```
spark.sql("""
SELECT rtb_platform_id, SUM(e_cpm)
FROM raw_logs.f
```
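For context, reading a single partition generally means pinning every partition column in the WHERE clause so Spark can prune the scan to one directory instead of the whole table. Below is a minimal sketch of what that could look like; the table name, partition values, and the GROUP BY are assumptions, since the original query is cut off above.
```
from pyspark.sql import SparkSession

# Minimal sketch, not the original query: the table name and partition
# values are placeholders. Filtering on all five partition columns lets
# Spark prune the scan to a single partition directory on HDFS.
spark = (SparkSession.builder
         .appName("single-partition-read")
         .enableHiveSupport()  # required so spark.sql() can see Hive tables
         .getOrCreate())

df = spark.sql("""
    SELECT rtb_platform_id, SUM(e_cpm) AS total_e_cpm
    FROM raw_logs.some_table            -- placeholder table name
    WHERE day = '2019-08-07'            -- placeholder partition values
      AND hour = '10'
      AND platform = 'web'
      AND request_status = 'ok'
      AND is_sampled = 'false'
    GROUP BY rtb_platform_id
""")
df.show()
```
Running df.explain() on such a query should show the partition filters being pushed into the file scan, which is how one can confirm that only the one partition is read.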
All right, we could support both Python 2 and Python 3 for Spark 3.0.
On Wed, Aug 7, 2019 at 6:10 PM Hyukjin Kwon wrote:
> We didn't drop Python 2 yet, although it's deprecated, so I think it
> should support both Python 2 and Python 3 for now.
>
> On Wed, Aug 7, 2019 at 6:54 PM, Weichen Xu
We didn't drop Python 2 yet, although it's deprecated, so I think it
should support both Python 2 and Python 3 for now.
On Wed, Aug 7, 2019 at 6:54 PM, Weichen Xu wrote:
> Hi all,
>
> I would like to discuss Python compatibility for the dev scripts. Since we
> already decided to deprecate Python
Hi all,
I would like to discuss Python compatibility for the dev scripts. Since we
already decided to deprecate Python 2 in Spark 3.0, for the development
scripts under dev/ we have two choices:
1) Migrate from Python 2 to Python 3
2) Support both Python 2 and Python 3
I lean toward option (2), which is more
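To illustrate option (2), a dev script can stay runnable on both interpreters with a few standard idioms. This is a generic sketch, not code taken from the actual dev/ scripts:
```
#!/usr/bin/env python
# Generic idioms that run unchanged on Python 2.7 and Python 3.x.
# Illustrative sketch only; not from the actual dev/ scripts.
from __future__ import print_function  # print() behaves the same on both

import subprocess
import sys

def run(cmd):
    # check_output returns bytes on Python 3 and str on Python 2,
    # so normalize to text before doing any string handling.
    out = subprocess.check_output(cmd)
    if isinstance(out, bytes):
        out = out.decode("utf-8")
    return out.strip()

if __name__ == "__main__":
    print("Running under Python %d.%d" % sys.version_info[:2])
    print(run(["echo", "hello"]))
```
Option (1) would instead drop the __future__ import and use Python 3 syntax directly, at the cost of breaking anyone still invoking the scripts with python2.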
On Tue, Aug 6, 2019 at 7:57 PM Sean Owen wrote:
> On Tue, Aug 6, 2019 at 11:45 AM Myrle Krantz wrote:
> > I had understood your position to be that you would be willing to make
> at least some non-coding contributors committers, but that your "line" is
> somewhat different from my own. My re
Do you use the HiveContext in Spark? Do you configure the same options there?
Can you share some code?
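For reference, in Spark 2.x the Spark 1.x HiveContext is superseded by a SparkSession built with Hive support enabled. A minimal sketch follows; the warehouse path and metastore URI are placeholder assumptions:
```
from pyspark.sql import SparkSession

# enableHiveSupport() connects the session to the Hive metastore,
# replacing the old HiveContext. The config values are placeholders.
spark = (SparkSession.builder
         .appName("hive-read")
         .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
         .config("hive.metastore.uris", "thrift://metastore-host:9083")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("SHOW DATABASES").show()
```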
> On 07.08.2019 at 08:50, Rishikesh Gawade wrote:
>
> Hi.
> I am using Spark 2.3.2 and Hive 3.1.0.
> Even if I use Parquet files the result would be the same, because after
> all Spark SQL isn't