+Cheng
Hi Reynold,
I think you are referring to bucketing in the in-memory columnar cache.
I am proposing that if we have a Parquet structure like the following:
//file1/id=1/
//file1/id=2/
and if we read and cache it, it should create 2 RDD[CachedBatch] instances
(one per value of "id").
Is this what you were referring to?
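A minimal sketch of the scenario being proposed, assuming a spark-shell
session (the /tmp/file1 path is illustrative, not from the original layout):

scala> import spark.implicits._
scala> val df = Seq((1, "a"), (2, "b")).toDF("id", "value")
scala> df.write.partitionBy("id").parquet("/tmp/file1")  // creates /tmp/file1/id=1/ and /tmp/file1/id=2/
scala> val cached = spark.read.parquet("/tmp/file1").cache()
scala> cached.count()  // materializes the in-memory columnar cache (CachedBatch)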
Sorry for using the wrong mailing list. From next time onward I will use
the user list.
Thank you
On Tue, Nov 29, 2016 at 11:10 AM, Nishadi Kirielle wrote:
> Hi all,
>
> I am trying to use the bitwise AND operation between integers on top of
> Spark SQL. Is this functionality supported, and if so, is there any
> documentation on how to use the bitwise AND operation?
Bcc dev@ and add user@
The dev list is not meant for users to ask questions on how to use Spark.
For that you should use StackOverflow or the user@ list.
scala> sql("select 1 & 2").show()
+-------+
|(1 & 2)|
+-------+
|      0|
+-------+

scala> sql("select 1 & 3").show()
+-------+
|(1 & 3)|
+-------+
|      1|
+-------+
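The same operation is also available without SQL strings, via bitwiseAND on
Column in the DataFrame API; a quick sketch (the "result" alias is arbitrary):

scala> import org.apache.spark.sql.functions.lit
scala> spark.range(1).select(lit(1).bitwiseAND(lit(3)).as("result")).show()
+------+
|result|
+------+
|     1|
+------+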
Hi all,
I am trying to use the bitwise AND operation between integers on top of
Spark SQL. Is this functionality supported, and if so, is there any
documentation on how to use the bitwise AND operation?
Thanks & regards
--
Nishadi Kirielle
Undergraduate
University of Moratuwa - Sri Lanka
Mobile : +9
This one:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.0
On Mon, Nov 28, 2016 at 9:00 PM, Prasanna Santhanam wrote:
>
>
> On Tue, Nov 29, 2016 at 6:55 AM, Reynold Xin wrote:
>
>> Please vote on releasing the following candidate as Apache Spark version
>> 2.1.0.
On Tue, Nov 29, 2016 at 6:55 AM, Reynold Xin wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 2.1.0. The vote is open until Thursday, December 1, 2016 at 18:00 UTC and
> passes if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 2.1.0
Please vote on releasing the following candidate as Apache Spark version
2.1.0. The vote is open until Thursday, December 1, 2016 at 18:00 UTC and
passes if a majority of at least 3 +1 PMC votes are cast.
[ ] +1 Release this package as Apache Spark 2.1.0
[ ] -1 Do not release this package because ...
Thank you, Sean.
Now, I agree with you that Apache Spark major version releases happen too
rarely to warrant defining a specific policy for them.
Bests,
Dongjoon
On Mon, Nov 28, 2016 at 04:17 Sean Owen wrote:
> Yeah, there's no official position on this. BTW see the new home of what
> info is published on this topic:
> http://spark.apache.org/versioning-policy.html
Yeah, there's no official position on this. BTW see the new home of what
info is published on this topic:
http://spark.apache.org/versioning-policy.html
The answer is indeed that minor releases have a target cadence, but
maintenance releases are as-needed, as defined by the release manager's
judgment.