Hi Folks,
I'm continuing my adventures in making Spark on containers party, and I
was wondering which of the different batch scheduler options folks have
experience with and prefer. I was thinking that, to better support
dynamic allocation, it might make sense for us to support using
different schedulers.
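For context, a minimal sketch of what dynamic allocation on Kubernetes looks like today (shuffle-tracking mode, since there is no external shuffle service on K8s); the executor counts are placeholder values, not a recommendation:

```properties
# Hedged sketch: dynamic allocation on Kubernetes without an
# external shuffle service, using shuffle tracking instead.
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.shuffleTracking.enabled=true
spark.dynamicAllocation.minExecutors=1
spark.dynamicAllocation.maxExecutors=10
```

A pluggable scheduler would mostly affect how the pods these settings create get placed and queued, which is where the batch schedulers come in.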
+1 same result as ever. Signatures are OK, tags look good, tests pass.
On Thu, Jun 17, 2021 at 5:11 AM Yi Wu wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 3.0.3.
>
> The vote is open until Jun 21st 3 AM (PST) and passes if a majority of
> +1 PMC votes are cast,
Any suggestions or comments on this? They are going to remove the package by
6-28.
It seems to me that a switch to opt in to the install (off by default),
or a prompt in an interactive session, should be good enough as user
confirmation.
On Sun, Jun 13, 2021 at 11:25 PM Felix Cheung
wrote:
>
Thank you for the correction, Yikun.
Yes, it's 3.3.1. :)
On 2021/06/17 09:03:55, Yikun Jiang wrote:
> - Apache Hadoop 3.3.2 becomes the default Hadoop profile for Apache Spark
> 3.2 via SPARK-29250 today. We are observing big improvements in S3 use
> cases. Please try it and share your experience.
Please vote on releasing the following candidate as Apache Spark version
3.0.3.
The vote is open until Jun 21st 3 AM (PST) and passes if a majority of +1 PMC
votes are cast, with
a minimum of 3 +1 votes.
[ ] +1 Release this package as Apache Spark 3.0.3
[ ] -1 Do not release this package because ...
- Apache Hadoop 3.3.2 becomes the default Hadoop profile for Apache Spark
3.2 via SPARK-29250 today. We are observing big improvements in S3 use
cases. Please try it and share your experience.
It should be Apache Hadoop 3.3.1 [1]. : )
Note that Apache Hadoop 3.3.0 is the first Hadoop release inc
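For anyone wanting to try the S3 improvements mentioned above, a hedged configuration sketch (these committer settings are an assumption about a typical S3A setup, not something stated in this thread):

```properties
# Assumed example settings for exercising the S3A connector from Spark;
# the "magic" committer is one of the S3A committers shipped with
# Hadoop 3.3.x.
spark.hadoop.fs.s3a.committer.name=magic
spark.hadoop.fs.s3a.committer.magic.enabled=true
```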
OK, the first link throws up some clues:

"... Hive excels in batch disk processing with a MapReduce execution
engine. Actually, Hive can also use Spark as its execution engine, which
also has a Hive context allowing us to query Hive tables. Despite all the
great things Hive can solve, this post is to
Hi Talebzadeh,
It looks like I caused some confusion, sorry. I have now changed the
subject to make it clear.
Facebook has tried migrating from Hive to Spark; see the following links:
https://www.dcsl.com/migrating-from-hive-to-spark/
https://databricks.com/session/experiences-migrating-hive-workload-to-sparks