Hi Folks,
I'm wondering what people think about the idea of having the Spark UI
(optionally) act as a proxy to the executors? This could help with executor
UI access in some deployment environments.
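For context, the closest existing knob I'm aware of is the standalone-mode
reverse proxy, which covers the Master/worker/application UIs rather than
executors; roughly this shape (the URL below is just a placeholder):

import org.apache.spark.sql.SparkSession

// Sketch only: existing standalone-mode reverse proxy settings; they make the
// Master UI proxy worker/application UIs, not executor UIs. URL is illustrative.
val spark = SparkSession.builder()
  .appName("ui-reverse-proxy-sketch")
  .config("spark.ui.reverseProxy", "true")
  .config("spark.ui.reverseProxyUrl", "https://gateway.example.com/spark")
  .getOrCreate()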
Cheers,
Holden :)
--
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performan
Ooh, this is fun.
The v2 commit algorithm isn't safe to use unless every task attempt generates
files with exactly the same names and it is okay to intermingle the output of
two task attempts.
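If you can't meet those conditions, the safer route is to pin the v1
algorithm, which keeps each task's output in a temporary attempt directory and
only promotes it to the destination during job commit; a minimal sketch (the
app name is just a placeholder):

import org.apache.spark.sql.SparkSession

// Sketch: pin the v1 commit algorithm so a failed task commit cannot leave
// files from two attempts intermingled in the final destination.
val spark = SparkSession.builder()
  .appName("v1-committer-sketch")
  .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "1")
  .getOrCreate()

The trade-off is a slower job commit (more renames at the end of the job),
which is why v2 was introduced in the first place.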
This is because task commit can fail partway through (or worse, the process
can pause for a full GC), and a second attempt commi
Please vote on releasing the following candidate as Apache Spark version 3.2.0.
The vote is open until 11:59pm Pacific time Aug 25 and passes if a majority
of +1 PMC votes are cast, with a minimum of 3 +1 votes.
[ ] +1 Release this package as Apache Spark 3.2.0
[ ] -1 Do not release this package be
So it turns out Delta Lake isn't compatible out of the box due to its
mixed use of the FileContext API for writes and the FileSystem API for
reads on the driver. Bringing that up with those devs now, but in the
meantime the auto-msync-only-on-driver trick is already coming in handy,
thanks!
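For anyone following along, here's roughly what the split looks like in
Hadoop API terms; the path is a placeholder and the comments are my reading
of the situation rather than Delta's actual code:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileContext, FileSystem, Path}

val conf = new Configuration()
val path = new Path("hdfs:///tmp/some_table/_delta_log/00000000000000000000.json")

// Resolve the path through a FileContext client (the route the writes
// reportedly take)...
val fc = FileContext.getFileContext(conf)
val viaFileContext = fc.getFileStatus(path)

// ...and through a FileSystem client (the route the driver-side reads
// reportedly take). These are separate client instances, so an msync applied
// to one does not automatically cover the other.
val fs = FileSystem.get(path.toUri, conf)
val viaFileSystem = fs.getFileStatus(path)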
Hi Dev
Environment details
Hadoop 3.2
Hive 3.1
Spark 3.0.3
Cluster: Kerberized.
1) Hive server is running fine
2) Spark SQL, spark-shell, spark-submit: everything is working as expected.
3) Connecting Hive through beeline is working fine (after kinit)
beeline -u "jdbc:hive2://:/default;princip
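For comparison, here is a hypothetical sketch of the same kind of Kerberized
connection over plain JDBC; host, port, database, and principal are
placeholders, not the values elided above:

import java.sql.DriverManager

// Assumes kinit has already been run so a valid Kerberos TGT is in the
// ticket cache; all connection details below are illustrative placeholders.
Class.forName("org.apache.hive.jdbc.HiveDriver")
val url = "jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM"
val conn = DriverManager.getConnection(url)
val rs = conn.createStatement().executeQuery("SHOW DATABASES")
while (rs.next()) println(rs.getString(1))
conn.close()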
So personally I think it's fine to comment post-merge, but I think an issue
should also be filed (that might just be me though). This change was reviewed
and committed, so if someone found a problem with it, then it should be
officially tracked as a bug.
I would think a -1 on an already committed