I think this is the right place, just a hard question :) As far as I
know, there's no "case insensitive flag", so YMMV
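A minimal sketch of the lower()-on-both-sides workaround mentioned below, assuming the comparisons run through Spark SQL (the rest of this digest is Spark-related); the table and column names are made up for illustration:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit, lower}

val spark = SparkSession.builder().appName("ci-compare").master("local[*]").getOrCreate()
import spark.implicits._

val people = Seq("Alice", "ALICE", "Bob").toDF("name")

// Case-insensitive match by lower-casing both sides of the comparison.
people.filter(lower(col("name")) === lower(lit("alice"))).show()

// The same comparison expressed in SQL.
people.createOrReplaceTempView("people")
spark.sql("SELECT * FROM people WHERE lower(name) = lower('alice')").show()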
On Mon, Nov 21, 2022 at 5:40 PM Patrick Tucci wrote:
>
> Is this the wrong list for this type of question?
>
> On 2022/11/12 16:34:48 Patrick Tucci wrote:
> > Hello,
> >
> > I
Is this the wrong list for this type of question?
On 2022/11/12 16:34:48 Patrick Tucci wrote:
> Hello,
>
> Is there a way to set string comparisons to be case-insensitive
globally? I
> understand LOWER() can be used, but my codebase contains 27k lines of SQL
> and many string comparisons. I wou
Correct: as per the code below from SecurityManager.scala, if ACLs aren't
enabled, we skip the vulnerable code path (getCurrentUserGroups).

private def isUserInACL(
    user: String,
    aclUsers: Set[String],
    aclGroups: Set[String]): Boolean = {
  if (user == null || !aclsEnabled || aclUsers.contains(WILDCARD_ACL) ||
      aclUsers.contains(user) || aclGroups.contains(WILDCARD_ACL)) {
    true
  } else {
    // Vulnerable path: only reached when ACLs are enabled.
    aclGroups.exists(Utils.getCurrentUserGroups(sparkConf, user).contains(_))
  }
}
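If you would rather be explicit than rely on the defaults, here is a rough
sketch of turning application ACLs off up front (the app name is arbitrary;
spark.history.ui.acls.enable is read by the History Server, so it belongs in
that daemon's spark-defaults.conf rather than in the application conf):

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Disable application ACLs so isUserInACL returns early and never calls
// getCurrentUserGroups.
val conf = new SparkConf()
  .setAppName("acls-disabled-example")
  .set("spark.acls.enable", "false")

val spark = SparkSession.builder().config(conf).getOrCreate()
println(spark.sparkContext.getConf.get("spark.acls.enable"))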
I have not used standalone for a good while. The standard Dataproc uses
YARN as the resource manager. Vanilla Dataproc is Google's answer to
Hadoop on the cloud: move your analytics workload from on-premise to the cloud
with little effort and the same look and feel. Google then introduced dynamic
CCing Kostya for a more authoritative view, but I believe this will not be an
issue if you're not using ACLs in Spark, yes.
On Mon, Nov 21, 2022 at 2:38 PM Andrew Pomponio
wrote:
> I am using Spark 2.3.0 and trying to mitigate
> https://nvd.nist.gov/vuln/detail/CVE-2022-33891. The correct thing to
I am using Spark 2.3.0 and trying to mitigate
https://nvd.nist.gov/vuln/detail/CVE-2022-33891. The correct thing to do is to
update. However, I am told this is not happening. Thus, I am trying to
determine if the following are set:
spark.acls.enable false
spark.history.ui.acls.enable false
The
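One way to confirm what a running application actually sees, as a hedged
sketch assuming a spark-shell (or any code with an existing SparkContext named
sc); note that spark.history.ui.acls.enable is a History Server setting, so
check the History Server's spark-defaults.conf for that one:

// getOption returns None when the property was never set, in which case
// Spark falls back to its default of false.
val aclSetting = sc.getConf.getOption("spark.acls.enable")
println(s"spark.acls.enable = ${aclSetting.getOrElse("unset (defaults to false)")}")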
Out of curiosity: are there functional limitations in Spark Standalone
that are of concern? YARN is more configurable for running non-Spark
workloads and for running multiple Spark jobs in parallel. But for a single
Spark job, standalone seems to launch more quickly and does not miss any
features.
Hi,
I have not tested this myself, but Google has brought out *Dataproc Serverless
for Spark*. In a nutshell, Dataproc Serverless lets you run Spark batch
workloads without requiring you to provision and manage your own cluster.
Specify workload parameters, and then submit the workload to the Dataproc
Serverless service.