No, as I said, it seems to just generate a warning. Compressed oops can't be used with
a >= 32GB heap, so it simply isn't used. That's why I am asking what the problem is.
Spark doesn't set this value as far as I can tell; maybe your env does.
This is in any event not a Spark issue per se.
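If it helps, a quick way to confirm what the JVM actually decided (just a sketch, not something Spark provides; HotSpotDiagnosticMXBean is HotSpot-specific, and it has to run inside the JVM you're inspecting, so for the thrift server jinfo from outside may be the more practical route):

    import java.lang.management.ManagementFactory
    import com.sun.management.HotSpotDiagnosticMXBean

    // Ask HotSpot for the effective value of the flag and where it came from.
    // The printed "origin" is e.g. DEFAULT, ERGONOMIC (the JVM picked it based
    // on heap size) or VM_CREATION (it was passed on the command line).
    val hotspot = ManagementFactory.getPlatformMXBean(classOf[HotSpotDiagnosticMXBean])
    println(hotspot.getVMOption("UseCompressedOops"))

jinfo -flag UseCompressedOops <pid> against the process should show the same effective value from the outside.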
On Mon, Mar 23, 2020 at 9:40 AM angers.zhu wrote:
If -Xmx is bigger than 32g, the VM will not use UseCompressedOops by default. We can see a case: if we set spark.driver.memory to 64g, set -XX:+UseCompressedOops in spark.executor.extraJavaOptions, and set SPARK_DAEMON_MEMORY=6g, then with the current code the VM will get a command line like ...
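To double-check which -Xmx actually won in the running process, a couple of lines like this (just a sketch; it has to run inside that JVM, otherwise jinfo -flag MaxHeapSize <pid> shows the configured max heap from outside):

    // Report roughly how much heap this JVM was actually given. If the daemon
    // memory (6g) ends up as -Xmx instead of spark.driver.memory (64g), this
    // prints about 6 GB, and a heap under 32 GB would also explain why
    // UseCompressedOops stays enabled.
    val maxHeapGb = Runtime.getRuntime.maxMemory.toDouble / (1024L * 1024 * 1024)
    println(f"max heap ~= $maxHeapGb%.1f GB")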
I'm still not sure whether you are trying to enable it or disable it, or what
the issue is.
There is no logic in Spark that sets or disables this flag that I can see.
On Mon, Mar 23, 2020 at 9:27 AM angers.zhu wrote:
Hi Sean,
Yes, I set -XX:+UseCompressedOops on the driver (you can see it in the command line), and these days we have more users, so I set spark.driver.memory to 64g. In the Non-default VM flags it should now be -XX:-UseCompressedOops, but it is still -XX:+UseCompressedOops. I have found the reason ...
I don't think Spark sets UseCompressedOops in any defaults; are you setting
it?
It can't be used with heaps >= 32GB. It doesn't seem to cause an error if
you set it with large heaps, just a warning.
What's the problem?
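For what it's worth, starting a JVM by hand with something like java -Xmx64g -XX:+UseCompressedOops -version prints a warning along the lines of "Max heap size too large for Compressed Oops" and then simply runs with the flag turned off; it doesn't fail.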
On Mon, Mar 23, 2020 at 6:21 AM angers.zhu wrote:
Hi developers,

These days I've hit a strange problem and I can't find out why. When I start a Spark Thrift Server with spark.driver.memory 64g and then use jdk8/bin/jinfo <pid> to look at the VM flags, I get the information below. In a 64g VM, UseCompressedOops should be disabled by default, so why does the Spark Thrift Server still show it enabled?