Re: Spark Thrift Server java vm problem need help

2020-03-23 Thread Sean Owen
No, as I say, it seems to just generate a warning. Compressed oops can't be used with a heap of 32GB or more, so the JVM simply doesn't use them. That's why I am asking what the problem is. Spark doesn't set this value as far as I can tell; maybe your env does. This is in any event not a Spark issue per se. On Mon, Mar 23, 2020 at 9:40
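
A quick way to see the behavior Sean describes, outside Spark (a minimal check; the 64g value mirrors the driver memory discussed in this thread, and the exact output wording varies by JDK build):

    $ java -Xmx64g -XX:+UseCompressedOops -XX:+PrintFlagsFinal -version | grep UseCompressedOops
    OpenJDK 64-Bit Server VM warning: Max heap size too large for Compressed Oops
         bool UseCompressedOops := false {lp64_product}

Even with +UseCompressedOops requested explicitly, the final value comes out false once -Xmx reaches 32g; the JVM only emits the warning.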

Re: Spark Thrift Server java vm problem need help

2020-03-23 Thread angers.zhu
If -Xmx is bigger than 32g, the VM will not use UseCompressedOops by default. We can see a case: if we set spark.driver.memory to 64g, set -XX:+UseCompressedOops in spark.executor.extraJavaOptions, and set SPARK_DAEMON_MEMORY=6g, then with the current code the VM will get a command like wit
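
The combination being described would look roughly like this (a sketch assembled from the message, not the poster's actual files; the option names are as given above):

    # conf/spark-env.sh
    SPARK_DAEMON_MEMORY=6g

    # conf/spark-defaults.conf
    spark.driver.memory              64g
    spark.executor.extraJavaOptions  -XX:+UseCompressedOops

The Thrift Server is started through spark-daemon.sh, so SPARK_DAEMON_MEMORY and spark.driver.memory can both contribute an -Xmx to the same JVM command line, which appears to be the interaction the message is getting at.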

Re: Spark Thrift Server java vm problem need help

2020-03-23 Thread Sean Owen
I'm still not sure whether you are trying to enable it or disable it, or what the issue is. There is no logic in Spark that sets or disables this flag that I can see. On Mon, Mar 23, 2020 at 9:27 AM angers.zhu wrote: > Hi Sean, > > Yea, I set -XX:+UseCompressedOops in driver(you can see in command
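
Sean's claim can be spot-checked against a Spark source checkout (a hypothetical check, not something run in the thread; the paths assume the apache/spark repository layout):

    $ grep -rn "UseCompressedOops" bin/ sbin/ launcher/ core/src/main/

Any hit would show where Spark touches the flag; no hits supports the point that the setting must be coming from the user's own configuration or environment.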

Re: Spark Thrift Server java vm problem need help

2020-03-23 Thread angers.zhu
Hi Sean, Yea, I set -XX:+UseCompressedOops in the driver (you can see it in the command line), and these days we have more users, so I set spark.driver.memory to 64g. In Non-default VM flags it should now be -XX:-UseCompressedOops, but it's still -XX:+UseCompressedOops. I have found the r
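
The "Non-default VM flags" mentioned here come from jinfo; the effective value on a running driver can be checked directly (the pid is a placeholder):

    $ jinfo -flag UseCompressedOops <driver-pid>
    -XX:+UseCompressedOops

Since the JVM disables compressed oops for heaps of 32GB and above, still seeing +UseCompressedOops would suggest the effective -Xmx on that process is not actually 64g, which would be consistent with the SPARK_DAEMON_MEMORY=6g interaction described elsewhere in the thread.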

Re: Spark Thrift Server java vm problem need help

2020-03-23 Thread Sean Owen
I don't think Spark sets UseCompressedOops in any defaults; are you setting it? It can't be used with heaps >= 32GB. It doesn't seem to cause an error if you set it with large heaps, just a warning. What's the problem? On Mon, Mar 23, 2020 at 6:21 AM angers.zhu wrote: > Hi developers, > > These
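
The warning in question is easy to reproduce (a minimal demonstration; the exact message wording depends on the JDK build):

    $ java -Xmx32g -XX:+UseCompressedOops -version
    OpenJDK 64-Bit Server VM warning: Max heap size too large for Compressed Oops
    (normal -version output follows)

The JVM starts normally and simply ignores the flag, matching Sean's point that this produces a warning rather than an error.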