This change came in Go 1.14 as part of an allocator scalability improvement 
(a specific data structure benefits from having a large memory 
reservation), but with it also came an effort to mitigate the (potential) 
negative effects you mention.

I think all of the possible problematic cases you listed were checked 
before release and work just fine (overcommit in various configurations, 
cgroups, etc.), and AFAIK the only remaining issue is ulimit -v 
(https://github.com/golang/go/issues/38010) (we intentionally moved forward 
knowing this). Generally, this only seems to be a problem on systems where 
the user doesn't have control over the environment, so simply turning off 
the ulimit isn't an option for them. Broadly speaking, ulimit -v isn't 
terribly useful these days; limiting virtual memory use is not a great 
proxy for limiting actual memory use.

You mention overcommit=2 specifically, but when I looked into this in the 
past I wasn't able to reproduce the scenario you're describing. Overcommit 
on Linux (in any 
configuration) ignores anonymous read-only (and also PROT_NONE) pages that 
haven't been touched yet 
(https://www.kernel.org/doc/Documentation/vm/overcommit-accounting). The Go 
runtime is careful to make only a large reservation and never to do 
anything indicating those pages should be committed (which amounts to not 
mapping them as writable until they are needed).

If your machine has 64-128 MiB of RAM, I think the problem is less likely 
the 600 MiB or so of address space we reserve and more likely the heap 
arena size. I get the impression that you're running on 64-bit hardware 
with that much RAM. If that's the case, I believe our heap arena size is 
64 MiB, and that we *do* map arenas as read-write, which could indeed cause 
issues with overcommit in such environments (if your process uses 64 MiB + 
1 byte of heap, the runtime will suddenly try to map an additional 64 MiB 
and whoops! you're out). 32-bit platforms and Windows use a 4 MiB arena, on 
the other hand.

This issue has come up in the past 
(https://github.com/golang/go/issues/39400) and I'm not sure we have an 
easy fix (otherwise it would've been fixed already, haha). The arena size 
impacts the performance of a critical runtime data structure used 
frequently by the garbage collector. If you're willing, please comment on 
that issue with details about your specific problem; the more context we 
have from more users, the better.

On Thursday, January 14, 2021 at 9:28:28 AM UTC-5 Amnon wrote:

> Engineering is about trade-offs. You decide what your priority is, and 
> that largely determines
> the characteristics of what you produce. The Go core team prioritised 
> maximising throughput on datacentre
> servers, where cores are plentiful,  memory is cheap, and virtual memory 
> is free. And this is reflected in the behaviour 
> of their implementation of Go. Other people implementing the Go language 
> have different priorities and get different 
> results.
>
> On Thursday, 14 January 2021 at 13:42:47 UTC laos...@gmail.com wrote:
>
>> yes I'm aware of that but still, why the super large size VSS in Golang? 
>> It does have side effects some are pretty bad.
>>
>>
