Thanks. Just filed an issue: https://github.com/golang/go/issues/43699

On Thursday, January 14, 2021 at 11:46:14 AM UTC-6 Michael Knyszek wrote:

> On Thursday, January 14, 2021 at 12:27:25 PM UTC-5 laos...@gmail.com 
> wrote:
>
>> I will add info to issue 39400 in the future.
>>
>> While Go is run in data centers, we still need to be memory efficient no 
>> matter how cheap memory is, especially when you want to run thousands of 
>> processes in parallel as microservices on one machine.
>>
> I don't disagree. But I believe that, with modern hardware and modern 
> operating system design, VSS has little to do with memory efficiency. 
> *Uncommitted reservations of address space* are vanishingly cheap.
>
>>
>> I ran the net/http hello world on a 128 MB MIPS 32-bit CPU; each instance 
>> takes 700 MB VSS. I then ran 40 of them in parallel (each takes 4 MB RSS) 
>> and got 'can't fork: out of memory' with overcommit_memory set to 2; 
>> changing it to 0 made the error disappear. However, for embedded systems I 
>> normally set overcommit to 2 with no swap, to avoid OOM in the field.
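>>
>> For reference, the server here is essentially the minimal net/http hello 
>> world; a sketch follows (the handler body and port are illustrative, not 
>> the exact program I ran):
>>
>>     package main
>>
>>     import (
>>         "fmt"
>>         "log"
>>         "net/http"
>>     )
>>
>>     func main() {
>>         http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
>>             fmt.Fprintln(w, "hello, world")
>>         })
>>         // Even this trivial server shows the large startup VSS
>>         // reservation made by the Go runtime.
>>         log.Fatal(http.ListenAndServe(":8080", nil))
>>     }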
>>
> That is quite strange, and certainly changes things. The runtime *should 
> not* be making a 600 MiB mapping for any 32-bit platform. Those mappings 
> are several orders of magnitude smaller, to the point of generally being a 
> non-issue (on the order of KiB). That sounds like a bug and I would ask 
> that you please file a new issue at https://github.com/golang/go/issues. 
> Please include as much detail about your environment as you're able to 
> (e.g. Linux kernel version, GOARCH, GOOS, etc.).
>
>>
>> A C/C++ hello-world HTTP server takes 3 MB VSS; I can run hundreds of 
>> them in parallel without issues.
>>
>> And yes, ulimit -v does not really work for Go apps; it can't limit 
>> their VSS at all.
>>
>> On Thursday, January 14, 2021 at 10:18:03 AM UTC-6 Michael Knyszek wrote:
>>
>>> This change came in Go 1.14 as part of an allocator scalability 
>>> improvement (a specific data structure benefits from having a large memory 
>>> reservation), but with it also came an effort to mitigate the (potential) 
>>> negative effects you mention.
>>>
>>> I think all of the possible problematic cases you listed were checked 
>>> before release to work just fine (overcommit in various configurations, 
>>> cgroups, etc.), and AFAIK the only remaining issue is ulimit -v (
>>> https://github.com/golang/go/issues/38010) (we intentionally moved 
>>> forward knowing this). Generally, this only seems to be a problem on 
>>> systems where the user doesn't have control over the environment, so 
>>> simply turning off ulimit isn't an option for them. Broadly speaking, 
>>> ulimit -v isn't terribly useful these days; limiting virtual memory use is 
>>> not a great proxy for limiting actual memory use.
>>>
>>> You mention overcommit=2 specifically, but in the past I wasn't able to 
>>> reproduce the scenario you're describing. Overcommit on Linux (in any 
>>> configuration) ignores anonymous read-only (and also PROT_NONE) pages that 
>>> haven't been touched yet (
>>> https://www.kernel.org/doc/Documentation/vm/overcommit-accounting). The 
>>> Go runtime is careful to only make a large reservation and never to do 
>>> anything to indicate those pages should be committed (which just amounts 
>>> to not mapping them as writable until they are needed).
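>>>
>>> As a rough illustration of that accounting rule (a sketch, not the 
>>> runtime's actual code), an anonymous PROT_NONE mapping on Linux shows up 
>>> in VSS but should not count against the commit limit until it is made 
>>> writable:
>>>
>>>     package main
>>>
>>>     import (
>>>         "fmt"
>>>         "syscall"
>>>     )
>>>
>>>     func main() {
>>>         const size = 600 << 20 // ~600 MiB, comparable to the runtime's reservation
>>>         // Anonymous PROT_NONE mapping: counts toward VSS, but Linux
>>>         // overcommit accounting ignores it until it is made writable.
>>>         mem, err := syscall.Mmap(-1, 0, size,
>>>             syscall.PROT_NONE,
>>>             syscall.MAP_ANON|syscall.MAP_PRIVATE)
>>>         if err != nil {
>>>             panic(err)
>>>         }
>>>         fmt.Printf("reserved %d MiB of address space (len=%d)\n", size>>20, len(mem))
>>>         // Compare Committed_AS in /proc/meminfo before and after:
>>>         // it should not grow by 600 MiB, even with vm.overcommit_memory=2.
>>>     }
>>>
>>> Even with vm.overcommit_memory=2 on a small-RAM machine, that mmap should 
>>> succeed; this is the behavior the runtime relies on.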
>>>
>>> If your machine has 64-128 MiB of RAM, I think the problem is less 
>>> likely the 600 MiB or so of reservation that we make and more likely the 
>>> arena size. I get the impression that you're running on 64-bit hardware 
>>> with that much RAM. If that's the case, I believe our heap arena size is 
>>> 64 MiB, and that arena memory *is* mapped read-write, so it could indeed 
>>> cause issues with overcommit in such environments (if your process uses 
>>> 64 MiB + 1 byte of heap, the runtime will suddenly try to map an 
>>> additional 64 MiB and whoops! you're out). 32-bit platforms and Windows, 
>>> on the other hand, use a 4 MiB arena.
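>>>
>>> If you want to observe the arena granularity yourself, a rough sketch 
>>> (exact numbers vary by Go version and platform) is to watch HeapSys jump 
>>> by an arena as the heap crosses an arena boundary:
>>>
>>>     package main
>>>
>>>     import (
>>>         "fmt"
>>>         "runtime"
>>>     )
>>>
>>>     func main() {
>>>         var ms runtime.MemStats
>>>         runtime.ReadMemStats(&ms)
>>>         fmt.Printf("HeapSys before: %d MiB\n", ms.HeapSys>>20)
>>>
>>>         // Allocate a bit more than one 64 MiB arena; on 64-bit Linux
>>>         // this should force the runtime to map another read-write arena.
>>>         buf := make([]byte, 65<<20)
>>>         for i := range buf {
>>>             buf[i] = 1 // touch the pages so they are actually committed
>>>         }
>>>
>>>         runtime.ReadMemStats(&ms)
>>>         fmt.Printf("HeapSys after:  %d MiB\n", ms.HeapSys>>20)
>>>         runtime.KeepAlive(buf)
>>>     }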
>>>
>>> This issue has come up in the past (
>>> https://github.com/golang/go/issues/39400) and I'm not sure we have an 
>>> easy fix (otherwise it would've been fixed already, haha). The arena size 
>>> impacts the performance of a critical runtime data structure used 
>>> frequently by the garbage collector. If you're willing, please comment on 
>>> that issue with details about your specific problem; the more context we 
>>> have from more users, the better.
>>>
>>> On Thursday, January 14, 2021 at 9:28:28 AM UTC-5 Amnon wrote:
>>>
>>>> Engineering is about trade-offs. You decide what your priority is, and 
>>>> that largely determines the characteristics of what you produce. The Go 
>>>> core team prioritised maximising throughput on datacentre servers, where 
>>>> cores are plentiful, memory is cheap, and virtual memory is free. And 
>>>> this is reflected in the behaviour of their implementation of Go. Other 
>>>> people implementing the Go language have different priorities and get 
>>>> different results.
>>>>
>>>> On Thursday, 14 January 2021 at 13:42:47 UTC laos...@gmail.com wrote:
>>>>
>>>>> Yes, I'm aware of that, but still: why the very large VSS in Go? It 
>>>>> does have side effects, some of which are pretty bad.
