Re: [go-nuts] Change in virtual memory patterns in Go 1.12

2019-04-02 Thread 'Austin Clements' via golang-nuts
Hi Rémy. We often fight with vm.max_map_count in the runtime, sadly. Most
likely this comes from the way the runtime interacts with Linux's
transparent huge page support. When we scavenge (release to the OS) only
part of a huge page, we tell the OS not to turn that huge page frame back
into a huge page since that would just make that memory used again.
Unfortunately, each time we do this counts as a separate "mapping" just to
track that one flag. These "mappings" are always at least 2MB, but you have
a large enough virtual address space to reach the max_map_count even then.
You can see this in /proc/PID/smaps, which should list mostly contiguous
neighboring regions that differ only in a single "VmFlags" bit.
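
If it's useful to watch this from inside a process, here is a rough diagnostic
sketch (not a runtime API, just a parse of /proc/self/smaps) that counts the
mappings and how many of them carry the "nh" (MADV_NOHUGEPAGE) flag:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    f, err := os.Open("/proc/self/smaps")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    var mappings, noHuge int
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        line := sc.Text()
        if strings.HasPrefix(line, "VmFlags:") {
            mappings++ // one VmFlags line per mapping
            if strings.Contains(line, " nh") {
                noHuge++ // region marked no-huge-page by the runtime
            }
        }
    }
    fmt.Printf("mappings: %d, marked no-huge-page: %d\n", mappings, noHuge)
}

Comparing the first number against vm.max_map_count shows how close the
process is to the limit.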

We did make memory scavenging more aggressive in Go 1.12 (+Michael Knyszek),
though I would have expected it to converge to
roughly the same "huge page flag fragmentation" as before over the course
of five to ten minutes. Is your application's virtual memory footprint the
same between 1.11 and 1.12, or does it grow?

You could try disabling the huge page flag manipulation to confirm and/or
fix this. In $GOROOT/src/runtime/internal/sys/arch_amd64.go (or whichever
GOARCH is appropriate), set HugePageSize to 0. Though there's a danger that
Linux's transparent huge pages could blow up your application's resident
set size if you do that.
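
For reference, a sketch of that edit (the surrounding constants in
arch_amd64.go are omitted here and vary slightly between releases; only this
one line changes, and the stock value corresponds to the 2MB x86-64 huge page):

// In $GOROOT/src/runtime/internal/sys/arch_amd64.go (Go 1.12), inside the
// existing const block:
    HugePageSize = 0 // stock value: 1 << 21 (2MB); 0 disables the huge page flag handling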

On Tue, Apr 2, 2019 at 3:49 AM Rémy Oudompheng 
wrote:

> Hello,
>
> In a large heap program I am working on, I noticed a peculiar change in
> the way virtual memory is reserved by the runtime: with comparable heap
> size (about 150GB) and virtual memory size (growing to 400-500GB probably
> due to a kind of fragmentation), the number of distinct memory mappings has
> apparently increased between Go 1.11 and Go 1.12 reaching the system limit
> (Linux setting vm.max_map_count).
>
> Is it something that has been experienced by someone else? I don't
> believe this classifies as a bug, but I was a bit surprised (especially as
> I wasn't aware of that system limit).
>
> Rémy
>



Re: [go-nuts] Why will it deadlock if a goroutine acquire a mutex while pinned to its P?

2019-04-09 Thread 'Austin Clements' via golang-nuts
Acquiring a mutex while pinned can cause deadlock because pinning prevents
a stop-the-world. For example, the following sequence could result in a
deadlock:

M1: Acquires mutex l.
M2: Pins the M.
M2: Attempts to acquire mutex l.
M3: Initiates a stop-the-world.
M3: Stops M1.
M3: Attempts to stop M2, but can't because M2 is pinned.

At this point, M1 can't make progress to release mutex l because M3 stopped
it, which means M2 won't be able to finish acquiring the mutex (so it will
never release the pin), which means M3 won't be able to finish stopping the
world (so it will never start M1 back up).
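
Purely for illustration, the same interleaving written out as a Go program is
sketched below. runtime_procPin isn't exported, so pin and unpin here are no-op
stand-ins (which means this sketch only shows the ordering and will not
actually deadlock), and goroutines stand in for the Ms:

package main

import (
    "runtime"
    "sync"
    "time"
)

// pin and unpin are no-op stand-ins for the runtime-internal
// runtime_procPin / runtime_procUnpin described above.
func pin()   {}
func unpin() {}

func main() {
    var l sync.Mutex

    // "M1": acquires mutex l and holds it for a while.
    go func() {
        l.Lock()
        time.Sleep(10 * time.Millisecond)
        l.Unlock()
    }()

    // "M2": pins, then attempts to acquire l. With a real pin it could not
    // be stopped by a stop-the-world while blocked here.
    go func() {
        time.Sleep(1 * time.Millisecond)
        pin()
        l.Lock()
        l.Unlock()
        unpin()
    }()

    // "M3": initiates a stop-the-world (runtime.GC begins and ends with one).
    // In the deadlocking sequence it stops M1 and then waits on the pinned M2.
    time.Sleep(2 * time.Millisecond)
    runtime.GC()
}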

On Mon, Apr 8, 2019 at 2:31 AM Cholerae Hu  wrote:

> I'm reading this commit
> https://github.com/golang/go/commit/d5fd2dd6a17a816b7dfd99d4df70a85f1bf0de31 .
> Inside runtime_procPin we only increase the m.locks count, so why will it
> cause a deadlock when acquiring a mutex after pinning?
>



Re: [go-nuts] Change in virtual memory patterns in Go 1.12

2019-04-16 Thread 'Austin Clements' via golang-nuts
On Tue, Apr 16, 2019 at 1:23 AM Rémy Oudompheng 
wrote:

> Thanks Austin,
>
> The application workload is a kind of fragmentation torture test as it
> involves a mixture of many long-lived small and large (>100 MB)
> objects, with regularly allocated short-lived small and large objects.
> I have tried creating a sample synthetic reproducer but have not
> succeeded so far.
>
> Regarding the max_map_count, your explanation is very clear and I
> apparently missed the large comment in the runtime explaining all of
> that.
> Do you expect a significant drawback to choosing 16MB rather than 2MB as
> the granularity of the huge page flag manipulation in the case of huge
> heaps?
>

Most likely this will just cause less use of huge pages in your
application. This could slow it down by putting more pressure on the TLB.
In a sense, this is a self-compounding issue since huge pages can be highly
beneficial to huge heaps.

> Regarding the virtual memory footprint, it changed radically with Go
> 1.12. It basically looks like a leak and I saw it grow to more than
> 1TB where the actual heap total size never exceeds 180GB.
> Although I understand that it is easy to construct a situation where
> there is repeatedly no available contiguous interval of >100MB in the
> address space, it is a significant difference from Go 1.11 where the
> address space would grow to 400-500GB for a similar workload and stay
> flat after that, and I could not find an obvious change in the
> allocator explaining the phenomenon (and unfortunately my resources do
> not allow for an easy live comparison of both program lifetimes).
>
> Am I right in saying that the scavenging method or frequency does not
> (cannot) affect the virtual memory footprint and its dynamics at all?
>

It certainly can affect virtual memory footprint because of how scavenging
affects the allocator's placement policy. Though even with the increased
VSS, I would expect your application to have lower RSS with 1.12. In almost
all cases, lower RSS with higher VSS is a fine trade-off, though lower RSS
with the same VSS would obviously be better. But it can be problematic when
it causes the map count (which is roughly proportional to the VSS) to grow
too large. It's also unfortunate that Linux even has this limit; it's the
only OS Go runs on that limits the map count.
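
If it helps to watch this from inside the process, runtime.MemStats separates
heap address space obtained from the OS (HeapSys) from the portion that has
been scavenged back (HeapReleased), so their difference roughly tracks the
heap's contribution to RSS. A minimal sketch:

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    for {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("HeapSys=%dMB HeapIdle=%dMB HeapReleased=%dMB retained~%dMB\n",
            m.HeapSys>>20, m.HeapIdle>>20, m.HeapReleased>>20,
            (m.HeapSys-m.HeapReleased)>>20)
        time.Sleep(10 * time.Second) // interval is arbitrary
    }
}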

We've seen one other application experience VSS growth with the 1.12
changes, and it does seem to require a pretty unique allocation pattern.
Michael (cc'd) may be zeroing in on the causes of this and may have some
patches for you to try if you don't mind. :)

> Regards,
> Rémy.
>
> On Tue, Apr 2, 2019 at 4:15 PM, Austin Clements  wrote:
> >
> > Hi Rémy. We often fight with vm.max_map_count in the runtime, sadly.
> Most likely this comes from the way the runtime interacts with Linux's
> transparent huge page support. When we scavenge (release to the OS) only
> part of a huge page, we tell the OS not to turn that huge page frame back
> into a huge page since that would just make that memory used again.
> Unfortunately, each time we do this counts as a separate "mapping" just to
> track that one flag. These "mappings" are always at least 2MB, but you have
> a large enough virtual address space to reach the max_map_count even then.
> You can see this in /proc/PID/smaps, which should list mostly contiguous
> neighboring regions that differ only in a single "VmFlags" bit.
> >
> > We did make memory scavenging more aggressive in Go 1.12 (+Michael
> Knyszek), though I would have expected it to converge to roughly the same
> "huge page flag fragmentation" as before over the course of five to ten
> minutes. Is your application's virtual memory footprint the same between
> 1.11 and 1.12, or does it grow?
> >
> > You could try disabling the huge page flag manipulation to confirm
> and/or fix this. In $GOROOT/src/runtime/internal/sys/arch_amd64.go (or
> whichever GOARCH is appropriate), set HugePageSize to 0. Though there's a
> danger that Linux's transparent huge pages could blow up your application's
> resident set size if you do that.
> >
> > On Tue, Apr 2, 2019 at 3:49 AM Rémy Oudompheng 
> wrote:
> >>
> >> Hello,
> >>
> >> In a large heap program I am working on, I noticed a peculiar change in
> the way virtual memory is reserved by the runtime: with comparable heap
> size (about 150GB) and virtual memory size (growing to 400-500GB probably
> due to a kind of fragmentation), the number of distinct memory mappings has
> apparently increased between Go 1.11 and Go 1.12 reaching the system limit
> (Linux setting vm.max_map_count).
> >>
> >> Is it something that has been experienced by someone else? I don't
> believe this classifies as a bug, but I was a bit surprised (especially as
> I wasn't aware of that system limit).
> >>
> >> Rémy
> >>

[go-nuts] Re: Getting a snapshot of point-in-time allocation

2016-09-30 Thread 'Austin Clements' via golang-nuts
-base is certainly helpful here.

To more directly answer your questions, though, I proposed
https://github.com/golang/go/issues/13463#issuecomment-235048896 a while
ago (not yet implemented or necessarily agreed upon). I think this
extension to the heap profile would answer your questions without
introducing another profile mode or more API.
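
As a concrete sketch of the -base workflow Caleb mentions: write a heap
profile before and after the window of interest, then diff the two. The file
names and the doInterestingWork placeholder below are arbitrary.

package main

import (
    "os"
    "runtime"
    "runtime/pprof"
)

func writeHeapProfile(path string) {
    f, err := os.Create(path)
    if err != nil {
        panic(err)
    }
    defer f.Close()
    runtime.GC() // heap profile statistics are updated at GC
    if err := pprof.WriteHeapProfile(f); err != nil {
        panic(err)
    }
}

func main() {
    writeHeapProfile("heap-before.pprof") // e.g. after startup/initialization
    doInterestingWork()                   // placeholder for the phase being measured
    writeHeapProfile("heap-after.pprof")
}

func doInterestingWork() {}

The two files can then be compared with something like
go tool pprof -base heap-before.pprof <binary> heap-after.pprof, so only the
allocations made in between show up.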

On Thu, Sep 29, 2016 at 5:14 PM, Caleb Spare  wrote:

> Of course now that I sent this email, I have just noticed this pprof flag:
>
>   -base    Show delta from this profile
>
> I haven't tried it yet, but this seems like it might solve these problems.
>
> -Caleb
>
> On Thu, Sep 29, 2016 at 2:12 PM, Caleb Spare  wrote:
> > pprof gives two kinds of heap profiles: (please let me know if any of
> > this is not correct)
> >
> > - inuse_space -- a profile of the currently live objects/bytes on the
> heap
> > - alloc_space -- a profile of the allocated objects/bytes since program
> startup
> >
> > When I need to figure out why my heap is so big, inuse_space is
> > usually very helpful. However, I've found that alloc_space is
> > sometimes less helpful for the other kind of memory analysis I
> > typically need to perform.
> >
> > Here are two scenarios I've hit in which alloc_space doesn't quite cut
> it:
> >
> > - A server operates by allocating a bunch of large data structures on
> > startup and then starts handling requests. I'd like to optimize the
> > request handling and reduce some allocations but alloc_space is
> > initially dominated by the initialization.
> > - A server has been running normally for days and then suddenly starts
> > allocating somewhat more than usual. I look at an alloc_space profile,
> > but it's dominated by the allocations from normal operation.
> >
> > What I'd really like is some better way to profile recent allocations.
> > It seems like two options could be (a) another heap profile mode that
> > shows allocations since the last GC or (b) a way to ask the runtime to
> > reset allocation counts.
> >
> > Am I making sense? Did I misunderstand how all this works or miss some
> > profiling feature?
> >
> > Thanks!
> > Caleb
>



Re: [go-nuts] runtime.GC - documentation

2016-11-30 Thread 'Austin Clements' via golang-nuts
On Tue, Nov 29, 2016 at 6:57 PM, Rick Hudson  wrote:

> That is correct.


... but not guaranteed. :)



[go-nuts] Re: GODEBUG=gctrace=1 vs. debug.GCStats

2016-06-22 Thread 'Austin Clements' via golang-nuts
I think what you're seeing is simply rounding in the values printed by the
gctrace. You're correct that the two metrics are reporting the same thing.
In fact, they come from the exact same time stamps internally. But
formatting floating point numbers is hard. :) The gctrace printer simply
truncates the printed value to a reasonable number of digits, so, for
example, the 0.20ms in your second GC might actually be 0.207ms.
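
As a worked example using the second GC from the log below: gctrace prints
0.008+0.14+0.20 ms, so the two printed STW phases sum to 0.008 ms + 0.20 ms =
0.208 ms = 208 µs. If the final phase were really, say, 0.207 ms before
truncation, the true sum would be roughly 0.008 ms + 0.207 ms ≈ 0.215 ms,
which matches the 215.064 µs that ReadGCStats reports.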

On Wed, Jun 22, 2016 at 5:55 PM, Caleb Spare  wrote:

> Hi,
>
> I'm looking at GC statistics using both GODEBUG=gctrace=1 and
> debug.ReadGCStats. My question is: should the pause durations reported
> in debug.GCStats match the sum of the two STW phases listed in the
> gctrace?
>
> I ask because they are generally close but not the same. I have a
> trivial program (https://play.golang.org/p/9NYDjBW6ei) that prints gc
> stats; when I run it with GODEBUG=gctrace=1, I see output like this:
>
> LastGC: 1969-12-31T16:00:00-08:00 NumGC: 0 PauseTotal: 0 Pause: []
> LastGC: 1969-12-31T16:00:00-08:00 NumGC: 0 PauseTotal: 0 Pause: []
> LastGC: 1969-12-31T16:00:00-08:00 NumGC: 0 PauseTotal: 0 Pause: []
> gc 1 @3.427s 0%: 0.046+0.20+0.060 ms clock, 0.14+0.024/0.13/0.13+0.18
> ms cpu, 4->4->0 MB, 5 MB goal, 4 P
> LastGC: 2016-06-22T16:46:13-07:00 NumGC: 1 PauseTotal: 106.965µs
> Pause: [106.965µs]
> LastGC: 2016-06-22T16:46:13-07:00 NumGC: 1 PauseTotal: 106.965µs
> Pause: [106.965µs]
> LastGC: 2016-06-22T16:46:13-07:00 NumGC: 1 PauseTotal: 106.965µs
> Pause: [106.965µs]
> LastGC: 2016-06-22T16:46:13-07:00 NumGC: 1 PauseTotal: 106.965µs
> Pause: [106.965µs]
> gc 2 @7.320s 0%: 0.008+0.14+0.20 ms clock, 0.033+0/0.098/0.13+0.82 ms
> cpu, 4->4->0 MB, 5 MB goal, 4 P
> LastGC: 2016-06-22T16:46:17-07:00 NumGC: 2 PauseTotal: 322.029µs
> Pause: [215.064µs 106.965µs]
>
> For the first GC, gctrace shows 0.046ms + 0.060ms = 106µs vs 106.965µs
> from ReadGCStats.
> For the second GC, gctrace shows 0.008ms + 0.20ms = 208µs vs 215.064µs
> from ReadGCStats.
>
> I'm trying to understand whether these two metrics are reporting
> essentially the same thing (as is my understanding), or whether there
> is some source of STW pause time ReadGCStats is showing me that isn't
> exposed in the gctrace.
>
> (I see similar results on 1.6.2 and tip.)
>
> Thanks!
> Caleb
>



Re: [go-nuts] Debugging long GC pauses

2017-02-23 Thread 'Austin Clements' via golang-nuts
AFAIK, the only thing that can cause this in Go 1.8 is a non-preemptible
loop. It's not related to the heap size at all.
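
For what it's worth, a minimal sketch of such a loop: the body of spin
contains no function calls, so (before Go 1.14's asynchronous preemption) the
runtime has no preemption point inside it, and a stop-the-world started while
it runs has to wait for it to finish.

package main

import (
    "fmt"
    "runtime"
    "time"
)

// spin is a call-free loop: nothing in its body yields a preemption point.
func spin(n int) int {
    sum := 0
    for i := 0; i < n; i++ {
        sum += i
    }
    return sum
}

func main() {
    go func() {
        fmt.Println(spin(1000000000)) // long, non-preemptible loop on one P
    }()
    time.Sleep(10 * time.Millisecond) // let the loop get going

    start := time.Now()
    runtime.GC() // needs a stop-the-world; on old Go it waits for spin to return
    fmt.Println("GC took", time.Since(start))
}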

To test this theory, you can set GOEXPERIMENT=preemptibleloops and rebuild
your Go tree (the compiler has to be built with this, so you can't just
turn it on to build your project). I wouldn't recommend running in
production with this, but if it eliminates the long pauses, we'll at least
know that's the culprit.

Since these are quite long, the other thing you can do is run with the
execution tracer (https://godoc.org/runtime/trace). You'll be able to see
exactly what's happening at the beginning of each GC cycle. If you do have
non-preemptible loops, you should also see goroutines executing for much
longer than 10ms at a time, which is the default preemption bound.
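
A minimal use of the tracer looks like this; the trace file name and the sleep
that stands in for the workload window are arbitrary:

package main

import (
    "os"
    "runtime/trace"
    "time"
)

func main() {
    f, err := os.Create("trace.out")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    if err := trace.Start(f); err != nil {
        panic(err)
    }
    defer trace.Stop()

    time.Sleep(30 * time.Second) // capture the window where the long pauses occur
}

Viewing the result with go tool trace trace.out shows each GC cycle, its STW
phases, and how long individual goroutines run without being preempted.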

On Thu, Feb 23, 2017 at 1:46 PM, Oliver Beattie  wrote:

> I am looking for some advice about how I can debug some long GC pauses I
> am observing in our production workloads under go 1.8 (the problem is not
> specific to 1.8, though). This is a very simple network server – basically
> a HTTP ping endpoint – but I regularly see tail request latencies of
> >100ms. I have enabled GODEBUG=gctrace=1, and I can see some quite long
> STW pauses amid lots of much less worrying pauses:
>
> gc 54 @348.007s 0%: 0.061+81+0.040 ms clock, 0.12+0.39/81/81+0.081 ms cpu,
> 4->4->1 MB, 5 MB goal, 2 P
> gc 55 @358.007s 0%: 0.21+83+0.019 ms clock, 0.43+80/2.7/81+0.039 ms cpu,
> 4->4->1 MB, 5 MB goal, 2 P
> *gc 56 @367.507s 0%: 80+1.3+0.065 ms clock, 161+0.080/1.2/82+0.13 ms cpu,
> 4->4->1 MB, 5 MB goal, 2 P*
> gc 57 @377.726s 0%: 0.054+63+0.023 ms clock, 0.10+0.68/61/0.44+0.046 ms
> cpu, 4->4->1 MB, 5 MB goal, 2 P
> gc 58 @388.007s 0%: 0.033+81+0.036 ms clock, 0.067+0.32/80/81+0.072 ms
> cpu, 4->4->1 MB, 5 MB goal, 2 P
> gc 59 @398.007s 0%: 0.021+82+0.019 ms clock, 0.043+0.17/80/82+0.038 ms
> cpu, 4->4->1 MB, 5 MB goal, 2 P
> gc 60 @407.630s 0%: 0.012+57+0.031 ms clock, 0.025+0.25/0.64/57+0.063 ms
> cpu, 4->4->1 MB, 5 MB goal, 2 P
> gc 61 @418.007s 0%: 0.19+1.0+79 ms clock, 0.38+0.28/0.69/0.98+159 ms cpu,
> 4->4->1 MB, 5 MB goal, 2 P
> gc 62 @427.507s 0%: 0.21+81+0.29 ms clock, 0.42+81/0.96/81+0.58 ms cpu,
> 4->4->1 MB, 5 MB goal, 2 P
> gc 63 @437.507s 0%: 0.015+81+0.053 ms clock, 0.031+0.29/0.98/80+0.10 ms
> cpu, 4->4->1 MB, 5 MB goal, 2 P
> *gc 64 @443.507s 0%: 81+1.2+0.032 ms clock, 162+0.040/1.2/0.44+0.065 ms
> cpu, 4->4->1 MB, 5 MB goal, 2 P*
> scvg2: inuse: 4, idle: 2, sys: 7, released: 0, consumed: 7 (MB)
> gc 65 @453.507s 0%: 0.13+81+0.051 ms clock, 0.26+0.20/81/82+0.10 ms cpu,
> 4->4->1 MB, 5 MB goal, 2 P
>
> If I am reading this correctly, some of these STW pauses are 80+
> milliseconds, in order to scan a minuscule heap. I am not experienced with
> debugging the GC in Go, so I'd appreciate any pointers as to why this could
> be happening and what I can do to get to the bottom of the behaviour. Many
> thanks :)
>



[go-nuts] Considering dropping GO386=387

2020-07-14 Thread 'Austin Clements' via golang-nuts
Hi everyone. We’re exploring the possibility of dropping 387 floating-point
support and requiring SSE2 support for GOARCH=386 in the native gc
compiler, potentially in Go 1.16. This would raise the minimum GOARCH=386
requirement to the Intel Pentium 4 (released in 2000) or AMD Opteron/Athlon
64 (released in 2003).

There are several reasons we’re considering this:

   1. While 387 support isn’t a huge maintenance burden, it does take time
   away from performance and feature work and represents a fair amount of
   latent complexity.
   2. 387 support has been a regular source of bugs (#36400, #27516, #22429,
   #17357, #13923, #12970, #4798, just to name a few).
   3. 387 bugs often go undetected for a long time because we don’t have
   builders that support only 387 (so unsupported instructions can slip in
   unnoticed).
   4. Raising the minimum requirement to SSE2 would allow us to also assume
   many other useful architectural features, such as proper memory fences and
   128 bit registers, which would simplify the compiler and runtime and allow
   for much more efficient implementations of core functions like memmove on
   386.
   5. We’re exploring switching to a register-based calling convention in
   Go 1.16, which promises significant performance improvements, but retaining
   387 support will definitely complicate this and slow our progress.


The gccgo toolchain will continue to support 387 floating-point, so this
remains an option for projects that absolutely must use 387 floating-point.

We’d like to know if there are still significant uses of GO386=387,
particularly for which using gccgo would not be a viable option.

Thanks!



Re: [go-nuts] Re: Considering dropping GO386=387

2020-07-16 Thread 'Austin Clements' via golang-nuts
Thanks for that data point, Nick. It's a good idea to make the build fail
if GO386 is set to 387 once we drop support. It already fails if GO386 is set
to any unsupported value, but we could continue to check GO386 even though
there would only be one supported value, and perhaps give a nicer error if
it's set to 387.


On Wed, Jul 15, 2020 at 1:21 PM Nick Craig-Wood  wrote:

> I make a GO386=387 build for rclone, eg
>
> https://github.com/rclone/rclone/issues/437
>
> People love running rclone on ancient computers to rescue data off them I
> guess.
>
> This would affect a very small percentage of users and there are always
> older versions of rclone they can use so I'm not too bothered if support is
> dropped.
>
> I haven't tried compiling rclone with gccgo for a while.
>
> It would be helpful if the build fails rather than silently ignoring the
> GO386 flag if this proposal does go forward.
>
> On Tuesday, 14 July 2020 at 13:56:58 UTC+1 aus...@google.com wrote:
>
>> Hi everyone. We’re exploring the possibility of dropping 387
>> floating-point support and requiring SSE2 support for GOARCH=386 in the
>> native gc compiler, potentially in Go 1.16. This would raise the minimum
>> GOARCH=386 requirement to the Intel Pentium 4 (released in 2000) or AMD
>> Opteron/Athlon 64 (released in 2003).
>>
>> There are several reasons we’re considering this:
>>
>>1. While 387 support isn’t a huge maintenance burden, it does take
>>time away from performance and feature work and represents a fair amount of
>>latent complexity.
>>2. 387 support has been a regular source of bugs (#36400, #27516, #22429,
>>#17357, #13923, #12970, #4798, just to name a few).
>>3. 387 bugs often go undetected for a long time because we don’t have
>>builders that support only 387 (so unsupported instructions can slip in
>>unnoticed).
>>4. Raising the minimum requirement to SSE2 would allow us to also
>>assume many other useful architectural features, such as proper memory
>>fences and 128 bit registers, which would simplify the compiler and runtime
>>and allow for much more efficient implementations of core functions like
>>memmove on 386.
>>5. We’re exploring switching to a register-based calling convention
>>in Go 1.16, which promises significant performance improvements, but
>>retaining 387 support will definitely complicate this and slow our progress.
>>
>>
>> The gccgo toolchain will continue to support 387 floating-point, so this
>> remains an option for projects that absolutely must use 387 floating-point.
>>
>> We’d like to know if there are still significant uses of GO386=387,
>> particularly for which using gccgo would not be a viable option.
>>
>> Thanks!
>>



Re: [go-nuts] Re: Go 1.18 Beta 1 is released

2021-12-15 Thread 'Austin Clements' via golang-nuts
Jan, assuming you're running on an AMD CPU, this is go.dev/issue/34988 (if
you're not running on an AMD CPU, that would be very interesting to know!).
The TL;DR is that this appears to be a kernel bug, and we have a C
reproducer, but we do not yet have a fix or a workaround.


On Wed, Dec 15, 2021 at 7:57 AM Jan Mercl <0xj...@gmail.com> wrote:

> On Tue, Dec 14, 2021 at 8:51 PM Cherry Mui  wrote:
>
> > We have just released go1.18beta1, a beta version of Go 1.18.
> > It is cut from the master branch at the revision tagged go1.18beta1.
> >
> > Please try your production load tests and unit tests with the new
> version.
> > Your help testing these pre-release versions is invaluable.
> >
> > Report any problems using the issue tracker:
> > https://golang.org/issue/new
>
> The link requires a GitHub account, which I don't have, so I'm reporting
> here instead:
>
> When trying to build from source on netbsd/amd64 in qemu, both
> 'all.bash' and 'make.bash' crash; logs attached. FYI, with Go 1.17.5
> and the same qemu VM, 'all.bash' fails, but does not crash IIRC and
> 'make.bash' completes without issues.
>
> -j
>
