It won't go on forever. The formal proofs are in the original Sapphire and
distributed Train algorithm papers. Informally, the proofs show that there
is no way to create a new white object, no way to pass a white object between
threads more than a bounded number of times, and that the set of reachable
non-black objects only shrinks, so marking must terminate.
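To make the informal argument concrete, here is a minimal tri-color marking sketch (my own toy code, not the runtime's): an object is shaded from white to gray at most once, so the gray worklist can only drain, even over cyclic graphs.

```go
package main

// node is a toy heap object for the sketch.
type node struct {
	children []*node
	color    int // 0 = white, 1 = gray, 2 = black
}

// mark runs tri-color marking from the given roots. Each node is shaded
// gray at most once (only white nodes are shaded), so the loop terminates.
func mark(roots []*node) {
	var gray []*node
	shade := func(n *node) {
		if n != nil && n.color == 0 {
			n.color = 1
			gray = append(gray, n)
		}
	}
	for _, r := range roots {
		shade(r)
	}
	for len(gray) > 0 {
		n := gray[len(gray)-1]
		gray = gray[:len(gray)-1]
		n.color = 2 // blacken: all children are about to be shaded
		for _, c := range n.children {
			shade(c)
		}
	}
}
```

Note that a cycle does not make this loop forever: the second visit to a node finds it already gray or black and does nothing.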
> type Treenode struct {
>     left  *Treenode
>     right *Treenode
> }
One could of course design a language where Treenode is called cons,
left is called car, right is called cdr, and (car nil) and (cdr nil) are
both nil. You could implement such a language by putting 2 words of 0 at
the location nil points to.
Breaking the Go CGO pointer rules comes up periodically, and the rules
have not changed. Applications have lived with the rules simply
because breaking them means revisiting the application code
every time a new Go release comes out. Did the compiler improve so that
some object is now allocated on the stack instead of the heap?
One approach is to maintain a shadow stack holding the pointers in a place
the GC already knows about, such as an array allocated in the heap. This can
be done in Go, the language. Dereferences would use a level of indirection;
perhaps one would pass an index into the array instead of the pointer.
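A minimal sketch of that idea (the type and method names are made up): pointers live in a heap-allocated slice that the GC already scans, and callers hold indices rather than the pointers themselves.

```go
package main

// shadowStack keeps pointers reachable from a GC-visible heap slice.
type shadowStack struct {
	slots []*int
}

// push stores p where the GC can see it and returns its index.
func (s *shadowStack) push(p *int) int {
	s.slots = append(s.slots, p)
	return len(s.slots) - 1
}

// deref adds the level of indirection: index -> pointer -> value.
func (s *shadowStack) deref(i int) int {
	return *s.slots[i]
}

// pop releases the most recently pushed slot so the GC can reclaim it.
func (s *shadowStack) pop() {
	s.slots[len(s.slots)-1] = nil
	s.slots = s.slots[:len(s.slots)-1]
}
```

The non-Go side would only ever hold the integer index, which is not a pointer and so cannot violate the cgo pointer rules.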
When you say "set up GC rate (10%) to reduce memory usage down to normal",
what exactly did the program do?
Compute (CPU) costs money and heap memory (DRAM) costs money. Minimizing
the sum should be the goal. This requires one to have a model of the
relative costs of CPU vs. RAM; HW folks balance these costs all the time
and can chime in on whether it is possible.
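As a toy illustration of such a model (every constant below is invented): raising GOGC buys heap to save GC CPU, lowering it does the reverse, and one can scan for the setting that minimizes the combined cost.

```go
package main

// totalCost is a toy dollar cost for a given GOGC setting, assuming the
// heap target is roughly live*(1+GOGC/100) and GC CPU work is roughly
// inversely proportional to GOGC. All constants are made up.
func totalCost(gogc float64) float64 {
	const (
		liveGB     = 12.0 // live heap, GB
		ramPerGB   = 1.0  // relative cost per GB of DRAM
		cpuPerUnit = 50.0 // relative cost per unit of GC CPU work
	)
	heap := liveGB * (1 + gogc/100) // heap target under this GOGC
	gcCPU := 100 / gogc             // GC CPU work shrinks as GOGC grows
	return heap*ramPerGB + gcCPU*cpuPerUnit
}

// bestGOGC scans a grid of GOGC values for the cheapest total cost.
func bestGOGC() float64 {
	best, bestCost := 10.0, totalCost(10)
	for g := 20.0; g <= 1000; g += 10 {
		if c := totalCost(g); c < bestCost {
			best, bestCost = g, c
		}
	}
	return best
}
```

With these particular constants the sweet spot lands around GOGC=200; with a different CPU/RAM price ratio it moves, which is exactly the point.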
> possible.
>
> I will provide the logs from tonight though. Do you want them zipped here
> in the thread?
>
>
> Tue, 5 Dec 2017 at 15:37, Rick Hudson wrote:
>
>> Glad to have helped. The runtime team would be interested in seeing what
>>
Glad to have helped. The runtime team would be interested in seeing what
these pauses look like in the beta. If you have the time, could you send
them to us after the beta comes out?
On Tue, Dec 5, 2017 at 9:06 AM, Henrik Johansson wrote:
> Ok, so it's not bad, that's good!
>
> The initial ~20 sec
gc 347 @6564.164s 0%: 0.89+518+1.0 ms clock, 28+3839/4091/3959+33 ms cpu,
23813->23979->12265 MB, 24423 MB goal, 32 P
What I'm seeing here is that you have 32 HW threads and you spend 0.89+518+1.0,
or 520 ms, wall clock in the GC. You also spend 28+3839+4091+3959+33, or
11950 ms, CPU time out of a total of 520 ms x 32 = 16640 ms available
during the cycle.
> <https://golang.org/src/runtime/proc.go> and malloc.go
> <https://golang.org/src/runtime/malloc.go> with non-blocking mode. As I
> read the code, GC eschews the fancy concurrent behavior of the new garbage
> collector.
>
> On Tue, Nov 29, 2016 at 11:46 AM, Rick Hudson wrote:
The documentation is correct. The current runtime.GC() implementation
invokes a Stop The World (STW) GC that completes before runtime.GC()
returns. It is useful when doing benchmarking to avoid some of the
non-determinism caused by the GC.
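That use is easy to sketch: force a full collection just before the timed region so a cycle triggered by earlier allocation doesn't land inside the measurement. The `timeIt` helper is my own, not a standard API.

```go
package main

import (
	"runtime"
	"time"
)

// timeIt measures work after first forcing a full GC. runtime.GC blocks
// until the collection completes, so the GC state at the start of every
// measurement is comparable.
func timeIt(work func()) time.Duration {
	runtime.GC()
	start := time.Now()
	work()
	return time.Since(start)
}
```

The standard `testing` benchmark harness does something similar between runs for the same reason.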
On Tue, Nov 29, 2016 at 1:15 PM, Ian Lance Taylor wrote: