Those who follow some of my posts here might know that I have been discussing
channels and waitgroups recently, and I wrote a very slim and simple
waitgroup that works purely with a channel.

Note that it requires only one channel. At first I had separate ready and
done channels, but I found a way to use nil and close to replace the ready
and done signals to the main thread. Here is the link to it:

https://git.parallelcoin.io/dev/9/src/branch/dev/pkg/util/chanwg/waitgroup.go
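
To give an idea of the shape, here is a rough sketch of my own (this is just
an illustration, not the code at the link; the linked version manages with
just the one channel, while this sketch keeps a separate done channel for
readability):

package chanwg

// Sketch of a channel-driven waitgroup: one goroutine owns the
// counter, Add/Done send signed deltas over ops, and completion is
// broadcast to every waiter by closing done. One-shot, not reusable.
type WaitGroup struct {
	ops  chan int      // signed deltas from Add and Done
	done chan struct{} // closed when the count falls back to zero
}

func New() *WaitGroup {
	wg := &WaitGroup{
		ops:  make(chan int),
		done: make(chan struct{}),
	}
	go func() {
		count := 0
		for delta := range wg.ops {
			count += delta
			if count <= 0 {
				close(wg.done) // wakes every Wait at once
				return
			}
		}
	}()
	return wg
}

func (wg *WaitGroup) Add(n int) { wg.ops <- n }
func (wg *WaitGroup) Done()     { wg.ops <- -1 }
func (wg *WaitGroup) Wait()     { <-wg.done }

Usage is the same shape as sync.WaitGroup, except that construction goes
through New so the counting goroutine gets started.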

For comparison, here is the code in the sync library:

https://golang.org/src/sync/waitgroup.go

The first thing you will notice is that it is a LOT shorter. It does not 
make use of the internal race detection library, though I can see how that 
would let me allow callers to inspect the worker count, a function I tried 
to add but which raced no matter which way the data fed out (even when 
copying it in the critical section in the New function).

It is not racy if it is used exactly the way the API presents itself.

I haven't yet written a comparison benchmark to evaluate the difference in 
overhead between the two, but it seems to me that my code is almost 
certainly no heavier in size, and thus in cache burden, and unless all the 
extra machinery for handling unsafe pointers and the race library is a lot 
more svelte than it looks, I'd guess my waitgroup may even have lower 
overhead. But of course such guesses are worthless if microseconds are at 
stake, so I should really write a benchmark in the test.
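
If I were writing it, it would look something like the following (a rough
sketch only; I'm assuming it sits in the same chanwg package as the sketch
above, so New is visible):

package chanwg

import (
	"sync"
	"testing"
)

// BenchmarkSyncWaitGroup measures the standard library version over
// a minimal Add/Done/Wait cycle.
func BenchmarkSyncWaitGroup(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var wg sync.WaitGroup
		wg.Add(1)
		go wg.Done()
		wg.Wait()
	}
}

// BenchmarkChanWaitGroup measures the channel-based version over the
// same cycle, assuming the New/Add/Done/Wait API sketched earlier.
func BenchmarkChanWaitGroup(b *testing.B) {
	for i := 0; i < b.N; i++ {
		wg := New()
		wg.Add(1)
		go wg.Done()
		wg.Wait()
	}
}

Then "go test -bench ." would give ns/op for each side by side.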

The one last thing is that I avoid the need for atomics by using a 
replicated data type design for the increment/decrement, which is not 
order-sensitive: given the same set of inputs, it makes no difference what 
order they are received in; at the end the total will be the same. Ah yes, 
they are called Commutative Replicated Data Types (CmRDTs).
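
To illustrate the order-insensitivity, here is a toy example of my own (not
code from the waitgroup itself):

package main

import (
	"fmt"
	"math/rand"
)

// A single goroutine folds deltas from a channel into a sum. Because
// integer addition is commutative, the arrival order of the deltas
// never changes the final total, so no atomic and no lock is needed,
// only one receiver.
func main() {
	deltas := []int{+1, +1, +1, -1, +1, -1, -1, -1}
	// shuffle to simulate an arbitrary concurrent arrival order
	rand.Shuffle(len(deltas), func(i, j int) {
		deltas[i], deltas[j] = deltas[j], deltas[i]
	})
	ops := make(chan int)
	total := make(chan int)
	go func() {
		sum := 0
		for d := range ops {
			sum += d
		}
		total <- sum
	}()
	for _, d := range deltas {
		ops <- d
	}
	close(ops)
	fmt.Println(<-total) // always 0, whatever the order
}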

This isn't a distributed system, but the order sensitivity of concurrent 
computations is the same problem no matter what pipes the messages pass 
through. The data type is just as applicable to concurrent use as to 
distributed use, in this type of case.

I just wanted to present it here and any comments about it are most welcome.

