ok, I'm banning myself from this forum for a while. Sorry about this. I'm not right at the moment.
On Sunday, 5 May 2019 21:55:57 UTC+2, Louki Sumirniy wrote:
>
> I think the key thing is that the Add function I have written is not
> concurrency safe. I didn't intend it to be, as I only had the use case of
> a single thread managing a worker pool, and I am pretty sure it is fine
> for that; for larger pools it has lower overhead in both memory *and*
> processing.
>
> I have revised it so the 'we are started' clause also ensures the channel
> is in the open and operational state, and the channel is closed if it is
> open. This will, yes, cause a panic if the Add function is called
> concurrently, which enforces the contract I specify.
>
> It does not cover all of the cases sync.WaitGroup does, but it covers the
> biggest use case with a lot less code (no imports at all):
>
> https://play.golang.org/p/FwdKAVnNMk-
>
> On Saturday, 4 May 2019 23:56:01 UTC+2, Robert Engels wrote:
>>
>> The reason your code is shorter is that it is broken. I tried to explain
>> that to you. Try running the stdlib wait group tests against your code.
>> They will fail.
>>
>> On May 4, 2019, at 4:22 PM, Louki Sumirniy <louki.sumi...@gmail.com>
>> wrote:
>>
>> Those who follow some of my posts here might know that I was recently
>> discussing channels and waitgroups, and I wrote a very slim and simple
>> waitgroup that works purely with a channel.
>>
>> Note that it requires only one channel. At first I had a ready and a
>> done channel, but I found a way to use nil and close to replace the
>> ready and done signals for the main thread. Here is the link to it:
>>
>> https://git.parallelcoin.io/dev/9/src/branch/dev/pkg/util/chanwg/waitgroup.go
>>
>> For comparison, here is the code in the sync library:
>>
>> https://golang.org/src/sync/waitgroup.go
>>
>> The first thing you will notice is that it is a LOT shorter.
>> It does not make use of the race library, though I can see how that
>> would allow me to let callers inspect the worker count, a function I
>> tried to add but which raced no matter which way the data fed out (even
>> when copying it in the critical section in the New function).
>>
>> It is not racy if it is used exactly the way the API presents itself.
>>
>> I haven't written a comparison benchmark to evaluate the difference in
>> overhead between the two yet, but it seems to me that my code is almost
>> certainly not any heavier in size, and thus in cache burden, and unless
>> all those extra things for handling unsafe pointers and the race library
>> are a lot more svelte than they look, I'd guess my waitgroup may even
>> have lower overhead. Of course such guesses are worthless if
>> microseconds are at stake, so I should really write a benchmark in the
>> test.
>>
>> The one last thing is that I avoid the need for atomic operations by
>> using a replicated-data-type design for the increment/decrement, which
>> is not order-sensitive: given the same set of inputs, it makes no
>> difference in what order they are received; at the end the total will be
>> the same. Ah yes, they are called Commutative Replicated Data Types.
>>
>> This isn't a distributed system, but the order sensitivity of concurrent
>> computation is the same problem no matter what pipes the messages pass
>> through. In this type of use case the data type is as applicable to
>> distributed systems as to concurrent ones.
>>
>> I just wanted to present it here, and any comments about it are most
>> welcome.
>>
>> --
>> You received this message because you are subscribed to the Google
>> Groups "golang-nuts" group.
>> To unsubscribe from this group and stop receiving emails from it, send
>> an email to golan...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
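[Editor's sketch] The commutativity argument in the quoted post can be shown in miniature: a counter driven by +1/-1 deltas ends at the same total no matter how the deltas are interleaved, which is why the single receiving goroutine needs no ordering machinery for the final result. This illustrative example is not from the linked code; the names are mine.

```go
package main

import (
	"fmt"
	"math/rand"
)

// replay folds a stream of increment/decrement deltas into a total.
// Addition is commutative, so the final total is independent of the
// order in which the deltas arrive.
func replay(deltas []int) int {
	total := 0
	for _, d := range deltas {
		total += d
	}
	return total
}

func main() {
	// Three Adds and three Dones, as one worker-pool run would produce.
	deltas := []int{+1, +1, +1, -1, -1, -1}
	want := replay(deltas)
	for trial := 0; trial < 5; trial++ {
		shuffled := append([]int(nil), deltas...)
		rand.Shuffle(len(shuffled), func(i, j int) {
			shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
		})
		fmt.Println(replay(shuffled) == want) // true on every trial
	}
}
```

The caveat the thread itself runs into: only the *final* total is order-independent. An intermediate zero (a Done overtaking a later Add) is still observable and still matters, which is why the waitgroup's contract requires all Adds to land before the count can reach zero.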