Hi John/Egon/Augusto,

I should point out that all we need to guarantee (barring abnormal termination, of course) is that once a task starts processing, it finishes. Partially processed messages are bad, but HTTP requests that don't result in a message being processed at all are okay.
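For concreteness, the handler I have in mind is shaped roughly like this (my own sketch with illustrative names; the actual code is the playground example quoted further down):

package handler

import "sync"

type Handler struct {
	mu     sync.RWMutex
	closed chan struct{}
}

func New() *Handler {
	return &Handler{closed: make(chan struct{})}
}

// Handle drops the message if Close has already been called; otherwise it
// processes the message in its own goroutine while holding the read lock.
func (h *Handler) Handle(msg string) {
	select {
	case <-h.closed:
		return // shutting down; the message was never started
	default:
	}
	go func() {
		h.mu.RLock()
		defer h.mu.RUnlock()
		process(msg)
	}()
}

// Close stops new work, then uses the write lock to wait for every task that
// already holds the read lock to finish.
func (h *Handler) Close() {
	close(h.closed)
	h.mu.Lock()
	h.mu.Unlock()
}

func process(msg string) { /* do the actual work */ }

The window being discussed is between the go statement in Handle and the RLock inside the goroutine: a task launched in that window may only acquire the read lock after Close has already returned.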
We don't need to guarantee that every Accept in the HTTP server results in a processed message. We handle this on the OPS side by ensuring we stop sending requests to that instance before terminating the process. We just want to make sure, at that point, that the messages which did make it to the handler are flushed. So take the case where:

h.Handle(...) <-- gets past the closed channel check, calls go ..., but the goroutine doesn't execute yet.
h.Close() <-- closes the close channel, Locks and Unlocks, returns.
...now the goroutine executes and acquires the read lock.

We actually don't care whether "Handle" completes in this example. We only care that if our task handler starts processing a message, it completes the processing.

Thanks again,

Evan

On Tuesday, 13 September 2016 13:24:03 UTC-7, aro...@gmail.com wrote:
>
> The mutex approach is fundamentally broken because you can't guarantee that the tasks are all started (and have a read lock acquired) before you call close.
>
> Consider:
> h.Handle(...) <-- gets past the closed channel check, calls go ..., but the goroutine doesn't execute yet.
> h.Close() <-- closes the close channel, Locks and Unlocks, returns.
> ...now the goroutine executes and acquires the read lock.
>
> So really, if you can't control the Handle() function, you need two WaitGroups: one to verify that all goroutines have started before shutting down the task handler, and a second one for all goroutines to have finished. However, it's tricky if we don't know the real use case.
>
> Sounds like you are trying to do graceful HTTP shutdown. Have you looked at other libraries that do that? If you don't have a way to account for the time between when Handle(...) is called and when the goroutine starts, you can always miss a task that got called near the time Close() was called.
>
> - Augusto
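If I'm reading the two-WaitGroup suggestion correctly, it would look roughly like this (my sketch, not Augusto's code). The second group's Add happens inside the goroutine, which is only safe because Close waits on the first group before waiting on the second:

package handler

import "sync"

type Handler struct {
	started  sync.WaitGroup // done as soon as a task goroutine is running
	finished sync.WaitGroup // done when a task goroutine completes
}

func (h *Handler) Handle(msg string) {
	h.started.Add(1) // a task has been dispatched
	go func() {
		h.finished.Add(1) // register with the second group...
		h.started.Done()  // ...before reporting that this goroutine is running
		defer h.finished.Done()
		process(msg)
	}()
}

// Close waits until every dispatched goroutine is running (and therefore
// registered with finished), then waits for them all to complete. A Handle
// call racing with Close can still slip past both waits, which is the gap
// Augusto points out.
func (h *Handler) Close() {
	h.started.Wait()
	h.finished.Wait()
}

func process(msg string) { /* do the actual work */ }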
> On Tuesday, September 13, 2016 at 12:50:50 PM UTC-7, Evan Digby wrote:
>>
>> Hi Aroman,
>>
>> Your approach using the WaitGroup is definitely better in this toy example. The reason I didn't use a WaitGroup is that the non-toy example wraps the HTTP server's handler. I have no way to inject an "Add" before the goroutine is created, since that's handled by Go's HTTP server, short of re-implementing the accept->handle loop on the listener myself.
>>
>> Apologies for not giving the full context in the example.
>>
>> I'm not sure how it could block an outstanding task, since the channel is closed before the Lock(): no additional calls to RLock will be made at that point, and the Lock will just wait until all of the RLocks are released.
>>
>> Regarding your testing strategy, I do like it better than any of my current strategies; however, there is still a chance that a task could complete between lines 90 and 91:
>>
>> h.Close()
>> events <- ALL_TASKS_FINISHED
>>
>> So it doesn't solve the raciness I'm concerned about unless you put an arbitrary sleep in the handlers, which I'm trying to avoid.
>>
>> On Tuesday, 13 September 2016 12:34:17 UTC-7, aro...@gmail.com wrote:
>>>
>>> The WaitGroup is better than the lock approach, since the lock approach could block an outstanding task. The key to using WaitGroups is to call Add() outside of the goroutines that might call Done():
>>>
>>> https://play.golang.org/p/QVWoy8fCmI
>>>
>>> On Tuesday, September 13, 2016 at 12:19:16 PM UTC-7, Evan Digby wrote:
>>>>
>>>> Hi John,
>>>>
>>>> Thanks for the reply. I've tried many incarnations that include WaitGroups; however, none seem to achieve the desired result.
>>>>
>>>> If I add a WaitGroup with a deferred Done() in the handler and then wait after the Close(), then the test itself implements the requirement and won't protect against future refactors. There's no way to test that a WaitGroup is done without waiting for it, and even if there were, it would be racy, because tasks could complete between the Close() and the call to Wait(). If I wrapped the Wait() and the Done() in goroutines to see which one happened first, that would also be racy.
>>>>
>>>> If you have something else in mind, can you elaborate on how it would help in this case?
>>>>
>>>> Thanks again!
>>>>
>>>> Evan
>>>>
>>>> On Tuesday, 13 September 2016 12:01:29 UTC-7, John Souvestre wrote:
>>>>>
>>>>> Have you considered using a sync.WaitGroup?
>>>>>
>>>>> John
>>>>>
>>>>> John Souvestre - New Orleans LA
>>>>>
>>>>> *From:* golan...@googlegroups.com [mailto:golan...@googlegroups.com] *On Behalf Of* Evan Digby
>>>>> *Sent:* 2016 September 13, Tue 13:56
>>>>> *To:* golang-nuts
>>>>> *Subject:* [go-nuts] Having difficulty testing this "cleanly"
>>>>>
>>>>> Has anyone come across a good, non-racy way to ensure that N tasks are guaranteed to be completed after a function is called? Essentially I have a "Close" function that must be guaranteed to block until all tasks are finished. Achieving this was pretty simple: wrap each task in an RLock, and then take a Lock on close.
>>>>>
>>>>> Example: https://play.golang.org/p/7lhBPUhkUE
>>>>>
>>>>> Now, for posterity, I want to write a solid test that guarantees Close meets the requirement that all tasks must finish first. In that example, try commenting out the RLock/RUnlock on lines 25/26. You'll see that it no longer outputs many, if any, lines. I'm trying to prevent that from happening in the future by some cowboy refactor!
>>>>>
>>>>> All of the ways I can come up with involve sleeping or launching more tasks than I _think_ can be finished in time, which is obviously not good!
>>>>>
>>>>> I feel like I must be missing some obvious way to test this, and I'll end up feeling silly once someone replies with the solution. I'm okay with that!
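And for the record, the non-toy version I mentioned above, where I wrap the HTTP server's handler, ends up shaped roughly like this. Because ServeHTTP already runs inside the per-request goroutine, the closed check and the read lock can live in the wrapper itself. Again, this is just a sketch with made-up names, not the production code:

package handler

import (
	"net/http"
	"sync"
)

// draining wraps an http.Handler so that Close blocks until every request
// that acquired the read lock has returned from the inner handler.
type draining struct {
	mu     sync.RWMutex
	closed chan struct{}
	inner  http.Handler
}

func wrap(inner http.Handler) *draining {
	return &draining{closed: make(chan struct{}), inner: inner}
}

func (d *draining) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	select {
	case <-d.closed:
		http.Error(w, "shutting down", http.StatusServiceUnavailable)
		return
	default:
	}
	d.mu.RLock()
	defer d.mu.RUnlock()
	d.inner.ServeHTTP(w, r)
}

// Close rejects new requests and waits for every request already holding the
// read lock to finish.
func (d *draining) Close() {
	close(d.closed)
	d.mu.Lock()
	d.mu.Unlock()
}

A request that passes the closed check just before Close runs and only takes the read lock afterwards is still processed in full; it just isn't covered by Close returning, which, as I said at the top, is acceptable for us.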