David wrote:
> The typical way is to include a channel of capacity 1 in the "message" 
> that's going to the worker. 

I like this idea. But you have to guarantee that only one observer will ever 
check whether the job is done.
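
For example, a minimal sketch of that pattern (all names here are 
illustrative): each job carries its own capacity-1 reply channel, and only 
the submitting goroutine ever receives from it, so the one-observer 
guarantee holds by construction.

    type result struct {
        value string
        err   error
    }

    type job struct {
        payload string
        done    chan result // capacity 1: the worker's send never blocks
    }

    func worker(requests <-chan job) {
        for j := range requests {
            // ... perform the actual work here ...
            j.done <- result{value: "processed " + j.payload}
        }
    }

    func submit(requests chan<- job, payload string) result {
        j := job{payload: payload, done: make(chan result, 1)}
        requests <- j
        return <-j.done // exactly one receive: the single observer
    }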

On Tuesday, September 2, 2025 at 7:37:26 PM UTC+1 David Finkel wrote:

> On Tue, Sep 2, 2025 at 2:27 PM robert engels <ren...@ix.netcom.com> wrote:
>
>> Yes, but without external synchronization you have no ordering on the 
>> senders - so which one actually blocks waiting for the receiver is random; 
>> further writers are blocked and added to the list, but there is no 
>> ordering among them.
>>
> If they're all initiating a send at exactly the same time, yes, there's no 
> ordering. However, if there's any temporal separation, the ordering of 
> sends completing will be FIFO. (it's a linked-list-based queue)
>
>>
>> As the OP described, the writer needs to wait for a response, so “the 
>> goroutine doesn't have to wait for another goroutine to schedule in 
>> order to pick up the next work in the queue” doesn’t apply.
>>
> I disagree; that response has to be sent somehow.
> The typical way is to include a channel of capacity 1 in the "message" 
> that's going to the worker. 
>
>>
>> So having unbuffered channels on both the request and response sides when 
>> you only have a single worker simplifies things - but if the requestor 
>> doesn’t read the response (or none is provided) you will have a deadlock.
>>
> Right, that's another reason why it's a bad idea to have the response 
> channel unbuffered (beyond unbuffered writes introducing goroutine 
> scheduling dependencies).
>
>>
>> On Sep 2, 2025, at 13:07, David Finkel <david....@gmail.com> wrote:
>>
>>
>> On Tue, Sep 2, 2025 at 11:59 AM robert engels <ren...@ix.netcom.com> 
>> wrote:
>>
>>> I don’t think this is correct. There is only a single select on the 
>>> consumer side - the order of sends by the producers is already random 
>>> based on goroutine wakeup/scheduling.
>>>
>>> On Sep 2, 2025, at 10:46, Jason E. Aten <j.e....@gmail.com> wrote:
>>>
>>> Yes, but not in terms of performance. Using a buffered
>>> channel could provide more "fairness" in the sense of "first-come, first 
>>> served".
>>>
>>> If you depend on the (pseudo-randomized) select to decide which
>>> producer's job gets serviced next, you could increase your response
>>> latency by arbitrarily delaying an early job for a long time, while a
>>> late-arriving job can "jump the queue" and get serviced immediately.
>>>
>>> The buffered channel will preserve some of the arrival order, but only
>>> up to its length--after that, late arrivers will still be randomly
>>> serviced, due to the pseudo-random select mechanism. So if you
>>> demand true FIFO for all jobs, you might well be better served
>>> by using a mutex and a slice to keep your jobs anyway--such
>>> that the limit on your queue is available memory rather than a
>>> fixed channel buffer size.
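>>>
>>> A rough sketch of that mutex-and-slice queue (illustrative and untested;
>>> it assumes a job type and imports "sync"):
>>>
>>> type fifoQueue struct {
>>>     mu   sync.Mutex
>>>     cond *sync.Cond
>>>     jobs []job // bounded only by available memory
>>> }
>>>
>>> func newFIFOQueue() *fifoQueue {
>>>     q := &fifoQueue{}
>>>     q.cond = sync.NewCond(&q.mu)
>>>     return q
>>> }
>>>
>>> // push appends a job; arrival order is preserved exactly.
>>> func (q *fifoQueue) push(j job) {
>>>     q.mu.Lock()
>>>     q.jobs = append(q.jobs, j)
>>>     q.mu.Unlock()
>>>     q.cond.Signal()
>>> }
>>>
>>> // pop blocks until a job is available and returns the oldest one.
>>> func (q *fifoQueue) pop() job {
>>>     q.mu.Lock()
>>>     defer q.mu.Unlock()
>>>     for len(q.jobs) == 0 {
>>>         q.cond.Wait()
>>>     }
>>>     j := q.jobs[0]
>>>     q.jobs = q.jobs[1:]
>>>     return j
>>> }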
>>>
>> I don't think channel receive order is random when the senders are 
>> blocked.
>> Sending goroutines are queued in a linked list in FIFO order within the 
>> runtime's channel struct (hchan) 
>> <https://cs.opensource.google/go/go/+/master:src/runtime/chan.go;l=45;drc=a8564bd412d4495a6048f981d30d4d7abb1e45a7>
>> (different cases of a select are selected at random for fairness, though) 
>>  
>> I would recommend using a buffered channel with size 1 for any response 
>> channel, so the worker goroutine doesn't have to wait for another 
>> goroutine to be scheduled in order to pick up the next work item in the 
>> queue.
>>
>>>
>>> Of course, if you overrun all of your memory, you are in trouble 
>>> again. As usual, back-pressure is a really critical component
>>> of most designs. You usually want it.
>>>
>>>
>> Back-pressure is definitely helpful in cases like this.
>>
>>>
>>> On Tuesday, September 2, 2025 at 4:33:05 PM UTC+1 Egor Ponomarev wrote:
>>>
>>>> Hi Robert, Jason,
>>>>
>>>> Thank you both for your detailed and thoughtful responses — they helped 
>>>> me see the problem more clearly. Let me share some more details about our 
>>>> specific case:
>>>>
>>>>    - We have exactly one consumer (worker), and we can’t add more because 
>>>>      the underlying resource can only be accessed by one process at a time 
>>>>      (think of it as exclusive access to a single connection).
>>>>    - The worker operation is a TCP connection, which is usually fast, but 
>>>>      the network can occasionally be unreliable and introduce delays.
>>>>    - We may have lots of producers, and each producer waits for a result 
>>>>      after submitting a request.
>>>>
>>>> Given these constraints, can an unbuffered channel have any advantage 
>>>> over a buffered one for our case? 
>>>> My understanding is that producers will just end up blocking when the 
>>>> single worker can’t keep up — the only difference is whether the blocking 
>>>> happens at “enqueue time” (unbuffered channel) or later (buffered channel).
>>>>
>>>> What’s your view — is there any benefit in using an unbuffered/buffered 
>>>> channel in this situation?
>>>>
>>>> Thanks again for the guidance!
>>>>
>>>> On Monday, September 1, 2025 at 14:04:48 UTC-5, Jason E. Aten wrote: 
>>>>
>>>>> Hi Egor,
>>>>>
>>>>> To add to what Robert advises -- there is no one-size-fits-all 
>>>>> guidance that covers all situations. You have to understand the 
>>>>> principles of operation and reason/measure from there. There are
>>>>> heuristics, but even then exceptions to the rules of thumb abound.
>>>>>
>>>>> As Robert said, in general the buffered channel will give you
>>>>> more opportunity for parallelism, and might move your bottleneck
>>>>> forward or back in the processing pipeline. 
>>>>>
>>>>> You could try to study the location of your bottleneck, and tracing
>>>>> ( https://go.dev/blog/execution-traces-2024 ) might help
>>>>> there (but I've not used it myself--I would just start with a
>>>>> basic CPU profile and see if there are hot spots).
>>>>>
>>>>> An old design heuristic in Go was to always start
>>>>> with unbuffered channels. Then add buffering to tune
>>>>> performance. 
>>>>>
>>>>> However, there are plenty of times when I
>>>>> allocate a channel with a buffer of size 1 so that I know
>>>>> my initial sender can queue an initial value without itself
>>>>> blocking. 
>>>>>
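>>>>> For instance, a tiny sketch:
>>>>>
>>>>> ready := make(chan struct{}, 1)
>>>>> ready <- struct{}{} // succeeds immediately; unbuffered, this would block
>>>>>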
>>>>> Sometimes, for flow control, I never want to
>>>>> buffer a channel--in particular when going network <-> channel,
>>>>> because I want the local back-pressure to propagate
>>>>> through TCP/QUIC and result in back-pressure on the
>>>>> remote side; if I buffer, then in effect I'm asking for work I cannot
>>>>> yet handle. 
>>>>>
>>>>> If I'm using a channel as a test event history, then I typically
>>>>> give it a massive buffer, and even then also wrap it in a function
>>>>> that will panic if the channel reaches its cap(), because
>>>>> I never really want my tests to be blocked on recording the
>>>>> test-execution event "trace" that I'm going to
>>>>> assert over in the test. So in that case I always want big buffers.
>>>>>
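>>>>> Something like this sketch (a hypothetical helper, not from any library;
>>>>> the len/cap check races in general, but is fine with a single writer):
>>>>>
>>>>> // traceEvent records a test event, panicking rather than blocking.
>>>>> func traceEvent(events chan string, ev string) {
>>>>>     if len(events) == cap(events) {
>>>>>         panic("test trace channel full; enlarge its buffer")
>>>>>     }
>>>>>     events <- ev
>>>>> }
>>>>>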
>>>>> As above, exceptions to most heuristics are common.
>>>>>
>>>>> In your particular example, I suspect your colleague is right
>>>>> and you are not gaining anything from channel buffering--of course
>>>>> it is impossible to know for sure without the system in front
>>>>> of you to measure.
>>>>>
>>>>> Lastly, you likely already realize this, but the request+response
>>>>> wait pattern you cited typically needs both the request and the wait
>>>>> for the response to be wrapped in selects with a "bail-out" or 
>>>>> shutdown channel:
>>>>>
>>>>> jobTicket := makeJobTicketWithDoneChannel()
>>>>> select {
>>>>> case sendRequestToDoJobChan <- jobTicket:
>>>>> case <-bailoutOnShutDownChan: // or context.Done, etc
>>>>>     // exit/cleanup here
>>>>> }
>>>>> select {
>>>>> case <-jobTicket.Done:
>>>>> case <-bailoutOnShutDownChan:
>>>>>     // exit/cleanup here
>>>>> }
>>>>> in order to enable graceful stopping/shutdown of goroutines.
>>>>> On Monday, September 1, 2025 at 5:13:32 PM UTC+1 robert engels wrote:
>>>>>
>>>>>> There is not enough info to give a full recommendation, but I suspect 
>>>>>> you are misunderstanding how it works.
>>>>>>
>>>>>> Buffered channels allow the producers to continue without waiting 
>>>>>> for the consumer to finish.
>>>>>>
>>>>>> If the producer can’t continue until the consumer runs and provides a 
>>>>>> value via a callback or other channel, then yes, the buffered channel 
>>>>>> might not seem to provide any value - except that in a highly concurrent 
>>>>>> environment goroutines are usually not in a pure ‘reading the channel’ 
>>>>>> mode - they are finishing up a previous request - so the buffering allows 
>>>>>> some level of additional concurrency in that state.
>>>>>>
>>>>>> When requests are extremely short in duration this can matter a lot.
>>>>>>
>>>>>> Usually, though, a better solution is to simply have N+1 consumers for 
>>>>>> N producers and use a handoff channel (unbuffered) - but if the workload 
>>>>>> is CPU-bound you will expend extra resources context switching (i.e. 
>>>>>> thrashing) - because these goroutines will be timesliced.
>>>>>>
>>>>>> Better to cap the consumers and use a buffered channel.
>>>>>>
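>>>>>> Roughly, as a sketch (job and worker are illustrative, and it imports
>>>>>> "runtime"):
>>>>>>
>>>>>> requests := make(chan job, 1000) // buffered queue absorbs bursts
>>>>>> for i := 0; i < runtime.NumCPU(); i++ {
>>>>>>     go worker(requests) // consumer count capped at the CPU count
>>>>>> }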
>>>>>>
>>>>>>
>>>>>> On Sep 1, 2025, at 08:37, Egor Ponomarev <egorvpo...@gmail.com> 
>>>>>> wrote:
>>>>>>
>>>>>> We’re using a typical producer-consumer pattern: goroutines send 
>>>>>> messages to a channel, and a worker processes them. A colleague asked me 
>>>>>> why we even bother with a buffered channel (say, size 1000) if we’re 
>>>>>> waiting for the result anyway.
>>>>>>
>>>>>> I tried to explain it like this: there are two kinds of waiting.
>>>>>>
>>>>>>
>>>>>> “Bad” waiting – when a goroutine is blocked trying to send to a full 
>>>>>> channel:
>>>>>> requestChan <- req // goroutine just hangs here, blocking the system
>>>>>>
>>>>>> “Good” waiting – when the send succeeds quickly, and you wait for the 
>>>>>> result afterwards:
>>>>>> requestChan <- req // quickly enqueued
>>>>>> result := <-resultChan // wait for result without holding up others
>>>>>>
>>>>>> The point: a big buffer lets goroutines hand off tasks fast and free 
>>>>>> themselves for new work. Under burst load, this is crucial — it lets the 
>>>>>> system absorb spikes without slowing everything down.
>>>>>>
>>>>>> But here’s the twist: my colleague tested it with 2000 goroutines and 
>>>>>> got roughly the same processing time. His argument: “waiting to enqueue 
>>>>>> or 
>>>>>> dequeue seems to perform the same no matter how many goroutines are 
>>>>>> waiting.”
>>>>>>
>>>>>> So my question is: does Go have any official docs that describe this 
>>>>>> idea? *Effective Go* shows semaphores, but it doesn’t really spell 
>>>>>> out this difference in blocking types.
>>>>>>
>>>>>> Am I misunderstanding something, or is this just one of those 
>>>>>> “implicit Go concurrency truths” that everyone sort of knows but isn’t 
>>>>>> officially documented?