In this case it won’t matter performance-wise, but two unbuffered channels - 
request and response - probably simplify things.
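
Roughly this shape (a sketch only - the types and the worker body are
placeholders for your real code):

package main

import "fmt"

type request struct{ id int }
type response struct{ id, result int }

// worker has exclusive access to the single underlying resource and
// handles one request at a time.
func worker(reqCh <-chan request, respCh chan<- response) {
	for req := range reqCh {
		// do the real (TCP) work here
		respCh <- response{id: req.id, result: req.id * 2}
	}
}

func main() {
	reqCh := make(chan request)   // unbuffered: a producer blocks until the worker is free
	respCh := make(chan response) // unbuffered: the worker blocks until the caller takes the answer
	go worker(reqCh, respCh)

	reqCh <- request{id: 1} // hand the request to the worker
	resp := <-respCh        // wait for that request's response
	fmt.Println(resp.result)
}

Because both channels are unbuffered and the worker sends each response
before taking the next request, every producer gets back the response to
its own request, with no ticket/ID matching needed.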

> On Sep 2, 2025, at 04:47, Egor P <guide.novosibi...@gmail.com> wrote:
> 
> Hi Robert, Jason,
> 
> Thank you both for your detailed and thoughtful responses — they helped me 
> see the problem more clearly. Let me share some more details about our 
> specific case:
> 
> We have exactly one consumer (worker), and we can’t add more because the 
> underlying resource can only be accessed by one process at a time (think of 
> it as exclusive access to a single connection).
> 
> The worker operation is a TCP connection, which is usually fast, but the 
> network can occasionally be unreliable and introduce delays.
> 
> We may have lots of producers, and each producer waits for a result after 
> submitting a request.
> 
> Given these constraints, can an unbuffered channel have any advantage over a 
> buffered one for our case? 
> My understanding is that producers will just end up blocking when the single 
> worker can’t keep up; the only real difference is whether the blocking happens 
> at “enqueue time” (unbuffered channel) or later, once the buffer fills 
> (buffered channel).
> 
> What’s your view: is there any real benefit to an unbuffered channel over a 
> buffered one (or vice versa) in this situation?
> 
> 
> Thanks again for the guidance!
> 
> On Monday, September 1, 2025 at 14:04:48 UTC-5, Jason E. Aten wrote: 
>> Hi Egor,
>> 
>> To add to what Robert advises -- there is no one-size-fits-all 
>> guidance that covers all situations. You have to understand the 
>> principles of operation and reason/measure from there. There are
>> heuristics, but even then exceptions to the rules of thumb abound.
>> 
>> As Robert said, in general the buffered channel will give you
>> more opportunity for parallelism, and might move your bottleneck
>> forward or back in the processing pipeline. 
>> 
>> You could try to study the location of your bottleneck, and tracing
>> ( https://go.dev/blog/execution-traces-2024 ) might help
>> there (but I've not used it myself--I would just start with a
>> basic CPU profile and see if there are hot spots).
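>> 
>> For example, a basic CPU profile around the workload can be as little as
>> this (sketch only; the output file name and helper are arbitrary):
>> 
>> // profiledRun wraps whatever workload you want to measure in a CPU
>> // profile (imports: "log", "os", "runtime/pprof").
>> func profiledRun(workload func()) {
>>     f, err := os.Create("cpu.pprof")
>>     if err != nil {
>>         log.Fatal(err)
>>     }
>>     defer f.Close()
>> 
>>     if err := pprof.StartCPUProfile(f); err != nil {
>>         log.Fatal(err)
>>     }
>>     defer pprof.StopCPUProfile()
>> 
>>     workload()
>> }
>> 
>> and then "go tool pprof cpu.pprof" to look for hot spots.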
>> 
>> An old design heuristic in Go was to always start
>> with unbuffered channels. Then add buffering to tune
>> performance. 
>> 
>> However there are plenty of times when I
>> allocate a channel with a buffer of size 1 so that I know
>> my initial sender can queue an initial value without itself
>> blocking. 
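>> 
>> For example (contrived, but it shows the shape):
>> 
>> // A buffer of exactly one lets the sender seed the channel before any
>> // receiver exists; with an unbuffered channel this send would block
>> // (and here, deadlock).
>> func seeded() <-chan int {
>>     seed := make(chan int, 1)
>>     seed <- 42 // completes immediately thanks to the size-1 buffer
>>     return seed
>> }
>> 
>> The consumer can come along whenever it likes and still find the
>> initial value waiting.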
>> 
>> Sometimes, for flow control, I never want to
>> buffer a channel--in particular when bridging network <-> channel,
>> because I want the local back-pressure to propagate
>> through TCP/QUIC and result in back-pressure on the
>> remote side; if I buffer, then in effect I'm asking for work I cannot
>> yet handle. 
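>> 
>> A skeletal version of that network <-> channel case (made-up function;
>> error handling kept minimal):
>> 
>> import "net"
>> 
>> // Because msgs is unbuffered, readLoop stops calling Read as soon as
>> // the downstream consumer falls behind; TCP flow control then pushes
>> // back on the remote peer instead of us queueing work we cannot yet
>> // handle.
>> func readLoop(conn net.Conn, msgs chan<- []byte) {
>>     defer close(msgs)
>>     for {
>>         buf := make([]byte, 4096)
>>         n, err := conn.Read(buf)
>>         if err != nil {
>>             return
>>         }
>>         msgs <- buf[:n] // blocks until the consumer is ready: the back-pressure
>>     }
>> }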
>> 
>> If I'm using a channel as a test event history, then I typically
>> give it a massive buffer, and even then also wrap it in a function
>> that will panic if the channel is at cap(), because
>> I never really want my tests to block on recording
>> the test-execution event "trace" that I'm going to
>> assert over in the test.  So in that case I always want big buffers.
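>> 
>> Something along those lines (sketch; the names are invented):
>> 
>> // A test "trace" channel: huge buffer, plus a non-blocking send
>> // wrapper that panics rather than ever letting the test block on
>> // recording an event.
>> type eventTrace struct {
>>     ch chan string
>> }
>> 
>> func newEventTrace() *eventTrace {
>>     return &eventTrace{ch: make(chan string, 100000)}
>> }
>> 
>> func (t *eventTrace) record(ev string) {
>>     select {
>>     case t.ch <- ev:
>>     default:
>>         panic("event trace at cap(); grow the buffer rather than block the test")
>>     }
>> }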
>> 
>> As above, exceptions to most heuristics are common.
>> 
>> In your particular example, I suspect your colleague is right
>> and you are not gaining anything from channel buffering--of course
>> it is impossible to know for sure without the system in front
>> of you to measure.
>> 
>> Lastly, you likely already realize this, but the request+response
>> wait pattern you cited typically needs both the request send and the wait
>> for the response to be wrapped in selects with a "bail-out" or shutdown 
>> channel:
>> 
>> // jobTicket carries its own Done channel that the worker closes
>> // (or sends on) once the job has been handled.
>> jobTicket := makeJobTicketWithDoneChannel()
>> 
>> // Hand the job to the worker, unless we are shutting down.
>> select {
>> case sendRequestToDoJobChan <- jobTicket:
>> case <-bailoutOnShutDownChan: // or ctx.Done(), etc.
>>     // exit/cleanup here
>> }
>> 
>> // Wait for this job's result, again allowing for shutdown.
>> select {
>> case <-jobTicket.Done:
>> case <-bailoutOnShutDownChan:
>>     // exit/cleanup here
>> }
>> in order to enable graceful stopping/shutdown of goroutines.
>> On Monday, September 1, 2025 at 5:13:32 PM UTC+1 robert engels wrote:
>>> There is not enough info to give a full recommendation but I suspect you 
>>> are misunderstanding how it works.
>>> 
>>> A buffered channel allows the producers to continue without waiting for the 
>>> consumer to finish.
>>> 
>>> If the producer can’t continue until the consumer runs and provides a value 
>>> via a callback or another channel, then yes, the buffered channel might not 
>>> seem to provide any value - except that in a highly concurrent environment 
>>> goroutines are usually not sitting in a pure ‘reading the channel’ mode - 
>>> they are finishing up a previous request - so the buffering allows some 
>>> additional concurrency in the system.
>>> 
>>> When requests are extremely short in duration this can matter a lot.
>>> 
>>> Usually, though, a better solution is to simply have N+1 consumers for N 
>>> producers and use a handoff channel (unbuffered) - but if the workload is 
>>> CPU-bound you will expend extra resources on context switching (i.e. 
>>> thrashing), because these goroutines will be timesliced.
>>> 
>>> Better to cap the consumers and use a buffered channel.
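>>> 
>>> E.g. a rough sketch of capped consumers draining a buffered channel (the 
>>> numbers are arbitrary):
>>> 
>>> package main
>>> 
>>> import "sync"
>>> 
>>> func main() {
>>>     const numWorkers = 4         // capped consumers
>>>     jobs := make(chan int, 1000) // buffer absorbs bursts from many producers
>>> 
>>>     var wg sync.WaitGroup
>>>     for w := 0; w < numWorkers; w++ {
>>>         wg.Add(1)
>>>         go func() {
>>>             defer wg.Done()
>>>             for j := range jobs {
>>>                 _ = j * j // stand-in for the real work
>>>             }
>>>         }()
>>>     }
>>> 
>>>     for i := 0; i < 10000; i++ {
>>>         jobs <- i
>>>     }
>>>     close(jobs)
>>>     wg.Wait()
>>> }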
>>> 
>>> 
>>> 
>>> 
>>>> On Sep 1, 2025, at 08:37, Egor Ponomarev <egorvpo...@gmail.com> wrote:
>>>> 
>>> 
>>>> We’re using a typical producer-consumer pattern: goroutines send messages 
>>>> to a channel, and a worker processes them. A colleague asked me why we 
>>>> even bother with a buffered channel (say, size 1000) if we’re waiting for 
>>>> the result anyway.
>>>> 
>>>> I tried to explain it like this: there are two kinds of waiting.
>>>> 
>>>> 
>>>> 
>>>> “Bad” waiting – when a goroutine is blocked trying to send to a full 
>>>> channel:
>>>> requestChan <- req // goroutine just hangs here, blocking the system
>>>> 
>>>> “Good” waiting – when the send succeeds quickly, and you wait for the 
>>>> result afterwards:
>>>> requestChan <- req // quickly enqueued
>>>> result := <-resultChan // wait for result without holding up others
>>>> 
>>>> 
>>>> The point: a big buffer lets goroutines hand off tasks fast and free 
>>>> themselves for new work. Under burst load, this is crucial — it lets the 
>>>> system absorb spikes without slowing everything down.
>>>> 
>>>> But here’s the twist: my colleague tested it with 2000 goroutines and got 
>>>> roughly the same processing time. His argument: “waiting to enqueue or 
>>>> dequeue seems to perform the same no matter how many goroutines are 
>>>> waiting.”
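>>>> 
>>>> A micro-benchmark of roughly this shape is what I have in mind when 
>>>> comparing the two (this is only a sketch, not his actual test code):
>>>> 
>>>> package chanbench
>>>> 
>>>> import (
>>>>     "sync"
>>>>     "testing"
>>>> )
>>>> 
>>>> // benchProducers has many producers hand integers to a single consumer
>>>> // through ch; run it once with an unbuffered channel and once with a
>>>> // buffered one to compare.
>>>> func benchProducers(b *testing.B, ch chan int) {
>>>>     const producers = 2000
>>>>     done := make(chan struct{})
>>>>     go func() { // the single consumer just drains the channel
>>>>         for range ch {
>>>>         }
>>>>         close(done)
>>>>     }()
>>>> 
>>>>     b.ResetTimer()
>>>>     var wg sync.WaitGroup
>>>>     for p := 0; p < producers; p++ {
>>>>         wg.Add(1)
>>>>         go func() {
>>>>             defer wg.Done()
>>>>             for i := 0; i < b.N/producers; i++ {
>>>>                 ch <- i
>>>>             }
>>>>         }()
>>>>     }
>>>>     wg.Wait()
>>>>     close(ch)
>>>>     <-done
>>>> }
>>>> 
>>>> func BenchmarkUnbuffered(b *testing.B) { benchProducers(b, make(chan int)) }
>>>> func BenchmarkBuffered(b *testing.B)   { benchProducers(b, make(chan int, 1000)) }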
>>>> 
>>>> So my question is: does Go have any official docs that describe this idea? 
>>>> Effective Go shows semaphores, but it doesn’t really spell out this 
>>>> difference in blocking types.
>>>> 
>>>> 
>>>> Am I misunderstanding something, or is this just one of those “implicit Go 
>>>> concurrency truths” that everyone sort of knows but isn’t officially 
>>>> documented?
>>>> 
>>>> 
>>> 
>>> 
> 
> 
