Oh, and I don’t think SCTP is natively supported on Windows yet.
So your interoperability may vary...
> On Dec 30, 2019, at 4:17 PM, Robert Engels wrote:
I’m pretty sure I’m correct. It is a socket type, not an option on TCP, which
equates to a different protocol. If you use that option you get an SCTP
transport, not TCP.
> On Dec 30, 2019, at 4:06 PM, Bruno Albuquerque wrote:
Although I am no expert on the subject, I would doubt this assertion. It is
there in the socket man page on an Ubuntu machine with no mention of
anything specific being needed (other than the implicit fact that you need
a TCP stack that supports it, which should be true for any modern version
of Linux).
I think that distinction is splitting hairs a bit in the case of Go - usually when you speak of concurrent you are talking about completely disparate processes (in the context of an OS). A typical Go server might be handling many types of client requests, but it can easily be viewed as parallel…
One thing this exercise also reminded me of is that using Go for any sort of "real time signal processing" is going to be very difficult - maybe if you lock the event handling routine to a thread, then use native code to change the thread priority - not sure how that would interact with the Go scheduler.
On Mon, Dec 30, 2019 at 10:14 PM Robert Engels
wrote:
> Here is a simple test that demonstrates the dynamics
> https://play.golang.org/p/6SZcxCEAfFp (cannot run in playground)
>
> Notice that the call that uses an over allocated number of routines takes
> 5x longer wall time than the properly sized one.
Also, if running on a machine with a low cpu count (gomaxprocs) you probably need to increase the 'total' multiplier (mine was 12).

-Original Message-
From: Robert Engels
Sent: Dec 30, 2019 3:14 PM
To: Robert Engels , Jesper Louis Andersen
Cc: Brian Candler , golang-nuts
Subject: Re: [go-
Here is a simple test that demonstrates the dynamics: https://play.golang.org/p/6SZcxCEAfFp (cannot run in playground). Notice that the call that uses an over-allocated number of routines takes 5x longer wall time than the properly sized one - this is due to scheduling and contention on the underlying locking structures.
> I am trying to understand what triggers the Csrc.Read(buf) to return
The Read call will eventually turn into a read() system call. For TCP it will
return as long as there is at least one byte in the kernel's receive buffer for
this connection, or if the buffer given to read() is filled up, or when the
connection is closed or an error occurs.
That option requires proprietary protocols, not standard TCP/UDP.
> On Dec 30, 2019, at 12:04 PM, Bruno Albuquerque wrote:
But, to complicate things, you can create what is basically a TCP
connection with packet boundaries using SOCK_SEQPACKET (as opposed to
SOCK_STREAM or SOCK_DGRAM).
On Mon, Dec 30, 2019 at 9:04 AM Jake Montgomery wrote:
ReadAll reads until EOF (or an error), not to a fixed buffer length.
> On Dec 30, 2019, at 11:04 AM, Jake Montgomery wrote:
It sounds like maybe you have some misconceptions about TCP. It is a stream
protocol; there are no data boundaries that are preserved. If you send 20 bytes
via TCP in a single call, it is *likely* that those 20 will arrive together
at the client. But it is *NOT guaranteed*. It is perfectly legitimate for them
to arrive split across multiple reads.
Thank you Keith, that is a very interesting technique; I doubt I would have
come up with that.
Unfortunately, that led me to another problem, as I needed the finalizer to
have access to the entire underlying data, which I did not state when I
wrote my question.
I tried to extend this to provide…
Right, but the overhead is not constant nor free. So if you parallelize the CPU-bound
task into 100 segments and you only have 10 cores, the contention on the
internal locking structures (scheduler, locks in channels) will be significant
and the entire process will probably take far longer…
On Mon, Dec 30, 2019 at 10:46 AM Brian Candler wrote:
> Which switching cost are you referring to? The switching cost between
> goroutines? This is minimal, as it takes place within a single thread. Or
> are you referring to cache invalidation issues? Or something else?
>
>
It is the usual di…
On Sunday, 29 December 2019 22:18:51 UTC, Robert Engels wrote:
>
> I agree. I meant that worker pools are especially useful when you can do
> cpu affinity - doesn’t apply to Go.
>
> I think Go probably needs some idea of “capping” for cpu based workloads.
> You can cap it at the local N CPUs
By d…