On Wed, 14 Mar 2018 12:37:22 -0700 Ian Lance Taylor <i...@golang.org> wrote:

> On Wed, Mar 14, 2018 at 11:58 AM, Rio <m...@riobard.com> wrote:
> >
> > While implementing a SOCKS proxy in Go, I ran into an issue which is
> > explained in more detail by Evan Klitzke in this post:
> > https://eklitzke.org/goroutines-nonblocking-io-and-memory-usage
> >
> > In my case, each proxied connection costs two goroutines and two buffers
> > in blocking reads. For TCP connections the buffer size can be small
> > (e.g. 2 KB), so the overhead per proxied TCP connection is 8 KB
> > (2 x 2 KB goroutine stack + 2 x 2 KB read buffer). For UDP connections
> > the buffer must be large enough to hold the largest possible packet,
> > due to the packet-oriented nature of the network, so the overhead per
> > proxied UDP connection is 132 KB (2 x 2 KB goroutine stack + 2 x 64 KB
> > read buffer for the largest UDP packet). Handling 10,000 proxied UDP
> > connections therefore requires at least 1.25 GB of memory, which is
> > unnecessary if there were a way to poll for I/O readiness and use a
> > shared read buffer.
> >
> > I'm wondering if there's a better way other than calling
> > syscall.Epoll/Kqueue to create a custom poller?
>
> Even for TCP, that's an interesting point. I wonder if we should have
> a way to specify a number of bytes to read such that we only allocate
> the []byte when there is something to read.
There is an old paper [1] about using a malloc-like interface for stream I/O. The idea is that ReadAlloc(n) returns a buffer of up to n bytes, filled with data from the underlying stream or file. This allowed the implementation to use mmap for files, or buffer management for streams. On the next call, the buffer returned by the previous call was assumed to be no longer in use by the client. There were even ReadRealloc() and ReadAllocAt(), the latter to reposition the file or stream.

For writes, a buffer was returned; on the next WriteAlloc() call, the previous buffer's contents were accepted to be written out. This decoupled the client from the actual I/O and yet avoided extra copying. A final blocking close() would push out any remaining writes to the actual connection or file.

What Ian proposes sounds like a similar model to me.

[1] I finally remembered one author's name, and a web search brought up the paper: "Exploiting the Advantages of Mapped Files for Stream I/O" by Krieger, Stumm & Unrau. I used it as a model to implement some C++ classes at RealNetworks, using nonblocking I/O and a select loop underneath.