On 12/03/20 01:54, Adrian Ratnapala wrote:
> BTW: Thanks for all this investigation and writeups, they are
> interesting, and I look forward to your test results.
>
> Despite my question, I think using a custom allocator is perfectly
> reasonable. go/issues/23199 is showing us that sync.Pool's interface
> isn't great for allocating byte-buffers (it'd be better to explicitly
> ask for objects of a given size). In that sense, a custom allocator
> with an appropriate interface is more future-proof.
Hi Adrian,

I did some more tests; you can find some test implementations in my
branches here:
https://github.com/drakkan/sftp
- the allocator branch uses a variable-size allocator
- the allocator2 branch uses a fixed-size allocator with a sync.Pool
- the allocator1 branch uses a fixed-size allocator: the first commit has
an implementation very similar to the other ones; in the last commit I
replaced the pagelist with a list of byte array slices
In a single SFTP request more than one allocation could be required, so
we need to keep track of all the allocations and release them after the
request is served; for this reason, inside the allocator, each allocated
byte array keeps a reference to the unique request ID.
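To give an idea of the approach, here is a simplified sketch (the names
and the page size are made up, the real code is in the branches above):

package allocator

import "sync"

const pageSize = 32768 // placeholder fixed page size, not the real value

// requestAllocator hands out fixed-size pages and tracks them per SFTP
// request ID, so every page of a request can be released at once.
type requestAllocator struct {
	mu   sync.Mutex
	free [][]byte            // pages available for reuse
	used map[uint32][][]byte // pages in use, keyed by request ID
}

func newRequestAllocator() *requestAllocator {
	return &requestAllocator{used: make(map[uint32][][]byte)}
}

// getPage returns a pageSize buffer and records it under requestID.
func (a *requestAllocator) getPage(requestID uint32) []byte {
	a.mu.Lock()
	defer a.mu.Unlock()
	var p []byte
	if n := len(a.free); n > 0 {
		p = a.free[n-1]
		a.free = a.free[:n-1]
	} else {
		p = make([]byte, pageSize)
	}
	a.used[requestID] = append(a.used[requestID], p)
	return p
}

// releasePages returns all pages held by requestID to the free list,
// to be called once the request has been served.
func (a *requestAllocator) releasePages(requestID uint32) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.free = append(a.free, a.used[requestID]...)
	delete(a.used, requestID)
}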
Based on the benchmarks here:
https://github.com/drakkan/sftp/blob/allocator1/allocator_test.go#L97
allocator1 seems the fastest implementation: both versions (with and
without the pagelist) are a bit faster than allocator2, which uses
sync.Pool. The allocator branch has the slowest implementation.
If you have suggestions for improving my implementations, they are welcome!
In a real SFTP test I think the performance of these allocators is very
similar.
I'll add an interface later, after discussing the allocation strategies
with the pkg/sftp maintainers.
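Just to give an idea, such an interface could look something like this
(only a placeholder shape, not a proposed final API):

package allocator

// Allocator is a placeholder sketch of the interface: callers ask for
// a buffer of an explicit size, tracked under a request ID, and release
// everything for that request when it has been served.
type Allocator interface {
	// Allocate returns a buffer of the given size, tracked under requestID.
	Allocate(size int, requestID uint32) []byte
	// ReleaseAll frees every buffer tracked under requestID.
	ReleaseAll(requestID uint32)
}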
thanks
Nicola
>> what I understand reading that issue is that sync.Pool is not the best
>> choice to store variable-length buffers and my first allocator
>
> I think the problem is that the total memory usage of the pool ends up
> O(size of large objects * total number of objects kept). This is very
> wasteful if you need lots of small objects and only a few large ones.
>
>> implementation accepts buffers of any size, each received packet can
>> have different sizes (between 20 and 262144 bytes).
>
> So every 20 byte object will pin down ~100kiB of actual memory, which
> sounds pretty wasteful, but might be acceptable.
>
>> in general and currently, in my not yet public branch, I only allocate
>> packets of 2 different sizes and so sync.Pool could be appropriate, I'll
>
> You could have two separate pools, one for each size. This is
> basically a bucketed allocator, but simpler because you already know
> the structure of your buckets.
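For reference, the two separate pools Adrian suggests could be sketched
like this (the two sizes are placeholders; the real values would come
from the SFTP packet layout):

package allocator

import "sync"

// Placeholder sizes for the two packet classes.
const (
	smallSize = 1024
	largeSize = 262144
)

var smallPool = sync.Pool{New: func() interface{} { return make([]byte, smallSize) }}
var largePool = sync.Pool{New: func() interface{} { return make([]byte, largeSize) }}

// getBuffer picks the pool from the requested size, so a small request
// never pins a large backing array.
func getBuffer(size int) []byte {
	if size <= smallSize {
		return smallPool.Get().([]byte)[:size]
	}
	return largePool.Get().([]byte)[:size]
}

// putBuffer returns buf to the pool that matches its capacity.
func putBuffer(buf []byte) {
	switch cap(buf) {
	case smallSize:
		smallPool.Put(buf[:smallSize])
	case largeSize:
		largePool.Put(buf[:largeSize])
	}
}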