Hi,
I can't put em or igb interfaces into netmap mode on a recent -CURRENT (ix
interfaces work on the same machines). Here are the pkt-gen and dmesg outputs:
# sudo sysctl dev.netmap.admode=1
# sudo sysctl dev.netmap.verbose=1
# sudo ./pkt-gen -i em1
790.411737 main [2274] interface is em1
Hi all,
the "Operation not permitted" is coming from iflib_netmap_register:
ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE);
...
IFDI_INIT(ctx); // for igb it calls em_if_init()
...
return (ifp->if_drv_flags & IFF_DRV_RUNNING ? 0 : 1);
The last line is the problem: if em_if_init() does not leave IFF_DRV_RUNNING
set, the function returns 1, and that value is handed back to userspace as the
errno of the pkt-gen ioctl. Errno 1 is EPERM, hence "Operation not permitted".
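For context, here is a sketch of the whole function as I read it, reconstructed
around the lines quoted above (names follow sys/net/iflib.c, but treat this as
illustrative rather than the committed source):

static int
iflib_netmap_register(struct netmap_adapter *na, int onoff)
{
	struct ifnet *ifp = na->ifp;
	if_ctx_t ctx = ifp->if_softc;

	CTX_LOCK(ctx);
	/* stop the interface so it can come back up in netmap mode */
	ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE);

	if (onoff)
		nm_set_native_flags(na);	/* hand the rings to netmap */
	else
		nm_clear_native_flags(na);

	IFDI_INIT(ctx);		/* for em/igb this calls em_if_init() */
	CTX_UNLOCK(ctx);

	/* a non-zero return becomes the errno of the NIOCREGIF ioctl */
	return (ifp->if_drv_flags & IFF_DRV_RUNNING ? 0 : 1);
}

So the question is why em_if_init() clears or never sets IFF_DRV_RUNNING when
the interface is brought up in netmap mode.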
Hello,
I'm consistently seeing slow download speeds from Amazon S3, but only on FreeBSD.
Other OSes saturate the connection without problems.
This happens with 10.3-RELEASE and 11.0-RELEASE, and only with AWS S3 in
different regions (Ireland, London, Frankfurt, US Standard have been tested).
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211219
--- Comment #14 from Franco Fichtner ---
Created attachment 180048
--> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=180048&action=edit
link state fix for 10.3 / 11.0
Yes, the issue is also present on Soekris boxes, but not on 10.2.
Window scaling and receive buffer scaling are the most likely cause.
Check what the sysctl net.inet.tcp.recvspace is set to, then try
increasing it, e.g.:
sysctl net.inet.tcp.recvspace=655360
Here, that jumped the transfer rate of a wget against your test URL from
3 MB/s to 30 MB/s.
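For reference, the 655360 is not arbitrary: the receive window has to cover
the bandwidth-delay product, and 30 MB/s over a ~17 ms path needs roughly
30 MB/s * 0.017 s ~= 510 KB of window, well above the 64 KB default. The
auto-tuning knobs are worth checking as well (a quick checklist, assuming
stock 10.x/11.x defaults; verify locally):

sysctl net.inet.tcp.rfc1323		# window scaling; needed for windows > 64 KB
sysctl net.inet.tcp.recvbuf_auto	# 1 = receive buffer auto-tuning enabled
sysctl net.inet.tcp.recvbuf_inc		# step size used by the auto-tuner
sysctl net.inet.tcp.recvbuf_max		# ceiling the auto-tuner can grow to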
Regards
Hi Steve,
increasing the buffer size did indeed improve throughput.
I am obviously not an expert in this field, but I don't understand why the
TCP receive window size doesn't increase automatically, or whether it should.
I found this thread on the ML and I'm reading up on the topic a bit more
right now.
It does seem to be related to TCP Receive Window Size.
When I tested on an 11.0-RELEASE box I got 30 MB/s out of the box; the
increase of recvspace was only needed on the original 10.2. However, the
11 box is only 1.2 ms from AWS, whereas the 10.2 box was 17 ms away, which
likely explains the difference.
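The arithmetic backs that up: with the stock net.inet.tcp.recvspace of 64 KB,
throughput is bounded by window / RTT, so roughly:

65536 B / 0.017 s  ~= 3.8 MB/s	(untuned box on the 17 ms path, close to the ~3 MB/s seen)
65536 B / 0.0012 s ~= 55 MB/s	(the 1.2 ms box, which is why it saturates untouched)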
On Thu, Feb 09, 2017 at 10:30:24PM -0800, Gleb Smirnoff wrote:
T> Two important updates.
T>
T> 1) The patch worked pretty okay, but the idea of a separate file type is
T> abandoned. With the current file descriptor code it is almost impossible
T> to racelessly switch fileops and f_data.
T> For ...
Hi,
You're right, we'll try to add more details.
In any case, buf_size, ring_size and if_size are the sizes in bytes of each
buffer, ring and netmap_if (the control data structure), respectively.
So the maximum number of slots for each ring is ring_size/16, as 16 is the
size in bytes of struct netmap_slot.
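For reference, this is the 16-byte slot layout that division assumes (it
matches struct netmap_slot in net/netmap.h; reproduced here just for
illustration):

#include <stdint.h>

struct netmap_slot {		/* 4 + 2 + 2 + 8 = 16 bytes */
	uint32_t buf_idx;	/* index of the buffer holding the packet */
	uint16_t len;		/* length of the data in the buffer */
	uint16_t flags;		/* NS_* flags */
	uint64_t ptr;		/* pointer for indirect buffers */
};

/* so: max slots per ring = ring_size / sizeof(struct netmap_slot) */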
On Thu, Feb 16, 2017 at 09:14:19PM +0100, Vincenzo Maffione wrote:
> Hi,
> You're right, we'll try to add more details.
>
> In any case, buf_size, ring_size and if_size are the sizes in bytes of each
> buffer, ring and netmap_if (the control data structure), respectively.
> So the maximum number of slots for each ring is ring_size/16, as 16 is the
> size in bytes of struct netmap_slot.
I'm not sure what you mean. Until the memory areas are in use, the real
values (*_num, *_size) are not changed.
At NIOCREGIF time you can say which allocator you are interested in by
writing a non-zero id inside req.nr_arg2.
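A minimal userspace sketch of that, using the classic nmreq API from
net/netmap_user.h (error handling omitted, and the allocator id 2 is just an
example value):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <net/netmap_user.h>

int
main(void)
{
	struct nmreq req;
	int fd = open("/dev/netmap", O_RDWR);

	memset(&req, 0, sizeof(req));
	req.nr_version = NETMAP_API;
	strncpy(req.nr_name, "em1", sizeof(req.nr_name) - 1);
	req.nr_arg2 = 2;	/* non-zero id: bind to that allocator */

	if (ioctl(fd, NIOCREGIF, &req) == 0) {
		/* req.nr_memsize etc. now describe the chosen region */
	}
	return (0);
}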
2017-02-16 21:38 GMT+01:00 Slawa Olhovchenkov:
> On Thu, Feb 16, 2017 at 09:14:19PM +0100, Vincenzo Maffione wrote:
> ...
On Thu, Feb 16, 2017 at 09:48:14PM +0100, Vincenzo Maffione wrote:
> I'm not sure what you mean. Until the memory areas are in use, the real
> values (*_num, *_size) are not changed.
> At NIOCREGIF time you can say which allocator you are interested in by
> writing a non-zero id inside req.nr_arg2.
Hi Steve,
> On 16 Feb 2017, at 18:18, Steven Hartland wrote:
>
> It does seem to be related to TCP Receive Window Size.
>
> When I tested on an 11.0-RELEASE box I got 30 MB/s out of the box; the
> increase of recvspace was only needed on the original 10.2. However, the
> 11 box is only 1.2 ms from AWS, whereas the 10.2 box was 17 ms away, which
> likely explains the difference.