On Sat, Apr 14, 2012 at 18:14, Jason Leschnik wrote:
> What about the results on the end node, during the testing?

This is not a very quick way to isolate the problem. Some NICs and
some configurations just can't do line rate at any packet size, or at
any plausible packet size, and if that's th[...]
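For context on "line rate at any packet size": a back-of-the-envelope sketch of gigabit Ethernet's frame budget. These are illustrative arithmetic only, not measurements from this thread.

```shell
# Each frame costs its on-wire size plus 38 bytes of fixed overhead:
# 14 (Ethernet header) + 4 (FCS) + 8 (preamble) + 12 (inter-frame gap).
fps() { echo $(( 1000000000 / ( ($1 + 38) * 8 ) )); }
fps 1500   # -> 81274  frames/sec at a 1500-byte MTU
fps 46     # -> 1488095 frames/sec for minimum-size frames
```

The jump from ~81k to ~1.49M frames/sec is why small packets are much harder on NIC and driver than large ones.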
On Sun, Apr 15, 2012 at 5:37 AM, Yuri wrote:
> On 04/14/2012 12:32, Jason Leschnik wrote:
>>
>> What kind of load are you looking at when running the test?
>>
>> maybe output `vmstat 1 15` during a test run
>
> Here is the log during the test run on the sending side:
>
> # vmstat 1 15
>  procs      memory      page                    disks     faults
> [output truncated]
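Since the vmstat log above is truncated, here is a sketch of how the CPU columns could be pulled out of a saved run. The sample data row below is invented for illustration, not from Yuri's host.

```shell
# Write one made-up FreeBSD vmstat data row to a file.
cat > vmstat.sample <<'EOF'
 1 0 0   512M  1024M   210   0   0   0  220   0   0   0  9500 48000 3200  5 35 60
EOF
# On FreeBSD the last three columns of each data row are user/system/idle CPU %.
awk '{ print "user:", $(NF-2), "sys:", $(NF-1), "idle:", $NF }' vmstat.sample
# -> user: 5 sys: 35 idle: 60
```

A low idle percentage during the run would point at the CPU, not the NIC, as the bottleneck.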
On Sun, Apr 15, 2012 at 5:28 AM, Yuri wrote:
> On 04/14/2012 12:27, Jason Leschnik wrote:
>>
>> cat5e or just cat5?
>
>
> sorry, CAT5e.
>
> Yuri
--
Regards,
Jason Leschnik.
[m] 0432 35 4
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
On Sun, Apr 15, 2012 at 5:21 AM, Yuri wrote:
> On 04/14/2012 12:11, Jason Leschnik wrote:
>>
>> I would first start by doing a point-to-point link between your two
>> end nodes to rule out your network gear as being the problem
>
> Now I did this, connected hosts directly with cat5 cable.
> Sending speed is still 753 Mbits/sec. Receiving speed reported ev[...]
On Sun, Apr 15, 2012 at 4:20 AM, Yuri wrote:
> I am running some tests with a gigabit switch between two 9.0 hosts
> using iperf.
> The best UDP transmit rate I am getting is 753 Mbits/sec with ~3-6%
> packet loss @ 1500 MTU @ 2.5GHz CPU, even though the command 'iperf -c
> X.X.X.X -u -b 1000m' requests the full gigabit.
> I am trying to understand what ex[...]
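For a sense of the packet rates behind those figures, a rough sketch, assuming iperf's default UDP payload of 1470 bytes (the receiving side would be started with `iperf -s -u`):

```shell
# iperf's default UDP datagram carries 1470 bytes of payload.
bits_per_datagram=$(( 1470 * 8 ))             # 11760 bits per datagram
echo $(( 1000000000 / bits_per_datagram ))    # -> 85034 datagrams/sec requested at -b 1000m
echo $((  753000000 / bits_per_datagram ))    # -> 64030 datagrams/sec actually achieved
```

So the sender is generating roughly 64k datagrams/sec against a ~85k/sec target, which is why both per-packet CPU cost and NIC behavior are worth examining.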
On 13. Apr 2012, at 18:03, Hajimu UMEMOTO wrote:
> Hi,
>
>> On Fri, 13 Apr 2012 20:01:39 +1200
>> Andrew Thompson said:
>
> thompsa> On 13 April 2012 18:41, Rainer Bredehorn wrote:
>> Hi!
>>
>>> I have noticed that getifaddrs() does not have sin6_scope_id set to
>>> the interface id[...]