turn off TSO
the problems sound similar to the one I reported a while back. turning off TSO
fixed it.
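
(For anyone who finds this thread later: the usual way to turn TSO off is
"ifconfig <ifname> -tso", made persistent via the ifconfig_<ifname> line in
/etc/rc.conf. The sketch below shows roughly what that does underneath, via
the SIOCGIFCAP/SIOCSIFCAP ioctls. It is an untested illustration; the default
interface name "ix0" is only a placeholder.)

/*
 * tso_off.c - illustrative sketch only: clear the TSO capability bits
 * on an interface, roughly what "ifconfig ix0 -tso" does.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <sys/sockio.h>
#include <net/if.h>

#include <err.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    const char *ifname = (argc > 1) ? argv[1] : "ix0";  /* placeholder */
    struct ifreq ifr;
    int s;

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
        err(1, "socket");

    memset(&ifr, 0, sizeof(ifr));
    strlcpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));

    if (ioctl(s, SIOCGIFCAP, &ifr) == -1)
        err(1, "SIOCGIFCAP %s", ifname);

    /* ifr_curcap holds the currently enabled capabilities; clear TSO. */
    ifr.ifr_reqcap = ifr.ifr_curcap & ~(IFCAP_TSO4 | IFCAP_TSO6);
    if (ioctl(s, SIOCSIFCAP, &ifr) == -1)
        err(1, "SIOCSIFCAP %s", ifname);

    printf("TSO disabled on %s\n", ifname);
    close(s);
    return (0);
}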
danny
On Mar 20, 2014, at 3:26 PM, Garrett Wollman wrote:
> I recently put a new server running 9.2 (with local patches for NFS)
> into production, and it's immediately started to fail in an
Hi
I will be 'experimenting' with 10g in the next few months, so
I need to buy some cards.
After googling for some time, I noticed that there is not really much real
info out there, and some of it is a bit dated.
Since these cards are pricey, could those that have such cards share some info?
cheers,
>
>
> On Friday, June 8, 2012 at 7:54 AM, Daniel Braniss wrote:
>
> > Hi
> > I will be 'experimenting' with 10g in the next few months, so
> > I need to buy some cards,
> > After googling for some time, I noticed that there is not realy muc
thanks to all who responded!
from the rough polling, it seems the preference order is:
Intel, Myricom
Solarflare, Chelsio
Now I'll try to 'borrow' some of these.
thanks again,
danny
hi,
I'm trying out a 4-way Dell PowerEdge C5125/AMD server with onboard
Intel(R) PRO/1000 Network Connection version - 2.3.1
when running FreeBSD 8.3 (from sometime around Nov 2) all is ok.
with the latest (at least Friday's) 9.1-PRERELEASE it's getting 'constipated'.
It seems that NFS writes
surprise! all is ok. I will now have to go through the logs to see what
happened, but my guess is that the switch this host was connected to was under
heavy load; it has a cluster of HPCs on it.
thanks, have a nice weekend and season's greetings!
danny
>
> Jack
>
>
> On Thu, D
this code behaves correctly when run from a diskless host which booted via PXE,
but fails on a host that was booted from disk.
hint: the non-working version sends a packet with a non-broadcast ethernet
address and an IP address of 255.255.255.255; the working version sets the
ethernet address to 0x
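
(Daniel's program isn't included in this excerpt. For context, here is a
minimal sketch of the kind of sender being described: SO_BROADCAST set and the
datagram addressed to 255.255.255.255; port 9999 is an arbitrary value for the
example. On the wire such a packet should carry the ethernet broadcast
destination ff:ff:ff:ff:ff:ff, whereas the failing case above ends up
addressed to the default router's MAC.)

/*
 * bcast.c - illustrative sketch only, not the program from the thread:
 * send one UDP datagram to the limited broadcast address.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <err.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    struct sockaddr_in dst;
    const char msg[] = "hello";
    int s, on = 1;

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
        err(1, "socket");

    /* Without SO_BROADCAST, sendto() to 255.255.255.255 fails with EACCES. */
    if (setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on)) == -1)
        err(1, "setsockopt(SO_BROADCAST)");

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_len = sizeof(dst);                      /* BSD-specific field */
    dst.sin_port = htons(9999);                     /* arbitrary example port */
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);  /* 255.255.255.255 */

    if (sendto(s, msg, sizeof(msg), 0,
        (struct sockaddr *)&dst, sizeof(dst)) == -1)
        err(1, "sendto");

    close(s);
    return (0);
}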
Hi Eygene,
> Daniel, good day.
>
> Mon, Jul 18, 2011 at 05:04:27PM +0300, Daniel Braniss wrote:
> > this code behaves correctly when run from a diskless host which
> > booted via PXE, but fails on a host that was booted from disk.
> > hint: the non working sends a
> Tue, Jul 19, 2011 at 10:40:11AM +0300, Daniel Braniss wrote:
> > > And that non-broadcast ethernet address is the MAC of your
> > > default router?
> > yes.
with dest_addr = INADDR_BROADCAST on the non-diskless host:
09:44:29.850576 00:0d:b9:00:72:a8 (oui Unknown) > 0
hi,
I have a host (Dell R730) with both cards, connected to an HP8200
switch at 10Gb.
when writing to the same storage (a NetApp) this is what I get:
  ix0:    ~130MB/s
  mlxen0: ~330MB/s
this is via NFS v3 over TCP.
I can get similar (
> On Aug 17, 2015, at 12:41 PM, Slawa Olhovchenkov wrote:
>
> On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote:
>
>> hi,
>> I have a host (Dell R730) with both cards, connected to an HP8200
>> switch at 10Gb.
>> when writing to the
:-)
> I used to tweak the card settings, but now it's just stock. You may want to
> check your settings, the Mellanox may just have better defaults for your
> switch.
>
> On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov wrote:
> O
> On Aug 17, 2015, at 3:21 PM, Rick Macklem wrote:
>
> Daniel Braniss wrote:
>>
>>> On Aug 17, 2015, at 1:41 PM, Christopher Forgeron
>>> wrote:
>>>
>>> FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and FreeBSD
>>>
> On Aug 18, 2015, at 12:49 AM, Rick Macklem wrote:
>
> Daniel Braniss wrote:
>>
>>> On Aug 17, 2015, at 3:21 PM, Rick Macklem wrote:
>>>
>>> Daniel Braniss wrote:
>>>>
>>>>> On Aug 17, 2015, at 1:41 PM, Christopher Forger
sorry, it's been a tough day, we had a major meltdown caused by a faulty GBIC
:-(
anyway, could you tell me what to do?
comment it out, or fix the off-by-one?
the machine is not yet in production.
thanks,
danny
> On 18 Aug 2015, at 16:32, Hans Petter Selasky wrote:
>
> On 08/18/15 14:53, Ric
> On 19 Aug 2015, at 16:00, Rick Macklem wrote:
>
> Hans Petter Selasky wrote:
>> On 08/19/15 09:42, Yonghyeon PYUN wrote:
>>> On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote:
On 08/18/15 23:54, Rick Macklem wrote:
> Ouch! Yes, I now see that the code that counts the
ment in sys/net/if_var.h it is clear what it means, but for some reason I
>>> didn't read it that way before? (I think it was the part that said the
>>> driver didn't have to subtract for the headers that confused me?)
>>>
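
(For readers following along: the limits being discussed are the if_hw_tsomax,
if_hw_tsomaxsegcount and if_hw_tsomaxsegsize fields of struct ifnet, declared
in sys/net/if_var.h. The fragment below only illustrates how a driver's attach
routine might advertise them; the numbers are invented for the example, and
whether they must account for the protocol headers is exactly the point being
debated in this thread.)

/* Illustrative kernel fragment, not taken from any driver in the thread. */
#include <sys/param.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>

static void
example_set_tso_limits(struct ifnet *ifp)
{
    /* Largest TSO frame the hardware accepts, in bytes. */
    ifp->if_hw_tsomax = 65518;
    /* Most DMA segments (mbufs) one TSO frame may span. */
    ifp->if_hw_tsomaxsegcount = 35;
    /* Largest single segment, in bytes. */
    ifp->if_hw_tsomaxsegsize = 2048;
}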
> On 22 Aug 2015, at 14:59, Rick Macklem wrote:
>
> Daniel Braniss wrote:
>>
>>> On Aug 22, 2015, at 12:46 AM, Rick Macklem wrote:
>>>
>>> Yonghyeon PYUN wrote:
>>>> On Wed, Aug 19, 2015 at 09:00:35AM -0400, Rick Macklem wrote:
>
> On 24 Aug 2015, at 02:02, Rick Macklem wrote:
>
> Daniel Braniss wrote:
>>
>>> On 22 Aug 2015, at 14:59, Rick Macklem wrote:
>>>
>>> Daniel Braniss wrote:
>>>>
>>>>> On Aug 22, 2015, at 12:46 AM, Rick Macklem wrote:
>
> On 24 Aug 2015, at 10:22, Hans Petter Selasky wrote:
>
> On 08/24/15 01:02, Rick Macklem wrote:
>> The other thing is the degradation seems to cut the rate by about half each
>> time.
>> 300-->150-->70 I have no idea if this helps to explain it.
>
> Might be a NUMA binding issue for the proc
> On Aug 24, 2015, at 3:25 PM, Rick Macklem wrote:
>
> Daniel Braniss wrote:
>>
>>> On 24 Aug 2015, at 10:22, Hans Petter Selasky wrote:
>>>
>>> On 08/24/15 01:02, Rick Macklem wrote:
>>>> The other thing is the degradation seems to cut