Hi Kristof
> On 23.08.2015, at 17:09, Kristof Provost wrote:
>
> - PR 202351
> This is a panic after ip6 reassembly in pf. We set the rcvif to NULL
> when refragmenting. That seems to go OK except when we're refragmenting
> broadcast/multicast packets in the forwarding path. It's not at all …
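As I read the description above, the crash comes from the forwarding path dereferencing m_pkthdr.rcvif after pf cleared it. A minimal sketch of the idea behind a fix (hypothetical helper name, not the actual pf commit; it assumes fragments are chained via m_nextpkt, as ip6_fragment() leaves them):

#include <sys/param.h>
#include <sys/mbuf.h>

/*
 * Hypothetical helper, sketch only -- not the committed pf change.
 * Restore the original receive interface on every fragment produced by
 * refragmentation instead of leaving rcvif NULL: the multicast/broadcast
 * checks in the forwarding path dereference rcvif, and a NULL pointer
 * there panics the kernel.
 */
static void
refrag_restore_rcvif(struct mbuf *frags, struct ifnet *orig_rcvif)
{
	struct mbuf *m;

	for (m = frags; m != NULL; m = m->m_nextpkt)
		m->m_pkthdr.rcvif = orig_rcvif;
}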
On 26.03.2014, at 03:33, Christopher Forgeron wrote:
> On Tue, Mar 25, 2014 at 8:21 PM, Markus Gebert
> wrote:
>
>>
>>
>> Is 65517 correct? With Rick's patch, I get this:
>>
>> dev.ix.0.hw_tsomax: 65518
>>
>
> Perhaps a difference be…
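The two numbers differ by one. A plausible derivation (my assumption, not confirmed in the thread) is IP_MAXPACKET minus the Ethernet and VLAN header lengths, which yields the 65517 Markus asks about; the reported 65518 would then be off by one somewhere in the driver:

#include <stdio.h>

/*
 * Back-of-the-envelope check, assumptions only: if the patch derives the
 * TSO limit as IP_MAXPACKET minus the Ethernet + VLAN headers, 65517 is
 * the expected value.
 */
int
main(void)
{
	const int ip_maxpacket = 65535;		/* IP_MAXPACKET, netinet/ip.h */
	const int ether_hdr_len = 14;		/* ETHER_HDR_LEN */
	const int ether_vlan_encap_len = 4;	/* 802.1Q tag */

	printf("%d\n", ip_maxpacket - (ether_hdr_len + ether_vlan_encap_len));
	/* prints 65517 */
	return (0);
}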
On 26.03.2014, at 00:06, Christopher Forgeron wrote:
> Update:
>
> I'm changing my mind, and I believe Rick's TSO patch is fixing things
> (sorry). In looking at my notes, it's possible I had lagg on for those
> tests. lagg does seem to negate the TSO patch in my case.
I’m glad to hear you co…
On 25.03.2014, at 23:21, Rick Macklem wrote:
> Markus Gebert wrote:
>>
>> On 25.03.2014, at 22:46, Rick Macklem wrote:
>>
>>> Markus Gebert wrote:
>>>>
>>>> On 25.03.2014, at 02:18, Rick Macklem
>>>> wrote:
>>>
On 25.03.2014, at 22:46, Rick Macklem wrote:
> Markus Gebert wrote:
>>
>> On 25.03.2014, at 02:18, Rick Macklem wrote:
>>
>>> Christopher Forgeron wrote:
>>>>
>>>>
>>>>
>>>> This is regarding the TSO patch that Rick suggested earlier …
On 25.03.2014, at 02:18, Rick Macklem wrote:
> Christopher Forgeron wrote:
>>
>>
>>
>> This is regarding the TSO patch that Rick suggested earlier. (With
>> many thanks for his time and suggestion)
>>
>>
>> As I mentioned earlier, it did not fix the issue on a 10.0 system. It
>> did make it less of a problem on 9.2 …
… IP_MAXPACKET issue.
While we have most symptoms in common, I’ve still not seen any allocation error
in netstat -m. So I tend to agree that this is most probably a different
problem.
Markus
> I'll create a separate thread for that one shortly.
>
>
> On Mon, Mar 24, 2014 …
On 24.03.2014, at 16:21, Christopher Forgeron wrote:
> This is regarding the TSO patch that Rick suggested earlier. (With many
> thanks for his time and suggestion)
>
> As I mentioned earlier, it did not fix the issue on a 10.0 system. It did
> make it less of a problem on 9.2, but either way, …
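If I follow the thread correctly, the ix hardware accepts at most 32 scatter/gather segments per transmit; 32 segments of 2k mbuf clusters is exactly 65536 bytes, so a TSO chain carrying close to IP_MAXPACKET (65535) bytes plus the prepended Ethernet header needs one segment too many and the DMA load fails with EFBIG. A sketch of the kind of attach-time change being tested (my paraphrase, not Rick's literal patch; the dev.ix.0.hw_tsomax sysctl quoted above suggests the field involved):

#include <sys/param.h>
#include <net/if.h>
#include <net/if_var.h>
#include <netinet/in.h>
#include <netinet/ip.h>

/*
 * Sketch only: cap the advertised TSO size at attach time so a maximal
 * burst plus the prepended Ethernet (+ VLAN) header still maps into the
 * adapter's 32-entry scatter/gather list, instead of defaulting to
 * IP_MAXPACKET.
 */
static void
example_set_tso_limit(struct ifnet *ifp)
{
	/* 18 = 14-byte Ethernet header + 4-byte 802.1Q tag */
	ifp->if_hw_tsomax = IP_MAXPACKET - 18;	/* 65517 */
}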
On 21.03.2014, at 15:49, Christopher Forgeron wrote:
> However, if you can make a spare tester of the same hardware, that's
> perfect - And you can generate all the load you need with benchmark
> software like iometer, large NFS copies, or perhaps a small replica of your
> network. Synthetic load …
> …d: 0
> dev.ix.0.queue6.rxd_tail: 2047
> dev.ix.0.queue6.rx_packets: 2559592
> dev.ix.0.queue6.rx_bytes: 0
> dev.ix.0.queue6.rx_copies: 0
> dev.ix.0.queue6.lro_queued: 0
> dev.ix.0.queue6.lro_flushed: 0
> dev.ix.0.queue7.interrupt_rate: 71428
> dev.ix.0.queue7.irqs: 150693
>
On 21.03.2014, at 14:16, Christopher Forgeron wrote:
> Hi Markus,
>
> Yes, we may have different problems, or perhaps the same problem is
> manifesting itself in different ways in our systems.
>
> Have you tried a 10.0-RELEASE system yet? If we were on the same OS version,
> we could then …
On 21.03.2014, at 12:47, Christopher Forgeron wrote:
> Hello all,
>
> I ran Jack's ixgbe MJUM9BYTES removal patch, and let iometer hammer away
> at the NFS store overnight - But the problem is still there.
>
> From what I read, I think the MJUM9BYTES removal is probably good cleanup
> (as long …
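For readers following along: as I understand Jack's patch (the sketch below is my paraphrase, not the actual diff), it stops the driver from allocating 9k jumbo clusters for large MTUs and falls back to page-sized ones, since 9k clusters need multi-page contiguous memory and can be exhausted or fragmented under load:

#include <sys/param.h>
#include <sys/mbuf.h>

/*
 * Paraphrased sketch of the MJUM9BYTES removal: pick the RX cluster size
 * from the frame size, but never from the 9k pool.  MJUMPAGESIZE (4k)
 * clusters are single-page allocations and avoid the contiguity problems
 * of MJUM9BYTES.
 */
static int
example_rx_cluster_size(int max_frame_size)
{
	if (max_frame_size <= MCLBYTES)
		return (MCLBYTES);	/* 2k */
	return (MJUMPAGESIZE);		/* 4k; previously MJUM9BYTES here */
}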
On 21.03.2014, at 03:45, Rick Macklem wrote:
> Markus Gebert wrote:
>>
>> On 20.03.2014, at 14:51, woll...@bimajority.org wrote:
>>
>>> In article <21290.60558.750106.630...@hergotha.csail.mit.edu>, I
>>> wrote:
>>>
>>>> …
…to be 27 by accident.
Markus
> On Thu, Mar 20, 2014 at 7:40 AM, Markus Gebert
> wrote:
>
>> Also, if you have dtrace available:
>>
>> kldload dtraceall
>> dtrace -n 'fbt:::return / arg1 == EFBIG && execname == "ping" / { stack(); }'
>>
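(As I read that probe: it fires on any kernel function returning EFBIG while a ping process is running and prints the kernel stack at that point, which should show where in the network stack the error originates.)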
On 20.03.2014, at 14:51, woll...@bimajority.org wrote:
> In article <21290.60558.750106.630...@hergotha.csail.mit.edu>, I wrote:
>
>> Since we put this server into production, random network system calls
>> have started failing with [EFBIG] or maybe sometimes [EIO]. I've
>> observed this with a …
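A crude userland probe along the same lines (entirely my sketch; the destination is a placeholder and must be a host reached through the suspect interface, since loopback never touches the NIC, and whether UDP exercises the same driver path as TSO'd TCP depends on the setup):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Sketch only: loop sending large UDP datagrams and report EFBIG/EIO,
 * the errnos described above.  DEST_IP is a placeholder -- replace it.
 */
#define DEST_IP "127.0.0.1"

int
main(void)
{
	int s = socket(AF_INET, SOCK_DGRAM, 0);
	int sndbuf = 65536;	/* default UDP send buffer is too small */
	struct sockaddr_in sin;
	static char buf[60000];	/* large enough to need fragmentation */

	setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
	memset(&sin, 0, sizeof(sin));
	sin.sin_len = sizeof(sin);	/* FreeBSD sockaddr convention */
	sin.sin_family = AF_INET;
	sin.sin_port = htons(9);	/* discard service */
	inet_pton(AF_INET, DEST_IP, &sin.sin_addr);

	for (int i = 0; i < 1000; i++)
		if (sendto(s, buf, sizeof(buf), 0,
		    (struct sockaddr *)&sin, sizeof(sin)) == -1 &&
		    (errno == EFBIG || errno == EIO))
			printf("send %d: %s\n", i, strerror(errno));
	close(s);
	return (0);
}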
On 19.03.2014, at 20:17, Christopher Forgeron wrote:
> Hello,
>
>
>
> I can report this problem as well on 10.0-RELEASE.
>
>
>
> I think it's the same as kern/183390?
Possible. We still see this on NFS clients only, but I’m not convinced that NFS
is the only trigger.
> I have two physical …
…requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
Markus
> On Th…
…
Thanks for looking into this.
Markus
> On Thu, Mar 6, 2014 at 2:24 AM, Markus Gebert
> wrote:
>
>> (creating a new thread, because I'm no longer sure this is related to
>> Johan's thread that I originally used to discuss this)
>>
>> On 27.02.2014, …
…commit that fixes anything related to what we’re seeing…
So, what’s the conclusion here? Firmware bug that’s only triggered under 9.2?
Driver bug introduced between 9.1 and 9.2 when new multiqueue stuff was added?
Jack, how should we proceed?
Markus
On Thu, Feb 27, 2014 at 8:05 AM, Markus Gebert …