< said:
> Yes, in the past the code was in this form; it should work fine, Garrett,
> just make sure
> the 4K pool is large enough.
[Andre Oppermann's patch:]
>> if (adapter->max_frame_size <= 2048)
>> adapter->rx_mbuf_sz = MCLBYTES;
>> - else if (adapter->max_frame_size <= 4096)
>> + el
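For context, the size-selection logic this patch touches can be sketched in userland roughly as follows. The cluster-size values are shown as illustrative defines (in-kernel code would use sys/param.h), and pick_rx_mbuf_sz is a hypothetical stand-in for the driver's assignment, not the actual em(4) source:

```c
/* Cluster sizes as commonly defined in FreeBSD (illustrative values;
 * the real kernel headers are authoritative). */
#define MCLBYTES      2048
#define MJUMPAGESIZE  4096
#define MJUM9BYTES    (9 * 1024)

/* Hypothetical helper mirroring the patch's intent: pick the smallest
 * receive cluster size that holds one full frame, so large frames use
 * the 4K (page-size) pool instead of chaining 2K clusters. */
static int
pick_rx_mbuf_sz(int max_frame_size)
{
	if (max_frame_size <= 2048)
		return (MCLBYTES);
	else if (max_frame_size <= 4096)
		return (MJUMPAGESIZE);
	else
		return (MJUM9BYTES);
}
```

This is why the "4K pool" note above matters: with a 9000-byte MTU the driver draws from the jumbo pools, and an undersized pool stalls receive.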
Old Synopsis: process of 'ifconfig gif0 create hangs' when if_gif_load exists
in /etc/loader.conf and 'device gif' exists in kernel config.
New Synopsis: [gif] process of 'ifconfig gif0 create hangs' when if_gif_load
exists in /etc/loader.conf and 'device gif' exists in kernel config.
Responsibl
Old Synopsis: panic: IPsec + enc(4); device name clash with CAM
New Synopsis: [ipsec] [enc] [patch] panic: IPsec + enc(4); device name clash
with CAM
Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Sun Mar 10 04:52:34 UTC 2013
Respo
Garrett Wollman wrote:
> < said:
>
> > I suspect this indicates that it isn't mutex contention, since the
> > threads would block waiting for the mutex for that case, I think?
>
> No, because our mutexes are adaptive, so each thread spins for a while
> before blocking. With the current implement
Garrett Wollman wrote:
> < said:
>
> > around the highwater mark basically indicates this is working. If it
> > wasn't
> > throwing away replies where the receipt has been ack'd at the TCP
> > level, the cache would grow very large, since they would only be
> > discarded after a loonnngg timeout
Garrett Wollman wrote:
> In article <20795.29370.194678.963...@hergotha.csail.mit.edu>, I
> wrote:
> >< > said:
> >> I've thought about this. My concern is that the separate thread
> >> might
> >> not keep up with the trimming demand. If that occurred, the cache
> >> would
> >> grow veryyy laarrggge
Old Synopsis: user-mode netgraph node hangs when replying to control message
New Synopsis: [libnetgraph] [patch] user-mode netgraph node hangs when replying
to control message
Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Sun Mar
On 09.03.2013 23:17, Nikolay Denev wrote:
On Mar 7, 2013, at 9:42 PM, John-Mark Gurney wrote:
Andre Oppermann wrote this message on Thu, Mar 07, 2013 at 08:39 +0100:
Adding an interface address is handled by atomically deleting the old prefix and
adding the interface one.
This brings up a long standi
On Mar 7, 2013, at 9:42 PM, John-Mark Gurney wrote:
> Andre Oppermann wrote this message on Thu, Mar 07, 2013 at 08:39 +0100:
>>> Adding an interface address is handled by atomically deleting the old prefix and
>>> adding the interface one.
>>
>> This brings up a long standing sore point of our routing c
In article <20795.29370.194678.963...@hergotha.csail.mit.edu>, I wrote:
>< said:
>> I've thought about this. My concern is that the separate thread might
>> not keep up with the trimming demand. If that occurred, the cache would
>> grow veryyy laarrggge, with effects like running out of mbuf cluste
< said:
> around the highwater mark basically indicates this is working. If it wasn't
> throwing away replies where the receipt has been ack'd at the TCP
> level, the cache would grow very large, since they would only be
> discarded after a loonnngg timeout (12hours unless you've changes
> NFSRVC
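The highwater-mark behaviour described in this thread can be sketched as follows. This is a minimal userland model, not nfsrv_trimcache() itself: the entry type and the "acked" flag are hypothetical stand-ins for the TCP-level acknowledgement check the reply cache uses:

```c
#include <stdbool.h>
#include <stddef.h>

struct cache_entry {
	bool acked;	/* reply receipt has been ACKed at the TCP level */
	bool valid;	/* slot is in use */
};

/* Once the cache crosses the highwater mark, drop entries whose reply
 * has been ACKed (they can no longer be retransmitted-for); entries
 * that are not yet ACKed are kept.  Returns the entries remaining. */
static size_t
trim_cache(struct cache_entry *c, size_t n, size_t nused, size_t highwater)
{
	if (nused <= highwater)
		return nused;		/* below the mark: nothing to do */
	for (size_t i = 0; i < n && nused > highwater; i++) {
		if (c[i].valid && c[i].acked) {
			c[i].valid = false;
			nused--;
		}
	}
	return nused;
}
```

This also illustrates the point above: if ACKed replies were never discarded, only the long idle timeout would shrink the cache, and it would grow very large under sustained load.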
< said:
> I suspect this indicates that it isn't mutex contention, since the
> threads would block waiting for the mutex for that case, I think?
No, because our mutexes are adaptive, so each thread spins for a while
before blocking. With the current implementation, all of them end up
doing this
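The adaptive behaviour Garrett describes (each waiter spins for a while before blocking) can be sketched roughly like this. SPIN_LIMIT, the struct, and the give-up fallback are illustrative assumptions, not FreeBSD's actual mtx(9) implementation, which also checks whether the owner is still running on a CPU:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define SPIN_LIMIT 1000		/* arbitrary illustration value */

struct adaptive_mtx {
	atomic_flag locked;
};

/* Spin for a bounded number of attempts before giving up.  Returns
 * true if the lock was taken by spinning; a real adaptive mutex would
 * enqueue the thread and sleep instead of returning false. */
static bool
adaptive_lock_try(struct adaptive_mtx *m)
{
	for (int i = 0; i < SPIN_LIMIT; i++)
		if (!atomic_flag_test_and_set(&m->locked))
			return true;
	return false;
}

static void
adaptive_unlock(struct adaptive_mtx *m)
{
	atomic_flag_clear(&m->locked);
}
```

The spinning phase is why contention shows up as CPU time rather than as threads parked in the sleep queue, which is the observation being discussed above.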
Garrett Wollman wrote:
> < said:
>
> > If reducing the size to 4K doesn't fix the problem, you might want
> > to
> > consider shrinking the tunable vfs.nfsd.tcphighwater and suffering
> > the increased CPU overhead (and some increased mutex contention) of
> > calling nfsrv_trimcache() more freque
Garrett Wollman wrote:
> < said:
>
> > The cached replies are copies of the mbuf list done via m_copym().
> > As such, the clusters in these replies won't be free'd (ref cnt ->
> > 0)
> > until the cache is trimmed (nfsrv_trimcache() gets called after the
> > TCP layer has received an ACK for rec
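The cluster-lifetime point here can be modelled with a simple reference count. This is a hedged sketch of the general mechanism, not the real mbuf external-storage code: a copy made via m_copym() shares the underlying cluster and bumps its refcount, so the cluster's memory is only returned when the last reference (here, the cached reply) is dropped by the trim:

```c
struct cluster {
	int refcnt;	/* starts at 1 for the original owner */
};

/* Taking a copy (as m_copym() does for cached replies) shares the
 * cluster rather than duplicating its data. */
static void
cluster_ref(struct cluster *cl)
{
	cl->refcnt++;
}

/* Dropping a reference; returns 1 only when the cluster is actually
 * freed, i.e. when the refcount reaches zero. */
static int
cluster_unref(struct cluster *cl)
{
	return (--cl->refcnt == 0);
}
```

This is why a large reply cache pins clusters: freeing the original mbuf chain is not enough while the cached copy still holds a reference.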
On Saturday, 9 March 2013 at 16:11:56, you wrote:
> > > Though the src node removal option through pfctl -K does a lot of job
> > > to cleanup things
> > > Still need to undertand why it takes so much time for you to loop
> > > through 500K states.
> >
> > That is because the loop will not be calle
On Sat, Mar 9, 2013 at 2:37 PM, Kajetan Staszkiewicz wrote:
> On Saturday, 9 March 2013 at 13:14:16, Ermal Luçi wrote:
> > On Fri, Mar 8, 2013 at 9:51 PM, Kajetan Staszkiewicz wrote:
> > > On Friday, 8 March 2013 at 21:11:43, Ermal Luçi wrote:
> > > > Is this FreeBSD 9.x or HEAD?
>
On Saturday, 9 March 2013 at 13:14:16, Ermal Luçi wrote:
> On Fri, Mar 8, 2013 at 9:51 PM, Kajetan Staszkiewicz wrote:
> > On Friday, 8 March 2013 at 21:11:43, Ermal Luçi wrote:
> > > Is this FreeBSD 9.x or HEAD?
> >
> > I found the problem and developed the patch on 9.1.
> >
> Can y
Also do not forget to rebuild pfctl so that statistics are shown correctly.
On Sat, Mar 9, 2013 at 1:14 PM, Ermal Luçi wrote:
>
>
>
> On Fri, Mar 8, 2013 at 9:51 PM, Kajetan Staszkiewicz <
> veg...@tuxpowered.net> wrote:
>
>> On Friday, 8 March 2013 at 21:11:43, Ermal Luçi wrote:
>> > Is t
On Fri, Mar 8, 2013 at 9:51 PM, Kajetan Staszkiewicz wrote:
> On Friday, 8 March 2013 at 21:11:43, Ermal Luçi wrote:
> > Is this FreeBSD 9.x or HEAD?
>
> I found the problem and developed the patch on 9.1.
>
> Can you please test this more 'beautiful' patch.
It's similar to yours but also dela