On Tue, Feb 18, 2020 at 10:09 AM Jason A. Donenfeld wrote:
>
> Hey K,
>
> On Tue, Feb 18, 2020 at 4:33 PM K. Macy wrote:
> > I appreciate your enthusiasm, but I can’t count the number of nascent
> > kernel projects that have come up in discussion over the years and
Hi Jason -
I appreciate your enthusiasm, but I can’t count the number of nascent
kernel projects that have come up in discussion over the years and
ultimately come to nothing.
I started by getting the OpenBSD bits to build on FreeBSD. However, the
implementation in Open is not really a very good s
On Fri, Aug 31, 2018 at 6:50 PM John-Mark Gurney wrote:
>
> First, does vale work for anyone? At least one of the documented
> commands in vale(4) does not work.
>
The documentation with respect to naming is wrong. I didn't have a bit
when I was using it so never got around to fixing it.
> Afte
Could you please try 334117 vs 334118 and see if 334118 introduces
this regression?
-M
On Wed, May 30, 2018 at 2:48 PM, Rodney W. Grimes
wrote:
>> On Wed, 30 May 2018 17:46:06 +0200, "Rodney W. Grimes" wrote:
>> >
>> > > On Wed, May 30, 2018 at 07:44:52AM -0700, Rodney W. Grimes wrote:
>> > > >
> On a single-client basis the fastest rates we see are around 5 Gbps.
> Hitting this server from multiple boxes we see peaks of 20 Gbps at the very
> highest. More frequently things top off around 13 Gbps. These numbers are
> coming from iperf tests. We are seeing similar numbers with direct
>
HEAD or 11?
On Thu, Nov 30, 2017 at 13:03 Joe Buehler wrote:
> I am using the LINUX 4.4.86 realtime kernel patch with netmap and the
> ixgbevf driver (SRIOV in a VM) and having some serious RX latency issues.
>
> The ixgbevf driver built by the netmap build against the kernel source
> does not
On Sat, Nov 11, 2017 at 8:30 PM, K. Macy wrote:
> On Tue, Nov 7, 2017 at 9:32 AM, Vincenzo Maffione
> wrote:
>> Hi,
>> In general netmap adapters (i.e. netmap ports) may support NS_MOREFRAG.
>> But in practice this is mainly supported on VALE ports.
>> So if you
On Tue, Nov 7, 2017 at 9:32 AM, Vincenzo Maffione wrote:
> Hi,
> In general netmap adapters (i.e. netmap ports) may support NS_MOREFRAG.
> But in practice this is mainly supported on VALE ports.
> So if you don't want to add the missing support by yourself you can simply
> change the netmap buff
It would help if you told us which OS version you're using. Is this on
11.x or HEAD?
-M
On Mon, Nov 6, 2017 at 7:42 AM, Guido Falsi wrote:
> Hi,
>
> As the subject states I have been seeing hangups on a machine with that
> chipset [1] when using default ifconfig settings. It was hanging once
> e
I'm working with a 40GigE mesh where the average RTT is on the order
of 10s of microseconds. Consequently, any application idle periods
result in poor measured application latency because the congestion
window is constantly being reset.
I know that this isn't a common use case, but it's clear tha
On Sat, Sep 17, 2016 at 8:24 AM, Babak Farrokhi wrote:
> ICYMI: http://marc.info/?l=linux-netdev&m=147405177724268&w=2
>
> Google submitted their own TCP CC algorithm to upstream. This algorithm has
> been widely in use in their network.
> This looks very interesting and it would be great if some
16 09:56, K. Macy wrote:
>>
>> #12 taskqueue_drain (queue=0x0, task=0xfe004fc17150) at
>
>
> Hi,
>
> Looks like a NULL pointer, queue=NULL
>
> --HPS
___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/ma
I get this panic periodically at iwm load time:
(kgdb) p ic->ic_tq
value has been optimized out
(kgdb) down
#12 taskqueue_drain (queue=0x0, task=0xfe004fc17150) at
/usr/home/mmacy/drm-next-4.6/sys/kern/subr_taskqueue.c:554
554 TQ_LOCK(queue);
(kgdb) bt
#0 __curthread () at ./machi
On Tue, Jun 28, 2016 at 10:51 AM, Matthew Macy wrote:
> You guys should really look at Samy Bahra's epoch based reclamation. I solved
> a similar problem in drm/linuxkpi using it.
The point being that this is a bug in the TCP life cycle handling
_not_ in callouts. Churning the callout interface
On Friday, May 27, 2016, Navdeep Parhar wrote:
> On Fri, May 27, 2016 at 12:23:02AM -0700, K. Macy wrote:
> > On Thursday, May 26, 2016, Navdeep Parhar > wrote:
> >
> > > On Fri, May 27, 2016 at 12:57:34AM -0400, Garrett Wollma
On Thursday, May 26, 2016, Navdeep Parhar wrote:
> On Fri, May 27, 2016 at 12:57:34AM -0400, Garrett Wollman wrote:
> > In article <
> cajpshy4vf5ky6guausloorogiquyd2ccrmvxu8x3carqrzx...@mail.gmail.com
> > you write:
> >
> > ># ifconfig -m cxgbe0
> > >cxgbe0: flags=8943
> >
> > ># ifconfig cxgbe0
Much to my chagrin, this too is my fault. Please apply the attached
patch if it hasn't yet been committed to -CURRENT.
On Fri, May 20, 2016 at 11:28 PM, Joel Dahl wrote:
> On Fri, May 20, 2016 at 07:32:30PM -0700, K. Macy wrote:
>> I'm seeing watchdog resets on em(4) in my VM
I'm seeing watchdog resets on em(4) in my VMWare as of the last day or two.
>
>
> I don't use ipfw, aliases or anything other than stock networking. I
> was unable to copy a large image off the VM without getting an
> unending stream of watchdog resets which could only be fixed by a
> reboot. Fort
On Wed, May 11, 2016 at 7:56 AM, Chris H wrote:
> On Tue, 10 May 2016 10:25:24 -0700 hiren panchasara
> wrote
>
>> + Kip, Scott.
>>
>> On 05/10/16 at 04:46P, David Somayajulu wrote:
>> > Hi All,
>> > I have a couple of questions on iflib :
>> >
>> > 1. Are there plans to incorporate iflib i
David Somayajulu wrote:
> Thanks for info.
> Is there a link to the latest patch which I can apply to CURRENT in the
> meantime ?
> Thanks
> David S.
>
> -----Original Message-----
> From: owner-freebsd-...@freebsd.org [mailto:owner-freebsd-...@freebsd.org] On
> Behalf Of K. Macy
>
I'm waiting on erj to commit his ixl update. It will go in immediately
following that with an ixgbe and ixl driver. It would have also included a
bxe driver, but the original bxe driver is too flaky to even test that
cabling is OK and my 10GbaseT version frequently had unrecoverable dmae
errors. It
LLDP packet came in.
>
> So something seems to be broken with lagg's LACP support recently. The
> good news is I don't think the route caching is causing this problem. I'll
> put it back in and retest to make sure though.
>
>
Glad to hear I was in error.
-M
>
On Tue, Apr 19, 2016 at 2:52 PM, Dustin Marquess wrote:
> Okay, interestingly, I just updated the AMD machine (the ix one) to the
> latest version of -CURRENT last night, and now it's acting strangeish
> also. So maybe it's not ixl(4) after all.
>
> What's obviously "broken" is that the config tha
On Mon, Apr 18, 2016 at 10:45 PM, Eggert, Lars wrote:
> I haven't played with lagg+vlan+bridge, but I briefly evaluated XL710 boards
> last year
> (https://lists.freebsd.org/pipermail/freebsd-net/2015-October/043584.html)
> and saw very poor throughputs and latencies even in very simple setups.
complaining that an oid
> cannot be unregistered... It's proving to be frustrating :(
I'm only familiar with how to add things to a device's sysctl tree. It
obviously gets torn down at detach, so you can probably look at
subr_bus.c to see what it does.
> I appreciate the comments Kip.
You do understand that init needs to be run every time interface
settings are changed (TSO / PROMISC / CSUM, etc.)? Reallocating queues
and interrupts every time is fragile (long running systems can run low
on contiguous memory) and, in the common case that you're not actually
changing the number, g
Depending on the vendor in question ask np@ or hps@.
On Thursday, March 24, 2016, Vijay Singh wrote:
> I would like to use krping for some RDMA testing, but I have not used it
> before. If someone who has used this could send me instructions I would
> really appreciate it.
>
> -vijay
On Fri, Sep 4, 2015 at 5:53 PM, Don Lewis wrote:
> On 4 Sep, K. Macy wrote:
>> By default ECN is completely disabled on FreeBSD. On Linux the default
>> is to disable it outbound (not request it) but enable it inbound
>> (accept new connections asking for it). Is there
By default ECN is completely disabled on FreeBSD. On Linux the default
is to disable it outbound (not request it) but enable it inbound
(accept new connections asking for it). Is there a good reason to only
set ECN_PERMIT on inbound connections if the system is doing ECN on
outbound connections?
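For anyone wanting to poke at the defaults being compared here, the relevant knobs look roughly like this (a sketch from memory of this era's knob names and default values, not taken from the thread; verify against your release):

```shell
# FreeBSD (circa this thread): ECN is governed by a single knob, off by
# default. Enabling it negotiates ECN on outgoing connections and accepts
# requests on incoming ones.
sysctl net.inet.tcp.ecn.enable=1

# Linux, for comparison: the default of 2 means "don't request ECN on
# outbound connections, but grant it when an inbound connection asks".
sysctl net.ipv4.tcp_ecn=2
```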
On Sep 3, 2015 10:33 AM, "Vijay Singh" wrote:
>
> Someone told me that once the OFED code hit kernel.org the GPL is the only
> license that applies. Does anyone have insights about that?
That sounds bizarre, since Mellanox wrote the code and explicitly
dual-licensed it.
The problem you *do* run in t
Mlxen supports all ConnectX 3 to the best of my knowledge. Are there bug
fixes in the latest version that aren't in svn? The ConnectX 4 driver is
under development. I will be converting mlxen to iflib in the next few
weeks.
-K
On Aug 20, 2015 2:29 PM, "aurfalien" wrote:
> Hi,
>
> Curious if the
>>
>> I'm not clear how I'd do that. The data being passed up from the kernel is
>> of variable size. To use copyout I'd have to pass a
>> pointer with a static buffer, right?
>
> Correct, you can pass along the size, and if it's not large enough
> try again... Less than ideal...
>
>> Is there a way
you can grep for if it's a substring of
something actually defined.
> BR,
> KL
>
> On Thu, May 21, 2015 at 6:45 PM, K. Macy wrote:
>>
>> Your module references a variable that the kernel doesn't define. As
soon as you either define it or figure out what you sho
Your module references a variable that the kernel doesn't define. As soon
as you either define it, or figure out what you should really be referencing
it as, your module will load.
On May 21, 2015 3:53 AM, "Karlis Laivins" wrote:
> Hello again,
>
> A little update - the problem occurs only when try
>
> I'm interested in doing this a bit as I now have 5 em(4) interfaces on
> my soon to be router box.
>
> I tried modifying the driver to allow num_queues to be raised and I
> compiled with EM_MULTIQUEUE set, and all I got for my trouble was
> kernel panics.
>
> I'm not sure if the code even works
TCP host cache? See netinet/tcp_hostcache.c for any fiddling that
needs doing. Let me know if there are any values that should be
exported as sysctls.
-K
SYSCTL_INT(_net_inet_tcp_hostcache, OID_AUTO, expire, CTLFLAG_VNET | CTLFLAG_RW,
&VNET_NAME(tcp_hostcache.expire), 0,
"Expire time of
On Wed, Dec 17, 2014 at 11:04 AM, Adrian Chadd wrote:
> hi,
>
> The ndis code is (a) not maintained, and (b) not going to be updated
> for the newer NDIS API that drivers are slowly being changed over to
> use.
Lack of interest? Too much work?
-K
> I'm sorry. :(
>
>
>
> -adrian
>
>
> On 17 De
>
> I also suspect there are further problems with buf_ring. A full wrap
> around of the atomically swapped value is possible. I.e. the code thinks
> it just atomically updated a head/tail index when in fact a full wrap
> around occurred leading to undefined land. A relatively simple way to
> avoi
> Hi Oleg and Ryan,
>
> We have run into the spurious drop issue too. I could not make sense of
> seeing a single drop at a time every few seconds i.e. if a queue of 4k
> element fills, likely more than one packet is going to be dropped once the
> queue full condition is reached. So we investigated
>
>
>
>> - uint32_t -> m_flowid_t is plain gratuitous. Now we need to include
>>mbuf.h in more places just to get this definition. What's the
>>advantage of this? style(9) isn't too fond of typedefs either. Also,
>>drivers *do* need to know the width of the flowid. At least lagg(4)
On Wed, Aug 20, 2014 at 7:41 AM, Luigi Rizzo wrote:
> On Wed, Aug 20, 2014 at 3:29 PM, Hans Petter Selasky
> wrote:
>
> > Hi Luigi,
> >
> >
> > On 08/20/14 11:32, Luigi Rizzo wrote:
> >
> >> On Wed, Aug 20, 2014 at 9:34 AM, Hans Petter Selasky
> >> wrote:
> >>
> >> Hi,
> >>>
> >>> A month has
It's highly chipset and processor dependent what works best. Intel now
has non-temporal loads and stores which work much better in some cases
but provide little benefit in others.
-Kip
On Wed, May 2, 2012 at 11:52 PM, Steven Atreju wrote:
> Luigi Rizzo wrote:
>> 2. apparently, bcopy is not the f
some interest. This is
the relevant part of the last mail that I received from you. The final
part was dedicated to the narrow potential ABI changes that
were to make it into the release.
From: Bjoern A. Zeeb
Date: Mon, Sep 19, 2011 at 3:19 PM
To: "K. Macy"
Cc: Robert Watson
On Tue, Apr 24, 2012 at 6:34 PM, Luigi Rizzo wrote:
> On Tue, Apr 24, 2012 at 02:16:18PM +, Li, Qing wrote:
>> >
>> >From previous tests, the difference between flowtable and
>> >routing table was small with a single process (about 5% or 50ns
>> >in the total packet processing time, if i remem
On Tue, Apr 24, 2012 at 5:03 PM, K. Macy wrote:
> On Tue, Apr 24, 2012 at 4:16 PM, Li, Qing wrote:
>>>
>> >From previous tests, the difference between flowtable and
>>>routing table was small with a single process (about 5% or 50ns
>>>in the total pac
On Tue, Apr 24, 2012 at 4:16 PM, Li, Qing wrote:
>>
> >From previous tests, the difference between flowtable and
>>routing table was small with a single process (about 5% or 50ns
>>in the total packet processing time, if i remember well),
>>but there was a large gain with multiple concurrent proce
Most of these issues are well known. Addressing the bottlenecks is
simply time consuming due to the fact that any bugs introduced during
development potentially impact many users.
-Kip
On Sun, Apr 22, 2012 at 4:14 AM, Adrian Chadd wrote:
> Hi,
>
> This honestly sounds like it's begging for an
> i
Comments inline below:
On Fri, Apr 20, 2012 at 4:44 PM, Luigi Rizzo wrote:
> On Thu, Apr 19, 2012 at 11:06:38PM +0200, K. Macy wrote:
>> On Thu, Apr 19, 2012 at 11:22 PM, Luigi Rizzo wrote:
>> > On Thu, Apr 19, 2012 at 10:34:45PM +0200, K. Macy wrote:
>> >> >>
On Thu, Apr 19, 2012 at 11:27 PM, Andre Oppermann wrote:
> On 19.04.2012 23:17, K. Macy wrote:
>>>>
>>>> This only helps if your flows aren't hitting the same rtentry.
>>>> Otherwise you still convoy on the lock for the rtentry itself to
>>>>
>
> Yes, but the lookup requires a lock? Or is every entry replicated
> to every CPU? So a number of concurrent CPU's sending to the same
> UDP destination would content on that lock?
No. In the default case it's per CPU, thus no serialization is
required. But yes, if your transmitting thread ma
>> This only helps if your flows aren't hitting the same rtentry.
>> Otherwise you still convoy on the lock for the rtentry itself to
>> increment and decrement the rtentry's reference count.
>
>
> The rtentry lock isn't obtained anymore. While the rmlock read
> lock is held on the rtable the rele
On Thu, Apr 19, 2012 at 11:22 PM, Luigi Rizzo wrote:
> On Thu, Apr 19, 2012 at 10:34:45PM +0200, K. Macy wrote:
>> >> This is indeed a big problem. I'm working (rough edges remain) on
>> >> changing the routing table locking to an rmlock (read-mostly) which
>> This is indeed a big problem. I'm working (rough edges remain) on
>> changing the routing table locking to an rmlock (read-mostly) which
>
This only helps if your flows aren't hitting the same rtentry.
Otherwise you still convoy on the lock for the rtentry itself to
increment and decrement the
On Wed, Feb 15, 2012 at 11:17 AM, Attila Nagy wrote:
> Hi,
>
> I'm using FreeBSD 9-STABLE on a four core machine with bce to run
> multi-threaded unbound with libev (using kqueue).
> Here's the first message (not a long thread so far) about the problem:
> http://unbound.net/pipermail/unbound-users
The following reply was made to PR kern/146792; it has been noted by GNATS.
From: "K. Macy"
To: n...@gtelecom.ru
Cc: bug-follo...@freebsd.org
Subject: kern/146792: [flowtable] flowcleaner 100% cpu's core load
Date: Mon, 23 Jan 2012 21:48:09 +0100
Have you tested this workload wi
The following reply was made to PR kern/144917; it has been noted by GNATS.
From: "K. Macy"
To: r...@net1.cc
Cc: bug-follo...@freebsd.org
Subject: kern/144917: [flowtable] [panic] flowtable crashes system [regression]
Date: Mon, 23 Jan 2012 21:50:08 +0100
Have you tested this workl
On Sat, Jul 24, 2010 at 2:17 PM, Bjoern A. Zeeb
wrote:
> On Thu, 22 Jul 2010, alan yang wrote:
>
> Hey,
>
>
>> Wondering whether anyone had implemented an interface to import / export the flowtable.
>
Yes I did, and I added an API to query it more generally. I didn't add
it to net/flowtable.c because my usage seem
Sorry, didn't look at the images (limited bw), I've seen something
like this before in timewait. This "can't happen" with UDP so will be
interested in learning more about the bug.
On Mon, Sep 26, 2011 at 4:02 PM, Arnaud Lacombe wrote:
> Hi,
>
> On Mon, Sep 2
On Monday, September 26, 2011, Adrian Chadd wrote:
> On 26 September 2011 13:41, Arnaud Lacombe wrote:
>> /*
>> * XXX
>> * This entire block sorely needs a rewrite.
>> */
>>if (t &&
>>((t->inp_flags & INP_TIMEWAIT) == 0) &&
>>(so->so_type != SOCK_STREAM ||
>
On Sat, Sep 24, 2011 at 7:53 PM, Bjoern A. Zeeb
wrote:
> On Sep 24, 2011, at 5:31 PM, Luigi Rizzo wrote:
>
>> does anyone know know which 10GE cards are supported by FreeBSD,
>> either natively or using third-party drivers ?
>
> is this a serious question? FreeBSD has documentation (for those in-
> What this means is that we have
> a failure of abstraction. Abstraction has a cost, and some of the people who
> want
> access to low level queues are not interested in paying an extra abstraction
> cost.
I think a case can be made that this isn't necessarily so,
depending on how well th
> Whatever the mechanism is, the interface should allow for:
>
> - Flexible matching on layer 2, 3 and 4 header fields
> - Masking out some bits before matching (e.g. ignoring priority bits of
> VLAN tag or least significant bits of IPv4 address)
> - Priority of rules in case several match a singl
On Thu, Sep 8, 2011 at 2:34 PM, John Baldwin wrote:
> On Monday, September 05, 2011 7:21:12 am Ben Hutchings wrote:
>> On Mon, 2011-09-05 at 15:51 +0900, Takuya ASADA wrote:
>> > Hi,
>> >
>> > I implemented Ethernet Flow Director sysctls to ixgbe(4), here's a detail:
>> >
>> > - Adding removing si
On Wed, Sep 7, 2011 at 6:16 AM, Arnaud Lacombe wrote:
> Hi,
>
> On Mon, Sep 5, 2011 at 8:45 PM, Doug Barton wrote:
>> On 09/05/2011 17:18, Arnaud Lacombe wrote:
>>> From my point of view, I should be able to run a FreeBSD 9.0 kernel
>>> (when released) on top of a FreeBSD 5 userland without such
On Mon, Sep 5, 2011 at 11:44 PM, Arnaud Lacombe wrote:
> Hi,
>
> On Mon, Sep 5, 2011 at 4:18 PM, Arnaud Lacombe wrote:
>> Hi,
>>
>> On Mon, Sep 5, 2011 at 3:14 PM, K. Macy wrote:
>>> -STABLE only implies that the ABI does not change during that release
-STABLE only implies that the ABI does not change during that release
line. It makes no guarantees when moving from one branch to the next.
On Mon, Sep 5, 2011 at 8:31 PM, Arnaud Lacombe wrote:
> Hi,
>
> It would seem that the ipfw binary from a 7.4 install is not
> compatible with the in-kernel
> This said, one should consider that going fast and being
> completely general don't go well together -- you don't do
> F1 races with a minivan, and don't carry large groups of
> people around with an F1 car.
This is self-evident; any sort of multiplexing comes at a price. To
have the card be sim
> I'd really encourage people to look at the code (e.g. the pkt-gen.c
> program, which is part of the archive) so you can see how easy it
> is to use.
Provided one has a dedicated interface.
Oops, second 10 GigE should obviously be 1GigE
On Tuesday, June 7, 2011, K. Macy wrote:
> All 10GigE NICs and some newer 10 GigE NICs have multiple hardware
> queues with a separate MSI-x vector per queue, where each vector is
> directed to a different CPU. The current operating model i
All 10GigE NICs and some newer 1GigE NICs have multiple hardware
queues with a separate MSI-x vector per queue, where each vector is
directed to a different CPU. The current operating model is to have a
separate interrupt thread per vector. This obviously gets bogged down
if one has multiple card
Unfortunately msl is a global variable:
tcp_timer.c:
int tcp_msl;
SYSCTL_PROC(_net_inet_tcp, OID_AUTO, msl, CTLTYPE_INT|CTLFLAG_RW,
&tcp_msl, 0, sysctl_msec_to_ticks, "I", "Maximum segment lifetime");
Sockets or rather inpcbs in timewait are maintained on a per-vnet
list. Since tcp_twst
> In the machine what PCIe speed does it state its using? What CPU's do you
> have as to push closer to 10Gbps your going to need a quick machine.
Good point. 5Gbps could indicate that the PCI-e slot is only
negotiating 4x as opposed to 8x.
-Kip
onfig_msk1="inet 10.0.1.8 netmask 0xff00"
#defaultrouter="10.0.1.1"
I've been setting the IP later on in boot.
>
> 2011/4/22 K. Macy
>
>> I've had the same problem on my Shuttle box on both Linux and FreeBSD.
>> I work around it by disabling aut
I've had the same problem on my Shuttle box on both Linux and FreeBSD.
I work around it by disabling auto-negotiation and forcing it to
100Mbit.
On Thu, Apr 21, 2011 at 10:45 AM, cyberGn0m wrote:
> Hi all
>
> Some other investigations done and I found the following: device still
> receives data b
On Tue, Apr 19, 2011 at 8:19 PM, Freddie Cash wrote:
> On Tue, Apr 19, 2011 at 7:42 AM, K. Macy wrote:
>>> I'm not able to find IFNET_MULTIQUEUE in a recent 8.2-STABLE, is this
>>> something
>>> present only in HEAD?
>>
>> It looks like it is now
> Hi,
>
> I'm not able to find IFNET_MULTIQUEUE in a recent 8.2-STABLE, is this
> something
> present only in HEAD?
It looks like it is now EM_MULTIQUEUE.
Cheers
On Mon, Apr 18, 2011 at 7:28 PM, K. Macy wrote:
> 400kpps is not a large enough measure to reach any conclusions. A
> system like that should be able to push at least 2.3Mpps with
> flowtable. I'm not saying that what you've done is not an improvement,
> but rather that you
400kpps is not a large enough measure to reach any conclusions. A
system like that should be able to push at least 2.3Mpps with
flowtable. I'm not saying that what you've done is not an improvement,
but rather that you're hitting some other bottleneck. The output of
pmc and LOCK_PROFILING might be
It would be great to see flowtable going back to its intended use.
However, I would be surprised if this actually scales to Mpps. I don't
have any high end hardware at the moment to test, what is the highest
packet rate you've seen? i.e. simply generating small packets.
Thanks
On Mon, Apr 18, 20
he top.
>
> On Wed, Apr 06, 2011 at 06:15:19PM +0200, K. Macy wrote:
>> The weights of the links can be changed at run time. If one link is
>> not passing traffic its weight should be set to zero until such time
>> as it is passing traffic again.
>>
>> On Wed, Ap
The weights of the links can be changed at run time. If one link is
not passing traffic its weight should be set to zero until such time
as it is passing traffic again.
On Wed, Apr 6, 2011 at 6:13 PM, Nikolay Denev wrote:
> On Apr 6, 2011, at 5:36 PM, Michael Proto wrote:
>
>> On Wed, Apr 6, 2011
their
word for it. And I can see how the flowtable might not handle ARP
flapping properly.
We'll need to do more diagnostics. So this is only IPv6 and you are using TCP?
-Kip
>
> -- Frederique
>
>
>
> On Mon, Apr 04, 2011 at 12:09:05PM +0200, K. Macy wrote:
>> Cor
>
> -- Frederique
>
>
>
> On Sun, Apr 03, 2011 at 08:11:33PM +0200, K. Macy wrote:
>> I don't think it was properly tested when it was enabled for IPv6.
>> Given that I have been absentee it really should not be in the default
>> kernel or at lea
I don't think it was properly tested when it was enabled for IPv6.
Given that I have been absentee, it really should not be in the default
kernel, or at least the sysctl should be off. Sorry for the
inconvenience. Additionally, you don't need to rebuild; you can just
disable the sysctl.
-Kip
On Sun
This has since been fixed. However, with 8.0 the simplest fix is to
turn flowtable off.
sysctl net.inet.flowtable.enable=0
-Kip
On Mon, May 24, 2010 at 4:54 AM, Brandon Gooch
wrote:
> On Sun, May 23, 2010 at 5:06 PM, Kurt Jaeger wrote:
>> The following reply was made to PR kern/146792; it has
Thu, Apr 8, 2010 at 6:05 AM, Vincent Hoffman wrote:
> On 08/04/2010 13:07, Barney Cordoba wrote:
>>
>> --- On Fri, 4/2/10, K. Macy wrote:
>>
>>
>>> From: K. Macy
>>> Subject: Re: kern/144917: Flowtable crashes system
>>> To: "Ilya Zh
Please try with the latest 8-STABLE and tell me if recent changes fix it.
Thanks,
Kip
On Thu, Mar 25, 2010 at 8:32 AM, Ilya Zhuravlev wrote:
> On 21.03.2010 17:04, Evgenii Davidov wrote:
>>
>> Hello,
>>
>> On Sat, Mar 20, 2010 at 11:06:35PM +, Doychin Dokov wrote:
>>