I've had this idea for a long time (I fixed the kernel to support it
in r162205[1]) and even used a manual version of it a long time ago in
production for NFS servers, but never got around to producing an
automatic version of it.
Now I have:
https://github.com/jmgurney/automtud
It's a simple script [...]
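For a feel of what it automates: the manual version boils down to
something like this (the interface, MTU and peer address here are
placeholders; the script detects the right values itself):

    ifconfig em0 mtu 9000         # raise the MTU if the NIC supports it
    ping -D -s 8972 192.0.2.1     # verify with don't-fragment set:
                                  # 9000 - 20 (IP) - 8 (ICMP) = 8972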
Hi,
Some hand-waving suggestions:
* if you're running something before 10.2, please disable IXGBE_FDIR
in sys/conf/options and sys/modules/ixgbe/Makefile. It's buggy and has
caused a lot of issues.
* It sounds like some extra latency is happening, so I'd fiddle around
with interrupt settings (see the sketch below). [...]
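To check whether FDIR is compiled in, and for a starting point on the
interrupt knobs (the tunable names below are the in-tree ixgbe ones;
verify them with "sysctl hw.ix" on your release):

    # confirm where FDIR is referenced before rebuilding:
    grep -n IXGBE_FDIR sys/conf/options sys/modules/ixgbe/Makefile

    # interrupt moderation, e.g. in /boot/loader.conf:
    hw.ix.max_interrupt_rate=31250   # cap on per-queue interrupt rate
    hw.ix.enable_aim=1               # adaptive interrupt moderation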
On Mon, Aug 24, 2015 at 06:48:31PM -0400, Zaphod Beeblebrox wrote:
> So, as background, the minimum packet size for ethernet packets is 64
> bytes. [...]
So, as background, the minimum packet size for ethernet packets is 64
bytes (14-byte header + 46-byte minimum payload + 4-byte FCS). According
to at least Cisco, the minimum size for 802.1q (vlan, etc) packets is
then 68 bytes, since the tag adds 4 bytes. On at least BGE and BCE
interfaces, it seems (according to counters on my switch) that FreeBSD
doesn't honour this.
"show [...]
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=183407
Carlos J Puga Medina changed:

           What       |Removed |Added
           -----------+--------+--------
           Blocks     |        |202602
           Depends on |2[...]
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=183407
Carlos J Puga Medina changed:

           What       |Removed |Added
           -----------+--------+--------
           Depends on |        |202602
> On 24 Aug 2015, at 17:33, Markus Gebert wrote:
>
>> On 23.08.2015, at 17:09, Kristof Provost wrote:
>>
>> - PR 202351
>> This is a panic after ip6 reassembly in pf. We set the rcvif to NULL
>> when refragmenting. That seems to go OK except when we're refragmenting
>> broadcast/multicast packets [...]
Hi Kristof
> On 23.08.2015, at 17:09, Kristof Provost wrote:
>
> - PR 202351
> This is a panic after ip6 reassembly in pf. We set the rcvif to NULL
> when refragmenting. That seems to go OK except when we're refragmenting
> broadcast/multicast packets in the forwarding path. It's not at all [...]
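(For anyone trying to reproduce this: if I understand the setup right,
the reassembly/refragmentation path is only exercised when fragment
reassembly is enabled in pf, e.g.:

    # enable fragment reassembly and reload the ruleset:
    printf 'scrub in all fragment reassemble\n' >> /etc/pf.conf
    pfctl -f /etc/pf.conf

Adjust to your own ruleset; syntax per pf.conf(5).)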
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202510
--- Comment #3 from d...@my.gd ---
Patch attached; it changes rc's behaviour so that "ipv4_addrs" addresses
are set up on interfaces before "ifconfig_<ifname>_aliasN" addresses.
Since this changes the system's behaviour, it may require consensus
before [...]
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202510
--- Comment #2 from d...@my.gd ---
Created attachment 160307
--> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=160307&action=edit
network.subr IP alias order patch
Ensures "ipv4_addrs_" addresses are set up before
"ifconfig__aliasN"
On Sun, Aug 23, 2015 at 05:48:28PM +0100, Gary Palmer wrote:
> On Sun, Aug 23, 2015 at 04:37:56PM +0100, Matthew Seaman wrote:
> > On 23/08/2015 16:04, Gary Palmer wrote:
> > > However if I configure other IPs on other interfaces from the netblock
> > > that has been delegated to me and either [...]
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202351
markus.geb...@hostpoint.ch changed:

           What |Removed |Added
           -----+--------+---------------------------
           CC   |        |markus.geb...@hostpoint.ch
Daniel Braniss wrote:
>
> > On 24 Aug 2015, at 10:22, Hans Petter Selasky wrote:
> >
> > On 08/24/15 01:02, Rick Macklem wrote:
> >> The other thing is the degradation seems to cut the rate by about half
> >> each time.
> >> 300-->150-->70 I have no idea if this helps to explain it.
> >
> > Might [...]
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202510
--- Comment #1 from d...@my.gd ---
To clarify: the problem stems more from how IPs are assigned by rc than
from CARP itself.
Either the old ipv4_addrs_<ifname> syntax should be deprecated, or it
should be moved higher up so that it's enacted [...]
> On 24 Aug 2015, at 10:22, Hans Petter Selasky wrote:
>
> On 08/24/15 01:02, Rick Macklem wrote:
>> The other thing is the degradation seems to cut the rate by about half each
>> time.
>> 300-->150-->70 I have no idea if this helps to explain it.
>
> Might be a NUMA binding issue for the processes involved. [...]
> On 24 Aug 2015, at 02:02, Rick Macklem wrote:
>
> Daniel Braniss wrote:
>>
>>> On 22 Aug 2015, at 14:59, Rick Macklem wrote:
>>>
>>> Daniel Braniss wrote:
> On Aug 22, 2015, at 12:46 AM, Rick Macklem wrote:
>
> Yonghyeon PYUN wrote:
>> On Wed, Aug 19, 2015 at 09:00:3[...]
On 08/24/15 01:02, Rick Macklem wrote:
> The other thing is the degradation seems to cut the rate by about half each
> time.
> 300-->150-->70 I have no idea if this helps to explain it.

Might be a NUMA binding issue for the processes involved.

man cpuset

--HPS
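A concrete starting point (the CPU list and pid are illustrative; pick
the CPUs of one NUMA domain, see cpuset(1)):

    cpuset -l 0-7 -p 1234                      # rebind an already-running pid
    cpuset -l 0-7 /usr/sbin/nfsd -u -t -n 64   # or start nfsd pinned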
Hi Eric,
Did you manage to look into this problem?
- Evgeny

On 19.08.2015 23:16, Eric Joyner wrote:
> Yeah; it should be able to do up to 64 queues for the PFs. It's
> possible for the NVM to limit the RSS table size and entry width, but
> that seems unlikely.
> - Eric
> On Wed, Aug 19, 2015 at 12:41 PM Adr[...]
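To see how many queues the driver actually configured, something along
these lines should work (device unit and OID names vary by release;
check with "sysctl -a"):

    sysctl -a | grep -i 'ixl.*queue'   # per-queue OIDs indicate the queue count
    sysctl hw.ixl                      # driver tunables, where present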