https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #18 from commit-h...@freebsd.org ---
A commit references this bug:
Author: cem
Date: Mon Dec 11 04:32:37 UTC 2017
New revision: 326758
URL: https://svnweb.freebsd.org/changeset/base/326758
Log:
i386: Bump KSTACK_PAGES default
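For anyone applying this before updating: kern.kstack_pages is also a loader
tunable, so the same effect can be had without rebuilding the kernel. A minimal
sketch, where 4 matches the new default and the amd64 value:

  # /boot/loader.conf -- kernel stack size in pages, read once at boot
  kern.kstack_pages=4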
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #17 from Shreesh Holla ---
(In reply to Conrad Meyer from comment #16)
I agree with @eugene. The common case should just work, and special
configurations are, well, special configurations; they can be tuned accordingly.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #16 from Conrad Meyer ---
(In reply to Eugene Grosbein from comment #13)
(In reply to Shreesh Holla from comment #14)
Yeah, just bumping i386 KSTACK_PAGES will at least give parity with amd64.
That's reasonable. It seems kind
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #15 from Michael Tuexen ---
OK, so it is a problem on i386. Haven't tested that for ages... One problem we
ran into in the past was that the stack grew due to compiler inlining. Not
sure whether that is the case here. I can try to nail
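One way to hunt for such frames is to let the compiler flag them; a sketch
assuming clang and a stock build tree (the 2 KiB threshold is an arbitrary
choice, and COPTFLAGS is restated so the usual optimization flags survive):

  # rebuild the kernel, warning on any function whose stack frame exceeds 2 KiB
  make buildkernel KERNCONF=GENERIC COPTFLAGS="-O2 -pipe -Wframe-larger-than=2048"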
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #14 from Shreesh Holla ---
(In reply to Conrad Meyer from comment #12)
@conrad - I see what you mean, since i386 => 32-bit. And yes, definitely fixing
the SCTP stack to not use that much stack is the right fix. From what I saw it
Eugene Grosbein wrote:
> 11.12.2017 2:54, Michael Grimm wrote:
>> *BUT* if I boot with the default 1500 setting,
>> then change the MTU to e.g. 1450 and *immediately* back to 1500 manually,
>> I do not encounter any performance loss at all. Why?
>> Even when booting with 1490 and immediately setting
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #13 from Eugene Grosbein ---
There is no reason to keep kstack_pages < 4 for i386, with the exception of a
very specific load pattern where you have an enormous number of threads in the
system.
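The value in effect is easy to check at runtime (it is a read-only sysctl and
can only be set from loader.conf); on a stock i386 kernel this would be
expected to show the old default:

  $ sysctl kern.kstack_pages
  kern.kstack_pages: 2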
11.12.2017 2:54, Michael Grimm wrote:
> I already lowered the MTU: if I configure vtnet0 with an MTU of 1490 at boot
> time, I do not notice a performance loss compared to the default 1500
> setting.
>
>>> *BUT* if I do a "ifconfig vtnet0 mtu 1450 up ; ifconfig vtnet0 mtu 1500 up"
>>> I do ob
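If the lower MTU itself (rather than the toggle) turns out to be the fix, it
can be made persistent in /etc/rc.conf; a sketch assuming a static address
(192.0.2.10 is a documentation placeholder):

  ifconfig_vtnet0="inet 192.0.2.10 netmask 255.255.255.0 mtu 1490"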
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #12 from Conrad Meyer ---
(In reply to Shreesh Holla from comment #11)
Well, pointers and native words *are* smaller on i386, so it isn't totally
unreasonable for the stack size to be smaller than on amd64. Also, i386-only
devices
Eugene Grosbein wrote:
> 10.12.2017 23:55, Michael Grimm wrote:
> "bad cksum 0" is pretty normal for traffic going out via interface supporting
> hardware checksum offload,
> so kernel skips computing checksum before passing packets to the NIC.
Ok, good to know.
> Your problem more likely is
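A quick way to take the offload out of the picture while debugging, using the
ix(4) interface from the original report:

  # disable transmit/receive checksum offload for the test
  ifconfig ix0 -txcsum -rxcsum
  # restore it afterwards
  ifconfig ix0 txcsum rxcsum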
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #11 from Shreesh Holla ---
(In reply to Eugene Grosbein from comment #10)
Yes, this did the trick: adding the setting to loader.conf as Eugene suggested.
Also @Michael - as an FYI - I am using global IPv6 addresses.
That bug that Eugene
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
Eugene Grosbein changed:

           What    |Removed     |Added
----------------------------------------
         Status    |New         |Open
             CC    |
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #9 from Conrad Meyer ---
@Michael, one other thing to consider is that Shreesh is running i386, which
uses a smaller KSTACK_PAGES default (2) than amd64 (4). Double fault is
consistent with overrunning the end of the stack. It
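(For scale, assuming the usual 4 KiB x86 page size: the i386 default gives
2 x 4 KiB = 8 KiB of kernel stack, versus 4 x 4 KiB = 16 KiB on amd64, so a
code path needing more than 8 KiB of stack overruns on i386 while still
fitting on amd64.)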
Hi

I run an IPsec/racoon tunnel between two servers (11.1-STABLE #0 r326663).
Some days ago I migrated one of my servers from bare metal to a public cloud
instance. Now I observe weird performance issues from the new to the old server:
ifconfig (OLD server, bare metal):
ix0: flags=8843
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #8 from Michael Tuexen ---
(In reply to Shreesh Holla from comment #7)
Is it a global IPv6 address or a link-local one? I have been testing with
link-local addresses...
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #7 from Shreesh Holla ---
(In reply to Michael Tuexen from comment #6)
Oh, I mentioned it in the bug report right at the beginning. Here it is again:
ncat --sctp -l
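For completeness, the matching client side; ncat defaults to port 31337 on
both ends when none is given, and the peer address below is a placeholder:

  # server: listen for an SCTP association (as above)
  ncat --sctp -l
  # client: connect to the listener
  ncat --sctp 2001:db8::1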
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224218
--- Comment #6 from Michael Tuexen ---
(In reply to Shreesh Holla from comment #5)
Can you please state the arguments you use on the server side for starting
ncat?