Hello,
thank you so much. I did some tests with my X553 and the system survived
all of them: running tcpbench while unplugging the cables several times,
and ifconfig down & up.
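For anyone who wants to repeat the down/up part, a minimal sketch of the
loop I mean (ix0 and the sleep intervals are only arbitrary examples, run
as root while tcpbench is going):
# while true; do ifconfig ix0 down; sleep 2; ifconfig ix0 up; sleep 5; done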
I think the performance looks good too:
cpu0 at mainbus0: apid 4 (boot processor)
cpu0: Intel(R) Atom(TM) CPU C3558 @ 2.20GHz, 2200.41 MHz, 06-5f-01
...
ix0 at pci6 dev 0 function 0 "Intel X553 SGMII" rev 0x11: msi, address
00:90:0b:5f:80:98
ix1 at pci6 dev 0 function 1 "Intel X553 SGMII" rev 0x11: msi, address
00:90:0b:5f:80:99
ix2 at pci7 dev 0 function 0 "Intel X553 SGMII" rev 0x11: msi, address
00:90:0b:5f:80:9a
ix3 at pci7 dev 0 function 1 "Intel X553 SGMII" rev 0x11: msi, address
00:90:0b:5f:80:9b
Running a back-to-back tcpbench (between two rdomains) with pf disabled and
MTU 1500 results in about 880 Mbit/s; with MTU 9000 it reaches about 990 Mbit/s.
I ran tcpbench all weekend and did not see any problems.
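For reference, a minimal sketch of such a back-to-back rdomain setup
(interface names, rdomain numbers and addresses are just examples; the two
ports are cabled directly to each other):
# pfctl -d
# ifconfig ix0 rdomain 1 10.1.1.1/24 up
# ifconfig ix1 rdomain 2 10.1.1.2/24 up
# route -T 2 exec tcpbench -s &
# route -T 1 exec tcpbench 10.1.1.2
For the jumbo frame run, set mtu 9000 on both interfaces first, e.g.
# ifconfig ix0 mtu 9000 && ifconfig ix1 mtu 9000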
Nice work! I would really love to see the update in GENERIC.
- Joerg
On 31.10.2019 16:00, sven falempin wrote:
On Thu, Oct 31, 2019 at 9:49 AM sven falempin <sven.falem...@gmail.com>
wrote:
On Thu, Oct 31, 2019 at 9:17 AM Stuart Henderson <s...@spacehopper.org>
wrote:
On 2019/10/31 08:25, sven falempin wrote:
Thank you, the ./dev/pci/pcidevs_data.h and pcidevs.h changes are completely
removed from this diff.
The pcidevs update is no longer needed since pcidevs r1.1889.
I may have time to test it against 6.6 and see if it still works; it was
last tested on 6.4 (where a cross test also revealed that
plugging/unplugging SFPs may be non-functional).
I can test an already-working fibre ix for new problems at some point,
but I don't
think I'll have a fibre X553.
ppb7 at pci0 dev 22 function 0 "Intel C3000 PCIE" rev 0x11
"Intel X553 SFP+" rev 0x11 at pci8 dev 0 function 0 not configured
"Intel X553 SFP+" rev 0x11 at pci8 dev 0 function 1 not configured
ppb8 at pci0 dev 23 function 0 "Intel C3000 PCIE" rev 0x11
"Intel X553 SFP+" rev 0x11 at pci9 dev 0 function 0 not configured
"Intel X553 SFP+" rev 0x11 at pci9 dev 0 function 1 not configured
When I look at them, I recall some more involved people were talking about
10Gb in OpenBSD.
Aren't there some limitations currently, or can we expect the X553 to
perform at 10Gb?
Hey, looks like my diff still works:
OpenBSD 6.6-stable (GENERIC.MP) #21: Thu Oct 31 10:53:41 EDT 2019
ix0 at pci8 dev 0 function 0 "Intel X553 SFP+" rev 0x11: msi, address
00:30:18:0b:4a:81
ix1 at pci8 dev 0 function 1 "Intel X553 SFP+" rev 0x11: msi, address
00:30:18:0b:4a:82
ix2 at pci9 dev 0 function 0 "Intel X553 SFP+" rev 0x11: msi, address
00:30:18:0b:4a:83
ix3 at pci9 dev 0 function 1 "Intel X553 SFP+" rev 0x11: msi, address
00:30:18:0b:4a:84
# ifconfig ix
ix0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500
lladdr 00:30:18:0b:4a:81
index 3 priority 0 llprio 3
media: Ethernet autoselect
status: no carrier
ix1: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500
lladdr 00:30:18:0b:4a:82
index 4 priority 0 llprio 3
media: Ethernet autoselect (10GbaseSR full-duplex)
status: active
ix2: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500
lladdr 00:30:18:0b:4a:83
index 5 priority 0 llprio 3
media: Ethernet autoselect
status: no carrier
ix3: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500
lladdr 00:30:18:0b:4a:84
index 6 priority 0 llprio 3
media: Ethernet autoselect (10GbaseSR full-duplex)
status: active
# ifconfig ix1 rdomain 1 10.0.0.1
# ifconfig ix3 rdomain 3 10.0.0.3
# ping -V 3 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: icmp_seq=0 ttl=255 time=0.573 ms
64 bytes from 10.0.0.1: icmp_seq=1 ttl=255 time=0.246 ms
# route -T 1 exec iperf -c 10.0.0.3
------------------------------------------------------------
Client connecting to 10.0.0.3, TCP port 5001
TCP window size: 17.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.1 port 29912 connected with 10.0.0.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 871 MBytes 731 Mbits/sec
If anyone wants to help run it faster, you're welcome.
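Obvious things to try are a larger socket buffer and parallel streams, e.g.
(iperf 2.x options; window size and stream count are arbitrary starting
points):
# route -T 3 exec iperf -s -w 256K
# route -T 1 exec iperf -c 10.0.0.3 -w 256K -P 4 -t 30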