On 2019-04-09, Mark Schneider wrote:
> Hi Peter
>
> Thank you very much for your feedback.
>
> It looks like the performance issue is more complex than I expected.
> Just for the test I installed OpenBSD 6.4 and FreeBSD 13.0 on a few
> different servers and compared the results (details are i
Hi Karel
Thank you very much for your hint.
Yes, I used FreeBSD 13.0 for final performance testing (scp transfer and
iperf3).
Earlier I ran some tests with stable FreeBSD 12.0, but the performance was
much lower (though still approx. 3 times better than OpenBSD 6.4). FreeBSD
12.0 recognized less N
On 4/9/19 6:56 PM, Mark Schneider wrote:
Hi Peter
Thank you very much for your feedback.
It looks like the performance issue is more complex than I expected.
Just for the test I installed OpenBSD 6.4 and FreeBSD 13.0 on a few
different servers and compared the results (details are in at
compared filesystem performance, was Re: 10GBit network
performance on OpenBSD 6.4
Hi Stuart
Thank you very much for the link.
The total ssh-based performance depends strongly on the server hardware
(and the installed OSes).
For the "fastest" test configuration (server hardware / installed OS) it
was possible to achieve
a total transfer speed of approx. 400 MBytes/s (on the 10Gbit fi
Am 08.04.2019 23:46, schrieb Anatoli:
Thank you very much for the idea Anatoli!
Running dd with "/dev/zero" and "/dev/null" gave me a very good
overview of what is going on (different server hardware and operating systems):
ironm@wheezy:~$ time dd if=/dev/zero of=file1.tmp bs=1M count=4096 &
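As a rough guide for reading the dd output above: the throughput figure is just bytes transferred divided by elapsed time. A minimal sketch with one made-up number (the 10-second elapsed time is purely an assumption for illustration, not a measured result):

```shell
# dd reports bytes copied and elapsed seconds; MB/s = bytes / seconds / 1e6.
# The byte count matches the bs=1M count=4096 run above; the elapsed
# time is an assumed, illustrative value.
bytes=4294967296   # 4096 MiB
seconds=10         # assumed elapsed time
echo "$(( bytes / seconds / 1000000 )) MB/s"
```

With those numbers the result is well below 10Gbit wire speed, which is exactly the kind of comparison the dd runs here are for.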
Hi
> What's your performance without scp? tcpbench / netcat, for example?
Thank you very much for your hint. I have not run them yet (only iperf3,
as listed below).
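For reference, a raw TCP test without the ssh overhead might look like this (a sketch; the 10.0.0.2 address comes from the tests in this thread, the port number is a placeholder):

```
# on the receiving host: tcpbench server mode, or netcat discarding input
$ tcpbench -s
$ nc -l 12345 > /dev/null

# on the sending host (interrupt tcpbench with ^C when done)
$ tcpbench 10.0.0.2
$ dd if=/dev/zero bs=1M count=4096 | nc 10.0.0.2 12345
```

This takes both the disks and the ssh cipher out of the measurement, so it isolates the network stack itself.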
Further test details are in attached files.
Kind regards
Mark
--
m...@it-infrastrukturen.org
Am 08.04.2019 22:06, schrieb Abel A
Hi Anatoly
Thank you very much for your helpful hints.
The CPU usage (on one of the available cores) was nearly 100%.
FreeBSD 13.0 and Linux (Debian) currently seem to have faster network
stacks (and faster mass-storage handling).
During the test I used Debian Linux running in live mode (transfer to DDR3
lp you apart from a "me too" but at least it
rules out it being an issue with your particular hardware.
Kind Regards,
Peter Membrey
- Original Message -
From: "Mark Schneider"
To: "misc"
Sent: Monday, 8 April, 2019 06:09:09
Subject: Re: 10GBit network
Hello Tom
Thank you very much for your hint.
I have disabled pf with the "pfctl -d" command but didn't notice any
difference in the 10GBit transfer speed.
The CPU usage was high (around 100% for one of the available CPU cores).
# Single send
obsdsrv2$ scp 4GByte-random.bin ironm@10.0.0.2:/home/ironm
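Since one core sits near 100% during scp, the cipher choice may matter; a sketch of forcing a fast AEAD cipher (assuming both ends run a reasonably recent OpenSSH; file and host are the ones from the test above):

```
obsdsrv2$ scp -c aes128-gcm@openssh.com 4GByte-random.bin ironm@10.0.0.2:/home/ironm
```

If the transfer rate changes noticeably with the cipher, the bottleneck is the crypto on one core rather than the network stack.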
gwes [g...@oat.com] wrote:
>
> That doesn't answer the question: if you say
> dd if=/dev/zero of=/dev/sda (linux) /dev/rsd0c (bsd) bs=64k count=100
> what transfer rate is reported
>
Totally agree. Anatoli, could you please compare?
> That number represents the maximum possible long-term fi
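The read-side counterpart of the quoted write test, bypassing the filesystem entirely, might look like this (a sketch; the device names are examples and must be checked against dmesg first, since swapping if/of would destroy data):

```
# Linux raw read:
#   dd if=/dev/sda of=/dev/null bs=64k count=100000
# OpenBSD raw read (raw character device):
#   dd if=/dev/rsd0c of=/dev/null bs=64k count=100000
```

Comparing the raw-device rate against the filesystem rate separates disk limits from filesystem overhead.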
On 2019-04-07, Mark Schneider wrote:
> Short feedback:
>
> Just for the test I have checked the 10GBit network performance
> between two FreeBSD 13.0 servers (both HP DL380g7 machines)
> transferring data in both directions
>
> # ---
> ironm@fbsdsrv2:~ $ scp ironm@200.0.0.10:/home/ironm/t2.iso t100
On 04/08/19 19:29, Chris Cappuccio wrote:
gwes [g...@oat.com] wrote:
What is the rated transfer rate of the SSD you're using to test?
SATA 3 wire speed is 6 Gbit/s, and realistically a 500 MB/s raw rate
is near the top.
Anything over that is probably an artefact from a cache somewhere.
He's us
GB/s with
nanoseconds latency, but that's not the case unfortunately (at least in
my setup).
*From:* Joseph Mayer
*Sent:* Monday, April 08, 2019 22:52
*To:* Chris Cappuccio
*Cc:* Anatoli , Misc
*Subject:* Re: 10GBit network performance on OpenBSD 6.4
On Tuesday, April 9, 2019 3:28 AM, Chris Cappuccio wrote:
> Anatoli [m...@anatoli.ws] wrote:
> > I've seen extremely slow HDD performance in OpenBSD, like 12x slower than on
> > Linux, also no filesystem cache, so depending on your HDD with scp you may
> > be hitting the max throughput for the FS,
If you can suggest some specific tests to analyze the cause (i.e.
filesystem, hardware issues, scheduling, etc.), please let me know.
*From:* Chris Cappuccio
*Sent:* Monday, April 08, 2019 16:28
*To:* Anatoli
*Cc:* Misc
*Subject:* Re: 10GBit network performance on OpenBSD 6.4
gwes [g...@oat.com] wrote:
>
> What is the rated transfer rate of the SSD you're using to test?
> SATA 3 wire speed is 6 Gbit/s, and realistically a 500 MB/s raw rate
> is near the top.
>
> Anything over that is probably an artefact from a cache somewhere.
>
He's using NVMe with its own DRAM cache,
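The ~500 MB/s SATA ceiling quoted above follows from the line coding: SATA 3 signals at 6 Gbit/s but uses 8b/10b encoding, so only 8 of every 10 bits on the wire are payload. A quick check of the arithmetic:

```shell
# 6 Gbit/s link, 8b/10b coding: payload ceiling in MB/s
# 6000 Mbit/s * 8/10 (coding efficiency) / 8 (bits per byte) = 600 MB/s
ceiling=$(( 6000 * 8 / 10 / 8 ))
echo "$ceiling MB/s"   # payload ceiling before protocol overhead
```

Protocol framing and command overhead eat into that 600 MB/s, which is why ~500 MB/s is about the practical top for a SATA 3 device.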
On Sun, Apr 7, 2019 at 5:21 PM Mark Schneider
wrote:
> Short feedback:
>
> Just for the test I have checked the 10GBit network performance
> between two FreeBSD 13.0 servers (both HP DL380g7 machines)
> transferring data in both directions
>
> # ---
> ironm@fbsdsrv2:~ $ scp ironm@200.0.0.10:/home/
Anatoli [m...@anatoli.ws] wrote:
>
> I've seen extremely slow HDD performance in OpenBSD, like 12x slower than on
> Linux, also no filesystem cache, so depending on your HDD with scp you may
> be hitting the max throughput for the FS, not the network.
>
12x slower? That's insane. What are you ta
Hi,
I guess you're hitting two bottlenecks: CPU performance for iperf and
HDD performance for scp.
Check how much CPU is consumed during the iperf transfer, and try scp'ing
something not from/to the HDD, e.g. /dev/zero.
I've seen extremely slow HDD performance in OpenBSD, like 12x slower
than on
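One way to try the /dev/zero idea suggested above, as a sketch (user and host are the placeholder values used elsewhere in this thread): stream zeros through ssh and discard them at the far end, so neither disk participates in the transfer:

```
$ dd if=/dev/zero bs=1M count=4096 | ssh ironm@10.0.0.2 'cat > /dev/null'
```

If this runs much faster than scp'ing a file, the disks or the filesystem are the bottleneck; if it is just as slow, the limit is the ssh crypto or the network stack.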
Short feedback:
Just for the test I have checked the 10GBit network performance
between two FreeBSD 13.0 servers (both HP DL380g7 machines)
transferring data in both directions
# ---
ironm@fbsdsrv2:~ $ scp ironm@200.0.0.10:/home/ironm/t2.iso t100.iso
Password for ironm@fbsdsrv1:
t2.iso