On Fri, Jul 21, 2017 at 09:15 -0400, Maxim Khitrov wrote:
> On Sat, Jul 16, 2016 at 6:37 AM, Mike Belopuhov <m...@belopuhov.com> wrote:
> > On 14 July 2016 at 14:54, Maxim Khitrov <m...@mxcrypt.com> wrote:
> >> On Wed, Jul 13, 2016 at 11:47 PM, Tinker <ti...@openmailbox.org> wrote:
> >>> On 2016-07-14 07:27, Maxim Khitrov wrote:
> >>> [...]
> >>>>
> >>>> No, the tests are run sequentially. Write performance is measured
> >>>> first (20 MB/s), then rewrite (12 MB/s), then read (37 MB/s), then
> >>>> seeks (95 IOPS).
> >>>
> >>>
> >>> Okay, you are on a totally weird platform. Or, on an OK platform with a
> >>> totally weird configuration.
> >>>
> >>> Or on an OK platform and configuration with a totally weird underlying
> >>> storage device.
> >>>
> >>> Are you on a magnet disk, are you using a virtual block device or virtual
> >>> SATA connection, or some legacy interface like IDE?
> >>>
> >>> I get some feeling that your hardware + platform + configuration
> >>> crappiness factor is fairly much through the ceiling.
> >>
> >> Dell R720 and R620 servers, 10 gigabit Ethernet SAN, Dell MD3660i
> >> storage array, 1.2 TB 10K RPM SAS disks in RAID6. I don't think there
> >> is anything crappy or weird about the configuration. Test results for
> >> CentOS on the same system: 170 MB/s write, 112 MB/s rewrite, 341 MB/s
> >> read, 746 IOPS.
> >>
> >> I'm assuming that there are others running OpenBSD on Xen, so I was
> >> hoping that someone else could share either bonnie++ or even just dd
> >> performance numbers. That would help us figure out if there really is
> >> an anomaly in our setup.
> >>
> >
> > Hi,
> >
> > Since you have already discovered that we don't provide a driver
> > for the paravirtualized disk interface (blkfront), I'd say that most likely
> > your setup is just fine, but emulated pciide performance is subpar.
> >
> > I plan to implement it, but right now the focus is on making networking
> > and specifically interrupt delivery reliable and efficient.
> >
> > Regards,
> > Mike
> 
> Hi Mike,
> 
> Revisiting this issue with OpenBSD 6.1-RELEASE and the new xbf driver
> on XenServer 7.0. The write performance is much better at 74 MB/s
> (still slower than other OSs, but good enough). IOPS also improved
> from 95 to 167. However, the read performance actually got worse and
> is now at 16 MB/s. Here are the full bonnie++ results:
> 
> Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> web4.dhcp.bhsai. 8G           76191  43 10052  17           16044  25 167.3  43
> Latency                         168ms     118ms               416ms     488ms
> 
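The exact bonnie++ invocation isn't shown in the thread; a minimal sketch that should produce comparable 8 GB results (the target directory and user below are assumptions, not taken from the thread):

$ bonnie++ -d /var/tmp -s 8g -u nobody

Here -s 8g sets the test file size (bonnie++ suggests roughly twice the guest's RAM so the read phase can't be served from the buffer cache) and -u runs the tests as an unprivileged user.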
> Here are two dd runs for writing and reading:
> 
> $ dd if=/dev/zero of=test bs=1M count=2048
> 2147483648 bytes transferred in 25.944 secs (82771861 bytes/sec)
> 
> $ dd if=test of=/dev/null bs=1M
> 2147483648 bytes transferred in 123.505 secs (17387767 bytes/sec)
> 
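To separate the xbf/backend path from the filesystem, a raw-device read is a quick cross-check; a sketch only, assuming sd0 (per the dmesg below) is the disk under test, and note the read is harmless but touches the whole-disk "c" partition:

$ dd if=/dev/rsd0c of=/dev/null bs=1M count=2048

If the raw read also lands around 16-17 MB/s, the bottleneck is below the filesystem (xbf or the Xen backend); if it is much faster, the slowdown is in the FFS read path.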
> Here's the dmesg output:
> 
> pvbus0 at mainbus0: Xen 4.6
> xen0 at pvbus0: features 0x2705, 32 grant table frames, event channel 3
> xbf0 at xen0 backend 0 channel 8: disk
> scsibus1 at xbf0: 2 targets
> sd0 at scsibus1 targ 0 lun 0: <Xen, phy xvda 768, 0000> SCSI3 0/direct fixed
> sd0: 73728MB, 512 bytes/sector, 150994944 sectors
> xbf1 at xen0 backend 0 channel 9: cdrom
> xbf1: timed out waiting for backend to connect
> 
> Any ideas on why the read performance is so poor?
> 

Yes, 6.1 has a bug that was fixed recently.  Please use -current.
Given how serious the recent fixes were, I cannot recommend
using anything but -current on Xen at this point.
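A quick way to confirm what a given guest is actually running (only a sketch; the grep pattern assumes dmesg lines like the ones quoted above):

$ dmesg | grep '^xbf'     # paravirtualized disk driver attached?
$ sysctl kern.version     # snapshot kernels report "-current" here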
