I observed something like this a while ago, but assumed it was something 
I did. (It usually is... ;)

Tell me - if you watch with iostat -x 1, do you see bursts of I/O 
followed by periods of nothing, or just a slow, steady stream of data?

I was seeing intermittent stoppages in I/O, with bursts of data on 
occasion...
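
Something like the following is what I mean - write in one window, watch 
the disks in another (the pool name "tank" and the test file are only 
placeholders, so adjust to wherever your pool actually lives):

   # window 1: the same sort of streaming write Martin describes
   dd if=/dev/zero of=/tank/junk.dat bs=16k count=150000

   # window 2: extended per-device stats, one-second samples
   iostat -x 1

   # "bursts" would show kw/s and %b spiking for a second or two and then
   # sitting near zero; a "slow stream" would show kw/s low but non-zero
   # for the whole time the data is being written out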

Maybe it's not just me... Unfortunately, I'm still running old nv and 
xen bits, so I can't speak to the 'current' situation...

Cheers.

Nathan.

Martin wrote:
> Hello
> 
> I've got Solaris Express Community Edition build 75 (75a) installed on an 
> Asus P5K-E/WiFi-AP (P35/ICH9R-based) board.  CPU=Q6700, RAM=8GB, 
> disks=Samsung HD501LJ and (older) Maxtor 6H500F0.
> 
> When the O/S is running on bare metal, i.e. no xVM/Xen hypervisor, 
> everything is fine.
> 
> When it's booted up running xVM and the hypervisor, then unlike plain disk 
> I/O, and unlike SVM volumes, ZFS is around 20 times slower.
> 
> Specifically, with either a plain UFS on a raw/block disk device, or UFS on an 
> SVM metadevice, a command such as dd if=/dev/zero of=2g.5ish.dat bs=16k 
> count=150000 takes less than a minute, with an I/O rate of around 30-50MB/s.
> 
> Similarly, when running on bare metal, output to a ZFS volume, as reported by 
> zpool iostat, shows a similarly high output rate (it also takes less than a 
> minute to complete).
> 
> But, when running under xVM and the hypervisor, although the UFS rates are 
> still good, the ZFS rate drops off after around 500MB.
> 
> For instance, if a window is left running zpool iostat 1 1000, then after the 
> "dd" command above has been run, there are about 7 lines showing a rate of 
> 70MB/s, and then the rate drops to around 2.5MB/s until the entire file has 
> been written.  Since the dd command itself completes and returns control back 
> to the shell in around 5 seconds, the 2GB or so of data is cached and is being 
> written out in the background.  It's similar with either the Samsung or the 
> Maxtor disks (though the Samsung is slightly faster).
> 
> Although previous releases running on bare metal have been fine, the same 
> problem with xVM/Xen also exists in the earlier b66-0624-xen drop of 
> OpenSolaris.
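
For what it's worth, the way I'd capture the whole event - including the 
write-back that carries on after dd has returned to the shell - is roughly 
the following (the pool name "tank" is only an example, and ptime is just 
there to grab an elapsed time):

   # window 1: one-second samples of pool throughput until the writes stop
   zpool iostat tank 1 1000

   # window 2: the same ~2.4GB write, followed by a sync to push the cached
   # data out; sync only schedules the flush, so judge completion from the
   # zpool iostat output rather than the ptime figure alone
   ptime sh -c 'dd if=/dev/zero of=/tank/2g.5ish.dat bs=16k count=150000; sync'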
