Hmm, I just repeated this test on my system:

bash-3.2# uname -a
SunOS soe-x4200m2-6 5.11 onnv-gate:2007-11-02 i86pc i386 i86xpv
bash-3.2# prtconf | more
System Configuration:  Sun Microsystems  i86pc
Memory size: 7945 Megabytes

bash-3.2# prtdiag | more
System Configuration: Sun Microsystems Sun Fire X4200 M2
BIOS Configuration: American Megatrends Inc. 080012 02/02/2007
BMC Configuration: IPMI 1.5 (KCS: Keyboard Controller Style)

bash-3.2# ptime dd if=/dev/zero of=/xen/myfile bs=16k count=150000
150000+0 records in
150000+0 records out

real       31.927
user        0.689
sys        15.750

bash-3.2# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
xen         15.3G   121G      0    261      0  32.7M
xen         15.3G   121G      0    350      0  43.8M
xen         15.3G   121G      0    392      0  48.9M
xen         15.3G   121G      0    631      0  79.0M
xen         15.5G   121G      0    532      0  60.1M
xen         15.6G   120G      0    570      0  65.1M
xen         15.6G   120G      0    645      0  80.7M
xen         15.6G   120G      0    516      0  63.6M
xen         15.7G   120G      0    403      0  39.9M
xen         15.7G   120G      0    585      0  73.1M
xen         15.7G   120G      0    573      0  71.7M
xen         15.7G   120G      0    579      0  72.4M
xen         15.7G   120G      0    583      0  72.9M
xen         15.7G   120G      0    568      0  71.1M
xen         16.1G   120G      0    400      0  39.0M
xen         16.1G   120G      0    584      0  73.0M
xen         16.1G   120G      0    568      0  71.0M
xen         16.1G   120G      0    585      0  73.1M
xen         16.1G   120G      0    583      0  72.8M
xen         16.1G   120G      0    665      0  83.2M
xen         16.1G   120G      0    643      0  80.4M
xen         16.1G   120G      0    603      0  75.0M
xen         16.1G   120G      5    526   320K  64.9M
xen         16.7G   119G      0    582      0  68.0M
xen         16.7G   119G      0    639      0  78.5M
xen         16.7G   119G      0    641      0  80.2M
xen         16.7G   119G      0    664      0  83.0M
xen         16.7G   119G      0    629      0  78.5M
xen         16.7G   119G      0    654      0  81.7M
xen         17.2G   119G      0    563  63.4K  63.5M
xen         17.3G   119G      0    525      0  59.2M
xen         17.3G   119G      0    619      0  71.4M
xen         17.4G   119G      0      7      0   448K
xen         17.4G   119G      0      0      0      0
xen         17.4G   119G      0    408      0  51.1M
xen         17.4G   119G      0    618      0  76.5M
xen         17.6G   118G      0    264      0  27.4M
xen         17.6G   118G      0      0      0      0
xen         17.6G   118G      0      0      0      0
xen         17.6G   118G      0      0      0      0
...<ad infinitum>

I don't seem to be experiencing the same result as yourself. The behaviour of
ZFS might vary between invocations, but I don't think that is related to xVM.

Can you get the results to vary when just booting under "bare metal"?

Gary

On Fri, Nov 02, 2007 at 10:46:56AM -0700, Martin wrote:
> I've removed half the memory, leaving 4Gb, and rebooted into "Solaris xVM",
> and re-tried under Dom0. Sadly, I still get a similar problem. With "dd
> if=/dev/zero of=myfile bs=16k count=150000" I get command returning in 15
> seconds, and "zpool iostat 1 1000" shows 22 records with an IO rate of around
> 80M, then 209 records of 2.5M (pretty consistent), then the final 11 records
> climbing to 2.82, 3.29, 3.05, 3.32, 3.17, 3.20, 3.33, 4.41, 5.44, 8.11
>
> regards
>
> Martin

--
Gary Pennington
Solaris Core OS
Sun Microsystems
[EMAIL PROTECTED]

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
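
For anyone who wants to repeat the comparison, the test above boils down to
something like the following loop - a minimal sketch only, assuming bash, a
pool named "xen" mounted at /xen, and the same dd parameters as above. Run it
once while booted under xVM (dom0) and once on bare metal, then compare the
two logs:

#!/bin/bash
# Sample per-second pool throughput in the background while the write runs.
POOL=xen                      # pool to watch (assumption: named "xen")
OUT=/xen/myfile               # output file on that pool
LOG=/tmp/zpool-iostat.$$.log  # where to keep the per-second samples

zpool iostat "$POOL" 1 > "$LOG" &
IOSTAT_PID=$!

# Time the same ~2.4GB sequential write used above (150000 x 16k records).
ptime dd if=/dev/zero of="$OUT" bs=16k count=150000

# Keep sampling for a while after dd returns, since ZFS may still be
# flushing dirty data to disk, then stop the sampler.
sleep 60
kill "$IOSTAT_PID"
echo "iostat samples are in $LOG"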