I did the same experiment in a VMware guest (SLES10 x64). The archive was stored on the vdisk and the untar went to the same vdisk. The storage backend is a Sun system with 64 GB RAM, 2 quad-core CPUs, 24 x 450 GB SAS disks in 4 vdevs of 6 disks as RAIDZ2, and an Intel X25-E as log device (c2t1d0). A StorageTek SAS RAID host bus adapter with 256 MB cache and BBU serves the zpool, and a second HBA serves the slog device: c3 is for the zpool, c2 for the slog (c2t1d0) and boot (c2t0d0) devices. There are currently 140 VMs running on it, accessed over NFS from vSphere 4 via two 1 Gb/s links.
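The measurement below boils down to: sample device statistics in the background while timing the untar. A minimal sketch of that procedure, with placeholders so it runs anywhere — a small generated gzip archive stands in for the actual gcc-4.4.3.tar.bz2, a scratch directory stands in for the ZFS dataset, and plain `iostat` stands in for the Solaris-specific `iostat -indexC 5`:

```shell
#!/bin/sh
# Sketch of the benchmark procedure; all paths here are hypothetical
# stand-ins, not the originals from this post.
WORK=${WORK:-/tmp/untar-bench}   # assumed scratch directory
mkdir -p "$WORK/src" "$WORK/dst"
for i in 1 2 3; do echo "file $i" > "$WORK/src/f$i"; done
tar czf "$WORK/archive.tar.gz" -C "$WORK" src   # stand-in archive

# Start the background sampler only if iostat exists on this system.
if command -v iostat >/dev/null 2>&1; then
    iostat 5 > "$WORK/iostat.log" 2>&1 &
    IOSTAT_PID=$!
fi

# The measured workload; date arithmetic replaces the shell's `time`.
START=$(date +%s)
( cd "$WORK/dst" && tar xzf "$WORK/archive.tar.gz" )
END=$(date +%s)
echo "untar took $((END - START))s"

if [ -n "${IOSTAT_PID:-}" ]; then kill "$IOSTAT_PID"; fi
```

On the real setup you would point the workload at a dataset on the pool under test and compare the per-controller columns before and during the run, as in the output below.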
zd-nms-s5:/build # iostat -indexC 5    (before untarring)
                    extended device statistics       ---- errors ---
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w   %b s/w h/w trn tot device
    0.0  396.0    0.0  9428.3  0.0  0.1    0.0    0.2   0    5   0   0   0   0 c2
    0.0   14.0    0.0    61.9  0.0  0.0    0.0    2.8   0    1   0   0   0   0 c2t0d0
    0.0  382.0    0.0  9366.4  0.0  0.0    0.0    0.1   0    3   0   0   0   0 c2t1d0
  265.4    0.0 3631.2     0.0  0.0  1.2    0.0    4.3   0  105   0   0   0   0 c3
    9.8    0.0  148.2     0.0  0.0  0.0    0.0    3.4   0    3   0   0   0   0 c3t0d0
    8.8    0.0  137.7     0.0  0.0  0.0    0.0    3.6   0    3   0   0   0   0 c3t1d0
    ....

zd-nms-s5:/build # iostat -indexC 5    (during untarring)
                    extended device statistics       ---- errors ---
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w   %b s/w h/w trn tot device
    0.0 1128.3    0.0 31713.6  0.0  0.2    0.0    0.1   0   12   0   0   0   0 c2
    0.0    0.0    0.0     0.0  0.0  0.0    0.0    0.0   0    0   0   0   0   0 c2t0d0
    0.0 1128.3    0.0 31713.6  0.0  0.2    0.0    0.1   1   12   0   0   0   0 c2t1d0
 2005.7 5708.9 7423.7 42041.5  0.1 61.7    0.0    8.0   0 1119   0   0   0   0 c3
   82.8  602.2  364.9  2408.4  0.0  4.4    0.0    6.4   1   68   0   0   0   0 c3t0d0
   72.4  601.6  288.5  2452.7  0.0  4.2    0.0    6.2   1   61   0   0   0   0 c3t1d0
    ....

zd-nms-s5:/build # time tar jxf /tmp/gcc-4.4.3.tar.bz2

real    0m58.086s
user    0m12.241s
sys     0m6.552s

Andreas
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss