Hi all,

My system is powered by an Intel Core 2 Duo (E6600) with 8GB of RAM.
Running into some very heavy CPU usage.

First, a copy from one zpool to another (cp -aRv /oldtank/documents*
/tank/documents/*), both pools in the same system. Load averages are
around 4.8. I think I used lockstat correctly, and found the following:

movax@megatron:/tank# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 2960 events in 30.516 seconds (97 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
 1518  51%  51% 0.00     1800 cpu[0]                 SHA256TransformBlocks
  334  11%  63% 0.00     2820 cpu[0]                 vdev_raidz_generate_parity_pq
  261   9%  71% 0.00     3493 cpu[0]                 bcopy_altentry
  119   4%  75% 0.00     3033 cpu[0]                 mutex_enter
   73   2%  78% 0.00     2818 cpu[0]                 i86_mwait
<snip>

So, obviously, checksum calculation is, to put it mildly, eating up
CPU cycles like none other. I believe it's bad(TM) to turn off
checksums? (The zfs property is just checksum=on; I guess it has
defaulted to SHA256 checksums?)
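For what it's worth, if I remember right, checksum=on does not mean
SHA256 by default (fletcher4 is the usual default for data); SHA256
tends to show up when the property was set explicitly somewhere or
when dedup is enabled, since dedup forces SHA256. A quick way to check
what the pools are actually using (pool names taken from above):

```shell
# Show the effective checksum algorithm and where it was inherited from:
zfs get checksum tank oldtank

# Dedup implies SHA256 regardless of the checksum property, so check it too:
zfs get dedup tank oldtank
```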

Second, a copy from my desktop PC to my new zpool (a 5900rpm drive
over GigE to two 6-drive RAID-Z2 vdevs). Load average is around 3.
Again, with lockstat:

movax@megatron:/tank# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 2919 events in 30.089 seconds (97 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
 1298  44%  44% 0.00     1853 cpu[0]                 i86_mwait
  301  10%  55% 0.00     2700 cpu[0]                 vdev_raidz_generate_parity_pq
  144   5%  60% 0.00     3569 cpu[0]                 bcopy_altentry
  103   4%  63% 0.00     3933 cpu[0]                 ddi_getl
   83   3%  66% 0.00     2465 cpu[0]                 mutex_enter
<snip>

Here it seems 'i86_mwait' occupies the top spot (is this because I
have power management set to poll my CPU?). Is something odd happening
buffer-wise with the drives? (i.e., data coming in on the NIC,
buffered in the HBA somehow, and then flushed to disks?)
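As far as I know, i86_mwait is just the idle loop (an idle CPU parks
itself in mwait waiting for work), so the profiler counting it at the
top usually means there is idle headroom during that workload rather
than extra load. Two things worth looking at, sketched below; the
power.conf keywords are the standard OpenSolaris ones, adjust to
whatever your config actually contains:

```shell
# Check the CPU power-management settings (OpenSolaris):
grep -i cpupm /etc/power.conf

# Watch per-CPU utilization while the copy runs; if one core sits near
# 0% idle while the other is mostly idle, a single thread is the
# bottleneck (12 samples, 5 seconds apart):
mpstat 5 12
```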

In either case, it seems I'm hitting a ceiling of around 65MB/s. I
assume the CPU is the bottleneck, since bonnie++ benchmarks showed
much better performance for the vdevs. In the latter case, though, it
may just be a limitation of the source drive (if it can't read data
faster than 65MB/s, I can't write faster than that...).
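One quick way to separate those two possibilities is to time a raw
sequential read on the source side: if the desktop drive itself tops
out near 65MB/s, the network copy can't go any faster. A sketch (the
file path is just a placeholder; use any file on the source drive
larger than RAM so caching doesn't skew the number -- GNU dd prints a
rate at the end, otherwise divide bytes by elapsed time):

```shell
# Sequential read test of the source drive; ~2GB read to /dev/null:
dd if=/path/to/large/file of=/dev/null bs=1024k count=2048
```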

e: The E6600 is a first-generation 65nm LGA775 CPU clocked at 2.40GHz.
Dual-core, no Hyper-Threading.

-- 
--khd
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss