Running "iostat -nxce 1", I saw write sizes alternate between two raidz groups in the same pool.
At times, the drives on controller 1 show 3-10x larger writes than the ones on controller 2:

                    extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c1t1d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t10d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c3t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c4t0d0
    0.0    9.0    0.0    4.0  0.0  0.0    0.0    0.5   0   0   1   0   0   1 c0t12d0
    0.0    9.0    0.0    4.0  0.0  0.0    0.0    0.1   0   0   1   0   0   1 c0t13d0
    0.0    9.0    0.0    4.5  0.0  0.0    0.0    0.1   0   0   1   0   0   1 c0t14d0
    0.0    8.0    0.0    4.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c0t15d0
    0.0    9.0    0.0    3.5  0.0  0.0    0.0    0.1   0   0   1   0   0   1 c0t16d0
    0.0    9.0    0.0    3.5  0.0  0.0    0.0    0.1   0   0   1   0   0   1 c0t17d0
    0.0   20.0    0.0   56.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t6d0
    0.0   20.0    0.0   55.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t7d0
    0.0   20.0    0.0   53.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t8d0
    0.0   20.0    0.0   53.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t9d0
    0.0   20.0    0.0   55.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t10d0
    0.0   20.0    0.0   55.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c2t12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c2t13d0
     cpu
 us sy wt id
  0 47  0 53
                    extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c1t1d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t10d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c3t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c4t0d0
    0.0    8.0    0.0   18.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c0t12d0
    0.0    8.0    0.0   18.5  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t13d0
    0.0   11.0    0.0   20.5  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t14d0
    0.0   12.0    0.0   20.5  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t15d0
    0.0    8.0    0.0   19.0  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c0t16d0
    0.0    8.0    0.0   18.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c0t17d0
    0.0   21.0    0.0   66.0  0.0  0.0    0.0    0.4   0   1   1   0   0   1 c2t6d0
    0.0   21.0    0.0   66.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t7d0
    0.0   21.0    0.0   65.5  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t8d0
    0.0   20.0    0.0   64.0  0.0  0.0    0.0    0.4   0   0   1   0   0   1 c2t9d0
    0.0   21.0    0.0   65.0  0.0  0.0    0.0    0.4   0   0   1   0   0   1 c2t10d0
    0.0   21.0    0.0   64.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c2t12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c2t13d0
     cpu
 us sy wt id
  0 23  0 77
....
At other times, the drives on controller 2 show 3-10x larger writes than the ones on controller 1:

                    extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c1t1d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t10d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c3t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c4t0d0
    0.0   24.0    0.0   65.5  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t12d0
    0.0   24.0    0.0   64.0  0.0  0.0    0.0    0.4   0   0   1   0   0   1 c0t13d0
    0.0   25.0    0.0   67.0  0.0  0.0    0.0    0.5   0   0   1   0   0   1 c0t14d0
    0.0   25.0    0.0   66.5  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t15d0
    0.0   26.0    0.0   69.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t16d0
    0.0   26.0    0.0   69.0  0.0  0.0    0.0    0.5   0   0   1   0   0   1 c0t17d0
    0.0   12.0    0.0   20.5  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t6d0
    0.0   12.0    0.0   20.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t7d0
    0.0   13.0    0.0   20.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t8d0
    0.0   13.0    0.0   20.0  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t9d0
    0.0   14.0    0.0   22.0  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t10d0
    0.0   14.0    0.0   22.0  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c2t12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c2t13d0
     cpu
 us sy wt id
  0 42  0 58
                    extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c1t1d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t10d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c0t11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c3t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   2   0   0   2 c4t0d0
    0.0   20.0    0.0   56.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t12d0
    0.0   20.0    0.0   55.5  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t13d0
    0.0   19.0    0.0   54.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t14d0
    0.0   19.0    0.0   53.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t15d0
    0.0   18.0    0.0   54.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c0t16d0
    0.0   18.0    0.0   54.5  0.0  0.0    0.0    0.4   0   0   1   0   0   1 c0t17d0
    0.0   14.0    0.0   28.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t6d0
    0.0   14.0    0.0   28.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t7d0
    0.0   14.0    0.0   30.5  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t8d0
    0.0   14.0    0.0   30.0  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t9d0
    0.0   14.0    0.0   30.0  0.0  0.0    0.0    0.2   0   0   1   0   0   1 c2t10d0
    0.0   14.0    0.0   29.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c2t11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c2t12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   1   0   0   1 c2t13d0

Is this expected behavior? Shouldn't writes be spread evenly across all the drives all the time? Or does the even spreading only apply to drives on the same RAID controller?

Here is the pool structure:

  pool: zpool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zpool        ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0
            c0t15d0  ONLINE       0     0     0
            c0t16d0  ONLINE       0     0     0
            c0t17d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t6d0   ONLINE       0     0     0
            c2t7d0   ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
            c2t11d0  ONLINE       0     0     0
        spares
          c2t12d0    AVAIL
          c2t13d0    AVAIL
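In case it helps to see the two groups side by side, here is a rough nawk sketch that totals the kw/s column per raidz group from the iostat output above. It assumes the exact column layout shown (kw/s in field 4, device name in the last field) and uses the per-interval cpu header as the flush point:

    iostat -nxce 1 | nawk '
        $NF ~ /^c0t1[2-7]d0$/ { g1 += $4 }   # first raidz1: c0t12d0 .. c0t17d0
        $NF ~ /^c2t/          { g2 += $4 }   # second raidz1 (the idle spares add 0)
        $1 == "cpu" {
            printf "c0 group: %7.1f kw/s   c2 group: %7.1f kw/s\n", g1, g2
            g1 = g2 = 0
        }'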