Carsten Aulbert writes:
> Carsten Aulbert wrote:
>
> > Put some stress on the system with bonnie and other tools, try to
> > find slow disks, and see if this could be the main problem; but also look
> > into more vdevs and then possibly move to raidz to somehow compensate
> > for the lost disk space. Since we have 4 cold spares on the shelf plus SMS
> > warnings on disk failures (that is, if FMA catches them), the risk
> > involved should be tolerable.
>
> First result with bonnie: during the "writing intelligently..." phase I
> see this as a 2-minute average:
>
> zpool iostat:
>
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> atlashome   1.70T  19.2T    225  1.49K   342K   107M
>   raidz2     550G  6.28T     74    409   114K  32.6M
>     c0t0d0      -      -      0    314  32.3K  2.51M
>     c1t0d0      -      -      0    315  31.8K  2.52M
>     c4t0d0      -      -      0    313  31.3K  2.52M
>     c6t0d0      -      -      0    315  32.3K  2.51M
>     c7t0d0      -      -      0    326  32.8K  2.50M
>     c0t1d0      -      -      0    309  33.9K  2.52M
>     c1t1d0      -      -      0    313  33.4K  2.51M
>     c4t1d0      -      -      0    314  33.4K  2.52M
>     c5t1d0      -      -      0    308  32.8K  2.52M
>     c6t1d0      -      -      0    314  31.3K  2.51M
>     c7t1d0      -      -      0    311  31.8K  2.52M
>     c0t2d0      -      -      0    309  31.8K  2.52M
>     c1t2d0      -      -      0    313  31.8K  2.51M
>     c4t2d0      -      -      0    315  31.8K  2.52M
>     c5t2d0      -      -      0    307  32.8K  2.52M
>   raidz2     567G  6.26T     64    529  96.5K  36.3M
>     c6t2d0      -      -      1    368  74.2K  2.79M
>     c7t2d0      -      -      1    366  74.2K  2.80M
>     c0t3d0      -      -      1    364  75.8K  2.80M
>     c1t3d0      -      -      1    365  75.2K  2.80M
>     c4t3d0      -      -      1    368  76.8K  2.80M
>     c5t3d0      -      -      1    362  76.3K  2.80M
>     c6t3d0      -      -      1    366  77.9K  2.80M
>     c7t3d0      -      -      1    365  76.8K  2.80M
>     c0t4d0      -      -      1    361  76.8K  2.80M
>     c1t4d0      -      -      1    363  75.8K  2.80M
>     c4t4d0      -      -      1    366  76.3K  2.80M
>     c6t4d0      -      -      1    364  78.4K  2.80M
>     c7t4d0      -      -      1    370  78.9K  2.79M
>     c0t5d0      -      -      1    365  77.3K  2.80M
>     c1t5d0      -      -      1    364  74.7K  2.80M
>   raidz2     620G  6.64T     86    582   131K  37.9M
>     c4t5d0      -      -     18    382  1.16M  2.74M
>     c5t5d0      -      -     10    380   674K  2.74M
>     c6t5d0      -      -     18    378  1.15M  2.73M
>     c7t5d0      -      -      9    384   628K  2.74M
>     c0t6d0      -      -     18    377  1.16M  2.74M
>     c1t6d0      -      -     10    383   680K  2.75M
>     c4t6d0      -      -     19    379  1.21M  2.73M
>     c5t6d0      -      -     10    383   691K  2.75M
>     c6t6d0      -      -     19    379  1.21M  2.73M
>     c7t6d0      -      -     10    383   676K  2.72M
>     c0t7d0      -      -     18    374  1.19M  2.75M
>     c1t7d0      -      -     10    381   676K  2.74M
>     c4t7d0      -      -     19    380  1.22M  2.74M
>     c5t7d0      -      -     10    382   696K  2.74M
>     c6t7d0      -      -     18    381  1.17M  2.74M
>     c7t7d0      -      -      9    386   631K  2.75M
> ----------  -----  -----  -----  -----  -----  -----
>
> iostat -Mnx 120:
>                     extended device statistics
>     r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t0d0
>     0.0    1.4    0.0    0.0  0.0  0.0    1.5    0.4   0   0 c5t0d0
>     0.6  351.5    0.0    2.6  0.4  0.1    1.2    0.2   3   8 c7t0d0
>     0.6  336.3    0.0    2.6  0.1  0.1    0.4    0.2   3   7 c0t0d0
>     0.6  340.8    0.0    2.6  0.2  0.1    0.6    0.2   3   7 c1t0d0
>     0.6  330.6    0.0    2.6  0.1  0.1    0.3    0.2   3   7 c5t1d0
>     0.6  336.7    0.0    2.6  0.1  0.1    0.3    0.2   3   7 c4t0d0
>     0.6  331.8    0.0    2.6  0.1  0.1    0.3    0.2   3   7 c0t1d0
>     0.6  339.0    0.0    2.6  0.4  0.1    1.1    0.2   3   7 c7t1d0
>     0.6  335.4    0.0    2.6  0.1  0.1    0.4    0.2   3   7 c1t1d0
>     0.6  329.2    0.0    2.6  0.1  0.1    0.3    0.2   3   7 c5t2d0
>     0.6  343.7    0.0    2.6  0.3  0.1    0.7    0.2   3   7 c4t1d0
>     0.6  331.8    0.0    2.6  0.1  0.1    0.3    0.2   2   7 c0t2d0
>     1.2  396.3    0.1    2.9  0.3  0.1    0.7    0.2   4   8 c7t2d0
>     0.6  336.7    0.0    2.6  0.1  0.1    0.4    0.2   3   7 c1t2d0
>     0.6  341.9    0.0    2.6  0.2  0.1    0.7    0.2   3   7 c4t2d0
>     1.3  390.7    0.1    2.9  0.3  0.1    0.8    0.2   4   9 c5t3d0
>     1.3  396.7    0.1    2.9  0.3  0.1    0.8    0.2   4   9 c7t3d0
>     1.3  393.6    0.1    2.9  0.2  0.1    0.6    0.2   4   9 c0t3d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c5t4d0
>     1.3  396.2    0.1    2.9  0.2  0.1    0.5    0.2   4   8 c1t3d0
>     1.3  399.2    0.1    2.9  0.3  0.1    0.8    0.2   4   9 c4t3d0
>     1.3  401.8    0.1    2.9  0.3  0.1    0.8    0.2   4   9 c7t4d0
>     1.3  388.5    0.1    2.9  0.2  0.1    0.5    0.2   4   8 c0t4d0
>     1.3  391.8    0.1    2.9  0.2  0.1    0.5    0.2   4   9 c1t4d0
>     1.3  395.1    0.1    2.9  0.2  0.1    0.6    0.2   4   8 c4t4d0
>     9.9  409.7    0.6    2.9  0.8  0.2    1.9    0.4  10  18 c7t5d0
>     1.3  395.0    0.1    2.9  0.3  0.1    0.6    0.2   4   9 c0t5d0
>    10.6  405.3    0.7    2.9  0.8  0.2    2.0    0.4  11  18 c5t5d0
>     1.3  392.8    0.1    2.9  0.2  0.1    0.5    0.2   4   8 c1t5d0
>    10.7  407.6    0.7    2.9  0.9  0.2    2.1    0.4  11  19 c7t6d0
>    18.6  407.5    1.2    2.9  1.0  0.2    2.4    0.6  15  24 c4t5d0
>    10.9  407.8    0.7    2.9  0.8  0.2    2.0    0.4  11  19 c5t6d0
>     0.6  337.6    0.0    2.6  0.2  0.1    0.5    0.2   3   7 c6t0d0
>    10.7  408.8    0.7    2.9  0.8  0.2    1.9    0.4  11  19 c1t6d0
>    10.0  411.6    0.6    2.9  0.8  0.2    1.8    0.4  11  18 c7t7d0
>    19.3  403.1    1.2    2.9  1.1  0.3    2.6    0.6  16  26 c4t6d0
>     0.6  336.2    0.0    2.6  0.1  0.1    0.4    0.2   3   7 c6t1d0
>    11.0  407.7    0.7    2.9  0.8  0.2    1.9    0.4  11  19 c5t7d0
>    10.6  406.6    0.7    2.9  0.8  0.2    2.0    0.4  11  19 c1t7d0
>    18.5  401.7    1.2    2.9  1.0  0.2    2.5    0.6  15  25 c0t6d0
>    19.4  404.8    1.2    2.9  1.0  0.3    2.5    0.6  15  25 c4t7d0
>     1.2  397.6    0.1    2.9  0.3  0.1    0.9    0.2   4   9 c6t2d0
>    19.0  398.7    1.2    2.9  1.0  0.3    2.5    0.6  15  25 c0t7d0
>     1.3  396.1    0.1    2.9  0.2  0.1    0.5    0.2   4   8 c6t3d0
>     1.3  392.8    0.1    2.9  0.2  0.1    0.4    0.2   4   8 c6t4d0
>    18.4  403.3    1.2    2.9  1.1  0.2    2.5    0.6  15  24 c6t5d0
>    19.3  402.7    1.2    2.9  1.1  0.3    2.5    0.6  15  25 c6t6d0
>    18.8  406.1    1.2    2.9  1.0  0.2    2.4    0.6  15  25 c6t7d0
>
>
> Are there any experts here who can say whether this is just because
> bonnie over NFSv3 is a very special test (if so, I can start something
> else; suggestions welcome), or whether some disks really are too busy
> and are slowing down the pool?
>
Here is my attempt:
http://blogs.sun.com/roch/entry/decoding_bonnie
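
As a quick way to screen the iostat output above for overly busy
spindles, something like the following sketch may help. It is only an
illustration: the `busy_disks` helper name and the 15% busy cutoff are
my own assumptions, not tuned values.

```shell
# Flag disks whose %b column from `iostat -Mnx` exceeds a threshold.
# In the -Mnx layout, %b is field 10 and the device name is field 11.
busy_disks() {
    awk -v thr="${1:-15}" '
        # data rows start with a number (the r/s column); headers do not
        $1 ~ /^[0-9]/ && $10 + 0 > thr { printf "%s %s%%\n", $11, $10 }'
}

# Demo on two rows taken from the report above; on the live box you
# would feed it real output, e.g.:  iostat -Mnx 120 2 | busy_disks 15
busy_disks 15 <<'EOF'
    0.6  351.5    0.0    2.6  0.4  0.1    1.2    0.2   3   8 c7t0d0
   18.6  407.5    1.2    2.9  1.0  0.2    2.4    0.6  15  24 c4t5d0
EOF
```

Run against the full report above, a 15% cutoff would flag exactly the
sixteen disks of the third raidz2 (18-26% busy) while the other drives
stay below 10%.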
-r
> Thanks for more insight
>
> Carsten
> _______________________________________________
> zfs-discuss mailing list
> [email protected]
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss