I have some important information that should shed some light on this behavior:
This evening I created a new filesystem across the very same 50 disks, with the COMPRESS attribute set. My goal was to isolate some workload to the new filesystem, so I started moving a 100GB directory tree over to the new FS. While copying I was averaging around 25MB/s read and 25MB/s write, as expected. [b]Then I opened 'vi' and wrote out a new file in the new filesystem, and what I saw was shocking: my reads remained the same, but my writes shot up to the 150+MB/s range. This abnormal I/O pattern continued until 'vi' returned from the write request.[/b] Here is the 'zpool iostat mtdc 30' output:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mtdc         806G  2.48T     38    173  1.93M  7.52M
mtdc         806G  2.48T    188    228  15.0M  8.78M
mtdc         807G  2.48T    266    624  14.0M  16.5M
mtdc         807G  2.48T    286    670  17.1M  14.5M
mtdc         807G  2.48T    293  1.21K  18.2M  98.4M  <<-- vi activity, note mismatch in r/w rates
mtdc         808G  2.48T    457    560  35.5M  24.2M
mtdc         809G  2.48T    405    504  31.7M  26.3M
mtdc         809G  2.48T    328  1.37K  25.2M   152M  <<-- vi activity, note mismatch in r/w rates
mtdc         810G  2.48T    428    671  33.0M  48.0M
mtdc         811G  2.48T    463    500  35.9M  26.4M
mtdc         811G  2.48T    207  1.39K  16.5M   154M  <<-- vi activity, note mismatch in r/w rates
mtdc         812G  2.48T    310    878  23.9M  77.7M
mtdc         813G  2.48T    362    494  26.1M  25.3M
mtdc         813G  2.48T    381  1.05K  26.8M   103M
mtdc         814G  2.48T    347  1.33K  25.0M   135M
mtdc         815G  2.48T    288  1.38K  21.7M   150M
mtdc         815G  2.48T    425    513  32.7M  25.8M
mtdc         816G  2.47T    413    515  30.2M  25.1M
mtdc         817G  2.47T    341    512  21.9M  25.1M
mtdc         818G  2.47T    293    529  18.5M  25.5M
mtdc         818G  2.47T    344    508  23.4M  24.7M
mtdc         819G  2.47T    442    512  33.4M  24.1M
mtdc         820G  2.47T    385    483  28.3M  24.4M
mtdc         820G  2.47T    372    483  24.7M  24.7M
mtdc         821G  2.47T    347    535  23.0M  24.2M
mtdc         821G  2.47T    290    497  17.9M  24.9M
mtdc         823G  2.47T    349    517  20.0M  24.1M
mtdc         823G  2.47T    399    512  21.2M  24.5M
mtdc         824G  2.47T    383    612  19.3M  17.7M
mtdc         824G  2.47T    390    614  14.2M  17.5M

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss