Now it gets extremely slow at around 400G sent.
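
For context, the iostat output below is consistent with a local send/receive from vol1 into the new sh001a pool. The actual dataset and snapshot names are not shown in the post, so the following is only a hypothetical reconstruction of the kind of pipeline involved:

  # hypothetical reconstruction -- dataset and snapshot names are made up
  zfs snapshot vol1/data@migrate
  zfs send vol1/data@migrate | zfs recv -F sh001a/data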

The first iostat result was captured when the send operation started.
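
The per-vdev statistics below can be gathered with zpool iostat; something along these lines should reproduce them (the 30-second interval is an assumption, not what was necessarily used here):

  # per-vdev throughput, sampled every 30 seconds
  zpool iostat -v 30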

                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
sh001a       37.6G  16.2T      0  1.17K     82   146M
  raidz2     37.6G  16.2T      0  1.17K     82   146M
    c0t10d0      -      -      0    201    974  21.0M
    c0t11d0      -      -      0    201    974  21.1M
    c0t23d0      -      -      0    201  1.56K  21.0M
    c0t24d0      -      -      0    201  1.26K  21.0M
    c0t25d0      -      -      0    201    662  21.1M
    c0t26d0      -      -      0    201  1.26K  21.1M
    c0t2d0       -      -      0    202    974  21.1M
    c0t5d0       -      -      0    200    662  20.9M
    c0t6d0       -      -      0    200  1.26K  21.0M
-----------  -----  -----  -----  -----  -----  -----
syspool      10.6G   137G     11     13   668K   137K
  c3d0s0     10.6G   137G     11     13   668K   137K
-----------  -----  -----  -----  -----  -----  -----
vol1         5.40T  1.85T    621      5  76.9M  12.4K
  raidz1     5.40T  1.85T    621      5  76.9M  12.4K
    c0t22d0      -      -    280      3  19.5M  14.2K
    c0t3d0       -      -    279      3  19.5M  13.9K
    c0t20d0      -      -    280      3  19.5M  14.2K
    c0t21d0      -      -    280      3  19.5M  13.9K
-----------  -----  -----  -----  -----  -----  -----

-----------------------------------------------------------------------------------------------
The result below is from when the zfs send got stuck at around 397G. The disk I/O looks quite normal, so where is the data going? Also note that the iostat command itself responds very slowly at this point (a way to check where the stream stalls is sketched after the output).

                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
sh001a        397G  15.9T      0  1.08K    490   136M
  raidz2      397G  15.9T      0  1.08K    490   136M
    c0t10d0      -      -      0    185  1.68K  19.4M
    c0t11d0      -      -      0    185  1.71K  19.4M
    c0t23d0      -      -      0    185  1.99K  19.4M
    c0t24d0      -      -      0    185  1.79K  19.4M
    c0t25d0      -      -      0    185  2.10K  19.4M
    c0t26d0      -      -      0    185  2.07K  19.4M
    c0t2d0       -      -      0    185  1.99K  19.4M
    c0t5d0       -      -      0    185  2.12K  19.4M
    c0t6d0       -      -      0    185  2.23K  19.4M
-----------  -----  -----  -----  -----  -----  -----
syspool      10.6G   137G      2      6   131K  48.0K
  c3d0s0     10.6G   137G      2      6   131K  48.0K
-----------  -----  -----  -----  -----  -----  -----
vol1         5.40T  1.85T   1009      1   125M  2.85K
  raidz1     5.40T  1.85T   1009      1   125M  2.85K
    c0t22d0      -      -    453      0  31.6M  2.64K
    c0t3d0       -      -    452      0  31.6M  2.58K
    c0t20d0      -      -    453      0  31.6M  2.64K
    c0t21d0      -      -    453      0  31.6M  2.56K
-----------  -----  -----  -----  -----  -----  -----
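
One way to narrow down where the stream stalls, assuming a local send/receive pipeline like the hypothetical one above, is to put a throughput meter such as pv (or mbuffer) between send and receive; if the meter keeps reporting a steady rate while the destination pool shows little write activity, the data is sitting in the pipe or in memory rather than reaching disk. This is only a diagnostic sketch, not what was actually run here:

  # hypothetical: measure the stream rate between send and recv
  zfs send vol1/data@migrate | pv -trab | zfs recv -F sh001a/data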