Hi Roy,
You are right, it looks like a data-distribution issue. Initially there were
two vdevs with 24 disks (disks 0-23) for close to a year, after which we added
24 more disks and created additional vdevs. The initial vdevs have filled up,
so write speed has declined. Now, how do I find the files that are stored on a
given vdev or disk? That way I could remove them and copy them back to
redistribute the data. Is there any other way to solve this?
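
As far as I can tell, per-file vdev placement is only visible through zdb
block-pointer dumps, which is impractical for lots of files, so the plan I
had in mind was to rewrite the data with send/receive so the new copy gets
striped across all vdevs. A rough sketch (the dataset name test/data is
just a placeholder):

# snapshot the dataset and copy it into a new one; the received copy is
# written fresh, so its blocks are spread over all vdevs, including the
# new, emptier ones
zfs snapshot test/data@rebalance
zfs send test/data@rebalance | zfs receive test/data.new
# after verifying the new copy, swap the datasets
zfs destroy -r test/data
zfs rename test/data.new test/data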

Total capacity of pool: 98 TB
Used: 44 TB
Free: 54 TB

root@host:# zpool iostat -v
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
test       54.0T  62.7T     52  1.12K  2.16M  5.78M
  raidz1     11.2T  2.41T     13     30   176K   146K
    c2t0d0       -      -      5     18  42.1K  39.0K
    c2t1d0       -      -      5     18  42.2K  39.0K
    c2t2d0       -      -      5     18  42.5K  39.0K
    c2t3d0       -      -      5     18  42.9K  39.0K
    c2t4d0       -      -      5     18  42.6K  39.0K
  raidz1     13.3T   308G     13    100   213K   521K
    c2t5d0       -      -      5     94  50.8K   135K
    c2t6d0       -      -      5     94  51.0K   135K
    c2t7d0       -      -      5     94  50.8K   135K
    c2t8d0       -      -      5     94  51.1K   135K
    c2t9d0       -      -      5     94  51.1K   135K
  raidz1     13.4T  19.1T      9    455   743K  2.31M
    c2t12d0      -      -      3    137  69.6K   235K
    c2t13d0      -      -      3    129  69.4K   227K
    c2t14d0      -      -      3    139  69.6K   235K
    c2t15d0      -      -      3    131  69.6K   227K
    c2t16d0      -      -      3    141  69.6K   235K
    c2t17d0      -      -      3    132  69.5K   227K
    c2t18d0      -      -      3    142  69.6K   235K
    c2t19d0      -      -      3    133  69.6K   227K
    c2t20d0      -      -      3    143  69.6K   235K
    c2t21d0      -      -      3    133  69.5K   227K
    c2t22d0      -      -      3    143  69.6K   235K
    c2t23d0      -      -      3    133  69.5K   227K
  raidz1     2.44T  16.6T      5    103   327K   485K
    c2t24d0      -      -      2     48  50.8K  87.4K
    c2t25d0      -      -      2     49  50.7K  87.4K
    c2t26d0      -      -      2     49  50.8K  87.3K
    c2t27d0      -      -      2     49  50.8K  87.3K
    c2t28d0      -      -      2     49  50.8K  87.3K
    c2t29d0      -      -      2     49  50.8K  87.3K
    c2t30d0      -      -      2     49  50.8K  87.3K
  raidz1     8.18T  10.8T      5    295   374K  1.54M
    c2t31d0      -      -      2    131  58.2K   279K
    c2t32d0      -      -      2    131  58.1K   279K
    c2t33d0      -      -      2    131  58.2K   279K
    c2t34d0      -      -      2    132  58.2K   279K
    c2t35d0      -      -      2    132  58.1K   279K
    c2t36d0      -      -      2    133  58.3K   279K
    c2t37d0      -      -      2    133  58.2K   279K
  raidz1     5.42T  13.6T      5    163   383K   823K
    c2t38d0      -      -      2     61  59.4K   146K
    c2t39d0      -      -      2     61  59.3K   146K
    c2t40d0      -      -      2     61  59.4K   146K
    c2t41d0      -      -      2     61  59.4K   146K
    c2t42d0      -      -      2     61  59.3K   146K
    c2t43d0      -      -      2     62  59.2K   146K
    c2t44d0      -      -      2     62  59.3K   146K
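
Working out the per-vdev fill from the alloc/free columns above:

  raidz1 #1: 11.2T / (11.2T + 2.41T) = ~82% full
  raidz1 #2: 13.3T / (13.3T + 0.30T) = ~98% full
  raidz1 #3: 13.4T / (13.4T + 19.1T) = ~41% full
  raidz1 #4: 2.44T / (2.44T + 16.6T) = ~13% full
  raidz1 #5: 8.18T / (8.18T + 10.8T) = ~43% full
  raidz1 #6: 5.42T / (5.42T + 13.6T) = ~28% full

So the first two vdevs are close to full while the newer ones are mostly
empty.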


On Mon, Feb 11, 2013 at 10:23 PM, Roy Sigurd Karlsbakk
<r...@karlsbakk.net> wrote:

>
> root@host:~# fmadm faulty
> --------------- ------------------------------------  --------------  ---------
> TIME            EVENT-ID                              MSG-ID          SEVERITY
> --------------- ------------------------------------  --------------  ---------
> Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a  ZFS-8000-HC     Major
>
> Host        : host
> Platform    : PowerEdge-R810
> Product_sn  :
>
> Fault class : fault.fs.zfs.io_failure_wait
> Affects     : zfs://pool=test
>                   faulted but still in service
> Problem in  : zfs://pool=test
>                   faulted but still in service
>
> Description : The ZFS pool has experienced currently unrecoverable I/O
>               failures.  Refer to http://illumos.org/msg/ZFS-8000-HC for
>               more information.
>
> Response    : No automated response will be taken.
>
> Impact      : Read and write I/Os cannot be serviced.
>
> Action      : Make sure the affected devices are connected, then run
>               'zpool clear'.
> --
>
> The pool looks healthy to me, but it isn't very well balanced. Have you
> been adding new VDEVs along the way to grow it? Check whether some of the
> VDEVs are fuller than others. I don't have an OI/IllumOS system available
> ATM, but IIRC this can be done with 'zpool iostat -v'. Older versions of
> ZFS striped across all VDEVs regardless of fill, which slowed down write
> speeds rather horribly if some VDEVs were nearly full (>90%). This
> shouldn't be the case with OmniOS, but it *may* be the case with an old
> zpool version. I don't know.
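>
> To check (using the pool name 'test' from your output), something like:
>
>   # zpool get version test
>   # zpool upgrade -v        (lists the versions this system supports)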
>
> I'd check the fill rate of the VDEVs first, then perhaps try to upgrade
> the zpool, unless you need to be able to import it on a system with an
> older zpool version (S10 or similar).
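>
> (The upgrade itself would just be 'zpool upgrade test', but note that it
> is one-way; once upgraded, older software can no longer import the pool.)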
>
> Vennlige hilsener / Best regards
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 98013356
> r...@karlsbakk.net
> http://blogg.karlsbakk.net/
> GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
> --
> In all pedagogy it is essential that the curriculum be presented
> intelligibly. It is an elementary imperative for all pedagogues to avoid
> excessive use of idioms of xenotypic etymology. In most cases adequate
> and relevant synonyms exist in Norwegian.
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
