Marcelo,

Thanks for the details! This rules out a bug that I was suspecting:
http://bugs.opensolaris.org/view_bug.do?bug_id=6664765

This needs more analysis.
What does the "rm" command fail with?
We could run truss on the rm command, like: "truss -o /tmp/rm.truss rm <filename>"
You could then pass on the file /tmp/rm.truss.

This would show us which system call is failing and why, which would give us a good idea of what is going wrong.
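On Solaris, truss flags every failing system call with an `Err#<errno> <name>` annotation, so the interesting lines can be pulled out with a simple grep. A sketch below; the sample truss lines (and the EIO failure on unlink) are fabricated purely for illustration of the log format:

```shell
# Fabricated sample of what /tmp/rm.truss might contain after running
# "truss -o /tmp/rm.truss rm <filename>"; the EIO on unlink is hypothetical.
cat > /tmp/rm.truss <<'EOF'
stat64("badfile", 0x08047A00)                   = 0
unlink("badfile")                               Err#5 EIO
EOF

# Failing system calls are the lines truss marks with "Err#".
grep 'Err#' /tmp/rm.truss
```

Grepping for `Err#` jumps straight to the system call that made rm fail, and the errno name (EIO, EACCES, ...) usually tells us why.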

Thanks and regards,
Sanjeev.

Marcelo Leal wrote:
> Hello all,
>
> # zpool status
>   pool: mypool
>  state: ONLINE
>  scrub: scrub completed after 0h2m with 0 errors on Fri Dec 19 09:32:42 2008
> config:
>
>         NAME         STATE     READ WRITE CKSUM
>         storage      ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             c0t2d0   ONLINE       0     0     0
>             c0t3d0   ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             c0t4d0   ONLINE       0     0     0
>             c0t5d0   ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             c0t6d0   ONLINE       0     0     0
>             c0t7d0   ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             c0t8d0   ONLINE       0     0     0
>             c0t9d0   ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             c0t10d0  ONLINE       0     0     0
>             c0t11d0  ONLINE       0     0     0
>           mirror     ONLINE       0     0     0
>             c0t12d0  ONLINE       0     0     0
>             c0t13d0  ONLINE       0     0     0
>         logs         ONLINE       0     0     0
>           c0t1d0     ONLINE       0     0     0
>
> errors: No known data errors
>
> - "zfs list -r" shows eight filesystems and nine snapshots per filesystem.
> ...
> mypool/colorado                                         1.83G  4.00T  1.13G  
> /mypool/colorado
> mypool/colorado@centenario-2008-12-28-01:00:00          40.3M      -  1.46G  -
> mypool/colorado@centenario-2008-12-29-01:00:00          30.0M      -  1.54G  -
> mypool/colorado@campeao-2008-12-29-09:00:00             10.4M      -  1.24G  -
> mypool/colorado@campeao-2008-12-29-13:00:00             31.5M      -  1.29G  -
> mypool/colorado@campeao-2008-12-29-17:00:00             5.46M      -  1.10G  -
> mypool/colorado@campeao-2008-12-29-21:00:00             4.23M      -  1.13G  -
> mypool/colorado@centenario-2008-12-30-01:00:00              0      -  1.16G  -
> mypool/colorado@campeao-2008-12-30-01:00:00                 0      -  1.16G  -
> mypool/colorado@campeao-2008-12-30-05:00:00             6.24M      -  1.16G  -
> ...
>  
>  - How many entries does it have?
>  Now there is just one file, the problematic one... but before the whole
> problem there were four or five small files (the whole pool is pretty empty).
> - Which filesystem (of the zpool) does it belong to?
>  See above...
>
>  Thanks a lot!
>   

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
