With uname -a:

SunOS disk-01 5.11 snv_111b i86pc i386 i86pc Solaris

It is OpenSolaris 2009.06.


Other useful info:

zfs list sas/mail-cts

NAME           USED  AVAIL  REFER  MOUNTPOINT
sas/mail-cts   149G   250G   149G  /sas/mail-cts



And with df:

Filesystem           1K-blocks      Used Available Use% Mounted on
sas/mail-cts         418174037 156501827 261672210  38% /sas/mail-cts

Do you need any other info?


Valerio Piancastelli
piancaste...@iclos.com

----- Original Message -----
From: "Victor Latushkin" <victor.latush...@sun.com>
To: "Valerio Piancastelli" <piancaste...@iclos.com>
Cc: zfs-discuss@opensolaris.org
Sent: Friday, 17 September 2010 16:46:31
Subject: Re: [zfs-discuss] ZFS Dataset lost structure

What OpenSolaris build are you running?

victor

On 17.09.10 13:53, Valerio Piancastelli wrote:
> After a crash, some datasets in my zpool tree report this when I do ls -la:
> 
> brwxrwxrwx  2  777 root 0, 0 Oct 18  2009 mail-cts
> 
> The same happens even if I set
> 
> zfs set mountpoint=legacy dataset
> 
> and then mount the dataset at another location.
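
For reference, the legacy-mount attempt described above would look roughly like this (the mount point /mnt/rescue is an arbitrary example, and sas/mail-cts is the dataset from the thread):

```sh
# Take the dataset out of automatic ZFS mount management
zfs set mountpoint=legacy sas/mail-cts

# Mount it by hand at a scratch location (Solaris mount syntax)
mkdir -p /mnt/rescue
mount -F zfs sas/mail-cts /mnt/rescue

# Inspect the directory tree at the new location
ls -la /mnt/rescue
```

The same broken directory entry shows up regardless of where the dataset is mounted, which points at damage in the dataset itself rather than at the original mount point.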
> 
> Before the crash, the directory tree was only:
> 
> dataset
> - vdisk.raw
> 
> The file was the backing device of a Xen VM, but I cannot access the directory
> structure of this dataset.
> I can send a snapshot of this dataset to another system, but the same
> behavior occurs there.
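
A minimal sketch of the send/receive reproduction mentioned above (the snapshot name, target host, and target dataset are assumptions, not from the thread):

```sh
# Snapshot the damaged dataset and replicate it to another system
zfs snapshot sas/mail-cts@rescue
zfs send sas/mail-cts@rescue | ssh otherhost zfs receive tank/mail-cts-copy
```

Seeing the same broken directory entry on the receiving side is consistent with this being on-disk dataset damage: send/receive replicates the dataset's state as stored, damage included.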
> 
> If I run
> zdb -dddd dataset
> at the end of the output I can see the references to my file:
> 
>     Object  lvl   iblk   dblk  dsize  lsize   %full  type
>          7    5    16K   128K   149G   256G   58.26  ZFS plain file
>                                         264   bonus  ZFS znode
>         dnode flags: USED_BYTES USERUSED_ACCOUNTED 
>         dnode maxblkid: 2097152
>         path    /vdisk.raw
>         uid     777
>         gid     60001
>         atime   Sun Oct 18 00:49:05 2009
>         mtime   Thu Sep  9 16:22:14 2010
>         ctime   Thu Sep  9 16:22:14 2010
>         crtime  Sun Oct 18 00:49:05 2009
>         gen     444453
>         mode    100777
>         size    274877906945
>         parent  3
>         links   1
>         pflags  40800000104
>         xattr   0
>         rdev    0x0000000000000000
> 
> If I investigate further:
> 
> zdb -ddddd dataset 7
> 
> Dataset store/nfs/ICLOS/prod/mail-cts [ZPL], ID 4525, cr_txg 91826, 149G, 5 
> objects, rootbp DVA[0]=<0:6654f24000:200> DVA[1]=<1:1a1e3c3600:200> [L0 D
> MU objset] fletcher4 lzjb LE contiguous unique double size=800L/200P 
> birth=182119L/182119P fill=5 
> cksum=177e7dd4cd:81ae6d143ee:1782c972431a0:2f927ca7
> a1de2c
> 
>     Object  lvl   iblk   dblk  dsize  lsize   %full  type
>          7    5    16K   128K   149G   256G   58.26  ZFS plain file
>                                         264   bonus  ZFS znode
>         dnode flags: USED_BYTES USERUSED_ACCOUNTED 
>         dnode maxblkid: 2097152
>         path    /vdisk.raw
>         uid     777
>         gid     60001
>         atime   Sun Oct 18 00:49:05 2009
>         mtime   Thu Sep  9 16:22:14 2010
>         ctime   Thu Sep  9 16:22:14 2010
>         crtime  Sun Oct 18 00:49:05 2009
>         gen     444453
>         mode    100777
>         size    274877906945
>         parent  3
>         links   1
>         pflags  40800000104
>         xattr   0
>         rdev    0x0000000000000000
> Indirect blocks:
>                0 L4     1:6543e22800:400 4000L/400P F=1221767 B=177453/177453
>                0  L3    1:65022f8a00:2000 4000L/2000P F=1221766 
> B=177453/177453
>                0   L2   1:65325a0400:1c00 4000L/1c00P F=16229 B=177453/177453
>                0    L1  1:6530718400:1600 4000L/1600P F=128 B=177453/177453
>                0     L0 0:433c473a00:20000 20000L/20000P F=1 B=177453/177453
>            20000     L0 1:205c471600:20000 20000L/20000P F=1 B=91830/91830
>            40000     L0 0:3c418ac600:20000 20000L/20000P F=1 B=91830/91830
>            60000     L0 0:3c418cc600:20000 20000L/20000P F=1 B=91830/91830
>            80000     L0 0:3c418ec600:20000 20000L/20000P F=1 B=91830/91830
>            a0000     L0 0:3c4190c600:20000 20000L/20000P F=1 B=91830/91830
>            c0000     L0 0:3c4192c600:20000 20000L/20000P F=1 B=91830/91830
>            e0000     L0 0:3c4194c600:20000 20000L/20000P F=1 B=91830/91830
>           100000     L0 0:3c4198c600:20000 20000L/20000P F=1 B=91830/91830
>           120000     L0 0:3c4196c600:20000 20000L/20000P F=1 B=91830/91830
>           140000     L0 1:205c491600:20000 20000L/20000P F=1 B=91830/91830
>           160000     L0 1:205c4b1600:20000 20000L/20000P F=1 B=91830/91830
>           180000     L0 1:205c4d1600:20000 20000L/20000P F=1 B=91830/91830
>           1a0000     L0 1:205c4f1600:20000 20000L/20000P F=1 B=91830/91830
>           1c0000     L0 1:205c511600:20000 20000L/20000P F=1 B=91830/91830
>           1e0000     L0 1:205c531600:20000 20000L/20000P F=1 B=91830/91830
>           200000     L0 1:205c551600:20000 20000L/20000P F=1 B=91830/91830
>           220000     L0 1:205c571600:20000 20000L/20000P F=1 B=91830/91830
>           240000     L0 0:3c419ac600:20000 20000L/20000P F=1 B=91830/91830
>           260000     L0 0:3c419cc600:20000 20000L/20000P F=1 B=91830/91830
>           280000     L0 0:3c419ec600:20000 20000L/20000P F=1 B=91830/91830
>           2a0000     L0 0:3c41a0c600:20000 20000L/20000P F=1 B=91830/91830
>  
>          .................. many more lines, up to 149G
> 
> It seems all data blocks are there.
> 
> Any ideas on how to recover from this situation?
> 
> 
> Valerio Piancastelli
> piancaste...@iclos.com
> 
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
Victor Latushkin                   phone: x11467 / +74959370467
TSC-Kernel EMEA                    mobile: +78957693012
Sun Services, Moscow               blog: http://blogs.sun.com/vlatushkin
Sun Microsystems
