Henrik Johansson wrote:
> Hello,
> 
> I have a snv101 machine with a three-disk raidz pool that shows an
> allocation of about 1TB for no obvious reason: no snapshots, no files,
> nothing. I tried to run zdb on the pool to see if I could get any
> useful info, but it has been running for over two hours without any
> more output.
> 
> I know when the allocation occurred: I had issued a mkfile 1024G
> command in the background, but changed my mind and killed the process.
> After that, 912G of space was gone (I don't remember whether I
> actually removed the test file or what happened to it). If I copy a
> file to the /tank filesystem it uses even more space, but that space
> is reclaimed after I remove the file.
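> 
> For reference, the sequence was roughly the following (the test file
> name is from memory, so treat this as a sketch rather than an exact
> transcript):
> 
> # mkfile 1024G /tank/bigfile &        (bigfile is a guess at the name)
> # kill %1                             (killed before mkfile finished)
> # rm /tank/bigfile                    (possibly -- I am not sure I did this)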
> 
> I could just recreate the pool since it is empty, but I created it to
> test the system in the first place, so I would like to know what is
> going on. I have tried exporting and importing the pool, but the usage
> stays the same.
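> 
> The export/import cycle was nothing special, just the usual:
> 
> # zpool export tank
> # zpool import tank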
> 
> Any ideas?

You can try to increase zdb verbosity by adding some -v switches. Also 
try dumping all the objects with 'zdb -dddd tank' (add even more 'd's 
for extra verbosity).
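For example, something along these lines (the exact output will of 
course depend on the pool):

# zdb -vvv tank       (the same report as plain 'zdb tank', with more detail)
# zdb -dddd tank      (dump every object in every dataset)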

cheers,
victor
> 
> # zfs list tank
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> tank   912G  1.77T   912G  /tank
> 
> # ls -alb /tank
> total 7
> drwxr-xr-x   2 root     root           2 Nov 10 22:51 .
> drwxr-xr-x  24 root     root          26 Nov 10 08:23 ..
> 
> # du -hs /tank
>    2K   /tank
> 
> # zfs list -t snapshot
> no datasets available
> 
> # zpool status tank
>   pool: tank
> state: ONLINE
> scrub: none requested
> config:
> 
>         NAME        STATE     READ WRITE CKSUM
>         tank        ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             c1t1d0  ONLINE       0     0     0
>             c1t2d0  ONLINE       0     0     0
>             c1t4d0  ONLINE       0     0     0
> 
> errors: No known data errors
> 
> # zpool list tank
> NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> tank  4.06T  1.34T  2.73T    32%  ONLINE  -
> 
> The output from zdb so far; after two hours there has been no further
> output, but zdb is still consuming CPU time and the disks are being
> accessed:
> 
> # zdb tank
>     version=13
>     name='tank'
>     state=0
>     txg=1703
>     pool_guid=15862877351892785549
>     hostid=13281026
>     hostname='tank'
>     vdev_tree
>         type='root'
>         id=0
>         guid=15862877351892785549
>         children[0]
>                 type='raidz'
>                 id=0
>                 guid=11705146785403105303
>                 nparity=1
>                 metaslab_array=23
>                 metaslab_shift=35
>                 ashift=9
>                 asize=4500865941504
>                 is_log=0
>                 children[0]
>                         type='disk'
>                         id=0
>                         guid=16850214711683290971
>                         path='/dev/dsk/c1t1d0s0'
>                         devid='id1,[EMAIL PROTECTED]/a'
>                         phys_path='/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
>                         whole_disk=1
>                         DTL=42
>                 children[1]
>                         type='disk'
>                         id=1
>                         guid=8819352398702414737
>                         path='/dev/dsk/c1t2d0s0'
>                         devid='id1,[EMAIL PROTECTED]/a'
>                         phys_path='/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
>                         whole_disk=1
>                         DTL=41
>                 children[2]
>                         type='disk'
>                         id=2
>                         guid=17659718247984334809
>                         path='/dev/dsk/c1t4d0s0'
>                         devid='id1,[EMAIL PROTECTED]/a'
>                         phys_path='/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
>                         whole_disk=1
>                         DTL=40
> Uberblock
> 
>         magic = 0000000000bab10c
>         version = 13
>         txg = 2138
>         guid_sum = 15557077274537276521
>         timestamp = 1226353932 UTC = Mon Nov 10 22:52:12 2008
> 
> Dataset mos [META], ID 0, cr_txg 4, 1.45M, 73 objects
> Dataset tank [ZPL], ID 16, cr_txg 1, 912G, 5 objects
> 
> # zfs get all tank
> NAME  PROPERTY              VALUE                  SOURCE
> tank  type                  filesystem             -
> tank  creation              Fri Nov  7  1:57 2008  -
> tank  used                  912G                   -
> tank  available             1.77T                  -
> tank  referenced            912G                   -
> tank  compressratio         1.00x                  -
> tank  mounted               yes                    -
> tank  quota                 none                   default
> tank  reservation           none                   default
> tank  recordsize            128K                   default
> tank  mountpoint            /tank                  default
> tank  sharenfs              off                    default
> tank  checksum              on                     default
> tank  compression           off                    default
> tank  atime                 on                     default
> tank  devices               on                     default
> tank  exec                  on                     default
> tank  setuid                on                     default
> tank  readonly              off                    default
> tank  zoned                 off                    default
> tank  snapdir               hidden                 default
> tank  aclmode               groupmask              default
> tank  aclinherit            restricted             default
> tank  canmount              on                     default
> tank  shareiscsi            off                    default
> tank  xattr                 on                     default
> tank  copies                1                      default
> tank  version               3                      -
> tank  utf8only              off                    -
> tank  normalization         none                   -
> tank  casesensitivity       sensitive              -
> tank  vscan                 off                    default
> tank  nbmand                off                    default
> tank  sharesmb              off                    default
> tank  refquota              none                   default
> tank  refreservation        none                   default
> tank  primarycache          all                    default
> tank  secondarycache        all                    default
> tank  usedbysnapshots       0                      -
> tank  usedbydataset         912G                   -
> tank  usedbychildren        1.45M                  -
> tank  usedbyrefreservation  0                      -
> 
> # zpool get all tank
> NAME  PROPERTY       VALUE       SOURCE
> tank  size           4.06T       -
> tank  used           1.34T       -
> tank  available      2.73T       -
> tank  capacity       32%         -
> tank  altroot        -           default
> tank  health         ONLINE      -
> tank  guid           15862877351892785549  -
> tank  version        13          default
> tank  bootfs         -           default
> tank  delegation     on          default
> tank  autoreplace    off         default
> tank  cachefile      -           default
> tank  failmode       wait        default
> tank  listsnapshots  off         default
> 
> Regards
> Henrik Johansson
> http://sparcv9.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
