Greetings,

My OpenSolaris 2009.06 installation on a ThinkPad X60 notebook is a little 
unstable. From the symptoms during installation it seems there might be 
something wrong with the ahci driver. The OpenSolaris LiveCD system shows no 
such problem.

Some weeks ago, while copying about 2 GB from a USB stick to the ZFS 
filesystem, the system froze and afterwards refused to boot.

Investigating the rpool from the LiveCD system now shows that about 11.5 GB 
are still in use on the rpool (total capacity: ~65 GB), yet the files actually 
accessible after importing the rpool occupy only about 750 MB. The '/' 
filesystem cannot be accessed (the missing ~10 GB is the OpenSolaris 
installation, including all the applications from the offline repository 
installation).
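
One way to account for the missing space would be to enumerate every dataset 
instead of relying on what happens to be mounted. A sketch I have not yet run 
against the broken pool:

# list all datasets in the pool with their space usage:
pfexec zfs list -r -o name,used,avail,refer,mountpoint rpool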


The following logs are from the unbootable system:
# import / export the rpool:
j...@opensolaris:~$ pfexec zpool import rpool
cannot import 'rpool': pool may be in use from other system, it was last 
accessed by opensolaris (hostid: 0x4a77a8) on Tue Jun  8 16:03:25 2010
use '-f' to import anyway
j...@opensolaris:~$ pfexec zpool import -F rpool
cannot import 'rpool': pool may be in use from other system, it was last 
accessed by opensolaris (hostid: 0x4a77a8) on Tue Jun  8 16:03:25 2010
use '-f' to import anyway
j...@opensolaris:~$ pfexec zpool import -f rpool
j...@opensolaris:~$ pfexec zpool export rpool
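
Note the case difference above: '-F' does not bypass the hostid check; only 
the lowercase '-f' force flag does. Running the import command without a pool 
name is a safe first look at what is visible:

# list pools available for import without importing anything:
pfexec zpool import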


# import to a different mountpoint and check the status:
j...@opensolaris:~$ pfexec zpool import -R /hugo rpool
j...@opensolaris:~$ pfexec zpool status -v rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c8t0d0s0  ONLINE       0     0     0

errors: No known data errors
j...@opensolaris:~$ zfs get all rpool
NAME   PROPERTY                        VALUE                           SOURCE
rpool  type                            filesystem                      -
rpool  creation                        So Mai 16 12:14 2010            -
rpool  used                            11,5G                           -
rpool  available                       53,0G                           -
rpool  referenced                      81K                             -
rpool  compressratio                   1.00x                           -
rpool  mounted                         yes                             -
rpool  quota                           none                            default
rpool  reservation                     none                            default
rpool  recordsize                      128K                            default
rpool  mountpoint                      /hugo/rpool                     default
rpool  sharenfs                        off                             default
rpool  checksum                        on                              default
rpool  compression                     off                             default
rpool  atime                           on                              default
rpool  devices                         on                              default
rpool  exec                            on                              default
rpool  setuid                          on                              default
rpool  readonly                        off                             default
rpool  zoned                           off                             default
rpool  snapdir                         hidden                          default
rpool  aclmode                         groupmask                       default
rpool  aclinherit                      restricted                      default
rpool  canmount                        on                              default
rpool  shareiscsi                      off                             default
rpool  xattr                           on                              default
rpool  copies                          1                               default
rpool  version                         3                               -
rpool  utf8only                        off                             -
rpool  normalization                   none                            -
rpool  casesensitivity                 sensitive                       -
rpool  vscan                           off                             default
rpool  nbmand                          off                             default
rpool  sharesmb                        off                             default
rpool  refquota                        none                            default
rpool  refreservation                  none                            default
rpool  primarycache                    all                             default
rpool  secondarycache                  all                             default
rpool  usedbysnapshots                 0                               -
rpool  usedbydataset                   81K                             -
rpool  usedbychildren                  11,5G                           -
rpool  usedbyrefreservation            0                               -
rpool  org.opensolaris.caiman:install  ready                           local
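
The interesting line here is 'usedbychildren 11,5G': the missing space sits in 
child datasets of rpool, not in rpool itself. On a stock 2009.06 install the 
root filesystem is a boot environment under rpool/ROOT with canmount=noauto, 
so a plain import leaves it unmounted. One thing I have not tried yet (a 
sketch; the dataset name rpool/ROOT/opensolaris is an assumption based on the 
default layout):

# show why the children do not mount on import:
pfexec zfs get -r canmount,mountpoint rpool
# mount the root BE by hand under the /hugo altroot (name assumed):
pfexec zfs mount rpool/ROOT/opensolaris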


# do a scrub:
j...@opensolaris:/hugo$ pfexec zpool scrub rpool
j...@opensolaris:/hugo$ !!
pfexec zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: scrub completed after 0h8m with 0 errors on Wed Jun  9 14:34:07 2010
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c8t0d0s0  ONLINE       0     0     0

errors: No known data errors
j...@opensolaris:/hugo$ 
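
Since the scrub completed with 0 errors, every block in the pool checksums 
correctly, which points at a mount/configuration problem rather than on-disk 
corruption. The pool's command history might also show what happened right 
before the freeze (not yet run, just an idea):

# replay the administrative history of the pool:
pfexec zpool history rpool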


# summarize the data 'visible' in the pool 'mountpoint':
j...@opensolaris:/hugo$ pfexec du -sh *
750M    export
65K     rpool
7,5K    space
j...@opensolaris:/hugo$ 
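
As far as I understand, du only walks filesystems that are actually mounted, 
so it cannot see the unmounted children at all; that alone could explain the 
750 MB vs. 11.5 GB gap. Comparing against per-dataset accounting would 
confirm it (a sketch):

# per-dataset usage and mount state, including unmounted children:
pfexec zfs list -r -o name,used,mounted rpool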


Is there any further way to restore access to the 'invisible' filesystems in 
the rpool, or is this situation unrecoverable? (Note: there is only one device 
in the zpool.)
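
One more thing I still plan to check: 'usedbysnapshots' is 0 for rpool itself, 
but the child datasets could still carry snapshots holding the data. A quick 
check would be:

# list every snapshot in the pool:
pfexec zfs list -r -t snapshot rpool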



Best Regards,

probear
-- 
This message posted from opensolaris.org