Re: [zfs-discuss] Zpools on USB

2009-03-14 Thread Florian Ermisch

Richard Elling wrote:
[...]

ZFS maintains a cache of what pools were imported so that at boot time,
it will automatically try to re-import the pool.  The file is 
/etc/zfs/zpool.cache

and you can view its contents by using "zdb -C"

If the current state of affairs does not match the cache, then you can
export the pool, which will clear its entry in the cache.  Then retry the
import.
-- richard
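
For reference, the sequence Richard describes would look roughly like
this ('backup' is just an example pool name):

   # zdb -C                  (show the cached pool configurations)
   # zpool export backup     (drop the stale entry from zpool.cache)
   # zpool import            (scan attached devices for importable pools)
   # zpool import backup     (re-import the pool under its original name)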


I had this problem myself with a mirrored zpool in an ICY BOX IB-3218 (2 
HDDs which appear as different LUNs) set up for backup purposes.
For zpools which are intended to be disconnected (or powered off) regularly, 
an 'autoexport' flag would be nice: if it is set, the system exports the pool 
at shutdown. This would prevent problems like Stefan's on a reboot, and also 
when a zpool from a shut-down system is connected to another system 
(like "Hm, the old slow laptop is powered off, but hey, everything I need is 
also on this shiny 1.5TB USB-HDD zpool with all my other important 
stuff/backups.. *plug into workstation* OMG! My backup pool is faulty!!")
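
Until such a flag exists, one could get much the same effect from a legacy
rc stop script run at shutdown, something along these lines (the script
name and pool name are made up, of course):

   #!/bin/sh
   # /etc/rc0.d/K01exportbackup  (hypothetical stop script)
   # Export the backup pool cleanly before the system goes down,
   # so it can be imported on another machine without complaints.
   zpool export backup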


Regards, Florian Ermisch



What really worries me is that ZFS for some reason has started to treat
a drive which belonged to one pool as if it belonged to another
pool. Could this happen with other, non-USB drives in other configuration
scenarios such as mirrors or raidz?
I suppose anything can happen on Friday the 13th...
Cheers,

   Stefan Olsson



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] How recoverable is an 'unrecoverable error'?

2009-04-16 Thread Florian Ermisch

Uwe Dippel wrote:

Bob Friesenhahn wrote:


Since it was not reported that user data was impacted, it seems likely 
that there was a read failure (or bad checksum) for ZFS metadata which 
is redundantly stored.


(Maybe I am too much of a linguist to not stumble over the wording 
here.) If it is 'redundant', it is 'recoverable', am I right? Why, if 
this is the case, does scrub not recover it, and scrub even fails to 
correct the CKSUM error as long as it is flagged 'unrecoverable', but 
can do exactly that after the 'clear' command?
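
For reference, the commands under discussion would be used roughly like
this ('tank' is just an example pool name):

   # zpool status -v tank   (show error counters and any affected files)
   # zpool scrub tank       (re-read and verify all data and metadata)
   # zpool clear tank       (reset the error counters on the devices)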




Ubuntu Linux is unlikely to notice data problems unless the drive 
reports hard errors.  ZFS is much better at checking for errors.


No doubt. But ext3 also seems to need much less attention and far 
fewer commands, which leaves it as a viable alternative. I still hope 
that one day ZFS will be as simple to maintain as ext3, or rather, do 
all that maintenance on its own.  :)

Ext3 has no (optional) redundancy across more than one disk and no
volume management. You need extra layers for redundancy (Multiple
Devices, i.e. md-raid, or LVM mirroring via the Device Mapper) and for
volume management (LVM again). If you want such features on Linux,
ext3 sits on top of at least 2, probably 3 layers of storage management.
Should I even mention NFS, CIFS and iSCSI exports, or never having to
resize volumes at all?
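
To make the comparison concrete, a mirrored, NFS-exported volume on each
side would look roughly like this (device names and sizes are just examples):

   Linux (MD + LVM + ext3):
   # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
   # pvcreate /dev/md0
   # vgcreate vg0 /dev/md0
   # lvcreate -L 500G -n data vg0
   # mkfs.ext3 /dev/vg0/data
   # mount /dev/vg0/data /export/data
   (plus an /etc/exports entry, and a grow-the-LV/resize2fs dance later on)

   ZFS:
   # zpool create tank mirror c1t0d0 c1t1d0
   # zfs create tank/data
   # zfs set sharenfs=on tank/data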

You're comparing a single tool with a whole production line.
Sorry for the flaming, but yesterday I spent 4 additional hours at work
recovering a Xen server after a single error somewhere in its LVM
caused the virtual servers to freeze.



Uwe


Kind Regards, Florian



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


