We have seen some unfortunate miscommunication here, and misinterpretation; 
this extends into differences of culture. One of the vocal people here is 
surely not 'anti-xyz'; rather, I sense an intense desire to further 
progress by pointing a finger at some potential wounds.
May I repeat my request to run a hardware diagnosis on the drives concerned 
(being aware of the ambiguities involved). If the hardware passes with flying 
colours, we need to look deeper into the underlying matter. Many people here 
administer professional systems, with SCSI, RAID and whatnot. If ZFS does 
them a great service, we are happy. On the other hand, though (and again, 
management decisions come into perspective), OpenSolaris tries to appeal to 
the mass market and enter the end-user scene. There, remarks that one simply 
has to RTFM the zfs and zpool man pages up and down are out of place. USB 
disk drives are common, ubiquitous even. Discouraging their use is out of 
the question; adding another layer to 'mount' likewise. Now we are in heavy 
seas: ZFS might lose all data irrecoverably? Not fine, but what is the 
alternative? UFS is sparsely supported elsewhere (and probably considered 
'legacy' by Sun), and extN is supported read-only. The only other file 
system left is vfat/pcfs. Alas, when I wrote about finding it failing on a 
larger drive, I was told (search the archives) that it was a 'hack' built 
into the kernel only. Now what? vfat is the crappiest of all, UFS is 
obsolete and not widely available, and ZFS is currently being discussed as 
losing all data irreversibly on USB drives.
I repeat that I have never lost a single drive in the last 10 years, aside 
from complete hardware failure, despite usually using cheapo crap outside of 
my production boxes. All my other drives (ext2, ext3, ffs) have always 
allowed me to salvage some stuff and recover the larger part of the data, 
despite some of my users yanking out drives at the most inconvenient 
moments. Back to where I started from, with some questions:
1. Can the relevant people confirm that drives might turn dead when leaving a 
pool at unfortunate moments, despite complete physical integrity? [I'd 
really appreciate an answer here, because this is what I am starting to 
implement: ZFS on USB drives.]
2. Do those drives that end up in an unrecoverable state pass their 
integrity/diagnosis tests (read/write)?
3. If, as has been mentioned, a pool is an entity like RAID sitting in 
between, and hurting the pool can likewise destroy data, can such 
destruction of a pool not also happen within the confines of a server, 
without any physical yanking of a drive, by a dying controller?
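To make the "hardware diagnosis" in questions 1 and 2 concrete, here is a 
minimal sketch of what I have in mind, assuming a system with smartmontools 
(smartctl) and badblocks installed; /dev/sdX is a hypothetical placeholder 
for the drive under test, not a real device name:

```shell
# Minimal drive-diagnosis sketch. Assumes smartctl and badblocks are
# installed; /dev/sdX is a hypothetical placeholder for the drive under test.
check_drive() {
    dev=$1
    if [ ! -b "$dev" ]; then
        # Guard so the sketch is safe to run without the device attached.
        echo "skip: $dev is not a block device"
        return 0
    fi
    # SMART health summary; '-d sat' is often needed behind USB bridges.
    smartctl -d sat -H "$dev"
    # Non-destructive read/write surface test of every block.
    badblocks -nsv "$dev"
}

check_drive /dev/sdX
```

If a drive that ZFS refuses to import passes both of these, the damage would 
seem to lie at the pool/label level rather than in the hardware, which is 
exactly the distinction questions 1 and 2 are after.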

Thanks,

Uwe
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
