Re: [zfs-discuss] Best way to incorporate disk size tolerance into raidz arrays?

2007-08-28 Thread MC
Thanks for the comprehensive replies! I'll need some baby speak on this one though:
> The recommended use of whole disks is for drives with volatile write caches
> where ZFS will enable the cache if it owns the whole disk. There may be an
> RFE lurking here, but it might be tricky to correctly
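To unpack the whole-disk point being quoted: ZFS only takes ownership of a drive's volatile write cache when it is given the entire device rather than a slice. A minimal sketch of the two cases, with hypothetical device names:

  # whole disks: ZFS writes an EFI label and may enable the drives' write caches
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

  # slices: ZFS leaves each drive's write-cache setting alone
  zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0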

Re: [zfs-discuss] Best way to incorporate disk size tolerance into raidz arrays?

2007-08-28 Thread Richard Elling
MC wrote:
> The situation: a three 500GB disk raidz array. One disk breaks and you
> replace it with a new one. But the new 500GB disk is slightly smaller
> than the smallest disk in the array.

This is quite a problem for RAID arrays, too. It is why vendors use custom labels for disks. Whe

Re: [zfs-discuss] Best way to incorporate disk size tolerance into raidz arrays?

2007-08-28 Thread Marion Hakanson
[EMAIL PROTECTED] said:
> The situation: a three 500GB disk raidz array. One disk breaks and you
> replace it with a new one. But the new 500GB disk is slightly smaller than
> the smallest disk in the array.
> . . .
> So I figure the only way to build smaller-than-max-disk-size functionality
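One common workaround for this size-tolerance problem is to build the raidz on slices deliberately sized a little under the disks' nominal capacity, so a marginally smaller replacement still fits. A hedged sketch, with hypothetical device names and sizes:

  # in format(1M), create slice 0 at ~490 GB on each nominally 500 GB disk,
  # leaving headroom for vendor-to-vendor size variation
  format c1t0d0          # partition > 0 > (size) > label

  # build the pool on the undersized slices instead of the whole disks
  zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0

Note the trade-off raised earlier in the thread: on slices, ZFS will not manage the drives' write caches.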

Re: [zfs-discuss] ZFS Bad Blocks Handling

2007-08-28 Thread Richard Elling
RL wrote:
> Hi,
>
> Does ZFS flag blocks as bad so it knows to avoid using them in the future?
>
> During testing I had huge numbers of unrecoverable checksum errors, which I
> resolved by disabling write caching on the disks.

Were the errors logged during writes, or during reads? Can you share
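For reference, the per-disk write cache that RL disabled can be inspected and toggled from the expert mode of format(1M); a sketch of the menu path, assuming a SCSI/SAS disk that exposes the cache controls:

  # format -e
  #   (select the disk)
  #   format> cache
  #   cache> write_cache
  #   write_cache> display     # or: enable / disable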

Re: [zfs-discuss] ZFS needs a viable backup mechanism

2007-08-28 Thread sean walmsley
We mostly rely on AMANDA, but for a simple, compressed, encrypted, tape-spanning alternative backup (intended for disaster recovery) we use:

  tar cf - | lzf (quick compression utility) | ssl (to encrypt) | mbuffer (which writes to tape and looks after tape changes)

Recovery is exactly the oppos
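A concrete rendering of that pipeline, hedged: the poster's "ssl" stage is presumably openssl(1); the flags, key file, and device paths below are hypothetical:

  tar cf - /export/home \
    | lzf \
    | openssl enc -aes-256-cbc -pass file:/root/backup.key \
    | mbuffer -s 256k -m 1G -o /dev/rmt/0n

Recovery runs the same stages in reverse order: mbuffer reading from tape, openssl enc -d, lzf -d, tar xf -.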

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Tom Buskey
If you have disks to experiment on & corrupt (and you will!) try this:
1. System A mounts the SAN disk and formats it with UFS
2. System A umounts the disk
3. System B mounts the disk
4. B runs "touch x" on the disk
5. System A mounts the disk
6. System A and B umount the disk
7. System B fsck
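A hedged shell rendering of those steps (device and mount point hypothetical); the point is that both hosts cache UFS metadata independently, so concurrent mounts leave the on-disk state inconsistent:

  # system A: make the filesystem, mount and umount once
  newfs /dev/rdsk/c2t0d0s0
  mount /dev/dsk/c2t0d0s0 /mnt; umount /mnt
  # system B: mount and create a file
  mount /dev/dsk/c2t0d0s0 /mnt; touch /mnt/x
  # system A: mount concurrently while B still has it mounted
  mount /dev/dsk/c2t0d0s0 /mnt
  # both hosts umount, then system B checks the damage
  umount /mnt                 # on A and on B
  fsck /dev/rdsk/c2t0d0s0     # expect inconsistencies to be reported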

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Paul Kraus
The following seems much more complicated, much less supported, and much more prone to failure than just setting up Sun Cluster on the nodes and using it just for HA storage and the Global File System. You do not have to put the Oracle RAC instances under Sun Cluster control. On 8/25/07, M

[zfs-discuss] Update ZFS community page with ZFS version page

2007-08-28 Thread MC
The ZFS version pages ( http://www.google.ca/search?hl=en&safe=off&rlz=1B3GGGL_enCA220CA220&q=+site:www.opensolaris.org+zfs+version ) are not mentioned on the main page, as far as I can see. The root /versions/ directory should be listed on the main ZFS page somewhere, and contain a list of al

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Frank Hofmann
On Tue, 28 Aug 2007, David Olsen wrote:
>> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
[ ... ]
>>> I don't see why multiple UFS mounts wouldn't work, if only one
>>> of them has write access. Can you elaborate?
>>
>> Even with a single writer you would need to be concerned with re

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Frank Hofmann
On Tue, 28 Aug 2007, Charles DeBardeleben wrote:
> Are you sure that UFS writes a-time on read-only filesystems? I do not think
> that it is supposed to. If it does, I think that this is a bug. I have
> mounted read-only media before, and not gotten any write errors.
>
> -Charles

I think what m

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Casper . Dik
>> It's worse than this. Consider the read-only clients. When you
>> access a filesystem object (file, directory, etc.), UFS will write
>> metadata to update atime. I believe that there is a noatime option to
>> mount, but I am unsure as to whether this is sufficient.
>
> Is this some particular

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Charles DeBardeleben
Are you sure that UFS writes a-time on read-only filesystems? I do not think that it is supposed to. If it does, I think that this is a bug. I have mounted read-only media before, and not gotten any write errors.

-Charles

David Olsen wrote:
>> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Darren Dunham
> It's worse than this. Consider the read-only clients. When you
> access a filesystem object (file, directory, etc.), UFS will write
> metadata to update atime. I believe that there is a noatime option to
> mount, but I am unsure as to whether this is sufficient.

Is this some particular build
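The noatime option under discussion does exist for UFS on Solaris; a hedged example of a read-only, no-atime mount, with hypothetical device and mount point:

  mount -F ufs -o ro,noatime /dev/dsk/c2t0d0s0 /mnt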

[zfs-discuss] Best way to incorporate disk size tolerance into raidz arrays?

2007-08-28 Thread MC
The situation: a raidz array of three 500GB disks. One disk breaks and you replace it with a new one. But the new 500GB disk is slightly smaller than the smallest disk in the array. I presume the disk would not be accepted into the array, because the zpool replace entry on the zpool man page say
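The failure mode being described, sketched with hypothetical device names (the man page requires the replacement to be at least as large as the device it replaces):

  # zpool replace tank c1t2d0 c1t3d0
  cannot replace c1t2d0 with c1t3d0: device is too small

The quoted error line is what current zpool builds print in this situation, though the exact wording may vary by release.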

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Paul Monday
It sounds like you are looking for a shared file system like Sun's QFS? Take a look here: http://opensolaris.org/os/project/samqfs/What_are_QFS_and_SAM/ Writes from multiple hosts go through the metadata server, which handles the locking and update problems. I believe there are other op

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread David Olsen
> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
> > Sorry, this is a bit off-topic, but anyway:
> >
> > Ronald Kuehn writes:
> >> No. You can neither access ZFS nor UFS in that way. Only one
> >> host can mount the file system at the same time (read/write or
> >> read-only doesn't matte

Re: [zfs-discuss] ZFS Bad Blocks Handling

2007-08-28 Thread Pawel Jakub Dawidek
On Mon, Aug 27, 2007 at 10:00:10PM -0700, RL wrote:
> Hi,
>
> Does ZFS flag blocks as bad so it knows to avoid using them in the future?

No, it doesn't. This would be a really nice feature to have, but currently when ZFS tries to write to a bad sector it simply retries a few times and gives up. With C

[zfs-discuss] Need info about DMU Transactions & space modification

2007-08-28 Thread Atul Vidwansa
ZFS Experts, I am looking for some answers:
1. How do the dmu_tx_hold_*() routines calculate the amount of space required for a modification? Since the actual object modifications are done after the calls to dmu_tx_hold_*(), how does one know the amount of space required?
2. Do dmu_tx_hold_*() routines act indepe
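For readers unfamiliar with the pattern in question, this is roughly how a DMU transaction is used in the OpenSolaris ZFS source: space is declared with dmu_tx_hold_*() before the transaction is assigned, and the modification itself happens afterwards, inside the assigned transaction. A condensed sketch (kernel context; declarations and error handling abbreviated):

	dmu_tx_t *tx = dmu_tx_create(os);
	/* declare intent: which object, offset, and length will be written */
	dmu_tx_hold_write(tx, object, offset, length);
	/* reserve space and assign the tx to a transaction group */
	error = dmu_tx_assign(tx, TXG_WAIT);
	if (error) {
		dmu_tx_abort(tx);
		return (error);
	}
	/* only now does the actual modification happen, under the tx */
	dmu_write(os, object, offset, length, buf, tx);
	dmu_tx_commit(tx);

The hold calls let the DMU estimate a worst case from the declared ranges (indirect blocks included) before any buffers are dirtied, which is roughly how space can be accounted for ahead of the modification.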

Re: [zfs-discuss] Mirrored zpool across network

2007-08-28 Thread Darren J Moffat
What hardware you have and what size the data chunks are will determine what impact IPsec has. WAN vs LAN isn't the issue. As for mitigating the impact of the crypto in IPsec, it depends on the data size. If the size of the packets is > 512 bytes then the crypto framework wi
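A hedged sketch of the kind of policy being weighed here, in Solaris ipsecconf(1M) syntax; the addresses and algorithm choices are hypothetical:

  # /etc/inet/ipsecinit.conf: encrypt traffic between the two mirror hosts
  {laddr 192.168.1.10 raddr 192.168.1.20} ipsec {encr_algs aes encr_auth_algs sha1 sa shared}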