Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Richard Elling
[EMAIL PROTECTED] wrote: One of the benefits of ZFS is that not only is head synchronization not needed, but also block offsets do not have to be the same. For example, in a traditional mirror, block 1 on device 1 is paired with block 1 on device 2. In ZFS, this 1:1 mapping is not required. I

Re: [zfs-discuss] unable to boot zone

2007-01-30 Thread Mike Gerdts
On 1/30/07, Karen Chau <[EMAIL PROTECTED]> wrote: dmpk14a603# zoneadm -z snitch-zone04 halt zoneadm: zone 'snitch-zone04': unable to unmount '/snitch-zone04/root/tmp' zoneadm: zone 'snitch-zone04': unable to unmount file systems in zone zoneadm: zone 'snitch-zone04': unable to destroy zone I wo

[zfs-discuss] Re: restore pool from detached disk from mirror

2007-01-30 Thread Rainer Heilke
Jeremy is correct. There is actually an RFE open to allow a "zpool split" that would have allowed you to detach the second disk while keeping the vdev data (and thus allowing you to pull in the data in the detached disk using some sort of "import" type command). Rainer This message posted f
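The "zpool split" RFE mentioned above was later integrated into ZFS. A hedged sketch of how it ended up working (the pool name "tank" and device names are illustrative, not from the thread; at the time of this post the command did not yet exist):

```shell
# Split the second side of a two-way mirror into a new, importable pool.
# Unlike "zpool detach", "zpool split" preserves the vdev labels on the
# removed disk, so its data remains accessible.
zpool status tank          # tank is a mirror of c0t0d0s0 and c0t1d0s0
zpool split tank tank2     # remove c0t1d0s0 into a new pool "tank2"
zpool import tank2         # bring the split-off copy online
```

This is exactly the capability Robert's situation needed: a plain detach wipes the vdev identity, which is why his detached disk could not simply be imported.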

[zfs-discuss] Re: Cheap ZFS homeserver.

2007-01-30 Thread Wes Williams
> On 1/18/07, . <[EMAIL PROTECTED]> wrote: > > SYBA SD-SATA-4P PCI SATA Controller Card ( > http://www.newegg.com/product/Product.asp?item=N82E168 > 15124020 ) > > From my home ZFS server setup, I had tried two Syba SD-SATA2-2E2I PCI-X SATA II Controller Cards without any luck; both cards' BI

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Darren Dunham
> > I think he means that if a block fails to write on a VDEV, ZFS can write > > that data elsewhere and is not forced to use that location. As opposed > > to SVM as an example, where the mirror must try to write at a particular > > offset or fail. > > Understood, I am asking if the current code

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Wade . Stuart
> > > One of the benefits of ZFS is that not only is head synchronization not > > > needed, but also block offsets do not have to be the same. For example, > > > in a traditional mirror, block 1 on device 1 is paired with block 1 on > > > device 2. In ZFS, this 1:1 mapping is not required. I

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Darren Dunham
> > One of the benefits of ZFS is that not only is head synchronization not > > needed, but also block offsets do not have to be the same. For example, > > in a traditional mirror, block 1 on device 1 is paired with block 1 on > > device 2. In ZFS, this 1:1 mapping is not required. I believe thi

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Wade . Stuart
> One of the benefits of ZFS is that not only is head synchronization not > needed, but also block offsets do not have to be the same. For example, > in a traditional mirror, block 1 on device 1 is paired with block 1 on > device 2. In ZFS, this 1:1 mapping is not required. I believe this w

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Ian Collins
Richard Elling wrote: > > One of the benefits of ZFS is that not only is head synchronization not > needed, but also block offsets do not have to be the same. For example, > in a traditional mirror, block 1 on device 1 is paired with block 1 on > device 2. In ZFS, this 1:1 mapping is not require

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Toby Thain
On 30-Jan-07, at 5:48 PM, Richard Elling wrote: ... One of the benefits of ZFS is that not only is head synchronization not needed, but also block offsets do not have to be the same. For example, in a traditional mirror, block 1 on device 1 is paired with block 1 on device 2. In ZFS, thi

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Richard Elling
Nicolas Williams wrote: On Tue, Jan 30, 2007 at 06:41:25PM +0100, Roch - PAE wrote: I think I got the point. Mine was that if the data travels a single time toward the storage and is corrupted along the way then there will be no hope of recovering it since the array was given bad data. Having t

[zfs-discuss] yet another blog: ZFS space, performance, MTTDL

2007-01-30 Thread Richard Elling
I've blogged about the trade-offs for space, performance, and MTTDL (RAS) for ZFS and RAID in general. http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance Enjoy. -- richard ___ zfs-discuss mailing list zfs-discuss@open

Re: [zfs-discuss] Export ZFS over NFS ?

2007-01-30 Thread Frank Cusack
On January 30, 2007 9:59:45 AM -0800 Neal Pollack <[EMAIL PROTECTED]> wrote: I've got my first server deployment with ZFS. Consolidating a pair of other file servers that used to have a dozen or so NFS exports in /etc/dfs/dfstab similar to; /export/solaris/images /export/tools /export/ws . a

Re: [zfs-discuss] Export ZFS over NFS ?

2007-01-30 Thread Neal Pollack
Neal Pollack wrote: I've got my first server deployment with ZFS. Consolidating a pair of other file servers that used to have a dozen or so NFS exports in /etc/dfs/dfstab similar to; /export/solaris/images /export/tools /export/ws . and so on For the new server, I have one large zfs po

[zfs-discuss] unable to boot zone

2007-01-30 Thread Karen Chau
I'm unable to boot a zone after I did a sys-unconfig, how do I recover? GLOBAL ZONE: --- dmpk14a603# zoneadm list -cv ID NAME STATUS PATH 0 global running / 1 snitch-zone02 running /snitch-zone02 4 snitch-zone04 down
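A possible recovery path for a zone left in this state (a sketch, not from the thread; exact behavior varies by Solaris release). After sys-unconfig the zone halts and must be re-identified on its next boot:

```shell
# Boot the unconfigured zone, then attach to its console to answer the
# sysid questions interactively.
zoneadm -z snitch-zone04 boot
zlogin -C snitch-zone04

# Alternatively, place a sysidcfg file in the zone's root before booting
# so the identification completes non-interactively:
#   /snitch-zone04/root/etc/sysidcfg
```

If the halt itself fails with "unable to unmount" errors as shown earlier in the thread, processes in the global zone holding files open under the zone root (check with fuser) are a common cause.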

Re: [zfs-discuss] Need Help on device structure

2007-01-30 Thread Eric Schrock
On Tue, Jan 30, 2007 at 09:24:26AM -0800, Richard Elling wrote: > > For Solaris labeled disks, block 0 contains the vtoc. If you overwrite > block 0 with junk, then this is the error message you should see. > Also note that for EFI labelled disks, Solaris will create a 'bare' dev link that corre

[zfs-discuss] Export ZFS over NFS ?

2007-01-30 Thread Neal Pollack
I've got my first server deployment with ZFS. Consolidating a pair of other file servers that used to have a dozen or so NFS exports in /etc/dfs/dfstab similar to; /export/solaris/images /export/tools /export/ws . and so on For the new server, I have one large zfs pool; -bash-3.00# df -h
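With ZFS the per-export dfstab entries can usually be replaced by the sharenfs property, which child filesystems inherit. A sketch under the assumption that the large pool is named "pool1" (the actual pool name is cut off in the preview):

```shell
# Create one child filesystem per old export, then share the parent;
# the sharenfs property is inherited by all descendants.
zfs create -p pool1/export/solaris/images
zfs create pool1/export/tools
zfs create pool1/export/ws
zfs set sharenfs=on pool1/export

# Verify that every child picked up the share via inheritance.
zfs get -r sharenfs pool1/export
```

Share options (e.g. ro, root=) can be given in place of "on", using the same syntax as the share_nfs options string.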

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Nicolas Williams
On Tue, Jan 30, 2007 at 06:41:25PM +0100, Roch - PAE wrote: > I think I got the point. Mine was that if the data travels a > single time toward the storage and is corrupted along the > way then there will be no hope of recovering it since the > array was given bad data. Having the data travel twic

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Roch - PAE
Nicolas Williams writes: > On Tue, Jan 30, 2007 at 06:32:14PM +0100, Roch - PAE wrote: > > > The only benefit of using a HW RAID controller with ZFS is that it > > > reduces the I/O that the host needs to do, but the trade off is that ZFS > > > cannot do combinatorial parity reconstruction

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Nicolas Williams
On Tue, Jan 30, 2007 at 06:32:14PM +0100, Roch - PAE wrote: > > The only benefit of using a HW RAID controller with ZFS is that it > > reduces the I/O that the host needs to do, but the trade off is that ZFS > > cannot do combinatorial parity reconstruction so that it could only > > detect erro

Re: [zfs-discuss] Thumper Origins Q

2007-01-30 Thread Roch - PAE
Nicolas Williams writes: > On Thu, Jan 25, 2007 at 10:57:17AM +0800, Wee Yeh Tan wrote: > > On 1/25/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote: > > >... > > >after all, what was ZFS going to do with that expensive but useless > > >hardware RAID controller? ... > > > > I almost rolled ov

Re: [zfs-discuss] Need Help on device structure

2007-01-30 Thread Richard Elling
dudekula mastan wrote: I don't know whether it's the right place or not to discuss my doubts. I opened a device ( in raw mode) and I filled the entire space (from 1 block to last block) with some random data. While writing data, I am seeing the following warning messages in dmesg buffer. J

Re: [zfs-discuss] hot spares - in standby?

2007-01-30 Thread Albert Chin
On Mon, Jan 29, 2007 at 09:37:57PM -0500, David Magda wrote: > On Jan 29, 2007, at 20:27, Toby Thain wrote: > > >On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote: > > > >>I seem to remember the Massive Array of Independent Disk guys ran > >>into > >>a problem I think they called static fric

[zfs-discuss] Re: Actual (cache) memory use of ZFS?

2007-01-30 Thread Bjorn Munch
ZFS does release memory if I e.g. do a simple malloc(), but when using intimate shared memory (flag SHM_SHARE_MMU in the call to shmat()), this does not happen. BTW, the OS here was Solaris 10 U2; the 8GB machines I'm using now are running U3. Hmm, looks like this may have been fixed in U3, th

Re: [zfs-discuss] Actual (cache) memory use of ZFS?

2007-01-30 Thread Roch - PAE
Bjorn Munch writes: > Hello, > > I am doing some tests using ZFS for the data files of a database > system, and ran into memory problems which has been discussed in a > thread here a few weeks ago. > > When creating a new database, the data files are first initialized to > their configur

Re: [zfs-discuss] hot spares - in standby?

2007-01-30 Thread Luke Scharf
David Magda wrote: What about a rotating spare? When setting up a pool a lot of people would (say) balance things around buses and controllers to minimize single points of failure, and a rotating spare could disrupt this organization, but would it be useful at all? Functionally, that sound

Re: [zfs-discuss] restore pool from detached disk from mirror

2007-01-30 Thread Jeremy Teo
Hello, On 1/30/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: Hello zfs-discuss, I had a pool with only two disks in a mirror. I detached one disks and have erased later first disk. Now i would really like to quickly get data from the second disk available again. Other than detaching t

[zfs-discuss] Actual (cache) memory use of ZFS?

2007-01-30 Thread Bjorn Munch
Hello, I am doing some tests using ZFS for the data files of a database system, and ran into memory problems which has been discussed in a thread here a few weeks ago. When creating a new database, the data files are first initialized to their configured size (written in full), then the servers a
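One mitigation for the pressure described here (bulk file initialization filling the ARC and squeezing out database shared memory) is to cap the ARC size. On Solaris 10 of roughly this era that was done via /etc/system; treat this as a sketch, since the availability of the tunable varies by update release:

```
* /etc/system fragment: cap the ZFS ARC at 1 GB so database ISM
* segments are not crowded out. zfs_arc_max is in bytes.
* (Whether this tunable is honored depends on the Solaris 10 update.)
set zfs:zfs_arc_max = 0x40000000
```

A reboot is required for /etc/system changes to take effect.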

[zfs-discuss] restore pool from detached disk from mirror

2007-01-30 Thread Robert Milkowski
Hello zfs-discuss, I had a pool with only two disks in a mirror. I detached one disk and later erased the first disk. Now I would really like to quickly get the data from the second disk available again. Other than detaching the second disk, nothing else was done to it. Has anyone written

[zfs-discuss] Need Help on device structure

2007-01-30 Thread dudekula mastan
Hi All, I don't know whether this is the right place to discuss my doubts. I opened a device (in raw mode) and filled the entire space (from the first block to the last block) with some random data. While writing the data, I am seeing the following warning messages in the dmesg buffer.

Re: [zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-30 Thread Casper . Dik
>Ok, I'll bite. It's been a long day, so that may be why I can't see >why the radioisotopes in lead that was dug up 100 years ago would be >any more depleted than the lead that sat in the ground for the >intervening 100 years. Half-life is half-life, no? >Now if it were something about the