Re: [zfs-discuss] ZFS send/receive filehandle issue

2008-09-08 Thread Adrian Ulrich
Hi Marcelo,

> I did some tests with send/receive a filesystem from one node to
> another, changing the IP from one node to the other, and got the FH
> issue (stale), from a GNU/Linux client.

How are you replicating the filesystems? zfs send | zfs recv? This method will preserve the inodes bu
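The send/receive replication Adrian asks about might look like the following sketch; the pool, filesystem, snapshot, and host names here are made up for illustration:

```shell
# Snapshot the source filesystem (all names are hypothetical).
zfs snapshot data/export@repl-1

# Full send to the remote node over ssh; zfs recv recreates the
# filesystem on the other side, preserving object (inode) numbers.
zfs send data/export@repl-1 | ssh node2 zfs recv -F data/export

# Later updates can be incremental: only blocks changed since the
# previous snapshot are shipped.
zfs snapshot data/export@repl-2
zfs send -i repl-1 data/export@repl-2 | ssh node2 zfs recv data/export
```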

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-08 Thread Ralf Ramge
Richard Elling wrote:
>> Yes, you're right. But sadly, in the mentioned scenario of having
>> replaced an entire drive, the entire disk is rewritten by ZFS.
>
> No, this is not true. ZFS only resilvers data.

Okay, I see we have a communication problem here. Probably my fault, I should have wri

Re: [zfs-discuss] ZFS Failing Drive procedure (mirrored pairs) - did I mess this up?

2008-09-08 Thread Richard Elling
Karl Pielorz wrote:
>
> --On 08 September 2008 07:30 -0700 Richard Elling
> <[EMAIL PROTECTED]> wrote:
>
>> This seems like a reasonable process to follow, I would have done
>> much the same.
>
>> [caveat: I've not examined the FreeBSD ZFS port, the following
>> presumes the FreeBSD port is simi

Re: [zfs-discuss] ZFS Failing Drive procedure (mirrored pairs) - did I mess this up?

2008-09-08 Thread Karl Pielorz
--On 08 September 2008 07:30 -0700 Richard Elling <[EMAIL PROTECTED]> wrote:

> This seems like a reasonable process to follow, I would have done
> much the same.

> [caveat: I've not examined the FreeBSD ZFS port, the following
> presumes the FreeBSD port is similar to the Solaris port]

> ZFS d

Re: [zfs-discuss] ZFS over multiple iSCSI targets

2008-09-08 Thread Tuomas Leikola
On Mon, Sep 8, 2008 at 8:35 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> ps> iSCSI with respect to write barriers?
>
> +1.
>
> Does anyone even know of a good way to actually test it? So far it
> seems the only way to know if your OS is breaking write barriers is to
> trade gossip and guess.
>

[zfs-discuss] ZFS send/receive filehandle issue

2008-09-08 Thread Marcelo Leal
Hello all,

Is there some way to work around the filehandle issue with a send/receive ZFS procedure? Back in the ZFS beginnings, I had a conversation with some of the devel guys and asked how ZFS would treat the NFS filehandle. IIRC, the answer was: "No problem, the NFS filehandle will not depend on t
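A common client-side workaround for stale NFS filehandles after a failover is to force a fresh mount so the client obtains new handles. This is only a mitigation, not a fix for the underlying handle mismatch, and the mount point and server name below are illustrative:

```shell
# On the GNU/Linux NFS client, after the service IP has moved to the
# new node (paths and server name are hypothetical):
umount -l /mnt/export                      # lazy unmount drops the stale handles
mount -t nfs server:/data/export /mnt/export   # remount gets fresh handles
```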

Re: [zfs-discuss] ZFS Failing Drive procedure (mirrored pairs) - did I mess this up?

2008-09-08 Thread Bob Friesenhahn
On Mon, 8 Sep 2008, Miles Nordin wrote:
>
> no, I think ZFS should be fixed.
>
> 1. the procedure you used is how hot spares are used, so anyone who
> says it's wrong for any reason is using hindsight bias.
>
> 2. Being able to pull data off a failing-but-not-fully-gone drive is
> something a g

Re: [zfs-discuss] ZFS over multiple iSCSI targets

2008-09-08 Thread Miles Nordin
> "ps" == Peter Schuller <[EMAIL PROTECTED]> writes:

ps> The software raid in Linux does not support [write barriers]
ps> with raid5/raid6,

yeah i read this warning also and think it's a good argument for not using it.

http://lwn.net/Articles/283161/

With RAID5 or RAID6 there is

Re: [zfs-discuss] ZFS Failing Drive procedure (mirrored pairs) - did I mess this up?

2008-09-08 Thread Miles Nordin
> "kp" == Karl Pielorz <[EMAIL PROTECTED]> writes:

kp> Thinking about it - perhaps I should have detached ad4 (the
kp> failing drive) before attaching another device?

no, I think ZFS should be fixed.

1. the procedure you used is how hot spares are used, so anyone who says it's wro
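The attach-then-detach procedure under discussion (grow the mirror to three ways, resilver, then drop the failing disk) can be sketched as below; the pool and device names are hypothetical:

```shell
# ad4 is the failing disk, ad6 its healthy mirror partner, ad8 the
# replacement (all names hypothetical).
zpool attach tank ad6 ad8    # make a three-way mirror
zpool status tank            # wait until the resilver completes
zpool detach tank ad4        # only then drop the failing disk
```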

Re: [zfs-discuss] A question about recordsize...

2008-09-08 Thread Robert Milkowski
Hello Marcelo,

Monday, September 8, 2008, 1:51:09 PM, you wrote:

ML> If i understand well, the recordsize is really important for big
ML> files. Because with small files, and small updates, we have a lot
ML> of chances to have the data well organized on disk. I think the
ML> problem is the big f
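For large files with small random updates (the case Marcelo describes), the usual practice is to match recordsize to the application's I/O size. A minimal sketch, with a hypothetical dataset name and an assumed 8 KB database page size:

```shell
# Match recordsize to the application's I/O size, e.g. an 8 KB
# database page (dataset name is hypothetical).
zfs set recordsize=8k data/db

# The property only affects files written after the change; verify with:
zfs get recordsize data/db
```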

Re: [zfs-discuss] ZFS Failing Drive procedure (mirrored pairs) - did I mess this up?

2008-09-08 Thread Richard Elling
Karl Pielorz wrote:
> Hi All,
>
> I run ZFS (a version 6 pool) under FreeBSD. Whilst I realise this changes a
> *whole heap* of things - I'm more interested in if I did 'anything wrong'
> when I had a recent drive failure...
>
> One of a mirrored pair of drives on the system started failing, badly

Re: [zfs-discuss] How to release/destroy ZFS volume dedicated to dump ?

2008-09-08 Thread jan damborsky
Hi Mark,

Mark J Musante wrote:
> On Mon, 8 Sep 2008, jan damborsky wrote:
>
>> Is there any way to release dump ZFS volume after it was activated by
>> dumpadm(1M) command ?
>
> Try 'dumpadm -d swap' to point the dump to the swap device.

That helped - since swap is on ZFS volume (which can't be

Re: [zfs-discuss] How to release/destroy ZFS volume dedicated to dump ?

2008-09-08 Thread Mark J Musante
On Mon, 8 Sep 2008, jan damborsky wrote:

> Is there any way to release dump ZFS volume after it was activated by
> dumpadm(1M) command ?

Try 'dumpadm -d swap' to point the dump to the swap device.

Regards,
markm
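The release sequence Mark suggests, followed by destroying the no-longer-referenced dump zvol, might look like this (the volume name is taken from Jan's original post; whether the swap device is usable as a dump target depends on the system's configuration):

```shell
# Point the dump device back at the swap device, releasing the
# dedicated zvol...
dumpadm -d swap

# ...after which the dedicated dump volume can be destroyed.
zfs destroy data/dump
```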

[zfs-discuss] How to release/destroy ZFS volume dedicated to dump ?

2008-09-08 Thread jan damborsky
Hi,

I have successfully created a dedicated ZFS volume for the dump device and activated it using the dumpadm(1M) command:

# zfs create -b 131072 -V 2048m data/dump
# dumpadm -n -d /dev/zvol/dsk/data/dump
  Dump content: kernel pages
   Dump device: /dev/zvol/dsk/data/dump (dedicated)
Savecore dire

Re: [zfs-discuss] A question about recordsize...

2008-09-08 Thread Marcelo Leal
> On Fri, 5 Sep 2008, Marcelo Leal wrote:
>> 4 - The last one... ;-) For the FSB allocation, how the zfs knows
>> the file size, for know if the file is smaller than the FSB?
>> Something related to the txg? When the write goes to the disk, the
>> zfs knows (some way) if that write is

[zfs-discuss] ZFS Failing Drive procedure (mirrored pairs) - did I mess this up?

2008-09-08 Thread Karl Pielorz
Hi All,

I run ZFS (a version 6 pool) under FreeBSD. Whilst I realise this changes a *whole heap* of things - I'm more interested in whether I did 'anything wrong' when I had a recent drive failure...

One of a mirrored pair of drives on the system started failing, badly (confirmed by 'hard' read & w
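An alternative to the attach-then-detach route discussed in this thread is a single-step replace, which resilvers from the surviving mirror side and detaches the old disk automatically once the copy finishes. Pool and device names below are hypothetical:

```shell
# Single-step alternative (hypothetical names): resilver ad8 from the
# healthy side of the mirror, then auto-detach the failing ad4.
zpool replace tank ad4 ad8
zpool status tank    # watch the resilver progress
```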