Hi Marcelo,
> I did some tests with send/receive a filesystem from one node to another,
> changing the IP from one node to the other, and got the FH issue (stale),
> from a GNU/Linux client.
How are you replicating the filesystems? zfs send | zfs recv?
This method will preserve the inodes bu
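For reference, a minimal replication pipeline of that kind would look
roughly like this (pool, dataset, and host names here are placeholders):
# zfs snapshot tank/fs@migrate
# zfs send tank/fs@migrate | ssh newnode zfs recv -F tank/fs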
Richard Elling wrote:
>> Yes, you're right. But sadly, in the mentioned scenario of having
>> replaced an entire drive, the entire disk is rewritten by ZFS.
>
> No, this is not true. ZFS only resilvers data.
Okay, I see we have a communication problem here. Probably my fault, I
should have wri
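To make that concrete: a resilver triggered by a replace walks only the
allocated blocks, not the raw device. A sketch with assumed pool and
device names:
# zpool replace tank c1t2d0 c1t3d0
# zpool status tank
zpool status reports how much data was actually resilvered, which on a
half-empty pool is far less than the disk's capacity.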
Karl Pielorz wrote:
>
>
> --On 08 September 2008 07:30 -0700 Richard Elling
> <[EMAIL PROTECTED]> wrote:
>
>> This seems like a reasonable process to follow, I would have done
>> much the same.
>
>> [caveat: I've not examined the FreeBSD ZFS port, the following
>> presumes the FreeBSD port is similar to the Solaris port]
--On 08 September 2008 07:30 -0700 Richard Elling <[EMAIL PROTECTED]>
wrote:
> This seems like a reasonable process to follow, I would have done
> much the same.
> [caveat: I've not examined the FreeBSD ZFS port, the following
> presumes the FreeBSD port is similar to the Solaris port]
> ZFS d
On Mon, Sep 8, 2008 at 8:35 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> ps> iSCSI with respect to write barriers?
>
> +1.
>
> Does anyone even know of a good way to actually test it? So far it
> seems the only way to know if your OS is breaking write barriers is to
> trade gossip and guess.
>
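One crude approach is to write an acknowledged counter and compare it
after a hard power cut. A sketch only; the path, observer host, port,
and the GNU dd/netcat options are all assumptions:
# on the machine under test, report each synchronously-acknowledged
# value to a second host:
i=0
while true; do
  i=$((i + 1))
  printf '%s\n' "$i" | dd of=/tank/test/counter oflag=dsync 2>/dev/null
  printf '%s\n' "$i" | nc -q 0 observer 9999
done
Cut power mid-loop; after reboot, the on-disk counter must be at least
the last value the observer received, otherwise an acknowledged write
was lost somewhere in the barrier/flush path.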
Hello all,
Is there some way to work around the filehandle issue with a ZFS
send/receive procedure?
In the ZFS beginning, I had a conversation with some of the devel guys,
and asked how ZFS would treat the NFS filehandle. IIRC, the answer was: "No
problem, the NFS filehandle will not depend on t
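I don't know of a server-side fix; in practice the GNU/Linux client has
to remount so it picks up the new filehandles. A sketch (mount point and
hostname are assumptions):
# umount -f /mnt/tank || umount -l /mnt/tank
# mount -t nfs newnode:/tank /mnt/tank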
On Mon, 8 Sep 2008, Miles Nordin wrote:
>
> no, I think ZFS should be fixed.
>
> 1. the procedure you used is how hot spares are used, so anyone who
> says it's wrong for any reason is using hindsight bias.
>
> 2. Being able to pull data off a failing-but-not-fully-gone drive is
> something a g
> "ps" == Peter Schuller <[EMAIL PROTECTED]> writes:
ps> The software raid in Linux does not support [write barriers]
ps> with raid5/raid6,
Yeah, I read this warning also and think it's a good argument for not
using it.
http://lwn.net/Articles/283161/
With RAID5 or RAID6 there is
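On Linux, whether barriers are in effect is at least partly visible in
the kernel log, and ext3 accepts an explicit mount option (the option
name is filesystem-specific; mount point assumed):
# dmesg | grep -i barrier
# mount -o remount,barrier=1 /mnt/raid
ext3 logs a "disabling barriers"-style message when the lower layer
rejects barrier requests, which is one of the few direct signals
available.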
> "kp" == Karl Pielorz <[EMAIL PROTECTED]> writes:
kp> Thinking about it - perhaps I should have detached ad4 (the
kp> failing drive) before attaching another device?
no, I think ZFS should be fixed.
1. the procedure you used is how hot spares are used, so anyone who
says it's wrong for any reason is using hindsight bias.
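For reference, the sequence in question next to its single-step
equivalent (pool name assumed, device names from the thread):
# zpool attach tank ad4 ad6
# zpool detach tank ad4
or, as one operation that drops the old device automatically once the
resilver completes:
# zpool replace tank ad4 ad6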
Hello Marcelo,
Monday, September 8, 2008, 1:51:09 PM, you wrote:
ML> If I understand well, the recordsize is really important for big
ML> files. Because with small files, and small updates, we have a lot
ML> of chances to have the data well organized on disk. I think the
ML> problem is the big files
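Worth noting when experimenting: recordsize is a per-dataset property
and only affects blocks written after the change. A quick sketch
(dataset name assumed):
# zfs set recordsize=128K data/bigfiles
# zfs get recordsize data/bigfiles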
Karl Pielorz wrote:
> Hi All,
>
> I run ZFS (a version 6 pool) under FreeBSD. Whilst I realise this changes a
> *whole heap* of things - I'm more interested in if I did 'anything wrong'
> when I had a recent drive failure...
>
> One of a mirrored pair of drives on the system started failing, badly
Hi Mark,
Mark J Musante wrote:
> On Mon, 8 Sep 2008, jan damborsky wrote:
>
>> Is there any way to release dump ZFS volume after it was activated by
>> dumpadm(1M) command ?
>
> Try 'dumpadm -d swap' to point the dump to the swap device.
That helped - since swap is on ZFS volume (which can't be
On Mon, 8 Sep 2008, jan damborsky wrote:
> Is there any way to release dump ZFS volume after it was activated by
> dumpadm(1M) command ?
Try 'dumpadm -d swap' to point the dump to the swap device.
Regards,
markm
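Assuming the swap device is accepted as a dump target, the complete
sequence to free the dedicated dump zvol would be (dataset name from the
thread):
# dumpadm -d swap
# zfs destroy data/dump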
Hi,
I have successfully created dedicated ZFS volume for dump
device and activated it using dumpadm(1M) command:
# zfs create -b 131072 -V 2048m data/dump
# dumpadm -n -d /dev/zvol/dsk/data/dump
Dump content: kernel pages
Dump device: /dev/zvol/dsk/data/dump (dedicated)
Savecore directory:
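Running dumpadm with no arguments afterwards prints the active
configuration, which is a quick way to confirm the zvol is really in
use:
# dumpadm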
> On Fri, 5 Sep 2008, Marcelo Leal wrote:
> > 4 - The last one... ;-) For the FSB allocation, how does zfs know
> > the file size, to know if the file is smaller than the FSB?
> > Something related to the txg? When the write goes to the disk,
> > zfs knows (some way) if that write is
Hi All,
I run ZFS (a version 6 pool) under FreeBSD. Whilst I realise this changes a
*whole heap* of things - I'm more interested in if I did 'anything wrong'
when I had a recent drive failure...
One of a mirrored pair of drives on the system started failing, badly
(confirmed by 'hard' read & w