Hello Mattias,
Saturday, November 15, 2008, 12:24:05 AM, you wrote:
MP> On Sat, Nov 15, 2008 at 00:46, Richard Elling <[EMAIL PROTECTED]> wrote:
>> Adam Leventhal wrote:
>>>
>>> On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
>>>
That is _not_ active-active, that is active-passive.
[mailto:[EMAIL PROTECTED] On Behalf Of Mattias Pantzare
Sent: Friday, November 14, 2008 11:48 PM
To: David Pacheco
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] continuous replication
> I think you're confusing our clustering feature with the remote
> replication feature.
On Sat, Nov 15, 2008 at 00:46, Richard Elling <[EMAIL PROTECTED]> wrote:
> Adam Leventhal wrote:
>>
>> On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
>>
>>>
>>> That is _not_ active-active, that is active-passive.
>>>
>>> If you have an active-active system I can access the same data via
>>> both controllers at the same time.
Adam Leventhal wrote:
> On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
>
>> That is _not_ active-active, that is active-passive.
>>
>> If you have an active-active system I can access the same data via both
>> controllers at the same time. I can't if it works like you just
>> described.
On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
> That is _not_ active-active, that is active-passive.
>
> If you have an active-active system I can access the same data via both
> controllers at the same time. I can't if it works like you just
> described. You can't call it active-active…
> I think you're confusing our clustering feature with the remote
> replication feature. With active-active clustering, you have two closely
> linked head nodes serving files from different zpools using JBODs
> connected to both head nodes. When one fails, the other imports the
> failed node's pool.
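
The takeover step described above might look roughly like this on the
surviving head node; a minimal sketch, assuming a hypothetical pool name
"peerpool" and JBOD disks visible to both heads (not from the thread):

  #!/bin/sh
  # Surviving head takes over the failed node's pool.
  # -f forces the import, since the pool was last active on the peer.
  zpool import -f peerpool
  # Re-share the imported filesystems so clients can fail over.
  zfs share -a
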
Brent Jones wrote:
> *snip*
>>> a 'zfs send' on the sending host
>>> monitors the pool/filesystem for changes, and immediately sends them to
>>> the
>>> receiving host, which applies the change to the remote pool.
>> This is asynchronous, and isn't really different from running zfs send/recv
>> in a loop.
*snip*
>> a 'zfs send' on the sending host
>> monitors the pool/filesystem for changes, and immediately sends them to
>> the
>> receiving host, which applies the change to the remote pool.
>
> This is asynchronous, and isn't really different from running zfs send/recv
> in a loop. Whether the loop…
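
In its simplest form such a loop might look like the sketch below; the
pool/filesystem (tank/fs) and host (backuphost) are hypothetical names,
and real use would need error handling and snapshot retention:

  #!/bin/sh
  # Naive send/recv replication loop: snapshot, ship the increment,
  # advance the base snapshot.
  prev=repl-0
  zfs snapshot tank/fs@$prev
  # Initial full copy to seed the remote pool.
  zfs send tank/fs@$prev | ssh backuphost zfs recv -F tank/fs
  i=1
  while :; do
      cur=repl-$i
      zfs snapshot tank/fs@$cur
      # Incremental stream: only blocks changed since $prev.
      zfs send -i tank/fs@$prev tank/fs@$cur | \
          ssh backuphost zfs recv -F tank/fs
      zfs destroy tank/fs@$prev   # keep only the latest common snapshot
      prev=$cur
      i=$((i + 1))
      sleep 10                    # the replica lags by roughly this interval
  done

However tight the interval, this is still asynchronous: the receiver only
ever has the state as of the last completed snapshot.
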
River Tarnell wrote:
> Daryl Doami:
>> As an aside, replication has been implemented as part of the new Storage
>> 7000 family. Here's a link to a blog discussing using the 7000
>> Simulator running in two separate VMs and replicating w/ each other:
>
> that's interesting, although 'less than a…
Miles Nordin:
> rt> currently i crontab zfs send | zfs recv for this
> My point is that I don't think the 10min delay is the most significant
> difference between AVS/snapmirror and a 'zfs send' cronjob.
i didn't intend to suggest there was any significant…
> "rt" == River Tarnell <[EMAIL PROTECTED]> writes:
rt> currently i crontab zfs send | zfs recv for this
doesn't it also fall over if the stream falls behind? I mean, what if
it takes longer than ten minutes? What if the backup node goes away
and then comes back? What if the master node…
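
A cronjob can at least avoid tripping over itself when a run takes longer
than its interval by using a lock plus a recorded last-good snapshot;
a rough sketch, with hypothetical paths and names:

  #!/bin/sh
  # Cron-safe replication: skip this run if the previous one is still
  # going; on failure, keep the base snapshot so the next run retries.
  LOCK=/var/run/zfsrepl.lock
  mkdir "$LOCK" 2>/dev/null || exit 0    # mkdir is atomic: prior run active
  trap 'rmdir "$LOCK"' EXIT
  prev=$(cat /var/run/zfsrepl.last)      # last snapshot present on both sides
  cur=repl-$(date +%Y%m%d%H%M%S)
  zfs snapshot tank/fs@$cur
  if zfs send -i tank/fs@$prev tank/fs@$cur | \
         ssh backuphost zfs recv -F tank/fs
  then
      echo "$cur" > /var/run/zfsrepl.last   # advance only on success
      zfs destroy tank/fs@$prev
  else
      zfs destroy tank/fs@$cur   # backup node unreachable: retry from $prev
  fi

If the backup node goes away and comes back, the next run resends the
increment from the last common snapshot; if the master dies, anything
written after the last completed recv is lost, which is the asynchrony
at issue here.
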
Ian Collins wrote:
> On Thu 13/11/08 12:04, River Tarnell [EMAIL PROTECTED] sent:
>> are there any RFEs or plans to create a 'continuous' replication mode for
>> ZFS? i envisage it working something like this: a 'zfs send' on the sending
>> host monitors the pool/filesystem for changes, and immediately sends them to
>> the receiving host, which applies the change to the remote pool.
On Wed, Nov 12, 2008 at 5:58 PM, River Tarnell
<[EMAIL PROTECTED]> wrote:
>
> Daryl Doami:
>> As an aside, replication has been implemented as part of the new Storage
>> 7000 family. Here's a link to a blog discussing using the 7000
>> Simulator running in two separate VMs and replicating w/ each other:
Daryl Doami:
> As an aside, replication has been implemented as part of the new Storage
> 7000 family. Here's a link to a blog discussing using the 7000
> Simulator running in two separate VMs and replicating w/ each other:
that's interesting, although 'less than a…
Brent Jones:
> It sounds like you need either a true clustering file system or to draw back
> your plans to see changes read-only instantly on the secondary node.
well, the idea is to have two separate copies of the data, for backup / DR.
being able to…
I'm not sure of the specifics of how, but it might provide ideas of how
it can be accomplished.
Regards.
-------- Original Message --------
Subject: Re: [zfs-discuss] continuous replication
From: Brent Jones <[EMAIL PROTECTED]>
To: Ian Collins <[EMAIL PROTECTED]>, zfs-discuss@opensolaris.org
Date: Wed…
On Wed, Nov 12, 2008 at 3:40 PM, River Tarnell
<[EMAIL PROTECTED]> wrote:
>
> Ian Collins:
>> I doubt zfs receive would be able to keep pace with any non-trivial update
>> rate.
>
> one could consider this a bug in zfs receive :)
>
>> Mirroring iSCSI or a dedicated HA tool would be a better solution.
Ian Collins:
> I doubt zfs receive would be able to keep pace with any non-trivial update
> rate.
one could consider this a bug in zfs receive :)
> Mirroring iSCSI or a dedicated HA tool would be a better solution.
i'm not sure how to apply iSCSI here…
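
What the iSCSI suggestion usually means in practice: export a LUN from
the remote node and mirror onto it, so ZFS commits every write on both
sides before acknowledging it. A rough Solaris-flavoured sketch; the
target address and device names are hypothetical placeholders:

  # On the primary host, discover the LUN the remote node exports.
  iscsiadm add discovery-address 192.168.1.20:3260
  iscsiadm modify discovery --sendtargets enable
  # Mirror a local disk against the iSCSI device (placeholder names;
  # use format(1M) to find the real ones).
  zpool create tank mirror c1t0d0 c3t0d0

Unlike send/recv, the remote side then holds half of a live mirror
rather than an independently importable copy.
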
On Thu 13/11/08 12:04, River Tarnell [EMAIL PROTECTED] sent:
>
> are there any RFEs or plans to create a 'continuous' replication mode for
> ZFS? i envisage it working something like this: a 'zfs send' on the sending
> host monitors the pool/filesystem for changes, and immediately sends them to
> the receiving host, which applies the change to the remote pool.