On 09/19/06 16:29, Torrey McMahon wrote:
Eric Schrock wrote:
On Mon, Sep 18, 2006 at 02:20:24PM -0400, Torrey McMahon wrote:
1 - ZFS is self consistent but if you take a LUN snapshot then any
transactions in flight might not be completed and the pool - which you
need to snap in its entirety - might not be consistent. [...]
still more below...
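The in-flight-transaction concern above can be sidestepped entirely by quiescing the pool before the array-level snapshot. A minimal sketch, assuming a pool named `mypool` (the name and the array step are placeholders; the snapshot itself is triggered with whatever CLI the storage array provides):

```shell
# Quiesce: exporting the pool commits all in-flight transactions and
# leaves a consistent on-disk state before the array copies the LUNs.
zpool export mypool

# ... trigger the LUN snapshot on the array here (array-specific CLI) ...

# Bring the pool back online once the snapshot has been taken.
zpool import mypool
```

The trade-off is availability: the pool is offline for the duration of the snapshot. The alternative is accepting a crash-consistent copy, since ZFS's on-disk state is always self-consistent but the copy will only reflect committed transactions.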
Torrey McMahon wrote:
Darren Dunham wrote:
In my experience, we would not normally try to mount two different
copies of the same data at the same time on a single host. To avoid
confusion, we would especially not want to do this if the data represents
two different points of time. [...]
Darren Dunham wrote:
In my experience, we would not normally try to mount two different
copies of the same data at the same time on a single host. To avoid
confusion, we would especially not want to do this if the data represents
two different points of time. I would encourage you to stick with more
traditional [...]
Eric Schrock wrote:
On Mon, Sep 18, 2006 at 02:20:24PM -0400, Torrey McMahon wrote:
1 - ZFS is self consistent but if you take a LUN snapshot then any
transactions in flight might not be completed and the pool - Which you
need to snap in its entirety - might not be consistent. The more LUNs
you have [...]
On Mon, Sep 18, 2006 at 11:55:27PM -0400, Jonathan Edwards wrote:
>
> 1) If the zpool was imported when the split was done, can the
> secondary pool be imported by another host if the /dev/dsk entries
> are different? I'm assuming that you could simply use the -f
> option .. would the guid [...]
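On the first question: ZFS discovers pool members by reading the labels written on the disks themselves, not by their /dev/dsk paths, so differing device names on the second host are not an obstacle by themselves. A hedged sketch, with the pool name `mypool` assumed:

```shell
# With no arguments, zpool import scans devices and lists importable
# pools; ZFS reads the on-disk labels, so the /dev/dsk names need not
# match those on the original host.
zpool import

# -f overrides the "pool was last in use by another system" safeguard,
# which is expected when importing a split-off copy on a second host.
zpool import -f mypool
```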
On Sep 18, 2006, at 23:16, Eric Schrock wrote:
Here's an example: I've three LUNs in a ZFS pool offered from my HW raid
array. I take a snapshot onto three other LUNs. A day later I turn the
host off. I go to the array and offer all six LUNs, the pool that was in
use as well as the snapshot that I took a day previously, and offer all
three LUNs to the host.
On Mon, Sep 18, 2006 at 06:03:47PM -0400, Torrey McMahon wrote:
>
> It's not the transport layer. It works fine as the LUN IDs are different
> and the devices will come up with different /dev/dsk entries. (And if
> not then you can fix that on the array in most cases.) The problem is
> that devi[...]
> In my experience, we would not normally try to mount two different
> copies of the same data at the same time on a single host. To avoid
> confusion, we would especially not want to do this if the data represents
> two different points of time. I would encourage you to stick with more
> traditional [...]
Joerg Haederli wrote:
I'm really not an expert on ZFS, but at least from my point of view,
to handle such cases ZFS has to cover at least the following points:
- GUID: a new/different GUID has to be assigned
- LUNs: ZFS has to be aware that device trees are different, if
these are part of some kind of metadata stored [...]
Torrey McMahon wrote:
A day later I turn the host off. I go to the array and offer all six
LUNs, the pool that was in use as well as the snapshot that I took a
day previously, and offer all three LUNs to the host.
Errr ... that should be:
A day later I turn the host off. I go to the array [...]
Eric Schrock wrote:
On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote:
It looks as if this has not been implemented yet, nor even tested.
What hasn't been implemented? As far as I can tell, this is a request
for the previously mentioned RFE (ability to change GUIDs on import)
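What does exist is importing a pool by its numeric ID and renaming it at import time; the RFE concerns rewriting the GUID itself (later implemented as `zpool reguid`). A sketch, with the numeric pool ID made up for illustration:

```shell
# zpool import with no arguments lists importable pools along with
# their numeric IDs; a pool can then be imported by ID under a new
# name to avoid a name clash with the original.
zpool import 6930275704857100512 mypool_copy

# On releases that have it, the pool GUID can be rewritten after
# import so the copy no longer collides with the original:
zpool reguid mypool_copy
```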
On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote:
> I'm really not an expert on ZFS, but at least from my point of view,
> to handle such cases ZFS has to cover at least the following points:
>
> - GUID: a new/different GUID has to be assigned
As I mentioned previously, ZFS handles this gr[...]
On Mon, Sep 18, 2006 at 03:29:49PM -0400, Jonathan Edwards wrote:
>
> err .. i believe the point is that you will have multiple disks
> claiming to be the same disk which can wreak havoc on a system (eg:
> I've got a 4 disk pool with a unique GUID and 8 disks claiming to be
> part of that same [...]
I'm really not an expert on ZFS, but at least from my point of view,
to handle such cases ZFS has to cover at least the following points:
- GUID: a new/different GUID has to be assigned
- LUNs: ZFS has to be aware that device trees are different, if
these are part of some kind of metadata stored [...]
On Sep 18, 2006, at 14:41, Eric Schrock wrote:
2 - If you import LUNs with the same label or ID as a currently mounted
pool then ZFS will ... no one seems to know. For example: I have a pool
on two LUNs X and Y called mypool. I take a snapshot of LUN X & Y,
ignoring issue #1 above for now [...]
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Eric Schrock
Sent: Monday, September 18, 2006 2:42 PM
To: Torrey McMahon
Cc: zfs-discuss@opensolaris.org; [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] ZFS and HDS ShadowImage
On Mon, Sep 18, [...]
On Mon, Sep 18, 2006 at 02:20:24PM -0400, Torrey McMahon wrote:
>
> 1 - ZFS is self consistent but if you take a LUN snapshot then any
> transactions in flight might not be completed and the pool - Which you
> need to snap in its entirety - might not be consistent. The more LUNs
> you have in t[...]
Hans-Joerg Haederli - Sun Switzerland Zurich - Sun Support Services wrote:
Hi colleagues
IHAC who wants to use ZFS with his HDS box. He asks now how he can do the
following:
- Create ZFS pool/fs on HDS LUNs
- Create Copy with ShadowImage inside HDS
- Disconnect ShadowImage
- Import ShadowImage with ZFS in addition to the existing ZFS pool/fs
I wonder how ZFS is handling [...]
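The requested workflow, sketched end to end; the pool name and device names are hypothetical, and the ShadowImage create/split steps happen on the array with HDS tooling, not on the host:

```shell
# 1. Create the ZFS pool on the HDS LUNs (device names assumed).
zpool create hdspool c2t0d0 c2t1d0

# 2/3. Create and split the ShadowImage copy on the array
#      (array-side operation, not shown here).

# 4. Import the copy on a host. Because the copy carries the same
#    pool name and GUID as the original, this is exactly where the
#    collision problem discussed in this thread arises; until GUIDs
#    can be changed on import, the copy is best imported on a host
#    that does not also see the original LUNs.
zpool import -f hdspool
```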