> From: Richard Elling <rich...@nexenta.com>
> Date: Wed, 4 Aug 2010 18:40:49 -0700
> To: Terry Hull <t...@nrg-inc.com>
> Cc: "zfs-discuss@opensolaris.org" <zfs-discuss@opensolaris.org>
> Subject: Re: [zfs-discuss] Logical Units and ZFS send / receive
> 
> On Aug 4, 2010, at 1:27 PM, Terry Hull wrote:
>>> From: Richard Elling <rich...@nexenta.com>
>>> Date: Wed, 4 Aug 2010 11:05:21 -0700
>>> Subject: Re: [zfs-discuss] Logical Units and ZFS send / receive
>>> 
>>> On Aug 3, 2010, at 11:58 PM, Terry Hull wrote:
>>>> I have a logical unit created with sbdadm create-lu that I am replicating
>>>> with zfs send / receive between two build 134 hosts.  These LUs are iSCSI
>>>> targets used as VMFS filesystems and ESX RDMs mounted on a Windows 2003
>>>> machine.  The zfs pool names are the same on both machines.  The
>>>> replication seems to be going correctly.  However, when I try to use the
>>>> LUs on the server I am replicating the data to, I have issues.  Here is
>>>> the scenario:
>>>> 
>>>> The LUs are created as sparse.  Here is the process I'm going through
>>>> after the snapshots are replicated to a secondary machine:
>>> 
>>> How did you replicate? In b134, the COMSTAR metadata is placed in
>>> hidden parameters in the dataset. These are not transferred via zfs send,
>>> by default.  This metadata includes the LU.
>>> -- richard
>> 
>> Does the -p option on zfs send solve that problem?
> 
> I am unaware of a "zfs send -p" option.  Did you mean the -R option?
> 
> The LU metadata is stored in the stmf_sbd_lu property.  You should be able
> to get/set it.
> 

On the source machine I did a

zfs get -H stmf_sbd_lu pool-name

In my case that gave me

tank/iscsi/bg-man5-vmfs stmf_sbd_lu
554c4442534e5553070200000000000007020000000000000000000000000000000000000000
000001000100000000843000000000000000b70100000100ff86200500000000000000000000
0000000000000000000000000000000000000000000000c01200000000000000000000000000
0000000000000000000000000000180000000000000009ff0000f1030010600144f0fa354000
00004c4f9edb0003000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000007461
6e6b2f69736373692f62672d6d616e352d766d6673002f6465762f7a766f6c2f7264736b2f74
616e6b2f69736373692f62672d6d616e352d766d6673000000000000000000e7010000000000
00200000000200ff0800000000000000000 local

(But it was all one line.)
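
As an aside, zfs get also takes a -o option to limit the output to
particular columns, so the value could probably be captured without
hand-editing.  Untested sketch against the dataset shown above:

# print only the property value, with no header and no name/source columns
zfs get -H -o value stmf_sbd_lu tank/iscsi/bg-man5-vmfs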

I cut the numeric section out above and then did a

zfs set stmf_sbd_lu=(above cut section) pool_name

And that seemed to work.  However, when I did a

stmfadm import-lu /dev/zvol/rdsk/pool

I still got a meta file error.

However, when I do a zfs get -H stmf_sbd_lu pool_name on the secondary
system, it now matches the results on the first system.
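
For what it's worth, the whole property copy could probably be scripted
in one shot.  Untested sketch, assuming ssh access between the two boxes
and identical dataset names on both sides (secondary-host is a placeholder):

# capture only the value column on the source
VAL=$(zfs get -H -o value stmf_sbd_lu tank/iscsi/bg-man5-vmfs)
# set it on the secondary, then read it back to confirm the two match
ssh secondary-host "zfs set stmf_sbd_lu='$VAL' tank/iscsi/bg-man5-vmfs"
ssh secondary-host "zfs get -H -o value stmf_sbd_lu tank/iscsi/bg-man5-vmfs"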

BTW:  The zfs send -p option is described as "Send Properties"

It seems like transferring an LU with zfs send/receive should not be
this hard.
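
What I was hoping would just work is something along these lines.  This
is only a sketch (whether -p really carries the hidden stmf_sbd_lu
property on b134 is exactly the open question); @repl and secondary-host
are placeholders, and it assumes the dataset does not already exist on
the receiving side:

# snapshot the zvol backing the LU and send it, properties included
zfs snapshot tank/iscsi/bg-man5-vmfs@repl
zfs send -p tank/iscsi/bg-man5-vmfs@repl | \
    ssh secondary-host zfs receive tank/iscsi/bg-man5-vmfs
# then, on the secondary, register the LU from the received zvol
stmfadm import-lu /dev/zvol/rdsk/tank/iscsi/bg-man5-vmfs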


>> What else is not sent
>> by default?   In other words, am I better off sending the metadata with the
>> zfs send, or am I better off just creating the GUID once I get the data
>> transferred?  
> 
> I don't think this is a GUID issue.
>  -- richard
> 
> -- 
> Richard Elling
> rich...@nexenta.com   +1-760-896-4422
> Enterprise class storage for everyone
> www.nexenta.com
> 
--
Terry Hull


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
