Re: [zfs-discuss] zfs and iscsi: cannot open : I/O error

2007-03-05 Thread Rick McNeal
If you have questions about iSCSI, I would suggest sending them to  
[EMAIL PROTECTED]. I read that mailing list a little more  
often, so you'll get a quicker response.


On Feb 26, 2007, at 8:39 AM, cedric briner wrote:


>> devfsadm -i iscsi # to create the device on sf3
>> iscsiadm list target -Sv| egrep 'OS Device|Peer|Alias' # not empty
>>  Alias: vol-1
>>IP address (Peer): 10.194.67.111:3260
>>   OS Device Name:
>> /dev/rdsk/c1t014005A267C12A0045E2F524d0s2
This is where my confusion began.
I don't know what the device c1t04d0s2 is for. I mean, what  
does it represent?




Normally the "OS Device Name:" would be exactly the same name that  
you would see when you run format. I don't know why you're seeing two  
different names. What version of Solaris are you running on the  
initiator?


The device names contain the Globally Unique IDentifier (GUID). The  
main benefit is that if you have multiple Solaris machines which can  
attach to the same device the pathname will be consistent across the  
machines.


I've found that the ``OS Device Name'' (c1t04d0s2) is created  
after the invocation:

devfsadm -i iscsi # to create the device on sf3

but this is not a device that you can use.
You can find the usable device only with the command:
format
   Searching for disks...done


   AVAILABLE DISK SELECTIONS:
   0. c0t0d0 
  /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c0t2d0 
  /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t014005A267C12A0045E308D2d0 
  /scsi_vhci/[EMAIL PROTECTED]

and then if you create the zpool with:
zpool create tank c1t014005A267C12A0045E308D2d0
it works !!


BUT... BUT... and re-BUT:
Given this, and with all this virtualization, how can I link a  
device name on my iSCSI client to the device name on my  
iSCSI server?




Look at the "Alias" value which is reported by the initiator. You can  
use that to find the device on the storage array. This assumes that  
you don't create duplicate "Alias" strings of course.
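As a rough illustration of matching the two ends by Alias (the alias "vol-1" is taken from the example above; everything else here is hypothetical):

```shell
# On the initiator: list each discovered target's Alias and the local
# OS device name it maps to.
iscsiadm list target -Sv | egrep 'Alias|OS Device'
#        Alias: vol-1
#        OS Device Name: /dev/rdsk/c1t<GUID>d0s2

# On each candidate storage server: ask the target daemon which
# configured target carries that Alias, and inspect its backing store.
iscsitadm list target -v vol-1
```

With unique aliases per server, the alias reported on the initiator identifies both the machine and the backing device on it.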


Because imagine that you are in my situation, where I want to have  
(let's say) 4 iSCSI servers with at most 16 disks attached per  
server, and at least 2 iSCSI clients which consolidate this space  
with ZFS. Suddenly you can see with zpool that a disk is dead. To  
replace this disk I have to know on which one of the 4 machines it  
resides and which disk it is.



So, do some of you know a little bit about this?



If you post iSCSI related questions to storage-discuss you'll find  
many people who've been using both the initiator and target and are  
quite knowledgeable. Also, the Solaris iSCSI developers read the  
storage-discuss list more frequently than this one.



Ced.
--

Cedric BRINER
Geneva - Switzerland
_______
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Rick McNeal

"If ignorance is bliss, this lesson would appear to be a deliberate  
attempt on your part to deprive me of happiness, the pursuit of which  
is my unalienable right according to the Declaration of  
Independence.  I therefore assert my patriotic prerogative not to  
know this material.  I'll be out on the playground." -- Calvin





Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Rick McNeal
Instead of replying to each of the messages I'll just reply to the last, 
but answer the questions that each of the three have raised.


Dick Davies wrote:

On 01/11/06, Cyril Plisko <[EMAIL PROTECTED]> wrote:

On 11/1/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> On 01/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> > And we'll be able to use sparse zvols
> > for this too (can't think why we couldn't, but it'd be dead handy)?
>
> Thinking about this, we won't be able to (without some changes) -
> I think a target is zero-filled before going online
> (educated guess: it takes 5 minutes to 'online' a target, and it
> consumes virtually no space in the parent zvol if compression is on),
> so a sparse zvol would exhaust zpool space.



I should change the code and look to see if the backing store is a 
character device. If so, there's no need to initialize the backing store 
to verify that the space is available.



Looking at the code it doesn't seem like the backing store is being 
zeroed. In the case of a regular file a single sector (512 bytes) of 
uninitialized data from the stack (bad practice?) is written to the 
very end of the file.


I hang my head in shame. I've fixed the code.

In the case of a character device it isn't written at all. A zvol 
should fall into the char device category. See 
mgmt_create.c::setup_disk_backing()

Or did I miss something ?




The routine that you're looking at primarily deals with the 
condition where the administrator has provided a backing store. The 
code also provides the ability to specify a backing store that 
doesn't exist, but a size must also be given.


At the end of create_lun() there's a call to create_lun_common() (the 
name is not very original and a little misleading). create_lun_common() 
starts the thread which initializes the backing store. This is done for 
disks and tapes, but not for raw devices. I'm going to change 
create_lun_common() to ignore backing stores which point at character 
devices.



I'm not the one to ask :)
I'm just saying what I've seen - it was SXCR b49, and a ZFS
filesystem, not a zvol as I said (seems iscsi targets are file backed
by default). Still took a few minutes to online a new target, so it
was doing something, but I don't know what.



The daemon does use regular files by default. This was done to make the 
daemon as flexible as possible.


When using regular files the daemon must zero fill the backing store so 
that initiators will not get a write error because the underlying file 
system has filled up. The daemon is able to create logical units that 
are hole-y files, but doing so requires a special XML tag to be added. 
I've been thinking about adding this as an option to the CLI if enough 
folks would find it useful.


Hope this helps.


If it's a non-issue that'd be great,




--

Rick McNeal

A good friend will come and bail you out of jail...but, a true
friend will be sitting next to you saying, "Damn...that was fun!"


Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Rick McNeal



Cyril Plisko wrote:

On 11/1/06, Dick Davies <[EMAIL PROTECTED]> wrote:

On 01/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> And we'll be able to use sparse zvols
> for this too (can't think why we couldn't, but it'd be dead handy)?

Thinking about this, we won't be able to (without some changes) -
I think a target is zero-filled before going online
(educated guess: it takes 5 minutes to 'online' a target, and it
consumes virtually no space in the parent zvol if compression is on),
so a sparse zvol would exhaust zpool space.


Looking at the code it doesn't seem like the backing store is being 
zeroed. In the case of a regular file a single sector (512 bytes) of 
uninitialized data from the stack (bad practice?) is written to the 
very end of the file. And in the case of a character device it isn't 
written at all. A zvol should fall into the char device category. 
See mgmt_create.c::setup_disk_backing()


In another email I indicated that the routine create_lun_common() was 
where the initialization was done and that I didn't check for a 
character device. The code already does check for character devices and 
ignores them.



Or did I miss something ?


I too must be missing something. I can't imagine why it would take 5 
minutes to online a target. A ZVOL should automatically be brought 
online since now initialization is required.



--

Rick McNeal



Re: [storage-discuss] Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Rick McNeal



Cyril Plisko wrote:


> Can we do something similar to NFS case, where sharenfs can be
> "on", "off", or something else, in which case it is a list of options ?
> Would this technique be applicable to shareiscsi too ?

Absolutely. We would, however, like to be conservative about adding 
options, only doing so when it meets a specific need. As you noted, 
there's no real requirement to be able to set the LUN.


...to be able to set the LUN *for iSCSI*. That would be a more
precise formulation.

I wonder why you would like to be conservative. Can you please explain
your considerations? How would it be different from NFS? It seems that
in the NFS case one can put whatever options she likes in the
"sharenfs=" attribute.


From my point of view, I would like to be conservative to prevent the 
addition of a cool new feature that six months down the road we find 
is only used in one particular case. Once added, it's difficult to 
remove because of our backwards compatibility requirement.



Also, can you please elaborate more on the hidden attribute? Which
values would be stored there? Maybe the idea was to put all the options
into that attribute, thus making the "zfs get" output prettier (iSCSI
options tend to have a massive character count).
In that case, why not make something similar for NFS too?


The target daemon stores the IQN name, which is about 60 characters; 
the GUID, which is 32 characters; the emulation type; plus a few other 
miscellaneous bits of information. The zfs command goes through some 
effort to determine the longest property name and value. It then 
displays that information in uniform column widths. The iscsi property 
value blows that out of the water.


--

Rick McNeal



Re: [storage-discuss] Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Rick McNeal



Richard Elling - PAE wrote:

Adam Leventhal wrote:

On Wed, Nov 01, 2006 at 04:00:43PM -0500, Torrey McMahon wrote:
Let's say server A has the pool with NFS-shared, or iSCSI-shared, 
volumes. Server A exports the pool or goes down. Server B imports the 
pool.


Which clients would still be active on the filesystem(s)? The ones 
that were mounting it when it was on Server A?


Clients would need to explicitly change the server they're contacting
unless that new server also took over the IP address, hostname, etc.


Does this imply that using Sun Cluster, which transfers (logical) IP
addresses, would easily provide an HA-iSCSI service?  I'll admit that
I still don't understand the iSCSI naming and security schemes.


The simple answer is yes. For a complete solution the iSCSI target 
project needs to support iSNS, and the administrator should set up 
RADIUS to enable simplified authentication administration.



 -- richard
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


--
----
Rick McNeal



Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Rick McNeal



Adam Leventhal wrote:

On Wed, Nov 01, 2006 at 09:58:12AM +, Darren J Moffat wrote:

   iscsioptions

	This property, which is hidden by default, is used by the iSCSI
	target daemon to store persistent information such as the IQN. The
	contents are not intended for users or external consumers.

What does "hidden" mean here? Is there a way to view it? Just 
curious more than anything; I don't see a problem.


If one were to type 'zfs get all ', the iscsioptions property would
not be shown. If you do 'zfs get iscsioptions ' you can see it.


BTW where is the iscsitadm(1M) man page it doesn't seem to be in snv_50.


Rick probably has an answer.


The man page is scheduled to go into build 52. It's been available for 
some time, but because of vacation schedules and documentation gate 
schedules the man page hasn't made it into the release yet.



Adam



--

Rick McNeal



Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Rick McNeal



Dick Davies wrote:

On 01/11/06, Rick McNeal <[EMAIL PROTECTED]> wrote:


I too must be missing something. I can't imagine why it would take 5
minutes to online a target. A ZVOL should automatically be brought
online since now initialization is required.


s/now/no/ ?


Correct. That should have been 'no'.


Thanks for the explanation. The '5 minute online' issue I had was
with a file-based target (which happened to be on a ZFS filesystem).
From what you say, it should be a non-issue with a zvol-backed target.


Ah. I missed that for some reason. I thought you had set the backing 
store to be a ZVOL.

--

Rick McNeal



Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Rick McNeal



Cyril Plisko wrote:

On 11/1/06, Adam Leventhal <[EMAIL PROTECTED]> wrote:

> >What properties are you specifically interested in modifying?
>
> LUN for example. How would I configure LUN via zfs command ?

You can't. Forgive my ignorance about how iSCSI is deployed, but why
would you want/need to change the LUN?


Well, with iSCSI specifically it is of less importance, since one can 
easily create multiple units identified by means other than the LUN.
I am, however, trying to look ahead to FC SCSI target functionality 
mirroring that of iSCSI (AFAIK it is on Rick's roadmap [and I really 
do not mind helping]). In the FC world it is essentially the only way 
to have multiple units on a particular FC port.


The administration of FC devices for the target mode needs some serious 
thinking so that we don't end up with a real nightmare on our hands.


As you point out the FC world doesn't separate the port address from the 
target name. Therefore each FC target must support thousands of LUs. We 
also need to support LU masking. I have no plans to support LU mapping 
since that was created to support a certain OS which could only boot off 
of LUN 0.


We also need to support linking iSCSI targets with FC LUs. The 
emulation code doesn't care about the transport layer, so there's no 
reason why a logical unit can't be exposed via both iSCSI and FC.


Lots of things to worry about.


Can we do something similar to NFS case, where sharenfs can be
"on", "off", or something else, in which case it is a list of options ?
Would this technique be applicable to shareiscsi too ?



That's how the shareiscsi property works today.
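For instance (pool and volume names here are hypothetical), setting the property at the pool level lets ZVOLs created beneath it inherit the value, just as the proposal describes:

```shell
# Hypothetical pool/volume names; a sketch of shareiscsi inheritance.
zfs set shareiscsi=on tank
zfs create -V 10g tank/vol0      # new ZVOL inherits shareiscsi=on
zfs get shareiscsi tank/vol0     # should report 'on', inherited from tank
iscsitadm list target            # the auto-created target should appear
```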

--

Rick McNeal



Re: [zfs-discuss] Re: [storage-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Rick McNeal


Adam Leventhal wrote:

On Thu, Nov 02, 2006 at 12:10:06AM -0800, eric kustarz wrote:

Like the 'sharenfs' property, 'shareiscsi' indicates if a ZVOL should
be exported as an iSCSI target. The acceptable values for this
property are 'on', 'off', and 'direct'. In the future, we may support
other target types (for example, 'tape'). The default is 'off'. This
property may be set on filesystems, but has no direct effect; this is
to allow ZVOLs created under the ZFS hierarchy to inherit a default.
For example, an administrator may want ZVOLs to be shared by default,
and so set 'shareiscsi=on' for the pool.

hey adam, what's "direct" mean?


It's iSCSI target lingo for vanilla disk emulation.


If it's easier for folks we could change the accepted value to be 
"disk". "direct" is a term that comes from the T10 world. Other values 
are "sequential" for tapes, "printer", "scanner", etc.



Adam



--

Rick McNeal



Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Rick McNeal



Cyril Plisko wrote:

On 11/2/06, Rick McNeal <[EMAIL PROTECTED]> wrote:

>

The administration of FC devices for the target mode needs some serious
thinking so that we don't end up with a real nightmare on our hands.

As you point out the FC world doesn't separate the port address from the
target name. Therefore each FC target must support thousands of LUs. We
also need to support LU masking. I have no plans to support LU mapping
since that was created to support a certain OS which could only boot off
of LUN 0.


IMO there are more applications for LU mapping than simple boot support.
Some systems are unable to address more than a very modest number (7!)
of LUs. I do not feel that ignoring LU mapping would be wise. Moreover,
I believe that LU masking can be seen as a particular case of LU mapping.


I clearly need to get more input on the FC side of things. Regarding 
hosts that can only access 7 LUs, I would wonder if they aren't already 
at capacity. If that's true, would they even care about such a product?




We also need to support linking iSCSI targets with FC LUs. The emulation
code doesn't care about the transport layer, so there's no reason why a
logical unit can't be exposed via both iSCSI and FC.


Absolutely. That is quite a logical thing to do.

I think I'll start another thread on this particular subject - FC target.



I too feel an FC target thread would be helpful.


Lots of things to worry about.

> Can we do something similar to NFS case, where sharenfs can be
> "on", "off", or something else, in which case it is a list of options ?
> Would this technique be applicable to shareiscsi too ?
>

That's how the shareiscsi property works today.


So why is manipulating the LUN impossible via zfs?



A ZVOL is a single LU, so there's nothing to manipulate. Could you give 
me an example of what you think should/could be changed?


--

Rick McNeal



Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-02 Thread Rick McNeal



Cyril Plisko wrote:

On 11/2/06, Rick McNeal <[EMAIL PROTECTED]> wrote:





>> That's how the shareiscsi property works today.
>
> So why is manipulating the LUN impossible via zfs?
>

A ZVOL is a single LU, so there's nothing to manipulate. Could you give
me an example of what you think should/could be changed?


I was thinking about manipulating the Logical Unit _Number_ (LUN)
for that particular Logical Unit (LU). iscsitadm allows me to set a LUN
for the target (LU) at hand, and I thought that if we are going to
squeeze LU parameters into shareiscsi, then why is the LUN excluded?


This administrative model takes the approach of one LU per Target. Since 
SAM requires LUN 0 there's no point in allowing someone to change the value.


If you wish to have multiple LUs per target, the iscsitadm 
interface is available and you can still use ZVOLs as the backing store.
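A sketch of that approach (the pool, volume, and target names are hypothetical, and the exact iscsitadm flags should be checked against the man page):

```shell
# Create ZVOLs by hand and pass them to iscsitadm as backing stores,
# instead of using shareiscsi; this leaves room for multiple LUs.
zfs create -V 5g tank/lun0
zfs create -V 5g tank/lun1
iscsitadm create target -b /dev/zvol/rdsk/tank/lun0 mytarget
# A second LU on the same target, assuming -u selects the LUN:
iscsitadm create target -b /dev/zvol/rdsk/tank/lun1 -u 1 mytarget
```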




--

Rick McNeal



Re: [zfs-discuss] ZFS and ISCSI

2006-12-15 Thread Rick McNeal


On Dec 15, 2006, at 7:37 AM, Jesus Cea wrote:



I'm interested in ZFS redundancy when vdevs are "remote". The idea,
for example, is to use remote vdev mirroring as a cluster FS layer,
or for occasional backups.

Has anybody tried to mount an iSCSI target as a ZFS device? Are  
machine reboots / connectivity problems gracefully managed by ZFS?



I use the iSCSI Target as a ZFS device quite often these days. I've  
got several machines which only have a single disk and a test suite  
that I use for the iSCSI target which requires access to the complete  
device. So, I run the target on another machine and have the  
initiator locate that device.


For remote replication there could be an issue. It would very much  
depend on the link speed. The target is not aware of the link speed,  
but can handle large numbers of outstanding commands. It's very  
possible for the initiator to send commands that will time out before  
the data can be returned. The initiator therefore needs to determine  
how fast data is being returned and throttle things so that commands  
don't time out. The timeouts here are related to the SCSI I/O stack  
and have nothing to do with the network layer.



I hope Solaris (not Express) will be able to act as an iSCSI target soon :-)

- --
Jesus Cea Avion



Rick McNeal

"If ignorance is bliss, this lesson would appear to be a deliberate  
attempt on your part to deprive me of happiness, the pursuit of which  
is my unalienable right according to the Declaration of  
Independence.  I therefore assert my patriotic prerogative not to  
know this material.  I'll be out on the playground." -- Calvin



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI on a single interface?

2007-01-15 Thread Rick McNeal


On Jan 15, 2007, at 8:34 AM, Dick Davies wrote:


Hi, are there currently any plans to make an iSCSI target created by
setting shareiscsi=on on a zvol
bindable to a single interface (setting tpgt or acls)?

I can cobble something together with ipfilter,
but that doesn't give me enough granularity to say something like:

'host a can see target 1, host c can see targets 2-9', etc.

Also, am I right in thinking without this, all targets should be
visible on all interfaces?



We're working on some more interface stuff for setting up various  
properties like TPGTs and ACLs for the ZVOLs which are shared through  
ZFS.


Now that I've knocked off a couple of things that have been on my  
plate, I've got room to add some more. These definitely rank right up  
towards the top.




--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/



Rick McNeal

"If ignorance is bliss, this lesson would appear to be a deliberate  
attempt on your part to deprive me of happiness, the pursuit of which  
is my unalienable right according to the Declaration of  
Independence.  I therefore assert my patriotic prerogative not to  
know this material.  I'll be out on the playground." -- Calvin



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI on a single interface?

2007-01-19 Thread Rick McNeal


On Jan 18, 2007, at 6:17 AM, Dick Davies wrote:


On 15/01/07, Rick McNeal <[EMAIL PROTECTED]> wrote:


On Jan 15, 2007, at 8:34 AM, Dick Davies wrote:


> Hi, are there currently any plans to make an iSCSI target  
created by

> setting shareiscsi=on on a zvol
> bindable to a single interface (setting tpgt or acls)?



We're working on some more interface stuff for setting up various
properties like TPGT's and ACL for the ZVOLs which are shared through
ZFS.



Now that I've knocked off a couple of things that have been on my
plate I've got room to add some more. These definitely rank right up
towards the top.


Great news.

For the record, the reason I asked was we have an iscsi target host
with 2 NICs, and for some reason clients were attempting to connect
to the targets on the private interface instead of the one they were
doing discovery on (which I thought was a bit odd).


This is due to a bug in the initiator. A prior change caused the  
discovery list, as returned from the SendTargets request, to be  
sorted in reverse order. The Solaris target goes out of its way to  
return the address used to discover targets as the first address in  
the list of available IP addresses for any given target. So, if you  
had a public and a private network and the discovery was done on the  
public network, the public network IP address would be first.


Now the iSCSI draft is silent on this point, so the initiator isn't  
violating the draft, but it's definitely not the desired behavior.  
When the engineer made the original fix for another bug they clearly  
didn't see this side effect, and nobody else who reviewed the code  
fix did either. This is something which is being fixed now.




I tried creating a TPGT with iscsitadm, which seemed to work:

vera ~ # iscsitadm list tpgt -v
TPGT: 1
   IP Address: 131.251.5.8

but adding a ZFS iscsi target into it gives me:

 vera ~ # iscsitadm modify target -p 1 tank/iscsi/second4gb
 iscsitadm: Error Can't call daemon


which is a pity (I'm assuming it can't find the targets to modify).
I've had to go back to just using iscsitadm due to time pressures, but
will be watching any progress closely.



This was an oversight on my part and should work.



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/



Rick McNeal

"If ignorance is bliss, this lesson would appear to be a deliberate  
attempt on your part to deprive me of happiness, the pursuit of which  
is my unalienable right according to the Declaration of  
Independence.  I therefore assert my patriotic prerogative not to  
know this material.  I'll be out on the playground." -- Calvin



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss