To repeat what some others have said, yes, Solaris seems to handle an iSCSI
device going offline in that it doesn't panic and continues working once
everything has timed out.
However, that doesn't necessarily mean it's ready for production use. ZFS will
hang for 3 mins (180 seconds) waiting for the iSCSI client to time out.
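A quick way to see that stall for yourself, sketched under the assumption that the pool (called 'tank' here, a made-up name) sits on the iSCSI LUN and the file being read is not already cached in the ARC:

  # pull the iSCSI target offline, then time an uncached read from the pool
  ptime dd if=/tank/testdata/bigfile of=/dev/null bs=128k count=8
  # the dd should block for roughly the 180-second initiator timeout
  # before ZFS either returns an I/O error or the session recovers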
On Mon, Apr 07, 2008 at 01:06:34AM -0700, Ross wrote:
>
> To repeat what some others have said, yes, Solaris seems to handle
> an iSCSI device going offline in that it doesn't panic and
> continues working once everything has timed out.
>
> However, that doesn't necessarily mean it's ready for production use.
Ross wrote:
> To repeat what some others have said, yes, Solaris seems to handle an iSCSI
> device going offline in that it doesn't panic and continues working once
> everything has timed out.
>
> However, that doesn't necessarily mean it's ready for production use. ZFS
> will hang for 3 mins (180 seconds)
On Mon, 7 Apr 2008, Ross wrote:
> However, that doesn't necessarily mean it's ready for production use.
> ZFS will hang for 3 mins (180 seconds) waiting for the iSCSI client
> to time out. Now I don't know about you, but HA to me doesn't mean
> "Highly Available, but with occasional 3 minute breaks".
> Crazy question here... but has anyone tried this with, say, a QLogic
> hardware iSCSI card? Seems like it would solve all your issues.
> Granted, they aren't free like the software stack, but if you're trying
> to set up an HA solution, the ~$800 price tag per card seems pretty darn
> reasonable.
On Mon, Apr 7, 2008 at 10:40 AM, Christine Tran <[EMAIL PROTECTED]>
wrote:
>
> Crazy question here... but has anyone tried this with, say, a QLogic
> hardware iSCSI card? Seems like it would solve all your issues. Granted,
> they aren't free like the software stack, but if you're trying to set up
Hi all,
We are running the latest Solaris 10 on an X4500 Thumper. We defined a test
iSCSI LUN; output below:
Target: AkhanTemp/VM
    iSCSI Name: iqn.1986-03.com.sun:02:72406bf8-2f5f-635a-f64c-cb664935f3d1
    Alias: AkhanTemp/VM
    Connections: 0
    ACL list:
    TPGT list:
    LUN information:
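For reference, a zvol-backed LUN like the one above is usually exported along these lines; a sketch assuming AkhanTemp is the pool, with the volume size made up and shareiscsi being the mechanism of that Solaris 10 era:

  zfs create -V 10g AkhanTemp/VM      # create the backing ZFS volume
  zfs set shareiscsi=on AkhanTemp/VM  # export it as an iSCSI target
  iscsitadm list target -v            # confirm the target and its LUN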
Mertol Ozyoney wrote:
Hi all,
There is a set of issues being looked at that prevent the VMware ESX
server from working with the Solaris iSCSI Target:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6597310
At this time there is no target date for when these issues will be
Ross Smith wrote:
> Which again is unacceptable for network storage. If hardware RAID
> controllers took over a minute to time out a drive, network admins would
> be in uproar. Why should software be held to a different standard?
You need to take a systems approach to analyzing these things.
For
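For comparison, the per-command timeout for ordinary sd-driven disks on Solaris defaults to 60 seconds (sd_io_time) and can be tuned in /etc/system; a sketch, with the value purely illustrative and a reboot needed for it to take effect:

  * /etc/system fragment: shorten the sd per-command timeout (default 0x3c = 60s)
  set sd:sd_io_time=0x1e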
Thanks, James.
The problem is nearly identical to mine.
When we had 2 LUNs, VMware tried to multipath over them. I think this is a
bug inside VMware, as it thinks two LUN 0s are the same. I think I can fool
it by setting up targets with different LUN numbers.
After I figured this out, I s
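If it helps anyone else, the workaround being described might look roughly like this with iscsitadm; the -u (LUN number) usage and the target labels/backing stores here are assumptions to be checked against iscsitadm(1M):

  # give the second target a non-zero LUN so ESX stops treating the two
  # targets as duplicate paths to the same LUN 0 (syntax to be verified)
  iscsitadm create target -u 0 -b /dev/zvol/rdsk/AkhanTemp/VM  esx-vol0
  iscsitadm create target -u 1 -b /dev/zvol/rdsk/AkhanTemp/VM2 esx-vol1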
Jeff Bonwick <[EMAIL PROTECTED]> wrote on 04/05/2008 01:33:05 AM:
> > Aye, or better yet -- give the scrub/resilver/snap reset issue fix very
> > high priority. As it stands, snapshots are impossible when you need to
> > resilver and scrub (even on supposedly Sun-supported Thumper configs).
>
>
I've been using ZFS on my home media server for about a year now. There's a lot
I like about Solaris, but the rest of the computers in my house are Macs. Now
that the Mac has experimental read/write support for ZFS, I'd like to migrate
my zpool to my Mac Pro. I primarily use the machine to serve
Jeff,
On Mon, Mar 31, 2008 at 9:01 AM, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> Peter,
>
> That's a great suggestion. And as fortune would have it, we have the
> code to do it already. Scrubbing in ZFS is driven from the logical
> layer, not the physical layer. When you scrub a pool, you're
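For anyone following along, a scrub is started and watched per pool, e.g. (pool name made up):

  zpool scrub tank       # traverse and verify every block the pool references
  zpool status -v tank   # scrub progress, plus any checksum errors found so far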
Some time ago I experienced the same issue.
Only 1 target could be connected from an ESX host. Others were shown as
alternative paths to that target.
If I remember correctly, I read on a forum that it has something to do
with the disks' serial numbers.
Normally every single (i)SCSI disk
On Mon, Apr 7, 2008 at 12:46 PM, David Loose <[EMAIL PROTECTED]> wrote:
> The problem is that I upgraded to Solaris nv84 a while ago and bumped my
> zpool to version 9 (I think) at that time. The Macintosh guys only support up
> to version 8. There doesn't seem to be too much activity on the ZFS
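A quick way to confirm the mismatch before trying the move (pool name made up):

  zpool get version tank   # on-disk format version of this pool
  zpool upgrade -v         # highest version this ZFS implementation understands
  # pool versions only move forward, so a v9 pool can't be opened by a v8-only
  # implementation; the data would have to be copied across (zfs send/receive
  # or a plain file-level copy) into a pool created by the older implementation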
On Apr 7, 2008, at 1:46 PM, David Loose wrote:
> my Solaris samba shares never really played well with iTunes.
>
>
Another approach might be to stick with Solaris on the server and
run netatalk instead of Samba (or, you know, your Macs can speak NFS ;>).
--
Keith H. Bierman [EMAIL PROTECT
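If anyone wants to try the NFS route, a minimal sketch (dataset, host, and mount point names are made up):

  # Solaris side: publish the dataset over NFS
  zfs set sharenfs=on tank/media
  # Mac side: mount it (Finder's Connect to Server with an nfs:// URL also works)
  sudo mkdir -p /Volumes/media
  sudo mount -t nfs solarisbox:/tank/media /Volumes/media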
Hello,
I'm writing to report what I think is an incorrect or conflicting
suggestion in the error message displayed on a faulted pool that does
not have redundancy (equiv to RAID0?). I ran across this while testing
and learning about ZFS on a clean installation of NexentaCore 1.0.
Here is how
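In case anyone wants to reproduce it, a non-redundant pool built on file vdevs can be damaged deliberately; a rough sketch (paths and sizes made up, and the exact resulting pool state may vary by build):

  mkfile 128m /var/tmp/v0 /var/tmp/v1
  zpool create testpool /var/tmp/v0 /var/tmp/v1   # plain stripe, no redundancy
  dd if=/dev/zero of=/var/tmp/v1 bs=1024k count=128 conv=notrunc  # clobber a vdev
  zpool scrub testpool
  zpool status -x testpool    # the "action:" wording here is what's in question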