Thanks for all your replies. A lot of info to take in. In this case it seems 
like emcp carried out a repair to a path to the LUN, followed by a panic. 

Jun  4 16:30:12 su621dwdb emcp: [ID 801593 kern.notice] Info: Assigned volume 
Symm 000290100491 vol 0ffe to
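
As a side note, PowerPath's powermt utility can show the state of each path to 
the LUN; something along these lines (a generic illustration, not our actual 
setup or output):

    powermt display dev=all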

I don't think a panic should be the answer in this type of scenario, as there 
are redundant paths to the LUN and hardware RAID is in place inside the SAN. 
From what I gather, work is being carried out to find a better solution. The 
question is: what is the proposed solution, and when will it be available?
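
For anyone else in the same boat, here is a rough sketch of the ZFS-level 
redundancy Victor and Richard mention below, i.e. letting ZFS mirror two SAN 
LUNs so a failed write to one side does not panic the box. Device names are 
placeholders, not our real LUNs:

    # create a new pool mirrored across two LUNs on separate paths/arrays
    zpool create sanpool mirror c2t0d0 c3t0d0

    # or attach a second LUN to an existing single-LUN pool to form a mirror
    zpool attach sanpool c2t0d0 c3t0d0

    # check redundancy and any self-healing (checksum) activity
    zpool status -v sanpool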

Thanks again.

Roshan


----- Original Message -----
From: Richard Elling <[EMAIL PROTECTED]>
Date: Tuesday, June 19, 2007 6:28 pm
Subject: Re: [zfs-discuss] Re: ZFS - SAN and Raid
To: Victor Engle <[EMAIL PROTECTED]>
Cc: Bruce McAlister <[EMAIL PROTECTED]>, zfs-discuss@opensolaris.org, Roshan 
Perera <[EMAIL PROTECTED]>

> Victor Engle wrote:
> > Roshan,
> > 
> > As far as I know, there is no problem at all with using SAN storage
> > with ZFS and it does look like you were having an underlying problem
> > with either powerpath or the array.
> 
> Correct.  A write failed.
> 
> > The best practices guide on opensolaris does recommend replicated
> > pools even if your backend storage is redundant. There are at least 2
> > good reasons for that. ZFS needs a replica for the self healing
> > feature to work. Also there is no fsck like tool for ZFS so it is a
> > good idea to make sure self healing can work.
> 
> Yes, currently ZFS on Solaris will panic if a non-redundant write
> fails. This is known and being worked on, but there really isn't a
> good solution if a write fails, unless you have some ZFS-level
> redundancy.
> 
> NB. fsck is not needed for ZFS because the on-disk format is always
> consistent.  This is orthogonal to hardware faults.
> 
> > I think first I would track down the cause of the messages just prior
> > to the zfs write error because even with replicated pools if several
> > devices error at once then the pool could be lost.
> 
> Yes, multiple failures can cause data loss.  No magic here.
>  -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
