Hi John,
On Thu, 2008-09-11 at 20:23 -0600, John Antonio wrote:
> It is running Sol 10 u3 and also u4. Sun support is claiming
> the issue is related to silent corruption.
Probably, yes.
> Since the ZFS structure was not cleanly exported because of the event
> (Node crash), the statement
Hi Jack,
On Thu, 2008-09-11 at 15:37 -0700, Jack Dumson wrote:
> Issues with ZFS and Sun Cluster
>
> If a cluster node crashes and an HAStoragePlus resource group containing
> a ZFS structure (i.e., a zpool) is transitioned to a surviving node, the
> zpool import can cause the surviving node to panic.
Jack Dumson wrote:
> Issues with ZFS and Sun Cluster
>
> If a cluster node crashes and an HAStoragePlus resource group containing a
> ZFS structure (i.e., a zpool) is transitioned to a surviving node, the zpool
> import can cause the surviving node to panic. The zpool was obviously not
> exported in a controlled fashion because of the hard crash.
Issues with ZFS and Sun Cluster
If a cluster node crashes and an HAStoragePlus resource group containing a ZFS
structure (i.e., a zpool) is transitioned to a surviving node, the zpool import can
cause the surviving node to panic. The zpool was obviously not exported in a
controlled fashion because of the hard crash.
Miles Nordin wrote:
>> "c" == Miles Nordin <[EMAIL PROTECTED]> writes:
>>
>
> c> Did you guys ever fix this, or get a bug number, or
> c> anything?
>
I think it is a bug. I haven't been able to reproduce it myself,
so I won't file a bug on it, but recommend that an
On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote:
>
> On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
>
>> On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
>>> The issue with any form of RAID >1, is that the instant a disk fails
>>> out of the RAID set, with the next write
On Thu, Sep 11, 2008 at 10:36:38AM -0700, Paul B. Henson wrote:
> On Thu, 11 Sep 2008, Nicolas Williams wrote:
> > I bet you think it'd be nice if we had a public equivalent of
> > _getgroupsbymember()...
>
> Indeed, that would be useful in numerous contexts. It would be even nicer
> if the approp
> "c" == Miles Nordin <[EMAIL PROTECTED]> writes:
c> Did you guys ever fix this, or get a bug number, or
c> anything?
I found two bugs about this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6736213
http://bugs.opensolaris.org/view_bug.do?bug_id=6739532
I don't think either o
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
> On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
>> The issue with any form of RAID >1, is that the instant a disk fails
>> out of the RAID set, with the next write I/O to the remaining members
>> of the RAID set, the failed disk (and its replica) are instantly out
>> of sync.
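A minimal sketch of the resync step being described, assuming a hypothetical
pool "tank" and device c1t2d0; once the failed disk returns it has to be
resilvered before it is back in sync with the rest of the set:
node1# zpool status tank         # pool shows DEGRADED, one member FAULTED
node1# zpool online tank c1t2d0  # bring the disk back; ZFS resilvers the writes it missed
node1# zpool status tank         # wait for the resilver to finish before the set is consistent again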
Haiou Fu (Kevin) wrote:
> Excuse me, but could you please copy and paste the part about "zfs send -l"?
> I couldn't find it in the link you sent me:
>
> http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?a=view
>
Not "ell" 'l', try capital-i 'I'
> What release is this "send -l " option available
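A rough usage sketch of the capital-I form, assuming hypothetical dataset and
snapshot names; -I sends all intermediate snapshots between the two named ones
in a single stream:
src# zfs send -I tank/home@snap1 tank/home@snap3 | ssh dest zfs receive -d backup
# the receiving side must already have @snap1; the stream then recreates @snap2 and @snap3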
Carson Gaspar wrote:
> Richard Elling wrote:
>
>> For campus or metro sized systems, many people just use HA clusters.
>> The complexity level is similar and you automatically avoid the NFS
>> file handle problem. There is a lot of expertise in this area as NFS
> is one of the most popular clustered services.
Did you guys ever fix this, or get a bug number, or anything? Should
I avoid that release? I was about to install b96 for ZFS fixes but
this 'zpool import -f' problem looks bad.
Corey
-8<-
pr1# zpool offline tank c5t0d0s0
pr1# zpool status
pool: rpool
state: ONLINE
scrub: none requested
> "mb" == Matt Beebe <[EMAIL PROTECTED]> writes:
mb> When using AVS's "Async replication with memory queue", am I
mb> guaranteed a consistent ZFS on the distant end? The assumed
mb> failure case is that the replication broke, and now I'm trying
mb> to promote the secondary replica with what might be stale data.
Excuse me, but could you please copy and paste the part about "zfs send -l"?
I couldn't find it in the link you sent me:
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?a=view
What release is this "send -l" option available in?
Richard Elling wrote:
>
> For campus or metro sized systems, many people just use HA clusters.
> The complexity level is similar and you automatically avoid the NFS
> file handle problem. There is a lot of expertise in this area as NFS
> is one of the most popular clustered services.
> http://www.o
On Thu, Sep 11, 2008 at 10:36:38AM -0700, Paul B. Henson wrote:
> On Thu, 11 Sep 2008, Nicolas Williams wrote:
>
> > I bet you think it'd be nice if we had a public equivalent of
> > _getgroupsbymember()...
>
> Indeed, that would be useful in numerous contexts. It would be even nicer
> if the app
On Thu, 11 Sep 2008, Nicolas Williams wrote:
> I bet you think it'd be nice if we had a public equivalent of
> _getgroupsbymember()...
Indeed, that would be useful in numerous contexts. It would be even nicer
if the appropriate standards body added it alongside the current
getgr* functions to
On Wed, Sep 10, 2008 at 06:35:49PM -0700, Paul B. Henson wrote:
> I'd appreciate any feedback, particularly about things that don't work
> right :).
I bet you think it'd be nice if we had a public equivalent of
_getgroupsbymember()...
Even better if we just had utility functions to do ACL evaluation.
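Until something public exists, the same answer can be assembled from public
interfaces; a rough sketch assuming a hypothetical user "jdoe" and standard
Solaris utilities:
$ groups jdoe                           # primary plus supplementary groups for jdoe
$ getent passwd jdoe | cut -d: -f4      # primary gid from the passwd entry
$ getent group | awk -F: '$4 ~ /(^|,)jdoe(,|$)/ {print $1}'   # scan the group database for memberships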
When using AVS's "Async replication with memory queue", am I guaranteed a
consistent ZFS on the distant end?
The assumed failure case is that the replication broke, and now I'm trying to
promote the secondary replica with what might be stale data. Recognizing in
advance that some of the data
Carson Gaspar wrote:
> Let me drag this thread kicking and screaming back to ZFS...
>
> Use case:
>
> - We need an NFS server that can be replicated to another building to
> handle both scheduled powerdowns and unplanned outages. For scheduled
> powerdowns we'd want to fail over a week in advance
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
> The issue with any form of RAID >1, is that the instant a disk fails
> out of the RAID set, with the next write I/O to the remaining members
> of the RAID set, the failed disk (and its replica) are instantly out
> of sync.
Does ra
Matt,
> Just to clarify a few items... consider a setup where we desire to
> use AVS to replicate the ZFS pool on a 4-drive server to like
> hardware. The 4 drives are set up as RAID-Z.
>
> If we lose a drive (say #2) in the primary server, RaidZ will take
> over, and our data will still be "
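For reference, the 4-drive RAID-Z layout described above would be created along
these lines (pool and device names are hypothetical):
primary# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
primary# zpool status tank   # one raidz vdev of four disks; survives the loss of any single disk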
Ralf,
> Jim, at first: I never said that AVS is a bad product. And I never
> will. I wonder why you act as if you were attacked personally.
> To be honest, if I were a customer with the original question, such
> a reaction wouldn't make me feel safer.
I am sorry that my response came across
On Wed, Sep 10, 2008 at 1:46 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Wed, 10 Sep 2008, Keith Bierman wrote:
>>> ...
>>> That is reasonable. It adds to product cost and size though.
>>> Super-capacitors are not super-small.
>>>
>> True, but for enterprise class devices they are sufficie