On Mon, Jun 16, 2008 at 5:33 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Have you got more details or at least bug IDs?
> Is it only (I doubt it) FC related?
I ran into something that looks like
6594621 dangling dbufs (dn=ff056a5ad0a8, dbuf=ff0520303300) during stress
with LDoms 1.0.
Jeff Bonwick wrote:
> Using ZFS to mirror two hardware RAID-5 LUNs is actually quite nice.
> Because the data is mirrored at the ZFS level, you get all the benefits
> of self-healing. Moreover, you can survive a great variety of hardware
> failures: three or more disks can die (one in the first array
Using ZFS to mirror two hardware RAID-5 LUNs is actually quite nice.
Because the data is mirrored at the ZFS level, you get all the benefits
of self-healing. Moreover, you can survive a great variety of hardware
failures: three or more disks can die (one in the first array, two or
more in the second
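For illustration, the mirrored layout described here might be created like this (c2t0d0 and c3t0d0 are hypothetical stand-ins for the two RAID-5 LUNs):

#zpool create tank mirror c2t0d0 c3t0d0

Each side of the mirror is one array's RAID-5 LUN, so ZFS can repair a block that fails its checksum on one side using the copy from the other.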
Hello Erik,
Monday, June 16, 2008, 9:45:13 AM, you wrote:
ET> One thing I should mention on this is that I've had _very_ bad
ET> experience with using single-LUN ZFS filesystems over FC.
ET> that is, using an external SAN box to create a single LUN, export that
ET> LUN to a FC-connected host,
On Mon, 16 Jun 2008, Vincent Fox wrote:
> Also the array has SAN connectivity and caching and
> dual-controllers that just don't exist in the JBOD world.
As a clarification, you can convince your StorageTek 2540 to appear as
JBOD on the SAN. Then you obtain the SAN connectivity and caching and
I'm not sure why people obsess over this issue so much. Disk is cheap.
We have a fair number of 3510 and 2540 arrays on our SAN. They make RAID-5 LUNs
available to various servers.
On the servers we take RAID-5 LUNs from different arrays and ZFS mirror them.
So if any array goes away we are still u
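The resulting pool would look roughly like this, with one RAID-5 LUN taken from each array (device names are hypothetical):

#zpool status tank
  pool: tank
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0

If one array disappears, its half of the mirror faults and the pool keeps running in a DEGRADED state on the surviving LUN.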
One thing I should mention on this is that I've had _very_ bad
experience with using single-LUN ZFS filesystems over FC.
that is, using an external SAN box to create a single LUN, export that
LUN to a FC-connected host, then creating a pool as follows:
zpool create tank
It works fine, up unt
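A single-LUN pool of the sort described is created along these lines (the device name is a made-up placeholder for an MPxIO-style SAN LUN):

#zpool create tank c5t600A0B80001234560000ABCD12345678d0

Built this way, the pool holds only one copy of each block at the ZFS level, so a block that fails its checksum can be detected but not healed.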
On Sun, 15 Jun 2008, Brian Hechinger wrote:
>> how long the scrub takes. My pool is set to be scrubbed every night
>> via a cron job:
>
> And like all other things of this nature, the more often you do it, the
> less invasive it will be as there is less to do. That being said, I still
> wouldn't
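For reference, the nightly cron-driven scrub mentioned above could be something like this crontab entry (pool name and hour are placeholders):

0 3 * * * /usr/sbin/zpool scrub tank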
On Sat, Jun 14, 2008 at 02:51:31PM -0500, Bob Friesenhahn wrote:
>
> I think that "none requested" likely means that the administrator has
> never issued a request to scrub the pool.
Or the system. That status line will show the last scrub/resilver to
have taken place. "None requested" means t
On Sat, Jun 14, 2008 at 02:19:05PM -0500, Bob Friesenhahn wrote:
> On Sat, 14 Jun 2008, Brian Wilson wrote:
>
> > What are the odds, in that configuration of zpool (no mirroring,
> > just using the intelligent disk as concatenated luns in the zpool)
> > that if we have this silent corruption, the whole zpool dies?
On Sat, Jun 14, 2008 at 12:11 PM, Brian Wilson <[EMAIL PROTECTED]> wrote:
>
>> On Sat, 14 Jun 2008, zfsmonk wrote:
>>
>> > Mentioned on
>> > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>> > is the following: "ZFS works well with storage based protected LUNs
On Sat, 14 Jun 2008, dick hoogendijk wrote:
>> With zfs you can scrub the pool at the system level. This allows you
>> to discover many issues early before they become nightmares.
>
> #zpool status
> scrub: none requested
>
> My question is really, do I wait 'till scrub is requested or am I
> sup
On Sat, 14 Jun 2008 14:19:05 -0500 (CDT)
Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> With zfs you can scrub the pool at the system level. This allows you
> to discover many issues early before they become nightmares.
#zpool status
scrub: none requested
My question is really, do I wait 'till s
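A scrub is never started automatically; the administrator requests one by hand, for example (pool name is a placeholder):

#zpool scrub tank
#zpool status tank

after which the status line changes from "none requested" to a scrub-in-progress or scrub-completed report.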
On Sat, 14 Jun 2008, Brian Wilson wrote:
> What are the odds, in that configuration of zpool (no mirroring,
> just using the intelligent disk as concatenated luns in the zpool)
> that if we have this silent corruption, the whole zpool dies? If
> anyone knows, what's the comparative odds of the
----- Original Message -----
From: Brian Wilson <[EMAIL PROTECTED]>
Date: Saturday, June 14, 2008 12:12 pm
Subject: Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays
To: Bob Friesenhahn <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
> > On Sat, 14 Jun 2
> On Sat, 14 Jun 2008, zfsmonk wrote:
>
> > Mentioned on
> > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> > is the following: "ZFS works well with storage based protected LUNs
> > (RAID-5 or mirrored LUNs from intelligent storage arrays). However,
On Sat, 14 Jun 2008, zfsmonk wrote:
> Mentioned on
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> is the following: "ZFS works well with storage based protected LUNs
> (RAID-5 or mirrored LUNs from intelligent storage arrays). However,
> ZFS cannot heal corrupted blocks that are detected by ZFS checksums."
On 14 June, 2008 - zfsmonk sent me these 0,7K bytes:
> Mentioned on
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> is the following: "ZFS works well with storage based protected LUNs
> (RAID-5 or mirrored LUNs from intelligent storage arrays). However,
> ZFS cannot heal corrupted blocks that are detected by ZFS checksums."
Mentioned on
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide is the
following:
"ZFS works well with storage based protected LUNs (RAID-5 or mirrored LUNs from
intelligent storage arrays). However, ZFS cannot heal corrupted blocks that are
detected by ZFS checksums."
bas
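To make the quoted caveat concrete: on a pool with no ZFS-level redundancy, a block that fails its checksum is reported rather than repaired, roughly like this hypothetical status excerpt:

#zpool status -v tank
...
errors: Permanent errors have been detected in the following files:
        /tank/some/file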