> "as" == Andras Spitzer writes:
as> So, you telling me that even if the SAN provides redundancy
as> (HW RAID5 or RAID1), people still configure ZFS with either
as> raidz or mirror?
There's some experience that, in the case where the storage device or
the FC mesh glitches or rebo
Hey guys,
I'll let this die in a sec, but I just wanted to say that I've gone
and read the on disk document again this morning, and to be honest
Richard, without the description you just wrote, I really wouldn't
have known that uberblocks are in a 128 entry circular queue that's 4x
redundant.
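For anyone who wants to see that structure for themselves, zdb can dump it. This is a sketch, assuming a pool named "tank" and a vdev at /dev/rdsk/c3d0s0 (both names are placeholders for your own setup):

```shell
# Show the currently active uberblock (TXG, GUID, timestamp):
zdb -u tank

# Dump the labels on a vdev. Each of the four labels carries a copy
# of the uberblock ring, which is where the 4x redundancy lives:
zdb -l /dev/rdsk/c3d0s0
```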
Ple
On Sat, 14 Feb 2009 15:40:04 -0600 (CST)
David Dyer-Bennet wrote:
>
> On Sat, February 14, 2009 13:04, Blake wrote:
> > I think you can kill the destroy command process using traditional
> > methods.
>
> kill and kill -9 failed. In fact, rebooting failed; I had to use a
> hard reset (it shut d
On Sat, February 14, 2009 13:04, Blake wrote:
> I think you can kill the destroy command process using traditional
> methods.
kill and kill -9 failed. In fact, rebooting failed; I had to use a hard
reset (it shut down most of the way, but then got stuck).
> Perhaps your slowness issue is becaus
Antonio wrote:
I can mount those partitions well using ext2fs, so I assume I won't
need to run gparted at all.
This is what prtpart says about my stuff.
Kind regards,
Antonio
r...@antonio:~# prtpart /dev/rdsk/c3d0p0 -ldevs
Fdisk information for device /dev/rdsk/c3d0p0
** NOTE **
/dev/dsk/c3
A useful article about long-term use of the Intel SSD X25-M:
http://www.pcper.com/article.php?aid=669 - Long-term performance analysis
of Intel Mainstream SSDs.
Would a zfs cache (ZIL or L2ARC) based on an SSD device see this kind of issue?
Maybe a periodic scrub via a full disk erase would be a use
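For reference, attaching an SSD as either a dedicated log or a read cache is a one-liner. A sketch, assuming an existing pool "tank" and an SSD visible as c4t0d0 (both hypothetical names):

```shell
# Use the SSD as a dedicated intent log (slog) for synchronous writes:
zpool add tank log c4t0d0

# Or use it as an L2ARC read cache instead:
zpool add tank cache c4t0d0
```

Note that a cache device can simply be removed later with `zpool remove` if it misbehaves, which makes L2ARC a fairly low-risk place to try an SSD.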
I think you can kill the destroy command process using traditional methods.
Perhaps your slowness issue is because the pool is an older format.
I've not had these problems since upgrading to the ZFS version that
ships by default with 2008.11.
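Checking whether a pool is on an older on-disk version, and upgrading it, looks like this. A sketch, assuming a pool named "tank" (hypothetical); note the upgrade is one-way:

```shell
# List pools whose on-disk format is older than what this system supports:
zpool upgrade

# Upgrade "tank" to the current version (irreversible):
zpool upgrade tank
```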
On Fri, Feb 13, 2009 at 4:14 PM, David Dyer-Bennet wrote
On Fri, 13 Feb 2009, Andras Spitzer wrote:
So, you telling me that even if the SAN provides redundancy (HW
RAID5 or RAID1), people still configure ZFS with either raidz or
mirror?
When ZFS's redundancy features are used, there is decreased risk of
total pool failure. With redundancy at the
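Concretely, the usual approaches look like this. A sketch, assuming the SAN exports two LUNs as c5t0d0 and c5t1d0 (hypothetical device names):

```shell
# ZFS-layer mirror across two SAN LUNs: ZFS can self-heal
# checksum errors by reading the good side of the mirror.
zpool create tank mirror c5t0d0 c5t1d0

# Without ZFS-level redundancy, "copies=2" at least duplicates data
# blocks within the pool -- though it does not protect against the
# loss of a whole LUN.
zfs set copies=2 tank
```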
On Fri, 13 Feb 2009, Frank Cusack wrote:
i'm sorry to berate you, as you do make very valuable contributions to
the discussion here, but i take offense at your attempts to limit
discussion simply because you know everything there is to know about
the subject.
The point is that those of us in t
Andras Spitzer wrote:
Is it worth moving the redundancy from the SAN array layer to the ZFS layer?
(Configuring redundancy on both layers sounds like a waste to me.) There are
certain advantages to having redundancy configured on the array (beyond the
protection against simple disk failure).
Hi Antonio,
did you try to recreate this partition, e.g. with GParted?
Maybe something is wrong with this partition.
Can you also post what "prtpart <disk ID> -ldevs" says?
Regards,
Jan Hlodan
Antonio wrote:
Hi Jan,
I tried what you suggested a while ago, but zfs fails on pool creation.
This is,
On 14-Feb-09, at 2:40 AM, Andras Spitzer wrote:
Damon,
Yes, we can provide simple concat inside the array (even though
today we provide RAID5 or RAID1 as our standard, and using Veritas
with concat), the question is more of if it's worth it to switch
the redundancy from the array to the
Antonio wrote:
> Hi all,
>
> First of all let me say that, after a few days using it (and after several
> *years* of using Linux daily), I'm delighted with OpenSolaris 2008.11. It's
> gonna be the OS of my choice.
>
> The fact is that I installed it in a 16 GB partition on my hard disk and
> tha