On Thu, Jul 3, 2008 at 3:09 PM, Aaron Blew <[EMAIL PROTECTED]> wrote:
> My take is that since RAID-Z creates a stripe for every block
> (http://blogs.sun.com/bonwick/entry/raid_z), it should be able to
> rebuild the bad sectors on a per-block basis. I'd assume that the
> likelihood of having bad sectors in the same places on all the disks
> is pretty low since we're only reading
Greetings,
I want to take advantage of the iSCSI target support in the latest release
(svn_91) of OpenSolaris, and I'm running into some performance problems when
reading/writing from/to my target. I'm including as much detail as I can so
bear with me here...
I've built an x86 OpenSolaris server
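For reference, on that build the usual way to publish a zvol as an iSCSI
target is the shareiscsi property; a minimal sketch, with made-up pool and
volume names:

  # create a 100 GB zvol and share it as an iSCSI target
  zfs create -V 100g tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol
  iscsitadm list target   # verify the target shows up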
My take is that since RAID-Z creates a stripe for every block
(http://blogs.sun.com/bonwick/entry/raid_z), it should be able to
rebuild the bad sectors on a per-block basis. I'd assume that the
likelihood of having bad sectors in the same places on all the disks
is pretty low since we're only reading
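A rough back-of-envelope supports that (illustrative numbers, not
measurements): assume an unrecoverable read error rate of 1 per 10^14 bits
and 512-byte sectors.

  P(given sector unreadable)        = 4096 / 10^14          ~ 4.1e-11
  P(same position bad on 2 disks)   = (4.1e-11)^2           ~ 1.7e-21
  expected overlaps per 1 TB disk   = 2e9 sectors * 1.7e-21 ~ 3.4e-12

So the odds of the same stripe position being unreadable on multiple disks
at once really are vanishingly small.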
On Thu, 3 Jul 2008, Richard Elling wrote:
>
> nit: SATA disks are single port, so you would need a SAS
> implementation to get multipathing to the disks. This will not
> significantly impact the overall availability of the data, however.
> I did an availability analysis of thumper to show this.
Miles Nordin wrote:
>> "djm" == Darren J Moffat <[EMAIL PROTECTED]> writes:
>> "bf" == Bob Friesenhahn <[EMAIL PROTECTED]> writes:
>>
>
> djm> Why are you planning on using RAIDZ-2 rather than mirroring?
>
> isn't MTTDL sometimes shorter for mirroring than raidz2? I think that
> is the biggest point of raidz2, is it not?
Anyone here read the article "Why RAID 5 stops working in 2009" at
http://blogs.zdnet.com/storage/?p=162
Does RAIDZ have the same chance of unrecoverable read error as RAID5 in Linux
if the RAID has to be rebuilt because of a faulty disk? I imagine so because
of the physical constraints that p
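The article's arithmetic, roughly reconstructed: rebuilding a 7-drive RAID-5
of 2 TB consumer disks means reading the 6 surviving drives end to end.

  bits read on rebuild = 6 disks * 2e12 bytes * 8  ~ 1e14 bits
  expected UREs        = 1e14 bits / 1e14 bits per error ~ 1 per rebuild

That is, at the spec'd rate of one unrecoverable read error per 10^14 bits,
a full rebuild is more likely than not to hit one.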
[Richard Elling] wrote:
> Don Enrique wrote:
>> Hi,
>>
>> I am looking for some best practice advice on a project that I am working on.
>>
>> We are looking at migrating ~40TB backup data to ZFS, with an annual data
>> growth of
>> 20-25%.
>>
>> Now, my initial plan was to create one large pool comprised of X RAIDZ-2 vdevs (7 + 2)
Albert Chin wrote:
> On Thu, Jul 03, 2008 at 01:43:36PM +0300, Mertol Ozyoney wrote:
>
>> You are right that the J series do not have NVRAM onboard. However most JBODs,
>> like HP's MSA series, have some NVRAM.
>> The idea behind not using NVRAM on the JBODs is
>>
>> -) There is no point adding limited RAM to a JBOD, as disks already have a lot
>> of cache.
> "djm" == Darren J Moffat <[EMAIL PROTECTED]> writes:
> "bf" == Bob Friesenhahn <[EMAIL PROTECTED]> writes:
djm> Why are you planning on using RAIDZ-2 rather than mirroring ?
isn't MTTDL sometimes shorter for mirroring than raidz2? I think that
is the biggest point of raidz2, is it not?
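The usual first-order MTTDL model (as on Richard Elling's blog) makes the
comparison concrete. With per-disk MTBF and repair time MTTR, and ignoring
unrecoverable read errors and common-cause failures:

  2-way mirror pair : MTTDL ~ MTBF^2 / (2 * MTTR)
  raidz2, N disks   : MTTDL ~ MTBF^3 / (N*(N-1)*(N-2) * MTTR^2)

  e.g. MTBF = 1e6 h, MTTR = 100 h, N = 9:
    mirror : 1e12 / 200         = 5e9  h
    raidz2 : 1e18 / (504 * 1e4) ~ 2e11 h

so on this model a 9-disk raidz2 vdev survives roughly 40x longer than a
2-way mirror pair.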
On Thu, Jul 03, 2008 at 01:43:36PM +0300, Mertol Ozyoney wrote:
> You are right that the J series do not have NVRAM onboard. However most JBODs,
> like HP's MSA series, have some NVRAM.
> The idea behind not using NVRAM on the JBODs is
>
> -) There is no point adding limited RAM to a JBOD, as disks already have a lot
> of cache.
Don Enrique wrote:
> Hi,
>
> I am looking for some best practice advice on a project that I am working on.
>
> We are looking at migrating ~40TB backup data to ZFS, with an annual data
> growth of
> 20-25%.
>
> Now, my initial plan was to create one large pool comprised of X RAIDZ-2
> vdevs (7 + 2)
I'm going down a bit of a different path with my reply here. I know that every
shop's data needs are different, but hear me out.
1) You're backing up 40TB+ of data, increasing at 20-25% per year. That's
insane. Perhaps it's time to look at your backup strategy not from a hardware
perspective
On Thu, 3 Jul 2008, Don Enrique wrote:
>
> This means that I could potentially lose 40TB+ of data if three
> disks within the same RAIDZ-2 vdev should die before the resilvering
> of at least one disk is complete. Since most disks will be filled, I
> do expect rather long resilvering times.
Yes
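To put a rough number on the three-disk scenario (illustrative figures only):
with a 9-disk raidz2 vdev, one disk already dead, a per-disk MTBF of 1e6 hours
and a 48-hour resilver window,

  P(2 of the remaining 8 fail during resilver)
    ~ C(8,2) * (48 / 1e6)^2 = 28 * 2.3e-9 ~ 6.5e-8 per resilver

small, but it grows quadratically as resilver times stretch out on full disks.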
> Don Enrique wrote:
> > Now, my initial plan was to create one large pool comprised of X RAIDZ-2
> > vdevs (7 + 2) with one hotspare per 10 drives and just continue to expand
> > that pool as needed.
> >
> > Between calculating the MTTDL and performance models I was hit by a rather
> > scary thought.
Don Enrique wrote:
> Now, my initial plan was to create one large pool comprised of X RAIDZ-2
> vdevs ( 7 + 2 )
> with one hotspare per 10 drives and just continue to expand that pool as
> needed.
>
> Between calculating the MTTDL and performance models I was hit by a rather
> scary thought.
>
Hi,
I am looking for some best practice advice on a project that I am working on.
We are looking at migrating ~40TB backup data to ZFS, with an annual data
growth of
20-25%.
Now, my initial plan was to create one large pool comprised of X RAIDZ-2 vdevs
(7 + 2)
with one hotspare per 10 drives and just continue to expand that pool as needed.
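For what it's worth, that layout is one command per vdev; a sketch with
hypothetical device names:

  # first 7+2 raidz2 vdev plus a hot spare
  zpool create backup \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
    spare c1t9d0

  # later growth: add another 9-disk raidz2 vdev to the same pool
  zpool add backup \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0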
Mertol Ozyoney wrote:
> Hi;
>
> You are right that the J series do not have NVRAM onboard. However most JBODs,
> like HP's MSA series, have some NVRAM.
> The idea behind not using NVRAM on the JBODs is
>
> -) There is no point adding limited RAM to a JBOD, as disks already have a lot
> of cache.
> -) It's easy to design a redundant JBOD with
You should be able to buy them today. GA should be next week.
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]
-----Original Message-----
From: Tim [mailto:[EMAIL PROTECTED]]
Hi;
You are right that the J series do not have NVRAM onboard. However most JBODs,
like HP's MSA series, have some NVRAM.
The idea behind not using NVRAM on the JBODs is
-) There is no point adding limited RAM to a JBOD, as disks already have a lot
of cache.
-) It's easy to design a redundant JBOD with
"Walter Faleiro" <[EMAIL PROTECTED]> writes:
> Hi,
> I reinstalled our Solaris 10 box using the latest update available.
> However I could not upgrade the zpool
>
> bash-3.00# zpool upgrade -v
> This system is currently running ZFS version 4.
Regarding the error checking, as others suggested, your best bet is to buy two
devices and mirror them. ZFS has great error checking, why not use it :D
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
And regarding the memory loss after the battery runs down, that's no different
to any
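Attaching a mirrored slog is a one-liner on any pool version that supports
log devices (device names made up):

  zpool add tank log mirror c3t0d0 c3t1d0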
Hi,
I reinstalled our Solaris 10 box using the latest update available.
However I could not upgrade the zpool
bash-3.00# zpool upgrade -v
This system is currently running ZFS version 4.
The following versions are supported:
VER  DESCRIPTION
---  --------------------------------------------------------
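Note that zpool upgrade -v only lists the versions the software supports. To
actually upgrade, name the pool, or upgrade everything at once:

  zpool upgrade mypool   # one pool (hypothetical name)
  zpool upgrade -a       # all pools on the system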