On Tue, Sep 2, 2008 at 2:15 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Silly me. It is still Monday, and I am coffee-challenged. RAIDoptimizer
> is still an internal tool. However, for those who are interested in the
> results of a RAIDoptimizer run for 48 disks, see:
> http://blogs.sun.c
> "bs" == Bill Sommerfeld <[EMAIL PROTECTED]> writes:
bs> In an IP network, end nodes generally know no more than the
bs> pipe size of the first hop -- and in some cases (such as true
bs> CSMA networks like classical Ethernet or wireless) only have
bs> an upper bound on the pipe size.
On Sun, 2008-08-31 at 15:03 -0400, Miles Nordin wrote:
> It's sort of like network QoS, but not quite, because:
>
> (a) you don't know exactly how big the ``pipe'' is, only
> approximately,
In an IP network, end nodes generally know no more than the pipe size of
the first hop -- and in some cases (such as true CSMA networks like
classical Ethernet or wireless) only have an upper bound on the pipe size.
On Sun, 2008-08-31 at 12:00 -0700, Richard Elling wrote:
> 2. The algorithm *must* be computationally efficient.
> We are looking down the tunnel at I/O systems that can
> deliver on the order of 5 million IOPS. We really won't
> have many (any?) spare cycles to play with.
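As a rough illustration of that constraint (a back-of-the-envelope sketch
with assumed figures, not numbers from the thread): at 5 million IOPS, a
single 3 GHz core has only about 600 cycles to spend per I/O, so any per-op
accounting has to be close to constant time.

    # Cycle budget per I/O at the quoted rate (illustrative only; the
    # 3 GHz core is an assumption, the 5M IOPS figure is from the thread).
    clock_hz = 3.0e9
    iops = 5.0e6
    cycles_per_io = clock_hz / iops
    print(f"~{cycles_per_io:.0f} CPU cycles available per I/O")  # ~600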
> The ZFS Administration Guide (in PDF format) does not
> look very professional (at least on
> Evince/OS2008.05). Please see the attached screenshot.
I have cleaned up the original PDF file. Please see:
http://tinyurl.com/zfs-pdf
The parts that were invisible in the original are now visible in the
corrected version. It is not
Richard Elling wrote:
> Barton Fisk wrote:
>
>> Hi,
>> Forgive my ignorance of ZFS, but I have a customer who would like to set up
>> three 14+2 raidz2 groups on a new Thor with 48 1TB drives (updated Thumper)
>> so that 42TB of data capacity could be achieved. What performance or other technical
>>
Barton Fisk wrote:
> Hi,
> Forgive my ignorance of ZFS, but I have a customer who would like to set up
> three 14+2 raidz2 groups on a new Thor with 48 1TB drives (updated Thumper)
> so that 42TB of data capacity could be achieved. What performance or other technical
> issues with a stripe 14 disks wide would he likely see?
Barton Fisk wrote:
> Sorry I omitted that CF will be the boot device. Thanks again.
>
What are you using for redundancy of the boot device?
Ian
Sorry I omitted that CF will be the boot device. Thanks again.
On Tue, Sep 2, 2008 at 15:39, Barton Fisk <[EMAIL PROTECTED]> wrote:
> Hi,
> Forgive my ignorance of ZFS, but I have a customer who would like to set up
> three 14+2 raidz2 groups on a new Thor with 48 1TB drives (updated Thumper)
> so that 42TB of data capacity could be achieved. What performance or other
Hi,
Forgive my ignorance of ZFS, but I have a customer who would like to set up
three 14+2 raidz2 groups on a new Thor with 48 1TB drives (updated Thumper) so
that 42TB of data capacity could be achieved. What performance or other technical
issues with a stripe 14 disks wide would he likely see? He doe
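For reference, the arithmetic behind that layout (a sketch with assumed
figures: decimal TB, no allowance for ZFS metadata or reserved space): three
14+2 raidz2 vdevs consume all 48 drives and leave 3 x 14 = 42 drives' worth
of usable space. Note that a raidz2 vdev delivers roughly the random-read
IOPS of a single disk, so a 14-wide stripe trades IOPS for capacity.

    # Capacity sketch for the proposed pool (illustrative; decimal TB,
    # ignoring ZFS metadata and reserved space).
    drives, drive_tb = 48, 1
    data_disks, parity_disks = 14, 2
    vdevs = drives // (data_disks + parity_disks)   # -> 3
    usable_tb = vdevs * data_disks * drive_tb       # -> 42
    print(f"{vdevs} raidz2 vdevs, ~{usable_tb} TB usable")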
On Tue, 2 Sep 2008, Kenny wrote:
>
> I used your script (thanks) but I fail to see which controller
> controls which disk... Your white paper shows six LUNs with the
> active state first and then six with the active state second;
> however, mine all show the active state first.
>
> Yes, I've verified
On Tue, Sep 2, 2008 at 11:44, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> The Fibre Channel ... offers a bit more bandwidth than SAS.
The bandwidth part of this statement is not accurate. SAS uses wide
ports composed of four 3 Gbit/s links (usually; other widths are
possible). Each of these has a
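To put rough numbers on that (a back-of-the-envelope sketch, assuming SAS-1
at 3 Gbit/s per lane and 4 Gbit FC, both with 8b/10b line coding): a
four-lane SAS wide port aggregates to roughly 1200 MB/s of payload
bandwidth, about three times what a single 4 Gbit FC link carries.

    # Rough payload-bandwidth comparison (assumed figures; 8b/10b coding
    # means 8 data bits travel per 10 line bits).
    def payload_mb_s(line_rate_gbit, lanes=1):
        return line_rate_gbit * 1e9 * lanes * (8 / 10) / 8 / 1e6

    print(f"SAS x4 wide port: ~{payload_mb_s(3, lanes=4):.0f} MB/s")  # ~1200
    print(f"4 Gbit FC link:   ~{payload_mb_s(4):.0f} MB/s")           # ~400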
Bob,
I used your script (thanks) but I fail to see which controller controls which
disk... Your white paper shows six LUNs with the active state first and then
six with the active state second; however, mine all show the active state first.
Yes, I've verified that both controllers are up and CAM see
On Tue, 2 Sep 2008, Mertol Ozyoney wrote:
> That's exactly what I said in a private email. The J4200 or J4400 can offer
> better price/performance. However, the price difference is not as large as
> you think. Besides, the 2540 has a few functions that cannot be found on the
> J series, like SAN connectivity,
On Mon, 1 Sep 2008, Gavin Maltby wrote:
> I'd like to be able to utter cmdlines such as
>
> $ zfs set readonly=on .
> $ zfs snapshot [EMAIL PROTECTED]
>
> with '.' interpreted to mean the dataset corresponding to the current
> working directory.
Sounds like it would be a useful RFE.
> This woul
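A rough sketch of how a wrapper script could emulate that today (my own
illustration, not ZFS behavior; the dataset-parsing step is an assumption
about df output for ZFS mounts):

    # zfs_here.py -- run a zfs subcommand against the dataset containing
    # the current directory (sketch only).
    import os
    import subprocess
    import sys

    def dataset_of(path):
        # Assumption: for a ZFS filesystem, the first field of the last
        # line of 'df <path>' output is the dataset name.
        out = subprocess.run(["df", path], capture_output=True, text=True,
                             check=True).stdout.splitlines()
        return out[-1].split()[0]

    if __name__ == "__main__":
        ds = dataset_of(os.getcwd())
        subprocess.run(["zfs"] + sys.argv[1:] + [ds], check=True)

So a hypothetical 'python zfs_here.py set readonly=on' would expand to
'zfs set readonly=on <dataset-of-cwd>'.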
Hello zfs-discuss,
I installed OpenSolaris 2008.05 on my notebook and then
upgraded it to b95 (following the required procedure). Everything
worked fine.
So now I booted into Windows, installed VirtualBox, and wanted
it to boot the OS from the physical partition.
So I crea
Thinking about it, we could make use of this too. The ability to add a
remote iSCSI mirror to any pool without sacrificing local performance
could be a huge benefit.
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
> Subject: Re: Availabilit
That's exactly what I said in a private email. The J4200 or J4400 can offer
better price/performance. However, the price difference is not as large as you
think. Besides, the 2540 has a few functions that cannot be found on the J
series, like SAN connectivity, internal redundant RAID controllers [redundancy is