On Tue, 2 Sep 2008, Kenny wrote:
>
> I used your script (thanks) but I fail to see which controller
> controls which disk... Your white paper shows six luns with the
> active state first and then six with the active state second,
> however mine all show active state first.
>
> Yes, I've verified
On Tue, Sep 2, 2008 at 11:44, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> The fiber channel ... offers a bit more bandwidth than SAS.
The bandwidth part of this statement is not accurate. SAS uses wide
ports composed of four 3 Gbit/s links (other widths are possible).
Each of these has a
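The arithmetic behind that correction can be sketched with back-of-the-envelope numbers. This is illustrative only, assuming 3 Gbit/s SAS-1 lanes, a x4 wide port, a single 4 Gbit/s FC link, and 8b/10b line encoding (roughly 20% overhead) on both; real throughput also depends on protocol overhead and the drives behind the port:

```python
# Back-of-the-envelope usable bandwidth. Assumptions (not from the
# original post): 3 Gbit/s SAS lanes, 4 Gbit/s FC, 8b/10b encoding
# so usable payload is ~80% of the line rate on both interconnects.
def usable_gbit(line_rate_gbit, lanes=1, encoding_efficiency=0.8):
    """Approximate usable bandwidth in Gbit/s for a (possibly wide) port."""
    return line_rate_gbit * lanes * encoding_efficiency

sas_wide = usable_gbit(3.0, lanes=4)   # x4 wide SAS port
fc_link  = usable_gbit(4.0)            # single 4 Gbit FC link
print(f"SAS x4 wide port: {sas_wide:.1f} Gbit/s usable")
print(f"4 Gbit FC link:   {fc_link:.1f} Gbit/s usable")
```

On these assumptions a single x4 SAS port carries roughly three times the payload of one 4 Gbit FC link, which is the point being made against "FC offers a bit more bandwidth than SAS".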
Bob,
I used your script (thanks) but I fail to see which controller controls which
disk... Your white paper shows six luns with the active state first and then
six with the active state second, however mine all show active state first.
Yes, I've verified that both controllers are up and CAM see
On Tue, 2 Sep 2008, Mertol Ozyoney wrote:
> That's exactly what I said in a private email. J4200 or J4400 can offer
> better price/performance. However the price difference is not as much as you
> think. Besides, the 2540 has a few functions that cannot be found on the J
> series, like SAN connectivity,
-Original Message-
From: Al Hopper [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 02, 2008 3:53 AM
To: [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Proposed 2540 and ZFS configuration
On Mon,
On Mon, Sep 1, 2008 at 5:18 PM, Mertol Ozyoney <[EMAIL PROTECTED]> wrote:
> A few quick notes.
>
> The 2540's first 12 drives are extremely fast because they have
> direct, unshared connections. I do not mean that additional disks are slow, I
> want to say that the first 12 are extremely fast, c
Robert Milkowski wrote:
> Hello Bob,
>
> Friday, August 29, 2008, 7:25:14 PM, you wrote:
>
> BF> On Fri, 29 Aug 2008, Kyle McDonald wrote:
>>> What would one look for to decide what vdev to place each LUN?
>>>
>>> All mine have the same Current Load Balance value: round robin.
>
> BF> That is a
2:04 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Proposed 2540 and ZFS configuration
Personally I'd go for an 11 disk raid-z2, with one hot spare. You lose
some capacity, but you've got more than enough for your current needs, and
with 1TB disks single parity raid means a l
Hello Bob,
Friday, August 29, 2008, 7:25:14 PM, you wrote:
BF> On Fri, 29 Aug 2008, Kyle McDonald wrote:
>>>
>> What would one look for to decide what vdev to place each LUN?
>>
>> All mine have the same Current Load Balance value: round robin.
BF> That is a good question and I will have to rem
On Sun, 31 Aug 2008, Ross wrote:
> You could split this into two raid-z2 sets if you wanted, that would
> have a bit better performance, but if you can cope with the speed of
> a single pool for now I'd be tempted to start with that. It's
> likely that by Christmas you'll be able to buy flash
With the restriping: wouldn't it be as simple as creating a new
folder/dataset/whatever on the same pool and doing an rsync to the
same pool/new location. This would obviously cause a short downtime
to switch over and delete the old dataset, but seems like it should
work fine. If you're doubling
Personally I'd go for an 11 disk raid-z2, with one hot spare. You lose some
capacity, but you've got more than enough for your current needs, and with 1TB
disks single-parity raid means a lot of time with your data unprotected when
one fails.
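The capacity trade-off mentioned above can be worked out directly. A minimal sketch, assuming 1 TB drives and ignoring ZFS metadata and reservation overhead (`raidz2_usable_tb` is just an illustrative helper, not a ZFS tool):

```python
# Rough usable capacity: raidz2 gives up two drives' worth of space
# per vdev for parity. Assumes 1 TB drives; real pools lose a bit
# more to metadata and reservations.
def raidz2_usable_tb(drives_in_vdev, drive_tb=1.0):
    return (drives_in_vdev - 2) * drive_tb

print(raidz2_usable_tb(11))        # 11-disk raidz2 + hot spare -> ~9 TB
print(raidz2_usable_tb(6) * 2)     # two 6-disk raidz2 vdevs    -> ~8 TB
```

With a ~3.5 TB current data set, either layout leaves plenty of headroom; the two-vdev split trades about 1 TB of space for better IOPS.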
You could split this into two raid-z2 sets if you
On Fri, 29 Aug 2008, Kyle McDonald wrote:
>>
> What would one look for to decide what vdev to place each LUN?
>
> All mine have the same Current Load Balance value: round robin.
That is a good question and I will have to remind myself of the
answer. The "round robin" is good because that means
Bob Friesenhahn wrote:
> On Fri, 29 Aug 2008, Bob Friesenhahn wrote:
>
>> If you do use the two raidz2 vdevs, then if you pay attention to how
>> MPxIO works, you can balance the load across your two fiber channel
>> links for best performance. Each raidz2 vdev can be served (by
>> default) by
On Fri, 29 Aug 2008, Bob Friesenhahn wrote:
>
> If you do use the two raidz2 vdevs, then if you pay attention to how
> MPxIO works, you can balance the load across your two fiber channel
> links for best performance. Each raidz2 vdev can be served (by
> default) by a different FC link.
As a foll
On Fri, 29 Aug 2008, Kenny wrote:
>
> 1) I didn't do raidz2 because I didn't want to lose the space. Is
> this a bad idea?
Raidz2 is the most reliable vdev configuration other than
triple-mirror. The pool is only as strong as its weakest vdev. In
private email I suggested using all 12 drives
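The reliability claim can be checked by brute force. A small sketch over hypothetical 12-disk layouts, assuming a vdev is lost once more of its members fail than it has parity (or mirror redundancy), and the pool is lost with any vdev:

```python
# Compare how many 2-disk failure combinations each layout survives.
# Layouts are hypothetical examples, not the poster's actual config.
from itertools import combinations

def survives(layout, failed):
    """layout: list of (vdev_size, tolerated_failures) tuples covering
    disks 0..n-1 in order. Pool survives iff no vdev exceeds its limit."""
    start = 0
    for size, tolerated in layout:
        members = set(range(start, start + size))
        if len(members & failed) > tolerated:
            return False
        start += size
    return True

def survival_rate(layout, n_failures):
    n = sum(size for size, _ in layout)
    cases = list(combinations(range(n), n_failures))
    ok = sum(survives(layout, set(f)) for f in cases)
    return ok, len(cases)

raidz2  = [(12, 2)]        # one 12-disk raidz2 vdev
mirrors = [(2, 1)] * 6     # six 2-way mirrors, 12 disks total
print(survival_rate(raidz2, 2))    # -> (66, 66): any 2 failures survive
print(survival_rate(mirrors, 2))   # -> (60, 66): dies if both hit one pair
```

Raidz2 shrugs off any two failures in the vdev, while 2-way mirrors lose the pool whenever both failures land in the same pair, which is why only triple-mirror beats it.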
Hello again...
Now that I've got my 2540 up and running, I'm considering which configuration
is best. I have a proposed config and wanted your opinions and comments on it.
Background
I have a requirement to host syslog data from approx 30 servers. Currently the
data is about 3.5TB in