On Wed, 27 Feb 2008, Cyril Plisko wrote:
>>
>>
>> http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf
>
> Nov 26, 2008 ??? May I borrow your time machine ? ;-)
Are there any stock prices you would like to know about? Perhaps you
are interested in the outcome of the
On Wed, Feb 27, 2008 at 6:17 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Sun, 17 Feb 2008, Mertol Ozyoney wrote:
>
> > Hi Bob;
> >
> > When you have some spare time can you prepare a simple benchmark report in
> > PDF that I can share with my customers to demonstrate the performance of
On Sun, 17 Feb 2008, Mertol Ozyoney wrote:
> Hi Bob;
>
> When you have some spare time can you prepare a simple benchmark report in
> PDF that I can share with my customers to demonstrate the performance of
> 2540 ?
While I do not claim that it is "simple", I have created a report on my
configuration
It is the same for the 2530, and I am fairly certain it is also valid
for the 6130, 6140, & 6540.
-Joel
On Feb 18, 2008, at 3:51 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello Joel,
>
> Saturday, February 16, 2008, 4:09:11 PM, you wrote:
>
> JM> Bob,
>
> JM> Here is how you can tell the array to ignore cache sync commands
Hello Joel,
Saturday, February 16, 2008, 4:09:11 PM, you wrote:
JM> Bob,
JM> Here is how you can tell the array to ignore cache sync commands
JM> and the force unit access bits...(Sorry if it wraps..)
JM> On a Solaris CAM install, the 'service' command is in "/opt/SUNWsefms/bin"
JM> To read the current settings:
JM> service -d arrayname -c read -q nvsram region=0xf2 host=0x00
Bob Friesenhahn writes:
> On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
> >>> What was the interlace on the LUN ?
> >
> > The question was about LUN interlace not interface.
> > 128K to 1M works better.
>
> The "segment size" is set to 128K. The max the 2540 allows is 512K.
Unfortunately, the StorageTek 2540 and CAM documentation do
On Mon, 18 Feb 2008, Ralf Ramge wrote:
> I'm a bit disturbed because I'm thinking about switching to 2530/2540
> shelves, but a maximum 250 MB/sec would disqualify them instantly, even
Note that this is single-file/single-thread I/O performance. I suggest
that you read the formal benchmark report for
Mertol Ozyoney wrote:
>
> 2540 controller can achieve a maximum of 250 MB/sec on writes on the first
> 12 drives. So you are pretty close to maximum throughput already.
>
> RAID 5 can be a little bit slower.
>
I'm a bit irritated now. I have ZFS running for some Sybase ASE 12.5
databases using X4600 servers
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bob Friesenhahn
Sent: Saturday, 16 February 2008 19:57
To: Joel Miller
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Performance with Sun
On Sat, 16 Feb 2008, Joel Miller wrote:
> Here is how you can tell the array to ignore cache sync commands and
> the force unit access bits...(Sorry if it wraps..)
Thanks to the kind advice of yourself and Mertol Ozyoney, there is a
huge boost in write performance:
Was: 154MB/second
Now: 279MB/second
-Original Message-
From: Bob Friesenhahn [mailto:[EMAIL PROTECTED]
Sent: Saturday, 16 February 2008 18:43
To: Mertol Ozyoney
Cc: zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] Performance with Sun StorageTek 2540
On Sat, 16 Feb 2008, Mertol Ozyoney wrote:
On Sat, 16 Feb 2008, Mertol Ozyoney wrote:
>
> Please try to distribute LUNs between controllers and try to benchmark by
> disabling cache mirroring. (it's different than disabling cache)
By the term "disabling cache mirroring" are you talking about "Write
Cache With Replication Enabled" in the
On Sat, 16 Feb 2008, Peter Tribble wrote:
> Agreed. My 2530 gives me about 450MB/s on writes and 800 on reads.
> That's zfs striped across 4 LUNs, each of which is hardware raid-5
> (24 drives in total, so each raid-5 LUN is 5 data + 1 parity).
Is this single-file bandwidth or multiple-file/thread bandwidth?
Bob,
Here is how you can tell the array to ignore cache sync commands and the force
unit access bits...(Sorry if it wraps..)
On a Solaris CAM install, the 'service' command is in "/opt/SUNWsefms/bin"
To read the current settings:
service -d arrayname -c read -q nvsram region=0xf2 host=0x00
sav
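A minimal sketch of that read step, assuming the install path quoted above
and an array registered under the placeholder name "arrayname":

  # Put the CAM 'service' tool on the PATH, then dump the NVSRAM region
  # that holds the cache-sync / force-unit-access handling bits.
  export PATH=$PATH:/opt/SUNWsefms/bin
  service -d arrayname -c read -q nvsram region=0xf2 host=0x00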
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tim
Sent: Friday, 15 February 2008 03:13
To: Bob Friesenhahn
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Performance with Sun StorageTek 2540
On 2/14/08, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
On Feb 15, 2008 10:20 PM, Luke Lonergan <[EMAIL PROTECTED]> wrote:
> Hi Bob,
>
> On 2/15/08 12:13 PM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote:
>
> > I only managed to get 200 MB/s write when I did RAID 0 across all
> > drives using the 2540's RAID controller and with ZFS on top.
>
> Ridiculously bad.
The segment size is the amount of contiguous space that each drive contributes to a
single stripe.
So if you have a 5 drive RAID-5 set @ 128k segment size, a single stripe =
(5-1)*128k = 512k
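The same arithmetic as a quick shell check (values from the example above):

  # stripe width = (data drives) * (segment size)
  echo $(( (5 - 1) * 128 ))k   # prints 512k for a 5-drive RAID-5 at 128k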
BTW, Did you tweak the cache sync handling on the array?
-Joel
On Fri, 15 Feb 2008, Albert Chin wrote:
>
> http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/59b43034602a7b7f/0b500afc4d62d434?lnk=st&q=#0b500afc4d62d434
This is really discouraging. Based on these newsgroup postings I am
thinking that the Sun StorageTek 2540 was not a good investment.
Bob Friesenhahn wrote:
> On Fri, 15 Feb 2008, Luke Lonergan wrote:
>
>>> I only managed to get 200 MB/s write when I did RAID 0 across all
>>> drives using the 2540's RAID controller and with ZFS on top.
>>>
>> Ridiculously bad.
>>
>
> I agree. :-(
>
>
>>> While I agree that data
On Fri, 15 Feb 2008, Luke Lonergan wrote:
>> I only managed to get 200 MB/s write when I did RAID 0 across all
>> drives using the 2540's RAID controller and with ZFS on top.
>
> Ridiculously bad.
I agree. :-(
>> While I agree that data is sent twice (actually up to 8X if striping
>> across four
Hi Bob,
On 2/15/08 12:13 PM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote:
> I only managed to get 200 MB/s write when I did RAID 0 across all
> drives using the 2540's RAID controller and with ZFS on top.
Ridiculously bad.
You should max out both FC-AL links and get 800 MB/s.
> While I agree
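As a sanity check on that 800 MB/s figure: 4Gb Fibre Channel uses 8b/10b
encoding, so each link carries roughly 400 MB/s of payload, and two
load-shared links give about double that:

  # 4000 Mbit/s per link / 10 encoded bits per byte = ~400 MB/s;
  # two load-shared links => ~800 MB/s aggregate
  echo $(( 2 * 4000 / 10 ))   # prints 800 (MB/s)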
On Fri, Feb 15, 2008 at 09:00:05PM +, Peter Tribble wrote:
> On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn
> <[EMAIL PROTECTED]> wrote:
> > On Fri, 15 Feb 2008, Peter Tribble wrote:
> > >
> > > May not be relevant, but still worth checking - I have a 2530 (which ought
> > > to be the same, only SAS instead of FC), and got fairly poor performance
On Fri, 15 Feb 2008, Bob Friesenhahn wrote:
>
> Notice that the first six LUNs are active to one controller while the
> second six LUNs are active to the other controller. Based on this, I
> should rebuild my pool by splitting my mirrors across this boundary.
>
> I am really happy that ZFS makes s
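A minimal sketch of what such a rebuild could look like, assuming six LUNs
owned by each controller and hypothetical cXtYdZ device names (each mirror
pairs one LUN from each controller):

  # Hypothetical layout: c6t0d0..c6t5d0 on controller A,
  # c6t6d0..c6t11d0 on controller B; each mirror spans both.
  zpool create tank \
      mirror c6t0d0 c6t6d0 \
      mirror c6t1d0 c6t7d0 \
      mirror c6t2d0 c6t8d0 \
      mirror c6t3d0 c6t9d0 \
      mirror c6t4d0 c6t10d0 \
      mirror c6t5d0 c6t11d0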
On Fri, 15 Feb 2008, Peter Tribble wrote:
> Each LUN is accessed through only one of the controllers (I presume the
> 2540 works the same way as the 2530 and 61X0 arrays). The paths are
> active/passive (if the active fails it will relocate to the other path).
> When I set mine up the first time i
On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Fri, 15 Feb 2008, Peter Tribble wrote:
> >
> > May not be relevant, but still worth checking - I have a 2530 (which ought
> > to be the same, only SAS instead of FC), and got fairly poor performance
> > at first. T
On Fri, 15 Feb 2008, Peter Tribble wrote:
>
> May not be relevant, but still worth checking - I have a 2530 (which ought
> to be the same, only SAS instead of FC), and got fairly poor performance
> at first. Things improved significantly when I got the LUNs properly
> balanced across the controllers.
On Fri, Feb 15, 2008 at 12:30 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
> up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
> connected via load-shared 4Gbit FC links. This week I have tried many
> different
On Fri, 15 Feb 2008, Luke Lonergan wrote:
I'm assuming you're measuring sequential write speed; posting the iozone
results would help guide the discussion.
Posted below. I am also including the output from mpathadm in case
there is something wrong with the load sharing.
For the configuration
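A minimal sketch of the corresponding multipath checks on Solaris (the
device name is a hypothetical placeholder):

  # List all multipathed logical units, then show path and
  # load-balancing detail for one of them.
  mpathadm list lu
  mpathadm show lu /dev/rdsk/c6t0d0s2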
Hi Bob,
I'm assuming you're measuring sequential write speed; posting the iozone
results would help guide the discussion.
For the configuration you describe, you should definitely be able to sustain
200 MB/s write speed for a single file, single thread due to your use of
4Gbps Fibre Channel interfaces.
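A minimal sketch of an iozone run for this kind of single-file,
single-thread sequential test (the pool path is a placeholder, and the file
is sized well past the machine's 20GB of RAM to defeat caching):

  # -i 0: sequential write/rewrite, -i 1: sequential read/reread,
  # 128k records, 64GB test file on the pool under test
  iozone -i 0 -i 1 -r 128k -s 64g -f /tank/iozone.tmp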
On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>>> What was the interlace on the LUN ?
>
> The question was about LUN interlace not interface.
> 128K to 1M works better.
The "segment size" is set to 128K. The max the 2540 allows is 512K.
Unfortunately, the StorageTek 2540 and CAM documentation do
On 15 Feb 2008, at 18:24, Bob Friesenhahn wrote:
> On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>>>
>>> As mentioned before, the write rate peaked at 200MB/second using
>>> RAID-0 across 12 disks exported as one big LUN.
>>
>> What was the interlace on the LUN ?
>
The question was about LUN interlace not interface.
On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>>
>> As mentioned before, the write rate peaked at 200MB/second using
>> RAID-0 across 12 disks exported as one big LUN.
>
> What was the interlace on the LUN ?
There are two 4Gbit FC interfaces on an Emulex LPe11002 card which are
supposedly acting
On 15 Feb 2008, at 03:34, Bob Friesenhahn wrote:
> On Thu, 14 Feb 2008, Tim wrote:
>>
>> If you're going for best single file write performance, why are you
>> doing
>> mirrors of the LUNs? Perhaps I'm misunderstanding why you went
>> from one
>> giant raid-0 to what is essentially a raid-10.
On Fri, 15 Feb 2008, Will Murnane wrote:
> What is the workload for this system? Benchmarks are fine and good,
> but application performance is the determining factor of whether a
> system is performing acceptably.
The system is primarily used for image processing where the image data
is uncompressed
On Fri, Feb 15, 2008 at 2:34 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> As mentioned before, the write rate peaked at 200MB/second using
> RAID-0 across 12 disks exported as one big LUN. Other firmware-based
> methods I tried typically offered about 170MB/second. Even a four
> disk firmware
On Thu, 14 Feb 2008, Tim wrote:
>
> If you're going for best single file write performance, why are you doing
> mirrors of the LUNs? Perhaps I'm misunderstanding why you went from one
> giant raid-0 to what is essentially a raid-10.
That decision was made because I also need data reliability.
As
On 2/14/08, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
>
> Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
> up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
> connected via load-shared 4Gbit FC links. This week I have tried many
> different configurations, using
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
connected via load-shared 4Gbit FC links. This week I have tried many
different configurations, using firmware managed RAID, ZFS managed
RAID, and with the controller