Robert,
> I believe it's not solved yet but you may want to try with
> latest nevada and see if there's a difference.
It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express
post build 47 I think.
- Luke
ndexed access alongside heavy analytical 'update rarely if ever' kind of
workloads.
- Luke
- Original Message -
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: Luke Lonergan
Cc: [EMAIL PROTECTED] <[EMAIL PROTECTED]>; zfs-discuss@opensolaris.org
Sent: Sat Nov 22 2
ZFS works marvelously well for data warehouse and analytic DBs. For lots of
small updates scattered across the breadth of the persistent working set, it's
not going to work well IMO.
Note that we're using ZFS to host databases as large as 10,000 TB - that's 10PB
(!!). Solaris 10 U5 on X4540.
Hi Bob,
On 2/15/08 12:13 PM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote:
> I only managed to get 200 MB/s write when I did RAID 0 across all
> drives using the 2540's RAID controller and with ZFS on top.
Ridiculously bad.
You should max out both FC-AL links and get 800 MB/s.
> While I agree
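
For reference, the 800 MB/s figure above is just the sum of the two host links; a rough sketch of the arithmetic, where the ~10 bits per byte of 8b/10b line coding is my assumption rather than anything measured on the 2540:

    # Back-of-the-envelope ceiling for two 4 Gb/s FC-AL host links
    echo "per-link payload : $((4000 / 10)) MB/s"   # 4 Gb/s divided by ~10 bits per byte on the wire
    echo "two links        : $((2 * 400)) MB/s"

By that yardstick the 200 MB/s result is roughly a quarter of what the host connectivity alone should allow.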
Hi Bob,
I'm assuming you're measuring sequential write speed; posting the iozone
results would help guide the discussion.
For the configuration you describe, you should definitely be able to sustain
200 MB/s write speed for a single file, single thread due to your use of
4Gbps Fibre Channel inte
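
Something like the following iozone invocation would produce the kind of single-file, single-thread sequential numbers I mean (the path and sizes are placeholders; pick a file size well past RAM so the ARC doesn't flatter the result):

    # Sequential write (-i 0) and read (-i 1), one file, one thread,
    # 128 KB records, flush included in the timing (-e)
    iozone -i 0 -i 1 -r 128k -s 32g -e -f /pool/iozone.tmp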
Hi Eric,
On 10/10/07 12:50 AM, "eric kustarz" <[EMAIL PROTECTED]> wrote:
> Since you were already using filebench, you could use the
> 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
> nthreads set to 20, iosize set to 128k) to achieve the same things.
Yes but once again we see th
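
For anyone following along, the filebench session Eric describes would look roughly like this; the workload and variable names come from his mail, while the target directory is a hypothetical pool mountpoint:

    # Run the single-stream workloads with 20 threads and 128k I/Os
    filebench> load singlestreamwrite
    filebench> set $dir=/tank
    filebench> set $nthreads=20
    filebench> set $iosize=128k
    filebench> run 60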
Has someone e-mailed the author to recommend upgrading to S10U3? I'm
shocked the eval was favorable with S10U2 given S10U3's substantial
performance improvements...
- Luke
> > Rayson Ho wrote:
> >
> >> Interesting...
> >>
> >>
> http://www.rhic.bnl.gov/RCF/LiaisonMeeting/20070118/Other/thumper
Thanks for all the hard work on ZFS performance fixes George! U3 works
great.
- Luke
On 12/28/06 9:18 AM, "George Wilson" <[EMAIL PROTECTED]> wrote:
> Now that Solaris 10 11/06 is available, I wanted to post the complete list of
> ZFS features and bug fixes that were included in that release.
Anton,
On 12/8/06 7:18 AM, "Anton B. Rang" <[EMAIL PROTECTED]> wrote:
> If your database performance is dominated by sequential reads, ZFS may not be
> the best solution from a performance perspective. Because ZFS uses a
> write-anywhere layout, any database table which is being updated will quic
Roch,
On 11/2/06 12:51 AM, "Roch - PAE" <[EMAIL PROTECTED]> wrote:
> This one is not yet fixed :
> 6415647 Sequential writing is jumping
Yep - I mistook this one for another problem with drive firmware on
pre-revenue units. Since Robert has a customer-release X4500, it doesn't
have the firmware
Robert,
On 10/31/06 3:55 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> Right now with S10U3 beta with over 40 disks I can get only about
> 1.6GB/s peak.
That's decent - is that the number reported by "zpool iostat"? In that case
I think 1 GB = 1024^3 bytes, while my GB measurements are roughly "billions
of bytes".
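
To make the unit quibble concrete (assuming zpool iostat is using binary prefixes, which is my reading rather than something I've verified):

    # 1.6 "G"/s in zpool iostat terms (1024^3 bytes) expressed in decimal GB/s
    awk 'BEGIN { printf "1.6 GiB/s = %.2f GB/s\n", 1.6 * (1024^3) / 1e9 }'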
Robert,
On 10/31/06 3:12 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> Almost definitely not true. I did some simple test today with U3 beta
> on thumper and still can observe "jumping" writes with sequential
> 'dd'.
We crossed posts. There are some firmware issues with the Hitachi disks
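
For anyone who wants to reproduce the "jumping" behaviour Robert mentions, a minimal sketch (pool and file names are made up):

    # In one terminal: a steady sequential write into the pool (~10 GB)
    dd if=/dev/zero of=/tank/ddtest bs=128k count=80000

    # In another: watch per-second pool throughput; the symptom is write
    # bandwidth alternating between near-zero and full speed
    zpool iostat tank 1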
Robert,
On 10/31/06 3:10 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> Even then I would try first to test with more real load on ZFS as it
> can turn out that ZFS performs better anyway. Despite problems with
> large sequential writings I find ZFS to perform better in many more
> complex s
An Opteron 280 or 275 with the blowfish cipher does 33 MB/s; the default (DES?)
cipher does 25 MB/s.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Anantha N. Srirama [mailto:[EMAIL PROTECTED]
Sent: Saturday, September 30, 2006 12:34 PM Eastern Standard Time
To: zfs-discuss@op
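
Assuming those numbers refer to an ssh transport (my assumption; the original context is cut off here), one crude way to measure a cipher's throughput on a given box is to push a known amount of data through a loopback ssh session and time it:

    # ~1 GiB of zeros through ssh to localhost with an explicit cipher; the
    # cipher name spelling may differ on older ssh builds
    time dd if=/dev/zero bs=128k count=8192 | ssh -c blowfish-cbc localhost 'cat > /dev/null'

Divide the byte count by the elapsed wall-clock time to get MB/s for that cipher.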
Matthew,
On 8/20/06 6:20 PM, "Matthew Ahrens" <[EMAIL PROTECTED]> wrote:
> This was not the design, we're working on fixing this bug so that many
> threads will be used to do the compression.
Is this also true of decompression?
- Luke
Steffen,
On 8/10/06 8:12 AM, "Steffen Weiberle" <[EMAIL PROTECTED]> wrote:
> Those are compelling numbers! Have you seen them yourself? Or know who has?
O'Reilly Research is a good one: they were using MySQL for data mining work
and each query was taking 10 hours, despite all tuning on modern hardware.
Steffen,
Are they open to Postgres if it performs 1,000 times faster, clusters to 120
nodes, and scales to 1.2 petabytes?
- Luke
On 8/9/06 1:34 PM, "Steffen Weiberle" <[EMAIL PROTECTED]> wrote:
> Does anybody have real-world experince with MySQL 5 datastore on ZFS? Any
> feedback on clustering of
> nodes?
Doug,
On 8/8/06 10:15 AM, "Doug Scott" <[EMAIL PROTECTED]> wrote:
> I don't think there is much chance of achieving anywhere near 350MB/s.
> That is a hell of a lot of IO/s for 6 disks+raid(5/Z)+shared fibre. While you
> can always get very good results from a single disk IO, your percentage
> gai
Jochen,
On 8/8/06 10:47 AM, "Jochen M. Kaiser" <[EMAIL PROTECTED]> wrote:
> I really appreciate such information, could you please give us some additional
> insight regarding your statement, that "[you] tried to drive ZFS to its limit,
> [...]
> found that the results were less consistent or pre
Robert,
> LL> Most of my ZFS experiments have been with RAID10, but there were some
> LL> massive improvements to seq I/O with the fixes I mentioned - I'd expect
> that
> LL> this shows that they aren't in snv44.
>
> So where did you get those fixes?
From the fine people who implemented them!
Robert,
On 8/8/06 9:11 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> 1. UFS, noatime, HW RAID5 6 disks, S10U2
> 70MB/s
> 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
> 87MB/s
> 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
> 130MB/s
> 4. ZFS, atime=off, SW
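
For readers trying to line these numbers up, the two ZFS layouts above would be created roughly like this (device names are placeholders; atime=off as in Robert's list):

    # 2: ZFS on a single hardware-RAID5 LUN
    zpool create tank c1t0d0
    zfs set atime=off tank

    # 3: ZFS software RAID-Z across six plain disks
    zpool create tank2 raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    zfs set atime=off tank2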
Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the prefetch
logic?
These are great results for random I/O; I wonder how the sequential I/O looks.
Of course you'll not get great results for sequential I/O on the 3510 :-)
- Luke
Sent from my GoodLink synchronized handheld (www.good.com)
Nice! Hooray ZFS!
- Luke
Sent from my GoodLink synchronized handheld (www.good.com)
-Original Message-
From: Robert Milkowski [mailto:[EMAIL PROTECTED]
Sent: Monday, August 07, 2006 11:25 AM Eastern Standard Time
To: zfs-discuss@opensolaris.org
Subject:[zfs-discuss
David,
On 8/6/06 12:08 AM, "David Dyer-Bennet" <[EMAIL PROTECTED]> wrote:
> Okay, since it looks like I didn't get caught in the layoffs, I'm
> looking for the hardware platform to run the home disk server on. ZFS
> is the goal.
I built a home server for about $1,200 and it's nearly silent. It
Richard,
On 8/2/06 11:37 AM, "Richard Elling" <[EMAIL PROTECTED]> wrote:
>> Now with thumper - you are SPoF'd on the motherboard and operating
>> system - so you're not really getting the availability aspect from dual
>> controllers .. but given the value - you could easily buy 2 and still
>> co
Torrey,
On 8/1/06 10:30 AM, "Torrey McMahon" <[EMAIL PROTECTED]> wrote:
> http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
>
> Look at the specs page.
I did.
This is 8 trays, each with 14 disks and two active Fibre Channel
attachments.
That means that 14 disks, each with a
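
The point is simple arithmetic; a sketch with assumed numbers, where 2 Gb/s FC ports on the 3510 and ~60 MB/s sustained per drive are my assumptions rather than figures from the spec page:

    # Host connectivity vs. raw drive bandwidth for one 14-disk FC tray
    echo "two 2 Gb/s ports : $((2 * 200)) MB/s"   # ~200 MB/s payload per port
    echo "14 drives        : $((14 * 60)) MB/s"   # the drives can source far more than the ports carry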
Torrey,
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 31, 2006 8:32 PM
>
> You might want to check the specs of the 3510. In some
> configs you
> only get 2 ports. However, in others you can get 8.
Really? 8 active Fibre Channel por
Torrey,
On 7/28/06 10:11 AM, "Torrey McMahon" <[EMAIL PROTECTED]> wrote:
> That said a 3510 with a raid controller is going to blow the door, drive
> brackets, and skin off a JBOD in raw performance.
I'm pretty certain this is not the case.
If you need sequential bandwidth, each 3510 only bring
Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
The prefetch and I/O scheduling of nv41 were responsible for some quirky performance: first-time read performance might be good, then subsequent reads might be very poor.
With a very recent update to the zfs
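
A minimal way to see the first-read versus repeat-read difference, assuming a test file larger than RAM (pool and file names are hypothetical):

    # Export/import drops cached state so the first pass really comes off the disks
    zpool export tank && zpool import tank
    dd if=/tank/bigfile of=/dev/null bs=128k    # first (cold) sequential read
    dd if=/tank/bigfile of=/dev/null bs=128k    # repeat read; on nv41 this could be much slower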