> What I find interesting is that so far Guido's C2D Mac laptop has
> gotten the highest values by far in this set of experiments, and no
> one else is even close.
> The slowest results, Michael's, are on the system with what appears
> to be the slowest CPU of the bunch; and the ranking of the rest
> I just made another test with a second Gentoo machine:
>
Pentium 4 3.0 GHz Prescott
> GCC 4.1.1
> Glibc 2.4
> PostgreSQL 8.1.5
> Kernel 2.6.17
>
> Same postgresql.conf as yesterday's.
>
> First test
> ==
> GLIBC: -O2 -march=i686
> PostgreSQL: -O2 -march=i686
> Results: 974.63873
I was working on a project that was considering using a Dell/EMC (dell's
rebranded emc hardware) and here's some thoughts on your questions based
on that.
> 1. Is iSCSI a decent way to do a SAN? How much performance do I lose
> vs. connecting the hosts directly with a Fibre Channel controller?
>Does anyone have any performance experience with the Dell Perc 5i
>controllers in RAID 10/RAID 5?
Check the archives for details- I posted some numbers a while ago. I
was getting around 250 MB/s sequential write (dd) on RAID5 x 6, and
about 220 MB/s on RAID10 x 4 (keep in mind that's dd- RAID10 sh
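For reference, sequential-write numbers like those above typically come from a plain dd run. A sketch of such a test (the file path and size are placeholders; use a file well beyond RAM so the page cache doesn't inflate the figure):

```shell
# Hypothetical sequential-write test: write 1 GB in 8 kB blocks
# (PostgreSQL's page size) and fsync before dd exits, so the reported
# throughput includes the flush to disk. conv=fsync is GNU dd.
dd if=/dev/zero of=/tmp/ddtest bs=8k count=131072 conv=fsync
rm -f /tmp/ddtest
```

dd prints the elapsed time and the MB/s figure on stderr when it finishes.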
>
> I could only find the 6 disk RAID5 numbers in the archives that were
> run with bonnie++ 1.03. Have you run the RAID10 tests since? Did you
> settle on 6 disk RAID5 or 2xRAID1 + 4xRAID10?
>
Unfortunately most of the tests were run with bonnie 1.9 since they were
before I realized that p
Dells (at least the 1950 and 2950) come with the Perc5, which is
basically just the LSI MegaRAID. The units I have come with a 256MB BBU,
I'm not sure if it's upgradeable, but it looks like a standard DIMM in
there...
I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back
on a 29
> a) order Opterons. That doesn't solve the overload problem as such,
> but these pesky cs storms seem to have gone away this way.
I haven't run into context switch storms or similar issues with the new
Intel Woodcrests (yet.. they're still pretty new and not yet under real
production load), has
>
> Yes, this issue comes up often - I wonder if the Woodcrest Xeons
> resolved this? Have these problems been experienced on both Linux and
> Windows (we are running Windows 2003 x64)?
>
> Carlo
>
IIRC Woodcrest doesn't have HT, just dual core with shared cache.
- Bucky
-
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of John Philips
> Sent: Monday, October 23, 2006 8:17 AM
> To: Ben Suffolk
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Optimizing disk throughput on quad Opteron
>
>
-logic raid-controller) and found it to be a very nice machine.
>
> But again, they also offer (the same?) Broadcom networking on board.
> Just like Dell and HP. And it is an LSI Logic SAS controller on board,
> so if FBSD has trouble with either of those, it's hard to find
> anything suitable at al
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Joshua D. Drake
> Sent: Friday, October 20, 2006 2:52 PM
> To: Ben Suffolk
> Cc: Dave Cramer; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] New hardware thoughts
>
> Ben S
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Merlin Moncure
> Sent: Tuesday, October 17, 2006 4:29 PM
> To: Rohit_Behl
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Jdbc/postgres performance
>
> On 10/17/06, R
> I completely agree that it's much better *in the long run* to improve
> the planner and the statistics system so that we don't need hints. But
> there's been no plan put forward for how to do that, which means we
> also have no idea when some of these problems will be resolved. If
> someone comes
> > Well, one nice thing about the per-query method is you can post
> > before and after EXPLAIN ANALYZE along with the hints.
>
> One bad thing is that application designers will tend to use the hint,
> fix the immediate issue, and never report a problem at all. And query
> hints would not be co
>What is it about hinting that makes it so easily breakable with new
>versions? I don't have any experience with Oracle, so I'm not sure how
>they screwed logic like this up.
I don't have a ton of experience with Oracle either, mostly DB2, MSSQL
and PG.
So, I thought I'd do some googling,
> Brian Herlihy <[EMAIL PROTECTED]> writes:
> > What would it take for hints to be added to postgres?
>
> A *whole lot* more thought and effort than has been expended on the
> subject to date.
>
> Personally I have no use for the idea of "force the planner to do
> exactly X given a query of exact
>
> So, I'd like my cake and eat it too... :-)
>
> I'd like to have my indexes built as rows are inserted into the
> partition to help with the drill down...
>
So you want to drill down so fine-grained that summary tables don't do
much good? Keep in mind, even if you roll up only two records, th
> > The bottom line here is likely to be "you need more RAM" :-(
>
> Yup. Just trying to get a handle on what I can do if I need more than
> 16 GB of RAM... That's as much as I can put on the installed base of
> servers - 100s of them.
>
> >
> > I wonder whether there is a way to use table pa
> On Fri, 2006-09-22 at 13:14 -0400, Charles Sprickman wrote:
> > Hi all,
> >
> > I still have a dual dual-core Opteron box with a 3Ware 9550SX-12
> > sitting here and I need to start getting it ready for production. I
> > also have to send back one processor since we were mistakenly sent
> > two
Markus,
First, thanks- your email was very enlightening. But, it does bring up
a few additional questions, so thanks for your patience also- I've
listed them below.
> It applies per active backend. When connecting, the Postmaster forks a
> new backend process. Each backend process has its own sca
> > Do you think that adding some posix_fadvise() calls to the backend
> > to pre-fetch some blocks into the OS cache asynchronously could
> > improve that situation?
>
> Nope - this requires true multi-threading of the I/O, there need to be
> multiple seek operations running simultaneously. The
Mike,
> On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote:
> >If you have a table with 100 million records, each of which is 200
> >bytes long, that gives you roughly 20 gig of data (assuming it was
> >all written neatly and hasn't been updated much).
>
I'll keep that in mind (minimi
> good normalization skills are really important for large databases,
> along with materialization strategies for 'denormalized sets'.
Good points- thanks. I'm especially curious what others have done for
the materialization. The matview project on gborg appears dead, and I've
only found a smatter
>Yes. What's pretty large? We've had to redefine large recently, now
>we're talking about systems with between 100TB and 1,000TB.
>
>- Luke
Well, I said large, not gargantuan :) - Largest would probably be around
a few TB, but the problem I'm having to deal with at the moment is large
numbers (p
>When we first started working with Solaris ZFS, we were getting about
>400-600 MB/s, and after working with the Solaris Engineering team we
>now get rates approaching 2GB/s. The updates needed to Solaris are
>part of the Solaris 10 U3 available in October (and already in Solaris
>Express, aka So
>Hyper-threading. It's usually not recommended to enable it on
>PostgreSQL servers. On most servers, you can disable it directly in
>the BIOS.
Maybe for specific usage scenarios, but that's generally not been my experience
with relatively recent versions of PG. We ran some tests with pgbench, and
Setting to 0.1 finally gave me the result I was looking for. I know
that the index scan is faster though. The seq scan never finished (I
killed it after 24+ hours) and I'm running the query now with indexes
and it's progressing nicely (will probably take 4 hours).
In regards to "progressing nic
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 24, 2006 3:38 PM
To: Merlin Moncure
Cc: Jeff Davis; Bucky Jordan; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] PowerEdge 2950 questions
On Thu, 2006-08-24 at 13:57, Merlin Moncure wro
Also, as Tom stated, defining your test cases is a good idea before you
start benchmarking. Our application has a load data phase, then a
query/active use phase. So, we benchmark both (data loads, and then
transactions) since they're quite different workloads, and there are
different ways to optimize
Hi Jeff,
My experience with the 2950 seemed to indicate that RAID10x6 disks did
not perform as well as RAID5x6. I believe I posted some numbers to
illustrate this in the post you mentioned.
If I remember correctly, the numbers were pretty close, but I was
expecting RAID10 to significantly beat R
ync)"
I get ~255 MB/s from the above.
Bucky
-Original Message-
From: Marty Jia [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 22, 2006 3:38 PM
To: Bucky Jordan; Joshua D. Drake
Cc: Alex Turner; Mark Lewis; pgsql-performance@postgresql.org; DBAs;
Rich Wilson; Ernest Wurzbach
Subject:
Marty,
Here's pgbench results from a stock FreeBSD 6.1 amd64/PG 8.1.4 install
on a Dell PowerEdge 2950 with 8 GB RAM, 2x 3.0 GHz dual-core Woodcrest
(4MB cache/socket) with 6x 300GB 10k SAS drives:
pgbench -c 10 -t 1 -d bench 2>/dev/null
pghost: pgport: (null) nclients: 10 nxacts: 1 dbName: ben
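As an aside, the run above used -t 1 (one transaction per client), which mostly measures connection setup. A sketch of a more representative session, with illustrative scale factor and transaction counts (flags per the 8.x-era pgbench; "bench" is a placeholder database name):

```shell
# Hypothetical pgbench session (assumes a running server and an
# existing "bench" database). Initialize with scale factor 10
# (~1 million accounts rows), then run 10 concurrent clients doing
# 1000 transactions each.
pgbench -i -s 10 bench
pgbench -c 10 -t 1000 bench
```

Keeping the scale factor (-s) at least as large as the client count (-c) avoids the clients serializing on updates to the small branches table.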
We've been doing some research in this area (the new Woodcrest from
Intel, the Opterons from Dell, and SAS).
In a nutshell, here's what I'm aware of:
Dell does provide a 15 disk external SAS enclosure- the performance
numbers they claim look pretty good (of course, go figure) and as far as
I can
That's about what I was getting for a 2 disk RAID 0 setup on a PE 2950.
Here's bonnie++ numbers for the RAID10x4 and RAID0x2, unfortunately I
only have the 1.93 numbers since this was before I got the advice to run
with the earlier version of bonnie and larger file sizes, so I don't
know how meanin
From: Luke Lonergan [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 16, 2006 2:18 AM
To: Bucky Jordan; Vivek Khera; Pgsql-Performance ((E-mail))
Subject: Re: [PERFORM] Dell PowerEdge 2950 performance
Cool - seems like the posters caught that "auto memory pick" problem
before
you posted, but you got th
Luke,
For some reason it looks like bonnie is picking a 300M file.
> bonnie++ -d bonnie
Version 1.03     ------Sequential Output------ --Sequential Input- --Random-
                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine     Size K/sec %CP K/sec %CP K/sec %C
...
I see you are running bonnie++ version 1.93c. The numbers it reports
are very different from version 1.03a, which is the one everyone runs -
can you post your 1.03a numbers from bonnie++?
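For comparison, a typical 1.03-style invocation looks like the following; the directory, size, and user are placeholders, and the usual advice is to set -s to roughly twice physical RAM so the page cache can't serve the reads:

```shell
# Hypothetical bonnie++ 1.03 run on a 16 GB machine:
# -d test directory on the array under test, -s total file size in MB,
# -u user to run as (bonnie++ refuses to run as root without it).
bonnie++ -d /mnt/raidtest -s 32768 -u nobody
```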
...
Luke,
Thanks for the pointer. Here's the 1.03 numbers, but at the moment I'm
only able to run them on
...
Of more interest would be a test which involved large files with lots
of seeks all around (something like bonnie++ should do that).
...
Here's the bonnie++ numbers for the RAID 5 x 6 disks. I believe this was
with write-through and 64k striping. I plan to run a few others with
different bloc
...
Is the PERC 5/i dual channel? If so, are 1/2 the drives on one channel and the
other half on the other channel? I find this helps RAID10 performance when the
mirrored pairs are on separate channels.
...
With the SAS controller (PERC 5/i), every drive gets its own 3 Gb/s port.
...
Your t
Hello,
I've recently been tasked with scalability/performance testing of a
Dell PowerEdge 2950. This is the one with the new Intel Woodcrest
Xeons. Since I haven't seen any info on this box posted to the list, I
figured people might be interested in the results, and maybe in return
shar