h=2) (actual time=0.025..0.349 rows=160 loops=1)
Total runtime: 180.807 ms
(7 rows)
If you'd like to toy around with the datasets on your heavily optimized
postgresql-installs, let me know. The data is just generated for
testing-purposes and I'd happily send a copy to anyone interested.
Best regards,
Arjen van der Meijden
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
On 6-4-2005 19:04, Steve Atkins wrote:
On Wed, Apr 06, 2005 at 06:52:35PM +0200, Arjen van der Meijden wrote:
Hi list,
I noticed on a forum a query taking a surprisingly large amount of time
in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
better. To my surprise PostgreSQL
On 6-4-2005 19:42, Tom Lane wrote:
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
I noticed on a forum a query taking a surprisingly large amount of time
in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
better. To my surprise PostgreSQL was ten times worse on th
On 24-8-2005 16:43, Alexandre Barros wrote:
Hello,
i have a pg-8.0.3 running on Linux kernel 2.6.8, CPU Sempron 2600+,
1GB RAM on an IDE HD ( which could be called a "heavy desktop" ), measuring
this performance with pgbench ( found in /contrib ) it gave me an
average ( after several runs ) o
Hi list,
I'm writing an application that will aggregate a few million records
into averages/sums/minimums etc., grouped per day.
Clients can add filters and do lots of customization on what they want
to see. And I've to translate that to one or more queries. Basically, I
append ea
On 26-8-2005 15:05, Richard Huxton wrote:
Arjen van der Meijden wrote:
I left all the configuration-stuff to the defaults since changing
values didn't seem to impact much. Apart from the buffers and
effective cache, increasing those made the performance worse.
I've not looked a
On 27-8-2005 0:56, Tom Lane wrote:
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
As said, it chooses sequential scans or "the wrong index plans" over a
perfectly good plan that is just not selected when the parameters are
"too well tuned" or sequential scanni
On 27-8-2005 16:27, Tom Lane wrote:
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
Is a nested loop normally so much (3x) more costly than a hash join? Or
is it just this query that gets estimated wrongly?
There's been some discussion that we are overestimating the cost of
n
On 1-9-2005 19:42, Matthew Sackman wrote:
Obviously, to me, this is a problem, I need these queries to be under a
second to complete. Is this unreasonable? What can I do to make this "go
faster"? I've considered normalising the table but I can't work out
whether the slowness is in dereferencing t
assume it's "in the order of days" for most RAID controllers.
Best regards,
Arjen van der Meijden
On 23-9-2005 13:05, Michael Stone wrote:
On Fri, Sep 23, 2005 at 12:21:15PM +0200, Joost Kraaijeveld wrote:
Ok, that's great, but you didn't respond to the suggestion of using COPY
INTO instead of INSERT.
But I have no clue where to begin with determining the bottleneck (it
even may be a norma
On 23-9-2005 15:35, Joost Kraaijeveld wrote:
On Fri, 2005-09-23 at 13:19 +0200, Arjen van der Meijden wrote:
Drop all of them and recreate them once the table is filled. Of course
that only works if you know your data will be ok (which is normal for
imports of already conforming data like
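The two suggestions above (COPY instead of row-by-row INSERT, and recreating the indexes only after the table is filled) can be sketched as follows; the table, column and index names are invented for illustration:

```sql
-- Hypothetical bulk-load sketch; all object names are invented.
BEGIN;
DROP INDEX orders_customer_idx;

-- COPY loads the whole file in a single statement, avoiding the
-- per-row parse/plan/network overhead of individual INSERTs.
COPY orders (id, customer_id, amount) FROM '/tmp/orders.tsv';

-- Rebuild the index once, after all rows are in place.
CREATE INDEX orders_customer_idx ON orders (customer_id);
COMMIT;
```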
On 15-11-2005 15:18, Steve Wampler wrote:
Magnus Hagander wrote:
(This is after putting an index on the (id,name,value) tuple.) That
outer seq scan is still annoying, but maybe this will be fast enough.
I've passed this on, along with the (strong) recommendation that they
upgrade PG.
Have yo
'-fields of pg_statistic, will it?
I'll run another analyze on the database to see if that makes any
difference, but after that I'm not sure what to check first to figure
out where things go wrong?
Best regards,
Arjen van der Meijden
Tweakers.net
Tom Lane wrote:
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
I'll run another analyze on the database to see if that makes any
difference, but after that I'm not sure what to check first to figure
out where things go wrong?
Look for changes in plans?
Yeah, there are
On 7-12-2006 7:01 Jim C. Nasby wrote:
Can you post them on the web somewhere so everyone can look at them?
No, it's not (only) the size that matters, it's the confidentiality I'm
not allowed to just break by myself. Well, at least not on a scale like
that. I've been mailing off-list with Tom and
These benchmarks are all done using 64 bit linux:
http://tweakers.net/reviews/646
Best regards,
Arjen
On 7-12-2006 11:18 Mindaugas wrote:
Hello,
We're planning a new server or two for PostgreSQL and I'm wondering whether
Intel Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now?
On 7-12-2006 12:05 Mindaugas wrote:
Now about 2 core vs 4 core Woodcrest. For HP DL360 I see similarly
priced dual core [EMAIL PROTECTED] and four core [EMAIL PROTECTED] According to
article's scaling data PostgreSQL performance should be similar (1.86GHz
* 2 * 80% = ~3GHz). And quad core has
On 16-12-2006 4:24 Jeff Frost wrote:
We can add more RAM and drives for testing purposes. Can someone
suggest what benchmarks with what settings would be desirable to see how
this system performs? I don't believe I've seen any postgres benchmarks
done on a quad xeon yet.
We've done our "sta
On 18-1-2007 0:37 Adam Rich wrote:
4) Complex queries that might take advantage of the MySQL "Query Cache"
since the base data never changes
Have you ever compared MySQL's performance with complex queries to
PostgreSQL's? I once had a query which would operate on a recordlist and
see whether
On 18-1-2007 17:20 Scott Marlowe wrote:
Besides that, mysql rewrites the entire table for most table-altering
statements you do (including indexes).
Note that this applies to the myisam table type. innodb works quite
differently. It is more like pgsql in behaviour, and is an mvcc storage
A
On 18-1-2007 18:28 Jeremy Haile wrote:
I once had a query which would operate on a recordlist and
see whether there were any gaps larger than 1 between consecutive
primary keys.
Would you mind sharing the query you described? I am attempting to do
something similar now.
Well it was over a
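The original query is cut off above, but as a hedged sketch (not the query actually used) such gaps can be found with the lag() window function, available from PostgreSQL 8.4 on; the table name is invented:

```sql
-- Find gaps larger than 1 between consecutive primary keys.
SELECT prev_id, id
FROM (
    SELECT id, lag(id) OVER (ORDER BY id) AS prev_id
    FROM some_table
) s
WHERE id - prev_id > 1;
```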
On 18-1-2007 23:11 Tom Lane wrote:
Increase work_mem? It's not taking the hash because it thinks it won't
fit in memory ...
When I increase it to 128MB in the session (arbitrarily selected
relatively large value) it indeed has the other plan.
Best regards,
Arjen
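The session-level change described above amounts to the following; the query itself is elided, and the 128MB value is the arbitrarily chosen one from the mail:

```sql
-- Raise work_mem for this session only, so the planner will consider
-- memory-hungry plans such as a hash join; then restore the default.
SET work_mem = '128MB';
EXPLAIN ANALYZE SELECT ...;   -- the query in question
RESET work_mem;
```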
--
exhaust and power
supply. It was one of the reasons we decided to use separate enclosures,
separating the processors/memory from the big disk array.
Best regards and good luck,
Arjen van der Meijden
Alvaro Herrera wrote:
Interesting -- the MySQL/Linux graph is very similar to the graphs from
the .nl magazine posted last year. I think this suggests that the
"MySQL deficiency" was rather a performance bug in Linux, not in MySQL
itself ...
The latest benchmark we did was both with Solaris an
On 28-2-2007 0:42 Geoff Tolley wrote:
[2] How do people on this list monitor their hardware raid? Thus far we
have used Dell and the only way to easily monitor disk status is to use
their openmanage application. Do other controllers offer easier means
of monitoring individual disks in a raid co
And here is that latest benchmark we did, using an 8-way dual-core Opteron
Sun Fire x4600. Unfortunately PostgreSQL seems to have some difficulties
scaling over 8 cores, but not as bad as MySQL.
http://tweakers.net/reviews/674
Best regards,
Arjen
Arjen van der Meijden wrote:
Alvaro Herrera
Stefan Kaltenbrunner wrote:
ouch - do I read that right that even after Tom's fixes for the
"regressions" in 8.2.0 we are still 30% slower than the -HEAD checkout
from the middle of the 8.2 development cycle ?
Yes, and although I tested about 17 different cvs-checkouts, Tom and I
weren't real
On 5-3-2007 21:38 Tom Lane wrote:
Keep in mind that Arjen's test exercises some rather narrow scenarios;
IIRC its performance is mostly determined by some complicated
bitmap-indexscan cases. So that "30% slower" bit certainly doesn't
represent an across-the-board figure. As best I can tell, the
On 4-4-2007 0:13 [EMAIL PROTECTED] wrote:
We need to upgrade a postgres server. I'm not tied to these specific
alternatives, but I'm curious to get feedback on their general qualities.
SCSI
dual xeon 5120, 8GB ECC
8*73GB SCSI 15k drives (PERC 5/i)
(dell poweredge 2900)
This is a SAS set
On 4-4-2007 21:17 [EMAIL PROTECTED] wrote:
fwiw, I've had horrible experiences with areca drivers on linux. I've
found them to be unreliable when used with dual AMD64 processors 4+ GB
of ram. I've tried kernels 2.6.16 up to 2.6.19... intermittent yet
inevitable ext3 corruptions. 3ware cards, on th
If the 3U case has a SAS-expander in its backplane (which it probably
has?) you should be able to connect all drives to the Adaptec
controller, depending on the casing's exact architecture etc. That's
another two advantages of SAS, you don't need a controller port for each
harddisk (we have a D
On 5-4-2007 17:58 [EMAIL PROTECTED] wrote:
On Apr 5, 2007, at 4:09 AM, Ron wrote:
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I will
change the controller to a LSI MegaRAID SAS 8408E -- any feedback on
this one?
We ha
Can't you use something like this? Or is the distinct on the t.cd_id
still causing the major slowdown here?
SELECT ... FROM cd
JOIN tracks ...
WHERE cd.id IN (SELECT DISTINCT t.cd_id FROM tracks t
WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 10)
If that is your main cul
On 7-4-2007 18:24 Tilo Buschmann wrote:
Unfortunately, the query above will definitely not work correctly, if
someone searches for "a" or "the".
Those are two words you may want to consider not searching on at all.
As Tom said, it's not very likely to be fixed in PostgreSQL. But you can
always
On 21-4-2007 1:42 Mark Kirkwood wrote:
I don't think that will work for the vector norm i.e:
|x - y| = sqrt(sum over j ((x[j] - y[j])^2))
I don't know if this is useful here, but I was able to rewrite that
algorithm for a set of very sparse vectors (i.e. they had very little
overlapping fac
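One standard way to exploit such sparsity (an assumption on my part; the rewrite described above may have worked differently) is to expand the squared distance so that only overlapping non-zero components contribute to the cross term:

```latex
|x - y|^2 = \sum_j (x_j - y_j)^2
          = |x|^2 + |y|^2 - 2 \sum_{j:\, x_j \neq 0,\; y_j \neq 0} x_j y_j
```

With the norms |x|^2 precomputed per vector, each distance costs only a walk over the entries the two sparse vectors have in common.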
I assume red is PostgreSQL and green is MySQL. That reflects my own
benchmarks with those two.
But I don't fully understand what the graph displays. Does it reflect
the ability of the underlying database to support a certain amount of
users per second given a certain database size? Or is the g
ecture for cache coherency.
Best regards,
Arjen van der Meijden
n try the newest 5.0-version (5.0.41?)
which eliminates several scaling issues in InnoDB, but afaik not all of
them. Besides that, it just can be pretty painful to get a certain query
fast, although we've not very often seen it failing completely in the
last few years
There are two solutions:
You can insert all data from tableB in tableA using a simple insert
select-statement like so:
INSERT INTO tabelA SELECT EmpId, EmpName FROM tabelB;
Or you can visually combine them without actually putting the records in
a single table. That can be with a normal select
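A sketch of that second option, reusing the table and column names from the insert-select example above (the view name is invented):

```sql
-- Combine both tables in a view instead of physically merging them.
CREATE VIEW all_employees AS
SELECT EmpId, EmpName FROM tabelA
UNION ALL
SELECT EmpId, EmpName FROM tabelB;
```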
Have you also tried the COPY-statement? Afaik select into is similar to
what happens in there.
Best regards,
Arjen
On 17-7-2007 21:38 Thomas Finneid wrote:
Hi
I was doing some testing on "insert" compared to "select into". I
inserted 100 000 rows (with 8 column values) into a table, which t
Perhaps you should've read the configuration-manual-page more carefully. ;)
Besides, WAL-archiving is turned off by default, so if you see them
being archived you actually enabled it earlier
The "archive_command" is empty by default: "If this is an empty string
(the default), WAL archiving is
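The relevant postgresql.conf default for the 8.x series, where a non-empty archive_command is what enables archiving:

```
# postgresql.conf (8.x default)
archive_command = ''    # empty string: WAL archiving disabled
```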
On 9-8-2007 23:50 Merlin Moncure wrote:
Where the extra controller especially pays off is if you have to
expand to a second tray. It's easy to add trays but installing
controllers on a production server is scary.
For connectivity's sake that's not a necessity. You can either connect
(two?) extr
On 6-9-2007 14:35 Harsh Azad wrote:
2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)
I don't understand this sentence. You seem to imply you might be able to
fit more processors in your system?
Currently the only Quad Cores you can buy are dual-processor
processors, unless you already
On 6-9-2007 20:42 Scott Marlowe wrote:
On 9/6/07, Harsh Azad <[EMAIL PROTECTED]> wrote:
Hi,
How about the Dell Perc 5/i card, 512MB battery backed cache or IBM
ServeRAID-8k Adapter?
All Dell Percs have so far been based on either adaptec or LSI
controllers, and have ranged from really bad to
On 6-9-2007 20:29 Mark Lewis wrote:
Maybe I'm jaded by past experiences, but the only real use case I can
see to justify a SAN for a database would be something like Oracle RAC,
but I'm not aware of any PG equivalent to that.
PG Cluster II seems to be able to do that, but I don't know whether
On 5-10-2007 16:34 Cláudia Macedo Amorim wrote:
[13236.470] statement_type=0, statement='select
a_teste_nestle."CODCLI",
a_teste_nestle."CODFAB",
a_teste_nestle."CODFAMILIANESTLE",
a_teste_nestle."CODFILIAL",
a_teste_nestle."CODGRUPONESTLE",
a_teste_nestle."CODSUBGRUPONESTLE",
a_t
On 18-3-2010 16:50 Scott Marlowe wrote:
It's different because it only takes pgsql 5 milliseconds to run the
query, and 40 seconds to transfer the data across to your application,
which THEN promptly throws it away. If you run it as
MySQL's client lib doesn't transfer over the whole thing. Thi
What about FreeBSD with ZFS? I have no idea which features they support
and which not, but it at least is a bit more free than Solaris and still
offers that very nice file system.
Best regards,
Arjen
On 2-4-2010 21:15 Christiaan Willemsen wrote:
Hi there,
About a year ago we setup a machine
On 12-8-2010 2:53 gnuo...@rcn.com wrote:
- The value of SSD in the database world is not as A Faster HDD(tm).
Never was, despite the naïve who assert otherwise. The value of SSD
is to enable BCNF datastores. Period. If you're not going to do
that, don't bother. Silicon storage will never rea
On 13-8-2010 1:40 Scott Carey wrote:
Agreed. There is a HUGE gap between "ooh ssd's are fast, look!" and
engineering a solution that uses them properly with all their
strengths and faults. And as 'gnuoytr' points out, there is a big
difference between an Intel SSD and say, this thing:
http://ww
Isn't it more fair to just flush the cache before doing each of the
queries? In real-life, you'll also have disk caching... Flushing the
buffer pool is easy, just restart PostgreSQL (or perhaps there is an
admin command for it too?). Flushing the OS-disk cache is obviously
OS-dependent, for linu
On 16-11-2010 11:50, Louis-David Mitterrand wrote:
I have to collect lots of prices from web sites and keep track of their
changes. What is the best option?
1) one 'price' row per price change:
create table price (
id_price primary key,
id_product integer
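A hedged sketch of option 1; only id_price and id_product appear in the quoted fragment, the remaining columns and the query are invented for illustration:

```sql
CREATE TABLE price (
    id_price   serial PRIMARY KEY,
    id_product integer NOT NULL,   -- references a (hypothetical) product table
    price      numeric NOT NULL,
    valid_from timestamptz NOT NULL DEFAULT now()
);

-- Current price per product: the most recent change wins.
SELECT DISTINCT ON (id_product) id_product, price
FROM price
ORDER BY id_product, valid_from DESC;
```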
On 10-12-2010 14:58 Andy wrote:
We use ZFS and use SSDs for both the log device and L2ARC. All
disks and SSDs are behind a 3ware with BBU in single disk mode.
Out of curiosity why do you put your log on SSD? Log is all
sequential IOs, an area in which SSD is not any faster than HDD. So
I'd thi
On 10-12-2010 18:57 Arjen van der Meijden wrote:
Have a look here: http://www.anandtech.com/show/2829/21
The sequential writes-graphs consistently put several SSD's at twice the
performance of the VelociRaptor 300GB 10k rpm disk and that's a test
from over a year old, current
On 2-3-2011 16:29 Robert Haas wrote:
On Mon, Feb 28, 2011 at 2:09 PM, Josh Berkus wrote:
Does anyone have the hardware to test FlashCache with PostgreSQL?
http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx
I'd be interested to hear how it performs ...
It'd be a lot more int
On 18-3-2011 4:02 Scott Marlowe wrote:
On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles
wrote:
Another point. My experience with 1U chassis and cooling is that they
don't move enough air across their cards to make sure they stay cool.
You'd be better off ordering a 2U chassis with 8 3.5" drive
On 18-3-2011 10:11, Scott Marlowe wrote:
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
wrote:
On 18-3-2011 4:02 Scott Marlowe wrote:
We have several 1U boxes (mostly Dell and Sun) running and had several in
the past. And we've never had any heating problems with them. That inc
Hi List,
In the past few weeks we have been developing a read-heavy
mysql-benchmark to have an alternative take on cpu/platform-performance.
Not really to have a look at how fast mysql can be.
This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is modelled
after our website's production
Qingqing Zhou wrote:
"Arjen van der Meijden" <[EMAIL PROTECTED]> wrote
Some sort of web query behavior is quite optimized in MySQL. For example,
the query below is runing very fast due to the query result cache
implementation in MySQL.
Loop N times
SELECT * FROM A WHERE
ost all were inserts in log-tables which aren't
actively read in this benchmark.
But I'll give it a try.
Best regards,
Arjen
Arjen van der Meijden wrote:
Hi List,
In the past few weeks we have been developing a read-heavy
mysql-benchmark to have an alternative take on
cpu/platform-
related. So iostat output with -c option to include CPU times
helps to put it in the right perspective.
Also do check the tunables mentioned and make sure they are set.
Regards,
Jignesh
Arjen van der Meijden wrote:
Hi Jignesh,
Jignesh K. Shah wrote:
Hi Arjen,
Looking at your outputs...
le prstat -amLc > prstat.txt)
and find the pids with high user cpu time and then use the usrcall.d on
few of those pids.
Also how many database connections do you have and what's the type of
query run by each connection?
-Jignesh
Arjen van der Meijden wrote:
Hi Jignesh,
The sett
On 16-6-2006 17:18, Robert Lor wrote:
I think this system is well suited for PG scalability testing, among
others. We did an informal test using an internal OLTP benchmark and
noticed that PG can scale to around 8 CPUs. Would be really cool if all
32 virtual CPUs can be utilized!!!
I can al
On 17-6-2006 1:24, Josh Berkus wrote:
Arjen,
I can already confirm very good scalability (with our workload) on
postgresql on that machine. We've been testing a 32thread/16G-version
and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores
(with all four threads enabled).
Keen.
On 22-6-2006 15:03, David Roussel wrote:
Surely the 'perfect' line ought to be linear? If the performance was
perfectly linear, then the 'pages generated' ought to be G times the
number of (virtual) processors, where G is the gradient of the graph. In
such a case the graph will go through the or
On 29-7-2006 17:43, Joshua D. Drake wrote:
I would love to get my hands on that postgresql version and see how much
farther it could be optimized.
You probably mean the entire installation? As said in my reply to
Jochem, I've spent a few days testing all queries to improve their
performance
because Tweakers.net runs on
MySQL, but Arjen van der Meijden has ported it to PostgreSQL and has
done basic optimizations like adding indexes.
There were a few minor datatype changes (like enum('Y', 'N') to
boolean), but on the other hand also 'int unsigned' to
On 29-7-2006 19:01, Joshua D. Drake wrote:
Well I would be curious about the postgresql.conf and how much ram
etc... it had.
It was the 8core version with 16GB memory... but actually that's just
overkill, the active portions of the database easily fits in 8GB and a
test on another machine wit
hat, we did no extra tuning of the OS, nor did Hans for the
MySQL-optimizations (afaik, but then again, he knows best).
Best regards,
Arjen van der Meijden
Jignesh Shah wrote:
Hi Arjen,
I am curious about your Sun Studio compiler options also.
Can you send that too ?
Any other tweakings tha
Though it will be in Dutch, you can still read the
graphs).
Best regards,
Arjen van der Meijden
Tweakers.net
On 1-8-2006 19:26, Jim C. Nasby wrote:
On Sat, Jul 29, 2006 at 08:43:49AM -0700, Joshua D. Drake wrote:
I'd love to get an english translation that we could use for PR.
Actually, we have an english version of the Socket F follow-up.
http://tweakers.net/reviews/638 which basically displays the
Hi Markus,
As said, our environment really was a read-mostly one. So we didn't do
much inserts/updates and thus spent no time tuning those values and left
them as default settings.
Best regards,
Arjen
Markus Schaber wrote:
Hi, Arjen,
Arjen van der Meijden wrote:
It was the
uld still be able to have two top-of-the-line x86 cpu's (amd
opteron 285 or intel Woodcrest 5160) and 16GB of memory (even FB Dimm,
which is pretty expensive).
Best regards,
Arjen van der Meijden
On 8-8-2006 22:43, Kenji Morishige wrote:
I've asked for some help here a few mo
ion. Regarding the JBOD enclosures, are these generally just 2U or 4U
units with SCSI interface connectors? I didn't see these types of boxes
available on Dell's website, I'll look again.
-Kenji
On Wed, Aug 09, 2006 at 07:35:22AM +0200, Arjen van der Meijden wrote:
With such a budget you
On 16-8-2006 18:48, Peter Hardman wrote:
Using identically structured tables and the same primary key, if I run this on
Paradox/BDE it takes about 120ms, on MySQL (5.0.24, local server) about 3ms,
and on PostgreSQL (8.1.3, local server) about 1290ms. All on the same
Windows XP Pro machine wit
are considerably more than the
Dells. Is it worth waiting a few more weeks/months for Dell to release
something newer?
-Kenji
On Wed, Aug 09, 2006 at 07:35:22AM +0200, Arjen van der Meijden wrote:
With such a budget you should easily be able to get something like:
- A 1U high-performance serve
the moment. I currently am running a load
average of about .5 on a dual Xeon 3.06GHz P4 setup. How much CPU
performance improvement do you think the new woodcrest cpus are over these?
-Kenji
On Fri, Aug 18, 2006 at 09:41:55PM +0200, Arjen van der Meijden wrote:
Hi Kenji,
I'm not sure
l-controller so it's faster in sequential IO.
These benchmarks were not done using postgresql, so you shouldn't read
them as absolute for all your situations ;-) But you can get a good
impression I think.
Best regards,
Arjen van der Meijden
Tweakers.net
Dave Cramer wrote:
Hi, Arjen,
The Woodcrest is quite a bit faster than the Opterons. Actually...
With Hyperthreading *enabled* the older Dempsey-processor is also
faster than the Opterons with PostgreSQL. But then again, it is the
top-model Dempsey and not a top-model Opteron so that isn't a
On 8-9-2006 15:01 Dave Cramer wrote:
But then again, systems with the Woodcrest 5150 (the subtop one) and
Opteron 280 (also the subtop one) are about equal in price, so it's not
a bad comparison in a bang-for-bucks point of view. The Dempsey was
added to show how both the Opteron and the newer
On 8-9-2006 18:18 Stefan Kaltenbrunner wrote:
interesting - so this is a mostly CPU-bound benchmark ?
Out of curiosity, have you done any profiling on the databases under
test to see where they are spending their time ?
Yeah, it is.
We didn't do any profiling.
We had a Sun-engineer visit us t
On 15-9-2006 17:53 Tom Lane wrote:
If that WHERE logic is actually what you need, then getting this query
to run quickly seems pretty hopeless. The database must form the full
outer join result: it cannot discard any listing0_ rows, even if they
have lastupdate outside the given range, because t
u need maximum performance, is when you have
to service a lot of concurrent visitors.
But if you benchmark only with a single thread or do benchmarks that are
nowhere near a real-life environment, it may show very different
results of course.
Best regards,
Arjen van der Meijden
--
Try the translation ;)
http://tweakers.net/reviews/646/13
On 22-9-2006 10:32 Hannes Dorbath wrote:
A colleague pointed me to this site tomorrow:
http://tweakers.net/reviews/642/13
I can't read the language, so can't get a grip on what exactly the
"benchmark" was about.
Their diagrams show
Don't rule out Intel, just because with previous processors they were the
slower player ;)
Best regards,
Arjen van der Meijden
On 12-10-2006 21:07 Jeff Davis wrote:
On Thu, 2006-10-12 at 19:15 +0200, Csaba Nagy wrote:
To formalize the proposal a little, you could have syntax like:
CREATE HINT [FOR USER username] MATCHES regex APPLY HINT some_hint;
Where "some_hint" would be a hinting language perhaps like Jim's, except
On 20-10-2006 16:58 Dave Cramer wrote:
Ben,
My option in disks is either 5 x 15K rpm disks or 8 x 10K rpm disks
(all SAS), or if I pick a different server I can have 6 x 15K rpm or 8
x 10K rpm (again SAS). In each case controlled by a PERC 5/i (which I
think is an LSI Mega Raid SAS 8408E card
On 20-10-2006 22:33 Ben Suffolk wrote:
How about the Fujitsu Siemens Sun Clones? I have not really looked at
them but have heard the odd good thing about them.
Fujitsu doesn't build Sun clones! That really is insulting for them ;-)
They do offer Sparc-hardware, but that's a bit higher up the m
Alvaro Herrera wrote:
Performance analysis of strange queries is useful, but the input queries
have to be meaningful as well. Otherwise you end up optimizing bizarre
and useless cases.
I had a similar one a few weeks ago. I did some batch-processing over a
bunch of documents and discovered p
On 17-11-2006 18:45 Jeff Frost wrote:
I see many of you folks singing the praises of the Areca and 3ware SATA
controllers, but I've been trying to price some systems and am having
trouble finding a vendor who ships these controllers with their
systems. Are you rolling your own white boxes or a
Jeff,
You can find some (Dutch) results here on our website:
http://tweakers.net/reviews/647/5
You'll find the AMCC/3ware 9550SX-12 with up to 12 disks, Areca 1280 and
1160 with up to 14 disks and a Promise and LSI sata-raid controller with
each up to 8 disks. Btw, that Dell Perc5 (sas) is afa
ith 8 disks each and they have
been excellent. Here (on your site) are results that bear this out:
http://tweakers.net/reviews/639/9
- Luke
On 11/22/06 11:07 AM, "Arjen van der Meijden" <[EMAIL PROTECTED]>
wrote:
Jeff,
You can find some (Dutch) results here on our websit
don't try to
have 10 cases like I did)?
The database is a lightly optimised gentoo-compile of 7.4.2, the
mysql-version was 4.0.18 in case anyone wanted to know that.
Best regards,
Arjen van der Meijden
PS, don't try to "help improve the query" I discarded the idea as too
Tom Lane wrote:
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
Anyway, I was looking into the usefulness of an INSERT INTO newtable
SELECT field, field, CASE pkey WHEN x1 THEN y1 WHEN x2 THEN y2 etc END
FROM oldtable
The resulting select was about 1.7MB of query-text, mostly compo
Greg Stark wrote:
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
Was this the select with the CASE, or the update?
It was just the select to see how long it'd take. I already anticipated
it to be possibly a "slow query", so I only did the select first.
Best regards,
ections, 40 apache-processes and 10 db-connections. In
case of the non-pooled setup, you'd still have 40 db-connections.
In a simple test I did, I did feel pgpool had quite some overhead
though. So it should be well tested, to find out where the
turnover-point is where it will be a gain in
On 31-10-2007 17:45 Ketema wrote:
I understand query tuning and table design play a large role in
performance, but taking that factor away
and focusing on just hardware, what is the best hardware to get for Pg
to work at the highest level
(meaning speed at returning results)?
It really depends
On 28-1-2008 20:25 Christian Nicolaisen wrote:
So, my question is: should I go for the 2.5" disk setup or 3.5" disk
setup, and does the raid setup in either case look correct?
Afaik they are about equal in speed, with the smaller ones being a bit
faster in random access and the larger ones a b
There are several suppliers who offer Seagate's 2.5" 15k rpm disks, I
know HP and Dell are amongst those. So I was actually referring to those,
rather than to the 10k ones.
Best regards,
Arjen
[EMAIL PROTECTED] wrote:
On Mon, 28 Jan 2008, Arjen van der Meijden wrote:
On