Try random_page_cost=100
- Luke
- Original Message -
From: pgsql-performance-ow...@postgresql.org
To: pgsql-performance@postgresql.org
Sent: Sun Apr 11 14:12:30 2010
Subject: [PERFORM] planer chooses very bad plan
Hi,
I'm having a query where the planner chooses a very bad plan.
exp
XFS
- Luke
From: pgsql-performance-ow...@postgresql.org
To: pgsql-performance@postgresql.org
Sent: Thu Mar 26 05:47:55 2009
Subject: [PERFORM] I have a fusion IO drive available for testing
So far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2 wr
Hmm - I wonder what OS it runs ;-)
- Luke
- Original Message -
From: da...@lang.hm
To: Luke Lonergan
Cc: glynast...@yahoo.co.uk ;
pgsql-performance@postgresql.org
Sent: Fri Jan 23 04:52:27 2009
Subject: Re: [PERFORM] SSD performance
On Fri, 23 Jan 2009, Luke Lonergan wrote:
>
Why not simply plug your server into a UPS and get 10-20x the performance using
the same approach (with OS IO cache)?
In fact, with the server it's more robust, as you don't have to transit several
intervening physical devices to get to the RAM.
If you want a file interface, declare a RAMDISK.
Not to mention the #1 cause of server faults in my experience: OS kernel bug
causes a crash. Battery backup doesn't help you much there.
Fsync of log is necessary IMO.
That said, you could use a replication/backup strategy to get a consistent
snapshot in the past if you don't mind losing some
I believe they write at 200MB/s which is outstanding for sequential BW. Not
sure about the write latency, though the Anandtech benchmark results were
quite detailed and IIRC the write latencies were very good.
- Luke
- Original Message -
From: da...@lang.hm
To: Luke Lonergan
Cc: st
The new MLC based SSDs have better wear leveling tech and don't suffer the
pauses. Intel X25-M 80 and 160 GB SSDs are both pause-free. See Anandtech's
test results for details.
Intel's SLC SSDs should also be good enough but they're smaller.
- Luke
- Original Message -
From: pgsql-pe
Your expected write speed on a 4 drive RAID10 is two drives' worth, probably 160
MB/s, depending on the generation of drives.
The expected write speed for a 6 drive RAID5 is 5 drives' worth, or about 400
MB/s, sans the RAID5 parity overhead.
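The arithmetic above can be sketched in a few lines (a hypothetical illustration; the 80 MB/s per-drive figure is an assumption chosen to match the quoted totals, not a measurement):

```python
# Sketch of usable sequential-write bandwidth for RAID10 vs RAID5,
# assuming each drive sustains per_drive MB/s. Illustrative only.

def raid10_write_mb_s(drives: int, per_drive: float) -> float:
    """RAID10 mirrors every write, so only half the spindles add bandwidth."""
    return (drives // 2) * per_drive

def raid5_write_mb_s(drives: int, per_drive: float) -> float:
    """RAID5 stripes across n-1 data drives (ignoring parity-computation cost)."""
    return (drives - 1) * per_drive

print(raid10_write_mb_s(4, 80))   # 4-drive RAID10 -> 160
print(raid5_write_mb_s(6, 80))    # 6-drive RAID5  -> 400
```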
- Luke
- Original Message -
From: [EMAIL PROTECT
Hi Stephane,
On 7/21/08 1:53 AM, "Stephane Bailliez" <[EMAIL PROTECTED]> wrote:
>> I'd suggest RAID5, or even better, configure all eight disks as a JBOD
>> in the RAID adapter and run ZFS RAIDZ. You would then expect to get
>> about 7 x 80 = 560 MB/s on your single query.
>>
> Do you have a pa
pgbench is unrelated to the workload you are concerned with if ETL/ELT and
decision support / data warehousing queries are your target.
Also - placing the xlog on dedicated disks is mostly irrelevant to data
warehouse / decision support work or ELT. If you need to maximize loading
speed while
The Arecas are a lot faster than the 9550, more noticeable with disk counts
from 12 on up. At 8 disks you may not see much difference.
The 3Ware 9650 is their answer to the Areca and it puts the two a lot closer.
FWIW we got some Arecas at one point and had trouble getting them
configured and w
<[EMAIL PROTECTED]>
To: Luke Lonergan
Cc: Pavan Deolasee <[EMAIL PROTECTED]>; Greg Smith <[EMAIL PROTECTED]>; Alvaro
Herrera <[EMAIL PROTECTED]>; pgsql-performance@postgresql.org
Sent: Thu May 22 12:10:02 2008
Subject: Re: [PERFORM] I/O on select count(*)
On Thu, 2008-05
The problem is that the implied join predicate is not being propagated. This
is definitely a planner deficiency.
- Luke
- Original Message -
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: pgsql-performance@postgresql.org
Sent: Wed May 21 07:37:49 2008
Subject: Re: [PERFORM] Posible pl
Try 'set enable_mergejoin=false' and see if you get a hashjoin.
- Luke
- Original Message -
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: Richard Huxton <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org
Sent: Fri May 16 04:00:41 2008
Subject: Re: [PERFORM] Join runs for > 10 hou
BTW we've removed HINT bit checking in Greenplum DB and improved the
visibility caching, which was enough to provide performance at the same level
as with the HINT bit optimization, but avoids this whole "write the data,
write it to the log also, then write it again just for good measure"
behavior
Yep just do something like this within sqlplus (from
http://www.dbforums.com/showthread.php?t=350614):
set termout off
set hea off
set pagesize 0
spool c:\whatever.csv
select a.a||','||a.b||','||a.c
from a
where a.a='whatever';
spool off
COPY is the fastest approach to get it into PG.
- Luk
Hi Francisco,
Generally, PG sorting is much slower than hash aggregation for performing
the distinct operation. There may be small sizes where this isn't true, but
for large amounts of data (in-memory or not), hash agg (used most often, but
not always, by GROUP BY) is faster.
We've implemented a
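Not the Postgres executor, of course, but the shape of the argument can be sketched in miniature (a toy illustration, data made up): hash-based distinct makes one pass over the rows, while sort-based distinct pays for an O(n log n) ordering it may not need.

```python
# Toy sketch: two ways to compute DISTINCT over a list of rows.

def distinct_hash(rows):
    """One pass, hash-based: remember what we've seen."""
    seen, out = set(), []
    for r in rows:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

def distinct_sort(rows):
    """Sort first (n log n), then collapse adjacent duplicates."""
    out = []
    for r in sorted(rows):
        if not out or out[-1] != r:
            out.append(r)
    return out

rows = [3, 1, 2, 3, 1, 2, 2]
assert sorted(distinct_hash(rows)) == distinct_sort(rows) == [1, 2, 3]
```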
You might try turning "enable_bitmapscan" off, that will avoid the full
index scan and creation of the bitmap.
- Luke
On 3/27/08 8:34 AM, "Jesper Krogh" <[EMAIL PROTECTED]> wrote:
> Hi
>
> I have a table with around 10 million entries The webpage rendered hits
> at most 200 records which are
So your table is about 80 MB in size, or perhaps 120 MB if it fits in
shared_buffers. You can check it using "SELECT
pg_size_pretty(pg_relation_size('mytable'))"
- Luke
On 3/26/08 4:48 PM, "Peter Koczan" <[EMAIL PROTECTED]> wrote:
> FWIW, I did a select count(*) on a table with just over 300
Hello Sathiya,
1st: you should not use a ramdisk for this, it will slow things down as
compared to simply having the table on disk. Scanning it the first time
when on disk will load it into the OS IO cache, after which you will get
memory speed.
2nd: you should expect the "SELECT COUNT(*)" to r
The Dell MD1000 is good. The most trouble you will have will be with the raid
adapter - to get the best support I suggest trying to buy the dell perc 5e
(also an LSI) - that way you'll get drivers that work and are supported.
Latest seq scan performance I've seen on redhat 5 is 400 MB/s on eigh
Improvements are welcome, but to compete in the industry, loading will need to
speed up by a factor of 100.
Note that Bizgres loader already does many of these ideas and it sounds like
pgloader does too.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Dimitri Fontaine
Hi Greg,
On 2/6/08 7:56 AM, "Greg Smith" <[EMAIL PROTECTED]> wrote:
> If I'm loading a TB file, odds are good I can split that into 4 or more
> vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders
> at once, and get way more than 1 disk worth of throughput reading. You
> ha
Hi Greg,
On 2/4/08 12:09 PM, "Greg Smith" <[EMAIL PROTECTED]> wrote:
> Do you have any suggestions on how people should run TPC-H? It looked
> like a bit of work to sort through how to even start this exercise.
To run "TPC-H" requires a license to publish, etc.
However, I think you can use the
Hi Simon,
On 2/4/08 11:07 AM, "Simon Riggs" <[EMAIL PROTECTED]> wrote:
>> "executor-executor" test and we/you should be sure that the PG planner has
>> generated the best possible plan.
>
> If it doesn't then I'd regard that as a performance issue in itself.
Agreed, though that's two problems t
Hi Simon,
Note that MonetDB/X100 does not have a SQL optimizer, they ran raw
hand-coded plans. As a consequence, these comparisons should be taken as an
"executor-executor" test and we/you should be sure that the PG planner has
generated the best possible plan.
That said, we've already done the
Deadline works best for us. The new AS is getting better, but last we
tried there were issues with it.
- Luke
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Adrian Moisey
> Sent: Monday, January 21, 2008 11:01 PM
> To: pgsql-performance@postg
Hi Peter,
If you run into a scaling issue with PG (you will at those scales 1TB+), you
can deploy Greenplum DB which is PG 8.2.5 compatible. A large internet
company (look for press soon) is in production with a 150TB database on a
system capable of doing 400TB and we have others in production at
On 11/7/07 10:21 PM, "Gregory Stark" <[EMAIL PROTECTED]> wrote:
>> part=# explain SELECT * FROM n_traf ORDER BY date_time LIMIT 1;
>> QUERY PLAN
BTW - Mark has volunteered to work a Postgres patch together. Thanks Mark!
- Luke
On 10/29/07 10:46 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
> Luke Lonergan wrote:
>> Sure - it's here:
>> http://momjian.us/mhonarc/patches_hold/msg00381.html
>>
Sure - it's here:
http://momjian.us/mhonarc/patches_hold/msg00381.html
- Luke
On 10/29/07 6:40 AM, "Gregory Stark" <[EMAIL PROTECTED]> wrote:
> "Luke Lonergan" <[EMAIL PROTECTED]> writes:
>
>> And I repeat - 'we fixed that and submitted
-Original Message-
From: Simon Riggs [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 27, 2007 05:34 PM Eastern Standard Time
To: Luke Lonergan
Cc: Heikki Linnakangas; Anton; pgsql-performance@postgresql.org
Subject:Re: [PERFORM] partitioned table and ORDER BY indexed_field
and so is an easy, high-payoff fix with a small amount of code.
I'd suggest we take this approach while also considering a more powerful set of
append merge capabilities.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Luke Lonergan [mailto:[EMAIL PROTECTED]
Sent: Satu
And I repeat - 'we fixed that and submitted a patch' - you can find it in the
unapplied patches queue.
The patch isn't ready for application, but someone can quickly implement it I'd
expect.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Heikki Linnakangas [mailto:[EM
Mindaugas,
The Anandtech results appear to me to support a 2.5 GHz Barcelona
performing better than the available Intel CPUs overall.
If you can wait for the 2.5 GHz AMD parts to come out, they'd be a
better bet IMO especially considering 4 sockets. In fact, have you seen
quad QC Intel benchmark
Greg,
> I think this seems pretty impractical for regular
> (non-bitmap) index probes though. You might be able to do it
> sometimes but not very effectively and you won't know when it
> would be useful.
Maybe so, though I think it's reasonable to get multiple actuators going
even if the seek
Hi Josh,
On 9/10/07 2:26 PM, "Josh Berkus" <[EMAIL PROTECTED]> wrote:
> So, when is this getting contributed? ;-)
Yes, that's the right question to ask :-)
One feeble answer: "when we're not overwhelmed by customer activity"...
- Luke
---(end of broadcast)--
Hi Mark, Greg,
On 9/10/07 3:08 PM, "Mark Mielke" <[EMAIL PROTECTED]> wrote:
> One suggestion: The plan is already in a tree. With some dependency analysis,
> I assume the tree could be executed in parallel (multiple threads or event
> triggered entry into a state machine), and I/O to fetch index
Should be a lot higher; something like 10-15 is closer to accurate.
Increasing the number of disks in a RAID actually makes the number higher,
not lower. Until Postgres gets AIO + the ability to post multiple
concurrent IOs on index probes, random IO does not scale with increasing
disk count,
Below is a patch against 8.2.4 (more or less), Heikki can you take a look at
it?
This enables the use of index scan of a child table by recognizing sort
order of the append node. Kurt Harriman did the work.
- Luke
Index: cdb-pg/src/backend/optimizer/path/indxpath.c
=
We just fixed this - I'll post a patch, but I don't have time to verify
against HEAD.
- Luke
On 8/24/07 3:38 AM, "Heikki Linnakangas" <[EMAIL PROTECTED]> wrote:
> Anton wrote:
=# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;
Andrew,
I'd say that commodity systems are the fastest with postgres - many have seen
big slowdowns with high end servers. 'Several orders of magnitude' is not
possible by just changing the HW, you've got a SW problem to solve first. We
have done 100+ times faster than both Postgres and popul
Yay - looking forward to your results!
- Luke
On 8/16/07 3:14 PM, "Merlin Moncure" <[EMAIL PROTECTED]> wrote:
> On 8/16/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
>>
>> Hi Michael,
>>
>> There is a problem with some Dell "perc 5&qu
Hi Michael,
There is a problem with some Dell "perc 5" RAID cards; specifically, we've
had this problem with the 2950 as of 6 months ago: they do not support
RAID10. They have a setting that sounds like RAID10, but it actually
implements spanning of mirrors. This means that you will not get more
Marc,
You should expect that for the kind of OLAP workload you describe in steps 2
and 3 you will have exactly one CPU working for you in Postgres.
If you want to accelerate the speed of this processing by a factor of 100 or
more on this machine, you should try Greenplum DB which is Postgres 8.2
changed since then.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Dimitri [mailto:[EMAIL PROTECTED]
Sent: Monday, July 30, 2007 05:26 PM Eastern Standard Time
To: Luke Lonergan
Cc: Josh Berkus; pgsql-performance@postgresql.org; Marc Mamin
Subject:Re
1) Yes
All rows are treated the same, there are no in place updates.
2) No
Truncate recreates the object as a new one, releasing the space held by the old
one.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Scott Feldstein [mailto:[EMAIL PROTECTED]
Sent: Thursday,
Josh,
On 7/20/07 4:26 PM, "Josh Berkus" <[EMAIL PROTECTED]> wrote:
> There are some specific tuning parameters you need for ZFS or performance
> is going to suck.
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> (scroll down to "PostgreSQL")
> http://www.sun.com/serv
Dimitri,
> Seems to me that :
> - GreenPlum provides some commercial parallel query engine on top of
>PostgreSQL,
I certainly think so and so do our customers in production with 100s of
terabytes :-)
> - plproxy could be a solution to the given problem.
>https://developer.skype.com/
users should just be faster with 8.2, so I'd
think it's a problem with plans.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Douglas J Hunley [mailto:[EMAIL PROTECTED]
Sent: Monday, June 04, 2007 08:40 AM Eastern Standard Time
To: Luke Lonergan
Cc: Tom L
When you initdb, the config file is created from the template and edited by
initdb to reflect your machine's configuration.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Douglas J Hunley [mailto:[EMAIL PROTECTED]
Sent: Sunday, June 03, 2007 02:30 PM Eastern Standard Time
To: Tom Lane
C
Steinar,
On 6/1/07 2:35 PM, "Steinar H. Gunderson" <[EMAIL PROTECTED]> wrote:
> Either do your md discovery in userspace via mdadm (your distribution can
> probably help you with this), or simply use the raid10 module instead of
> building raid1+0 yourself.
I found md raid10 to be *very* slow co
Dimitri,
LVM is great, one thing to watch out for: it is very slow compared to pure
md. That will only matter in practice if you want to exceed 1GB/s of
sequential I/O bandwidth.
- Luke
On 6/1/07 11:51 AM, "Dimitri" <[EMAIL PROTECTED]> wrote:
> Craig,
>
> to make things working properly here
Mark,
On 5/30/07 8:57 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> One part is corruption. Another is ordering and consistency. ZFS represents
> both RAID-style storage *and* journal-style file system. I imagine consistency
> and ordering is handled through journalling.
Yep and versionin
If you aren't getting that on a single drive, there's something wrong with the
SATA driver or the drive(s).
- Luke
> A Dimecres 30 Maig 2007 16:09, Luke Lonergan va escriure:
>> This sounds like a bad RAID controller - are you using a built-in hardware
>> RAID? If so, you will likely wa
> This is standard stuff, very well proven: try googling 'self healing zfs'.
The first hit on this search is a demo of ZFS detecting corruption of one of
the mirror pair using checksums, very cool:
http://www.opensolaris.org/os/community/zfs/demos/selfheal/
1:11 AM Eastern Standard Time
To: pgsql-performance@postgresql.org
Subject:Re: [PERFORM] setting up raid10 with more than 4 drives
On Wed, May 30, 2007 at 10:36:48AM -0400, Luke Lonergan wrote:
>> I don't see how that's better at all; in fact, it reduces to
>> exactly
> I don't see how that's better at all; in fact, it reduces to
> exactly the same problem: given two pieces of data which
> disagree, which is right?
The one that matches the checksum.
- Luke
This sounds like a bad RAID controller - are you using a built-in hardware
RAID? If so, you will likely want to use Linux software RAID instead.
Also - you might want to try a 512KB readahead - I've found that is optimal
for RAID1 on some RAID controllers.
- Luke
On 5/30/07 2:35 AM, "Albert C
Hi Peter,
On 5/30/07 12:29 AM, "Peter Childs" <[EMAIL PROTECTED]> wrote:
> Good point, also if you had Raid 1 with 3 drives with some bit errors at least
> you can take a vote on whats right. Where as if you only have 2 and they
> disagree how do you know which is right other than pick one and ho
Stephen,
On 5/29/07 8:31 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
> It's just more copies of the same data if it's really a RAID1, for the
> extra, extra paranoid. Basically, in the example above, I'd read it as
> "D1, D2, D5 have identical data on them".
In that case, I'd say it's a wast
Hi Rajesh,
On 5/29/07 7:18 PM, "Rajesh Kumar Mallah" <[EMAIL PROTECTED]> wrote:
> D1 raid1 D2 raid1 D5 --> MD0
> D3 raid1 D4 raid1 D6 --> MD1
> MD0 raid0 MD1 --> MDF (final)
AFAIK you can't RAID1 more than two drives, so the above doesn't make sense
to me.
- Luke
-
Stripe of mirrors is preferred to mirror of stripes for the best balance of
protection and performance.
In the stripe of mirrors you can lose up to half of the disks and still be
operational. In the mirror of stripes, the most you could lose is two
drives. The performance of the two should be si
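The fault-tolerance claim can be sketched with a toy six-disk model (the layout is an assumption for illustration, not from the original post): count which 3-disk failure combinations each arrangement survives.

```python
# Toy model: 6 disks. Stripe-of-mirrors (RAID10) pairs (0,1),(2,3),(4,5)
# and survives any failure set that leaves one disk alive per pair.
# Mirror-of-stripes (RAID0+1) has stripes {0,1,2} and {3,4,5} and survives
# only while at least one whole stripe is intact.
from itertools import combinations

PAIRS = [(0, 1), (2, 3), (4, 5)]
STRIPES = [{0, 1, 2}, {3, 4, 5}]

def raid10_survives(failed: set) -> bool:
    return all(not (a in failed and b in failed) for a, b in PAIRS)

def raid01_survives(failed: set) -> bool:
    return any(not (stripe & failed) for stripe in STRIPES)

# Of the C(6,3) = 20 possible 3-disk failures:
three = [set(c) for c in combinations(range(6), 3)]
print(sum(raid10_survives(f) for f in three))  # RAID10 survives 8 of 20
print(sum(raid01_survives(f) for f in three))  # RAID0+1 survives 2 of 20
```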
Cool!
Now we can point people to your faq instead of repeating the "dd" test
instructions. Thanks for normalizing this out of the list :-)
- Luke
On 5/15/07 8:55 PM, "Greg Smith" <[EMAIL PROTECTED]> wrote:
> I've been taking notes on what people ask about on this list, mixed that
> up with wo
You can use the workload management feature that we've contributed to
Bizgres. That allows you to control the level of statement concurrency by
establishing queues and associating them with roles.
That would provide the control you are seeking.
- Luke
On 5/8/07 4:24 PM, "[EMAIL PROTECTED]" <[E
WRT ZFS on Linux, if someone were to port it, the license issue would get
worked out IMO (with some discussion to back me up). From discussions with the
developers, the biggest issue is a technical one: the Linux VFS layer makes the
port difficult.
I don't hold any hope that the FUSE port will
NUMERIC operations are very slow in pgsql. Equality comparisons are somewhat
faster, but other operations are very slow compared to other vendors' NUMERIC.
We've sped it up a lot here internally, but you may want to consider using
FLOAT for what you are doing.
- Luke
Msg is shrt cuz m on ma t
The outer track / inner track performance ratio is more like 40 percent.
Recent example is 78MB/s outer and 44MB/s inner for the new Seagate 750GB drive
(see http://www.storagereview.com for benchmark results)
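The quoted figures as arithmetic (a quick check, using the drive numbers above):

```python
# Inner-track falloff relative to the outer track for the quoted
# 78 MB/s outer / 44 MB/s inner numbers.
outer, inner = 78.0, 44.0
drop = (outer - inner) / outer
print(f"{drop:.0%}")  # -> 44%, i.e. "more like 40 percent" slower inside
```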
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Jim Nasby [
Set log_executor_stats=true;
Then look in the log after running statements (or tail -f logfile).
- Luke
On 4/3/07 7:12 AM, "Jean Arnaud" <[EMAIL PROTECTED]> wrote:
> Hi
>
> Is there a way to get the cache hit ratio in PostGreSQL ?
>
> Cheers
Andreas,
On 3/22/07 4:48 AM, "Andreas Tille" <[EMAIL PROTECTED]> wrote:
> Well, to be honest I'm not really interested in the performance of
> count(*). I was just discussing general performance issues on the
> phone line and when my colleague asked me about the size of the
> database he just wo
Hi Merlin,
On 2/14/07 8:20 AM, "Merlin Moncure" <[EMAIL PROTECTED]> wrote:
> I am curious what is your take on the maximum insert performance, in
> mb/sec of large bytea columns (toasted), and how much if any greenplum
> was able to advance this over the baseline. I am asking on behalf of
> anot
Here's one:
Insert performance is limited to about 10-12 MB/s no matter how fast the
underlying I/O hardware. Bypassing the WAL (write ahead log) only boosts
this to perhaps 20 MB/s. We've found that the biggest time consumer in the
profile is the collection of routines that "convert to datum".
> \o /tmp/really_big_cursor_return
>
> ;)
Tough crowd :-D
- Luke
Tom,
On 2/2/07 2:18 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> as of 8.2 there's a psql variable
> FETCH_COUNT that can be set to make it happen behind the scenes.)
FETCH_COUNT is a godsend and works beautifully for exactly this purpose.
Now he's got to worry about how to page through 8GB of r
Tom,
On 1/30/07 9:55 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> Gregory Stark wrote:
>>> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on
>>> your data distribution. It's not hard to come up with distributions where
>>> it
Argh!
Alvaro,
On 1/30/07 9:04 AM, "Alvaro Herrera" <[EMAIL PROTECTED]> wrote:
>> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on
>> your data distribution. It's not hard to come up with distributions where
>> it's
>> 1000x as fast and others where there's no speed differenc
Chad,
On 1/30/07 7:03 AM, "Chad Wagner" <[EMAIL PROTECTED]> wrote:
> On 1/30/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
>> Not that it helps Igor, but we've implemented single pass sort/unique,
>> grouping and limit optimizations and it speeds thin
Chad,
On 1/30/07 6:13 AM, "Chad Wagner" <[EMAIL PROTECTED]> wrote:
> Sounds like an opportunity to implement a "Sort Unique" (sort of like a hash,
> I guess), there is no need to push 3M rows through a sort algorithm to only
> shave it down to 1848 unique records.
>
> I am assuming this optimiza
Chris,
On 1/18/07 1:42 PM, "Chris Mair" <[EMAIL PROTECTED]> wrote:
> A lot of data, but not a lot of records... I don't know if that's
> valid. I guess the people at Greenplum and/or Sun have more exciting
> stories ;)
You guess correctly :-)
Given that we're Postgres 8.2, etc compatible, that
Adam,
This optimization would require teaching the planner to use an index for
MAX/MIN when available. It seems like an OK thing to do to me.
- Luke
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Adam Rich
> Sent: Sunday, January 14, 2007 8:52 P
Mark,
Note that selecting an index column means that Postgres fetches the whole
rows from disk. I think your performance problem is either: 1) slow disk or
2) index access of distributed data. If it's (1), there are plenty of
references from this list on how to check for that and fix it. If it'
Mark,
This behavior likely depends on how the data is loaded into the DBMS. If
the records you are fetching are distributed widely among the 3M records on
disk, then
On 1/12/07 4:31 PM, "Mark Dobbrow" <[EMAIL PROTECTED]> wrote:
> Hello -
>
> I have a fairly large table (3 million records),
Colin,
On 1/6/07 8:37 PM, "Colin Taylor" <[EMAIL PROTECTED]> wrote:
> Hi there, we've partioned a table (using 8.2) by day due to the 50TB of data
> (500k row size, 100G rows) we expect to store it in a year.
> Our performance on inserts and selects against the master table is
> disappointing,
Tsuraan,
"Select count(*) from bigtable" is testing your disk drive speed up to
about 300MB/s, after which it is CPU limited in Postgres.
My guess is that your system has a very slow I/O configuration, either due
to faulty driver/hardware or the configuration.
The first thing you should do is
Daniel,
Good stuff.
Can you try this with just "-O3" versus "-O2"?
- Luke
On 12/11/06 2:22 PM, "Daniel van Ham Colchete" <[EMAIL PROTECTED]>
wrote:
> Hi yall,
>
> I made some preliminary tests.
>
> Before the results, I would like to make some acknowledgments:
> 1 - I didn't show any prove
Merlin,
On 12/11/06 12:19 PM, "Merlin Moncure" <[EMAIL PROTECTED]> wrote:
> ...and this 6 of them (wow!). the v40z was top of its class. Will K8L
> run on this server?
No official word yet.
The X4600 slipped in there quietly under the X4500 (Thumper) announcement,
but it's a pretty awesome ser
Michael,
On 12/11/06 10:57 AM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
> That's kinda the opposite of what I meant by general code. I was trying
> (perhaps poorly) to distinguish between scientific codes and other
> stuff (especially I/O or human interface code).
Yes - choice of language has
The Sun X4600 is very good for this, the V40z is actually EOL so I'd stay
away from it.
You can currently do 8 dual core CPUs with the X4600 and 128GB of RAM and
soon you should be able to do 8 quad core CPUs and 256GB of RAM.
- Luke
On 12/11/06 8:26 AM, "Cosimo Streppone" <[EMAIL PROTECTED]>
Michael,
On 12/11/06 9:31 AM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
> [1] I will say that I have never seen a realistic benchmark of general
> code where the compiler flags made a statistically significant
> difference in the runtime.
Here's one - I wrote a general purpose Computational Flu
Brian,
On 12/6/06 8:40 AM, "Brian Hurt" <[EMAIL PROTECTED]> wrote:
> But actually looking things up, I see that PCI-Express has a theoretical 8
> Gbit/sec, or about 800Mbyte/sec. It's PCI-X that's 533 MByte/sec. So there's
> still some headroom available there.
See here for the official specifi
Brian,
On 12/6/06 8:02 AM, "Brian Hurt" <[EMAIL PROTECTED]> wrote:
> These numbers are close enough to bus-saturation rates
PCIX is 1GB/s + and the memory architecture is 20GB/s+, though each CPU is
likely to obtain only 2-3GB/s.
We routinely achieve 1GB/s I/O rate on two 3Ware adapters and 2GB
Glenn,
On 12/5/06 9:12 AM, "Glenn Sullivan" <[EMAIL PROTECTED]> wrote:
> I am wanting some ideas about improving the performance of ORDER BY in
> our use. I have a DB on the order of 500,000 rows and 50 columns.
> The results are always sorted with ORDER BY. Sometimes, the users end up
> with a
Mark,
This fits the typical pattern of the "Big Honking Datamart" for clickstream
analysis, a usage pattern that stresses the capability of all DBMS. Large
companies spend $1M + on combinations of SW and HW to solve this problem,
and only the large scale parallel DBMS can handle the load. Player
http://www.tpc.org/tpch/spec/tpch_20060831.tar.gz
- Luke
On 11/24/06 8:47 AM, "Felipe Rondon Rocha" <[EMAIL PROTECTED]> wrote:
> Hi everyone,
>
> does anyone have the TPC-H benchmark for PostgreSQL? Can you tell me where can
> i find the database and queries?
>
> Thks,
> Felipe
>
Arjen,
As usual, your articles are excellent!
Your results show again that the 3Ware 9550SX is really poor at random I/O
with RAID5 and all of the Arecas are really good. 3Ware/AMCC have designed
the 96xx to do much better for RAID5, but I've not seen results - can you
get a card and test it?
W
Jeff,
On 11/17/06 11:45 AM, "Jeff Frost" <[EMAIL PROTECTED]> wrote:
> I see many of you folks singing the praises of the Areca and 3ware SATA
> controllers, but I've been trying to price some systems and am having trouble
> finding a vendor who ships these controllers with their systems. Are you
Similar experiences with HP and their SmartArray 5i controller on Linux.
The answer was: "this controller has won awards for performance! It can't be
slow!", so we made them test it in their own labs and prove just how awfully
slow it was. In the case of the 5i, it became apparent that HP had no
in
Select count(*) from table-twice-size-of-ram
Divide the query time by the number of pages in the table times the pagesize
(normally 8KB) and you have your net disk rate.
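A worked instance of that formula, with made-up numbers (2M pages, a 100-second scan):

```python
# Net disk rate from a count(*) timing: table size = pages * 8 KB block
# size, rate = size / elapsed time. The inputs here are illustrative.

PAGE_SIZE = 8192                      # default Postgres block size, bytes

def net_disk_rate_mb_s(pages: int, seconds: float) -> float:
    return pages * PAGE_SIZE / seconds / (1024 * 1024)

# 2,000,000 pages (~15.6 GB) scanned in 100 s -> ~156 MB/s
print(round(net_disk_rate_mb_s(2_000_000, 100.0), 1))
```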
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Brian Hurt [mailto:[EMAIL PROTECTED]
Sent: Monday,
John,
On 10/31/06 8:29 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>> 'chrX' and StartPosition > 1000500 and EndPosition < 200;
>
> Also, there's the PostGIS stuff, though it might be overkill for what
> you want.
Oops - I missed the point earlier. Start and End are separate attributes so
th