On Tue, 22 May 2007, Stephane Bailliez wrote:
Out of curiosity, can anyone share his tips & tricks to validate a machine
before labelling it as 'ready to use postgres - you probably won't trash my
data today'?
Write a little script that runs pgbench in a loop forever. Set your
shared_buffers…
You forgot pulling some RAID drives at random times to see how the hardware
deals with that. And how it deals with the rebuild afterwards. (Many RAID
solutions leave you with the worst of both worlds, taking longer to rebuild than
a restore from backup would take, while at the same time providing…
On Tue, 22 May 2007, Gregory Stark wrote:
However, as mentioned a while back, in practice it doesn't work quite right and
you should expect to get about half the theoretical performance. So even with
10 clients you should expect to see 5*120 tps on a 7200 rpm drive and 5*250 tps
on a 15k rpm drive.
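(For reference, the arithmetic behind those figures: a 7200 rpm disk makes
7200/60 = 120 revolutions per second, so a commit that must wait for the
platter to come around can complete at most about 120 times per second; at
15,000 rpm that ceiling is 15000/60 = 250. The factor of one half is the
empirically observed shortfall mentioned above, not something derived.)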
Checking it out right now.
Thanks for the fast response.
-----Original Message-----
From: Andreas Kostyrka [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 22, 2007 11:49 AM
To: Orhan Aglagul
Cc:
Subject: Re: [PERFORM] Drop table vs Delete record
Consider table partitioning (it's described in the manual).
Andreas
-- Original Message --
Subject: [PERFORM] Drop table vs Delete record
From: "Orhan Aglagul" <[EMAIL PROTECTED]>
Date: 22.05.2007 18:42
My application has two threads: one inserts thousands of records per second
into a table (t1), and the other periodically deletes expired records (also in
the thousands) from the same table. So, we have one thread adding a row while
the other thread is trying to delete a row…
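For reference, here is a minimal sketch of the partitioning idea Andreas points
at, in the inheritance style the manual describes (the schema, names, and daily
granularity are hypothetical). The payoff for this workload is that expiring
data becomes a DROP TABLE on an old partition, which is nearly instant, instead
of deleting thousands of rows:

-- Parent table (hypothetical schema):
CREATE TABLE t1 (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text
);

-- One child per day, with a CHECK so the planner can exclude it:
CREATE TABLE t1_20070522 (
    CHECK (created_at >= '2007-05-22' AND created_at < '2007-05-23')
) INHERITS (t1);

-- New rows go into the current day's child; expiry is then simply:
DROP TABLE t1_20070521;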
Are there any performance improvements that come from using a domain
over a check constraint (aside from the ease of management component)?
thanks
--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC 27560
Phone: 877-258-8987/919-463-0999
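For what it's worth, the two forms being compared look like this (names are
hypothetical); a domain packages the same CHECK into a reusable type, so one
would not typically expect the planner to treat the column differently, and any
difference should come down to constraint evaluation:

-- Domain form: the rule lives once, in the type (hypothetical example):
CREATE DOMAIN quantity AS integer CHECK (VALUE > 0);
CREATE TABLE orders_a (qty quantity);

-- Equivalent per-column CHECK constraint:
CREATE TABLE orders_b (qty integer CHECK (qty > 0));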
Arnau <[EMAIL PROTECTED]> writes:
> As you can see the time differences are very big:
>    Timestamp:         318.328 ms
>    int8 index:        120.804 ms
>    double precision:   57.065 ms
As already suggested elsewhere, you probably weren't sufficiently
careful in taking your measurements.
A look at…
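(Not from Tom's message; just a general sketch of taking such measurements more
carefully: warm the cache and repeat each variant, comparing EXPLAIN ANALYZE or
psql's \timing across several runs rather than trusting a single one. Table and
column names here are hypothetical:)

-- Run each variant several times and use the stable warm-cache timings:
EXPLAIN ANALYZE
SELECT * FROM my_table WHERE ts_col BETWEEN '2007-05-01' AND '2007-05-22';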
Out of curiosity, can anyone share his tips & tricks to validate a
machine before labelling it as 'ready to use postgres - you probably
won't trash my data today'?
I'm looking for a way to stress test components especially kernel/disk
to have confidence > 0 that I can use postgres on top of it.
On 5/22/07, Stephane Bailliez <[EMAIL PROTECTED]> wrote:
Out of curiosity, can anyone share his tips & tricks to validate a
machine before labelling it as 'ready to use postgres - you probably
won't trash my data today'?
I'm looking for a way to stress test components especially kernel/disk
to have confidence > 0 that I can use postgres on top of it.
Hi,
Out of curiosity, can anyone share his tips & tricks to validate a
machine before labelling it as 'ready to use postgres - you probably
won't trash my data today'?
I'm looking for a way to stress test components especially kernel/disk
to have confidence > 0 that I can use postgres on top of it.
On 5/22/07, Steinar H. Gunderson <[EMAIL PROTECTED]> wrote:
On Tue, May 22, 2007 at 02:39:33PM +0200, Alexander Staubo wrote:
> PostgreSQL uses B-tree indexes for scalar values. For an expression
> such as "t between a and b", I believe it's going to match both sides
> of the table independently (i.e., t >= a and t <= b) and intersect these subsets…
On Tue, May 22, 2007 at 02:39:33PM +0200, Alexander Staubo wrote:
> PostgreSQL uses B-tree indexes for scalar values. For an expression
> such as "t between a and b", I believe it's going to match both sides
> of the table independently (i.e., t >= a and t <= b) and intersect
> these subsets. This i…
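(Whether the planner executes that as a single index range scan or as two
intersected bound checks, the query shape under discussion looks like this,
with hypothetical names:)

CREATE TABLE events (id bigserial, t timestamptz NOT NULL);
CREATE INDEX events_t_idx ON events (t);

-- "t BETWEEN a AND b" is defined as the two inclusive comparisons:
EXPLAIN SELECT count(*) FROM events
WHERE t >= '2007-05-01' AND t <= '2007-05-22';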
On 5/22/07, Arnau <[EMAIL PROTECTED]> wrote:
On older versions of PostgreSQL, at least in my experience, queries
on timestamp fields were performing quite badly even when they had
indexes: mainly, sequential scans were performed.
PostgreSQL uses B-tree indexes for scalar values. For an expression
such as "t between a and b", I believe it's going to match both sides
of the table independently (i.e., t >= a and t <= b) and intersect these subsets…
On Tuesday 22 May 2007, Richard Huxton wrote:
> valgog wrote:
> > I found several posts about INSERT/UPDATE performance in this group,
> > but actually it was not really what I am searching for an answer to...
> >
> > I have a simple reference table WORD_COUNTS that contains the count of
> > words that appear in a word array storage in another table.
Hi all,
I have some tables where all the queries that will be executed are
timestamp-driven, so it'd be nice to have an index over those fields.
On older versions of PostgreSQL, at least in my experience, queries
on timestamp fields were performing quite badly even when they had
indexes: mainly, sequential scans were performed.
On May 22, 12:00 pm, valgog <[EMAIL PROTECTED]> wrote:
> I have rewritten the code like this:
>
> existing_words_array := ARRAY( select word
>                                from WORD_COUNTS
>                                where word = ANY
>                                  ( array_of_words ) );
>
On May 22, 12:14 pm, [EMAIL PROTECTED] (PFC) wrote:
> On Tue, 22 May 2007 10:23:03 +0200, valgog <[EMAIL PROTECTED]> wrote:
> > I found several posts about INSERT/UPDATE performance in this group,
> > but actually it was not really what I am searching for an answer to...
>
> > I have a simple reference table WORD_COUNTS that contains the count of
> > words that appear in a word array storage in another table.
On Tue, 22 May 2007 10:23:03 +0200, valgog <[EMAIL PROTECTED]> wrote:
I found several posts about INSERT/UPDATE performance in this group,
but actually it was not really what I am searching for an answer to...
I have a simple reference table WORD_COUNTS that contains the count of
words that appear in a word array storage in another table.
Note that while the average hits/s between 100 and 500 is over 600 tps for
Postgres, there is a consistent smattering of plot points spread all the way
down to 200 tps, well below the 400-500 tps that MySQL is getting.
Yes, these are due to checkpointing, mostly.
Also, note that a re…
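(For reference, the settings usually discussed for smoothing those checkpoint
dips; the values are illustrative, not recommendations:)

# postgresql.conf excerpt (illustrative):
checkpoint_segments = 16              # fewer, larger checkpoints (pre-9.5 name)
checkpoint_completion_target = 0.9    # spread checkpoint writes out (8.3 and later)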
I have rewritten the code like this:

existing_words_array := ARRAY( select word
                               from WORD_COUNTS
                               where word = ANY
                                 ( array_of_words ) );
not_existing_words_array := ARRAY( select distinct_word
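(The second assignment is cut off above. A typical shape for it, written with
the array functions available at the time, selects the words not already
present; this is an illustration, not necessarily valgog's actual code:)

not_existing_words_array := ARRAY( select array_of_words[i]
                                   from generate_series(1, array_upper(array_of_words, 1)) as i
                                   where array_of_words[i] <> ALL (existing_words_array) );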
Joost Kraaijeveld wrote:
Hi,
I have a table with a file size of 400 MB with an index of 100 MB.
Does PostgreSQL take the file sizes of both the table and the index
into account when determining if it should do a table or an index scan?
In effect yes, although it will think in terms of row sizes
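(A quick way to look at the raw numbers, with hypothetical names; strictly, the
planner costs scans from the page and row counts kept in pg_class.relpages and
pg_class.reltuples rather than from file sizes directly:)

SELECT pg_relation_size('mytable')     AS table_bytes,
       pg_relation_size('mytable_idx') AS index_bytes;

-- The chosen plan shows which path was judged cheaper:
EXPLAIN SELECT * FROM mytable WHERE id BETWEEN 1 AND 100;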
Hi,
I have a table with a file size of 400 MB with an index of 100 MB. Does
PostgreSQL take the file sizes of both the table and the index into account
when determining if it should do a table or an index scan?
TIA
Joost
valgog wrote:
I found several posts about INSERT/UPDATE performance in this group,
but actually it was not really what I am searching for an answer to...
I have a simple reference table WORD_COUNTS that contains the count of
words that appear in a word array storage in another table.
I think this
On 22 May 2007 01:23:03 -0700, valgog <[EMAIL PROTECTED]> wrote:
I found several posts about INSERT/UPDATE performance in this group,
but actually it was not really what I am searching for an answer to...
I have a simple reference table WORD_COUNTS that contains the count of
words that appear in a word array storage in another table.
I found several posts about INSERT/UPDATE performance in this group,
but actually it was not really what I am searching for an answer to...
I have a simple reference table WORD_COUNTS that contains the count of
words that appear in a word array storage in another table.
CREATE TABLE WORD_COUNTS
(
w
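(The table definition is cut off above. Assuming a shape like word/count, the
usual maintenance pattern from that era, before INSERT ... ON CONFLICT existed,
was update-then-insert; a sketch, not valgog's actual code:)

-- Assumed shape of the table (the original definition is truncated):
-- CREATE TABLE word_counts (word text PRIMARY KEY, count integer NOT NULL);

-- Update first; insert only if no row was touched. A concurrent session can
-- still race this, so production code retries on a unique_violation error.
UPDATE word_counts SET count = count + 1 WHERE word = 'example';
INSERT INTO word_counts (word, count)
SELECT 'example', 1
WHERE NOT EXISTS (SELECT 1 FROM word_counts WHERE word = 'example');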
What's interesting here is that on a couple of metrics the green curve is
actually *better* until it takes that nosedive at 500 MB. Obviously it's not
better on average hits/s, the most obvious metric. But on deviation and
worst-case hits/s it's actually doing better.
Note that while the average hits/s between 100 and 500 is over 600 tps for
Postgres, there is a consistent smattering of plot points spread all the way
down to 200 tps, well below the 400-500 tps that MySQL is getting.
"Alvaro Herrera" <[EMAIL PROTECTED]> writes:
> Scott Marlowe wrote:
>>
>> I thought you were limited to 250 or so COMMITS to disk per second, and
>> since >1 client can be committed at once, you could do greater than 250
>> tps, as long as you had >1 client providing input. Or was I wrong?
Well, CLUSTER is so slow (and it doesn't cluster the toast tables
associated with the table to be clustered).
However, when people use CLUSTER they use it to speed up their queries.
For that the table does not need to be perfectly in-order.
So, here is a new idea for…
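(The new idea itself is cut off. For background only, this is what plain
CLUSTER usage looks like, with hypothetical names; the syntax shown is the
pre-8.4 form, which newer releases spell as CLUSTER my_table USING
my_table_pkey:)

CLUSTER my_table_pkey ON my_table;
-- pg_stats.correlation near 1.0 means the heap is close to index order:
SELECT attname, correlation FROM pg_stats WHERE tablename = 'my_table';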