> - Deferred Transactions: since adding a comment to a blog post
> doesn't need the same guarantees as submitting a paid order, it makes
> sense that the application could tell Postgres which transactions we
> care about if power is lost. This will massively boost performance for
> websites I
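(A minimal sketch of that idea, assuming the per-transaction
synchronous_commit setting that arrived in PostgreSQL 8.3; the table
name is hypothetical:)

    -- Trade a small window of potential loss on power failure for not
    -- having to wait on the WAL fsync, for this transaction only.
    BEGIN;
    SET LOCAL synchronous_commit TO OFF;
    INSERT INTO blog_comments (post_id, body) VALUES (42, 'nice post');
    COMMIT;  -- returns without waiting for the WAL flush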
Greg Smith wrote:
On Mon, 21 May 2007, Guido Neitzer wrote:
Yes, that's right, but if a lot of the transactions are selects, there
is no entry in the xlog for them and most of the stuff can come from
the cache - read from memory which is blazing fast compared to any
disk ... And this was a pg_b
Jim C. Nasby wrote:
On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
I also went into benchmarking mode last night for my own
amusement when I read on the linux-kernel ML that
NCQ support for nForce5 chips was released.
I tried current PostgreSQL 8.3devel CVS.
pgbench over l
On 21.05.2007, at 23:51, Greg Smith wrote:
The standard pgbench transaction includes a select, an insert, and
three updates.
I see. Didn't know that, but it makes sense.
Unless you went out of your way to turn it off, your drive is
caching writes; every Seagate SATA drive I've ever seen do
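(For reference, the default pgbench transaction has roughly this shape;
this is the pre-8.4 TPC-B-like script with the old table names, shown
as an approximation:)

    -- one select, one insert, and three updates per transaction
    BEGIN;
    UPDATE accounts SET abalance = abalance + :delta WHERE aid = :aid;
    SELECT abalance FROM accounts WHERE aid = :aid;
    UPDATE tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
    UPDATE branches SET bbalance = bbalance + :delta WHERE bid = :bid;
    INSERT INTO history (tid, bid, aid, delta, mtime)
        VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
    END;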
On Mon, 21 May 2007, Guido Neitzer wrote:
Yes, that's right, but if a lot of the transactions are selects, there is no
entry in the xlog for them and most of the stuff can come from the cache -
read from memory which is blazing fast compared to any disk ... And this was
a pg_bench test - I don'
* set a reasonable statement_timeout (see the sketch after this list)
* backup with PITR. pg_dump is a headache on extremely busy servers.
Where do you put your PITR WAL logs so that they don't take up extra
I/O?
* get a good I/O system for your box. Start with a 6-disk RAID 10 and go
from there.
* spend some time reading about
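(A hedged sketch of the first two tips; the database name, timeout
value, and archive path are illustrative, not prescriptions:)

    -- cap runaway queries for every session in one database
    ALTER DATABASE mydb SET statement_timeout = 30000;  -- milliseconds

    -- postgresql.conf fragment for PITR: ship WAL segments to a separate
    -- spindle (or host) so archiving doesn't compete with pg_xlog for I/O
    -- archive_command = 'cp %p /mnt/wal_archive/%f'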
Scott Marlowe wrote:
> Jim C. Nasby wrote:
> >On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
> >
> >>I also went into benchmarking mode last night for my own
> >>amusement when I read on the linux-kernel ML that
> >>NCQ support for nForce5 chips was released.
> >>I tried curr
Jim C. Nasby wrote:
On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
I also went into benchmarking mode last night for my own
amusement when I read on the linux-kernel ML that
NCQ support for nForce5 chips was released.
I tried current PostgreSQL 8.3devel CVS.
pgbench over
I assume red is PostgreSQL. As you add connections, MySQL always dies.
On 5/20/07, PFC <[EMAIL PROTECTED]> wrote:
I felt the world needed a new benchmark ;)
So: forum-style benchmark with simulation of many users posting and
viewing forums and topics on a PHP website.
On 21.05.2007, at 15:01, Jim C. Nasby wrote:
I'd be willing to bet money that the drive is lying about commits/
fsync.
Each transaction committed essentially requires one revolution of the
drive with pg_xlog on it, so a 15kRPM drive limits you to 250 TPS.
Yes, that's right, but if a lot of the t
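(A quick sanity check of that arithmetic, runnable in psql:)

    -- one commit per platter revolution when the drive honors fsync,
    -- so the peak commit rate is bounded by rotational speed
    SELECT 15000 / 60.0 AS max_commits_per_sec;  -- 250 for a 15k RPM disk
    SELECT  7200 / 60.0 AS max_commits_per_sec;  -- 120 for a 7200 RPM disk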
On Mon, 21 May 2007 23:05:22 +0200, Jim C. Nasby <[EMAIL PROTECTED]>
wrote:
On Sun, May 20, 2007 at 04:58:45PM +0200, PFC wrote:
I felt the world needed a new benchmark ;)
So: forum-style benchmark with simulation of many users posting and
viewing forums and topics on a PHP
Well, that matches up well with my experience. Better yet, file a
performance bug with the commercial support and you'll get an explanation
of why your schema (or your hardware - well, anything but the database
software used) is the guilty factor.
Yeah, I filed a bug last week since REPEATA
"Chuck D." <[EMAIL PROTECTED]> writes:
> Doesn't that seem a bit strange? Does it have to do with the smaller size of
> the new table maybe?
No, it seems to be a planner bug:
http://archives.postgresql.org/pgsql-hackers/2007-05/msg00920.php
I imagine that your table statistics are close to the c
On May 18, 2007, at 2:30 PM, Andrew Sullivan wrote:
Note also that your approach of updating all 121 million records in
one statement is approximately the worst way to do this in Postgres,
because it creates 121 million dead tuples on your table. (You've
created some number of those by killing
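(A hedged sketch of the batched alternative; the table and column names
are hypothetical. Slicing the update keeps the dead-tuple count per pass
manageable and lets VACUUM reclaim space between batches:)

    -- one slice of the key range at a time, instead of all 121M rows
    UPDATE big_table
       SET some_col = 0
     WHERE id >= 0 AND id < 1000000;
    VACUUM big_table;  -- reclaim dead tuples before the next slice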
On Sun, May 20, 2007 at 04:58:45PM +0200, PFC wrote:
>
> I felt the world needed a new benchmark ;)
> So: forum-style benchmark with simulation of many users posting and
> viewing forums and topics on a PHP website.
>
> http://home.peufeu.com/ftsbench/forum1.png
Any chance of
On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
> I also went into benchmarking mode last night for my own
> amusement when I read on the linux-kernel ML that
> NCQ support for nForce5 chips was released.
> I tried current PostgreSQL 8.3devel CVS.
> pgbench over local TCP connec
On Mon, May 21, 2007 at 03:50:27PM -0400, Merlin Moncure wrote:
> I work on a system about like you describe: 400 TPS, constant, 24/7.
> Major challenges are routine maintenance and locking. Autovacuum is
> your friend but you will need to schedule a full vacuum once in a
> while because of tps
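(A hedged sketch of that maintenance step, with a hypothetical table
name; run it in a quiet window, since pre-9.0 VACUUM FULL takes an
exclusive lock while compacting the table and tends to bloat its
indexes:)

    VACUUM FULL VERBOSE heavily_updated_table;
    REINDEX TABLE heavily_updated_table;  -- rebuild the bloated indexes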
I'm looking for a database+hardware solution which should be
able to handle up to 500 requests per second.
What proportion of reads and writes in those 500 tps ?
(If you have 450 selects and 50 inserts/update transactions, your
hardware requirements will be different than t
Thanks again! I'll make the change and get those numbers.
Yudhvir
On 5/21/07, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Fri, May 18, 2007 at 04:26:05PM -0700, Y Sidhu wrote:
> >To answer your original question, a way to take a look at how bloated
> >your tables are would be to ANALYZE, divide
On 5/12/07, Tarhon-Onu Victor <[EMAIL PROTECTED]> wrote:
Hi guys,
I'm looking for a database+hardware solution which should be able
to handle up to 500 requests per second. The requests will consist of:
- single row updates in indexed tables (the WHERE clauses will use
t
On 5/21/07, Chris Hoover <[EMAIL PROTECTED]> wrote:
Hi everyone,
I am testing my shared_buffers pool and am running into a problem with slow
inserts and commits. I was reading in several places that the 8.X
PostgreSQL engines should have shared_buffers set closer to 25% of the
system's memory.
On Monday 21 May 2007 11:34, Richard Huxton wrote:
> Chuck D. wrote:
>
> The only thing I can think of is that the CLUSTERing on city.country_id
> makes the system think it'll be cheaper to seq-scan the whole table.
>
> I take it you have got 2 million rows in "city"?
Well here is where it gets st
Chuck D. wrote:
On Monday 21 May 2007 03:14, Josh Berkus wrote:
Chuck,
Can we see the plan?
--Josh
Sorry Josh, I guess I could have just used EXPLAIN instead of EXPLAIN
ANALYZE.
# explain
SELECT country_id, country_name
FROM geo.country
WHERE country_id IN
(select country_id FROM
On Fri, May 18, 2007 at 04:26:05PM -0700, Y Sidhu wrote:
> >To answer your original question, a way to take a look at how bloated
> >your tables are would be to ANALYZE, divide reltuples by relpages from
> >pg_class (gives how many rows per page you have) and compare that to 8k
> >/ average row siz
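(A hedged sketch of that bloat check; the table name is hypothetical
and the 8 KB figure assumes the default page size:)

    ANALYZE mytable;  -- refresh the reltuples/relpages estimates
    SELECT relname,
           reltuples,
           relpages,
           reltuples / NULLIF(relpages, 0) AS rows_per_page
      FROM pg_class
     WHERE relname = 'mytable';
    -- compare rows_per_page against 8192 / (average row width + overhead)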
On Monday 21 May 2007 05:40, Richard Huxton wrote:
> Chuck D. wrote:
>
> Any good reason why country_id is NULLable?
It has been a while since I imported the data so it took some time to examine
it but here is what I found.
In the original data, some cities do not have countries. Strange huh? M
On Monday 21 May 2007 03:14, Josh Berkus wrote:
> Chuck,
>
> Can we see the plan?
>
> --Josh
>
Sorry Josh, I guess I could have just used EXPLAIN instead of EXPLAIN
ANALYZE.
# explain
SELECT country_id, country_name
FROM geo.country
WHERE country_id IN
(select country_id FROM geo.city)
;
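(A hedged workaround sketch, not from the thread: given that
geo.city.country_id turned out to be NULLable, rewriting the IN as
EXISTS often sidesteps this kind of runaway plan:)

    EXPLAIN
    SELECT c.country_id, c.country_name
      FROM geo.country c
     WHERE EXISTS (SELECT 1
                     FROM geo.city ci
                    WHERE ci.country_id = c.country_id);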
Hi everyone,
I am testing my shared_buffers pool and am running into a problem with slow
inserts and commits. I was reading in several places that the 8.X
PostgreSQL engines should have shared_buffers set closer to 25% of the
system's memory. On my development system, I have done that. We have
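(A hedged configuration sketch following that ~25% guideline; the
numbers are illustrative, for a machine with 4 GB of RAM:)

    SHOW shared_buffers;  -- check the current setting from psql
    -- in postgresql.conf (8.2+ accepts unit suffixes):
    --   shared_buffers = 1GB          -- ~25% of RAM
    --   checkpoint_segments = 32      -- spread out checkpoint I/O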
Chuck D. wrote:
Table "geo.city"
   Column   |     Type     | Modifiers
------------+--------------+-----------
 city_id    | integer      | not null
 state_id   | smallint     |
 country_id | smallint     |
 latitude   | numeric(9
Chuck,
explain analyze
SELECT country_id, country_name
FROM geo.country
WHERE country_id IN
(select country_id FROM geo.city)
;
-- won't complete in a reasonable amount of time.
Can we see the plan?
--Josh