There seems to be a wide range of opinion on this. I am new to PG and
grew up on Oracle, where more SGA is always a good thing ... I know people
who run Oracle on 2TB Superdomes with titanic SGA sizes to keep the whole
DB in RAM. I'd be using a 40GB+ Oracle SGA on that box of yours.
A lot of
As a fellow PG newbie, some thoughts / ideas:
1. What is the purpose of the dump (backup, migration, ETL, etc.)? Why
plain? Unless you have a need to load this into a different brand of
database at short notice, I'd use native format (rough sketch below).
2. If your goal is indeed to get the data into another DB,
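For what it's worth, a native (custom-format) dump and restore might look
roughly like this; the database name and paths are placeholders, not taken
from the original post:

pg_dump -Fc -f /backups/mydb.dump mydb      # custom format: compressed, restorable selectively
pg_restore -d mydb_copy /backups/mydb.dump  # restore into another PostgreSQL database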
On Mon, Mar 15, 2010 at 11:58 AM, Matthew Wakeling wrote:
> On Thu, 25 Feb 2010, Bruce Momjian wrote:
>>
>> Was there ever any conclusion on this issue?
>
> Not really. Comments inline:
>
>> Matthew Wakeling wrote:
>>>
>>> Revisiting the thread a month back or so, I'm still investigating
>>> perf
On Sun, Mar 14, 2010 at 4:01 AM, David Newall wrote:
> an expected 40 - 45GB of compressed output. CPU load is 100% on the core
> executing pg_dump, and negligible on all other cores. The system is
> read-mostly, and largely idle. The exact invocation was:
>
> nohup time pg_dump -f databas
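One way people work around pg_dump being compression-bound on a single core
is to disable pg_dump's own compression and pipe the output through an
external (possibly multi-threaded) compressor, so the dump and the
compression run on different CPUs. A sketch, assuming pigz is installed
(plain gzip works too, single-threaded); the database name is a placeholder:

pg_dump -Fc -Z0 mydb | pigz -p4 > mydb.dump.gz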
On Thu, Mar 11, 2010 at 5:19 PM, Paul McGarry wrote:
> On 11 March 2010 16:16, Ben Chobot wrote:
>
>> I *can* say a 10GB shared_buffer value is working "well" with my 128GB of
>> RAM. Whether or not it's "optimal," I couldn't say without a lot of
>> experimentation I can't afford to do righ
On Sat, 2010-03-13 at 20:10 +0800, Craig Ringer wrote:
> On 13/03/2010 5:54 AM, Jeff Davis wrote:
> > On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
> >> of course. You can always explicitly open a transaction on the remote
> >> side over dblink, do work, and commit it at the last possib
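A minimal sketch of that pattern with the dblink contrib functions (the
connection string and the remote statements are placeholders):

SELECT dblink_connect('remote', 'host=10.0.0.2 dbname=other user=app');
SELECT dblink_exec('remote', 'BEGIN');
SELECT dblink_exec('remote', 'INSERT INTO t(val) VALUES (1)');
-- ... do local work here, then commit the remote side as late as possible ...
SELECT dblink_exec('remote', 'COMMIT');
SELECT dblink_disconnect('remote');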
Dear performance users,
I am writing on the mailing list after several unsuccessful attempts to
contact the mailing list administrators on the "-owner" e-mail addresses.
Namely, it seems that it is not possible to subscribe to ANY Postgres
mailing list using e-mail addresses from newly created TLDs
On 11 March 2010 16:16, Ben Chobot wrote:
> I *can* say a 10GB shared_buffer value is working "well" with my 128GB of
> RAM. Whether or not it's "optimal," I couldn't say without a lot of
> experimentation I can't afford to do right now. You might have a look at the
> pg_buffercache contri
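For reference, the usual sort of query against the pg_buffercache contrib
view, to see which relations are occupying shared_buffers (a sketch; run it
in the database you care about):

SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
 WHERE b.reldatabase = (SELECT oid FROM pg_database
                         WHERE datname = current_database())
 GROUP BY c.relname
 ORDER BY buffers DESC
 LIMIT 10;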
Hi there,
I'm after a little bit of advice on the shared_buffers setting (I have
read the various docs on/linked from the performance tuning wiki page,
some very helpful stuff there so thanks to those people).
I am setting up a 64-bit Linux server running PostgreSQL 8.3; the
server has 64 gigs of m
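Purely as an illustration of the kind of starting point discussed in the
replies (the numbers are assumptions, not recommendations), a
postgresql.conf sketch for a 64GB box might look like:

shared_buffers = 8GB            # a modest fraction of RAM; see the thread
effective_cache_size = 48GB     # roughly what the OS page cache will hold
work_mem = 32MB                 # per sort/hash node, so keep concurrency in mind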
Hi group,
We have two related tables with event types and events. We query for a join
between these two tables and find that, when there is a to-be-expected
very small result set, this query performs particularly poorly. Any help
understanding this would be appreciated.
SELECT * from event
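The query itself is cut off above; in general the first diagnostic step for
this kind of problem is to compare estimated and actual row counts with
EXPLAIN ANALYZE. A hypothetical shape (table and column names assumed):

EXPLAIN ANALYZE
SELECT *
  FROM event e
  JOIN eventtype t ON t.id = e.eventtype_id
 WHERE t.name = 'some_rare_type';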
On Mar 11, 2010, at 12:39 AM, Greg Smith wrote:
>
> Giving all the buffers to the database doesn't work for many reasons:
> -Need a bunch leftover for clients to use (i.e. work_mem)
> -Won't be enough OS cache for non-buffer data the database expects
> cached reads and writes will perform well o
On Mon, Mar 15, 2010 at 10:53 AM, Tom Lane wrote:
> Greg Smith writes:
> > VJK wrote:
> >> Since Pg does not use the concept of rollback segments, it is unclear
> >> why deletion produces so much disk IO (4GB).
>
> For an example like this one, you have to keep in mind that the
> toast-table ro
On Thu, 25 Feb 2010, Bruce Momjian wrote:
Was there ever any conclusion on this issue?
Not really. Comments inline:
Matthew Wakeling wrote:
Revisiting the thread a month back or so, I'm still investigating
performance problems with GiST indexes in Postgres.
Looking at http://wiki.postgresq
On Mon, 15 Mar 2010, Tom Lane wrote:
For an example like this one, you have to keep in mind that the
toast-table rows for the large bytea value have to be marked deleted,
too. Also, since I/O happens in units of pages, the I/O volume to
delete a tuple is just as much as the I/O to create it. (T
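One way to see how much of the table's footprint is in its TOAST table (and
indexes), which is the part being marked deleted here, using the thread's
test table x2:

SELECT pg_size_pretty(pg_relation_size('x2'))       AS main_heap,
       pg_size_pretty(pg_total_relation_size('x2')
                      - pg_relation_size('x2'))     AS toast_plus_indexes;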
Inline:
On Mon, Mar 15, 2010 at 10:42 AM, Greg Smith wrote:
> VJK wrote:
>
>> Since Pg does not use the concept of rollback segments, it is unclear why
>> deletion produces so much disk IO (4GB).
>>
>
> With PostgreSQL's write-ahead log, MVCC and related commit log, and
> transactional DDL feat
Matthew Wakeling writes:
> On Mon, 15 Mar 2010, Tom Lane wrote:
>> If deleting a whole table is significant for you performance-wise,
>> you might look into using TRUNCATE instead.
> Might you still end up with a normal delete operation
> on the TOAST table when performing a TRUNCATE on the owne
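For reference, TRUNCATE does not delete rows one by one; it discards the
table's files, including its TOAST table and indexes, so the I/O pattern is
completely different from DELETE:

TRUNCATE TABLE x2;   -- unlike DELETE, no per-row work on the heap or the TOAST table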
Greg Smith writes:
> VJK wrote:
>> Since Pg does not use the concept of rollback segments, it is unclear
>> why deletion produces so much disk IO (4GB).
> With PostgreSQL's write-ahead log, MVCC and related commit log, and
> transactional DDL features, there's actually even more overhead that
Inline:
On Mon, Mar 15, 2010 at 10:12 AM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
> VJK wrote:
>
> > the source 1.9GB (19MB x 100) resulted in 5GB of actual disk IO
>
> > Deletion (delete from x2) took 32 seconds with 12 seconds CPU and
> > 20 sec sleep + wait for IO. Actual disk I
VJK wrote:
Since Pg does not use the concept of rollback segments, it is unclear
why deletion produces so much disk IO (4GB).
With PostgreSQL's write-ahead log, MVCC and related commit log, and
transactional DDL features, there's actually even more overhead that can
be involved than a simple
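A rough way to see the WAL part of that overhead for a single statement; the
function names below are the current ones (releases of this era called them
pg_current_xlog_location / pg_xlog_location_diff), and \gset is a psql
feature:

SELECT pg_current_wal_lsn() AS lsn_before \gset
DELETE FROM x2;
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), :'lsn_before'));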
VJK wrote:
> the source 1.9GB (19MB x 100) resulted in 5GB of actual disk IO
> Deletion (delete from x2) took 32 seconds with 12 seconds CPU and
> 20 sec sleep + wait for IO. Actual disk IO was about 4GB.
>
> Since Pg does not use the concept of rollback segments, it is
> unclear why deleti
"fka...@googlemail.com" wrote:
> Simply because the test case had just < 50 rows (deleting
> all rows older than 2 minutes). Later on I would use indices.
Running a performance test with 50 rows without indexes and
extrapolating to a much larger data set with indexes won't tell you
much. The p
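A sketch of a more representative test along those lines (table and column
names are made up; the point is realistic row counts plus the index you
actually intend to use):

CREATE INDEX queue_created_idx ON queue (created);
ANALYZE queue;
EXPLAIN ANALYZE
DELETE FROM queue WHERE created < now() - interval '2 minutes';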
A quick test:
-
1. create table x1(x int, y bytea);
2. Load some data, say with Python:
cp /opt/java/src.zip ~/tmp/a.dat (19MB)
##
import os
import psycopg2

conn = psycopg2.connect("dbname='test' user='*' password='' host='127.0.0.1'")
# Read the 19MB test file and insert it as a bytea value.  The original
# snippet is cut off after "(psyc"; psycopg2.Binary is the usual way to
# pass bytea parameters, so that completion is an assumption.
data = open(os.path.expanduser('~/tmp/a.dat'), 'rb').read()
conn.cursor().execute("INSERT INTO x1 VALUES (1, %s)", (psycopg2.Binary(data),))
conn.commit()
Carlo Stonebanks wrote:
The whole topic of messing with stats makes my head spin but I am
concerned about some horridly performing queries that have had bad
row estimates and others which always choose seq scans when indexes
are available. Reading up on how to improve planner estimates, I have
Carlo Stonebanks wrote:
> The whole topic of messing with stats makes my head spin but I am concerned
> about some horridly performing queries that have had bad row estimates and
> others which always choose seq scans when indexes are available. Reading up
> on how to improve planner estimates,
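The usual first knobs for bad row estimates are the per-column statistics
target and a fresh ANALYZE; a sketch with placeholder table, column, and
value names:

ALTER TABLE orders ALTER COLUMN status SET STATISTICS 1000;
ANALYZE orders;
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';
-- compare the planner's estimated rows with the actual rows in the output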
As part of my PG East prep work, I just did a big overhaul of the
various benchmarking information available on the wiki and have tagged
all the relevant links into a separate category:
http://wiki.postgresql.org/wiki/Category:Benchmarking
Several of those were really difficult to find pages