On Mar 16, 2010, at 6:04 PM, Rob Wultsch wrote:
> Let's say I have a large table bigTable to which I would like to add
> two btree indexes. Is there a more efficient way to create indexes
> than:
> CREATE INDEX idx_foo on bigTable (foo);
> CREATE INDEX idx_baz on bigTable (baz);
> Or
> CREATE INDEX CONCURRENTLY idx_foo on bigTable (foo);
Let's say I have a large table bigTable to which I would like to add
two btree indexes. Is there a more efficient way to create indexes
than:
CREATE INDEX idx_foo on bigTable (foo);
CREATE INDEX idx_baz on bigTable (baz);
Or
CREATE INDEX CONCURRENTLY idx_foo on bigTable (foo);
CREATE INDEX CONCURRENTLY idx_baz on bigTable (baz);
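One knob worth mentioning alongside the statement form: btree builds sort
their input, so raising maintenance_work_mem for the session speeds up
either variant. A minimal sketch (the 1GB value is illustrative, not from
the thread):

-- Give the index builds more sort memory, for this session only.
SET maintenance_work_mem = '1GB';
-- CONCURRENTLY avoids blocking writers, at the price of scanning the table twice.
CREATE INDEX CONCURRENTLY idx_foo ON bigTable (foo);
CREATE INDEX CONCURRENTLY idx_baz ON bigTable (baz);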
Dave Crooke wrote:
> An awesomely simple alternative is to just specify the extension as e.g. 5%
> of the existing table size: it starts by adding one block at a time for
> tiny tables, and once your table is over 20GB (where 5% reaches the 1GB
> segment size), it ends up adding a whole 1GB file and pre-allocating it. Very little
I agree with Tom: any reordering attempt is at best second-guessing the
filesystem and underlying storage.
However, having the ability to control the extent size would be a worthwhile
improvement for systems that walk and chew gum (write to lots of tables)
concurrently.
I'm thinking of Oracle's A
Alvaro Herrera wrote:
Maybe it would make more sense to try to reorder the fsync calls
instead.
The pretty obvious left-behind idea from the 8.3 spread-checkpoint
development was to similarly spread the fsync calls around. Given that
we know, for example, that Linux with ext3 is going to dump the
Tom Lane wrote:
> Alvaro Herrera writes:
> > Tom Lane wrote:
> >> That's not going to do anything towards reducing the actual I/O volume.
> >> Although I suppose it might be useful if it just cuts the number of
> >> seeks.
>
> > Oh, they had no problems with I/O volume. It was relation extension
> > lock that was heavily contended.
Alvaro Herrera writes:
> Tom Lane wrote:
>> That's not going to do anything towards reducing the actual I/O volume.
>> Although I suppose it might be useful if it just cuts the number of
>> seeks.
> Oh, they had no problems with I/O volume. It was relation extension
> lock that was heavily contended.
Tom Lane wrote:
> Alvaro Herrera writes:
> > Tom Lane wrote:
> >> Reorder to what, though? You still have the problem that we don't know
> >> much about the physical layout on-disk.
>
> > Well, to block numbers as a first step.
>
> fsync is a file-based operation, and we know exactly zip
Alvaro Herrera writes:
> Tom Lane wrote:
>> Reorder to what, though? You still have the problem that we don't know
>> much about the physical layout on-disk.
> Well, to block numbers as a first step.
fsync is a file-based operation, and we know exactly zip about the
relative positions of different files
Tom Lane wrote:
> Alvaro Herrera writes:
> > Maybe it would make more sense to try to reorder the fsync calls
> > instead.
>
> Reorder to what, though? You still have the problem that we don't know
> much about the physical layout on-disk.
Well, to block numbers as a first step.
However, th
Alvaro Herrera writes:
> Maybe it would make more sense to try to reorder the fsync calls
> instead.
Reorder to what, though? You still have the problem that we don't know
much about the physical layout on-disk.
regards, tom lane
Greg Stark wrote:
> On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane wrote:
> > "Pierre C" writes:
> >> Does PG issue checkpoint writes in "sorted" order?
> >
> > No. IIRC, a patch for that was submitted, and rejected because no
> > significant performance improvement could be demonstrated. We don't
> > have enough information about the actual on-disk layout to be very
> > intelligent about this.
Greg Stark wrote:
> On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane wrote:
> > "Pierre C" writes:
> >> Does PG issue checkpoint writes in "sorted" order?
> > No. IIRC, a patch for that was submitted, and rejected because no
> > significant performance improvement could be demonstrated.
If the OS fi
Pierre C wrote:
> Actually, I meant that in the case of a seq scan, PG will try to use
> just a few buffers (a ring) in shared_buffers instead of thrashing the
> whole buffer pool. But if there was actually a lot of free space in
> shared_buffers, do the pages stay, or do they not?
Pages inserted into t
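Whether the pages stayed can be checked empirically; a hedged sketch
using the contrib pg_buffercache module (assumes it is installed in the
database):

-- Count how many shared_buffers pages each relation occupies right now.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

Running it before and after the seq scan shows whether the scanned
table's pages displaced anything.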
Meena_Ramkumar wrote:
> How can I run vacuumdb and reindex for a Postgres DB on a non-stop
> server? Can it be done without shutting down the server? If so, what
> will the performance degradation percentage be?
To execute vacuum, you don't need to stop the server; vacuum runs as
just another server process.
If you are using a r
Autovacuum is your friend for minimal downtime. It is configurable to let you
adjust how invasive it will be, and you can have different settings per table
if you wish.
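A hedged sketch of what such per-table settings look like
(storage-parameter syntax, available from 8.4 on; the table name and
values are purely illustrative):

-- Vacuum this one table sooner, and throttle its I/O more gently.
ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0.05,  -- trigger at ~5% dead rows
    autovacuum_vacuum_cost_delay = 20       -- sleep 20 ms per cost-limit hit
);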
As for the reindex, why do you think you will be reindexing regularly?
On Mar 15, 2010, at 10:30 PM, Meena_Ramkumar wrote:
>
On Mon, Mar 15, 2010 at 11:30 PM, Meena_Ramkumar wrote:
>
> How can I run vacuumdb and reindex for a Postgres DB on a non-stop
> server? Can it be done without shutting down the server? If so, what
> will the performance degradation percentage be?
Vacuum can be tuned by the various vacuum_* parameters in
postgresql.conf.
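A hedged sketch of the cost-based delay knobs that reply points at
(session-level, affecting a manual VACUUM; the numbers are illustrative,
not recommendations):

-- Manual VACUUM runs unthrottled by default (vacuum_cost_delay = 0);
-- a non-zero delay makes it yield I/O to foreground queries.
SET vacuum_cost_delay = 20;   -- sleep 20 ms whenever the cost limit is reached
SET vacuum_cost_limit = 200;  -- page-cost budget accumulated between sleeps
VACUUM ANALYZE bigTable;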
Matthew Wakeling wrote:
> Matthew Wakeling wrote:
> > A second quite distinct issue is the general performance of GiST
> > indexes, which is also mentioned in the old thread linked from Open
> > Items. For that, we have a test case at
> > http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php for
How can I run vacuumdb and reindex for a Postgres DB on a non-stop
server? Can it be done without shutting down the server? If so, what
will the performance degradation percentage be?
On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane wrote:
> "Pierre C" writes:
>> Does PG issue checkpoint writes in "sorted" order?
>
> No. IIRC, a patch for that was submitted, and rejected because no
> significant performance improvement could be demonstrated. We don't
> have enough information about the actual on-disk layout to be very
> intelligent about this.
On Tue, Mar 16, 2010 at 1:48 PM, Pierre C wrote:
> Actually, I meant that in the case of a seq scan, PG will try to use just a
> few buffers (a ring) in shared_buffers instead of thrashing the whole
> buffer pool. But if there was actually a lot of free space in shared_buffers, do
> the pages stay, or do they not?
"Pierre C" writes:
> Does PG issue checkpoint writes in "sorted" order?
No. IIRC, a patch for that was submitted, and rejected because no
significant performance improvement could be demonstrated. We don't
have enough information about the actual on-disk layout to be very
intelligent about this.
I think the logic you are referring to is the clock-sweep buffer
accounting scheme. That just makes sure that the most popular pages
stay in the buffers. If your entire DB fits in the buffer pool then
it'll all get in there real fast.
Actually, I meant that in the case of a seq scan, PG will try to use
just a few buffers (a ring) in shared_buffers instead of thrashing the
whole buffer pool.
On Tue, Mar 16, 2010 at 7:24 AM, Pierre C wrote:
>
> I wonder about something, too: if your DB size is smaller than RAM, you
> could in theory set shared_buffers to a size larger than your DB provided
> you still have enough free RAM left for work_mem and OS writes management.
> How does this int
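The two numbers being compared there are easy to check; a minimal sketch:

-- How big is this database, and how much memory is shared_buffers set to?
SELECT pg_size_pretty(pg_database_size(current_database()));
SHOW shared_buffers;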
- My warnings about downsides related to checkpoint issues with larger
buffer pools aren't an opinion at all; that's a fact based on limitations
in how Postgres does its checkpoints. If we get something more like
Oracle's incremental checkpoint logic, this particular concern might go
away.
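The checkpoint behavior being contrasted with Oracle's is governed by a
couple of settings (8.3 and later); a sketch of inspecting them:

-- Fraction of the checkpoint interval over which writes are spread out.
SHOW checkpoint_completion_target;
-- WAL segments between automatic checkpoints; raising this spaces them out.
SHOW checkpoint_segments;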
cr...@postnewspapers.com.au (Craig Ringer) writes:
> On 13/03/2010 5:54 AM, Jeff Davis wrote:
>> On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
>>> of course. You can always explicitly open a transaction on the remote
>>> side over dblink, do work, and commit it at the last possible moment.
Dave Crooke wrote:
There seems to be a wide range of opinion on this. I am new to PG
and grew up on Oracle, where more SGA is always a good thing ... I
know people who run Oracle on 2TB Superdomes with titanic SGA sizes
to keep the whole DB in RAM. I'd be using a 40GB+ Oracle SGA on that