mthing you need. (Oh I think checkpoints
might come into this as well but I'm not sure how)
Or at least that's my understanding...
So if your base backup takes a while I would advise running vacuum
afterwards. But then if you're running autovacuum there is probably very
little need to worry.
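For example (plain VACUUM rather than VACUUM FULL; the table name is
illustrative):

    VACUUM ANALYZE;            -- whole database
    VACUUM ANALYZE my_table;   -- or just the busiest table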
Peter Childs
2008/10/3 Peter Eisentraut <[EMAIL PROTECTED]>:
> Peter Childs wrote:
>>
>> I have a problem where by an insert on a "large" table will sometimes
>> take longer than usual.
>
>> I think the problem might have something to do with checkpoints,
>
>
away after a longer insert and not found loads of
space in the fsm.
I'm using 8.3.1 (I thought I'd upgraded to 8.3.3 but it does not look
like the upgrade worked). I'm more than happy to upgrade; I just have to
find the down time (even a few seconds can be difficult).
Any help would be
2008/4/28 Gauri Kanekar <[EMAIL PROTECTED]>:
> All,
>
> We have a table "table1" which gets inserts and updates daily in high
> numbers, because of which its size is increasing and we have to vacuum it
> every alternate day. Vacuuming "table1" takes almost 30 min and during that
> time the site is down.
On 03/01/2008, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> "Peter Childs" <[EMAIL PROTECTED]> writes:
> > Using Postgresql 8.1.10 every so often I get a transaction that takes a
> > while to commit.
>
> > I log everything that takes over 50
Using Postgresql 8.1.10 every so often I get a transaction that takes a
while to commit.
I log everything that takes over 500ms and quite regularly it says things
like
707.036 ms statement: COMMIT
Is there any way to speed this up?
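(For anyone wanting to reproduce the logging: it is the standard
statement-duration setting in postgresql.conf, threshold in milliseconds.

    log_min_duration_statement = 500   # log anything over 500 ms

On 8.1, slow COMMITs like this are usually fsync or checkpoint related, so
checkpoint_segments and wal_buffers are the usual knobs to experiment with.)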
Peter Childs
On 25/11/2007, Pablo Alcaraz <[EMAIL PROTECTED]> wrote:
>
> Tom Lane wrote:
> > "Peter Childs" <[EMAIL PROTECTED]> writes:
> >
> >> On 25/11/2007, Erik Jones <[EMAIL PROTECTED]> wrote:
> >>
> >>>> Does the pg_dump
On 25/11/2007, Erik Jones <[EMAIL PROTECTED]> wrote:
>
> On Nov 25, 2007, at 10:46 AM, Pablo Alcaraz wrote:
>
> > Hi all,
> >
> > I read that pg_dump can run while the database is being used and makes
> > "consistent backups".
> >
> > I have a huge and *heavy* selected, inserted and updated databas
On 14/09/2007, Peter Childs <[EMAIL PROTECTED]> wrote:
>
>
>
> On 13/09/2007, Greg Smith <[EMAIL PROTECTED]> wrote:
> >
> >
> > Every time the all scan writes a buffer that is frequently used, that
> > write has a good chance that it was wasted bec
thought oh just one of those things, but if they
can be reduced by changing a few config variables that would be great. I'm
just trying to work out what figures are worth trying, to see if I can reduce
them.
From time to time I get commits that take 6 or 7 seconds but not all the
time.
I'm currently working with the defaults.
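For reference, these are (as far as I recall) the shipped 8.1/8.2 defaults
for the settings this thread keeps coming back to; the bgwriter_all_* ones
were removed in 8.3:

    checkpoint_segments = 3        # more segments = less frequent checkpoints
    checkpoint_timeout = 300       # seconds
    bgwriter_delay = 200           # ms between bgwriter rounds
    bgwriter_all_percent = 0.333   # share of the buffer pool scanned per round
    bgwriter_all_maxpages = 5      # cap on pages written per round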
Peter Childs
On 05/09/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
>
> "Gregory Stark" <[EMAIL PROTECTED]> writes:
>
> > "JS Ubei" <[EMAIL PROTECTED]> writes:
> >
> >> I need to improve a query like :
> >>
> >> SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;
> >...
> > I don't think you'll fi
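The workaround that usually gets suggested for this on 8.x, assuming a
composite index on (id, the_date) (a sketch, not the list's final answer):

    CREATE INDEX my_table_id_date_idx ON my_table (id, the_date);

    SELECT t.id,
           (SELECT min(the_date) FROM my_table m WHERE m.id = t.id) AS min_date,
           (SELECT max(the_date) FROM my_table m WHERE m.id = t.id) AS max_date
    FROM (SELECT DISTINCT id FROM my_table) AS t;

Each correlated subquery can then become a single index probe per id instead
of one big scan.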
On 30/05/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Wed, 30 May 2007, Jonah H. Harris wrote:
> On 5/29/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
>> AFAIK you can't RAID1 more than two drives, so the above doesn't make
>> sense to me.
>
> Yeah, I've never seen a way to RAID-1 m
On 22 May 2007 01:23:03 -0700, valgog <[EMAIL PROTECTED]> wrote:
I found several posts about INSERT/UPDATE performance in this group,
but actually it was not really what I am searching for an answer to...
I have a simple reference table WORD_COUNTS that contains the count of
words that appear in a
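Before INSERT ... ON CONFLICT existed, the usual idiom for a counter table
like this was update-then-insert (column names are assumptions for
illustration):

    UPDATE word_counts SET count = count + 1 WHERE word = 'example';
    INSERT INTO word_counts (word, count)
    SELECT 'example', 1
    WHERE NOT EXISTS (SELECT 1 FROM word_counts WHERE word = 'example');

(Not race-free on its own; a unique index on word plus a retry on error
closes the gap.)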
On 26/02/07, Pallav Kalva <[EMAIL PROTECTED]> wrote:
Hi,
I am in the process of cleaning up one of our big tables; this table
has 187 million records and we need to delete around 100 million of them.
I am deleting around 4-5 million of them daily in order to catch up
with vacuum and als
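A sketch of the batched-delete pattern described above (table, column, and
cutoff are assumptions):

    DELETE FROM big_table
    WHERE id IN (SELECT id FROM big_table
                 WHERE created < DATE '2006-01-01'
                 LIMIT 5000000);
    VACUUM big_table;   -- reclaim the space between batches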
On 12/01/07, Tobias Brox <[EMAIL PROTECTED]> wrote:
We have a table with a timestamp attribute (event_time) and a state flag
which usually changes value around the event_time (it goes to 4). Now
we have more than two years of events in the database, and around 5k of
future events.
It is importa
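The answer that usually comes back for this shape of data is a partial index
over the small "not yet final" slice (table name assumed; the predicate
follows the description above):

    CREATE INDEX events_pending_idx ON events (event_time)
        WHERE state <> 4;

The index stays tiny (just the ~5k future events) no matter how much history
accumulates.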
On 20/12/06, Steinar H. Gunderson <[EMAIL PROTECTED]> wrote:
On Tue, Dec 19, 2006 at 11:19:39PM -0800, Brian Herlihy wrote:
> Actually, I think I answered my own question already. But I want to
> confirm - Is the GROUP BY faster because it doesn't have to sort results,
> whereas DISTINCT must pr
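For reference, the two spellings being compared; on releases before 8.4
DISTINCT was always implemented with a sort, while GROUP BY could use a
HashAggregate:

    SELECT DISTINCT col FROM t;       -- sort-based on older releases
    SELECT col FROM t GROUP BY col;   -- may hash instead, skipping the sort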
On 24/11/06, Arnau <[EMAIL PROTECTED]> wrote:
Hi all,
I have a table with statistics with more than 15 million rows. I'd
like to delete the oldest statistics and this can be about 7 million
rows. Which method would you recommend for this? I'd also be
interested in calculating some kind of
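When the rows to delete are a large fraction of the table, the other
standard answer is to copy the keepers and swap, instead of DELETE plus
VACUUM FULL (a sketch; names, cutoff, and index are assumptions):

    BEGIN;
    CREATE TABLE statistics_new AS
        SELECT * FROM statistics
        WHERE stat_time >= now() - interval '1 year';
    CREATE INDEX statistics_new_time_idx ON statistics_new (stat_time);
    DROP TABLE statistics;
    ALTER TABLE statistics_new RENAME TO statistics;
    COMMIT;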
On 28/08/06, Michal Taborsky - Internet Mall <[EMAIL PROTECTED]> wrote:
Markus Schaber wrote:
> Hi, Michal,
>
> Michal Taborsky - Internet Mall wrote:
>
>> When using this view, you are interested in tables, which have the
>> "bloat" column higher that say 2.0 (in freshly dump/restored/analyz
Current limits are: 4 page slots, 1000 relations, using 299 KB.
If the required page slots (9760 in my case) go above the current
limit (4 in my case) you will need to do a vacuum full to reclaim
the free space. (CLUSTER of the relevant tables may work.)
If you run VACUUM VERBOSE regularly you can check that you are vacuuming
often enough and that your free space map is big enough to hold your
free space.
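The numbers come straight out of VACUUM VERBOSE, and the limits live in
postgresql.conf (pre-8.4 settings; values here are illustrative):

    VACUUM VERBOSE;
    -- the last lines report something like:
    --   9760 page slots are required to track all free space.
    --   Current limits are: 4 page slots, 1000 relations, using 299 KB.

    max_fsm_pages = 20000      # must exceed the required page slots
    max_fsm_relations = 1000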
Peter Childs
cess.
Hmm, but then you would have to include Access's vacuum too. I think you
will find "Tools -> Database Utils -> Compact Database" performs
a similar purpose and is just as important, as I've seen many Access
databases bloat in my time.
Peter Childs
On 18/10/05, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> [Please copy the mailing list on replies so others can participate
> in and learn from the discussion.]
>
> On Tue, Oct 18, 2005 at 07:09:08PM +, Rodrigo Madera wrote:
> > > What language and API are you using?
> >
> > I'm using libpqxx. A
bugs and
7.3.4 was produced within 24 hours. (Must upgrade at some point.)
Oh yes, indexes have problems (I think this is fixed in later
versions...) so you might want to try REINDEX.
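The commands in question (REINDEX locks what it rebuilds, so it needs a
quiet moment; names are illustrative):

    REINDEX INDEX my_index;   -- rebuild one index
    REINDEX TABLE my_table;   -- rebuild all indexes on a table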
They are all worth a try; it's a brief summary of what's been on
the performance list for weeks and weeks now.
Peter Childs
would be greatly appreciated. Thanks for your help!
> >> Seth
> >>
> >>
> > Try group by instead. I think this is an old bug; it's fixed in
> > 7.3.2, which I'm using.
> >
> > Peter Childs
> >
> >
would be greatly appreciated. Thanks for your help!
> Seth
>
>
Try group by instead. I think this is an old bug; it's fixed in
7.3.2, which I'm using.
Peter Childs
[EMAIL PROTECTED]:express=# explain select distinct
For my large table:
select max(field) from table; (5264.21 msec)
select field from table order by field limit 1; (54.88 msec)
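(The reason, for anyone finding this later: releases before 8.1 could not
turn max() into an index probe. Given a btree index on the column, the
hand-written equivalent of max is the descending variant; 8.1 and later do
this rewrite automatically. Placeholder names as in the timings above:

    SELECT field FROM table ORDER BY field DESC LIMIT 1;
)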
Peter Childs
On Tue, 5 Aug 2003, Shridhar Daithankar wrote:
> On 5 Aug 2003 at 14:15, Peter Childs wrote:
>
> > On Tue, 5 Aug 2003, Shridhar Daithankar wrote:
> >
> > > On 5 Aug 2003 at 8:09, Jeff wrote:
> > >
> > > I would suggest autovacuum daemon which is i
I think that many vacuums may be slowing down my database....
Peter Childs
d
> trying it?
If there is such a daemon, what is it called? I can't see it.
Is it part of gborg?
Peter Childs