On 29.12.2016 16:10, Tom Lane wrote:
Adrian Klaver writes:
> On 12/28/2016 11:54 PM, Gerhard Wiesinger wrote:
>> vacuumdb --analyze-only --all --verbose
>> INFO: analyzing "public.log"
>> INFO: "log": scanned 3 of 30851 pages, containing 3599899 live rows
>> and 0 dead rows; 3 rows in sample, 3702016 estimated total rows
>>
On 12/28/2016 11:54 PM, Gerhard Wiesinger wrote:
Hello,
PostgreSQL 9.6.1: after a pg_dump/restore procedure it scans all pages
(at least for some of the tables; the analyze-only switch is specified).
I would expect that only the sample rows are scanned.
"log_details": scanned 2133350 of 2133350 pages
vacuumdb --analyze-only --all --verbose
Alexander Shutyaev wrote:
> We have the following issue. When we use vacuumdb (NOT full) on
> our postgres database (~320Gb) it takes up ~10Gb of disk space
> which is never returned. Why is the space not returned?
Does that happen every time? (i.e., if you run vacuumdb 10 times
in a row while
Hi all!
We have the following issue. When we use vacuumdb (NOT full) on our
postgres database (~320Gb) it takes up ~10Gb of disk space which is never
returned. Why is the space not returned?
Thanks in advance!
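One way to narrow down where such growth lands (a sketch; it assumes a reachable server, that psql and vacuumdb are on the PATH, and `sizes_before.txt`/`sizes_after.txt` are just scratch files) is to snapshot per-database sizes around the run:

```shell
# Plain (non-FULL) VACUUM only marks dead space reusable inside the
# existing files; it rarely shrinks them. Comparing sizes around a run
# shows which databases actually grew.
psql -Atc "SELECT datname, pg_database_size(datname)
             FROM pg_database ORDER BY datname" > sizes_before.txt

vacuumdb --all --analyze

psql -Atc "SELECT datname, pg_database_size(datname)
             FROM pg_database ORDER BY datname" > sizes_after.txt

diff sizes_before.txt sizes_after.txt
```

Persistent growth that plain vacuum never gives back is normal to a degree (free-space bookkeeping, WAL); only VACUUM FULL or a dump/restore actually returns table space to the OS.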
thank you depesz, your help was very useful!
On 12.05.2011 13:19, hubert depesz lubaczewski wrote:
On Thu, May 12, 2011 at 10:56:20AM +0200, Andreas Laggner wrote:
> Hi list,
>
> i always vacuumed my postgresql automatically with crontab, because
> autovacuum is not suitable for my applications. With version 8.2 it
> works perfect for me with this command line:
>
> 00 02 * * * postgres /usr
Andreas Laggner writes:
> Hi list,
>
> i always vacuumed my postgresql automatically with crontab, because
> autovacuum is not suitable for my applications. With version 8.2 it
> works perfect for me with this command line:
>
> 00 02 * * * postgres /usr/bin/vacuumdb -d gis -z
>
> But not with
Hi list,
i always vacuumed my postgresql automatically with crontab, because
autovacuum is not suitable for my applications. With version 8.2 it
works perfect for me with this command line:
00 02 * * * postgres /usr/bin/vacuumdb -d gis -z
But not with 9.0, because vacuumdb now wants to ha
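With 9.x the most common stumbling block for this setup is authentication: run non-interactively, vacuumdb prompts for a password and hangs under cron. A sketch of the 9.x equivalent of the 8.2 crontab line (paths and the password-file contents are examples, not from the thread):

```shell
# /etc/crontab entry: -w/--no-password makes vacuumdb fail instead of
# prompting; credentials come from ~postgres/.pgpass (mode 0600), e.g.
#   localhost:5432:gis:postgres:secret
00 02 * * * postgres /usr/bin/vacuumdb -d gis -z -w
```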
Carl von Clausewitz writes:
>>> sqlstate=23505 ERROR: duplicate key value violates unique constraint
>>> "pg_index_indexrelid_index"
>>> sqlstate=23505 DETAIL: Key (indexrelid)=(2678) already exists.
After a considerable amount of fooling around I've been able to
reproduce this and identify the c
Everything was fine, the reordered script fixed everything. Thanks all.
Regards,
Carl
2011/4/14 Carl von Clausewitz
> OK, thanks for the information. I made a mistake and will change the
> script, but first I will try what Vidhya suggested. Let's see how it
> goes.
>
> Regards,
> Carl
>
> 201
OK, thanks for the information. I made a mistake and will change the script,
but first I will try what Vidhya suggested. Let's see how it goes.
Regards,
Carl
2011/4/14 Tom Lane
> Carl von Clausewitz writes:
> > Maintenance:
> > #!/bin/sh
> > date >> /var/log/postgresql_maintenance.log
> > /u
Carl von Clausewitz writes:
> Maintenance:
> #!/bin/sh
> date >> /var/log/postgresql_maintenance.log
> /usr/local/bin/reindexdb --all --username=cvc >>
> /var/log/postgresql_maintenance.log
> echo "Reindex done" >> /var/log/postgresql_maintenance.log
> /usr/local/bin/vacuumdb --all --full --analyz
Hi,
see the two scripts attached. First one is the postgres_maintenance.sh, and
the second is the postgres_backup.sh. I've attached it, and copied, because
of the antivirus filters :-)
regards,
Carl
Maintenance:
#!/bin/sh
date >> /var/log/postgresql_maintenance.log
/usr/local/bin/reindexdb --all
Gipsz Jakab writes:
> Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
> maintenance script has started (vacuumdb --all --full --analyze), and
> stopped with this error:
> sqlstate=23505 ERROR: duplicate key value violates unique constraint
> "pg_index_indexrelid_index"
> sqlsta
Gipsz,
We got this error too. What we did was run VACUUM ANALYZE VERBOSE and
after that reindexed the DB, and we didn't see the error cropping up again.
Regards
Vidhya
On Thu, Apr 14, 2011 at 5:26 PM, Gipsz Jakab wrote:
> Dear List,
>
> Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
Ok, thanks, I'll try at night.
Regards,
Carl
2011/4/14 Vidhya Bondre
> Gipsz,
>
> We got this error too. What we did was run VACUUM ANALYZE VERBOSE and
> after that reindexed the DB, and we didn't see the error cropping up again.
>
> Regards
> Vidhya
>
> On Thu, Apr 14, 2011 at 5:26 PM, Gipsz Jakab wrote
Dear List,
Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
maintenance script has started (vacuumdb --all --full --analyze), and
stopped with this error:
sqlstate=23505 ERROR: duplicate key value violates unique constraint
"pg_index_indexrelid_index"
sqlstate=23505 DETAIL: Key
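The recovery sequence reported elsewhere in this thread (a plain vacuum-analyze, then a reindex, then retrying the maintenance) can be scripted roughly as below; this is a sketch, not the thread's exact commands, and assumes superuser access, a maintenance window, and a fresh backup taken first:

```shell
# 1. Plain vacuum + analyze (no FULL): safe, takes no exclusive locks.
vacuumdb --all --analyze --verbose
# 2. Rebuild indexes, including system-catalog indexes such as the
#    pg_index_indexrelid_index that the error names.
reindexdb --all
# 3. Only then retry the original maintenance command.
vacuumdb --all --full --analyze
```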
On Tue, Feb 9, 2010 at 1:55 PM, John R Pierce wrote:
> Guillaume Lelarge wrote:
>>>
>>> is this a 64bit postgres build?
>>>
>>> if not, you're probably running out of virtual address space in the 32
>>> bit user space, which is limited to like 2gb.
>>>
>>>
>>
>> IIRC, the virtual address space in
Guillaume Lelarge wrote:
is this a 64bit postgres build?
if not, you're probably running out of virtual address space in the 32
bit user space, which is limited to like 2gb.
IIRC, the virtual address space in 32bit platforms is 4GB.
it is, but within that 4gb, the kernel uses the to
Magnus Hagander wrote:
On Tue, Feb 9, 2010 at 09:53, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 09:35, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actua
On Tue, Feb 9, 2010 at 09:53, David Kerr wrote:
> Guillaume Lelarge wrote:
>>
>> On 09/02/2010 09:35, David Kerr wrote:
>>>
>>> Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
>
> David Kerr wrote:
maintenance_work_mem = 1GB
>>>
Guillaume Lelarge wrote:
On 09/02/2010 09:35, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your
On 09/02/2010 09:35, David Kerr wrote:
> Guillaume Lelarge wrote:
>> On 09/02/2010 05:49, John R Pierce wrote:
>>> David Kerr wrote:
>> maintenance_work_mem = 1GB
> So evidently, when it tries to actually allocate 1GB, it can't do it.
> Ergo, that setting is too high for your mach
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your machine.
...
seems like i've got 2GB free.
is this a 64bit postgre
On 09/02/2010 05:49, John R Pierce wrote:
> David Kerr wrote:
maintenance_work_mem = 1GB
>>>
>>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>>> Ergo, that setting is too high for your machine.
>>> ...
>>
>> seems like i've got 2GB free.
>
>
> is this a 64bit pos
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your machine.
...
seems like i've got 2GB free.
is this a 64bit postgres build?
if not, you're probably running out of virtual address spa
Tom Lane wrote:
David Kerr writes:
Tom Lane wrote:
David Kerr writes:
I get:
vacuumdb: vacuuming of database "assessment" failed: ERROR: out of memory
DETAIL: Failed on request of size 1073741820.
What have you got maintenance_work_mem set to?
maintenance_work_mem = 1GB
So evidently,
David Kerr writes:
> Tom Lane wrote:
>> David Kerr writes:
>>> I get:
>>> vacuumdb: vacuuming of database "assessment" failed: ERROR: out of memory
>>> DETAIL: Failed on request of size 1073741820.
>>
>> What have you got maintenance_work_mem set to?
> maintenance_work_mem = 1GB
So evidently
Tom Lane wrote:
David Kerr writes:
I'm getting error:
When I try
vacuumdb -z assessment
or
vacuumdb assessment
I get:
vacuumdb: vacuuming of database "assessment" failed: ERROR: out of memory
DETAIL: Failed on request of size 1073741820.
What have you got maintenance_work_mem set to?
David Kerr writes:
> I'm getting error:
> When I try
> vacuumdb -z assessment
> or
> vacuumdb assessment
> I get:
> vacuumdb: vacuuming of database "assessment" failed: ERROR: out of memory
> DETAIL: Failed on request of size 1073741820.
What have you got maintenance_work_mem set to?
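For what it's worth, the failed request size lines up exactly with the setting: vacuum allocates its dead-tuple array in a single chunk of up to maintenance_work_mem, and 1GB is 2^30 bytes. A quick sanity check with shell arithmetic (no server needed):

```shell
# maintenance_work_mem = 1GB makes vacuum request one ~1 GiB
# allocation up front; "Failed on request of size 1073741820"
# is that chunk, a few bytes under 2^30.
gib=$((1024 * 1024 * 1024))
echo "1 GiB               = $gib bytes"
echo "failed request size = 1073741820 bytes (difference: $((gib - 1073741820)))"
```

On a 32-bit build the process often cannot find a contiguous gigabyte of free address space even when plenty of memory is free overall; lowering the setting (e.g. `maintenance_work_mem = 256MB` in postgresql.conf, then reloading) is the usual fix.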
I'm getting error:
When I try
vacuumdb -z assessment
or
vacuumdb assessment
I get:
vacuumdb: vacuuming of database "assessment" failed: ERROR: out of memory
DETAIL: Failed on request of size 1073741820.
The only way i can actually analyze the DB is if i do a vacuumdb -f
The database is curren
Hi Scott,
On Sat, Nov 28, 2009 at 3:12 PM, Irene Barg wrote:
> Hi Scott,
>
> Scott Marlowe wrote:
>>
>> On Fri, Nov 27, 2009 at 2:17 PM, Irene Barg wrote:
>>>
>>> I've had a simple update running for over 4 hours now (see results from
>>> pg_top below). The sql is:
>>
>> Have you looked in
On Sat, Nov 28, 2009 at 3:12 PM, Irene Barg wrote:
> Hi Scott,
>
> Scott Marlowe wrote:
>>
>> On Fri, Nov 27, 2009 at 2:17 PM, Irene Barg wrote:
>>>
>>> I've had a simple update running for over 4 hours now (see results from
>>> pg_top below). The sql is:
>>
>> Have you looked in pg_locks and pg_
Irene Barg wrote:
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     0.01     0.00    0.00  99.99
Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sda        0.00    0.60  0.00  1.00    0.00   12.80
Hi Scott,
Scott Marlowe wrote:
On Fri, Nov 27, 2009 at 2:17 PM, Irene Barg wrote:
I've had a simple update running for over 4 hours now (see results from
pg_top below). The sql is:
Have you looked in pg_locks and pg_stat_activity?
Yes, I did look at pg_stat_activity and did not see anythin
On Fri, Nov 27, 2009 at 2:17 PM, Irene Barg wrote:
> I've had a simple update running for over 4 hours now (see results from
> pg_top below). The sql is:
Have you looked in pg_locks and pg_stat_activity?
> The database has 1016789 records, vacuumdb -z is run once a day. I have not
> run 'reindex
Le vendredi 27 novembre 2009 à 22:17:50, Irene Barg a écrit :
> I thought 'vacuumdb -z dbname' also reindexes; is this true?
>
No. vacuumdb -z is a VACUUM ANALYZE. Moreover, vacuumdb has no option to do a
REINDEX.
--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com
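Since `vacuumdb -z` issues only VACUUM ANALYZE, any reindexing has to be scheduled as a separate step, for example (a sketch; it assumes a version that ships `reindexdb`, and `mydb`/`mytable` are placeholder names):

```shell
# Daily statistics/space maintenance; no index rebuild happens here.
vacuumdb -z mydb
# Separate, occasional index rebuild; REINDEX takes exclusive locks,
# so run it during a quiet window.
reindexdb mydb              # every index in the database
reindexdb -t mytable mydb   # or just one table's indexes
```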
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
I thought 'vacuumdb -z dbname' also reindexes; is this true?
I've had a simple update running for over 4 hours now (see results from
pg_top below). The sql is:
The database has 1016789 records, vacuumdb -z is run once a day. I have
not run 'reindexdb' in weeks. The system is a:
2xIntel 4-core
2009/11/18 Tech 2010 :
> Hello!
>
> How do I locate this pointer, and how do I zero it so I can access
> the rest of the data?
>
> "zero_damaged_pages = true" did not help in this case, because I
> always get same numbers being zeroed. This is with 8.4.0 and 8.4.1.
>
> Thanks.
>
You probably j
Hello!
How do I locate this pointer, and how do I zero it so I can access
the rest of the data?
"zero_damaged_pages = true" did not help in this case, because I
always get same numbers being zeroed. This is with 8.4.0 and 8.4.1.
Thanks.
On Thu, Oct 1, 2009 at 5:02 PM, Tom Lane wrote:
> APseudoUtopia writes:
>>> Here's what happened:
>>>
>>> $ vacuumdb --all --full --analyze --no-password
>>> vacuumdb: vacuuming database "postgres"
>>> vacuumdb: vacuuming database "web_main"
>>> vacuumdb: vacuuming of database "web_main" failed:
Teodor Sigaev writes:
> ginHeapTupleFastCollect and ginEntryInsert both checked the tuple's size
> against TOAST_INDEX_TARGET, but ginHeapTupleFastCollect checks it without
> one ItemPointer, unlike ginEntryInsert. So ginHeapTupleFastCollect could
> produce a tuple 6 bytes larger than allowed by g
Teodor Sigaev writes:
>> Will you apply this, or do you want me to?
> I'm not able to provide a good error message in good English :(
OK, I'll take care of it later today.
regards, tom lane
Looks reasonable, although since the error is potentially user-facing
I think we should put a bit more effort into the error message
(use ereport and make it mention the index name, at least --- is there
any other useful information we could give?)
Only the sizes, as is done in BTree, I suppose.
W
Teodor Sigaev writes:
> Patch removes checking of TOAST_INDEX_TARGET and uses checking only by
> GinMaxItemSize, which is greater than TOAST_INDEX_TARGET. All size checks
> are now in GinFormTuple.
Looks reasonable, although since the error is potentially user-facing
I think we should put a bi
APseudoUtopia writes:
Here's what happened:
$ vacuumdb --all --full --analyze --no-password
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming database "web_main"
vacuumdb: vacuuming of database "web_main" failed: ERROR: huge tuple
PostgreSQL 8.4.0 on i386-portbld-freebsd7.2, comp
APseudoUtopia writes:
>> Here's what happened:
>>
>> $ vacuumdb --all --full --analyze --no-password
>> vacuumdb: vacuuming database "postgres"
>> vacuumdb: vacuuming database "web_main"
>> vacuumdb: vacuuming of database "web_main" failed: ERROR: huge tuple
> PostgreSQL 8.4.0 on i386-portbld-
Scott Marlowe wrote:
> Wow, that's pretty slow. I'd assumed it was a semi-automated process
> and the new version would be out now, 3 weeks later. At least look
> through the release notes to see if any mention is made of this bug
> being fixed in 8.4.1 I guess.
Both files on which that erro
On Thu, Oct 1, 2009 at 2:27 PM, APseudoUtopia wrote:
> On Thu, Oct 1, 2009 at 4:21 PM, Scott Marlowe wrote:
>> On Thu, Oct 1, 2009 at 1:12 PM, APseudoUtopia
>> wrote:
>>
>>> Sorry, I failed to mention:
>>>
>>> PostgreSQL 8.4.0 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC)
>>> 4.2.1 20070
On Thu, Oct 1, 2009 at 4:21 PM, Scott Marlowe wrote:
> On Thu, Oct 1, 2009 at 1:12 PM, APseudoUtopia wrote:
>
>> Sorry, I failed to mention:
>>
>> PostgreSQL 8.4.0 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC)
>> 4.2.1 20070719 [FreeBSD], 32-bit
>
> Have you tried updating to 8.4.1 to see
On Thu, Oct 1, 2009 at 1:12 PM, APseudoUtopia wrote:
> Sorry, I failed to mention:
>
> PostgreSQL 8.4.0 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC)
> 4.2.1 20070719 [FreeBSD], 32-bit
Have you tried updating to 8.4.1 to see if that fixes the problem?
On Thu, Oct 1, 2009 at 3:10 PM, APseudoUtopia wrote:
> Hey list,
>
> After some downtime of my site while completing rigorous database
> maintenance, I wanted to make sure all the databases were fully
> vacuumed and analyzed. I do run autovacuum, but since I made several
> significant changes, I w
Hey list,
After some downtime of my site while completing rigorous database
maintenance, I wanted to make sure all the databases were fully
vacuumed and analyzed. I do run autovacuum, but since I made several
significant changes, I wanted to force a vacuum before I brought my
site back online.
He
Richard Huxton <[EMAIL PROTECTED]> writes:
>> First try was using a file system copy to reduce downtime as it was two
>> same 7.4.x version but the result was not working (maybe related to
>> architecture change 32bits => 64 bits) so I finally dropped the db and
>> performed an dump/restore. I t
AlJeux wrote:
Richard Huxton wrote:
1. Have you had crashes or other hardware problems recently?
No crash but we changed our server (<= seems the cause).
First try was using a file system copy to reduce downtime as it was two
same 7.4.x version but the result was not working (maybe relate
Richard Huxton wrote:
Alain wrote:
Hello,
System: Red Hat Linux 4 64bits running postgres-7.4.16 (production)
Initial problem:
# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR: invalid memory
alloc request size 9000688640
After so
Richard Huxton <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> FWIW, a look in the source code shows that the 'corrupted item pointer'
>> message comes only from PageIndexTupleDelete, so that indicates a
>> damaged index which should be fixable by reindexing.
> Tom - could it be damage to a share
Tom Lane wrote:
Richard Huxton <[EMAIL PROTECTED]> writes:
Alain Peyrat wrote:
Initial problem:
# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR: invalid memory alloc
request size 9000688640
After some research, it seems to be related to a corr
Richard Huxton <[EMAIL PROTECTED]> writes:
> Alain Peyrat wrote:
>> Initial problem:
>>
>> # pg_dump -O dbname -Ft -f /tmp/database.tar
>> pg_dump: query to get table columns failed: ERROR: invalid memory alloc
>> request size 9000688640
>>
>> After some research, it seems to be related to a co
Alain Peyrat wrote:
Hello,
System: Red Hat Linux 4 64bits running postgres-7.4.16 (production)
Initial problem:
# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR: invalid memory alloc
request size 9000688640
After some research, it see
Hello,
System: Red Hat Linux 4 64bits running postgres-7.4.16 (production)
Initial problem:
# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR: invalid memory
alloc request size 9000688640
After some research, it seems to be related t
Tom Lane wrote:
"Matthew T. O'Connor" writes:
PostgreSQL 8.1.0 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC)
3.3.3 20040412 (Red Hat Linux 3.3.3-7)
... and this should definitely make you nervous. We don't release
update versions for idle amusement. Get onto 8.1.3 and see if
"Matthew T. O'Connor" writes:
> no tables in it. I logged into my server and reran the vacuumdb -a -z
> command and it went though with no problem. I also checked my log file
> and it shows that there have been 25 out of memory errors on my server
> today.
> My question is: Is this normal?
Hello all,
I run a nightly "vacuumdb -a -z" on my production server. The output of
the command is emailed to me every night. Today while checking my email
I received this:
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming of database "postgres" failed: ERROR: out of memory
DETAIL
go wrote:
> Hi, pgsql-general.
>
> Explain me please the difference between
> Vacuum full and Vacuum freeze
>
See:
http://www.postgresql.org/docs/8.0/interactive/sql-vacuum.html
--
Mario Günterberg
mattheis. werbeagentur
IT Engineer / Project Manager
Zillestrasse 105a. D - 10585 Berlin
Hi, pgsql-general.
Explain me please the difference between
Vacuum full and Vacuum freeze
--
Have a nice day!
go mailto:[EMAIL PROTECTED]
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
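Summarizing the VACUUM reference page the reply points to: the two options solve different problems. Illustrative invocations (a sketch; `mydb` is a placeholder database name):

```shell
# VACUUM FULL compacts each table and can return freed disk space to
# the operating system; it takes an exclusive lock per table while
# doing so.
psql -d mydb -c "VACUUM FULL;"

# VACUUM FREEZE is an ordinary vacuum that also freezes every visible
# row, protecting against transaction-ID wraparound; no compaction,
# no space returned to the OS.
psql -d mydb -c "VACUUM FREEZE;"
```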
I have three scripts that I am running to do pg_dumpall and a vacuum on
my server.
one is run every night except Sundays.
one is run every Sunday night.
one is run the first of each month.
After I ftp the backup to a standby server the vacuum is run.
The database is pretty small; it only grows by
On Wed, Dec 08, 2004 at 09:45:53 -0800,
Mark <[EMAIL PROTECTED]> wrote:
> Hi,
> What are recommendations about running vacuumdb?
You need to VACUUM tables to reclaim space created by DELETE and UPDATE
commands. You need to run ANALYZE on tables when their distribution of
data changes. If you are do
Hi,
What are recommendations about running vacuumdb?
How frequently does it need to be executed, and how will I know when I
have to run it?
Can I run vacuumdb on a production system, or do I need to do it on a DB
with no users connected?
Thanks,
Mark.
Steve Crawford <[EMAIL PROTECTED]> writes:
>>> I tracked down the process that was "idle in transaction" and it
>>> was a pg_dump process running on another machine.
>>
>> What was it waiting on?
> Beats the heck out of me. We periodically dump some selected small
> tables via a script using:
>
On Monday 26 July 2004 2:18 pm, Tom Lane wrote:
> Steve Crawford <[EMAIL PROTECTED]> writes:
> > A couple hundred processes were showing as "startup waiting" and
> > one was "idle in transaction". The process in the "VACUUM
> > waiting" state was the only one connected to that database - all
> > ot
Steve Crawford <[EMAIL PROTECTED]> writes:
> A couple hundred processes were showing as "startup waiting" and one
> was "idle in transaction". The process in the "VACUUM waiting" state
> was the only one connected to that database - all other connections
> were to other databases.
I suspect wha
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Steve Crawford
> Sent: Monday, July 26, 2004 1:23 PM
> To: [EMAIL PROTECTED]
> Subject: [GENERAL] vacuumdb hanging database cluster
>
>
> When I run:
> vacuumdb --
When I run:
vacuumdb --full --all --analyze --quiet
on my database cluster it will complete in < 2 minutes (this cluster
is a few million total rows and ~2GB).
After testing, I set this up as an off-hours cron job and it worked
fine for several days then hung the whole database. After my pager
On Mon, May 10, 2004 at 07:49:42PM -0400, Tom Lane wrote:
>
> Hmm, I would expect that behavior for an overwrite-in-place REINDEX,
> but 7.2 only seems to use overwrite-in-place for critical system
> catalogs. What were you reindexing exactly? Were you running a
> standalone backend?
Not as far
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> Dunno if this is any help, but on a 7.2 system I saw a REINDEX which
> was interrupted leave the index at least partially working. We ended
> up with an index which seemed fine, but which didn't contain certain
> rows (so those rows were not visible wh
Tom Lane wrote:
|> Indicating that they should produce the same results, but that they work
|> differently. I am not sure what that implies, but maybe someone else
|> knows?
| The only difference the docs are talking about is what kind of lock is
| held whi
Denis Braekhus <[EMAIL PROTECTED]> writes:
> Indicating that they should produce the same results, but that they work
> differently. I am not sure what that implies, but maybe someone else knows ?
The only difference the docs are talking about is what kind of lock is
held while the rebuild proceed
Lonni Friedman wrote:
| Thanks for your reply. I thought (perhaps erroneously) that there
| wasn't any real difference between dropping an index then recreating
| it, and just reindexing an index?
I am definitely not sure, and I agree it sounds logica
Thanks for your reply. I thought (perhaps erroneously) that there
wasn't any real difference between dropping an index then recreating
it, and just reindexing an index?
On Thu, 06 May 2004 23:00:25 +0200, Denis Braekhus <[EMAIL PROTECTED]> wrote:
>
Lonni Friedman <[EMAIL PROTECTED]> writes:
> hrmmm, i'm not sure what would constitute 'off the beaten track'.
Neither am I ... if we knew what you were doing that triggers the bug,
we'd already be halfway there :-(
regards, tom lane
On Wed, 05 May 2004 13:56:41 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> Lonni Friedman <[EMAIL PROTECTED]> writes:
> > On Wed, 05 May 2004 12:31:21 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> Once the complaint starts appearing, I'd expect it to continue until you
> >> reindex the index.
>
>
On Wed, 05 May 2004 12:31:21 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> Lonni Friedman <[EMAIL PROTECTED]> writes:
> > Unfortunately, i have no clue how to replicate this. It was happening
> > fairly consistantly before i upgraded from 7.3.3 to 7.3.4 (like nearly
> > every vacuumdb run).
>
>
Lonni Friedman <[EMAIL PROTECTED]> writes:
> On Wed, 05 May 2004 12:31:21 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Once the complaint starts appearing, I'd expect it to continue until you
>> reindex the index.
> That's exactly what happens. It consistently errors until reindexed.
> Any sugg
It's _always_ that same index. No others have had this problem.
Unfortunately, I have no clue how to replicate this. It was happening
fairly consistently before I upgraded from 7.3.3 to 7.3.4 (like nearly
every vacuumdb run).
Then nothing for a month after going to 7.3.4, and now it's happening
e
Lonni Friedman <[EMAIL PROTECTED]> writes:
> Unfortunately, i have no clue how to replicate this. It was happening
> fairly consistently before I upgraded from 7.3.3 to 7.3.4 (like nearly
> every vacuumdb run).
> Then nothing for a month after going to 7.3.4, and now its happening
> every vacuumd
Lonni Friedman <[EMAIL PROTECTED]> writes:
> All of a sudden last month (after about 3 years) I started getting
> this warning when vacuumdb was run:
> INFO: Index pg_largeobject_loid_pn_index: Pages 903; Tuples 323847:
> Deleted 0. CPU 0.04s/0.07u sec elapsed 0.10 sec.
> WARNING: Index pg_la
on 5/1/04 3:11 PM, [EMAIL PROTECTED] purportedly said:
> Keary Suska <[EMAIL PROTECTED]> writes:
>> I received the following errors from an automated full vacuum:
>> vacuumdb: vacuuming of database "milemgr" failed: ERROR: tuple concurrently
>> updated
>
> Hm, could you have had more than one of
Keary Suska <[EMAIL PROTECTED]> writes:
> I received the following errors from an automated full vacuum:
> vacuumdb: vacuuming of database "milemgr" failed: ERROR: tuple concurrently
> updated
Hm, could you have had more than one of these beasts running? It's
possible to get such an error from c
Gabriel Fernandez <[EMAIL PROTECTED]> writes:
> We have some db's in our server. When executing a vacuumdb, ONLY FOR
> SOME of them, the following message is shown:
> AbortTransaction and not in in-progress state
What postgres version?
> After this, the vacuum process is aborted, so we cannot v
Hi,
We have some db's in our server. When executing a vacuumdb, ONLY FOR
SOME of them, the following message is shown:
AbortTransaction and not in in-progress state
After this, the vacuum process is aborted, so we cannot vacuum these
'problematic' db's.
What can we do?
Gabi :-)
--
Hi,
What exactly does vacuumdb do? In what way does it 'clean' the db?
Also what are the best ways to optimise a pg database? Thanks,
Jonathan Daniels
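vacuumdb is just a command-line wrapper that connects to the server and runs the SQL-level VACUUM for you, so the two invocations below are equivalent (a sketch; `mydb` is a placeholder name):

```shell
# Reclaims dead-row space for reuse and, with --analyze, refreshes
# the planner statistics.
vacuumdb --analyze mydb
# The same thing expressed directly in SQL:
psql -d mydb -c "VACUUM ANALYZE;"
```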
KuroiNeko wrote:
>
> Ray,
>
> > What am I doing wrong? Any ideas wold be helpful!
>
> Environment is dropped by cron. Either specify LD_LIBRARY_PATH in crontab
> explicitly, or add your PG libdir to /etc/ld.so.conf and rerun ldconfig.
>
Or do what I do in my cron scripts:
. ~/.bashrc ; myc
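The same advice as a self-contained script (a sketch; the /usr/local/pgsql paths are examples and should be adjusted to the real installation):

```shell
#!/bin/sh
# Cron starts jobs with an almost empty environment, so set everything
# the PostgreSQL client tools need explicitly instead of relying on an
# interactive shell's profile being sourced.
PATH=/usr/local/pgsql/bin:/usr/bin:/bin
LD_LIBRARY_PATH=/usr/local/pgsql/lib
export PATH LD_LIBRARY_PATH
echo "using LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
# vacuumdb --all --analyze    # the actual maintenance work goes here
```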
> -Original Message-
> From: Tom Lane
>
> > Now, I'm not sure if this is related, but while trying to do
> > vacuumdb, I got...
>
> > NOTICE: FlushRelationBuffers(all_flows, 500237): block 171439 is
> > referenced (private 0, global 1)
> > FATAL 1: VACUUM (vc_repair_frag): Flush
George Robinson II <[EMAIL PROTECTED]> writes:
> Last night, while my perl script was doing a huge insert operation, I
> got this error...
> DBD::Pg::st execute failed: ERROR: copy: line 4857, pg_atoi: error
> reading "2244904358": Result too large
> Now, I'm not sure if this is rel
Marcin Inkielman <[EMAIL PROTECTED]> writes:
> NOTICE: FlushRelationBuffers(osoby, 228): block 223 is referenced
> (private 0, global 1)
> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2
> this table is referenced in my db by a tree of FOREIGN KEYs.
Hmm, I wonder whether th
Hi!
I have a problem with vacuumdb on one of my tables.
(spiral:3)-[~]$vacuumdb -t osoby -v dziekanat
NOTICE: --Relation osoby--
NOTICE: Pages 229: Changed 0, reaped 16, Empty 0, New 0; Tup 4427: Vac 5,
Keep/VTL 0/0, Crash 0, UnUsed 70, MinLen 64, MaxLen
616; Re-using: Free/Avail. Space 18176/