Noel Faux <[EMAIL PROTECTED]> writes:
> To clarify, when set on, every time it hits this error, postgres will
> rezero that block?

It'll only "re" zero if the page gets dropped from shared memory without
there having been any occasion to write it out. Otherwise, the first
write will clobber the [...]
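
(For reference, a minimal way to drive this, assuming superuser access;
the database and table names are the ones used elsewhere in this thread.
Set the flag for the session, then VACUUM the table so every page gets
read and any damaged ones are zeroed as they are encountered:

    $ psql monashprotein
    monashprotein=# SET zero_damaged_pages = on;
    monashprotein=# VACUUM gap;
)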

To clarify, when set on, every time it hits this error, postgres will
rezero that block?

Michael Fuhr wrote:
On Thu, Mar 09, 2006 at 03:57:46PM +1100, Noel Faux wrote:
> Given that this problem seems to have occurred a number of times for a
> number of people, I've written a small step-by-step proc[...]

On Thu, Mar 09, 2006 at 03:57:46PM +1100, Noel Faux wrote:
> Given that this problem seems to have occurred a number of times for a
> number of people, I've written a small step-by-step procedure to address
> this issue. Are there any other comments you wish to add? I was thinking
> that this should be added [...]

Given that this problem seems to have occurred a number of times for a
number of people, I've written a small step-by-step procedure to address
this issue. Are there any other comments you wish to add? I was thinking
that this should be added to the FAQ / troubleshooting in the docs.

How to repair corrupte[...]
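
A rough sketch of that procedure as it emerges from this thread, assuming
an 8192-byte block size and 1 GB segment files (131072 blocks per
segment); the relation file path is the one used elsewhere in this
thread, $PGDATA stands in for the data directory, and the server should
be stopped and the file backed up before writing to it:

    # block number reported by the VACUUM / pg_dump error
    BLOCK=902292
    SEG=$((BLOCK / 131072))   # which .N segment file holds the block
    OFF=$((BLOCK % 131072))   # block offset within that segment file
    # (blocks below 131072 live in the unsuffixed base file)

    # inspect the suspect block first
    dd bs=8k skip=$OFF count=1 \
       if=$PGDATA/base/37958/111685332.$SEG | od -x | head

    # zero the block (destructive -- back up the file first)
    dd bs=8k seek=$OFF conv=notrunc count=1 if=/dev/zero \
       of=$PGDATA/base/37958/111685332.$SEG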

On Thu, Mar 09, 2006 at 12:37:52PM +1100, Noel Faux wrote:
> I've been watching the post: Re: [GENERAL] Fixing up a corrupted toast table
> In there they mention deletion of the bad rows from the table based on
> the ctid. If I could come up with a definition of a bad row, would this
> work, or are t[...]

On Thu, Mar 09, 2006 at 12:29:17PM +1100, Noel Faux wrote:
> Thanks for all your help Michael, we wish to do a vacuum and dump before
> the upgrade to 8.0.2.

8.0.7 and 8.1.3 are the latest versions in their respective branches;
those are the versions to run to get the latest bug fixes.

> Do you b[...]

I've been watching the post: Re: [GENERAL] Fixing up a corrupted toast
table
In there they mention deletion of the bad rows from the table based on
the ctid. If I could come up with a definition of a bad row, would this
work, or are there other issues?

Cheers
Noel

Michael Fuhr wrote:
On Thu, M[...]
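
For concreteness, this is the kind of statement being discussed --
whether it works at all on a page whose header is invalid is exactly the
open question here. The ctid is the first bad tuple identified elsewhere
in this thread:

    monashprotein=# DELETE FROM gap WHERE ctid = '(902292,137)';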

Thanks for all your help Michael, we wish to do a vacuum and dump
before the upgrade to 8.0.2. Do you believe this data corruption is a
postgres issue or an OS / hardware issue?

Cheers
Noel

Michael Fuhr wrote:
On Thu, Mar 09, 2006 at 11:13:40AM +1100, Noel Faux wrote:
Ok it worke[...]

On Thu, Mar 09, 2006 at 11:13:40AM +1100, Noel Faux wrote:
> Ok it worked but we ran into another bad block :(
> vacuumdb: vacuuming of database "monashprotein" failed: ERROR: invalid
> page header in block 9022937 of relation "gap"
>
> So the command we used was:
> dd bs=8k seek=110025 conv=no[...]

Ok it worked but we ran into another bad block :(

vacuumdb: vacuuming of database "monashprotein" failed: ERROR: invalid
page header in block 9022937 of relation "gap"

So the command we used was:

dd bs=8k seek=110025 conv=notrunc count=1 if=/dev/zero \
   of=/usr/local/postgresql/postgresql-7.4.8/[...]
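
Since that dd write is destructive, a cautious variant is to copy the
block out before zeroing it. This is a sketch using the same offset as
the command above and the full file path as it appears elsewhere in this
thread; the /tmp filename is just an example:

    # save the suspect block for later inspection
    dd bs=8k skip=110025 count=1 \
       if=/usr/local/postgresql/postgresql-7.4.8/data/base/37958/111685332.68 \
       of=/tmp/suspect-block.bin
    od -x /tmp/suspect-block.bin | head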

On Tue, Mar 07, 2006 at 01:41:44PM +1100, Noel Faux wrote:
> Here is the output from the pg_filedump; is there anything which looks
> suss and where would we re-zero the data, if that's the next step:
[...]
> Block 110025 [...]

On Mon, Mar 06, 2006 at 05:17:54PM +1100, Noel Faux wrote:
> dd bs=8k skip=115860 count=1
> if=/usr/local/postgresql/postgresql-7.4.8/data/base/37958/111685332.68 |
> od -x

Wrong block (115860) -- you used the number from my earlier message,
which was based on the bad block being 902292. After[...]

On Mon, Mar 06, 2006 at 11:57:53AM +1100, Noel Faux wrote:
> > Anyway, if the block size is 8192
> > then 902292 should be in the .6 file. If you can spare the time
> > then you might run the dd and od commands that Tom Lane mentions
> > in the above message and post the output.
> Here's the output: [...]

> Is your table really over 100G?

Yeap, 600+ million rows.

> Anyway, if the block size is 8192
> then 902292 should be in the .6 file. If you can spare the time
> then you might run the dd and od commands that Tom Lane mentions
> in the above message and post the output.

Here's the output:
000 0 [...]
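
To make the ".6 file" arithmetic explicit: with 8192-byte blocks and
1 GB segment files there are 131072 blocks per segment, and

    902292 / 131072 = 6, remainder 115860

so block 902292 is block 115860 within the .6 segment file -- which is
where the skip=115860 in the dd command quoted above comes from.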

On Fri, Mar 03, 2006 at 09:56:40AM +1100, Noel Faux wrote:
> Which config file will tell us how big the block sizes are?

Run the query "SHOW block_size" in the database or use pg_controldata
from the shell. It's probably 8192; changing it is done at compile time.

--
Michael Fuhr
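
Both checks, concretely (the data directory path is inferred from the
file paths used elsewhere in this thread):

    $ psql monashprotein -c "SHOW block_size"
    $ pg_controldata /usr/local/postgresql/postgresql-7.4.8/data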

Thanks for the pointers Michael!

Which config file will tell us how big the block sizes are?

Cheers
Noel

Michael Fuhr wrote:
On Wed, Mar 01, 2006 at 04:12:53PM +1100, Noel Faux wrote:
Now after doing some searches I managed to work out that the data
corruption starts at 902292.137 [...]

On Tue, Feb 28, 2006 at 10:54:48PM -0700, Michael Fuhr wrote:
> Is your table really over 100G? Anyway, if the block size is 8192
> then 902292 should be in the .6 file. If you can spare the time
> then you might run the dd and od commands that Tom Lane mentions
> in the above message and post the output.

On Wed, Mar 01, 2006 at 04:12:53PM +1100, Noel Faux wrote:
> Now after doing some searches I managed to work out that the data
> corruption starts at 902292.137
> using this SQL:
> SELECT * FROM gap WHERE ctid = '(902292,$x)'
> Where $x I changed from 1-150.
>
> as mentioned on this
> post: http:[...]
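
A scripted version of that probe (a sketch; it relies on psql exiting
with a nonzero status when the single -c statement fails):

    for x in $(seq 1 150); do
      psql monashprotein -c \
        "SELECT * FROM gap WHERE ctid = '(902292,$x)'" >/dev/null 2>&1 \
        || echo "ctid (902292,$x): read failed"
    done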

Hi all,

I posted this on the novice mailing list and as yet have had no
response; hopefully someone here can help.

While we were trying to do a vacuum / pg_dump we encountered the
following error:

[EMAIL PROTECTED]:~$ pg_dumpall -d > dump.pg
pg_dump: dumpClasses(): SQL command failed
pg_dump: [...]