Carlo Curatolo wrote:
> Yes, I ran pg_dumpall, created a new cluster, and imported.
Ok, cool.
> Everything seems fine now.
>
> How can I prevent that?
Prevent data corruption?
Have good hardware, run the latest PostgreSQL fixes...
Most of all, have a good backup so that you can recover.
Yours,
L
Yes, I ran pg_dumpall, created a new cluster, and imported.
Everything seems fine now.
How can I prevent that?
Carlo Curatolo wrote:
> SELECT oid, relname, relkind FROM pg_class WHERE relfilenode = 599662; -->
> returns nothing.
Maybe the wrong database?
Try to find out which object this file belongs to (maybe with oid2name).
> No crash occurs, I have tested the hardware (memory, harddisks, RAID5,
> stability test...)
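Incidentally, the database OID is also part of the file path in the error message, which helps make sure the right database is being queried before looking up the relfilenode. A quick sketch (the helper is mine, purely illustrative, not a PostgreSQL tool):

```python
# Illustrative helper (not a PostgreSQL API): split a relation file
# path like "base/16384/599662" into the database OID and relfilenode.
def parse_relation_path(path):
    base, db_oid, relfilenode = path.split("/")
    if base != "base":
        raise ValueError("expected a path of the form base/<dboid>/<relfilenode>")
    return int(db_oid), int(relfilenode)

# Connect to the database whose pg_database.oid matches, then run:
#   SELECT oid, relname, relkind FROM pg_class WHERE relfilenode = <n>;
print(parse_relation_path("base/16384/599662"))  # prints (16384, 599662)
```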
Thanks for the help.
SELECT oid, relname, relkind FROM pg_class WHERE relfilenode = 599662; -->
returns nothing.
No crash occurs, I have tested the hardware (memory, harddisks, RAID5,
stability test...)
I have made a little program to read all the LargeObject of my tables, they
are all readable.
Carlo Curatolo wrote:
> When I launch vacuumdb, I get an error: ERROR: invalid page header
> in block 39639 of relation base/16384/599662
>
> With a
> SELECT * FROM pg_catalog.pg_largeobject
>
> Result is
> ERROR: invalid page header in block 39639 of relation
> base/16384/599662
I have quite the same problem.
When I launch vacuumdb, I get an error: ERROR: invalid page header
in block 39639 of relation base/16384/599662
With a
SELECT * FROM pg_catalog.pg_largeobject
Result is
ERROR: invalid page header in block 39639 of relation
base/16384/599662
Hi all,
A customer's database has started whining about a busted block:
postgresql-8.4-main.log:2012-10-02 18:51:33 EST ERROR: invalid page header in
block 8429809 of relation base/807305056/950827614
postgresql-8.4-main.log:2012-10-02 18:56:52 EST ERROR: invalid page header in
block 8429809
Hello Richard,
Just to keep you informed...
Richard Huxton wrote:
>> We had a server crash; after restarting, postgres works, except for some
>> "Invalid Page Header" errors:
>
> Data corrupted on disk. Either:
> 1. You have bad hardware
> 2. You have disks lying about fsync
> 3. You have fsync turned off.
Denis BUCHER wrote:
> Hello,
>
> We had a server crash; after restarting, postgres works, except for some
> "Invalid Page Header" errors:
Data corrupted on disk. Either:
1. You have bad hardware
2. You have disks lying about fsync
3. You have fsync turned off.
> I already tried VACUUM / FULL / ANALYSE but get the same error
Hello,
We had a server crash; after restarting, postgres works, except for some
"Invalid Page Header" errors:
I already tried VACUUM / FULL / ANALYSE but get the same error.
Even when doing a pg_dumpall, we have this problem.
$ pg_dumpall >/dev/null
pg_dump: ERROR: invalid page header in block
Hi,
Markus Schiltknecht wrote:
> I've done that (zeroing out the pg_toast table page) and hope
> the running pg_dump goes through fine.
Unfortunately, pg_dump didn't go through. I already did some REINDEXing
and VACUUMing. Vacuum fixed something (sorry, I don't recall the
message), but SELECTi
Hi,
Tom Lane wrote:
> Hm, looks suspiciously ASCII-like. If you examine the page as text,
> is it recognizable?
Doh! Yup, is recognizable. It looks like some PHP serialized output:
png%";i:84;s:24:"%InfoToolIconActive.png%";i:85;s:29:"%InfoToolIconHighlighted.png%";i:86;s:26:"%InfoToolIconInact
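Examining a suspect page as text, as suggested above, can be scripted. Here is a rough strings(1)-style filter over one 8 kB block (my own sketch, not part of PostgreSQL; assumes the default block size):

```python
import string

BLOCK_SIZE = 8192  # default PostgreSQL block size

# Bytes we treat as "printable text" (ASCII letters, digits,
# punctuation, and space).
PRINTABLE = set(bytes(string.ascii_letters + string.digits +
                      string.punctuation + " ", "ascii"))

def printable_runs(data, min_len=6):
    """Collect runs of printable ASCII at least min_len bytes long,
    much like the Unix strings(1) tool."""
    runs, cur = [], bytearray()
    for b in data:
        if b in PRINTABLE:
            cur.append(b)
        else:
            if len(cur) >= min_len:
                runs.append(cur.decode("ascii"))
            cur = bytearray()
    if len(cur) >= min_len:
        runs.append(cur.decode("ascii"))
    return runs

def dump_block_as_text(path, block):
    """Read one block of a relation file and return its text fragments."""
    with open(path, "rb") as f:
        f.seek(block * BLOCK_SIZE)
        return printable_runs(f.read(BLOCK_SIZE))
```

If the output is recognizable (as with the PHP-serialized data above), the page was likely overwritten with foreign data rather than zeroed.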
Markus Schiltknecht <[EMAIL PROTECTED]> writes:
>> Block 58591
>> -
>> Block Offset: 0x1c9be000  Offsets: Lower 12858 (0x323a)
>> Block: Size 28160  Version 73  Upper 14900 (0x3a34)
>> LSN: logid 627535472 recof
Hi,
I'm in the unfortunate position of having "invalid page header(s) in
block 58591 of relation "pg_toast_302599". I'm well aware that the
hardware in question isn't the most reliable one. Nonetheless, I'd
like to restore as much of the data as possible.
A pg_filedump analysis of the file
and re-index on the affected table.
Sorry, was too fast
Poul
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
I had a similar problem and overcame it by temporarily setting
zero_damaged_pages, then doing a full vacuum and re-index on the affected table.
Thanks, I suppose I need to reindex the table afterwards, or the indexes
can point to non-existent data?
Poul
Hi,
I had a similar problem and overcame it by temporarily setting
zero_damaged_pages, then doing a full vacuum and re-index on the affected table.
The rows contained in the corrupted page were lost but the rest of the table
was OK after this.
Regards // Mike
For some time I have had recurring problems with invalid data in different
parts of a PostgreSQL database.
Until now it has been pointers to non-existent clog files and an index
file, but now it's in a data file.
I'm getting this error when doing a backup:
invalid page header in block 5377 of rel
"Ed L." <[EMAIL PROTECTED]> writes:
> The truncate showed no errors. The vacuum analyze showed the
> same error in block 110 of the pg_statistic table.
Really!? Hm, I wonder if you have a reproducible problem. Would it be
possible for you to send me the physical pg_statistic file (off-list)?
I
On Wednesday February 7 2007 9:01 am, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > How do I fix this 7.4.6 issue short of initdb?
> > invalid page header in block 110 of relation "pg_statistic"
> > I looked at the block via pg_filedump (included below), and
> > it does not appear t
"Ed L." <[EMAIL PROTECTED]> writes:
> How do I fix this 7.4.6 issue short of initdb?
> invalid page header in block 110 of relation "pg_statistic"
> I looked at the block via pg_filedump (included below), and it
> does not appear to me to be corrupted, so not sure what I would
> zero out, i
On Wed, Feb 07, 2007 at 03:00:20AM -0700, Ed L. wrote:
> How do I fix this 7.4.6 issue short of initdb?
>
> invalid page header in block 110 of relation "pg_statistic"
Take a copy of the file, then you should be able to truncate it.
There's also the zero_damaged_pages option, though I don't
How do I fix this 7.4.6 issue short of initdb?
invalid page header in block 110 of relation "pg_statistic"
I looked at the block via pg_filedump (included below), and it
does not appear to me to be corrupted, so not sure what I would
zero out, if anything.
TIA.
Ed
***
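For what it's worth, the sanity checks behind "invalid page header" can be approximated offline against a copy of the file. The sketch below is mine and assumes the modern 24-byte, little-endian PageHeaderData layout (a 7.4 header is laid out differently), so treat it as illustrative rather than authoritative:

```python
import struct

BLOCK_SIZE = 8192   # BLCKSZ
HEADER_LEN = 24     # sizeof(PageHeaderData) in recent releases

def page_header_looks_valid(page):
    """Rough analogue of PageHeaderIsValid(): check that the pointer
    fields are sanely ordered and the encoded page size matches.
    (PostgreSQL also accepts an all-zeros page as "new"; not done here.)
    Assumes little-endian on-disk byte order."""
    if len(page) < HEADER_LEN:
        return False
    (_lsn_hi, _lsn_lo, _checksum, _flags, pd_lower, pd_upper,
     pd_special, pd_pagesize_version, _prune_xid) = struct.unpack(
        "<IIHHHHHHI", page[:HEADER_LEN])
    if (pd_pagesize_version & 0xFF00) != BLOCK_SIZE:
        return False  # page size lives in the high bits of this field
    return HEADER_LEN <= pd_lower <= pd_upper <= pd_special <= BLOCK_SIZE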
Just a little followup on this problem.
We've moved the database to another server where it ran without problems.
HP just released new raid controller drivers for Suse and a firmware
update for the controller itself.
So far the problem hasn't recurred.
Thanks!
Jo.
Chris Travers w
Jo De Haes wrote:
> OK. The saga continues, everything is a little bit more clear, but at
> the same time a lot more confusing.
> Today I wanted to reproduce the problem again. And guess what? A
> vacuum of the database went through without any problems.
> I dumped the block I was having problems with yesterday.
OK, so we reran everything and got the same error message again; now
I'm able to reproduce it.
2006-03-28 12:05:18.638 CEST ERROR: XX001: invalid page header in block
39248 of relation "dunn_main"
2006-03-28 12:05:18.638 CEST LOCATION: ReadBuffer, bufmgr.c:257
2006-03-28 12:05:18.638 CEST STAT
OK. The saga continues, everything is a little bit more clear, but at
the same time a lot more confusing.
Today I wanted to reproduce the problem again. And guess what? A vacuum
of the database went through without any problems.
I dumped the block I was having problems with yesterday. It doesn'
Jo De Haes <[EMAIL PROTECTED]> writes:
> I asked the developer to delete all imported data again and restart the
> import. This import crashed again with the same error but this time on
> another block.
> 2006-03-27 00:15:25.458 CEST ERROR: XX001: invalid page header in block
> 48068 of relati
Tom Lane wrote:
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
"Jo De Haes" <[EMAIL PROTECTED]> wrote
CET ERROR: XX001: invalid page header in block 22182 of relation
"dunn_main"
I suppose no system error happened during the period (like lost
power). Can you attach the gdb at "b bufmgr.c:257" and
Hi All,
We are evaluating PostgreSQL as a db platform for one of our future
applications. Some tables in the database will contain more than
10,000,000 records, which, as I understand it, should be no problem with
PostgreSQL.
We have been trying to find the most effective/fastest way to mani
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
> "Jo De Haes" <[EMAIL PROTECTED]> wrote
>> CET ERROR: XX001: invalid page header in block 22182 of relation
> "dunn_main"
> I suppose no system error happened during the period (like lost
> power). Can you attach the gdb at "b bufmgr.c:257" and
"Jo De Haes" <[EMAIL PROTECTED]> wrote
>
> CET ERROR: XX001: invalid page header in block 22182 of relation
"dunn_main"
>
> My main question is: why is this occurring?
>
I suppose no system error happened during the period (like lost
power). Can you attach the gdb at "b bufmgr.c:257" and p
On Thu, Nov 24, 2005 at 02:59:28PM -0500, Qingqing Zhou wrote:
>
> "Tom Lane" <[EMAIL PROTECTED]> wrote
> >
> > At this point I think there's no question that your filesystem is
> > dropping blocks :-(.
>
> It is very interesting to follow this thread. But at this point, can you
> explain more why "there is no question" that it is the filesystem's fault?
On 26/11/05 4:48 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> I deleted the two datasets in mba_data_base that were affected by the empty
>> pages, I also deleted the relevant two rows in measured_bioassay_base... But
>> maybe it didn't do the right thing with the toast table for these two rows?
Adam Witney <[EMAIL PROTECTED]> writes:
> I deleted the two datasets in mba_data_base that were affected by the empty
> pages, I also deleted the relevant two rows in measured_bioassay_base... But
> maybe it didn't do the right thing with the toast table for these two rows?
Evidently the missing d
On 26/11/05 4:14 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> pg_dump: ERROR: unexpected chunk number 5153 (expected 21) for toast value
>> 245334402
>
>> measured_bioassay_base is always inserted at the same time as mba_data_base
>> (the table where I had the problem before) and it has a text field which is
>> very large
Adam Witney <[EMAIL PROTECTED]> writes:
> pg_dump: ERROR: unexpected chunk number 5153 (expected 21) for toast value
> 245334402
> measured_bioassay_base is always inserted at the same time as mba_data_base
> (the table where I had the problem before) and it has a text field which is
> very large
Could it be faulty hardware?
Run memtest86? Test your drives?
At 10:49 AM 11/26/2005 +, Adam Witney wrote:
Any ideas what is going on here?
Thanks again for any help
Adam
On 24/11/05 5:27 pm, "Adam Witney" <[EMAIL PROTECTED]> wrote:
> On 24/11/05 5:28 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>
>> Adam Witney <[EMAIL PROTECTED]> writes:
>>> Does this help identifying what went wrong?
>>
>> At this point I think there's no question that your filesystem is
>> dropping blocks :-(.
"Tom Lane" <[EMAIL PROTECTED]> wrote
>
> At this point I think there's no question that your filesystem is
> dropping blocks :-(.
It is very interesting to follow this thread. But at this point, can you
explain more why "there is no question" that it is the filesystem's fault?
Thanks,
Qingqing
On 24/11/05 5:28 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> Does this help identifying what went wrong?
>
> At this point I think there's no question that your filesystem is
> dropping blocks :-(. Might want to check for available kernel updates,
> or contemplate changing to a different filesystem.
Adam Witney <[EMAIL PROTECTED]> writes:
> Does this help identifying what went wrong?
At this point I think there's no question that your filesystem is
dropping blocks :-(. Might want to check for available kernel updates,
or contemplate changing to a different filesystem.
On 24/11/05 4:42 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> On 24/11/05 4:19 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>>> The question is, can you tell whether any data is actually missing?
>
>> Well, each of these datasets is about 20,000 rows...
Adam Witney <[EMAIL PROTECTED]> writes:
> On 24/11/05 4:19 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>> The question is, can you tell whether any data is actually missing?
> Well, each of these datasets is about 20,000 rows... So I can tell
> which one is in (640792,12) and in (640799,1), the
On 24/11/05 4:19 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> If you mean by that, this:
>
>> select * from mba_data_base where ctid = '(640792,12)';
>> select * from mba_data_base where ctid = '(640799,1)';
>
>> Then the data looks normal... Of course everything in between that is now
>> blank.
Adam Witney <[EMAIL PROTECTED]> writes:
> If you mean by that, this:
> select * from mba_data_base where ctid = '(640792,12)';
> select * from mba_data_base where ctid = '(640799,1)';
> Then the data looks normal... Of course everything in between that is now
> blank.
The question is, can you tell whether any data is actually missing?
On 24/11/05 3:52 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> bugasbase2=# vacuum;
>> WARNING: relation "mba_data_base" page 597621 is uninitialized --- fixing
>
> This is the expected result of what you did.
>
>> WARNING: relation "mba_data_base" page 640793 is uninitialized --- fixing
Adam Witney <[EMAIL PROTECTED]> writes:
> bugasbase2=# vacuum;
> WARNING: relation "mba_data_base" page 597621 is uninitialized --- fixing
This is the expected result of what you did.
> WARNING: relation "mba_data_base" page 640793 is uninitialized --- fixing
> WARNING: relation "mba_data_base
On 24/11/05 2:48 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> Just wanted to clarify, should this not be
>> dd bs=8k seek=7 count=1 conv=notrunc if=/dev/zero of=134401991.4
>
> Looks reasonable.
>
> regards, tom lane
Excellent, thanks. I have run it
Adam Witney <[EMAIL PROTECTED]> writes:
> Just wanted to clarify, should this not be
> dd bs=8k seek=7 count=1 conv=notrunc if=/dev/zero of=134401991.4
Looks reasonable.
regards, tom lane
On 23/11/05 10:20 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> Whats the best way to zero the bad block?
>
> Probably dd from /dev/zero, along the lines of
>
> dd bs=8k seek=597621 count=1 conv=notrunc if=/dev/zero of=relation
>
> (check this before you apply it ;-)). You probably should stop the
Adam Witney <[EMAIL PROTECTED]> writes:
> Whats the best way to zero the bad block?
Probably dd from /dev/zero, along the lines of
dd bs=8k seek=597621 count=1 conv=notrunc if=/dev/zero of=relation
(check this before you apply it ;-)). You probably should stop the
postmaster while doing
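One detail worth making explicit in the dd recipe above: relations larger than 1 GB are split into numbered segment files, so the seek value has to be computed relative to the right segment. A small sketch of the arithmetic, assuming the default 8 kB block and 1 GB segment sizes (file names here are illustrative):

```python
BLOCK_SIZE = 8192        # BLCKSZ
SEGMENT_SIZE = 1 << 30   # relation files are split into 1 GB segments

def dd_location(block):
    """Map an absolute block number to (segment number, seek within
    that segment, in BLOCK_SIZE units) for a dd-style zeroing command.
    Segment 0 is the bare relfilenode; later segments get a .N suffix."""
    return divmod(block, SEGMENT_SIZE // BLOCK_SIZE)  # 131072 blocks/segment

seg, seek = dd_location(597621)
print(f"dd bs=8k seek={seek} count=1 conv=notrunc if=/dev/zero of=relation.{seg}")
# prints: dd bs=8k seek=73333 count=1 conv=notrunc if=/dev/zero of=relation.4
```

Getting the segment wrong zeroes a healthy page, so double-check against the file sizes before running anything destructive.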
On 23/11/05 9:55 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> This table is only ever COPY'd to from data files, no updates or deletes, if
>> I could find out which data file this bit comes from I could just reupload
>> that file... Is it possible to tell what the data actually is from the data
>> I sent?
Adam Witney <[EMAIL PROTECTED]> writes:
> This table is only ever COPY'd to from data files, no updates or deletes, if
> I could find out which data file this bit comes from I could just reupload
> that file... Is it possible to tell what the data actually is from the data
> I sent?
You might try
On 23/11/05 9:36 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> Thanks for the help Here is the output:
>
>> [EMAIL PROTECTED]:/opt$ dd bs=8k skip=7 count=1 if=134401991.4 | od -x
>> 000
>> *
>> 001 1d9e 201c 0fa0 0010 000b
Adam Witney <[EMAIL PROTECTED]> writes:
> Thanks for the help Here is the output:
> [EMAIL PROTECTED]:/opt$ dd bs=8k skip=7 count=1 if=134401991.4 | od -x
> 000
> *
> 001 1d9e 201c 0fa0 0010 000b
> 0010020 0ca6 19fb 1797 0ab4 0
On 23/11/05 8:55 pm, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Adam Witney <[EMAIL PROTECTED]> writes:
>> bugasbase2=# SELECT count(*) from mba_data_base;
>> ERROR: invalid page header in block 597621 of relation "mba_data_base"
>
> Sounds like a data corruption problem :-(. Do you want to pull out that
> page and see what's in it?
Adam Witney <[EMAIL PROTECTED]> writes:
> bugasbase2=# SELECT count(*) from mba_data_base;
> ERROR: invalid page header in block 597621 of relation "mba_data_base"
Sounds like a data corruption problem :-(. Do you want to pull out that
page and see what's in it? Something like
dd bs=8k
Hi,
I just had this error in my database:
bugasbase2=# SELECT count(*) from mba_data_base;
ERROR: invalid page header in block 597621 of relation "mba_data_base"
Any ideas what's going on? I'm a bit worried as this is my production
database.
Thanks for any assistance
Adam
Hi Tom,
Enabling the zero_damaged_pages solved the problem. I
am in the process of dumping & restoring.
Thanks for the help.
Gokul.
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> gokulnathbabu manoharan <[EMAIL PROTECTED]> writes:
> > In my sample databases the relfilenode for pg_class
> > was 1259.
gokulnathbabu manoharan <[EMAIL PROTECTED]> writes:
> In my sample databases the relfilenode for pg_class
> was 1259. So I checked the block number 190805 of the
> 1259 file. Since the block size is 8K, 1259 was in
> two files 1259 & 1259.1. The block number 190805
> falls in the second file who
On Friday 12 November 2004 7:54 am, Martijn van Oosterhout wrote:
> On Thu, Nov 11, 2004 at 04:29:38PM -0700, Steve Crawford wrote:
> > True. I hadn't come up with a good time to get past that 7.4.1 ->
> > 7.4.2 initdb requirement. I guess I'll have to go with the manual
> > method.
>
> IIRC, the initdb is recommended, but not required.
On Thu, Nov 11, 2004 at 04:29:38PM -0700, Steve Crawford wrote:
> True. I hadn't come up with a good time to get past that 7.4.1 ->
> 7.4.2 initdb requirement. I guess I'll have to go with the manual
> method.
IIRC, the initdb is recommended, but not required. It can be done
without an initdb t
Steve Crawford <[EMAIL PROTECTED]> writes:
>> Could you get a hex dump of that page?
> What is the best method to do this?
There's always "od -x" ... however, if you prefer you can use
pg_filedump from http://sources.redhat.com/rhdb/.
> Also, can I safely drop that table
Not unless you want to
On Thursday 11 November 2004 3:14 pm, Tom Lane wrote:
> Steve Crawford <[EMAIL PROTECTED]> writes:
> > This morning I got bitten by the "SELECT INTO" / "CREATE TABLE
> > AS" from tables without OIDs bug in 7.4.1.
>
> On a production server, you really ought to track bug-fix releases
> a bit more enthusiastically than that :-(.
Steve Crawford <[EMAIL PROTECTED]> writes:
> This morning I got bitten by the "SELECT INTO" / "CREATE TABLE AS"
> from tables without OIDs bug in 7.4.1.
On a production server, you really ought to track bug-fix releases a
bit more enthusiastically than that :-(. However, I don't see anything
in
This morning I got bitten by the "SELECT INTO" / "CREATE TABLE AS"
from tables without OIDs bug in 7.4.1.
Postmaster killed all the backends and restarted - pg was down for 2
seconds.
This happened two times within a few minute period.
Now I am getting 'invalid page header in block 52979 of re
On Wednesday October 20 2004 10:43, Ed L. wrote:
> On Wednesday October 20 2004 10:12, Ed L. wrote:
> > On Wednesday October 20 2004 10:00, Tom Lane wrote:
> > > "Ed L." <[EMAIL PROTECTED]> writes:
> > > > In other words, how do I calculate which bytes to zero to simulate
> > > > zero_damaged_pages
On Wednesday October 20 2004 10:00, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > In other words, how do I calculate which bytes to zero to simulate
> > zero_damaged_pages??
>
> Why simulate it, when you can just turn it on? But anyway, the answer
> is "the whole page".
Old 7.3.4 inst
"Ed L." <[EMAIL PROTECTED]> writes:
> In other words, how do I calculate which bytes to zero to simulate
> zero_damaged_pages??
Why simulate it, when you can just turn it on? But anyway, the answer
is "the whole page".
regards, tom lane
On Wednesday October 20 2004 10:12, Ed L. wrote:
> On Wednesday October 20 2004 10:00, Tom Lane wrote:
> > "Ed L." <[EMAIL PROTECTED]> writes:
> > > In other words, how do I calculate which bytes to zero to simulate
> > > zero_damaged_pages??
> >
> > Why simulate it, when you can just turn it on?
On Wednesday October 20 2004 5:34, Ed L. wrote:
> I have 5 corrupted page headers as evidenced by these errors:
>
> ERROR: Invalid page header in block 13947 of ...
>
> The corruption is causing numerous queries to abort. First option is to
> try to salvage data before attempting a restore from backup.
I have 5 corrupted page headers as evidenced by these errors:
ERROR: Invalid page header in block 13947 of ...
The corruption is causing numerous queries to abort. First option is to try
to salvage data before attempting a restore from backup. I want to try to edit
the file to zero out t
Looks bad. Have you got backups? Seriously!
REINDEX works on system indexes but you have to drop to single user mode in
postgres to do it. Check out the -P option in the manpage.
Good luck!
Hope this helps,
On Thu, Dec 04, 2003 at 12:54:07PM -0700, Ed L. wrote:
> I have a server with 20 pgsql c
Hello:
I am a new user of PostgreSQL. I am using PostgreSQL 7.3.4 and I had
inserted about 1.7 million records into a table. When I vacuum / select * from
the table, I get an error message: Invalid page header in block xxx of
TableA. I checked that I can still insert records into the table. But I am
There are a bunch of these in PG's log:
ERROR: Invalid page header in block 14 of
start_items_key
What does it mean? PG seems to be working fine -
anything I need to fix/adjust/worry about?
TIA,
CSN