Platform:
CentOS release 4.3 (Final) (Linux 2.6.9-34.EL)

Database version:
PostgreSQL 8.1.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 3.4.5 20051201 (Red Hat 3.4.5-2)
Description:
One of approximately 100 tables is partially corrupted. An attempt to read or
dump the data from the table is sometimes successful, sometimes crashes the
server. After an upgrade to 8.1.4 the behaviour remained unchanged (using the
cluster created with 8.1.3). Unfortunately, I am not able to reproduce the
error from scratch using ONLY 8.1.4.
Note:
After a successful dump/reload everything is OK. The problem is not in the
data content itself, but in the binary database cluster. This is why I would
like to send you the whole cluster rather than a database dump as an
attachment. The only obstacle is the file size (30 MB); please tell me where
or how to send it. For simplicity, I removed all other objects from the
database. There is only one table with several indexes; the table contains
56621 rows.
Here are some examples of the behaviour:
[EMAIL PROTECTED] tmp]# pg_dumpall -p5447 -U postgres > pgdump.sql
pg_dump: ERROR:  invalid memory alloc request size 4294967290
pg_dump: SQL command to dump the contents of table "fct" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request size 4294967290
pg_dump: The command was: COPY dwhdata_salemc.fct (time_id, company_id, customer_id, product_id, flagsprod_id, flagssale_id, account_id, accttime_id, invcustomer_id, salesperson_id, vendor_id, inv_cost_amt, inv_base_amt, inv_amt, inv_qty, inv_wght, ret_cost_amt, ret_base_amt, ret_amt, ret_qty, ret_wght, unret_cost_amt, unret_base_amt, unret_amt, unret_qty, unret_wght, bonus_forecast, bonus_final, stamp_code) TO stdout;
pg_dumpall: pg_dump failed on database "dwhdb", exiting

dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
SELECT
dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> 
dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
ERROR:  row is too big: size 119264, maximum size 8136
dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
ERROR:  row is too big: size 38788, maximum size 8136

AFTER UPGRADE TO 8.1.4:
dwhdb=# create temp table t_fct as select * from dwhdata_salemc.fct;
ERROR:  row is too big: size 52892, maximum size 8136

I noticed one more problem when executing vacuum:
dwhdb=# vacuum full;
WARNING:  relation "pg_attribute" page 113 is uninitialized --- fixing
VACUUM

The "vacuum" problem has happened only once.
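[Editor's aside, not part of the original report: the numbers quoted above fit a common corruption pattern. The failing allocation size 4294967290 is 2^32 - 6, i.e. the bit pattern of -6 read back as an unsigned 32-bit value, and the "row is too big" sizes all vastly exceed the 8136-byte per-page limit, which suggests garbage tuple length words rather than genuinely oversized data. A small Python sketch of the arithmetic:]

```python
import struct

# Values quoted in the session above.
alloc_request = 4294967290          # from "invalid memory alloc request size"
row_sizes = [119264, 38788, 52892]  # from the "row is too big" errors
max_row = 8136                      # per-page limit reported by the server

# Reinterpret the unsigned 32-bit request size as a signed 32-bit integer:
# 4294967290 == 2**32 - 6, i.e. the bit pattern of -6.
signed = struct.unpack("<i", struct.pack("<I", alloc_request))[0]
print(signed)  # -6

# Every reported row size exceeds the one-page limit by an order of
# magnitude or more, consistent with corrupted length headers.
print(all(s > max_row for s in row_sizes))  # True
```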
Regards,
Filip Hrbek
- [BUGS] Partially corrupted table Filip Hrbek