[BUGS] Bug #780: Timestamp shifted by one minute

2002-09-26 Thread pgsql-bugs

Rob Abbot ([EMAIL PROTECTED]) reports a bug with a severity of 1
The lower the number the more severe it is.

Short Description
Timestamp shifted by one minute

Long Description
The output of the code below is:
25/09/02 23:59:00   *NOT*   26/09/02 00:00:00
which is time-shifted backward by 1 minute from the inserted timestamp. In this case 
the timestamp is shifted back to the previous day!!!

Details of set-up:
Dell PowerEdge 1650, 1.13 GHz Pentium III, 512 KB cache, 512 MB RAM
RedHat Linux 7.1

Result of SELECT version();
"PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96"

Details of build:
(Comments below are from my notes)
Installed postgresql 7.2.1 from source:
the default install is to /usr/local/pgsql/
unpacked the tarball in /home/rob as user 'rob', giving /home/rob/postgresql-7.2.1
./configure
gmake (takes forever)
gmake check
su to root and get back to /home/rob/postgresql-7.2.1
gmake install
gmake install-all-headers

This bug appears similar to:
http://archives.postgresql.org/pgsql-bugs/2001-04/msg00072.php

I have another similar setup which works perfectly: the binary was built with the same 
source tree and options, however the differences between the two setups are:
OS: Suse 7.1
Compiler: GCC 2.95.2
CPU: AMD Athlon 533MHz, 256MB RAM

I am also running PostgreSQL on a Compaq Proliant (RedHat 6.0, PostgreSQL 7.0.3, 
compiled with gcc egcs-2.91.66); again, this does not show the timestamp problem.

Best wishes,
Rob


Sample Code
create table timetester (timefield timestamp);
insert into timetester values ('2002-09-26 00:00:00.00');
select * from timetester;

No file was uploaded with this report


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [BUGS] Bug #780: Timestamp shifted by one minute

2002-09-26 Thread Tom Lane

[EMAIL PROTECTED] writes:
> Timestamp shifted by one minute [ in locally-built PG ]

Do you have CFLAGS set in your environment, and if so how?

This sounds quite a bit like the known problems with -ffast-math,
though I have not heard this exact complaint before.  If you built
with -ffast-math then try without.

regards, tom lane




[BUGS] FW: 7.3b1 : DROP DOMAIN CASCADE CAN LEAVE A TABLE WITH NO COLUMNS

2002-09-26 Thread Tim Knowles

Hello,

I have found that it is possible for a user with create table permission to crash
the 7.3b1 backend.  The crash occurs because it is possible to have a table
with no columns after a DROP DOMAIN CASCADE.   Create a table with one
column (with that column's type specified as a domain), then issue
DROP DOMAIN ... CASCADE.  The column will be dropped from the table,
leaving the table with no columns.  It is then possible (not surprisingly) to
crash the backend by querying that table using a wildcard.

Running the SQL listed at the bottom twice will cause a crash with the
following log entries:

WARNING:  ShmemInitStruct: ShmemIndex entry size is wrong
FATAL:  LockMethodTableInit: couldn't initialize LockTable

Upon restarting the server the following message appears in the log, each
time with a different offset:

LOG:  ReadRecord: unexpected pageaddr 0/BA36A000 in log file 0, segment 191,
offset 3579904

I am assuming this is a consequence of the abnormal termination, but I
thought it worth mentioning for completeness. It also only appears if the
SQL below is wrapped up in a transaction.

To recreate the problem enter the following SQL in psql:-

BEGIN;

CREATE DOMAIN d1 int;

CREATE TABLE t1 (col_a d1);

-- IF YOU DROP DOMAIN d1 CASCADE then col_a WILL BE DROPPED AND THE TABLE t1
-- WILL HAVE NO COLUMNS

DROP DOMAIN d1 CASCADE;

-- TABLE t1 NOW HAS NO COLUMNS
-- THIS PROBLEM CAN ALSO BE CREATED BY DROP SCHEMA .. CASCADE AS WELL (AS
-- LONG AS THE TABLE IS NOT IN THE SCHEMA BEING DROPPED AND THEREFORE NOT
-- DROPPED AS PART OF THE CASCADE).

-- THE FOLLOWING SELECT WILL CRASH THE BACKEND

SELECT t1.* FROM t1;


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[BUGS] Bug #781: Broken Indices in 7.3 Beta

2002-09-26 Thread pgsql-bugs

Christoph Nelles ([EMAIL PROTECTED]) reports a bug with a severity of 2

Short Description
Broken Indices in 7.3 Beta

Long Description
Hello everybody!


I am currently using 7.3 Beta (my data is not so important ;)) and it
frequently breaks one index during UPDATEs. It's a unique index, and
there are only 500 records in the table, which are updated every 6
minutes.  But at some point the update stops working, as
PostgreSQL says that it "cannot insert a duplicate key into a unique
index". After reindexing the index, everything runs smoothly again
for a few hours.
I will attach the Query, error message and the table definition below.

It's not much information I'm giving you, as I don't know what is relevant
to you.  With version 7.2.1 this error never occurred, so
it must be a bug within this beta release. Please tell me what
information might be relevant to you, or whether you already know of this
bug.

Please email me directly, as I am currently not subscribed to this
particular list.


I tried to send this message to the mailing list, but somehow it never showed up there 
:(
In the meantime I have probably found the source of the problem: the UPDATE 
probably collides with a VACUUM ANALYZE of the DB, which often runs at the same 
time, as both are crontab jobs (UPDATE every 6 minutes, VACUUM every 60 minutes).

Christoph Nelles

Sample Code
Log excerpt and query :

ERROR:  Cannot insert a duplicate key into unique index bnt_planets_pkey
LOG:  statement: UPDATE bnt_planets SET
  organics = GREATEST(organics + (LEAST(colonists, 1) * 0.005 * 0.5 * prod_organics / 100.0 * 3.004502250375)
                      - (LEAST(colonists, 1) * 0.005 * 0.05 * 3.004502250375), 0),
  ore = ore + (LEAST(colonists, 1) * 0.005) * 0.25 * prod_ore / 100.0 * 3.004502250375,
  goods = goods + (LEAST(colonists, 1) * 0.005) * 0.25 * prod_goods / 100.0 * 3.004502250375,
  energy = energy + (LEAST(colonists, 1) * 0.005) * 0.5 * prod_energy / 100.0 * 3.004502250375,
  colonists = LEAST((colonists + (colonists * 0.0005 * 3.004502250375)), 1),
  credits = credits * 1.001500750125 + (LEAST(colonists, 1) * 0.005) * 3
            * (100.0 - prod_organics - prod_ore - prod_goods - prod_energy - prod_fighters - prod_torp)
            / 100.0 * 3.004502250375
  WHERE (organics + (LEAST(colonists, 1) * 0.005 * 0.5 * prod_organics / 100.0 * 3.004502250375)
         - (LEAST(colonists, 1) * 0.005 * 0.05 * 3.004502250375) >= 0)
ERROR:  current transaction is aborted, queries ignored until end of transaction block

table schema :

blacknova=# \d bnt_planets
                           Table "public.bnt_planets"
    Column     |          Type          |                   Modifiers
---------------+------------------------+-----------------------------------------------
 planet_id     | integer                | not null default nextval('"bnt_planets_planet_id_seq"'::text)
 sector_id     | integer                | not null default '0'
 name          | character varying(256) |
 organics      | bigint                 | not null default '0'
 ore           | bigint                 | not null default '0'
 goods         | bigint                 | not null default '0'
 energy        | bigint                 | not null default '0'
 colonists     | bigint                 | not null default '0'
 credits       | bigint                 | not null default '0'
 fighters      | bigint                 | not null default '0'
 torps         | bigint                 | not null default '0'
 owner         | integer                | not null default '0'
 corp          | integer                | not null default '0'
 base          | character(1)           | not null default 'N'
 sells         | character(1)           | not null default 'N'
 prod_organics | real                   | not null default '20.0'
 prod_ore      | real                   | not null default '20.0'
 prod_goods    | real                   | not null default '20.0'
 prod_energy   | real                   | not null default '20.0'
 prod_fighters | real                   | not null default '10.0'
 prod_torp     | real                   | not null default '10.0'
 defeated      | character(1)           | not null default 'N'
Indexes: bnt_planets_pkey primary key btree (planet_id),
         bnt_planets_corp_idx btree (corp),
         bnt_planets_owner_idx btree ("owner")
Check constraints: "$1" ((base = 'Y'::bpchar) OR (base = 'N'::bpchar))
                   "$2" ((sells = 'Y'::bpchar) OR (sells = 'N'::bpchar))
                   "$3" ((defeated = 'Y'::bpchar) OR (defeated = 'N'::bpchar))


No file was uploaded with this report


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[BUGS] Bug #782: clog files disappear after 'vacuum;'

2002-09-26 Thread pgsql-bugs

Steven Wilton ([EMAIL PROTECTED]) reports a bug with a severity of 2

Short Description
clog files disappear after 'vacuum;'

Long Description
We are having a problem with our pg_clog files disappearing after a vacuum command is 
run.  We have a database called "traffic" in which tables are created every half hour, 
and once each table is older than 8 hours, we drop it.  The 'postgres' user does 
not have any access to the traffic database, and the 'collector' user only has access 
to the traffic database.

If either the 'collector' or 'postgres' user runs a vacuum command, most of the files 
in the pg_clog directory disappear.   After the vacuum, if you try to drop certain 
tables, you get the following error:

traffic=> drop table raw_traf_573808;
FATAL 2:  open of /var/lib/postgres/data/pg_clog/0005 failed: No such file or directory
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.


If I disconnect from postgres and recreate the missing pg_clog file (from /dev/zero), I get the 
following error:

traffic=> drop table raw_traf_573808;
ERROR:  DeleteTypeTuple: type "raw_traf_573808" does not exist

traffic=> \d raw_traf_573808
FATAL 2:  open of /var/lib/postgres/data/pg_clog/ failed: No such file or directory
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

At this stage, I need to create all the clog files (up to 0005).  Once I create 
the clog files, I get the following output:

traffic=> \d raw_traf_573808
            Table "raw_traf_573808"
        Column         |   Type   |      Modifiers
-----------------------+----------+----------------------
 start_time            | integer  | not null
 stop_time             | integer  | not null
 source_ip             | integer  | not null
 dest_ip               | integer  | not null
 source_port           | smallint | not null
 dest_port             | smallint | not null
 router_ip             | integer  | not null
 input_interface_type  | smallint | not null
 output_interface_type | smallint | not null
 bytes                 | integer  | not null
 swap                  | boolean  | not null default '0'
 flags                 | smallint | not null

traffic=> drop table raw_traf_573808;
ERROR:  DeleteTypeTuple: type "raw_traf_573808" does not exist

If I delete the postgres installation and re-create all tables, the system works fine 
for a while (ie the tables get created and dropped without a problem).  After the 
postgres system has been running for a while, we eventually lose one of the clog 
files, and end up with tables in the database that we cannot drop.

We are running postgres 7.2.1 under a debian 3.0(woody) linux system with a 2.4.18 
kernel.

Any help would be appreciated.

thanks

Steven

Sample Code


No file was uploaded with this report





Re: [BUGS] Bug #782: clog files disappear after 'vacuum;'

2002-09-26 Thread Tom Lane

[EMAIL PROTECTED] writes:
> We are having a problem with our pg_clog files disappearing after a
> vacuum command is run.

VACUUM is supposed to remove no-longer-needed pg_clog files; space
reclamation is what it's all about, after all.  Your problem is more
correctly thought of as "there shouldn't be any more references in my
database to old transaction numbers, once VACUUM is done".

We just today identified and fixed a problem that might allow old
transaction references to escape, but that bug could only be tickled if
a database crash occurs shortly after a VACUUM FULL.  Unless you are
crashing your server on a routine basis, it seems like that doesn't
describe your problem.

Can you provide a self-contained script to provoke the problem you are
seeing?  Or let someone into your system to examine things in detail?

regards, tom lane




[BUGS] Bug #783: WINDOWS 2000: invalid primary checkpoint record

2002-09-26 Thread pgsql-bugs

Le Quang Anh Ngoc ([EMAIL PROTECTED]) reports a bug with a severity of 2

Short Description
WINDOWS 2000: invalid primary checkpoint record

Long Description
Hello,
I'm using Windows 2000, PostgreSQL 7.2.1-win32.

While the system was running, the power was interrupted and the machine rebooted. Then I ran 
pgsvmgr.exe again; from a client machine I tried to run psql.exe and log in, but I cannot. 

Please help me restore the data.
Thanks a lot.



Sample Code
*The content of file pg.errors :

DEBUG:  database system was interrupted at 2002-09-20 21:03:14 JST
DEBUG:  ReadRecord: out-of-sequence SUI 2 (after 3) in log file 1, segment 108, offset 
8863744
DEBUG:  invalid primary checkpoint record
DEBUG:  open of C:\postgresql-7.2.1-win32\bin\..\data\pg_xlog/00010021 (log 
file 1, segment 33) failed: No such file or directory
DEBUG:  invalid secondary checkpoint record
FATAL 2:  unable to locate a valid checkpoint record


No file was uploaded with this report


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html