[GENERAL] Constraint on an aggregate? (need help writing trigger, i think..)

2006-07-26 Thread Isak Hansen

I have the following two tables:

create table a (
 id serial primary key
);
create table b (
 id serial primary key,
 a_id int4 references a (id),
 amount decimal(16, 2)
);

and would like a constraint to guarantee that "sum(b.amount) = 0 group
by b.a_id".



From my testing so far, and an earlier thread on this list, I think a
trigger is the way to go.

Problem is, I have no idea where to go from here. Getting the model
nearly to 3NF and writing some simple queries is about the extent
of my db-related skillset..

Anyone feel the calling..? ;)


Also, how would this kind of check affect performance? Table 'b' is
our ledger table, the busiest one in the app I assume, while 'a' groups
related transactions and holds common info. We inherited the term
"voucher" for 'a'; does anyone know whether that is appropriate?

If anyone is interested, the actual tables are here:
a: http://trac.lodo.no/wiki/vouchers
b: http://trac.lodo.no/wiki/voucher_lines


Any feedback appreciated,
Isak

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] ECPG. Badly stuck

2006-07-26 Thread Joachim Wieland
On Tue, Jul 25, 2006 at 02:46:02PM -0400, Jasbinder Bali wrote:
> Tried writing the following code but it doesn't give me any result.
> Don't know if I have to do something else apart from what I've already done.

There are many ways your program can fail before actually reaching the
select line but you won't notice because your program fails to check for
errors. Do at least insert a

exec sql whenever sqlerror do sqlprint();

before connecting to the server. For more information on error handling,
see

http://www.postgresql.org/docs/8.1/interactive/ecpg-errors.html


While developing you can also switch on debug output with the ECPGdebug
function described here:

http://www.postgresql.org/docs/8.1/interactive/ecpg-library.html
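
Combining both suggestions, a minimal ECPG skeleton might look like the
sketch below. The connection target, table, and column are placeholders,
not from the original post:

```c
/* ECPG sketch -- preprocess with ecpg, then compile the generated C.
   "mydb" and "mytable" are hypothetical placeholders. */
#include <stdio.h>

EXEC SQL INCLUDE sqlca;

int main(void)
{
    EXEC SQL BEGIN DECLARE SECTION;
    int cnt;
    EXEC SQL END DECLARE SECTION;

    /* print any SQL error instead of failing silently */
    EXEC SQL WHENEVER SQLERROR DO sqlprint();

    ECPGdebug(1, stderr);      /* client-side debug output while developing */

    EXEC SQL CONNECT TO mydb;
    EXEC SQL SELECT count(*) INTO :cnt FROM mytable;
    printf("rows: %d\n", cnt);

    EXEC SQL DISCONNECT;
    return 0;
}
```

With the WHENEVER handler installed before CONNECT, a failed connection or
query prints the sqlca error state instead of producing silent empty output.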


Joachim



Re: [GENERAL] ECPG. Badly stuck

2006-07-26 Thread Michael Meskes
On Tue, Jul 25, 2006 at 02:46:02PM -0400, Jasbinder Bali wrote:
> Forget about cursors. Singleton queries are also not working.

Great statement. 

> Tried writing the following code but it doesn't give me any result.
> Don't know if I have to do something else apart from what I've already done.

How about giving us at least the table definition and the system
information? Yes, I know I asked for those in my last email too. And
please tell us what makes you sure it doesn't work: just no output,
or empty output?

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!



Re: [GENERAL] Constraint on an aggregate? (need help writing trigger, i think..)

2006-07-26 Thread Kenneth Downs

Isak Hansen wrote:

Hello Isak!  I was speaking to you about this on comp.databases, glad to 
see you here on the Postgres group.


What you want to do is easy enough in the single case, but can get 
complicated if you do a lot of it.



We have a framework that is freely available that does exactly what you 
are trying to do, and works against PostgreSQL and is written in PHP.  
It writes all of the triggers for you to save the hassle and prevent 
mistakes. We even have on our website an example of the kind of 
constraint you are doing:


http://docs.secdat.com/index.php?gp_page=x_docview&gppn=Customer+Credit+Limit


If you wish to code this by hand, here is what you must do:

1)  Add column "amount" to table A.
2)  Add insert, update, and delete triggers to B that increase and
decrease the value of A.amount.
3)  Add an update trigger to A (insert and delete are not necessary) that
enforces your constraint, or just do it as a check constraint (I
personally prefer triggers).
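
A rough sketch of steps 1 and 2, using the table names from the original
post; the maintained column, function name, and trigger name are
assumptions, and concurrency/locking is ignored:

```sql
-- Sketch only: assumes a.amount is added to hold the running sum.
ALTER TABLE a ADD COLUMN amount decimal(16,2) NOT NULL DEFAULT 0;

CREATE OR REPLACE FUNCTION b_maintain_a_amount() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE a SET amount = amount + NEW.amount WHERE id = NEW.a_id;
    ELSIF TG_OP = ''UPDATE'' THEN
        UPDATE a SET amount = amount - OLD.amount + NEW.amount
            WHERE id = NEW.a_id;
    ELSE  -- DELETE
        UPDATE a SET amount = amount - OLD.amount WHERE id = OLD.a_id;
    END IF;
    RETURN NULL;  -- AFTER trigger; return value is ignored
END;
' LANGUAGE plpgsql;

CREATE TRIGGER b_sum AFTER INSERT OR UPDATE OR DELETE ON b
    FOR EACH ROW EXECUTE PROCEDURE b_maintain_a_amount();
```

Step 3 then only has to look at a.amount instead of re-aggregating b.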


BTW, is table A supposed to be a GL batch summary table or something?  
Why must it always be zero?  If it is a GL batch table, can it be out of 
balance while people are actually entering the data?  Should the 
constraint only be enforced when the batch is closed?











Re: [GENERAL] sequences vs oids as primary keys

2006-07-26 Thread Merlin Moncure

On 7/25/06, Kenneth Downs <[EMAIL PROTECTED]> wrote:

craigp wrote:

>1) does it make sense (and would it be possible) to make a rule which would,
>say, somehow write into the oid field of a tuple to be returned by lastoid? i'm
>assuming here that the database would not have oid's enabled.
>
We do this in a trigger.  We assign the NEXTVAL to a variable, write
that to the row, then raise its value as a notice.  Then we just
retrieve the notice.


Another way to do this is to make a dynamic plpgsql function that takes
an insert statement and a sequence name as parameters and returns the
currval on the way out.  To the OP: I would suggest that you can inline
currval into insert statements following the original insert, e.g.

insert into master default values;
insert into detail values (currval('master_id_seq'), foo, bar);
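
One possible shape for such a helper function; the name and the exact
signature are illustrative, not from the original post:

```sql
-- Sketch: run an arbitrary insert statement, then return the id the
-- named sequence just produced within this session.
CREATE OR REPLACE FUNCTION insert_returning_id(text, text)
RETURNS bigint AS '
BEGIN
    EXECUTE $1;
    RETURN currval($2);
END;
' LANGUAGE plpgsql;

-- usage (note the doubled quotes inside the statement argument):
-- SELECT insert_returning_id(
--     ''insert into master default values'', ''master_id_seq'');
```

currval is session-local, so this is safe under concurrent inserts.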

merlin



[GENERAL] Database Restore errors

2006-07-26 Thread Tom Kinard








I am attempting to restore a database. When I run pg_restore I get the
following error:

pg_restore: [archiver (db)] error returned by PQendcopy: ERROR: invalid
byte sequence for encoding "UTF8": 0x92

The dump was generated on a Mac and I am doing the restore on a Fedora
Core 4 box. The dump was done using the tar format.

The restore command I used is as follows:

pg_restore -d bugs -U bugs -Ft bugs-backup

I am running version 8.1.4.

The output of psql -U bugs -c 'show client_encoding' is UTF8.

locale gives items like:

LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"

Any help will be greatly appreciated.

Tom Kinard

 








[GENERAL] loop with circular updates

2006-07-26 Thread Jessica M Salmon

Hi All.

I'm writing a plpgsql function that creates a table and loops over the
items in that table, using a FOR ... IN EXECUTE ... loop. The thing is, on
each iteration I update the table over which I am looping. Sometimes, the
result of this update makes it no longer necessary/desirable to process
subsequent records in the loop.

My results are confusing me, and I was wondering if it is possible that my
method doesn't work considering how postgres holds query results in memory
for loops. I read somewhere that it caches several records at a time,
retrieving more as needed. If this is the case (is it?), and I updated a
field of an item that is already cached, would I then see that change when
I retrieved that item in the loop?

I feel like this is a difficult concept to illustrate with words. Perhaps
my pl/pgsql vocabulary is still too small for asking this question, but I'm
hoping that someone here can see my concern and tell me whether or not I am
barking up the right tree, so to speak.
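
For reference, the pattern in question looks roughly like the sketch
below (table, column, and function names are made up). In PL/pgSQL, the
query driving a FOR ... IN EXECUTE loop runs with the snapshot and
command id it started with, so updates made inside the loop body are not
visible in the rows the loop has yet to deliver:

```sql
CREATE OR REPLACE FUNCTION process_items() RETURNS void AS '
DECLARE
    rec record;
BEGIN
    FOR rec IN EXECUTE ''SELECT id FROM work_items ORDER BY id'' LOOP
        -- This update is NOT seen by rows still to be delivered
        -- by the loop query above; it only affects later statements.
        UPDATE work_items SET done = true WHERE id = rec.id;
    END LOOP;
END;
' LANGUAGE plpgsql;
```

If later iterations must see the updates, one workaround is to re-test
each row with a fresh SELECT at the top of the loop body.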

Thanks,
Meghan




[GENERAL] Error using OLD and NEW records

2006-07-26 Thread Germán Hüttemann Arza

Hi, I'm writing because I get an error when I try to use OLD and NEW records in a trigger procedure. The exact error messages were:

NEW used in query that is not in a rule
PL/pgSQL function "audit_persona" line 6 at SQL statement

OLD used in query that is not in a rule
PL/pgSQL function "audit_persona" line 3 at SQL statement

I receive them when the function is executed. It is called by a trigger:

CREATE TRIGGER audit_persona_all
AFTER INSERT OR DELETE OR UPDATE
ON persona
FOR EACH ROW
EXECUTE PROCEDURE audit_persona();

My function "audit_persona" is the follow:

CREATE FUNCTION audit_persona() RETURNS TRIGGER AS 
'BEGIN
	IF (TG_OP = ''DELETE'') THEN
		INSERT INTO audit_persona SELECT OLD.*, user, ''D'', now();
		RETURN OLD;
	ELSE
		INSERT INTO audit_persona SELECT NEW.*, user, substr(TG_OP, 1, 1), now();
		RETURN NEW;
	END IF;
	RETURN NULL;
END;'
LANGUAGE plpgsql;

It is very similar to the example for auditing a table shown in the pgAdmin III documentation, but it doesn't work at all.
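
For what it's worth, that error message historically came from PL/pgSQL
versions that could not expand the OLD.*/NEW.* wildcard inside an SQL
statement, leaving the bare NEW/OLD reference to the main parser. A
common workaround was to list the columns explicitly; the column names
below are hypothetical and must be replaced with persona's real columns:

```sql
-- Hypothetical column list; substitute persona's actual columns.
INSERT INTO audit_persona
    VALUES (OLD.id, OLD.nombre, user, 'D', now());
```

(Inside the quoted function body, the 'D' literal would be written
''D''.)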

Could you help me with this problem?

Thanks,

-- 
Germán Hüttemann Arza
CNC - Centro Nacional de Computación
UNA - Universidad Nacional de Asunción
Campus Universitario, San Lorenzo - Paraguay
http://www.cnc.una.py  - Tel.: 595 21 585550



Re: [GENERAL] "Ghost" colmumn with primary key

2006-07-26 Thread chris997
Yes, you were right, adding "" solved the problem
many thanx!




[GENERAL] wrong timestamp

2006-07-26 Thread xyzyx

Hello,
I use PostgreSQL 8.1.4 on Debian;
when I run 'select current_timestamp' or 'select now()' in
pgAdmin, I always get "2000-01-01 03:02:16.105976+03".

When I run 'select timeofday()' I get the right time.

What's wrong?
--
xyzyx.



Re: [GENERAL] loop with circular updates

2006-07-26 Thread Kenneth Downs

Jessica M Salmon wrote:


Hi All.

I'm writing a plpgsql function that creates a table and loops over the
items in that table, using a FOR ... IN EXECUTE ... loop. The thing is, on
each iteration I update the table over which I am looping. Sometimes, the
result of this update makes it no longer necessary/desirable to process
subsequent records in the loop.
 

Can you tell us more about the app?  Sounds like an ERP allocation or 
something like that.






Re: [GENERAL] Constraint on an aggregate? (need help writing trigger, i think..)

2006-07-26 Thread Kenneth Downs

Isak Hansen wrote:


Each entry in 'A' belongs to a single 'business event'. E.g.
registering a phone bill modifies your accounts payable, phone
expenses and vat paid accounts. Those transactions better balance out.

There's no 'A' table in the system we base ours on, you'd just have X
lines with an equal marker field, but it seemed like a good target for
normalization as each batch of lines had a lot of common data.


The journal entries are always balanced.

Ideally we would store the data somewhere else during entry, and only
let users save their data when they balanced out, but today we save on
every submit (web app) and use some wonky heuristics to balance them
out. (not a technical issue..)

Either way, the db should reject any commit which doesn't sum to zero.


A simple way to do this without a lot of tables is as follows:

1)  Add a column "closed char(1)" to table A
2)  Do not enforce the constraint if closed="N".  This allows data entry 
of individual lines.

3)  Do not allow closed="Y" unless total=0
4)  Once closed="Y", disallow all updates (prevents changes to closed batch)
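
A rough sketch of points 2-4 as a single BEFORE UPDATE trigger on 'a';
the function and trigger names are invented, and this ignores the
question of who may reopen a batch:

```sql
-- Sketch: reject closing a voucher whose lines do not sum to zero,
-- and freeze a voucher once it is closed.
CREATE OR REPLACE FUNCTION a_check_close() RETURNS trigger AS '
BEGIN
    IF OLD.closed = ''Y'' THEN
        RAISE EXCEPTION ''voucher % is closed'', OLD.id;
    END IF;
    IF NEW.closed = ''Y'' AND
       (SELECT coalesce(sum(amount), 0) FROM b WHERE a_id = NEW.id) <> 0
    THEN
        RAISE EXCEPTION ''voucher % does not balance'', NEW.id;
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER a_close BEFORE UPDATE ON a
    FOR EACH ROW EXECUTE PROCEDURE a_check_close();
```

A matching trigger on 'b' would similarly reject inserts, updates, and
deletes of lines whose parent voucher is already closed.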


In the "shameless plug" department, our website also has an example of 
how to do this with our tool, email me off-list if you want more info on 
that.




Re: [GENERAL] Mapping/DB Migration tool

2006-07-26 Thread Reece Hart
On Wed, 2006-07-26 at 15:18 +0200, Karsten Hilbert wrote:
> > For some kinds of changes, and especially those that make destructive
> > in-place changes that might require debugging, I've written views which
> > generate the SQL statements to execute.
> Would you mind giving a small example? That sounds really useful.

I don't have any examples of the destructive kind available, but here's
a non-destructive one.

I once discovered that deleting a primary key was taking forever.  I
finally tracked this down to the lack of an index on one of the many
tables which contained FK references to that PK.  The pg_* views contain
all of the necessary data to identify these cases.  I wrote such views
to select all FK-PK pairs with index status, and another to show those
without indexes on the FK.  For example:

[EMAIL PROTECTED]> select * from pgtools.foreign_keys;
 fk_namespace |  fk_relation  |   fk_column   | fk_indexed | pk_namespace | pk_relation |   pk_column   | pk_indexed | ud | c_namespace |        c_name
--------------+---------------+---------------+------------+--------------+-------------+---------------+------------+----+-------------+----------------------
 unison       | p2gblatalnhsp | p2gblathsp_id | t          | unison       | p2gblathsp  | p2gblathsp_id | t          | cc | unison      | p2gblathsp_id_exists
 unison       | p2gblatalnhsp | p2gblataln_id | t          | unison       | p2gblataln  | p2gblataln_id | t          | cc | unison      | p2gblataln_id_exists
 unison       | p2gblathsp    | pseq_id       | t          | unison       | pseq        | pseq_id       | t          | cc | unison      | pseq_id_exists

[EMAIL PROTECTED]> select * from pgtools.foreign_keys_missing_indexes limit 5;
 fk_namespace | fk_relation |  fk_column  | fk_indexed | pk_namespace | pk_relation |  pk_column  | pk_indexed | ud | c_namespace |     c_name
--------------+-------------+-------------+------------+--------------+-------------+-------------+------------+----+-------------+-----------------
 gong         | node        | alias_id    | f          | gong         | alias       | alias_id    | t          | cn | gong        | alias_id_exists
 taxonomy     | node        | division_id | f          | taxonomy     | division    | division_id | t          | cc | taxonomy    | $1
 mukhyala     | pao         | tax_id      | f          | mukhyala     | mytax       | tax_id      | t          | cr | mukhyala    | pao_tax_id_fkey


Then, something like this:
[EMAIL PROTECTED]> select 'create index '||fk_relation||'_'||fk_column||'_idx on '||fk_relation||'('||fk_column||');' from pgtools.foreign_keys_missing_indexes;
                      ?column?
-----------------------------------------------------
 create index node_alias_id_idx on node(alias_id);
 create index node_division_id_idx on node(division_id);
 create index pao_tax_id_idx on pao(tax_id);

Finally, I used psql to generate the script and execute it:
$ psql -Atc 'select ' | psql -Xa
(I'm skipping the quoting hassle, which you could circumvent by creating
a view to build the script.)


In case you're interested in these "pgtools" views, I've uploaded them to
http://harts.net/reece/pgtools/.

(Note: I created these views a long time ago with the intent to release
them, but I never did so.  I think there's now a pgtools or pg_tools
package on sourceforge, but that's unrelated.)


-Reece
-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0




Re: [GENERAL] wrong timestamp

2006-07-26 Thread Michael Fuhr
On Tue, Jul 25, 2006 at 05:54:00PM +0400, xyzyx wrote:
> I use PostgreSQL 8.1.4 on Debian;
> when I run 'select current_timestamp' or 'select now()' in
> pgAdmin, I always get "2000-01-01 03:02:16.105976+03".
> When I run 'select timeofday()' I get the right time.

Are you getting the actual value "2000-01-01 03:02:16.105976+03"
or is that just a hypothetical example to show that the value doesn't
change?  If the latter then see the comments about current_timestamp
in the documentation:

http://www.postgresql.org/docs/8.1/interactive/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT

"It is important to know that CURRENT_TIMESTAMP and related functions
return the start time of the current transaction; their values do
not change during the transaction.  This is considered a feature:
the intent is to allow a single transaction to have a consistent
notion of the 'current' time, so that multiple modifications within
the same transaction bear the same time stamp."
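
The difference is easy to see inside a single explicit transaction
(pause between the statements, or on 8.2+ use pg_sleep):

```sql
BEGIN;
SELECT current_timestamp;  -- the transaction's start time
-- ...wait a moment...
SELECT current_timestamp;  -- same value as above
SELECT timeofday();        -- wall-clock time; advances during the transaction
COMMIT;
```

pgAdmin's query tool may run statements inside one transaction, which
would explain why now() appeared frozen while timeofday() moved.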

-- 
Michael Fuhr



Re: [GENERAL] Database Restore errors

2006-07-26 Thread Woody Woodring



I am running into this issue as well.  I am upgrading from 7.4.X to
8.1.4.  My databases were "ASCII" (SQL_ASCII) encoded in 7.4.X.

I am editing the dumps to remove the non-UTF8 characters, but you could
also create your new db with "ASCII" encoding and it should import fine.
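
A third option is to recode the dump instead of editing it by hand. The
0x92 byte is typically the Windows-1252 right single quote, which iconv
can convert to valid UTF-8. The full pipeline below is hypothetical
(file and database names taken from the original post); the isolated
conversion step shows the idea:

```shell
# Hypothetical full pipeline: turn the tar archive back into SQL,
# recode it, and feed it to psql:
#   pg_restore -Ft bugs-backup -f - | iconv -f WINDOWS-1252 -t UTF-8 | psql -U bugs -d bugs

# The conversion step in isolation (\222 is octal for byte 0x92):
printf 'don\222t' | iconv -f WINDOWS-1252 -t UTF-8
```

This keeps the text (as a proper U+2019 apostrophe) rather than deleting
it from the dump.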
 
Hope this helps,
 
Woody
IGLASS Networks
 




[GENERAL] Database Design Theory - PostgreSQL Custom Types

2006-07-26 Thread Redefined Horizons

Merlin has helped me understand custom data types in PostgreSQL a lot
better, but I have some more questions I was hoping the list might be
able to help with.

I'm trying to come up with some good rules of thumb that can help me
determine when it is practical to implement a custom type on
PostgreSQL.

After talking to Merlin, I've come up with my first rule:

If the object or entity you are modeling could be represented by more
than one table in the database, don't use a custom data type.
Represent the object with multiple tables and link the tables together
with relationships via foreign keys.

Here is another situation that has me puzzled. I need to store
information about angles in my database. I want to store the angles in
Surveyors units, namely degrees, minutes, and seconds. So none of the
built in numeric types will work, unless I want to do a conversion,
which I don't.

I know I could represent angles in a table with 4 columns. A bigserial
or serial column for the primary key, a degrees column, a minutes
column, and a seconds column.

However, the rule of thumb I mentioned above doesn't apply, because I
can store all the information about an angle in one table. I know I
could make a custom data type for angles. Then I can just store the
angles in a column of any other table, instead of using a foreign key
ties to an angles table.
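
For comparison, the two designs might look like this; all names are
invented for illustration (composite types as column types require
PostgreSQL 8.0 or later):

```sql
-- Option 1: a separate table, referenced by foreign key.
CREATE TABLE angles (
    id      serial PRIMARY KEY,
    degrees int,
    minutes int,
    seconds numeric(4,2)
);

-- Option 2: a composite type stored inline in the referring table.
CREATE TYPE angle AS (
    degrees int,
    minutes int,
    seconds numeric(4,2)
);

CREATE TABLE survey_points (
    id      serial PRIMARY KEY,
    bearing angle
);

-- Row syntax for composite values:
-- INSERT INTO survey_points (bearing) VALUES (ROW(45, 30, 15.5));
-- SELECT (bearing).degrees FROM survey_points;
```

Note that plain composite types carry no constraints of their own
(e.g. minutes < 60), whereas the table version can enforce them with
CHECK constraints.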

Both methods will work, but which one is the best? Why is that method the best?

Thanks for any discussion or information.

Scott Huey



[GENERAL] copy losing information

2006-07-26 Thread Silvela, Jaime \(Exchange\)








This is the first time I post to the list. I've done a brief search and
didn't find my issue treated already, so here it goes. Apologies if this
has been reported before.

I have a pretty big file, around 2 million rows, in tab-separated
format, with 4 columns, that I read into a table in Postgres using the
COPY command.

I've started to notice missing info sometimes. I'll truncate the table,
read from the file, and notice that sometimes there are fewer rows in
the table than in the file. This is not easily reproducible. If I
truncate again and reread, I may get all the lines, or a different
number of missing lines.

I concluded that there was a bug in the COPY command, and wrote a
replacement in Ruby, using the pure-ruby Postgres-pr library. I run into
the same issue. Some lines seem to be dropped, but no exceptions or SQL
errors are reported by the program. To improve throughput, my Ruby
program connects to the server just once and sends the INSERT statements
to the server in batches of 2000.

I've checked that the file doesn't contain any SQL escape sequences or
anything else that would invalidate an INSERT.

The server runs 8.1.3 on Linux 2.6.5 on an Intel platform. The imports
are being run from Windows machines on the same network.

Has somebody seen this before?

Thanks,
Jaime








***
Bear Stearns is not responsible for any recommendation, solicitation,
offer or agreement or any information about any transaction, customer
account or account activity contained in this communication.

Bear Stearns does not provide tax, legal or accounting advice.  You
should consult your own tax, legal and accounting advisors before
engaging in any transaction. In order for Bear Stearns to comply with
Internal Revenue Service Circular 230 (if applicable), you are notified
that any discussion of U.S. federal tax issues contained or referred to
herein is not intended or written to be used, and cannot be used, for
the purpose of:  (A) avoiding penalties that may be imposed under the
Internal Revenue Code; nor (B) promoting, marketing or recommending to
another party any transaction or matter addressed herein.
***



Re: [GENERAL] copy losing information

2006-07-26 Thread Tom Lane
"Silvela, Jaime \(Exchange\)" <[EMAIL PROTECTED]> writes:
> I've started to notice missing info sometimes. I'll truncate the table,
> read from the file, and notice that sometimes there are fewer rows in
> the table than in the file.

Have you made any attempt to determine *which* rows are missing?
I'm wondering about sloppy quoting allowing lines to get joined,
or some such.

regards, tom lane




Re: [GENERAL] copy losing information

2006-07-26 Thread Silvela, Jaime \(Exchange\)
No lines contain quotes. And the same file will sometimes be fully
imported, and sometimes lose data. I'm thinking that under heavy loads,
the database is discarding INSERTS.

thanks

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 26, 2006 4:14 PM
To: Silvela, Jaime (Exchange)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] copy losing information 

"Silvela, Jaime \(Exchange\)" <[EMAIL PROTECTED]> writes:
> I've started to notice missing info sometimes. I'll truncate the
table,
> read from the file, and notice that sometimes there are less rows in
the
> table than in the file.

Have you made any attempt to determine *which* rows are missing?
I'm wondering about sloppy quoting allowing lines to get joined,
or some such.

regards, tom lane






Re: [GENERAL] Database Design Theory - PostgreSQL Custom Types

2006-07-26 Thread Martijn van Oosterhout
On Wed, Jul 26, 2006 at 11:09:22AM -0700, Redefined Horizons wrote:
> I'm trying to come up with some good rules of thumb that can help me
> determine when it is practical to implement a custom type on
> PostgreSQL.

To me a custom datatype is useful when there is a set of values where
you have operations that map to other values in the set.

So numbers and strings are good because you have operations on them
that produce more numbers and strings. Your example with angles is good
also, since you can add and subtract them. Timestamps, intervals and
geometric types are also good.

Most things in the world don't work that way: you can't take two
customers and make a third. I'm sure there's a mathematical way to
express this better but I hope this helps,

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] copy losing information

2006-07-26 Thread Alvaro Herrera
Silvela, Jaime (Exchange) wrote:
> No lines contain quotes. And the same file will sometimes be fully
> imported, and sometimes lose data. I'm thinking that under heavy loads,
> the database is discarding INSERTS.

I don't think that's very likely.

How are you checking that the data is there?  Do you check the whole
table by way of a big, fat, unconstrained SELECT, or do you extract some
rows one by one?  My guess would be that maybe some insertions are not
making it into an index that may be later used for getting the data
during the examination.

If there are no indexes involved (no primary key either) then this
theory can be discarded quickly.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] copy losing information

2006-07-26 Thread Reece Hart




On Wed, 2006-07-26 at 12:48 -0400, Silvela, Jaime (Exchange) wrote:

This is the first time I post to the list. I’ve done a brief search and didn’t find my issue treated already, so here it goes. Apologies if this has been reported before.


What PG version and environment?  How about sending the output of  'select version()'  ?

copy or \copy? What arguments? 

If you can release the data and get it to me (e.g., compressed email attachment, http, ftp), I volunteer to try a run on 8.1.4 on suse 10.0/x86_64.

-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0

Re: [GENERAL] copy losing information

2006-07-26 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Silvela, Jaime (Exchange) wrote:
>> No lines contain quotes. And the same file will sometimes be fully
>> imported, and sometimes lose data. I'm thinking that under heavy loads,
>> the database is discarding INSERTS.

> I don't think that's very likely.

Especially not since he says he's using COPY --- any sort of error would
be all-or-nothing.

Personally I'm wondering about individual rows getting dropped on the
client side.

regards, tom lane



[GENERAL] Table Inheritence...Just Columns?

2006-07-26 Thread Redefined Horizons

Does a child table just inherit the columns from a parent table, or
are other table objects like column constraints and triggers
inherited?

Thanks,

Scott Huey
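
Short answer for releases of this era: a child inherits the parent's
columns, plus NOT NULL and CHECK constraints; indexes, unique/primary
key/foreign key constraints, and triggers are NOT inherited and must be
created on each child separately. A quick illustration:

```sql
CREATE TABLE cities (
    name       text PRIMARY KEY,             -- PK is NOT inherited
    population int CHECK (population >= 0)   -- CHECK IS inherited
);

CREATE TABLE capitals (
    state char(2)
) INHERITS (cities);

-- capitals has name, population, and the CHECK constraint, but no
-- primary key: duplicate names are allowed in capitals unless you
-- add its own constraint.
```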



[GENERAL] What's special about 1916-10-01 02:25:20? Odd jump in internal timestamptz representation

2006-07-26 Thread Alistair Bayley

(forwarded from pgsql-interfaces because no response there; can
anybody tell me if I really have a bug, or am just a bit dim?)

Hello,

Below is a test C program, which fetches some timestamp literals and
prints their internal representation, which is the number of seconds
after 2000-01-01, stored as a double. I wrote this program on windows,
so the imports might look unfamiliar to unix users, but it shouldn't
take much effort to port, if you like. My server is 8.1 on Windows XP.

The output from this program is:

-2627158159.00
-2627156080.00
-2627156079.00
-2627156079.00

which corresponds to timestamps:
1916-10-01 02:25:20  with timezone
1916-10-01 02:25:20  sans timezone
1916-10-01 02:25:21  with timezone
1916-10-01 02:25:21  sans timezone

The first line of output puzzles me: why is '1916-10-01 02:25:20'
2627158159 seconds before 2000-01-01, while '1916-10-01 02:25:21' is
2627156079 before; a difference of 2080 seconds, or 34m:40s.

Is this correct? I don't think so, but there might be some subtlety of
timezone and date arithmetic which I've missed.

Thanks,
Alistair



#include <stdio.h>
#include <stdlib.h>
/* for ntohl/htonl
#include <sys/types.h>
#include <netinet/in.h>
*/
#include "libpq-fe.h"


static void exit_nicely(PGconn *conn)
{
   PQfinish(conn);
   exit(1);
}


void check_sql(PGconn *conn, PGresult *res, ExecStatusType expected)
{
   if (PQresultStatus(res) != expected)
   {
   fprintf(stderr, "SQL failed: %s", PQerrorMessage(conn));
   PQclear(res);
   exit_nicely(conn);
   }
}

void check_cmd(PGconn *conn, PGresult *res)
{
   check_sql(conn, res, PGRES_COMMAND_OK);
}

void check_qry(PGconn *conn, PGresult *res)
{
   check_sql(conn, res, PGRES_TUPLES_OK);
}

void revbytes2(int n, char *pfrom, char *pto)
{
   if (n == 0) return;
   *pto = *pfrom;
   revbytes2(--n, ++pfrom, --pto);
}

void revbytes(int n, void *pfrom, void *pto)
{
   revbytes2(n, (char*)pfrom, ((char*)pto)+n-1);
}


void printColOne(PGresult *res)
{
   double t, *tptr;
   tptr = (double *) PQgetvalue(res, 0, 0);
   revbytes(8, tptr, &t);
   /* t = ntohl(*tptr);  -- this doesn't work!? must be me... */
   printf("%f\n", t);
}


int main(int argc, char **argv)
{
   const char *conninfo;
   PGconn *conn;
   PGresult   *res;
   double t, *tptr;

   /*
* If the user supplies a parameter on the command line, use it as the
* conninfo string; otherwise default to setting dbname=postgres and using
* environment variables or defaults for all other connection parameters.
*/
   if (argc > 1)
   conninfo = argv[1];
   else
   conninfo = "user = postgres";

   /* Make a connection to the database */
   conn = PQconnectdb(conninfo);

   /* Check to see that the backend connection was successfully made */
   if (PQstatus(conn) != CONNECTION_OK)
   {
   fprintf(stderr, "Connection to database failed: %s",
PQerrorMessage(conn));
   exit_nicely(conn);
   }

    res = PQexecParams(conn,
        "select timestamp with time zone '1916-10-01 02:25:20'",
        0, NULL, NULL, NULL, NULL, 1);
   check_qry(conn, res);
   printColOne(res);
   PQclear(res);

    res = PQexecParams(conn,
        "select timestamp without time zone '1916-10-01 02:25:20'",
        0, NULL, NULL, NULL, NULL, 1);
   check_qry(conn, res);
   printColOne(res);
   PQclear(res);

    res = PQexecParams(conn,
        "select timestamp with time zone '1916-10-01 02:25:21'",
        0, NULL, NULL, NULL, NULL, 1);
   check_qry(conn, res);
   printColOne(res);
   PQclear(res);

    res = PQexecParams(conn,
        "select timestamp without time zone '1916-10-01 02:25:21'",
        0, NULL, NULL, NULL, NULL, 1);
   check_qry(conn, res);
   printColOne(res);
   PQclear(res);

   /* close the connection to the database and cleanup */
   PQfinish(conn);

   return 0;
}
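An aside on the commented-out ntohl call in the listing: ntohl swaps only 32 bits, but the binary timestamp here is an 8-byte double, so a full 64-bit byte reversal is needed. A sketch in plain C (swap_double is my name, not from the post):

```c
#include <stdint.h>
#include <string.h>

/* Reverse all eight bytes of a big-endian double read off the wire.
 * ntohl cannot do this: it operates on 32-bit quantities only. */
static double swap_double(const void *src)
{
    uint64_t v;
    double d;

    memcpy(&v, src, sizeof v);
    v = (v << 32) | (v >> 32);                   /* swap 32-bit halves */
    v = ((v & 0x0000FFFF0000FFFFULL) << 16) |
        ((v & 0xFFFF0000FFFF0000ULL) >> 16);     /* swap 16-bit shorts */
    v = ((v & 0x00FF00FF00FF00FFULL) << 8) |
        ((v & 0xFF00FF00FF00FF00ULL) >> 8);      /* swap adjacent bytes */
    memcpy(&d, &v, sizeof d);
    return d;
}
```

Like the revbytes pair above, this reverses unconditionally; on a big-endian host no swap would be needed at all.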



Re: [GENERAL] What's special about 1916-10-01 02:25:20? Odd jump in internal timestamptz representation

2006-07-26 Thread Tom Lane
"Alistair Bayley" <[EMAIL PROTECTED]> writes:
> The first line of output puzzles me: why is '1916-10-01 02:25:20'
> 2627158159 seconds before 2000-01-01, while '1916-10-01 02:25:21' is
> 2627156080 before; a difference of 2080 seconds, or 34m:40s.

What timezone are you testing in?

Perusing the zic database makes me think it might be Europe/Dublin,
because there's a DST rule with a related breakpoint:

Zone Europe/Dublin  -0:25:00 -  LMT 1880 Aug  2
-0:25:21 -  DMT 1916 May 21 2:00
-0:25:21 1:00   IST 1916 Oct  1 2:00s
 0:00   GB-Eire %s  1921 Dec  6 # independence
 0:00   GB-Eire GMT/IST 1940 Feb 25 2:00
 0:00   1:00IST 1946 Oct  6 2:00
 0:00   -   GMT 1947 Mar 16 2:00
 0:00   1:00IST 1947 Nov  2 2:00
 0:00   -   GMT 1948 Apr 18 2:00
 0:00   GB-Eire GMT/IST 1968 Oct 27
 1:00   -   IST 1971 Oct 31 2:00u
 0:00   GB-Eire GMT/IST 1996
 0:00   EU  GMT/IST

There's a whole raft of comments before that about where the zic people
got their info, so if you have doubts about this, take a look:
http://developer.postgresql.org/cvsweb.cgi/pgsql/src/timezone/data/europe

regards, tom lane
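The zic excerpt accounts for the observed jump arithmetically; a back-of-the-envelope check (plain C, assuming the Europe/Dublin guess is right and that 02:25:20 resolves with the pre-change offset while 02:25:21 resolves with the post-change one):

```c
/* Returns the expected gap, in seconds, between the internal
 * representations of 02:25:20 and 02:25:21 across the transition. */
static int dublin_1916_gap(void)
{
    /* At 1916-10-01 02:00s, Europe/Dublin dropped DMT plus DST
     * (-0:25:21 + 1:00 = +0:34:39 east of UTC) and moved to GMT. */
    int pre_offset  = -(25 * 60 + 21) + 3600;  /* seconds east of UTC before */
    int post_offset = 0;                       /* seconds east of UTC after  */
    int shift = pre_offset - post_offset;      /* 34m39s lost from local time */

    /* The two samples are one wall-clock second apart, so their internal
     * representations differ by shift + 1 seconds. */
    return shift + 1;
}
```

That comes to 2080 seconds, i.e. the 34m:40s difference Alistair observed.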



[GENERAL] Generating unique session ids

2006-07-26 Thread Antimon
Hi, I need to generate sessions for logged-in users on my website, which
uses pgsql. So I decided to write a function, which is this:

---
CREATE OR REPLACE FUNCTION session_createsession(int4, "varchar")
RETURNS text AS
$BODY$
DECLARE
sid TEXT;
BEGIN
sid := md5(random());
INSERT INTO sessions (id, accountid, ipaddress) VALUES (sid, $1, $2);
return sid;
EXCEPTION
WHEN unique_violation THEN
return session_createsession($1, $2);
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
---

As the id field is the primary key, it should raise a unique violation
if duplicate ids are created; that would be rare, but I wanted to handle
it anyway. So I decided to check it by changing "sid := md5(random());" to
"sid := extract(minute from now());". When I run it, it returns the minute
as the session key and inserts an entry for it, but if I call it again
within the same minute, it never finishes. I expected it to return the
minute once the system clock's minute changed, but it runs forever.

Am I doing something wrong? I mean, there might be some implementation
detail that has now() return the same value throughout a transaction or
something, but I could not be sure. If there is something like that, is
the function safe for creating session ids? Because if it goes into an
infinite loop like it does with the minute thing, it'd drain all system
resources.

Thanks.




Re: [GENERAL] Generating unique session ids

2006-07-26 Thread Tom Lane
"Antimon" <[EMAIL PROTECTED]> writes:
> As the id field is primary key, it should generate a unique violation
> if duplicate ids created, might be seen rarely but wanted to solve it
> anyway.

Why don't you just use a serial generator?

> So i decided to check it by changing "sid := md5(random());" to
> "sid := extract(minute from now());" When i run it, returns the minute
> as session key and inserts an entryy for it, but if i call it again in
> the same minute, it never ends execution. I expected it to return the
> minute when system clock minute changes but it runs forever.

Yup, see
http://www.postgresql.org/docs/8.1/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT

regards, tom lane
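For what it's worth, the retry-on-collision pattern itself is sound once the candidate actually changes between attempts. A toy model in plain C (the names and table size are illustrative, not PostgreSQL API):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Toy model of the retry loop. It terminates because each attempt draws a
 * fresh random candidate -- unlike now(), which PostgreSQL freezes at
 * transaction start, so the minute-based variant can never escape. */
#define TABLE_SIZE 4096

static bool taken[TABLE_SIZE];   /* stand-in for the sessions primary key */

static int create_session(void)
{
    for (;;) {
        int sid = rand() % TABLE_SIZE;   /* fresh candidate every attempt */
        if (!taken[sid]) {               /* the "unique_violation" check  */
            taken[sid] = true;
            return sid;
        }
    }
}
```

Each call marks exactly one previously free slot, so repeated calls always make progress as long as the key space is not exhausted.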



Re: [GENERAL] Table Inheritence...Just Columns?

2006-07-26 Thread Shoaib Mir
From the PostgreSQL 8.1 documentation
(http://www.postgresql.org/docs/8.1/static/ddl-inherit.html):

"Check constraints can be defined on tables within an inheritance
hierarchy. All check constraints on a parent table are automatically
inherited by all of its children. Other types of constraints are not
inherited, however."

Thanks,
-- Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 7/27/06, Redefined Horizons <[EMAIL PROTECTED]> wrote:

> Does a child table just inherit the columns from a parent table, or
> are other table objects like column constraints and triggers
> inherited?
>
> Thanks,
> Scott Huey



[GENERAL] Database corruption with Postgre 7.4.2 on FreeBSD 6.1?

2006-07-26 Thread aurora

Hello,

We are stress testing our application. It adds and deletes a lot of
rows. Within 24 hours we ran into some sort of database corruption
problem. We got this error when trying to insert into the users table.



ERROR XX001: invalid page header in block 2552 of relation  
"users_name_lower_idx"



The schema looks like this


create table users (
userid integer  not null,
name text unique not null,
...
PRIMARY KEY (userid)
);

create index users_name_lower_idx on users (lower(name));


Then the box froze up. We suspect it might have run out of IO bufs.

We are using Postgres 7.4.2 running on FreeBSD 6.1 on a Dell 2850.

Thanks for any help,

wy




[GENERAL] Understanding VARHDRSZ

2006-07-26 Thread Redefined Horizons

Page 524 of the PostgreSQL 8.1 Manual, PDF Edition, has a code snippet
that makes use of the VARHDRSZ macro. I'm trying to understand the
purpose of this macro.

Here is the code from the manual:

#include "postgres.h"
...
char buffer[40]; /* our source data */
...
text *destination = (text *) palloc(VARHDRSZ + 40);
destination->length = VARHDRSZ + 40;
memcpy(destination->data, buffer, 40);
...

The manual also says, just below the code snippet:

"VARHDRSZ is the same as sizeof(int4), but it's considered good style
to use the macro VARHDRSZ to refer to the size of the overhead for a
variable-length type."

Does this mean that we are simply allocating 44 bytes for the data
type with these uses of the VARHDRSZ macro?

Just before the code snippet, the manual says this:

"All variable-length types must begin with a length field of exactly 4
bytes, and all data to be stored within that type must be located in
the memory immediately following that length field."

If this is the case, why even have the VARHDRSZ macro? Why not just
add 4 bytes to the length of the data when you use palloc?

I know I could just take this on faith, but I'm trying to understand
the underlying concepts of what is being done in this code. Is the
macro used because of different operating system implementations of
the C programming language, or is the macro used to preserve backward
compatibility in case the "length" indicator is increased to more than
4 bytes in the future?

Thanks,

Scott Huey



Re: [GENERAL] Database corruption with Postgre 7.4.2 on FreeBSD 6.1?

2006-07-26 Thread Shoaib Mir
Try doing a REINDEX and see if you can recover all the data blocks, as it
appears to me you have some data blocks messed up. If possible, take a
backup of your database as well.

Thanks,
-- Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 7/27/06, aurora <[EMAIL PROTECTED]> wrote:

> ERROR XX001: invalid page header in block 2552 of relation
> "users_name_lower_idx"


[GENERAL] CREATE DATABASE question.

2006-07-26 Thread Karen Hill
I have an sql file that doesn't work properly when I do: psql <
mysql.sql. I cannot get it to connect to the database.

Here what I'd like it to do:

CREATE DATABASE testdb;
\c testdb;
CREATE TABLE tableTest(var varchar);

But I get an error on the second line about an invalid character.  Is
it even possible to connect to a different db when giving an sql file
to psql to process?

regards,




Re: [GENERAL] Database corruption with Postgre 7.4.2 on FreeBSD 6.1?

2006-07-26 Thread aurora
From your experience, do you expect the database to run into this from
time to time in a way that requires a DBA's intervention? If so, it would
become a problem for our customers, because our product is a standalone
system. We don't intend to expose the Postgres database underneath.


wy

Try doing a REINDEX and see if you can recover all data blocks as it  
appears
to me you have some data blocks messed up. If possible try taking the  
backup

for your database as well.

Thanks,







Re: [GENERAL] CREATE DATABASE question.

2006-07-26 Thread Eric Faulhaber
Karen Hill wrote:
> I have an sql file that doesn' t work properly when I do: psql <
> mysql.sql .  I cannot get it to connect to the database.
> 
> Here what I'd like it to do:
> 
> CREATE DATABASE testdb;
> \c testdb;
> CREATE TABLE tableTest(var varchar);
> 
> But I get an error on the second line about an invalid character.  Is
> it even possible to connect to a different db when giving an sql file
> to psql to process?

Sure, just lose the trailing semi-colon from your connect command, as in:

CREATE DATABASE testdb;
\c testdb
CREATE TABLE tableTest(var varchar);

This lets psql interpret your connect request as a psql command instead
of as an SQL statement.

Regards,
Eric Faulhaber




Re: [GENERAL] Database corruption with Postgre 7.4.2 on FreeBSD 6.1?

2006-07-26 Thread Aaron Glenn

On 7/26/06, aurora <[EMAIL PROTECTED]> wrote:

From your experience, do you expect the database to run into this from
time to time in a way that requires a DBA's intervention? If so, it would
become a problem for our customers, because our product is a standalone
system. We don't intend to expose the Postgres database underneath.

wy


Is there a particular reason you're using 7.4.2? At the very least, I
would be using 7.4.13.

regards,
aaron.glenn



[GENERAL] Performance Postgresql en HP-UX 11.x

2006-07-26 Thread ing_enriquebarrios


We have PostgreSQL installed at our company on a 900 MHz PC running Linux
with 512 MB of RAM, and another PostgreSQL on an HP-UX box with 1 GB of
RAM and two 470 MHz processors running Unix.

I ran a query on both machines, and the PC resolved it up to 3 times
faster than the server. What could the problem be? The configurations are
fine. Could it be that CISC processors resolve queries faster than RISC
ones? Or is it some compilation issue with PostgreSQL, since PostgreSQL
compiled natively with gcc (which is native to Linux) performs better on
Linux than on Unix?

regards.


Re: [GENERAL] Performance Postgresql en HP-UX 11.x

2006-07-26 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:

> We have PostgreSQL installed at our company on a 900 MHz PC running
> Linux with 512 MB of RAM, and another PostgreSQL on an HP-UX box with
> 1 GB of RAM and two 470 MHz processors running Unix.
> I ran a query on both machines, and the PC resolved it up to 3 times
> faster than the server.

The first thing you should do is check whether the execution plans are
similar.  Try running EXPLAIN ANALYZE on the query on both the PC and
the HP.  If you don't understand the output, paste it here and we'll
help you interpret it.

PS -- please don't write in HTML.  I read this mail because I believe
it's the first one you've sent to the list, but if the next one arrives
the same way, I'll simply delete it unread.

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



[GENERAL] Problem with table slowing down - Help with EXPLAIN reqd

2006-07-26 Thread Peter Watling

I have a table with only 30-odd records... I use one field on each
record as a sort of status, as a means of handshaking between a number
of clients... It works OK in theory... however, over time (just days)
it gets progressively slower... it's as if PostgreSQL is keeping a list
of all updates... I tried restarting postgres in case it was some
transaction thing, but it doesn't seem to help.

Here are the 'explain' results... I just made the pwdelete_temp table by
doing a create pwdelete_temp as select * from dataprocessors... so the
new table runs flat out...

I have also tried doing a vacuum full analyse and reindex with no
change in performance. A dump to a text file and reload works, but
that is a bit too savage for something to have to do frequently.

From what I can see, it looks like pg THINKS there are 284000 records to
scan through. How can I tell it to flush out the history of changes?


Any help gratefully received.

Peter Watling
New Zealand




transMET-MGU=# explain select * from pwdelete_temppaths;
 QUERY PLAN
---
Seq Scan on pwdelete_temppaths  (cost=0.00..11.40 rows=140 width=515)
(1 row)

transMET-MGU=# explain select * from dataprocessor_path;
   QUERY PLAN
---
Seq Scan on dataprocessor_path  (cost=0.00..6900.17 rows=284617 width=92)
(1 row)



Re: [GENERAL] Problem with table slowing down - Help with EXPLAIN reqd

2006-07-26 Thread Q

On 27/07/2006, at 2:00 PM, Peter Watling wrote:


I have a table with only 30-odd records... I use one field on each
record as a sort of status, as a means of handshaking between a number
of clients... It works OK in theory... however, over time (just days)
it gets progressively slower... it's as if PostgreSQL is keeping a list
of all updates... I tried restarting postgres in case it was some
transaction thing, but it doesn't seem to help.


When you update a row, postgres will in effect create an entirely new
row that reflects the changes, and the old row is marked for deletion;
however, it still remains in the table until it is garbage collected
by running vacuum. If you never run vacuum, you end up with a row
version for every single update you have ever made.


You need to run vacuum regularly to ensure that these discarded rows  
are recycled.  If you are running 8.0+, turning on autovacuum in your  
postgresql.conf is probably the easiest way to ensure this is done  
frequently.
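A toy model of the bloat described above (plain C, purely illustrative numbers):

```c
/* Toy model of MVCC update bloat: every UPDATE leaves behind one dead
 * row version that a sequential scan must still step over until VACUUM
 * reclaims it. */
enum { LIVE_ROWS = 30 };   /* as in the report: a 30-row status table */

static long scanned_tuples(long updates_per_row, int vacuumed)
{
    long dead = vacuumed ? 0 : (long) LIVE_ROWS * updates_per_row;
    return LIVE_ROWS + dead;   /* tuples a seq scan has to visit */
}
```

With 10,000 updates per row and no vacuum, a scan over the "30-row" table visits 300,030 tuples; after vacuum it is back to 30, which matches the planner's inflated row estimate in the EXPLAIN output below.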



here is the 'explain' results.. I just made the pwdelete_temp table by
doing a create pwdelete_temp as select * from dataprocessors.. so that
new file runs flat out...

I have also tried doing a vacuum full analyse and reindex with no
change in performance.. I dump to a text file and reload works, but
that is a bit tooo savage for something to have to do frequently.

What what I can see, it looks like pg THINKS tere is 284000 records to
scan through.. How can I tell it to flush out the history of changes?


You need to run 'ANALYZE <tablename>' to update the table statistics.
Enabling autovacuum will take care of this for you also.



transMET-MGU=# explain select * from pwdelete_temppaths;
 QUERY PLAN
---------------------------------------------------------------------

Seq Scan on pwdelete_temppaths  (cost=0.00..11.40 rows=140 width=515)
(1 row)

transMET-MGU=# explain select * from dataprocessor_path;
   QUERY PLAN
---------------------------------------------------------------------
Seq Scan on dataprocessor_path  (cost=0.00..6900.17 rows=284617 width=92)

(1 row)



Please try running 'analyze' on the tables first and then rerun these
queries as 'explain analyze' instead, so you can see the difference
between what the planner expects and what it actually gets.


--
Seeya...Q

   -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

  _  /  Quinton Dolan - [EMAIL PROTECTED]
  __  __/  /   /   __/   /  /
 /__  /   _//  /Gold Coast, QLD, Australia
  __/  __/ __/ /   /   -  /Ph: +61 419 729 806
___  /
_\






Re: [GENERAL] Understanding VARHDRSZ

2006-07-26 Thread Tom Lane
"Redefined Horizons" <[EMAIL PROTECTED]> writes:
> Page 524 of the PostgreSQL 8.1 Manual, PDF Edition, has a code snippet
> that makes use of the VARHDRSZ macro. I'm trying to understand the
> purpose of this macro.

It's just a symbol for the size of the length word at the start of the
stored value.

> If this is the case, why even have the VARHDRSZ macro?

Someday, we might want to change the size of the length word (eg,
move to int8, when no one uses mere 32-bit machines anymore).  That
will be a painful undertaking in any case, but everyplace that codes
the overhead space as "VARHDRSZ" rather than "4" will be one less
place to fix.

regards, tom lane
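The palloc pattern from the manual can be sketched in plain C (text_like and make_text are illustrative stand-ins; the real server uses palloc and its own text struct):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define VARHDRSZ ((int) sizeof(int32_t))   /* stand-in for the real macro */

typedef struct {
    int32_t length;   /* total size in bytes, header included */
    char    data[];   /* payload starts right after the length word */
} text_like;

/* Allocate a variable-length value: the 4-byte header plus n payload bytes. */
static text_like *make_text(const char *src, int n)
{
    text_like *t = malloc(VARHDRSZ + n);
    t->length = VARHDRSZ + n;
    memcpy(t->data, src, n);
    return t;
}
```

Writing VARHDRSZ instead of a literal 4 in both the allocation and the length assignment is exactly what would make the hypothetical move to a wider length word a one-line change here.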
