Re: [GENERAL] Uhm, so, yeah, speaking of /.

2007-05-30 Thread Brandon Aiken
Google.  And, yes, Google uses a modified MySQL for its pigeons.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joshua D. Drake
Sent: Wednesday, May 30, 2007 2:41 PM
To: Jeff Davis
Cc: Scott Ribe; PostgreSQL general
Subject: Re: [GENERAL] Uhm, so, yeah, speaking of /.

Jeff Davis wrote:
> On Wed, 2007-05-30 at 12:18 -0600, Scott Ribe wrote:
>>> I thought it had to do with letting a form sit around too long and
>>> then /. timing out the state.
>>>
>>> That's probably not good anyway: it should at least give you a real
>>> error message. However, they might not consider that a bug.
>> I didn't let the form sit around at all--didn't think to mention that
>> before. It may well not be related to MySQL at all, the point is simply
>> that although /. is well-known, gets a lot of hits, and works well
>> enough for its intended purpose, it is buggy and is NOT an example of
>> what would be acceptable reliability for most "mission critical"
>> applications.
>>
> 
> I was agreeing with you. 
> 
> I think that's what the "invalid form key" error is supposed to mean,
> but it probably happens for all kinds of other cases, too (which is bad
> and causes confusion).
> 
> I agree that /. is not a great example of stability or correctness.

Interesting statement. Question: What would be a great example of 
stability or correctness?

Joshua D. Drake



-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
  http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly





Re: [GENERAL] server resetting

2006-09-18 Thread Brandon Aiken
Maybe I'm restating the obvious, but it looks to me like the procedural
trigger from the SQL query "INSERT INTO logs
(seq,level,event_code,event_date,event_time,city,province,user_id,est_dsp_date,est_dsp_time,country,edilate,carr_code,notes,trac_notes,order_num)
VALUES ('2','6','TAS','09/14/06','19:"... is the culprit, probably 3-4
IF (or other conditional) statements in.  Check this trigger to see if
it handles NULLs correctly.


Looking at the change logs
(http://www.postgresql.org/docs/7.4/interactive/release.html) it looks
like there were significant fixes in 7.4.8.  It's possible that this is
a known bug that has already been fixed.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Geoffrey
Sent: Monday, September 18, 2006 10:06 AM
To: PostgreSQL List
Subject: [GENERAL] server resetting 

Postgresql 7.4.7 (yes, I've been telling them we need to upgrade to the 
latest 7.4)
Red Hat Enterprise Linux ES release 3

We are having problems with the postgresql server resetting and dropping
all user connections.  There is a core file generated and I've attached 
a backtrace.  I'm about to dig into the source to see what I can find, 
but if anyone can put their finger on the problem, I would appreciate 
it.  I do realize that there is a call to exec_stmt() which appears to 
have a null value being passed, which I suspect is the issue.  Why a 
null is being passed is what I plan to look into.

Thanks for any info, here's the backtrace:

Using host libthread_db library "/lib/tls/libthread_db.so.1".
Core was generated by `postgres: bwoods exp [local] INSERT   '.
Program terminated with signal 11, Segmentation fault.
#0  exec_stmt (estate=0xfeff8a90, stmt=0x0) at pl_exec.c:928
 in pl_exec.c
#0  exec_stmt (estate=0xfeff8a90, stmt=0x0) at pl_exec.c:928
#1  0x0083f005 in exec_stmts (estate=0xfeff8a90, stmts=0x90fa9e0)
 at pl_exec.c:903
#2  0x0083f4f2 in exec_stmt_if (estate=0xfeff8a90, stmt=0x90fab78)
 at pl_exec.c:1139
#3  0x0083f0ca in exec_stmt (estate=0xfeff8a90, stmt=0x90fab78)
 at pl_exec.c:947
#4  0x0083f005 in exec_stmts (estate=0xfeff8a90, stmts=0x90fab90)
 at pl_exec.c:903
#5  0x0083f4f2 in exec_stmt_if (estate=0xfeff8a90, stmt=0x90fad20)
 at pl_exec.c:1139
#6  0x0083f0ca in exec_stmt (estate=0xfeff8a90, stmt=0x90fad20)
 at pl_exec.c:947
#7  0x0083f005 in exec_stmts (estate=0xfeff8a90, stmts=0x9133e60)
 at pl_exec.c:903
#8  0x0083f4f2 in exec_stmt_if (estate=0xfeff8a90, stmt=0x90d97b8)
 at pl_exec.c:1139
#9  0x0083f0ca in exec_stmt (estate=0xfeff8a90, stmt=0x90d97b8)
 at pl_exec.c:947
#10 0x0083f005 in exec_stmts (estate=0xfeff8a90, stmts=0x9118408)
 at pl_exec.c:903
#11 0x0083ee15 in exec_stmt_block (estate=0xfeff8a90, block=0x90d97e8)
 at pl_exec.c:859
#12 0x0083e77a in plpgsql_exec_trigger (func=0x9149ae0,
trigdata=0xfeff8ca0)
 at pl_exec.c:645
#13 0x0083b053 in plpgsql_call_handler (fcinfo=0xfeff8b50) at 
pl_handler.c:121
#14 0x080f1c8e in ExecCallTriggerFunc (trigdata=0xfeff8ca0,
finfo=0x935e260,
 per_tuple_context=0x0) at trigger.c:1150
#15 0x080f2be7 in DeferredTriggerExecute (event=0x92af050, itemno=0, 
rel=0x8,
 trigdesc=0x935daf0, finfo=0xfeff8a90, per_tuple_context=0x0)
 at trigger.c:1859
#16 0x080f2fee in deferredTriggerInvokeEvents (immediate_only=1 '\001')
 at trigger.c:2000
#17 0x080f314f in DeferredTriggerEndQuery () at trigger.c:2135
#18 0x08178ae8 in finish_xact_command () at postgres.c:1749
#19 0x08177816 in exec_simple_query (
 query_string=0x8fe2438 "INSERT INTO logs 
(seq,level,event_code,event_date,event_time,city,province,user_id,est_ds
p_date,est_dsp_time,country,edilate,carr_code,notes,trac_notes,order_num
) 
VALUES ('2','6','TAS','09/14/06','19:"...)
 at postgres.c:905
#20 0x08179f09 in PostgresMain (argc=4, argv=0x8f94b48,
 username=0x8f94ab8 "bwoods") at postgres.c:2871
#21 0x08153c90 in BackendFork (port=0x8fa6af0) at postmaster.c:2564
#22 0x08153683 in BackendStartup (port=0x8fa6af0) at postmaster.c:2207
#23 0x08151be8 in ServerLoop () at postmaster.c:1119
#24 0x081512ae in PostmasterMain (argc=5, argv=0x8f92688) at 
postmaster.c:897
#25 0x08121163 in main (argc=5, argv=0xfeff9e44) at main.c:214



-- 
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
  - Benjamin Franklin



Re: [GENERAL] Restore data from old installation

2006-09-18 Thread Brandon Aiken
Are you trying to connect to a remote server?  By default, PostgreSQL
only accepts connections from localhost or 127.0.0.1.  You will have to
change the listen_addresses setting in postgresql.conf if you wish to
connect from a remote node.
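A minimal sketch of the two settings involved (the subnet address is a
placeholder; adjust it for your own network, and remember the server
needs a restart for listen_addresses to take effect):

```
# postgresql.conf -- listen on all interfaces instead of only localhost
listen_addresses = '*'

# pg_hba.conf -- additionally allow clients on the remote subnet
# TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
host    all       all   192.168.1.0/24   md5
```

Note that changing listen_addresses alone is not enough; pg_hba.conf must
also permit the remote host to authenticate.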

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alberto Molteni
Sent: Monday, September 18, 2006 12:09 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Restore data from old installation

Hi everybody!

When I use pgAdmin III I cannot connect to the database... an error is
presented: Connection Refused (0x274D/10061).

I have not found a solution to this problem, so I reinstalled the
PostgreSQL 8.0 package. I saved the directory of the old installation.

Now I need to get at the data inside a database which was stored in that
installation... is there a way through the /data directory? I tried
something but I did not have success...

Please help me. Thanks

Albert


Re: [GENERAL] vista

2006-09-19 Thread Brandon Aiken
So...

If you're not a PostgreSQL Win32 port dev, and *don't know* what they're
up to as far as Vista, why respond to the Q?  Or why respond "fix it
yourself" instead of "ask this guy" or "nobody here will know yet" or
"post your query on -ports or -hackers".  

Otherwise it's as useful as saying "http://justgoogleit.com/" or "check
the man pages".  While technically a correct response, it's not a very
useful one and certainly not what the poster was asking, yes?  It's like
a "SELECT * ..." statement returning a single row with an asterisk in
it.  Gee, thanks for the tautology.

Heck, even "check CVS change logs" would be more useful.  Presumably *a*
person on the dev team will handle it eventually.  And it's not
unreasonable to expect that somebody, somewhere has asked the same
question to the dev team and that they *might* know something about the
state of PG on that platform.

Frankly, I too couldn't care less about PG on Vista.  Longhorn isn't due
until Vista SP1, so PG support has a long time to go before it's a real
concern.  But then I didn't try to answer the question.


--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Chris Browne
Sent: Tuesday, September 19, 2006 12:16 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] vista

[EMAIL PROTECTED] (Naz Gassiep) writes:
>> It's the folks who think that non-Windows-using developers should
>> care about Vista that bug me.  This is open-source code, people.
>> Scratch your own itch.
>>
> The "scratch your own itch" line can only be pushed so far, if it is
> being said by a developer who works on a project that desires to be
> taken seriously by professionals in industry. For minor features, yes,
> perhaps it could be argued that the core team could ignore certain
> issues, and just wait for a patch. For something like Vista
> compatibility, if you want to be taken seriously by anyone who uses
> Windows (hands up anyone who knows a Windows user), "scratch your own
> itch" is not really going to cut it, IMHO. I'm used to getting that
> line when talking to 2 developer obscure projects that have a userbase
> of a half a dozen, but for a project like PostgreSQL, the "they tell
> you to do it yourself" brush is one we do NOT want to get tarred with.
>
> If we don't have the resources to cope with a Vista port immediately
> then so be it. If it's low priority, so be it. However, lets not
> appear to deride as unnecessary that which we cannot immediately
> provide a solution to. That's small time project mentality.

Well, the same issue has come up with the subproject that I work on,
namely Slony-I, and the nature of things seems much the same.

*I* don't use Windows, haven't got any relevant build environment,
and, organizationally, really couldn't care less if PostgreSQL or
Slony-I runs on Windows or not, as Windows isn't a relevant platform.
Asking me about Windows support in *any* context is pretty much
useless; as far as I'm concerned, Windows support requires finding
someone who has that particular itch.

It turns out that there are people with a Windows itch, and I haven't
turned away patches to provide Windows support due to its irrelevance
to me.  No, I'm pleased enough to see that come in.

But if you present Windows-related issues to me, I see nothing
improper in saying "scratch your own itch."  I'm *not* the right one
to help, and the community is large enough that I don't see any
problem with that.
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://linuxfinances.info/info/advocacy.html
Rules of  the Evil Overlord #196.  "I will hire an  expert marksman to
stand by the entrance to my  fortress. His job will be to shoot anyone
who rides up to challenge me."  <http://www.eviloverlord.com/>



Re: [GENERAL] Load a csv file into a pgsql table

2006-09-19 Thread Brandon Aiken
Define 'quick'.

You could write a script that would transform a .csv file into an INSERT
statement and save it to an .sql file.

Or I suppose you could do silly ODBC stuff with MS Access.
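A quick sketch of such a script (the table name "parts" and the sample
data are made up; real code should prefer parameterized queries or the
driver's bulk-load support over string building):

```python
import csv
import io

def csv_to_inserts(csv_text, table):
    """Turn CSV text (header row first) into a list of INSERT statements."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    cols = ", ".join(header)
    stmts = []
    for row in reader:
        # Naive quoting: double any embedded single quotes.  Real code
        # should use parameterized queries instead of string building.
        vals = ", ".join("'" + v.replace("'", "''") + "'" for v in row)
        stmts.append("INSERT INTO %s (%s) VALUES (%s);" % (table, cols, vals))
    return stmts

sample = "name,qty\nwidget,3\no'ring,7\n"
for stmt in csv_to_inserts(sample, "parts"):
    print(stmt)
```

Save the printed statements to an .sql file and feed it to psql.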

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Emi Lu
Sent: Tuesday, September 19, 2006 2:15 PM
To: PgSQL General
Subject: [GENERAL] Load a csv file into a pgsql table

Greetings,


*Except* copy command, are there other quick ways to load data from a 
csv file into a pgsql table please?


Thanks a lot!






[GENERAL] Character fields and length

2006-09-21 Thread Brandon Aiken
I know that in PostgreSQL, there is no performance difference between
character and character varying.

Is there any penalty in either performance or disk space for not
specifying a length on char and varchar fields?  Is it really just there
as a basic check constraint for the logical model?

--
Brandon Aiken
CS/IT Systems Engineer


Re: [GENERAL] serial column

2006-09-25 Thread Brandon Aiken
I would tend to agree with Tom.

A table is by definition an unordered set of records.  Forcing keys to
have meaning of this type implies that there is a relationship between
each record in the set.  That's information you should be storing as
part of the record.  If order is important, design the database so that
it knows that order relationship exists.

An ordered list is just a hierarchical database wherein every record has
exactly one parent (or none if it's the root) and exactly one child (or
none if it's the end leaf), but the relational model does a rather poor
job of handling hierarchical relationships.  You might consider the
doubly linked list approach.  That is, each record knows the item before
it and the item after it, like so:

CREATE TABLE "mainTable" (
    id serial PRIMARY KEY,
    foo text,
    bar integer,
    zen numeric
);

CREATE TABLE "mainTableRelationships" (
    "parentID" integer,
    "childID" integer,
    CONSTRAINT "mainTableRelationships_pkey" PRIMARY KEY
        ("parentID", "childID"),
    CONSTRAINT "parentID_key" UNIQUE ("parentID"),
    CONSTRAINT "childID_key" UNIQUE ("childID"),
    CONSTRAINT "mainTable_parentID_fkey" FOREIGN KEY ("parentID")
        REFERENCES "mainTable" (id),
    CONSTRAINT "mainTable_childID_fkey" FOREIGN KEY ("childID")
        REFERENCES "mainTable" (id)
);

Of course, there's really little difference between doing things this
way and ordering by the SERIAL field and numbering them appropriately on
output, except that this above way is hideously more complex.

Another option would be to create a temporary table ordered correctly,
truncate the existing table, delete the sequence (or change the default
on the primary key), copy the data back, and then re-create the sequence
(or change default back to nextval) and then set nextval to MAX()+1.
This is rather ugly, however, since you're still forcing the database to
do relationships it doesn't know about, so you technically violate first
normal form by having a multi-valued field (it identifies uniqueness and
order).

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Sunday, September 24, 2006 7:31 PM
To: Bob Pawley
Cc: Ragnar; Postgresql
Subject: Re: [GENERAL] serial column 

Bob Pawley <[EMAIL PROTECTED]> writes:
> I am using the numbers to identify devices.
> If a device is deleted or replaced with another type of device I want
> the numbering to still be sequential.

It sounds to me like you oughtn't be storing these numbers in the
database at all.  You just want to attach them at display time --- they
are certainly utterly meaningless as keys if they can change at any
moment.

regards, tom lane



Re: [GENERAL] Timestamp with timezone query

2006-09-25 Thread Brandon Aiken
Use the AT TIME ZONE construct:

http://www.postgresql.org/docs/8.1/interactive/functions-datetime.html#FUNCTIONS-DATETIME-ZONECONVERT

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Harry Hehl
Sent: Monday, September 25, 2006 11:06 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Timestamp with timezone query

Hello,

I have a table with TIMESTAMP WITH TIMEZONE column. I would like to
query for a timestamp using a different timezone.

For example if a column contains '2006-02-11 00:30:00-05' 
select * from table where column='2006-02-10 19:30:00+00' would return
the column containing '2006-02-11 00:30:00-05'.

From section 8.5.1.3 "To ensure that a literal is treated as timestamp
with time zone, give it the correct explicit type: TIMESTAMP WITH TIME
ZONE '2004-10-19 10:23:54+02'"

So I tried:

select * from table where column = TIMESTAMP WITH TIME ZONE '2006-02-10
19:30:00+00' 

But it did not return the desired result.

I am getting the timestamp as UTC and want to use it to build a query,
but I don't want Postgres to convert the timestamp. Is there any way to
do this?

Thanks






Re: [GENERAL] Timestamp with timezone query

2006-09-25 Thread Brandon Aiken
Hm?  Works for me:

postgres=# select time with time zone '00:30:00-05' at time zone 'utc';
  timezone
-------------
 05:30:00+00
(1 row)

What are you trying to do with the query?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Harry Hehl
Sent: Monday, September 25, 2006 5:34 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Timestamp with timezone query

Thanks, that does it. 
select * from table where column = '2006-02-10 19:30:00' AT TIME ZONE
'utc';

I also have a TIME WITH TIMEZONE column that I have to do the same thing
with but AT TIME ZONE can't be used directly. I tried several approaches
but I either get incorrect results or syntax errors. Is there a way to
do the same thing with TIME columns?

  

-Original Message-
From: Brandon Aiken [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 25, 2006 11:39 AM
To: Harry Hehl; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Timestamp with timezone query

Use the AT TIME ZONE construct:

http://www.postgresql.org/docs/8.1/interactive/functions-datetime.html#FUNCTIONS-DATETIME-ZONECONVERT

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Harry Hehl
Sent: Monday, September 25, 2006 11:06 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Timestamp with timezone query

Hello,

I have a table with TIMESTAMP WITH TIMEZONE column. I would like to
query for a timestamp using a different timezone.

For example if a column contains '2006-02-11 00:30:00-05' 
select * from table where column='2006-02-10 19:30:00+00' would return
the column containing '2006-02-11 00:30:00-05'.

From section 8.5.1.3 "To ensure that a literal is treated as timestamp
with time zone, give it the correct explicit type: TIMESTAMP WITH TIME
ZONE '2004-10-19 10:23:54+02'"

So I tried:

select * from table where column = TIMESTAMP WITH TIME ZONE '2006-02-10
19:30:00+00' 

But it did not returned the desired result.

I am getting the timestamp as UTC and want to use to build a query, but
I don't want Postgres convert the timestamp. Is there anyway to do this?

Thanks






Re: [GENERAL] Timestamp with timezone query

2006-09-25 Thread Brandon Aiken
I'm not at my dev station to check, but what about:
SELECT myTime AT TIME ZONE 'UTC' FROM theTable;

Then try:
SELECT myTime AT TIME ZONE 'UTC' FROM theTable WHERE myTime = '19:30:00-00';

Or:
SELECT myTime AT TIME ZONE 'UTC' FROM theTable WHERE myTime = TIME WITH TIME ZONE '19:30:00-00';

If that doesn't work you might try extracting epoch to convert the time to an integer:
SELECT myDate, myTime FROM theTable where EXTRACT(EPOCH FROM myTime) = EXTRACT(EPOCH FROM TIME WITH TIME ZONE '19:30:00-00');


Brandon Aiken

-Original Message-
From: Harry Hehl [mailto:[EMAIL PROTECTED]]
Sent: Mon 9/25/2006 9:21 PM
To: Brandon Aiken
Subject: RE: [GENERAL] Timestamp with timezone query

Hi Brandon,

>>postgres=# select time with time zone '00:30:00-05' at time zone 'utc';
This is not quite when I am doing. The time I get is already in UTC.

This is what I have...

date   | timestamp(6) with time zone |
time   | time(6) with time zone  |


select date,time  from test where date = '2000-02-10 19:30:00' at time zone 'utc';
  date  |    time
+-
 2000-02-11 00:30:00-05 | 00:30:00-05


I get date and time from a remote client in UTC. In the above case (which works) '2000-02-10 19:30:00' is UTC, so the query returns the desired result.  I have to do the same thing with time.

So I tried...

select date,time  from test where time  = '19:30:00' at time zone 'utc';
ERROR:  invalid input syntax for type timestamp with time zone: "19:30:00"


select date,time  from test where time  = time with time zone '19:30:00' at time zone 'utc';
select date,time  from test where time  = time with time zone '19:30:00-00' at time zone 'utc';

Both return...
 date | time
--+--
(0 rows)


This gets close (I think)
select  cast( ('2000-01-01 19:30:00' at time zone 'utc') as time );
 timezone
--
 00:30:00

but when used in a table query...

select  date,time from test where time = cast( ('2000-01-01 19:30:00' at time zone 'utc') as time );
 date | time
--+--
(0 rows)


Any ideas would be much appreciated.

Thanks
Harry





-Original Message-
From: Brandon Aiken [mailto:[EMAIL PROTECTED]]
Sent: Mon 9/25/2006 5:42 PM
To: Harry Hehl
Subject: RE: [GENERAL] Timestamp with timezone query

Hm?  Works for me:

postgres=# select time with time zone '00:30:00-05' at time zone 'utc';
  timezone
-
 05:30:00+00
(1 row)

What are you trying to do with the query?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Harry Hehl
Sent: Monday, September 25, 2006 5:34 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Timestamp with timezone query

Thanks, that does it.
select * from table where column = '2006-02-10 19:30:00' AT TIME ZONE
'utc';

I also have a TIME WITH TIMEZONE column that I have to do the same thing
with but AT TIME ZONE can't be used directly. I tried several approaches
but I either get incorrect results or syntax errors. Is there a way to
do the same thing with TIME columns?

 

-Original Message-
From: Brandon Aiken [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 25, 2006 11:39 AM
To: Harry Hehl; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Timestamp with timezone query

Use the AT TIME ZONE construct:

http://www.postgresql.org/docs/8.1/interactive/functions-datetime.html#FUNCTIONS-DATETIME-ZONECONVERT

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Harry Hehl
Sent: Monday, September 25, 2006 11:06 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Timestamp with timezone query

Hello,

I have a table with TIMESTAMP WITH TIMEZONE column. I would like to
query for a timestamp using a different timezone.

For example if a column contains '2006-02-11 00:30:00-05'
select * from table where column='2006-02-10 19:30:00+00' would return
the column containing '2006-02-11 00:30:00-05'.

>From section 8.5.1.3 "To ensure that a literal is treated as timestamp
with time zone, give it the correct explicit type: TIMESTAMP WITH TIME
ZONE '2004-10-19 10:23:54+02'"

So I tried:

select * from table where column = TIMESTAMP WITH TIME ZONE '2006-02-10
19:30:00+00'

But it did not returned the desired result.

I am getting the timestamp as UTC and want to use to build a query, but
I don't want Postgres convert the timestamp. Is there anyway to do this?

Thanks






Re: [GENERAL] change the order of FROM selection to make query work

2006-09-26 Thread Brandon Aiken
I think you're doing different join types.  SQLite is probably doing
CROSS JOINs.  PostgreSQL is probably doing at least one INNER JOIN.

From http://www.sqlite.org/lang_select.html:
"If multiple tables names are separated by commas, then the query is
against the cross join of the various tables."

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Thomas Peter
Sent: Tuesday, September 26, 2006 9:15 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] change the order of FROM selection to make query work

hi,
posted this twice already, but it didn't seem to make it to the list.
so one more try:

i support a trac [1] installation and migrated the backend from sqlite to
postgres 8.1.4, which worked fine, but:
the following sql stopped working with postgres, and the fix of this
problem seems strange to me.
first the old sql, that worked with sqlite:

SELECT
p.value AS __color__,
id AS ticket, summary, status, priority ,component, t.type AS type,
time AS created,
changetime AS _changetime, description AS _description,
reporter AS _reporter
FROM ticket as t, permission as perm, enum as p
WHERE status IN ('new', 'assigned', 'reopened') AND perm.action =
'mf'
and p.name = t.priority AND p.type='priority'
ORDER BY priority, type, time

and the fix was, to put the 'ticket as t' at the end in the FROM
statement.
changing
FROM ticket as t, permission as perm, enum as p
to
FROM permission as perm, enum as p, ticket as t
works like expected!

so is this a bug, or do i get something wrong (a hint where to rtfm would
be very welcome in this case)

thanx,
thomas

[1] trac.edgewall.org



Re: [GENERAL] serial column

2006-09-26 Thread Brandon Aiken
The problem here is that you're trying to make the relational model do
something it was exactly designed *not* to do.  Rows are supposed to be
wholly independent of each other, but in this table, if you update row
200 of 700, you suddenly make 500 rows wrong.  The implications of that
are really bad.  It means whenever you do an INSERT, UPDATE, or DELETE,
you need to lock the whole table.  And since SELECT statements would be
accessing bad data during the table rebuild process, you have to go as
far as to lock the whole table from SELECT, too.  So you have to do an
ACCESS EXCLUSIVE table lock.

The linked list approach I mentioned is not that bad.  You can easily
find the beginning of the list (OUTER JOIN WHERE ParentID IS NULL) and
the end of the list (OUTER JOIN WHERE ChildID IS NULL).  You can easily
INSERT/UPDATE anywhere (insert record, new record becomes parent of
parent's old child and child of parent) and DELETE anywhere (parent
becomes parent of child, delete record).  The only problem is if you
need to say "show me the 264th item in the list" because you have to
iterate through the list.
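In application code the walk itself is trivial; a hypothetical Python
sketch, assuming the relationship rows have been fetched into a
{parent_id: child_id} map (names and sample ids are made up):

```python
def find_head(next_of):
    """The head is the only id that never appears as anyone's child."""
    children = {c for c in next_of.values() if c is not None}
    heads = [i for i in next_of if i not in children]
    if len(heads) != 1:
        raise ValueError("list is broken or empty")
    return heads[0]

def ordered_ids(next_of):
    """Walk the linked list from head to tail, collecting ids in order."""
    order = []
    node = find_head(next_of)
    while node is not None:
        order.append(node)
        node = next_of.get(node)
    return order

# Relationship rows as parentID -> childID; the tail's child is None.
links = {7: 3, 3: 9, 9: None}
print(ordered_ids(links))  # [7, 3, 9]
```

Note this does not remove the cost described above: finding the 264th
item still means following 264 links.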

You could use numeric IDs, I suppose, instead of integers.  Then you
just pick a number between the two items around it and use that.  So if
you need to insert an item between 1 and 2, you add in 1.5.  If you need
one between 1.5 and 2, you pick 1.75, etc.  Deletes are transparent.
You'll only get into trouble if your values get smaller than 10^-1000,
which, of course, they eventually will without reordering things
periodically. 
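The midpoint trick can be sketched with exact fractions (a toy
illustration, not PostgreSQL code; Python's Fraction avoids rounding, but
the denominators still grow until you renumber, which mirrors the
precision caveat above):

```python
from fractions import Fraction

def key_between(a, b):
    """Sort key for a row inserted between keys a and b (None = open end)."""
    if a is None and b is None:
        return Fraction(1)          # first row ever
    if a is None:
        return b - 1                # insert before the head
    if b is None:
        return a + 1                # append after the tail
    return (a + b) / 2              # insert between two neighbours

first = key_between(None, None)     # 1
second = key_between(first, None)   # 2
mid = key_between(first, second)    # 3/2
print(first, mid, second)           # 1 3/2 2
```

Ordering is then just ORDER BY on the key column; deletes need no
renumbering at all.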

It circles back to what you're trying to do with this sequence.  Why are
gaps bad?  Why must the database handle order instead of control code or
view code?  What is the significance of the order to the data model?

In any case, I would not use the order key as a primary key.  It should
be unique, to be sure, but primary keys should be very stable.  You may
wish to use a serial field as the primary key just for that sake.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: Bob Pawley [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 25, 2006 12:00 PM
To: Brandon Aiken; pgsql-general@postgresql.org
Subject: Re: [GENERAL] serial column

Actually, I am not trying to "force keys" nor, I believe, am I trying
to force a hierarchical structure within the database.

The numbers I want to assign to devices are nothing more than merely
another attribute of the device - perhaps akin to a number in a street
address.  The problem, from my viewpoint, is that this attribute needs
to always start at 1 and be sequential without gaps.

(I am, however, partly relying on a hierarchical order within the
database.  When I assign numbers to devices, the lowest number is
assigned, sequentially, to the device that has the lowest serial ID
number.)

Thanks for your comments - everything helps at my stage.

Bob Pawley





- Original Message - 
From: "Brandon Aiken" <[EMAIL PROTECTED]>
To: 
Sent: Monday, September 25, 2006 7:47 AM
Subject: Re: [GENERAL] serial column


I would tend to agree with Tom.

A table is by definition an unordered set of records.  Forcing keys to
have meaning of this type implies that there is a relationship between
each record in the set.  That's information you should be storing as
part of the record.  If order is important, design the database so that
it knows that order relationship exists.

An ordered list is just a hierarchal database wherein every record has
exactly one parent (or none if it's root) and exactly one child (or none
if it's end leaf), but the relational model does a rather poor job of
handling hierarchal relationships.  You might consider the two-way
linked list approach.  That is, each record knows the item before it and
the item after it, like so:

CREATE TABLE mainTable (
    id serial PRIMARY KEY,
    foo text,
    bar integer,
    zen numeric
);

CREATE TABLE mainTableRelationships (
    parentID integer,
    childID integer,
    CONSTRAINT mainTableRelationships_pkey PRIMARY KEY
        (parentID, childID),
    CONSTRAINT parentID_key UNIQUE (parentID),
    CONSTRAINT childID_key UNIQUE (childID),
    CONSTRAINT mainTable_parentID_fkey FOREIGN KEY (parentID)
        REFERENCES mainTable (id),
    CONSTRAINT mainTable_childID_fkey FOREIGN KEY (childID)
        REFERENCES mainTable (id)
);

Of course, there's really little difference between doing things this
way and simply ordering by the SERIAL field and numbering the rows
appropriately on output, except that the above way is hideously more
complex.
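For what it's worth, in later PostgreSQL releases (8.4 and up) the
"number them on output" approach can be done with a window function.
A sketch against the tables above (names are illustrative):

```sql
-- Compute gapless, 1-based display numbers at query time,
-- ordered by the stable serial key.  Gaps in id don't matter,
-- and deleting a row never forces a renumbering.
SELECT id,
       foo,
       row_number() OVER (ORDER BY id) AS device_number
FROM mainTable;
```

Because the numbering is derived per query, the stored data never has
to change to stay gap-free.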

Another option would be to create a temporary table ordered correctly,
truncate the existing table, delete the sequence (or change the default
on the primary key), copy the data back, and then re-create the sequence
(or change the default back to nextval).

Re: [GENERAL] serial column

2006-09-27 Thread Brandon Aiken
Yes, but if I tear down the house at 245 Main St, we don't renumber 247 Main St and on down the line, do we?

The problem here is that even if you get the sequencing to work, your table is dangerously unstable.  If you have 700 rows and you delete row #200, suddenly rows 201-700 are wrong.  That means you can't just lock the row you're working on.  You'll need to lock the whole table from INSERT/UPDATE/SELECT/DELETE until you've rebuilt the table because the whole thing is suddenly false.

I still believe the best method is going to be the linked list method I suggested, and it's the only one I can think of that meets relational model requirements.  The problem with it is that while finding the first item (the one with a NULL parent) and the last item (the one with a NULL child) are easy, deleting any item is easy (the parent becomes the parent of the child; delete the record), and even inserting an item anywhere is easy (insert the new record; it becomes the child of its parent and the parent of the parent's former child), it's more difficult to ask for item #4 or item #261 in the order.  You need an index for your linked list, which I'm guessing is precisely the problem.  :)  External indices into linked lists are another thing an SQL database doesn't handle very well, since it's all metadata and that adds physical overhead.
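To make that concrete, here is roughly what the "easy" operations look
like against the two-table sketch from my earlier message (item 42 and
all names are illustrative):

```sql
-- First item in the order: the row that is nobody's child.
SELECT t.*
FROM mainTable t
WHERE NOT EXISTS (SELECT 1 FROM mainTableRelationships r
                  WHERE r.childID = t.id);

-- Delete item 42: splice its parent directly to its child.
-- The statement order matters because of the UNIQUE constraints.
BEGIN;
CREATE TEMP TABLE splice AS
    SELECT p.parentID, c.childID
    FROM mainTableRelationships p, mainTableRelationships c
    WHERE p.childID = 42 AND c.parentID = 42;
DELETE FROM mainTableRelationships WHERE parentID = 42 OR childID = 42;
INSERT INTO mainTableRelationships (parentID, childID)
    SELECT parentID, childID FROM splice;
DROP TABLE splice;
DELETE FROM mainTable WHERE id = 42;
COMMIT;
```

Fetching "item #261", by contrast, means walking the chain one link at
a time, which is exactly the indexing problem.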

So we return to the question: what purpose does this sequential order serve?  Why are gaps bad?  What problems are gaps causing?  Why does the database need to know the exact order?  Why can't your control code be aware of it instead?

You're asking the RDBMS to do something it was expressly designed *not* to do.  Rows are supposed to be unrelated objects or entries.  A table is *not* a tuple or an array.  Ordering rows relates them, and makes your data less independent and your database less normalized.

In any case, I strongly recommend against using the ordering field as the primary key simply because you're planning to change them so much.  Make it a unique key to enforce the constraint, but primary keys should generally be very stable fields.

Brandon Aiken

-Original Message-
From: Bob Pawley [mailto:[EMAIL PROTECTED]]
Sent: Mon 9/25/2006 11:59 AM
To: Brandon Aiken; pgsql-general@postgresql.org
Subject: Re: [GENERAL] serial column


Re: [GENERAL] [NOVICE] Do non-sequential primary keys slow performance significantly??

2006-09-29 Thread Brandon Aiken
I would expect no performance difference at all.  All primary keys
automatically get an index, and an index is effectively an optimized
lookup structure (in PostgreSQL, typically a B-tree) mapping key values
to the physical locations of their records.  Indexes are designed to
eliminate the physical performance penalty of searching arbitrarily
large and variable data sets.

My only trepidation is about using unpredictable values for primary
keys.  Certainly they're candidate keys and should be unique in the
table, but I wouldn't be comfortable making one the primary key.  A
surrogate key combined with a unique constraint on your random field
seems like a better choice here, but that's entirely a subjective
opinion.
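A sketch of that arrangement (table and column names invented for
illustration):

```sql
CREATE TABLE widget (
    id        serial PRIMARY KEY,           -- stable, compact surrogate key
    random_id varchar(32) NOT NULL UNIQUE,  -- e.g. hex-encoded 128-bit value
    name      text
);

-- Other tables reference the surrogate key...
CREATE TABLE widget_part (
    widget_id integer REFERENCES widget (id),
    part_name text
);

-- ...while the application can still look things up by the random id.
SELECT * FROM widget WHERE random_id = 'deadbeefdeadbeefdeadbeefdeadbeef';
```

The UNIQUE constraint keeps the random field an enforced candidate key
without making every foreign key carry the 128-bit value.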

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Damian C
Sent: Friday, September 29, 2006 1:29 AM
To: [EMAIL PROTECTED]
Subject: [NOVICE] Do non-sequential primary keys slow performance
significantly??

Hello,
The most difficult part of this question is justifying WHY we would
want to use random primary keys!  There is a very strong reason for
doing so, although not quite compelling.

We are Java developers developing desktop applications that persist
data in Postgres.  This is a pretty "low spec" database as it will only
be servicing a few PCs.  We do this via Hibernate, so our SQL and
Postgres skills and insights are relatively lacking.  I certainly don't
really understand the gory internal details of Postgres.

We have an internal proposal to use what are virtually random 128-bit
numbers for our primary keys.  These are not truly random in any
mathematical sense, and they will be unique, but they are certainly
NOT sequential.

In my ignorant bliss I would suspect that Postgres will run more
slowly using random primary keys.  Can anyone provide any "rules of
thumb" for how this may affect performance?  Is it a plain dumb
idea?  Or might it have only a modest impact?

Any comments, insights, pointers are very much appreciated,

Thanks,
-Damian

---(end of broadcast)---
TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Advantages of postgresql

2006-10-02 Thread Brandon Aiken








I've recently done the same thing.

Basically, it boils down to philosophy.  MySQL's primary goal is speed:
speed over features, and even speed over data integrity.  PostgreSQL's
(and most RDBMS systems') primary goal is to implement the complete
relational model and maintain ACID compliance.

If you're using MySQL 4.x or earlier, you've got a terrible DBMS.  Prior
to MySQL 5, non-integer math was always inexact: there was no precise
numeric datatype.  MySQL 4 also lacked a number of features like views,
triggers, and stored procedures; MySQL 5 adds these.  Even then,
however, the default engine for MySQL, MyISAM, is *not* a transactional
engine, so updates are not atomic.  MyISAM also doesn't support foreign
key constraints, which, if your schema is even remotely complex, is
nightmarish.  You must use the InnoDB engine in MySQL to get the
benefits of transactions.

Essentially, it boils down to this:

1. If you have a very simple database of one or two unrelated tables for
a shopping cart or a guest book, MySQL is fine.  (But so is SQLite.)

2. If all you care about is speed and you aren't terribly concerned if
some of your records break or don't commit, MySQL is also fine.  This is
why some sites (Slashdot, Digg) use MySQL databases.  It's no big deal
if one of the forums loses some random guy's anti-MS rant.

3. If you plan to do all your data checking in your control code and not
enforce referential integrity, MySQL is fine.  This method is generally
considered poor design, however.

Part of the problem many DBAs have with MySQL is that the primary
developer is a bit... strange.  Early versions of the MySQL
documentation called foreign keys tools for weak developers, and said
that ACID compliance could be emulated in your application code so it
wasn't necessary in the database.  It should be pointed out that no
credible DBA (and, I should hope, no credible app dev) would agree with
these statements.  Essentially, instead of properly citing limitations
of the database, early MySQL docs simply said not only that every other
DBMS in the world had it wrong, but that the relational model itself was
essentially not useful.  To DBAs, MySQL came to be seen as one step
above the MBA who makes his department use a central Excel spreadsheet
as a "database".  This reputation continues to stick with MySQL in spite
of the strides it has made with MySQL 5.  Another huge problem with
MySQL is that it silently truncates data.  If you have a DECIMAL(5)
field and try to insert a value that is too large, instead of throwing
an error MySQL silently inserts 99999 (the maximum value).  That's
just... bad.  An RDBMS should do exactly what you tell it and complain
*loudly* when it can't.
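PostgreSQL's strict behavior on the same kind of input is easy to
verify (hypothetical table):

```sql
CREATE TABLE overflow_check (amount numeric(5,0));

INSERT INTO overflow_check VALUES (99999);   -- in range: accepted

INSERT INTO overflow_check VALUES (100000);  -- out of range:
-- ERROR:  numeric field overflow
-- PostgreSQL rejects the row rather than silently storing 99999.
```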

 

If you're used to MySQL, the problems with PostgreSQL are basically that
it's not quite as friendly.  The command-line client, psql, is less
user-friendly.  The Windows GUI app, pgAdmin III, is also less
user-friendly.  Additionally, the default install of PostgreSQL on
nearly every Linux system I've seen is configured for minimal resource
requirements, so you'll have to edit the configuration file to get the
database to perform well.  Also, since PostgreSQL has so many more
features than MySQL, it can be a bit daunting to get started.  It's like
you've worked with Notepad for years and years, and now you're starting
to use Word or Emacs.

 





--
Brandon Aiken
CS/IT Systems Engineer

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Iulian Manea
Sent: Monday, October 02, 2006 3:38 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Advantages of postgresql



 

Hello everybody,

So far I have only been working with MySQL.  Today I was talking to a
friend and he suggested I migrate to PostgreSQL, as it is way better ...

My question is ... why?

I mean, could someone please tell me some advantages and disadvantages
of working with PostgreSQL?

Thanks in advance,
Iulian!

 








Re: [GENERAL] Performance and Generic Config after install

2006-10-02 Thread Brandon Aiken
I think the problem would be partly mitigated by better or more obvious
documentation that makes it clear that a) PostgreSQL is probably not
configured optimally out of the box, and b) where exactly to go to get
server optimization information.  Even basic docs on postgresql.conf
seem lacking.  The fact that documents like these exist:

http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html

http://www.varlena.com/GeneralBits/Tidbits/perf.html#conf

should be a giant red flag that documentation is a tad sparse for the
config file.  Those docs must have taken hours of work, they're the only
thing I've really found, *and they're still three years out of date*.

It took me a lot of digging to find the docs on optimizing PostgreSQL
from postgresql.org.  It's in the documentation area, but it's not
documented in the manual at all that I could find (which is highly
counter-intuitive).  Instead, it's listed under 'Technical
Documentation' (making the manual casual documentation?  I thought all
Postgres docs were technical), then under 'Community Guides and Docs',
and finally under the subheading Optimizing (note that the above links
are listed there):

http://www.postgresql.org/docs/techdocs.2

Either the server installer or (preferably) the manual needs to point
very clearly to this documentation.  If nobody can find it, nobody will
use it, and it's very well hidden at the moment.
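As a taste of what such documentation needs to cover, here are the
settings most third-party tuning guides start with.  The values are
purely illustrative (8.2-style units), not recommendations:

```
# postgresql.conf -- illustrative starting points only
shared_buffers = 128MB         # page cache; the stock default is far lower
work_mem = 4MB                 # per-sort / per-hash memory
maintenance_work_mem = 64MB    # VACUUM, CREATE INDEX
effective_cache_size = 1GB     # planner hint: total OS + PG caching
```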

The manual gets updated with every release, but more and more I realize
that the manual isn't comprehensive.  The manual explains the SQL syntax
and how PostgreSQL interprets the relational model, but it has very
little information on how to really *use* PostgreSQL as a server.  The
manual is all app dev and no sysadmin.  For example, *what* compile time
options are available?  I know they exist, but I've never seen them
listed.

For another example, take a look at this so-called detailed guide to
installing PG on Fedora, which is linked from the 'Technical
Documentation' area of postgresql.org:

http://dancameron.org/pages/how-to-install-and-setup-postgresql-for-fedoralinux/

Now, really, this 'guide' is little more than what yum command to run
and which config lines to edit to limit remote TCP connections.

Now take a look at the first comment:
"Thanks for the advice. For an Oracle DBA this really helped me in
comming up to speed on Postgres administration."

There should be an Administration Guide companion to the Manual.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Davis
Sent: Monday, October 02, 2006 2:58 PM
To: Oisin Glynn
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance and Generic Config after install

On Mon, 2006-10-02 at 14:40 -0400, Oisin Glynn wrote:
> As an aside to the "[GENERAL] Advantages of PostgreSQL" thread going
on 
> today, I have wondered why the initial on install config of PostgreSQL

> is (according to most posts) very conservative.  I can see how this 
> would be a plus when people may be getting PostgreSQL as part of an OS

> in the Linux world who may never/rarely use it.
> 
> I know that in reality tuning and sizing all of the parameters is a
very 
> database specific thing, but it would seem that if some default
choices 
> would be available it would give those testing/evaluation and trying
to 
> get started a shot at quicker progress. Obviously they would still
need 
> to tune to your own application.
> 
> Some dreadfully named, possibly pointless options?
> 
> Unoptimized / low performance -  Low load on Server   (The current out

> of the box)

Keep in mind that PostgreSQL doesn't really restrict itself as a whole.
If you set the settings too low and throw costly queries at it, the
load on the server will still be very high. We don't want to imply that
PostgreSQL's settings restrict its CPU, memory, or disk usage as a
whole.

> Production Non Dedicated   - PostgreSQL is one of the apps sharing 
> server but is important.
> Production Dedicated Server   - The only purpose of this box is to run

> PostgreSQL
> 

In my opinion, this is more the job of distributions packaging it.
Distributions have these advantages when they are choosing the settings:

(1) They have more information about the target computer
(2) They have more information about the intended use of the system as a
whole
(3) They have more ability to ask questions of the user

PostgreSQL itself can't easily do those things in a portable way. If
someone is compiling from source, it is more reasonable to expect them
to know what settings to use.

However, that said, I think that distributions certainly do take a cue
from the default settings in the source distribution. That's why lately
the default settings have been growing more aggressive with each
release.

Re: [GENERAL] performace review

2006-10-07 Thread Brandon Aiken
It wouldn't surprise me if their bashing were correct, but I doubt that it's PostgreSQL's fault.
 
I downloaded the db source (inside opencrx-1.9.1-core.postgresql-8.zip) and executed their three schema files, dbcreate-indexes.sql, dbcreate-views.sql, dbcreate-tables.sql.  Each of the 118 tables has a three-field composite primary key of 'PRIMARY KEY (object_rid, object_oid, object_idx)'.  object_rid and object_oid are both VARCHAR(200).  There are *no* foreign key constraints.  Each table has between 15 and 50 fields, with 25 looking about average.
 
Gee, why do table joins take so long?  Maybe because a blind monkey created the schema?  Normalized databases do tend to perform better, so I hear.
 
Brandon Aiken


From: [EMAIL PROTECTED] on behalf of Tomi NA
Sent: Sat 10/7/2006 2:06 PM
To: PgSQL General
Subject: [GENERAL] performace review

I was just reading http://www.opencrx.org/faq.htm where RDBMS engines
are one of the questions and see pgsql bashed sentence after sentence.
Can anyone offer any insight as to whether it's fact or FUD?

t.n.a.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster

Re: [GENERAL] performace review

2006-10-07 Thread Brandon Aiken
AS "p$$assigned_to_title"
  FROM kernel_activity act
  JOIN kernel_activityparty p0
    ON p0.object_idx = 0
   AND p0."p$$object_parent__rid"::text = act.object_rid::text
   AND p0."p$$object_parent__oid"::text = act.object_oid::text;
That's *horrible*.  There's typecasting on the join conditions to convert things to text!  If there's a join on it, not only should the columns already be the same type, there ought to be a foreign key constraint on it (even if it is a self-referencing table).  The silly thing UNIONs the exact same query four times in order to create a hierarchy!  And it has the gloriously descriptive name 'kernel_view_027' (all 19 views share this naming convention).
 
While creating the tables, I got three errors about an unknown datatype.  Yeah, that's great design.
 
110 of the 118 tables have these fields:
  object_rid varchar(200) NOT NULL,
  "p$$object_parent__rid" varchar(200),
  object_oid varchar(200) NOT NULL,
  "p$$object_parent__oid" varchar(200),
  object_idx int4 NOT NULL,
  object__class varchar(200),
  modified_at varchar(20),
  created_at varchar(20),
  modified_by varchar(20),
  created_by varchar(20),
  "owner" varchar(50),
  access_level_browse int4,
  access_level_update int4,
  access_level_delete int4,
 
And this key:
CONSTRAINT kernel_media_pkey PRIMARY KEY (object_rid, object_oid, object_idx)
 
Wow, yeah. "modified_at" and "created_at".  Those should definitely not be timestamps.  "owner".  Great field name, that.  The only keys that don't allow NULLs are the primaries.  Clearly each of the 110 tables will need all these fields, then.
 
Looking at the indexes, the vast majority of them are against one of these 14 universal fields, but only exist on some of the tables.
 
It's *badly designed*.  I've been at this for less than three months and *I* can see it.  They're using a relational database as an object database.  No wonder their performance sucks!
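For contrast, a relationally designed version of that join might look
like this (entirely hypothetical tables: matching column types and a
declared foreign key, so no casts are needed):

```sql
CREATE TABLE activity (
    rid varchar(200) NOT NULL,
    oid varchar(200) NOT NULL,
    PRIMARY KEY (rid, oid)
);

CREATE TABLE activity_party (
    parent_rid varchar(200) NOT NULL,
    parent_oid varchar(200) NOT NULL,
    idx        integer      NOT NULL,
    FOREIGN KEY (parent_rid, parent_oid) REFERENCES activity (rid, oid)
);

SELECT *
FROM activity act
JOIN activity_party p0
  ON p0.parent_rid = act.rid   -- same type on both sides: no ::text casts,
 AND p0.parent_oid = act.oid   -- and the planner can use indexes directly
WHERE p0.idx = 0;
```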
 
Brandon Aiken


From: Alexander Staubo [mailto:[EMAIL PROTECTED]]
Sent: Sat 10/7/2006 5:59 PM
To: Brandon Aiken
Cc: Tomi NA; pgsql-general@postgresql.org
Subject: Re: [GENERAL] performace review

On Oct 7, 2006, at 23:44, Brandon Aiken wrote:

> I downloaded the db source (inside
> opencrx-1.9.1-core.postgresql-8.zip) and executed their three schema
> files, dbcreate-indexes.sql, dbcreate-views.sql, dbcreate-tables.sql.
> Each of the 118 tables has a three-field composite primary key of
> 'PRIMARY KEY (object_rid, object_oid, object_idx)'.  object_rid and
> object_oid are both VARCHAR(200).  There are *no* foreign key
> constraints.  Each table has between 15 and 50 fields, with 25
> looking about average.

To be fair, there are a bunch of indexes, but the number of indexes
seems low compared to the number of fields.

> Gee, why do table joins take so long?  Maybe because a blind monkey
> created the schema?  Normalized databases do tend to perform better,
> so I hear.

*De*normalization is the traditional hack to speed up queries, because
it reduces the need for joins.

Alexander.

Re: [GENERAL] more anti-postgresql FUD

2006-10-10 Thread Brandon Aiken
MySQL 3.23.29 is pre-InnoDB
(http://dev.mysql.com/doc/refman/4.1/en/innodb-in-mysql-3-23.html), so
this database is not transactional, not ACIDic, and does not support
row-level locking or foreign key referential integrity.  At this point,
MySQL lacked support for subqueries, UNIONs, VIEWs, and nearly
everything else beyond basic CRUD.

I bet I could design a program interfacing flat data files so fast it
makes any RDBMS pale in comparison.  SQLite does just that, and it's
ACID compliant!  Performance is not the only motivation for using an
RDBMS.  Data integrity and relational modeling are also big
considerations.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Merlin Moncure
Sent: Tuesday, October 10, 2006 1:56 PM
To: PgSQL General
Subject: [GENERAL] more anti-postgresql FUD

http://www.zabbix.com/manual/v1.1/install.php

in section PostgreSQL vs MySQL :
[quoting]
Regarding the choice between PostgreSQL and MySQL, MySQL is
recommended for several reasons:

* MySQL is faster

  recent benchmarks using ZABBIX clearly show that PostgreSQL
(7.1.x) is at least 10 times slower than MySQL (3.23.29)

  Note: These results are predictable. ZABBIX server processes use
simple SQL statements like single row INSERT, UPDATE and simple SELECT
operators. In such environment, use of advanced SQL engine (like
PostgreSQL) is overkill.
* no need to constantly run resource-hungry command "vacuum" for
MySQL
* MySQL is used as a primary development platform.

If you do use PostgreSQL, zabbix_server will periodically (defined in
HousekeepingFrequency) execute command vacuum analyze.
[done]

anybody know these guys?  this is right off the mysql anti-postgresql
advocacy page.

merlin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Anyone using "POSIX" time zone offset capability?

2006-10-16 Thread Brandon Aiken
What about time zones like Tehran (GMT+3:30), Kabul (GMT+4:30),
Kathmandu (GMT+5:45), and other non-whole-hour GMT offsets?  Is this
handled in some *documented* way already?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Monday, October 16, 2006 6:06 PM
To: pgsql-hackers@postgreSQL.org; pgsql-general@postgreSQL.org
Subject: [GENERAL] Anyone using "POSIX" time zone offset capability?

While trying to clean up ParseDateTime so it works reliably with full
timezone names, I found out about a "feature" that so far as I can tell
has never been documented except in comments in datetime.c.  The
datetime input code tries to recognize what it calls "POSIX time zones",
which are timezone abbreviations followed by an additional hour/minute
offset:

/* DecodePosixTimezone()
 * Interpret string as a POSIX-compatible timezone:
 *  PST-hh:mm
 *  PST+h
 *  PST
 * - thomas 2000-03-15
 */

However this doesn't actually work in all cases:

regression=# select '12:34:00 PDT+00:30'::timetz;
 timetz

 12:34:00-07:30
(1 row)

regression=# select '12:34:00 PDT-00:30'::timetz;
ERROR:  invalid input syntax for type time with time zone: "12:34:00
PDT-00:30"

(The behavior varies depending on which PG release you try it with, but
I can't find any that produce the expected result for a negative
fractional-hour offset.)

This syntax is ambiguous against some full timezone names present in the
zic database, such as "GMT+0", and it's also responsible for a number of
really ugly special cases in the datetime parser.  In view of the fact
that it's never entirely worked and never been documented, I'm inclined
to take it out.  Comments?  Is anyone actually using this?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Upgrade 7.4 to 8.1 or 8.2?

2006-10-20 Thread Brandon Aiken
From what I've seen 8.2 is going to offer several nice new features, but
I would move to 8.1.5 for now.

Honestly, I would probably wait until 8.2.1 is available before moving
to that branch.  No offense to the PG team, but I've been burned by
.0 releases once too many times to go live with 8.2.0.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of John
Sidney-Woollett
Sent: Friday, October 20, 2006 11:10 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Upgrade 7.4 to 8.1 or 8.2?

We're looking to upgrade from 7.4 - should we go to 8.1.5 or 8.2?

We have two databases; 7.4.6 and 7.4.11 in a master slave config using 
Slony. Both databases use the C locale with UTF-8 encoding on unix.

We've dumped and loaded the data into an 8.1.4 database and have seen no

problems with invalid UTF-8 sequences. So we're fairly happy that we can

upgrade to 8.1.5 pretty easily using Slony.

We're really looking for some extra performance right now.

Are the differences between 8.1.5 and 8.2 significant? Is 8.2 more about

speed or new features?

John

---(end of broadcast)---
TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Problem with 8.1.5 RPMs for Redhat AS 4

2006-10-25 Thread Brandon Aiken

The SRPM appears to be available:

http://www.postgresql.org/ftp/binary/v8.1.5/linux/srpms/redhat/rhel-as-4/


--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Will Reese
Sent: Wednesday, October 25, 2006 2:42 PM
To: Postgres General
Subject: [GENERAL] Problem with 8.1.5 RPMs for Redhat AS 4

I tried installing the 8.1.5 RPMs, and I got a dependency error on  
the postgresql-server-8.1.5-2PGDG.i686.rpm file.  The 8.1.4 rpms  
installed without a problem. I googled around and found this post...

http://www.dbtalk.net/mailing-database-pgsql-bugs/bugs-8-1-5-rpms-338777.html

It looks like the postgresql-server-8.1.5-3PGDG.i686.rpm is available  
for ES 4, but not AS 4.  Any idea when the new RPMs might be  
available for AS 4?

Thanks guys.

-- Will Reese

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Brandon Aiken
It also might be a big/little endian problem, although I always thought that 
was platform specific, not locale specific.

Try the UCS-2-INTERNAL and UCS-4-INTERNAL codepages in iconv, which should use 
the two-byte or four-byte versions of UCS encoding using the system's default 
endian setting.

There are many Unicode codepage formats that iconv supports:
UTF-8
ISO-10646-UCS-2 UCS-2 CSUNICODE
UCS-2BE UNICODE-1-1 UNICODEBIG CSUNICODE11
UCS-2LE UNICODELITTLE
ISO-10646-UCS-4 UCS-4 CSUCS4
UCS-4BE
UCS-4LE
UTF-16
UTF-16BE
UTF-16LE
UTF-32
UTF-32BE
UTF-32LE
UNICODE-1-1-UTF-7 UTF-7 CSUNICODE11UTF7
UCS-2-INTERNAL
UCS-2-SWAPPED
UCS-4-INTERNAL
UCS-4-SWAPPED

Gee, didn't Unicode just so simplify this codepage mess?  Remember when it was 
just ASCII, EBCDIC, ANSI, and localized codepages?

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Arnaud Lesauvage
Sent: Wednesday, November 22, 2006 12:38 PM
To: Arnaud Lesauvage; General
Subject: Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

Alvaro Herrera a écrit :
> Arnaud Lesauvage wrote:
>> Alvaro Herrera a écrit :
>> >Arnaud Lesauvage wrote:
>> >
>> >>mydb=# SET client_encoding TO LATIN9;
>> >>SET
>> >>mydb=# COPY statistiques.detailrecherche (log_gid, 
>> >>champrecherche, valeurrecherche) FROM 
>> >>'E:\\Production\\Temp\\detailrecherche_ansi.csv' CSV;
>> >>ERROR:  invalid byte sequence for encoding "LATIN9": 0x00
>> >>HINT:  This error can also happen if the byte sequence does 
>> >>not match the encoding expected by the server, which is 
>> >>controlled by "client_encoding".
>> >
>> >Huh, why do you have a "0x00" byte in there?  That's certainly not
>> >Latin9 (nor UTF8 as far as I know).
>> >
>> >Is the file actually Latin-something or did you convert it to something
>> >else at some point?
>> 
>> This is the file generated by DTS with "ANSI" encoding. It 
>> was not altered in any way after that !
>> The doc states that ANSI exports with the local codepage 
>> (which is Win1252). That's all I know. :(
> 
> I thought Win1252 was supposed to be almost the same as Latin1.  While
> I'd expect certain differences, I wouldn't expect it to use 0x00 as
> data!
> 
> Maybe you could have DTS export Unicode, which would presumably be
> UTF-16, then recode that to something else (possibly UTF-8) with GNU
> iconv.

UTF-16 ! That's something I haven't tried !
I'll try an iconv conversion tomorrow from UTF16 to UTF8 !

--
Arnaud

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] IN clause

2006-11-24 Thread Brandon Aiken
Hasn't it been said enough?  Don't allow NULLs in your database.
Databases are for storing data, not a lack of it.  The only time NULL
should appear is during outer joins.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Martijn van
Oosterhout
Sent: Friday, November 24, 2006 7:20 AM
To: surabhi.ahuja
Cc: A. Kretschmer; pgsql-general@postgresql.org
Subject: Re: [GENERAL] IN clause

On Fri, Nov 24, 2006 at 05:31:07PM +0530, surabhi.ahuja wrote:
> That is fine 
> but what I was actually expecting is this
> if 
> select * from table where col_name in (null, 'a', 'b');
> 
> to return those rows where col_name is null or if it = a or if it is =
b
>  
> But i think in does not not support null queries , am i right?

You'll need to check the standard, but IN() treats NULL specially, I
think it returns NULL if any of the elements is null, or something like
that. It certainly doesn't work the way you think it does.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability
to litigate.
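The NULL semantics Martijn describes are easy to demonstrate
(hypothetical table):

```sql
CREATE TABLE t (col_name text);
INSERT INTO t VALUES ('a');
INSERT INTO t VALUES ('b');
INSERT INTO t VALUES (NULL);

-- Returns only 'a' and 'b'.  For the NULL row the predicate
-- NULL IN (NULL, 'a', 'b') evaluates to NULL (unknown), not true,
-- and WHERE keeps only rows where the predicate is true.
SELECT * FROM t WHERE col_name IN (NULL, 'a', 'b');

-- To include NULL rows, test for them explicitly:
SELECT * FROM t WHERE col_name IN ('a', 'b') OR col_name IS NULL;
```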

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] indexes

2006-11-24 Thread Brandon Aiken
I asked this question here a while ago.  It's a fairly common question,
known as the surrogate vs. natural key debate.

Using a natural key has the advantage of performance.  With a surrogate
key, most RDBMS systems will have to maintain two indexes.  Natural keys
can also make your database more readable, and can eliminate the need to
do joins for foreign keys in some cases.

Surrogate keys are useful because you can very easily change your data
structure with a bit less SQL magick.  A lot of Object Relational
Mappers always create surrogate keys, too, although I suspect that's
mostly a way to make the ORM more portable by guaranteeing that a
primary key will exist.

The only other time surrogate keys are very useful is when all your
candidate keys have values that change fairly often, since the primary
key ought to be as stable as possible.

Some developers also feel more comfortable with an id field.  Having
that metadata feels like a safety net for some reason.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Allison
Sent: Friday, November 24, 2006 9:54 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] indexes

I notice a lot of places where people use the approach of creating an
index and 
a unique key like:

CREATE TABLE foo (
   idx SERIAL PRIMARY KEY,
   name varchar(32) UNIQUE NOT NULL
)

instead of
CREATE TABLE foo (
   name varchar(32) PRIMARY KEY
)

If the name is NEVER going to change, is there any advantage to doing
this?
If there are many-to-many reference tables (like name-to-friends) is
this any 
different?

I've seen this a lot, but I've always assumed that with the condition
that 
'name' would NEVER change, there was no advantage.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] IS it a good practice to use SERIAL as Primary Key?

2006-11-27 Thread Brandon Aiken
Simply put, it doesn't scale as well.

If a table already has candidate keys, then you've presumably got unique
indices on them.  A surrogate primary key adds another segment of data
to pass through I/O and another index to maintain.  Under high loads,
those extra cycles will cost you transactions per minute.

If you're able to throw hardware at the problem to compensate for
performance and data size issues, it's not a problem.  Most databases
are run on systems that are overkill already.  If, OTOH, you're running
a system that needs to be able to process billions of transactions over
exabytes of data (say, for example, a comprehensive multi-national health
record database) then you're going to be as interested in SQL tuning as
it's possible to be, because no amount of hardware will be enough.

The other argument is that it's redundant data with no real meaning to
the domain, meaning using surrogate keys technically violates low-order
normal forms.

As far as data changing, if you're using foreign key constraints
properly you should never need to issue more than one UPDATE command.
ON UPDATE CASCADE is your friend.
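For example (a sketch with a hypothetical parent/child pair; the table
and column names are illustrative only):

```sql
CREATE TABLE customer (
    username  varchar(32) PRIMARY KEY          -- natural key
);

CREATE TABLE orders (
    order_id  serial PRIMARY KEY,
    username  varchar(32) NOT NULL
        REFERENCES customer (username) ON UPDATE CASCADE
);

-- one UPDATE on the parent rewrites every referencing row for you
UPDATE customer SET username = 'jsmith2' WHERE username = 'jsmith';
```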

It is always possible to design a domain model which perfectly captures
business logic.  However, it is *not* always possible to actually
implement that domain in a computerized RDBMS, nor is it always
practical.  Just as the domain model represents an estimated
implementation of the real world information, an RDBMS is just an
estimated implementation of the relational model.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of John McCawley
Sent: Monday, November 27, 2006 1:53 PM
To: Ron Johnson
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] IS it a good practice to use SERIAL as Primary
Key?


>Yes, that's the point.  They are legacy designs, and that portion of
>the design is wrong.
>
>  
>
I'll weigh in with my $.02 on this subject.  After much pain and agony in 
the real world, I have taken the stance that every table in my database 
must have an arbitrary, numeric primary key (generally autogenerated).  
I have found that this gets me into a lot of arguments with other 
database guys, but never seems to cause any problems for me.  
Conversely, I have seen innumerable problems in the real world caused by 
the usage of actual data as primary keys.

Perhaps I am amazingly ignorant, but I have yet to find a case where my 
approach causes any real problems.  What does using "real" data as a 
primary key buy you?  The only real advantages I can see are that an 
individual record's data will be somewhat more human-readable without 
joining to other tables, and that your search queries can be simpler 
because they don't have to join against other tables.

On the (many) occasions that I have worked on databases with "real" data 
as primary keys, I just saw so many problems arise.  In the real world, 
data changes, even supposedly unchangeable data.  When using arbitrary 
primary keys, all you have to do is change the data in the one table 
where it lives.  If you are using real data as your keys, you have to 
write complex queries or code to "fix" your data when the supposedly 
unchangeable data changes.

Anyway, I'm sure this is a huge argument, but that's my $0.02



---(end of broadcast)---
TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] IS it a good practice to use SERIAL as Primary Key?

2006-11-27 Thread Brandon Aiken
It's an arbitrary identifier that only has meaning within the context of
the database.  The domain model isn't supposed to model data in a
database.  It's supposed to model data which coincidentally is going to
be stored in a database.

As far as your bank's poor software design, I can't help you there.
That's simply poor planning. 

Look, I'm not denying the benefits of surrogate keys.  There are many
cases where it makes the most sense to use them.  My only point is that
it *does* violate the relational model.  The fact is that's nothing
special or new for a DBA.  The SQL standard itself violates the
relational model by allowing you to create tables without primary keys.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: David Morton [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 27, 2006 2:30 PM
To: Brandon Aiken
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] IS it a good practice to use SERIAL as Primary
Key?

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Nov 27, 2006, at 1:21 PM, Brandon Aiken wrote:
> The other argument is that it's redundant data with no real meaning to
> the domain, meaning using surrogate keys technically violates low- 
> order
> normal forms.

It has real meaning in the sense that it is an internal identifier
that doesn't change.  My bank set my online login to a stupid 5
letters of my name plus the last four digits of my SSN, and they "can
not change" it.  Most likely, it is the primary key used as a
foreign key to all the financial data.  Dumb, dumb, dumb.

If, OTOH, they would go with an internal id, it would be trivial to  
change the login id.

David Morton
Maia Mailguard http://www.maiamailguard.com
[EMAIL PROTECTED]



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (Darwin)

iD8DBQFFazzQUy30ODPkzl0RAs/sAJ9rBTbXPNN/T4eQ9zjJFMAKFpfrPACdHcLj
pVtAZhjxk24vgRm/ScNfuyw=
=mLTC
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] IS it a good practice to use SERIAL as Primary Key?

2006-11-27 Thread Brandon Aiken
Shenanigans!

That problem occurs regardless of whether or not you use surrogate keys.
You have exceeded the scope of the example.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Scott Ribe
Sent: Monday, November 27, 2006 4:01 PM
To: Joshua D. Drake; pgsql-general@postgresql.org
Subject: Re: [GENERAL] IS it a good practice to use SERIAL as Primary
Key?

> You would update the address, the address id wouldn't change. If you
> want to keep track of old addresses you would keep an archive table
> associated with the user.id.

But what about historical data that referenced the address? If you move
today, I still want to know where I shipped last week's orders.


-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] n00b RAID + wal hot standby question

2006-12-04 Thread Brandon Aiken
Unless you can separate PGDATA and the WAL destination to be on wholly
independent physical disks and not just different partitions of the same
hardware array, the physical limitations will still be present.

I believe the recommended method is to use RAID 5 or RAID 10 data
partitions and then use RAID 1 for transaction logs.  Additionally,
you're supposed to match the stripes size of the arrays to the block
sizes of your database, but I can never remember the math involved to do
it.

Database guides like this are still a bit beyond what I can understand:
http://www.lc.leidenuniv.nl/awcourse/oracle/server.920/a96520/hardware.htm

This one is a bit easier:
http://www.dbazine.com/oracle/or-articles/ault1

This is the best RAID primer I've seen:
http://www.pcguide.com/ref/hdd/perf/raid/index.htm



I'm not convinced I/O is your problem, though.  High CPU and memory
usage is indicative of many different problems, and poor disk I/O is
usually not one of them.  In a modern system, I'd expect to see poor
disk I/O causing *low* CPU usage combined with poor SELECT and awful
INSERT/UPDATE/DELETE performance.  Maybe it's caching the database state
in memory while it's waiting for writing, though.

It seems more likely that the database is either pushing more
transactions per minute, pushing more complex transactions, dealing with
larger queries and result sets, maintaining more indexes, or running
more complex pl/SQL procedures, triggers, and constraints.

Additionally, if my understanding is right then running with autovacuum
disabled and no batch process vacuum strategy on a database with lots of
INSERTs and DELETEs is essentially like running without indexes.
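For reference, turning autovacuum on in an 8.1-era postgresql.conf also
requires the row-level statistics collector (a sketch; verify the
parameter names against your server version's documentation):

```
# postgresql.conf fragment (8.1-era parameter names)
stats_start_collector = on
stats_row_level = on       # autovacuum in 8.1 needs row-level stats
autovacuum = on
```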


--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Anton Melser
Sent: Monday, December 04, 2006 4:11 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] n00b RAID + wal hot standby question

Hi,
I am just starting at a company and we are inheriting a previously
built solution. It looks pretty good but my previous experience with
pg is seriously small-time compared with this...
I am very new at the job, and don't know what hd config we have but it
will be RAID-something I imagine (hey I was working with desktop
"servers" before this!). If that is very important I can find out. We
seem to be saving our WAL to the same partition as PGDATA, and I
notice that we are maxing out a reasonable looking server. The db is
not very big (~4gig, 400meg pgdump), and though I can't see any vacuum
strategy (autovacuum on a 8.1.4 is disabled), we didn't have as much
consistent CPU usage at my old job (with a 6 gig db and MUCH less CPU
and RAM, and probably as many connections), and my vacuum strategy was
also pitiful!  Sure, completely different environments, but I am
thinking that WAL replication could be a factor.
So my question... being in complete ignorance of how RAID works (the
performance details)... would it be better to try and separate the WAL
destination from PGDATA? How much of a difference could it make?
Should we wait till the customer starts complaining (no explosion in
traffic/db size realistic for the foreseeable future...)?
Any abuse welcome.
Cheers
Antoine

---(end of broadcast)---
TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] 8.2 contrib. "Full Disjunction"

2006-12-06 Thread Brandon Aiken
Download the contrib module .tgz from PGFoundry and check out the
readme.  Syntax is explained therein.  It's a multi-step process, it
appears.

I don't especially care for the term 'full disjunction' to describe this
operation, but it seems to be an understood relational operation.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Richard
Broersma Jr
Sent: Wednesday, December 06, 2006 1:30 AM
To: Richard Broersma Jr; General PostgreSQL List
Subject: Re: [GENERAL] 8.2 contrib. "Full Disjunction"

> 1) what is its application?
I guess I found part of my "layman's" answer:
http://www.cs.toronto.edu/~yaron/Presentations/pods2003.ppt#468,39,Example

> 2) how is it used?
Maybe this question should have been, what is the syntax?

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] concatenation operator || with "null" array

2006-12-08 Thread Brandon Aiken
NULL concatenated to anything is NULL.  Try this:

UPDATE test SET myint = COALESCE(myint || ARRAY[123], ARRAY[123]) WHERE
id = 1;

Or:

UPDATE test SET myint = 
   CASE WHEN myint IS NULL THEN ARRAY[123]
   ELSE myint || ARRAY[123]
   END
WHERE id = 1;

Note that ARRAY[NULL] is not an empty array: it is a one-element array
containing NULL, and it defaults to type TEXT[] unless you cast it, as
in ARRAY[NULL]::INTEGER[].  A genuinely empty integer array is written
'{}'::INTEGER[].  A NULL array column, meanwhile, is not a container of
NULLs at all; it is simply NULL, which is why the concatenation above
collapses to NULL.
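A third workaround, assuming you would rather avoid NULL array columns
entirely, is to default the column to an empty array so plain
concatenation always works:

```sql
-- with a non-NULL default, the original UPDATE needs no COALESCE/CASE
ALTER TABLE test ALTER COLUMN myint SET DEFAULT '{}'::integer[];
UPDATE test SET myint = '{}' WHERE myint IS NULL;  -- fix existing rows

UPDATE test SET myint = myint || ARRAY[123] WHERE id = 1;
```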



--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of stroncococcus
Sent: Wednesday, December 06, 2006 5:43 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] concatenation operator || with "null" array

Hello!

When I try to fill an array with the concatenation operator, like
UPDATE test SET myint = myint || ARRAY[123] WHERE id = 1
that before that statement was null, then it is also null after that
statement.
But if there is already something in that array and I execute that
statement, then everything works fine and one can find the 123 there,
too.
Is this the normal behavior? Is there a way to "concatenate" to null
arrays as well, or do I have to test this inside my script, and if it
is null fill it normal for the first time?

Best regards,
Kai


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



Re: [GENERAL] Numeric or Integer for monetary values?

2006-12-11 Thread Brandon Aiken
postgres=# select (101::integer)/(2::integer);
 ?column?
----------
       50

postgres=# select (1.01::numeric)/(2::numeric);
 ?column?
----------
   0.5050

Rounding errors are something you will need to deal with whether you use 
INTEGER or NUMERIC fields.  You will need to determine what the business logic 
requirements are for the math.  That is, what do your clients expect to happen 
to fractional units of money?  When during manual math operations are dollar 
values rounded?  Make your application work the way your client expects, not 
the other way around.

I would use NUMERIC since it represents your data most correctly.  Using 
INTEGER for money invariably involves lots of excessive and possibly confusing 
math with powers of 10.  It's very easy to randomly be off by an order of 
magnitude.  With money, that's *bad*.  INTEGER math also forces you to always 
silently truncate fractional cents.  That may not be what you want.
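As a sketch of one explicit rounding policy (round the first share, give
the remainder to the second, so the two shares always sum to the
original amount):

```sql
-- splitting $1.01 at 50% each; round() on numeric rounds half away from zero
SELECT round(1.01::numeric / 2, 2)          AS first_share,   -- 0.51
       1.01 - round(1.01::numeric / 2, 2)   AS second_share;  -- 0.50
```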

--
Brandon Aiken
CS/IT Systems Engineer

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonatan Ben-Nes
Sent: Monday, December 11, 2006 10:51 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Numeric or Integer for monetary values?

Hi all,

I need to decide which data type should I make for monetary values, shall I use 
Numeric data type to hold values like "9.52" or is it better to keep it as an 
integer with value in cents like "952"? 

I know that at the manual it's written about the Numeric data type that "It is 
especially recommended for storing monetary amounts and other quantities where 
exactness is required.", but I'm wondering what will happen at cases when I got 
$1.01 to divide between 2 entities at 50% each, if both will get 51 cents or 50 
cents it will be a mistake. 

The calculation procedure will probably be made with PL/pgSQL, actually maybe 
it doesn't even matter what the data type is (Integer/Numeric) as long as I 
make enough validations for the result?

Cheers!
  Ben-Nes Yonatan 

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] search_path when restoring to new db

2006-12-11 Thread Brandon Aiken
PostgreSQL is simply very granular about what it lets you dump.  The major 
point to realize is that pg_dump is useful for getting your data schema and 
data, and pg_dumpall will capture data schema, data, and most database instance 
configurations.  Pg_dumpall has access to *global* objects, such as roles, 
tablespaces, and the default search_path.

Pg_dump assumes you're planning on using it to restore to an existing database 
that's already correctly configured.  Pg_dumpall doesn't assume that.  However, 
it does assume that the new database server will be the same version of 
PostgreSQL, IIRC.  I believe it complains if the pg_dumpall binary and the 
server it talks to are different versions.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of SunWuKung
Sent: Monday, December 11, 2006 3:56 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] search_path when restoring to new db

Thanks for your answers.

I guess it means that I have to issue ALTER DATABASE after the restore,
then.
I find this a little scary because this is something that I just
realised now and wouldn't have thought of beforehand. It seems that to
make an exact copy of a single db it isn't enough to use pg_dump and
pg_restore; you need to issue other commands as well.

I think it would help a few users (it would certainly help me) if I
could find a complete example on how to backup a single database on one
server and restore it on another without having to worry about things
that might have been omitted - like this one.

I used to work with SQLServer and while I like Postgres a lot I don't
know enough to take advantage of the flexibility that this
backup-restore mechanism offers. In SQLServer if I had the proper
rights I could take a backup of a db, bring it to another server, say
restore and that's it (ok, I had to repopulate fulltext index) but I
could be sure that the db is in the same state than when I made the
backup. I must say that this was very convenient for me.

I think one should be able to create a backup and restore script that
would do the same with Postgres, but I didn't find one so far. Could
somebody show me an example? Or trying to simplify this is a wrong aim?

Thanks for your help.
Balázs


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



Re: [GENERAL] search_path when restoring to new db

2006-12-12 Thread Brandon Aiken
The standard method is to use a pg_dumpall for the initial copy and whenever 
globals or the schema changes, and use pg_dump when you just want to get the 
data from a single database.

Globals and schema should not change very often.  In fact, they should be fixed 
except between software revisions.  If they aren't, you might be doing 
something wrong.

What you can do is pg_dumpall --schema-only or pg_dumpall --globals-only (I'm 
not sure if one of those is a subset of the other) and then use pg_dump 
--data-only for the databases you want data for.  I'm not sure offhand which 
options you'll need to use to be sure of getting what you want.  Double check 
the docs and be sure to test it a few times.
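A minimal sketch of that recipe (flag names from the 8.x-era client
tools; double-check them against your version's docs before relying on
this):

```
# once, and again after any role/tablespace or schema change:
pg_dumpall --globals-only > globals.sql
pg_dump --schema-only mydb > schema.sql

# routine data-only backup of one database:
pg_dump --data-only mydb > data.sql

# restore on the new server:
psql -f globals.sql postgres
createdb mydb
psql -d mydb -f schema.sql
psql -d mydb -f data.sql
```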

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of SunWuKung
Sent: Tuesday, December 12, 2006 3:19 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] search_path when restoring to new db

Ok, I understand this difference now.
Knowing that, what is the standard way to copy a single database to
another server?

- can I use pg_dumpall to dump a single db? - or
- I have to use pg_dump and there is a procedure to ensure that old and
new dbs are the same, like

1. create new db
2. check or create user
3. run pg_restore
4. do alter db

and is there a way to automate this? If there is could you point me
there?

If there isn't:
Do you think that automating the complete backup and restore of a
single database would be database specific or it could work with
different databases? Don't you think that this is something that would
be generally useful?

Thank you for the help.
Balázs


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Why DISTINCT ... DESC is slow?

2006-12-12 Thread Brandon Aiken
If you have, say, an index(x, y) then that index will often double as an
index(x).  It will generally not double as an index(y).

I'm not sure if that's how all RDBMSs work, but I'm pretty sure that's
how Oracle works.  It never surprises me when PostgreSQL mimics Oracle.
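Using the composite index from the quoted thread below as an example (a
sketch; check the actual plans with EXPLAIN against your own data):

```sql
-- an index on (login_id, collect_time) can serve both of these:
EXPLAIN SELECT * FROM n_traffic
 WHERE login_id = 42;                         -- leading column only

EXPLAIN SELECT * FROM n_traffic
 WHERE login_id = 42
   AND collect_time >= '2006-12-01';          -- both columns

-- but it is of little help here, where the leading column is unconstrained:
EXPLAIN SELECT * FROM n_traffic
 WHERE collect_time >= '2006-12-01';
```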

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erik Jones
Sent: Tuesday, December 12, 2006 11:33 AM
To: Ron Johnson
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Why DISTINCT ... DESC is slow?

Ron Johnson wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 12/12/06 01:28, Anton wrote:
>   
>> Hi. With this table (about 800 000 rows):
>>
>> =# \d n_traffic
>> Table "public.n_traffic"
>>     Column    |            Type             |           Modifiers
>> --------------+-----------------------------+------------------------------
>>  login_id     | integer                     | not null
>>  traftype_id  | integer                     | not null
>>  collect_time | timestamp without time zone | not null default now()
>>  bytes_in     | bigint                      | not null default (0)::bigint
>>  bytes_out    | bigint                      | not null default (0)::bigint
>> Indexes:
>>"n_traffic_collect_time" btree (collect_time)
>>"n_traffic_login_id" btree (login_id)
>>"n_traffic_login_id_collect_time" btree (login_id, collect_time)
>> Foreign-key constraints:
>>"n_traffic_login_id_fkey" FOREIGN KEY (login_id) REFERENCES
>> n_logins(login_id) ON UPDATE CASCADE
>>"n_traffic_traftype_id_fkey" FOREIGN KEY (traftype_id) REFERENCES
>> n_traftypes(traftype_id) ON UPDATE CASCADE
>> 
>
> Why do you have indexes on both LOGIN_ID *and* LOGIN_ID +
COLLECT_TIME?
>
> ISTM that you can drop the LOGIN_ID index.
>   
Hmm...   Will queries that use only login_id and not collect_time use 
the (login_id, collect_time) index?

-- 
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PG Admin

2006-12-13 Thread Brandon Aiken
The problem here is that you've defined in business logic a relationship
between the records in your table, then *not defined that relationship
in the database*.  Now you're complaining that the database doesn't do
what you want.  How can it?  You're not telling it everything.

Remember, in a relational database:
1. A relation (or table) is a collection of items with the same
properties.
2. The items are not related to each other directly.  This is why order
is not important.  Relations (tables) are unordered sets, not ordered
lists.  Here's the fundamental point: records in a table are, by
default, not related to each other!

What determines the ordering sequence here?  I have to think it's based
on time?  It can't be arbitrary, because SERIAL is an arbitrary
sequence.  If you weren't designing a database, how would you be
assigning and reassigning numbers?

You need a field in each record that *knows* this assigning order.  The
database needs to know how to figure out what the order is, since the
actual number you assign to it is exactly related to that order and only
that order.  All we know at the moment is that you need a field
"ordinal" such that for every record k, ordinal(k) = ordinal(k-1)+1 (and
with some fixed value for ordinal(0)).  What determines the relationship
between k and k-1?  Why is k-1 not k-2 or k+4?

Once you do this you simply need to re-sequence this ordinal field on
In/Up/De.  You can probably even construct a rather elaborate VIEW to
automatically calculate the ordinals declaratively, or use a separate
table to track the parent-child relationship of each object and use that
for numbering.
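A sketch of the VIEW approach (8.1 has no window functions, so the
ordinal is computed with a correlated subquery; the device table and its
created_at column are hypothetical, and distinct created_at values are
assumed):

```sql
CREATE TABLE device (
    device_id   serial PRIMARY KEY,              -- arbitrary, gap-tolerant
    created_at  timestamp NOT NULL DEFAULT now() -- the assigning order
);

-- gapless numbering derived on the fly: a DELETE "renumbers" automatically
CREATE VIEW device_numbered AS
SELECT d.device_id,
       (SELECT count(*)
          FROM device d2
         WHERE d2.created_at <= d.created_at) AS ordinal
  FROM device d;
```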

Brandon Aiken

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bob Pawley
Sent: Monday, December 04, 2006 10:40 PM
To: Berend Tober
Cc: pgsql general
Subject: Re: [GENERAL] PG Admin

I am talking about designing the control system.

No one makes a perfect design at first go. Devices are deleted and others 
added. Until the end of the design stage the numbers need to be sequential 
with no gaps. After the design the numbers of each device are static and new 
devices are added to the sequence or fill in for abandoned devices - but 
that is another, separate problem.

But that is beside the point. What I am looking for is a gapless sequence 
generator which has the ability to adjust for deletions as well as 
additions.

What I am looking for is a very simple adaptation of the serial function. 
All that I need it to do is to adjust for design changes and not care 
that it is reassigned to a different device. The fact that a particular 
device may, by happenstance, change its assigned number - once, twice or 
multiple times during the design stage - is of no consequence, as long as 
the totality of numbers assigned is sequential and gapless.

Bob


- Original Message - 
From: "Berend Tober" <[EMAIL PROTECTED]>
To: "Bob Pawley" <[EMAIL PROTECTED]>
Cc: "pgsql general" 
Sent: Monday, December 04, 2006 7:15 PM
Subject: Re: [GENERAL] PG Admin


> Bob Pawley wrote:
>> You're missing the point.
>>
>> I am creating a design system for industrial control.
>>
>> The control devices need to be numbered. The numbers need to be 
>> sequential. If the user deletes a device the numbers need to
regenerate 
>> to again become sequential and gapless.
> Could you explain what it is about industrial control that requires
the 
> reassignment of numbers? Seems to me to make for confusion because
over 
> time, you then have a particular instrument referred to by different 
> identifiers. So if you had other data, such as written logs, shop
floor 
> design diagrams, or other data not included in the data base, for
example, 
> you'ld have the problem of keeping track of which instruments were
really 
> being talked about because the names (identifying number, that is)
keep 
> changing.
>
> 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] INSERT INTO row value constructors

2006-12-13 Thread Brandon Aiken
This is just an odd question.  RDBMSs I've used previously (MySQL and SQLite, 
IIRC) have allowed row value constructors (optional feature F641 in SQL'03) on 
INSERT statements.  That is:

INSERT INTO mytable
VALUES (0,'hello'),(1,'world');

Is essentially shorthand for:

INSERT INTO mytable VALUES (0,'hello');
INSERT INTO mytable VALUES (1,'world');

Under PostgreSQL, though, the parser thinks the row value constructors are 
arrays.  This isn't a problem, per se, except to make me realize that 
PostgreSQL is unlikely to adopt F641.

Is that correct, or is this behavior expected to change?

This is just a curiosity question.  I expect that PostgreSQL would perform 
similarly if the equivalent single-row INSERTs were batched inside a 
transaction.  It just struck me as odd when it didn't work.
 
--
Brandon Aiken
CS/IT Systems Engineer
 
Confidentiality Notice
This email, including attachments, may include confidential and/or proprietary 
information, and may be used only by the person or entity to which it is 
addressed.  If the reader of this email is not the intended recipient or 
his/her authorized agent, the reader is hereby notified that any dissemination, 
distribution or copying of this email is prohibited.  If you have received this 
email in error, please notify the sender by replying to this message and 
deleting this email immediately.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] INSERT INTO row value constructors

2006-12-13 Thread Brandon Aiken
Cool.  

Now I just have to wait for Debian backports, or figure it out for
myself if I can find the time.

*sigh* 8.2 isn't even in Portage yet.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Martijn van Oosterhout [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 13, 2006 5:01 PM
To: Brandon Aiken
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] INSERT INTO row value constructors

On Wed, Dec 13, 2006 at 04:55:00PM -0500, Brandon Aiken wrote:
> This is just an odd question.  Previously RDBMSs (MySQL, SQLite, IIRC)
I've used have allowed row value constructors (optional feature F641 in
SQL'03) on INSERT statements.  That is:
> 
> INSERT INTO mytable
> VALUES (0,'hello'),(1,'world');



> Is that correct, or is this behavior expected to change?

It's in 8.2, see the documentation:

http://www.postgresql.org/docs/current/static/dml-insert.html

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability
to litigate.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] out of memory woes

2006-12-14 Thread Brandon Aiken
I wonder if this is related to the Linux memory overcommit problem:

http://www.postgresql.org/docs/current/interactive/kernel-resources.html#AEN19361


--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Angva
Sent: Wednesday, December 13, 2006 4:49 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] out of memory woes

Hi everyone,

First, this group has been good to me, and I thank you guys for the
valuable help I've found here. I come seeking help with another
problem. I am not even sure my problem lies in Postgres, but perhaps
someone here has had a similar problem and could point me in the right
direction.

As I've mentioned in a few other posts, I run a daily job that loads
large amounts of data into a Postgres database. It must run
efficiently, so one of the tricks I do is run table loads, and commands
such as cluster, in parallel. I am having a problem where my cluster
job consistently fails with an out of memory error. It did not always
do this, but at some point a memory leak or "something" built up to the
point where clustering would always fail on one of the two largest
tables. That is, four tables are clustered in parallel. The smaller of
the two finish successfully. The remaining two - the largest - run for
several minutes. Then one of the tables - not always the same one -
gets an out of memory error and fails.

So, suspecting a memory leak, I tried bouncing Postgres, and ran the
clusters again. No luck - failed in the same manner.

I don't know if it means anything, but swap never seems to be used by
the postgres processes (I stared at top and vmstat for a while), though
the VIRT column in top definitely shows something. Here are sample
lines from top while two processes are running:

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
30692 postgres  25   0 1033m 854m 459m R  100 14.3   1:10.68 postgres:
secmaster dev_stage [local] SELECT
30691 postgres  23   0 1752m 1.6g 459m R  100 27.5   2:55.60 postgres:
secmaster dev_stage [local] SELECT

(You see SELECT instead of CLUSTER because I wrapped up my cluster
commands in functions. I call them from psql by selecting the
function.)

Sample vmstat output:

procs ---memory-- ---swap-- -io --system--
cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us
sy id wa
 3  2724  18392   5052 559962400   915  23525 3 11
2 80  8

It seems that the postgres processes do not want to use swap -- swap
never increases as the processes run. Again I am not sure whether this
is significant.

If I run the clusters sequentially, there is no error - they just take
too long. I was out of ideas so I bounced the server entirely and ran
the clusters in parallel a few times - success! But I don't want to
have to bounce the server regularly.

So, I'm not sure what to do. I need to figure out why the server would
deteriorate in such a way. Any advice that can be offered would be much
appreciated. I can provide any additional information that might be
necessary.

I am running Postgres 8.1.2 on CentOS 4.4 64-bit.

Thanks a lot,
Mark


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] could not write to log -> PANIC -> System down

2006-12-14 Thread Brandon Aiken
That should not occur if NetBackup is at all a recent version and you're
on WinXP or Win2k3.  The backup client should be using Volume Shadow
Copy.  You should only have file locking issues on Windows 2000 or if
your partitions are FAT32 (which is a terrible idea).  

Of course, it's Windows.  "Should not" is often a suggestion, it seems.
As a port, postmaster.exe was presumably not written with VSS in mind,
so it might object to the shadow copy instantiation (which, again, it
*should* not be able to do).

No idea on the frequent autovacuuming.  Do you do a lot of deletes?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Merlin Moncure
Sent: Thursday, December 14, 2006 4:00 PM
To: Scott Marlowe
Cc: dev; pgsql general
Subject: Re: [GENERAL] could not write to log -> PANIC -> System down

On 12/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>
>
>
> On Thu, 2006-12-14 at 11:28, dev wrote:
>  > Hello friends,
>  >
>  > we have some strange problem, postmaster (pg 8.1 /win32)
>  > suddenly shutdown because of "no reason".
>  >
>  > The interesting thing is that this always occurs at almost the
>  > same time (between 0:00 and 0:30h); in that time period the system
>  > backup (Veritas Backup Exec agent) is running - it starts at 23:30.
>  > The problem occurs randomly.
>  > In two of the cases we have an UPDATE/INSERT operation, but in the
>  > third case - no.
>  >
>  > P.S. Why "autovacuum" runs every minute almost? Is this related?
>  >
>  > Thanks in advance!
>  >
>  >
>  >  LOG (2006-12-14) ==
>  > 2006-12-14 00:00:51 LOG:  autovacuum: processing database "mtdb"
>  >
>  > 2006-12-14 00:01:52 LOG:  autovacuum: processing database "mtdb"
>  >
>  > 2006-12-14 00:02:53 LOG:  autovacuum: processing database "mtdb"
>  >
>  > 2006-12-14 00:03:54 LOG:  autovacuum: processing database "mtdb"
>  >
>  > 2006-12-14 00:04:56 LOG:  autovacuum: processing database "mtdb"
>  >
>  > 2006-12-14 00:06:02 LOG:  autovacuum: processing database "mtdb"
>  >
>  > 2006-12-14 00:06:02 ERROR:  could not write block 14725 of relation
>  > 1663/16388/61387: Permission denied
>  >
>  > 2006-12-14 00:06:02 CONTEXT:  writing block 14725 of relation
>  > 1663/16388/61387

>  Is your backup agent (veritas backupexec) doing a file system backup of
>  the $PGDATA directory?  This is probably not a good idea, especially if
>  it changes perms / locks the files while it is doing a backup.  Either
>  way, a file system backup is not the proper way to backup a pgsql
>  instance, unless it is combined with PITR recovery.
>
>  Best answer is to not let the backup agent hit the $PGDATA directory,
>  but rather to run a backup with pg_dump or pg_dumpall to some other
>  directory and have your backup agent back that file up.
>

problem is veritas which has a special kernel driver which can lock
any file even if it is in use by an application.  obviously, you do
not want to do raw file system backup of your database folder.

I would check out eSilo (disclaimer: I work at this company) for
backup solutions that are specialized towards databases.

merlin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] could not write to log -> PANIC -> System down

2006-12-15 Thread Brandon Aiken
My understanding of VSS is that only one non-VSS aware app can access
the data at any one time.  All I meant was that if their NetBackup
version was old, they probably cannot benefit from VSS, since I doubt
the Win32 PG port knows about it either.

Brandon Aiken

-Original Message-
From: Martijn van Oosterhout [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 14, 2006 5:23 PM
To: Brandon Aiken
Cc: Merlin Moncure; pgsql-general@postgresql.org
Subject: Re: [GENERAL] could not write to log -> PANIC -> System down

On Thu, Dec 14, 2006 at 05:13:30PM -0500, Brandon Aiken wrote:
> Of course, it's Windows.  "Should not" is often a suggestion, it seems.
> As a port, postmaster.exe was presumably not written with VSS in mind,
> so it might object to the shadow copy instantiation (which, again, it
> *should* not be able to do).

Any backup system that is not transparent to processes running on the
system seems to be flawed by design.

> No idea on the frequent autovacuuming.  Do you do a lot of deletes?

In those messages "processing" just means it woke up to see if there
was anything to do. It probably didn't do anything. Waking up every
minute is not that big a deal...

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability
to litigate.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] TIMESTAMP WITHOUT TIME ZONE

2006-12-15 Thread Brandon Aiken
You asked:
"I'd appreciate a clean yes/no:  From a Java application, through PG in
both directions, the same timestamp comes back that was handed to the
JDBC driver so long as it's stored in a "timestamp without time zone"
attribute, no matter where on earth the insert/update originates, nor
where the select originates?"

No.  It returns the number of seconds since epoch, which is 1970-01-01
00:00 GMT.  If you insert '2006-12-15 20:00 EST', it basically inserts
the result of EXTRACT('epoch' FROM '2006-12-15 20:00 EST'), which is
1166230800.  It is a normal Coordinated Universal Time (UTC) value.

TIMESTAMP WITH TIME ZONE will precisely identify any point in time.  It
does not store the time zone information from the client.  When you
later select the field, it returns a properly formatted string with the
time zone the server is configured for in postgresql.conf.  You can also
use AT TIME ZONE to specify a different zone if you wish.

If you want to store the time zone information the client used when it
stored the time (which is generally useless data), I suggest a varchar
field that stores the ISO acronym for the relevant time zone or a
numeric field that stores the time adjustment in hours.

Examples (this server is EST, or GMT -05):

postgres=# SELECT '2006-12-15 20:00 PST'::TIMESTAMP WITH TIME ZONE;
      timestamptz
------------------------
 2006-12-15 23:00:00-05
(1 row)

postgres=# select '2006-12-15 20:00 PST'::TIMESTAMP WITH TIME ZONE AT TIME ZONE 'GMT';
      timezone
---------------------
 2006-12-16 04:00:00
(1 row)
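The epoch figure quoted above can be checked outside the database; a small Python sketch (treating EST as a fixed UTC-5 offset and ignoring DST):

```python
from datetime import datetime, timezone, timedelta

# 2006-12-15 20:00 at UTC-5 is 2006-12-16 01:00 UTC; timestamp() on an
# aware datetime returns seconds since the Unix epoch.
est = timezone(timedelta(hours=-5))
ts = datetime(2006, 12, 15, 20, 0, tzinfo=est)
epoch = int(ts.timestamp())
print(epoch)  # -> 1166230800
```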



--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Richard Troy
Sent: Friday, December 15, 2006 5:18 PM
To: Tom Lane
Cc: Richard Huxton; Randy Shelley; pgsql-general@postgresql.org
Subject: Re: [GENERAL] TIMESTAMP WITHOUT TIME ZONE 


> Richard Troy <[EMAIL PROTECTED]> writes:
> > See my post from a few minutes ago, but simply put, time/date is at
least
> > as challenging as money or multibyte character. And, simply put, the
> > Postgres implementation of timezone is INSUFFICIENT.
>
> Really?  We do all the things you have listed, and more.  AFAICS what
> you have described is an outside-the-database reinvention of PG's
> semantics for timestamp with time zone.
>
>   regards, tom lane

Hi Tom,

thanks for the prompt reply... Not much time - just a few moments to
reply and then I have to get on with my customer's deliverables...
...ISTM I took the meaning "TIMESTAMP WITH TIMEZONE" literally, while in
reality the PG team has implemented the concept but "without timezone"
in the database as a part of user data. I confess I never double checked
the implementation details thereof, as it sounds obvious you're
including time zone data in the data stored by the server. Also, of the
two RDBMSes in which I personally know the internal implementations of
date/time, and of the ones I've talked with the engineers about, none of
them get it right or even begin to get it right, so it never occurred to
me that Postgres would do so much better. Sounds like the PG team has
once again thought about the problem from a different perspective and
came up with a better answer.

That said, nobody has yet assured me that when I give a timestamp I get
it back unmolested. As you correctly recall, yes, Science Tools supports
five RDBMSes and we need to do so as cleanly and consistently as we can,
and yes, it's pretty hard to do all the testing, given all the
permutations. And, we're in the process of certifying both Ingres (which
will make it, I'm sure) and ANTS (which might not). So, seven RDBMS
choices... -shrug-

I'd appreciate a clean yes/no:  From a Java application, through PG in
both directions, the same timestamp comes back that was handed to the
JDBC driver so long as it's stored in a "timestamp without time zone"
attribute, no matter where on earth the insert/update originates, nor
where the select originates? Same bits, yes? Otherwise, "Houston, we've
got a problem."

Thanks again,
Richard

-- 
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
[EMAIL PROTECTED], http://ScienceTools.com/


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Autovacuum Improvements

2006-12-19 Thread Brandon Aiken
You're saying that the dirtiness of a table is proportional to when you
plan on vacuuming it next.  I don't see that connection at all.  The
only correlation I might see is if it happens to be 5:59 AM when your DB
decides your table is dirty, and your maintenance window closes at 6:00
AM.  Then you have to program the maintenance window to gracefully
unplug the vacuum.

Currently, autovacuum runs every minute and checks to see if any tables
meet the requirements for vacuuming.  Are the requirements the amount of
time a vacuum would take, or the raw number of dirty tuples?  One might
be a function of the other, for sure, but exactly what does the
autovacuumer use to decide when to clean?
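For what it's worth, the 8.1-era docs describe the trigger roughly as: vacuum when the dead-tuple count exceeds autovacuum_vacuum_threshold plus autovacuum_vacuum_scale_factor times reltuples. A hedged Python sketch of that rule (the GUC values below are illustrative, not necessarily the shipped defaults):

```python
# Sketch of the documented autovacuum decision rule: a table is vacuumed
# once its dead tuples exceed a fixed base threshold plus a fraction of
# the table's row count. Default values here are assumptions.
def needs_vacuum(dead_tuples, reltuples,
                 vacuum_threshold=1000, scale_factor=0.4):
    return dead_tuples > vacuum_threshold + scale_factor * reltuples

print(needs_vacuum(500, 1000))   # -> False (500 <= 1000 + 0.4*1000)
print(needs_vacuum(2000, 1000))  # -> True  (2000 > 1400)
```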

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Matthew
O'Connor
Sent: Tuesday, December 19, 2006 5:37 PM
To: Glen Parker
Cc: Postgres general mailing list
Subject: Re: [GENERAL] Autovacuum Improvements

Glen Parker wrote:
> Erik Jones wrote:
>> Matthew O'Connor wrote:
>>> Glen Parker wrote:
>>>> If it isn't there somewhere already, I would ask to add:
>>>>  Expose a very easy way to discover autovacuum's opinion about a 
>>>> particular table, for example "table_needs_vacuum(oid)", ignoring 
>>>> any time constraints that may be in place.
>>>
>>> This might be a nice feature however in the presence of the much 
>>> talked about but not yet developed maintenance window concept, I'm 
>>> not sure how  this should work.  That is, during business hours the 
>>> table doesn't need vacuuming, but it will when the evening 
>>> maintenance window opens up.
>  >
>> Well, what he's saying is, "Not taking into account any 
>> time/maintenance windows, does this table need vacuuming?"
> 
> Correct.  IOW, "does it need it?", not "would you actually do it at
this 
> time?"...

I understand that, but it's a subjective question.  The only question
autovacuum answers is "Am I going to vacuum this table now?", so in the
current setup you probably could create a function that answers your
question.  I was just pointing out that in the future, when maintenance
windows get implemented, this question becomes less clear.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



Re: [GENERAL] RESTORING A DATABASE WITH DIFFERENT TIMEZONES

2006-12-21 Thread Brandon Aiken
'2006-12-20 00:00:00-02' and '2006-12-19 23:00:00-03' *are* the same
time.  You *did* preserve it.  Is your application unaware of timezone?

If you want the server to behave like it's in a different time zone than
where it actually is, configure the timezone setting in postgresql.conf.

http://www.postgresql.org/docs/8.2/interactive/runtime-config-client.html#GUC-TIMEZONE
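A quick way to convince yourself the two literals name one instant is to compare them as offset-aware datetimes; a Python sketch (fixed offsets stand in for the zone names):

```python
from datetime import datetime

# Both strings from the thread denote 2006-12-20 02:00 UTC, so the
# aware-datetime comparison agrees they are the same point in time.
a = datetime.fromisoformat("2006-12-20 00:00:00-02:00")
b = datetime.fromisoformat("2006-12-19 23:00:00-03:00")
print(a == b)  # -> True
```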


--
Brandon Aiken
CS/IT Systems Engineer

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rodrigo Sakai
Sent: Thursday, December 21, 2006 12:33 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] RESTORING A DATABASE WITH DIFFERENT TIMEZONES

Hi all,

I'm having some troubles with time zones! I have a database dump file
that have the date fields stored as '2006-12-20 00:00:00-02'! 
And I have to restore it in a database that has the time zone configured
as 'BRST' (-3 from GMT). So, when it is restored the value becomes
'2006-12-19 23:00:00-03'. OK, this is logical because of the difference
in time zones.

But I have to restore it and maintain the same value of datetime! How
can I do it?

Thanks in advance! 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] RESTORING A DATABASE WITH DIFFERENT TIMEZONES

2006-12-21 Thread Brandon Aiken
So you want to change the time?  I don't see why that's what you'd want
to do, but:

UPDATE mytable SET mydate = mydate + INTERVAL '1 hour';

That's the only way to make '2006-12-20 00:00:00-02' show up as
'2006-12-20 00:00:00-03' if GMT-03 is the timezone your server is in,
because those are completely different times.

You should not be storing the timezone information if you just want the
relative time of day and not the absolute time.
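The effect of that UPDATE can be sketched in Python (treating BRST as a fixed -03 offset, which is an assumption; real BRST is a daylight-saving abbreviation):

```python
from datetime import datetime, timedelta, timezone

# Shifting the stored instant by one hour makes it read as midnight
# when displayed at -03, mirroring the UPDATE ... + INTERVAL '1 hour'.
brst = timezone(timedelta(hours=-3))  # assumed fixed offset for BRST
orig = datetime.fromisoformat("2006-12-20 00:00:00-02:00")
shifted = (orig + timedelta(hours=1)).astimezone(brst)
print(shifted.isoformat(sep=" "))  # -> 2006-12-20 00:00:00-03:00
```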

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Rodrigo Sakai [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 21, 2006 1:37 PM
To: Brandon Aiken; 'Postgres general mailing list'
Subject: RES: [GENERAL] RESTORING A DATABASE WITH DIFFERENT TIMEZONES

  Actually I want the server to behave in the time zone 'BRST' (this is
already configured). But the machine where the dump was done was in a
time zone that is -2 from GMT! So I need to restore this dump in my
server, but maintain the 00:00:00 at the hour part!

  Is there a way???

-Mensagem original-
De: Brandon Aiken [mailto:[EMAIL PROTECTED] 
Enviada em: quinta-feira, 21 de dezembro de 2006 14:00
Para: [EMAIL PROTECTED]; Postgres general mailing list
Assunto: RE: [GENERAL] RESTORING A DATABASE WITH DIFFERENT TIMEZONES

'2006-12-20 00:00:00-02' and '2006-12-19 23:00:00-03' *are* the same
time.  You *did* preserve it.  Is your application unaware of timezone?

If you want the server to behave like it's in a different time zone than
where it actually is, configure the timezone setting in postgresql.conf.

http://www.postgresql.org/docs/8.2/interactive/runtime-config-client.html#GUC-TIMEZONE


--
Brandon Aiken
CS/IT Systems Engineer

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rodrigo Sakai
Sent: Thursday, December 21, 2006 12:33 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] RESTORING A DATABASE WITH DIFFERENT TIMEZONES

Hi all,

I'm having some troubles with time zones! I have a database dump file
that have the date fields stored as '2006-12-20 00:00:00-02'! 
And I have to restore it in a database that has the time zone configured
as 'BRST' (-3 from GMT). So, when it is restored the value becomes
'2006-12-19 23:00:00-03'. OK, this is logical because of the difference
in time zones.

But I have to restore it and maintain the same value of datetime! How
can I do it?

Thanks in advance! 


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL) 8.1.4

2007-01-08 Thread Brandon Aiken
If you are suffering from this particular error, you will see an entry
in the event log.  Look for an error from Tcpip with an ID of 4226.  The
message will say "TCP/IP has reached the security limit imposed on the
number of concurrent (incomplete) TCP connect attempts.".  If you do not
see this message, you are not hitting this limit.

See:
http://www.microsoft.com/products/ee/transform.aspx?ProdName=Windows%20Operating%20System&ProdVer=5.1.2600.2180&EvtID=4226&EvtSrc=Tcpip&FileVer=5.1.2600.2180&FileName=xpsp2res.dll&EvtType=Warning&LCID=

The limit is 10 outbound half-open connections.  Typically, you will
only see this limit if you're running a server or using P2P apps.  The
other limit is a maximum of 10 connections to the Server component
(which does file and print sharing; people were using Win2k Pro as a
file and print server).  The only way to modify the limit is to manually
modify binary files.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shelby Cain
Sent: Monday, January 08, 2007 3:12 PM
To: Oisin Glynn; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database Failure on Windows XP Pro psql
(PostgreSQL) 8.1.4

I'm just throwing this out here... but the usual suspects for strange
connection issues like these would be a software firewall or antivirus
(especially AV software that has anti-worm features).

Also, Windows XP Pro isn't really viable as a server OS as Microsoft has
intentionally limited certain features of the TCP/IP stack (e.g.: low
limits for the TCP connection backlog).  In theory, if you had a flood
of incoming connections only some of them would be serviced correctly
before the backlog limit was reached.

Regards,

Shelby Cain

- Original Message 
From: Oisin Glynn <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Sent: Monday, January 8, 2007 1:33:54 PM
Subject: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL)
8.1.4

I am running postgres on Windows XP and have been for quite a while as
the database engine for our application.

On an ODBC connection the following error has started happening. The gap
in activity is normal for our application, but the connection failing is
a first. We have this application live in approx 5 locations, some for
as long as 12 months (though only 2 live are on XP).

We did enable the system to check for Windows updates automatically
recently; not sure if this could be a factor.
There is another one of our apps connecting through the C++ API which is
not seeing any loss of service. I suspect some sort of tcp-ip issue, as
I cannot connect with PgAdminIII locally on the server once the
condition occurs. I have had this 3 times now and it seems to be
happening every 2-3 hours of operation. I am not getting any eventvwr or
other Windows errors telling me I am hitting a limit or such. And when I
restarted the Postgresql service I got the following as part of my log
(see attached logs):

2007-01-05 14:38:28 LOG:  pgpipe failed to bind: 10055i
2007-01-05 14:38:28 FATAL:  could not create pipe for statistics buffer:
No error

I am attaching my postgresql.conf (perhaps I can turn on some more
logging to make things easier to spot) and the log files I have from the
first occurrence. These show my attempts to restart the postgresql
service and the errors I received.

This is a snippet of my application's log file showing the error pop up
at 14:11:07:

01/05/07 13:58:57.453 2296 DEBUG  - Call No: 10131  SELECT   SQL = 
SELECT * FROM zfunc_access_mailbox_cleanup(10131)
01/05/07 13:58:57.468 2296 DEBUG  - Call No: 10131  SELECT Complete   
Return Code = $VC1
01/05/07 13:58:57.468 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131  DISCONNECT Free 
Statement Handle
01/05/07 13:58:57.484 2296 DEBUG  - Call No: 10131DISCONNECT OK
Voicemail
01/05/07 13:58:57.484 2296 DEBUG  - BX_SVR_TRP009 Sent TRP_IVR_ODBCOK 
for call no= 10131
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Allocate DBC Handle Voicemail
01/05/07 14:11:07.734 2296 DEBUG  - Call No: 10132  CONNECT Connect 
Voicemail
01/05/07 14:11:07.750 2296 DEBUG  - Call No: 10132CONNECT SQLConnect 
failed   08001  could not connect to server: No buffer space available 
(0x2747/10055)
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
 Error Message = 165Voicemail

If there is anything I can do to generate better logging I am all ears,

Oisin






# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Com

Re: [GENERAL] Database Failure on Windows XP Pro psql (PostgreSQL)

2007-01-08 Thread Brandon Aiken
Try a 180-day Win2k3 trial to see if the issue persists.  Realistically,
though, if you can't afford the proprietary software don't develop with
it.

If it's a Windows XP bug (or "feature") then it's not anything we can
help with since PG is working correctly.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Oisin Glynn
Sent: Monday, January 08, 2007 3:54 PM
To: Shelby Cain
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database Failure on Windows XP Pro psql
(PostgreSQL)

I have exceptions in the FW for postgresql.
I know XP Pro is not a server OS but it is a $$ question (I know *nix is
cheaper but it is not an option without a major rewrite). The server is
really more like a network appliance; it is not expecting to have extra
connections all over. In fact the DB is only accessible on 127.0.0.1.

I have been checking for XP limits coming into play, but the kicker is
that there are no errors appearing (which other people are reporting
lots of with SP2).
The 3 limits I am aware of are:
a) There is a new limit on the number of outbound connections being
created per second (people using eMule and torrents etc. are having
issues). We are not creating masses of outbound requests per second (at
least AFAIK, unless something in PG or something else is, without us
noticing).
b) There is a max open TCP/IP connections limit which is not physically
enforced but may legally be enforced (according to MS).
c) There is a hard limit on NAMED_PIPE and file shares that is allegedly
10. It is unclear exactly what is being counted here, as I had
NAMED_PIPES and 5 file shares from different PCs open earlier trying to
cause a failure.

We have run netstat to check for odd-looking connections on each
occurrence but never seen any.
Are there pieces of logging I should turn on in my .conf that would be
helpful?

Oisin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Operator performance question

2007-01-09 Thread Brandon Aiken
Shouldn't these be using HAVING?

SELECT COUNT(max_persons) ...
GROUP BY NULL
HAVING max_persons >= 5 AND max_persons <= 8;

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alban Hertroys
Sent: Tuesday, January 09, 2007 11:07 AM
To: Postgres General
Subject: [GENERAL] Operator performance question

Hi all,

I need your help on a small performance problem.

I have a table of which I have to do a bunch of counts of various
conditions. The worst case scenario where I have to iterate over every
record in the table performs just a little bit too slow (800ms). That
particular query will be hit a lot (it will be on the index of our web
app).

PostgreSQL uses a sequential scan (it should IMO) - I think my
bottleneck is in the operators on the various columns.

My queries look like this:

SELECT COUNT(NULLIF(max_persons BETWEEN 5 AND 8, false)) AS "persons
5-8",
-- And other variations

COUNT(NULLIF(country_id = 74, false)) AS "LOCATION_NETHERLANDS",
-- Basically for every country in Europe

COUNT(NULLIF(specifications & '0100',
0::bit(32))) AS "washing machine",
-- And a bunch more of these; the bit mask is almost fully covered

COUNT(*) AS all
FROM table;

The plan is:
                                 QUERY PLAN
--------------------------------------------------------------------------
 Aggregate  (cost=7371.23..7371.55 rows=1 width=18) (actual
time=803.374..803.376 rows=1 loops=1)
   ->  Seq Scan on fewo_property_location  (cost=0.00..828.84 rows=41538
width=18) (actual time=0.036..147.211 rows=41492 loops=1)
 Filter: ((location_id IS NOT NULL) AND (property_state_id = 3))
 Total runtime: 804.398 ms
(4 rows)

The table definition is like:
  Column   |   Type   |  Modifiers
---+--+--
 property_id   | integer  | not null
 property_state_id | integer  | not null
 location_id   | integer  |
 min_persons   | smallint | not null
 max_persons   | smallint | not null
 specifications| bit(32)  | default (0)::bit(32)
 country_id| integer  |
Indexes:
"fewo_property_location_pkey" PRIMARY KEY, btree (property_id)
"fewo_property_location_country_idx" btree (country_id) WHERE
location_id IS NOT NULL
"fewo_property_location_country_location_idx" btree (country_id,
location_id) CLUSTER
"fewo_property_location_location_online_idx" btree (location_id)
WHERE location_id IS NOT NULL AND property_state_id = 3
"fewo_property_location_property_location_idx" btree (property_id,
location_id) WHERE location_id IS NOT NULL AND property_state_id = 3
"fewo_property_location_specifications_idx" btree (specifications)
Foreign-key constraints:
"fewo_property_location_location_id_fkey" FOREIGN KEY (location_id)
REFERENCES fewo_location(location_id) MATCH FULL
"fewo_property_location_property_state_id_fkey" FOREIGN KEY
(property_state_id) REFERENCES fewo_property_state(property_state_id)
MATCH FULL

My conclusion is that this query time is mostly limited to the somewhat
complex COUNT expressions. Is there any way to do this more efficiently?

For the record, if I constrain this query to specific countries it
performs in about 80ms (10x as fast).

The hardware is a dual Opteron64x2, 4G RAM and some kind of RAID setup
(software, don't know what type) running in a Xen host - it's our
development DB-server.
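The COUNT(NULLIF(cond, false)) idiom above works because COUNT ignores NULLs, so each expression counts only the rows where its condition is true. A sketch of the same trick using SQLite through Python (SQLite spells false as 0; table and data are invented for illustration):

```python
import sqlite3

# NULLIF(cond, 0) turns a false condition into NULL, which COUNT skips,
# so one sequential pass yields several conditional counts at once.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE props (max_persons INTEGER, country_id INTEGER)")
con.executemany("INSERT INTO props VALUES (?, ?)",
                [(4, 74), (6, 74), (8, 30), (12, 74)])
p58, nl, total = con.execute(
    "SELECT COUNT(NULLIF(max_persons BETWEEN 5 AND 8, 0)),"
    "       COUNT(NULLIF(country_id = 74, 0)),"
    "       COUNT(*) FROM props"
).fetchone()
print(p58, nl, total)  # -> 2 3 4
```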

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Version 8.2 for HP-UX

2007-01-12 Thread Brandon Aiken
AFAIK, PostgreSQL is only supplied in Win32 and Fedora/Redhat flavors.
Debian, Gentoo, and FreeBSD maintain their own binary packages
themselves.

For HP-UX, you need to build from source.
http://www.postgresql.org/docs/faqs.FAQ_HPUX.html


--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Al Balmer
Sent: Friday, January 12, 2007 11:20 AM
To: pgsql-general
Subject: [GENERAL] Version 8.2 for HP-UX

Does anyone know where to get postgreSQL 8.2 binaries or depot for
HP-UX? We have a problem because of non-conforming backslash handling,
and I understand that's been fixed in 8.2.

-- 
Al Balmer
Sun City, AZ

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Version 8.2 for HP-UX

2007-01-12 Thread Brandon Aiken
Right, but I assumed you checked with HP before coming here.  I should
have said "for PG 8.2.1 on HP-UX, you will need to build from source".

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Al Balmer [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 12, 2007 12:39 PM
To: Brandon Aiken
Subject: Re: [GENERAL] Version 8.2 for HP-UX

On Fri, 12 Jan 2007 12:23:38 -0500, "Brandon Aiken"
<[EMAIL PROTECTED]> wrote:

>AFAIK, PostgreSQL is only supplied in Win32 and Fedora/Redhat flavors.
>Debian, Gentoo, and FreeBSD maintain their own binary packages
>themselves.
>
>For HP-UX, you need to build from source.
>http://www.postgresql.org/docs/faqs.FAQ_HPUX.html
>
Actually, HP supplies a depot as part of the Internet Express package.
Unfortunately, it's back at 8.0, and I don't know when they'll update
it.

Ideally, HP will release an update, then we can tell our customers
it's HP-supported.
>
>--
>Brandon Aiken
>CS/IT Systems Engineer
>
>-Original Message-
>From: [EMAIL PROTECTED]
>[mailto:[EMAIL PROTECTED] On Behalf Of Al Balmer
>Sent: Friday, January 12, 2007 11:20 AM
>To: pgsql-general
>Subject: [GENERAL] Version 8.2 for HP-UX
>
>Does anyone know where to get postgreSQL 8.2 binaries or depot for
>HP-UX? We have a problem because of non-conforming backslash handling,
>and I understand that's been fixed in 8.2.

-- 
Al Balmer
Sun City, AZ

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] FK Constraint on index not PK

2007-01-12 Thread Brandon Aiken
Read the release notes.  7.4.8 and 7.4.11 require special considerations.  By 
all means upgrade, but it's not quite seamless.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Scott Marlowe
Sent: Friday, January 12, 2007 12:59 PM
To: Stéphane Schildknecht
Cc: pgsql general
Subject: Re: [GENERAL] FK Constraint on index not PK

On Fri, 2007-01-12 at 10:50, Stéphane Schildknecht wrote:
> Dear community members,
> 
> I'm having a quite strange behaviour while trying to drop some index.
> 
> We have some tables with two indexes on a primary key. The first one was
> automatically created by the primary constraint. The second one was
> manually created on the same column. Don't know why, but I would now
> want to suppress it.
> 
> The first index is : foo_pkey
> The second one : i_foo_pk
> The constraint on table bar is fk_bar_foo references foo(id)
> 
> But, when trying to drop the second index I get the following message :
> 
> NOTICE:  constraint fk_bar_foo on table t_foo depends on index i_foo_pk
> 
> The database server is 7.4.5 .
> 
> Having dumped database and restored it on a 8.2 server, I could drop the
> second index without any problem.
> 
> The fact is I could do that as I indeed want to migrate all databases
> from 7.4 to 8.2. But I would prefer not to recreate every index before
> dropping the unnecessary one. And duplicate indexes are surely
> unnecessary...
> 
> I have read in some thread that these troubles are known and have been
> corrected in versions > 7.4.5. But, dropping them before migrating is an
> option I'd prefer to use.

Simple.  Back up your db just in case, then upgrade in place to the
latest 7.4 (7.4.15 or so), and then you should be able to drop the
indexes in place.

patch upgrades do NOT require dump / reload, so 7.4.5 -> 7.4.15 should
only require a brief period where you shut down the db while you install
the new version.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



Re: [GENERAL] Finding bogus dates

2007-01-18 Thread Brandon Aiken
Add a date field to the table.

Run UPDATE "foo" SET "newDate" = to_date("oldDate",'MM/DD/YYYY').

Bad dates like 02/31/2006 will be converted to sane dates.  02/31/2006
--> 03/03/2006.

Now run SELECT * FROM "foo" WHERE to_char("newDate",'MM/DD/YYYY') <>
"oldDate".  If a date got changed for sanity reasons, it'll be
different.

That should get most of 'em.
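The same round-trip idea can be sketched outside the database. A minimal Python illustration (the column values are hypothetical; as described above, PostgreSQL's old to_date rolls impossible dates forward, while strptime simply rejects them):

```python
from datetime import datetime

def is_valid_mdY(s):
    """Return True if s parses as a real MM/DD/YYYY date."""
    try:
        # strptime rejects impossible dates like 02/31/2006 outright.
        datetime.strptime(s, "%m/%d/%Y")
        return True
    except ValueError:
        return False

dates = ["02/31/2006", "01/18/2007"]
bogus = [d for d in dates if not is_valid_mdY(d)]
print(bogus)  # ['02/31/2006']
```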

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Scott Ribe
Sent: Thursday, January 18, 2007 1:26 PM
To: pgsql-general postgresql.org
Subject: [GENERAL] Finding bogus dates

Suppose that I have a varchar column which contains dates, and some of
them are bogus, such as '1968-02-31'. Is it possible to specify a query
condition "cannot be cast to a valid date"? (It's legacy data, and I'm
trying to clean it up before importing into the final schema.) There
probably aren't that many, and I can just keep hitting pg errors one row
at a time until I work through this, but...

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



Re: [GENERAL] Finding bogus dates

2007-01-18 Thread Brandon Aiken
Actually, now that I think about it a second you can find them really
easy just by doing:

SELECT * FROM "foo" 
WHERE to_char(to_date("oldDate",'MM/DD/YYYY'),'MM/DD/YYYY') <>
"oldDate";

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Scott Ribe [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 18, 2007 3:48 PM
To: Brandon Aiken; pgsql-general postgresql.org
Subject: Re: [GENERAL] Finding bogus dates

I didn't know to_date would do that. It's better anyway. I just continued
with the "fix and try again" approach, and there are only 2 bad dates out
of 94,000+, so I don't have a huge problem here. I can try to do some
research and find the correct date, but failing that, the to_date
approximation is probably no worse than using null.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Trouble creating database with proper encoding

2007-01-22 Thread Brandon Aiken
Strictly speaking, Unicode is a character set with a whole family of
encodings.  "Unicode" generally implies a multi-byte character encoding.
UTF-8 is the most common encoding of Unicode at the moment.  UTF-16 is
also popular, but very few systems need that many characters or wish to
devote that many bytes to each character.

As far as PostgreSQL is concerned, "Unicode" is an alias for "UTF8",
which is UTF-8 encoding.

See:
http://www.postgresql.org/docs/8.2/interactive/multibyte.html#MULTIBYTE-CHARSET-SUPPORTED


--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rob Tanner
Sent: Monday, January 22, 2007 5:17 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Trouble creating database with proper encoding

Hi,

This is my first venture into PostgreSQL.

I built and installed PostgreSQL 8.2.1 as part of a Xythos
installation.  I added a user called xythos and now I'm trying to add
the initial databases that the product requires.  From the command line,
I executed the commands:

createdb -U xythos -E UNICODE XythosDocumentStoreDB
createdb -U xythos -E UNICODE XythosGlobalDB

When I look at what I've done with psql -l, I get

  List of databases
 Name  |  Owner   | Encoding
---+--+--
 XythosDocumentStoreDB | xythos   | UTF8
 XythosGlobalDB| xythos   | UTF8


I dropped the databases and recreated them with unicode (lower case) but
got the same results.

I didn't see any special configure/compile options for character
encoding, so what am I missing?

Thanks.


-- 

Rob Tanner
UNIX Services Manager
Linfield College, McMinnville OR




** LEGAL DISCLAIMER **
Statements made in this email may or may not reflect the views and opinions of 
Wineman Technology, Inc.
This E-mail message and any attachments may contain legally privileged, 
confidential or proprietary information. If you are not the intended 
recipient(s), or the employee or agent responsible for delivery of this message 
to the intended recipient(s), you are hereby notified that any dissemination, 
distribution or copying of this E-mail message is strictly prohibited. If you 
have received this message in error, please immediately notify the sender and 
delete this E-mail message from your computer.

QS Disclaimer Demo. Copyright (C) Pa-software.
Visit www.pa-software.com for more information.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] too many trigger records found for relation "item" -

2007-01-23 Thread Brandon Aiken
Nevertheless, the database should be able to handle any combination of
syntactically correct SQL statements without throwing errors, while
maintaining the database in a consistent state.  If what you're saying
is right, the error thrown here is not a user configuration error, but
an RDBMS implementation error.  

A development database is still obviously an important role for
PostgreSQL to function in (as far as PostgreSQL's dev team is concerned,
a development database *is* a "production" use, since one of *their*
end-users experiences the problem) and it needs to be able to handle
cases such as this with no problems.  And no matter how unlikely it is
to be in a production environment, *someone* will try to modify their
schema dynamically like this.

I'm wondering if there is a race condition in CREATE or DROP with
respect to triggers and foreign keys.  If that's the case, it's going to
affect someone eventually.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Csaba Nagy
Sent: Tuesday, January 23, 2007 4:42 AM
To: Lenorovitz, Joel
Cc: Postgres general mailing list
Subject: Re: [GENERAL] too many trigger records found for relation
"item" -

On Mon, 2007-01-22 at 20:56, Lenorovitz, Joel wrote:
[snip]
> ERROR:  too many trigger records found for relation "item"

I've got this error on a development database where we were
continuously creating new child tables referencing the same parent
table. The responsible code is in src/backend/commands/trigger.c, and I
think it only happens if you manage to create/drop a new trigger (which
could also be an FK trigger created by a new foreign key referencing
that table, as in our case) exactly between the time that code gets the
count of the triggers and the time it processes them. In any case it
should be a transient error, i.e. it should only happen when you heavily
create/drop triggers... our integration test case was actually heavily
creating new child tables, so that's how it happened for us.

In a production scenario I won't be creating new triggers all the time
in parallel with other heavy activities, so it doesn't bother me.

Cheers,
Csaba.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match







Re: [GENERAL] Postgresql.conf

2007-01-23 Thread Brandon Aiken
Out of curiosity, has the COUNT(*) with no WHERE clause slowness been
fixed in 8.x?  Or is it still an issue of "there's no solution that
won't harm aggregates with WHERE clauses"?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of A. Kretschmer
Sent: Tuesday, January 23, 2007 6:17 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgresql.conf

am  Tue, dem 23.01.2007, um 12:11:40 +0100 mailte Laurent Manchon
folgendes:
> Hi,
> 
> I have a slow response of my PostgreSQL database 7.4 using this query
> below on a table with 80 rows:
> 
> select count(*) from tbl;

How often do you want to ask the very same question?
You have enough answers, read this!


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match







Re: [GENERAL] Any Plans for cross database queries on the same server?

2007-01-30 Thread Brandon Aiken
I always assumed the general argument is if you need to query different
databases on the same server with the same application, they ought not
to be separate databases because they're clearly related data.

It's kinda like "why isn't there a way to do an exactly-one to
exactly-one relationship between tables?".  Well, because if one A
always means one B and one B always means one A, shouldn't they be in
the same table already?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Guido Neitzer
Sent: Tuesday, January 30, 2007 2:22 PM
To: Postgresql General
Subject: Re: [GENERAL] Any Plans for cross database queries on the same
server?

Am 30.01.2007 um 12:11 schrieb Tony Caduto:

> Why?  Seems to me if it was discussed that much it must be a very  
> sought after feature.
> How come it's not on the TO Do list for the future at least?
> Is it because of some limitation of the core engine or something?

http://www.postgresql.org/docs/faqs.FAQ.html#item4.17

I guess nobody has a real interest in that because, if you really
need it, there are workarounds ... E.g. I do a lot of cross-database
queries all day with my apps. It's just handled by the app server ...

cug

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq







Re: [GENERAL] Ordering problem with varchar (DESC)

2007-01-31 Thread Brandon Aiken
As others have said, VARCHAR is the incorrect data type to be using
here.  You should either be using INTERVAL or TIMESTAMP depending on
what you want.  You can even combine date and time into a single
TIMESTAMP field.  Only use VARCHAR when no other data type will do.

"SELECT * from t1;" is not an ordered query and any consistency of order
is coincidental (typically it comes out in the same order it went in,
but there's no guarantee of that).

Try "SELECT * from t1 ORDER BY date, time;", and I suspect you will get:
date (date type)  time (varchar)  data
2007-01-17         8h40           d1
2007-01-30        12h00           d3
2007-01-30        13h45           d4
2007-01-30        17h20           d5
2007-01-30         9h30           d2

To use your current schema, you need to zero-fill your hours, so 9h30
needs to be 09h30 and so forth.
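The zero-fill point can be sketched quickly (a Python illustration with the thread's sample values; as plain strings, '9h30' sorts after '17h20' because the character '9' is greater than '1'):

```python
# The raw varchar values sort lexicographically, so '8h40' lands
# after '17h20' ('8' > '1' as a character):
times = ["8h40", "9h30", "12h00", "13h45", "17h20"]
lex = sorted(times)
print(lex)  # ['12h00', '13h45', '17h20', '8h40', '9h30']

# Zero-filling the hour restores chronological order:
padded = sorted(t.zfill(5) for t in times)
print(padded)  # ['08h40', '09h30', '12h00', '13h45', '17h20']
```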


--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alexandre
Leclerc
Sent: Wednesday, January 31, 2007 10:46 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Ordering problem with varchar (DESC)

Hi all,

We have a column (varchar) that has plain text time and it is indexed.
When I do a query with the index, all the data is in the right order,
but when I use ORDER BY .. DESC, the order is messed up. Example:

By index 1: (date, time, data)
SELECT * from t1;
date (date type)  time (varchar)  data
2007-01-17         8h40           d1
2007-01-30         9h30           d2
2007-01-30        12h00           d3
2007-01-30        13h45           d4
2007-01-30        17h20           d5

SELECT * from t1 ORDER BY date, time DESC;
date (date type)  time (varchar)  data
2007-01-30         9h30           d2
2007-01-30        17h20           d5
2007-01-30        13h45           d4
2007-01-30        12h00           d3
2007-01-17         8h40           d1

I don't know why; it is as if the 'time' varchar were trimmed and then
used for the ordering.

How can I fix that so that the result is exactly like the first one but
perfectly reversed in its order?

Best regards.

-- 
Alexandre Leclerc

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/



Re: [GENERAL] PostgreSQL/FireBird

2007-02-05 Thread Brandon Aiken
FireBird has a different target audience than PostgreSQL, which is why
it's not compared to PostgreSQL as often as MySQL is.

FireBird is a direct descendant of Borland InterBase 6.  Consequently,
much like Postgres inherited a lot of Ingres's weirdness (most of which
has since been weeded out or superseded by standard SQL compliance),
FireBird is still very much InterBase dialect-compliant.  This is also
why it still uses a modified Mozilla Public License.  I know they've
achieved ANSI SQL-92 compliance, but I don't know how fully compliant
beyond that they are.  PostgreSQL is mostly working on SQL:2003
compliance AFAICT.  Both use MVCC.

Interbase was also primarily used for single instance and embedded
applications, so it's not intended to scale the same way PostgreSQL is.


Firebird's design foci are a very small memory footprint, ANSI SQL-92
compliance, multiple dialects that support aging systems, and very low
administrative requirements.  It lacks features and scalability compared
to PG, but does what it does very well.

PostgreSQL's design foci are features and robustness.  It's designed to
compete with Oracle, DB2, MS SQL, and other top-end enterprise
databases.  It has a much larger memory footprint and is much more
complicated to administer compared to FB, but is much more configurable
and customizable.

Bottom line:  PostgreSQL is more mature because it's several years
older.  Firebird is intended for different applications.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of RPK
Sent: Thursday, February 01, 2007 10:32 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] PostgreSQL/FireBird


How is FireBird rated when compared with PostgreSQL? 
-- 
View this message in context:
http://www.nabble.com/PostgreSQL-FireBird-tf3158857.html#a8761237
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq






Re: [GENERAL] PostgreSQL/FireBird

2007-02-06 Thread Brandon Aiken
Borland simply chose a modified MPL to release their InterBase 6 under.
They have since released InterBase 6 under a commercial license, and have
also released InterBase 7 under a commercial license.  MPL is a fairly
common license.  Sun's CDDL is a modified MPL, for example.  The MPL is
somewhere between a BSD license and the GPL in terms of what you can do
with it.  Unlike BSD, all code changes must stay under the MPL.  Unlike
the GPL, MPL code can be combined with proprietary files.  MySQL's
license is a lot more complicated than the MPL.

 

The FSF says the MPL is not compatible with the GPL, but, well, the FSF
generally finds *all* non-GPL licenses incompatible with the GPL (BSD,
MPL, Apache, etc.).  The only truly GPL-compatible license I know of is
LGPL (and there have been arguments about that).  That's the problem
with the GPL.  You're not agreeing to open source your code as much as
you're agreeing to abide by the FSF's political beliefs.  Political
lock-in for developers in lieu of vendor lock-in for end-users.

 

Compared to SQLite, Firebird has many more features.  Firebird *can*
function as a network server and runs as a separate process instead of a
C library that gets compiled into your binary.  If you want multiple apps
to access the same data or you want to use ODBC, Firebird can do that
without the kitchen sink approach of PostgreSQL.

 

Compared to JetSQL - which I assume is what Access and Exchange use -
Firebird is cross-platform.  I've never used it, but I've also never
been impressed with the performance of anything that has used JetSQL
(Exchange especially).

 

--

Brandon Aiken

CS/IT Systems Engineer



From: Justin Dearing [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 05, 2007 6:29 PM
To: Brandon Aiken
Subject: Re: [GENERAL] PostgreSQL/FireBird

 

 

On 2/5/07, Brandon Aiken <[EMAIL PROTECTED]> wrote:

FireBird is a direct descendant of Borland InterBase 6.  Consequently,
much like Postgres inherited a lot of Ingres's weirdness (most of which
has since been weeded out or superseded by standard SQL compliance),
FireBird is still very much InterBase dialect-compliant.  This is also
why it still uses a modified Mozilla Public License.  I know they've
achieved ANSI SQL-92 compliance, but I don't know how fully compliant
beyond that they are.  PostgreSQL is mostly working on SQL:2003
compliance AFAICT.  Both use MVCC.


What does the MPL have to do with Borland InterBase descent? Borland
could have chosen any license they wished. Quite frankly, I'm quite
ignorant of the MPL's terms, so please enlighten me. 

 

Interbase was also primarily used for single instance and embedded
applications, so it's not intended to scale the same way PostgreSQL is.


 So I guess one should ask how it compares to SQLite and JetSQL, on the
appropriate lists of course.

 

Firebird's design foci are a very small memory footprint, ANSI SQL-92
compliance, multiple dialects that support aging systems, and very low
administrative requirements.  It lacks features and scalability compared
to PG, but does what it does very well.

 

Bottom line:  PostgreSQL is more mature because it's several years
older.  Firebird is intended for different applications.

 

If FireBird is descended from Ingres, aren't they both the same age?






Re: [GENERAL] Postgres SQL Syntax

2007-02-06 Thread Brandon Aiken
This is one instance where I think PGAdmin would really help.  You know
what the schema needs to be, yes?  Create it with PGAdmin and you can
see what some well-formatted PG code looks like.

The majority of the differences in syntax between MySQL and PG are
*generally* MySQL's fault.  MySQL has more unique extensions and
shortcuts for SQL than any other RDBMS I know of.

Keep in mind, though, that no database is 100% ANSI SQL only.  Indeed, I
don't know of any database that is 100% ANSI SQL compliant.  The
relational model does not translate into computer data very well, and
the requirements to implement it contradict some pretty basic computer
restrictions.  At several points along the way, the demands of the
theoretical relational model break down and practical applications are
used instead.  The limitations of modern computers make a true RDB as
envisioned by Boyce and Codd a virtual impossibility (the most obvious
reason being that a computer database can only store computerized data).

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim C.
Sent: Friday, February 02, 2007 11:37 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgres SQL Syntax

> CREATE TABLE "credits" (
>   "person" integer NOT NULL default '0',
>   "chanid" int NOT NULL default '0',
>   "starttime" timestamp NOT NULL default '1970-01-01 00:00:00+00',
>   "role" set('actor','director','producer','executive_producer','writer',
> 'guest_star','host','adapter','presenter','commentator','guest') NOT NULL
> default ''
> --  CONSTRAINT "chanid_constraint0" UNIQUE
("chanid","starttime","person","role")
> --  UNIQUE KEY "chanid" ("chanid","starttime","person","role"),
> --  KEY "person" ("person","role")
> );

I'm doing this table by table, line by line.  Each table, I learn
something new about the differences between MySQL and Postgres, I
mentally catalog it and I can always look it up in my own code next time
for examples.

I've a tool that is providing some help but sometimes it chokes. It
choked on this one for example.

I could use some clues as to how to go about converting this MySQL
implementation of roles to Postgres.

So far I've been through 5 tables and it is getting easier but I'm still
getting stuck now and then.


Jim C.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match






Re: [GENERAL] how to store whole XML file in database

2007-02-12 Thread Brandon Aiken
You'll have to escape any quotation marks or your SQL will not parse
your strings correctly.
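One way to sidestep manual escaping entirely is parameter binding. A sketch using Python's stdlib sqlite3 driver purely for illustration (the table and XML string are made up); the same idea applies to PostgreSQL client libraries:

```python
import sqlite3

# Parameter binding hands the string to the driver unparsed, so the
# embedded single and double quotes never reach the SQL parser.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")
xml = '<note kind="demo">it\'s "quoted"</note>'
conn.execute("INSERT INTO docs (body) VALUES (?)", (xml,))
stored = conn.execute("SELECT body FROM docs").fetchone()[0]
print(stored == xml)  # True
```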

 

--

Brandon Aiken

CS/IT Systems Engineer



From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of deepak pal
Sent: Monday, February 12, 2007 4:07 AM
To: PostgreSQL - General
Subject: [GENERAL] how to store whole XML file in database

 

Hi,
   I have to insert a whole XML file into the database; there is a text
field for that, but it shows a parse error where there are attributes.
Please help me out.






Re: [GENERAL] postgresql vs mysql

2007-02-21 Thread Brandon Aiken
IMX, the only things going for MySQL are:
1. It's fast.
2. It's easy to install and administer.
3. It's cheap and cross-platform.
4. It's popular.

The problem is:
1. It's fast because fsync is off by default, and MyISAM is not
transactional and doesn't support basic features like foreign keys.
That basically means it's fast because it ignores Boyce and Codd.  Guess
what?  The same can be said of flat files.
2. Most other RDBMSs have seen the advantage and done this now, too.
Installing an RDBMS is no longer more difficult than installing the rest
of the system.
3. MySQL is no longer the only thing available.  PostgreSQL is on
Windows now, MS SQL 2005 Express, SQLite, Oracle Express, Firebird, etc.
4. So is Windows.

MySQL isn't quite as bad as PHP for internal inconsistencies and
developer aggravations, but it comes close enough for me to want to
avoid them both.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of gustavo
halperin
Sent: Tuesday, February 20, 2007 4:26 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] postgresql vs mysql

  Hello

 I have a friend who asked me why PostgreSQL is better than MySQL.
 I personally prefer PostgreSQL, but she needs to give 3 or 4 strong
reasons for that at her work. I mean, not too technical reasons. Can you
help me, please?

  Thank you,
   Gustavo


-- 
   ||\ // \
   | \\   //   |  
I'm thinking.   \  \\  l\\l_ //|
_  _ |  \\/ `/  `.||
  /~\\   \//~\   | Y |   |   ||  Y |
  |  \\   \  //  |   |  \|   |   |\ /  |
  [   ||||   ]   \   |  o|o  | >  /
 ] Y  ||||  Y [   \___\_--_ /_/__/
 |  \_|l,--.l|_/  |   /.-\() /--.\
 |   >'  `<   |   `--(__)'
 \  (/~`----'~\)  /   U// U / \
  `-_>-__-<_-'/ \  / /|
  /(_#(__)#_)\   ( .) / / ]
  \___/__\___/`.`' /   [
   /__`--'__\  |`-'|
/\(__,>-~~ __) |   |__
 /\//\\(  `--~~ ) _l   |--:.
 '\/  <^\  /^>   |  `   (  <   \\
  _\ >-__-< /_ ,-\  ,-~~->. \   `:.___,/
 (___\/___)   (/()`---'


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster






Re: [GENERAL] postgresql vs mysql

2007-02-22 Thread Brandon Aiken
If you can remove NULLs without breaking OUTER JOIN, more power to you.

In the vast majority of cases, all fields in a table should have a NOT
NULL constraint.  Storing a NULL value makes little sense, since you're
storing something you don't know.  If you don't know something, why are
you trying to record it?  From a strict relational sense, the existence
of NULL values in your fields indicates that your primary keys are not
truly candidate keys for all your fields.  That means your database
isn't [BCNF] normalized.
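The OUTER JOIN point is easy to demonstrate. A sqlite3 sketch with made-up tables (not anyone's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE phone  (person_id INTEGER NOT NULL, number TEXT NOT NULL);
    INSERT INTO person VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO phone  VALUES (1, '555-0100');
""")
# Every column is NOT NULL, yet the outer join manufactures a NULL
# for the unmatched row (Bob has no phone record):
rows = conn.execute("""
    SELECT p.name, ph.number
    FROM person p LEFT OUTER JOIN phone ph ON ph.person_id = p.id
    ORDER BY p.name
""").fetchall()
print(rows)  # [('Ann', '555-0100'), ('Bob', None)]
```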

Arguments about de-normalization generally result from a basic
limitation of nearly all RDBMSs: they do not allow you to optimize
how data is physically stored on disk.  That is, a generalized SQL
database like Oracle, MySQL, PostgreSQL, etc. sacrifices the ability to
control how data is physically stored in order to be a generalized
database that can store generic domains in the form of the most common
datatypes that computer programs use.  

This is a basic limitation of using a generalized database engine, and
if your application demands higher performance than you can get with a
general RDBMS, you'll have to develop your own task-specific RDBMS or
modify your schema so that the problem can be mitigated.  Schema
de-normalization is a way of purposefully degrading the normal quality
of your schema in order to make up for shortcomings of the database
engine and limitations of computerized data storage.  As long as you
understand that de-normalization is a practical workaround and never a
wise logical design choice from the get-go, you shouldn't feel too bad
about doing it.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tim Tassonis
Sent: Thursday, February 22, 2007 10:31 AM
To: Rich Shepard
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgresql vs mysql

Rich Shepard wrote:
> On Thu, 22 Feb 2007, Tim Tassonis wrote:
> 
>> I do still think it is a bit of an oddity, the concept of the null
>> column. From my experience, it creates more problems than it actually
>> solves and generally forces you to code more rather than less in order
>> to achieve your goals.
> 
> Tim,
> 
>   Long ago, a lot of database applications used 99, or 999, or -1 to
> indicate an unknown value. However, those don't fit well with a textual
> field and they will certainly skew results if used in arithmetic
> calculations in numeric fields.

I remember, my first database to write stuff for was an IBM IMS 
hierarchical/network one.

> 
>   The concept of NULL representing an unknown value, and therefore one
> that cannot be compared with any other value including other NULLs, is
> no different from the concept of zero, which was not in mathematics for
> the longest time until some insightful Arab mathematician saw the need
> for a representation of 'nothing' in arithmetic and higher mathematics.
> 
>   There was probably resistance to that idea, too, as folks tried to
> wrap their minds around the idea that 'nothing' could be validly
> represented by a symbol and it was actually necessary to advance beyond
> what the Greeks and Romans -- and everyone else -- could do. Now, one
> would be thought a bit strange to question the validity of zero.

That's one point for me, then! NULL is exactly _not_ the equivalent of
the number 0, but the strange symbol you mention: one that has to be
treated specially and does not allow normal calculation the way '0' does
in mathematics. I don't know how many times I had to write a query that
ends with:

- or column is null
- and column is not null

exactly because it is a special symbol. In mathematics, the only special
case for zero that springs to my mind is the division of something by
zero (I'm by no means a mathematician).
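For example, the pattern described above is easy to reproduce (Python's sqlite3 is used here as a stand-in for illustration; the three-valued logic is the same in PostgreSQL and MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item (name TEXT, price REAL)")
con.executemany("INSERT INTO item VALUES (?, ?)",
                [("a", 10.0), ("b", 5.0), ("c", None)])

# Intending "everything not priced at 10" silently drops the NULL row...
naive = con.execute(
    "SELECT name FROM item WHERE price <> 10 ORDER BY name").fetchall()
# ...so the query has to call out the special symbol explicitly.
fixed = con.execute(
    "SELECT name FROM item WHERE price <> 10 OR price IS NULL "
    "ORDER BY name").fetchall()
print(naive, fixed)  # [('b',)] [('b',), ('c',)]
```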

As a completely irrelevant sidenote to the discussion, I'm Greek and not
Arabic, but I certainly do accept the superiority of the Arabic notation.

> 
>   NULL solves as many intransigent problems with digital data storage
> and manipulation in databases as zero did in the realm of counting.

As I said, I don't deny it solves some problems (that could be solved in
a different way, too), but in my opinion, it creates more (that can also
be solved, as the above examples show).

Tim


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/



** LEGAL DISCLAIMER **
Statements made in this e-mail may or may not reflect the views and 
opinions of Wineman Technology, Inc. or its employees.

This e-mail message and any attachments may contain legally privileged, 
confidential or proprietary information. If you are not the intended 
recipient(s), or the employee or agent responsible for delivery of 
this message to the intended recipient(s), you are hereby notified 
that any dissemination, distribution or copying of this e-mail 
message is strictly prohibited. If you have received this message in 
error, please immediately notify the sender and delete this e-mail 
message from your computer.

Re: [GENERAL] postgresql vs mysql

2007-02-22 Thread Brandon Aiken
Digg and Slashdot use MySQL databases, so clearly they *can* be made to
support a high-load, high-performance, limited-write style web
application.  

You might remember a few months back when Slashdot had to turn off
threaded replies because the parent-child field in the schema was still
a 3-byte unsigned MEDIUMINT rather than a wider integer type, and they
reached the maximum value of the field (roughly 16.7 million).  Obviously, I have no
knowledge of the server configuration, hardware configuration, or
schema, but in-the-wild examples of high performance MySQL installations
are trivial to find (as are PostgreSQL installations such as the .org
DNS TLD root).
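For reference, the unsigned ceilings of MySQL's integer column widths; the roughly 16.7 million figure corresponds to a 3-byte column:

```python
# Unsigned maxima for MySQL's integer column widths (2^bits - 1).
widths = {"TINYINT": 8, "SMALLINT": 16, "MEDIUMINT": 24,
          "INT": 32, "BIGINT": 64}
maxes = {name: 2**bits - 1 for name, bits in widths.items()}
for name, m in maxes.items():
    print(f"{name:9} unsigned max = {m:,}")
# MEDIUMINT's ceiling is 16,777,215 -- the ~16.7 million figure.
```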

I'd like to see a tuned MySQL vs a similarly tuned PostgreSQL system
(that is, fsync in the same state and with the same level of ACID
compliance) subject to a battery of test schema types (OLTP, OLAP,
etc.).

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Jim Nasby [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 22, 2007 6:28 PM
To: Brandon Aiken
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgresql vs mysql

On Feb 21, 2007, at 2:23 PM, Brandon Aiken wrote:
> IMX, the only things going for MySQL are:
> 1. It's fast.

That depends greatly on what you're doing with it. Generally, as soon  
as you start throwing a multi-user workload at it, MySQL stops  
scaling. http://tweakers.net recently did a study on that.
--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)






---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postgresql vs mysql

2007-02-23 Thread Brandon Aiken
That's why you make a table for every device or every measurement, and
then use a view to consolidate it.  With updatable views, there's no
excuse not to.
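A sketch of that table-per-measurement layout with a consolidating view (Python's sqlite3 is used for illustration; SQLite views are read-only without triggers, so the updatable-view part applies to PostgreSQL, not this sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- One narrow table per instrument: no nullable measurement columns.
CREATE TABLE air_temp (ts TEXT NOT NULL, value REAL NOT NULL);
CREATE TABLE humidity (ts TEXT NOT NULL, value REAL NOT NULL);
INSERT INTO air_temp VALUES ('2007-02-23T13:00', 21.5);
INSERT INTO humidity VALUES ('2007-02-23T13:00', 0.41);

-- A view consolidates the instruments back into one logical relation.
CREATE VIEW readings AS
    SELECT ts, 'air_temp' AS measurement, value FROM air_temp
    UNION ALL
    SELECT ts, 'humidity', value FROM humidity;
""")
consolidated = con.execute(
    "SELECT ts, measurement, value FROM readings ORDER BY measurement"
).fetchall()
print(consolidated)
```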

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Steve Crawford
Sent: Friday, February 23, 2007 1:04 PM
To: Mark Walker
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgresql vs mysql

Mark Walker wrote:
> I'm not sure what you're trying to do but, it appears that you
database
> design is incorrect.  What you need is something like
> 
> CREATE TABLE temp_readings
> (
>  _date Date,
>  temperature double,
>  source varchar(20),
> )
> 
> No reading, no record.  Are you suggesting that you would have a
weekly
> set of records for each row?
> 
> CREATE TABLE temp_readings
> (
>  weekstart date,
>  sun double,
>mon double,
> tues, double
> etc
> )
> 
> Not such a great way to do it.

Ummm, I'm not trying to make a temperature database. I was responding to
the previous poster with an extremely simple example of usefulness of
the _concept_ of "null".  I'm afraid I hadn't considered the possibility
that it would be mistaken as an example of an actual table.

But since you bring it up, simply omitting rows isn't necessarily an
option. A common scenario for weather observation is to take regular
snapshots or a bunch of measurements (air-temperature, humidity,
wind-speed, soil-temperature, leaf-wetness, UV radiation, etc.) which
can easily be represented in a table with a timestamp and a column for
each of the measurements. In a modular weather station where a specific
instrument can be out of service, one or more of those measurements
could be missing (null) for a period of time while the remaining
measurements are still being inserted.
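For illustration, the wide-table layout described above, where aggregates naturally skip the NULLs left by an out-of-service instrument (sketched with Python's sqlite3 as a stand-in):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE obs (
    ts TEXT PRIMARY KEY, air_temp REAL, humidity REAL)""")
con.executemany("INSERT INTO obs VALUES (?, ?, ?)", [
    ("13:00", 20.0, 0.40),
    ("13:10", None, 0.42),  # thermometer out of service; humidity logged
    ("13:20", 22.0, 0.44),
])

# AVG and COUNT skip NULLs, so the missing reading doesn't skew results.
avg_t, n_t = con.execute(
    "SELECT AVG(air_temp), COUNT(air_temp) FROM obs").fetchone()
print(avg_t, n_t)  # 21.0 2
```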

Cheers,
Steve


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Difference between UNIQUE constraint vs index

2007-02-28 Thread Brandon Aiken
SQLite, MySQL, and MS Access each use indexes for unique constraints.

Doesn't the SQL spec specify that CREATE INDEX can be used to create
UNIQUE indexes?  Are there any real systems that don't support indexes
but that support unique?  It seems silly, since the code for a primary
key is a superset of what's needed for unique, so I would expect only
legacy systems to support non-indexed uniques.  Any newer DBMS would
implement primary keys and then steal the code for uniques.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Wednesday, February 28, 2007 1:17 AM
To: Jim C. Nasby
Cc: John Jawed; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Difference between UNIQUE constraint vs index 

"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> In some databases if you know that an index just happens to be unique
> you might gain some query performance by defining the index as unique,
> but I don't think the PostgreSQL planner is that smart.

Actually, the planner only pays attention to whether indexes are unique;
the notion of a unique constraint is outside its bounds.  In PG a unique
constraint is implemented by creating a unique index, and so there is
really not any interesting difference.
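This is easy to demonstrate with any engine that exposes CREATE UNIQUE INDEX (Python's sqlite3 shown purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE u (email TEXT)")
# Enforce uniqueness the way most engines do: via a unique index.
con.execute("CREATE UNIQUE INDEX u_email ON u (email)")
con.execute("INSERT INTO u VALUES ('a@example.com')")
try:
    con.execute("INSERT INTO u VALUES ('a@example.com')")
    rejected = False
except sqlite3.IntegrityError:
    # The duplicate probes the index and is rejected without a table scan.
    rejected = True
print("duplicate rejected:", rejected)  # duplicate rejected: True
```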

I would imagine that other DBMSes also enforce uniqueness by means of
indexes, because it'd be awful darn expensive to enforce the constraint
without one; but I'm only guessing here, not having looked.  Can anyone
point to a real system that enforces unique constraints without an
underlying index?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Difference between UNIQUE constraint vs index

2007-02-28 Thread Brandon Aiken
Problem number 6,534 with implementing an abstract concept such as an
RDB on a digital computer with an electro-magno-mechanical storage
system.

:p

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 28, 2007 11:03 AM
To: Brandon Aiken
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Difference between UNIQUE constraint vs index 

"Brandon Aiken" <[EMAIL PROTECTED]> writes:
> SQLite, MySQL, and MS Access each use indexes for unique constraints.
> Doesn't the SQL spec specify that CREATE INDEX can be used to create
> UNIQUE indexes?

No, there is no such command in the SQL spec.  In fact the concept of an
index does not appear anywhere in the spec ... it's an implementation
detail.

regards, tom lane




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Anyone know a good opensource CRM that actually installs with Posgtres?

2007-03-09 Thread Brandon Aiken
Why is running on PG so important?  Why not look for the best CRM
application for your users' needs?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bradley Kieser
Sent: Thursday, March 08, 2007 8:22 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Anyone know a good opensource CRM that actually
installs with Posgtres?

I hope that someone has cracked this one because I have run into a brick
wall the entire week and after 3 all-nighters with bad installations, I
would appreciate hearing from others!

I am looking for a decent OpenSource CRM system that will run with 
Postgres. SugarCRM seems to be the most popular but it's MySQL-centric 
and its opensource parts are very restricted.

vTiger is also mySQL-centric.

I thought that I had a corker of a system with "centricCRM" but when it
came to actually installing it, I am 48 hours down and hacking through
screen after screen of installation errors. Basically, it relies way too
much on ant and Java tools. Nothing against Java, but my experience with
ant used for installing PG schemas is a dismal track record of error and
frustration. Centric CRM is no exception. Frankly, it just doesn't work,
and after trying to hack out the ant into a PG script I have decided to
give it up as a bad job.

XRMS promises to run on PG but... it doesn't. The core system is fine,
but useless without the plugins. The plugins are MySQL-specific again; I
spent several all-nighters previously hacking through installation
screens attempting to convert MySQL to PG, making software patches...
you get the picture.

XLSuite looks very promising. Awesome interface, looks great... only 
it's just not ready yet. It is a year away from being at full PG 
production level.

Compiere doesn't support PG.

OpenTAPS: the demo won't even work. And it's US-centric whereas we are
in the UK. A pity that it's so very much tied to the US, as it could be
very good.

I have tried numerous other CRMs but all the same - either don't run on 
PG, claim to but in reality don't or are simply pre-Alpha and not ready 
for production use.

So if anyone has actually cracked this, please let me know! I really 
need a good CRM.

It has to be OpenSource, not just out of principle, but because we need
to integrate it into an existing business with established in-house
software, so we need to be able to customise the code.


Thanks,

Brad

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Lifecycle of PostgreSQL releases

2007-03-15 Thread Brandon Aiken
If they have a support contract for, say, RHEL, why migrate to something
that support contract doesn't cover?  Those had better be some very
important features or some very critical bug fixes, the latter of which
are very likely to get backported if they're versions covered by a
support contract.

The upgrade question is "why?" not "why not?".

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Thursday, March 15, 2007 2:00 PM
To: Joshua D. Drake
Cc: Erik Jones; CAJ CAJ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Lifecycle of PostgreSQL releases 

"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> 1. More people will run 8.3 than 8.2. Why? Because 8.3 will be in the
> wild as current stable longer than 8.2.

Oh, gimme a break, Josh.  A year or more from now that argument would be
relevant, but unless you are going to counsel your customers not to
update till mid-2008, it's completely irrelevant to whether it makes
sense to update now.  If you *are* going to tell them to wait until
8.3.4 or so (which I can see an argument for, if you don't like being
an early adopter), won't you then be in exactly the same position that
"8.4 is just around the corner"?

Your other four points are mere rehashings of that one.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PgSql on Vista?

2007-03-15 Thread Brandon Aiken
It was clear to me after 20 minutes of actually trying to use the OS that UAC 
was a gimmick designed to supply plausible deniability for the fact that 
Windows XP suffered so many problems with malware.  They can simply ask "were 
you using UAC?" every time someone complains that their box got infected with 
some rootkit of some sort instead of actually having to secure their OS 
intelligently.  They know darn well that everyone who uses the OS will turn it 
off.  The silly thing is the boy-who-cried-wolf and nothing more.
 
Frankly, I like Windows Server 2003's default of not allowing you to execute 
any program you got from elsewhere without explicitly unblocking it.  It works 
just as well, IMX, although the error message you get when you forget about it 
is useless.
 
B Aiken



From: [EMAIL PROTECTED] on behalf of Dave Page
Sent: Thu 3/15/2007 5:24 PM
To: Paul Lambert
Cc: Arkan; pgsql-general@postgresql.org
Subject: Re: [GENERAL] PgSql on Vista?



Paul Lambert wrote:

> After install completes you can turn it back on... if you want -
> personally I leave it off, it's an incredibly annoying "feature".

Doesn't the security center keep popping up to point out that it's
turned off?

Regards Dave

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq






Re: [GENERAL] PgSql on Vista?

2007-03-16 Thread Brandon Aiken
Actually the only reason we have an email disclaimer is for CSIA
compliance.  We know they have a highly dubious legal standing, but,
hey, it's what the auditors require.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: Konrad Neuwirth [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 16, 2007 6:46 AM
To: Brandon Aiken
Subject: Re: [GENERAL] PgSql on Vista?

I find it ironic that this -- to my mind, very on-the-spot -- comment
comes with one of those mostly useless signatures that are there for ...
plausible deniability.

:-)
 Konrad

On 3/16/07, Brandon Aiken <[EMAIL PROTECTED]> wrote:
>
>
>
> It was clear to me after 20 minutes of actually trying to use the OS
that
> UAC was a gimmick designed to supply plausible deniability for the
fact that
> Windows XP suffered so many problems with malware.  They can simply
ask
> "were you using UAC?" every time someone complains that their box got
> infected with some rootkit of some sort instead of actually having to
secure
> their OS intelligently.  They know darn well that everyone who uses
the OS
> will turn it off.  The silly thing is the boy-who-cried-wolf and
nothing
> more.
>
> Frankly, I like Windows Server 2003's default of not allowing you to
execute
> any program you got from elsewhere without explicitly unblocking it.
It
> works just as well, IMX, although the error message you get when you
forget
> about it is useless.
>
> B Aiken
>
>  
>  From: [EMAIL PROTECTED] on behalf of Dave
> Page
> Sent: Thu 3/15/2007 5:24 PM
> To: Paul Lambert
> Cc: Arkan; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PgSql on Vista?
>
>
>
>
>
> Paul Lambert wrote:
>
> > After install completes you can turn it back on... if you want -
> > personally I leave it off, it's an incredibly annoying "feature".
>
> Doesn't the security center keep popping up to point out that it's
> turned off?
>
> Regards Dave
>
> ---(end of
> broadcast)---
> TIP 3: Have you checked our extensive FAQ?
>
>http://www.postgresql.org/docs/faq
>
>
>  

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PgSql on Vista?

2007-03-16 Thread Brandon Aiken
It's about $20,000 cheaper than ISO (ISA? IEC? One of those TLAs.)
certification.  Industrial engineering.

CSIA is industrial control certification:
http://www.controlsys.org/

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 16, 2007 10:19 AM
To: Brandon Aiken
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PgSql on Vista?

Brandon Aiken wrote:
> Actually the only reason we have an email disclaimer is for CSIA
> compliance.  We know they have a highly dubious legal standing, but,
> hey, it's what the auditors require.

Googling CSIA turns up a bunch of stuff. What is it exactly?

> 
> --
> Brandon Aiken
> CS/IT Systems Engineer
> -Original Message-
> From: Konrad Neuwirth [mailto:[EMAIL PROTECTED] 
> Sent: Friday, March 16, 2007 6:46 AM
> To: Brandon Aiken
> Subject: Re: [GENERAL] PgSql on Vista?
> 
> I find it ironic that this -- to my mind, very on the spot -- comment
> comes with one of the mostly useless signature that there for ...
> plausible deniability.
> 
> :-)
>  Konrad
> 
> On 3/16/07, Brandon Aiken <[EMAIL PROTECTED]> wrote:
>>
>>
>> It was clear to me after 20 minutes of actually trying to use the OS
> that
>> UAC was a gimmick designed to supply plausible deniability for the
> fact that
>> Windows XP suffered so many problems with malware.  They can simply
> ask
>> "were you using UAC?" every time someone complains that their box got
>> infected with some rootkit of some sort instead of actually having to
> secure
>> their OS intelligently.  They know darn well that everyone who uses
> the OS
>> will turn it off.  The silly thing is the boy-who-cried-wolf and
> nothing
>> more.
>>
>> Frankly, I like Windows Server 2003's default of not allowing you to
> execute
>> any program you got from elsewhere without explicitly unblocking it.
> It
>> works just as well, IMX, although the error message you get when you
> forget
>> about it is useless.
>>
>> B Aiken
>>
>>  
>>  From: [EMAIL PROTECTED] on behalf of Dave
>> Page
>> Sent: Thu 3/15/2007 5:24 PM
>> To: Paul Lambert
>> Cc: Arkan; pgsql-general@postgresql.org
>> Subject: Re: [GENERAL] PgSql on Vista?
>>
>>
>>
>>
>>
>> Paul Lambert wrote:
>>
>>> After install completes you can turn it back on... if you want -
>>> personally I leave it off, it's an incredibly annoying "feature".
>> Doesn't the security center keep popping up to point out that it's
>> turned off?
>>
>> Regards Dave
>>
>> ---(end of
>> broadcast)---
>> TIP 3: Have you checked our extensive FAQ?
>>
>>http://www.postgresql.org/docs/faq
>>
>>
>>  
> 
> ---(end of
broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that
your
>message can get through to the mailing list cleanly
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Lifecycle of PostgreSQL releases

2007-03-21 Thread Brandon Aiken
Not if you're not affected by the bugs.  Software *always* has bugs.
And new code in your environment is *untested* code in your environment.

If I am not affected by bugs, if I'm under a support contract to correct
any bugs that I *am* affected by (as was the case in Josh's original
argument with RHEL), and no new features are required, then all
upgrading will do is take me from a state of known bugs that don't
affect my systems to unknown bugs or undocumented/unintended changes
that *might* affect my systems.

The PostgreSQL community supports the latest release.  Here, "upgrade to
most recent" exactly means "upgrade to the version we know has all the
fixes we've already done".  We ask people to upgrade here so we don't
have to reinvent the wheel just because someone wants to use 7.4.1.
Resources are tight enough just supporting the most recent codebase.
Including every codebase back to the beginning of time would require an
enormous number of people.

Support contracts with, for example, RHEL, don't necessarily work that
way.  They typically say "use our most recent packages; anything else is
not covered and you're on your own".  Because support contracts say
this, they have to maintain the codebase themselves to a fair extent.
Granted, they can just take the changes from -- in this case --
PostgreSQL's source code, but they are the people responsible for the
security of the code base and compatibility of the code base.  That's
*exactly* what you buy when you buy the support contract.

Look at it this way:
The benefits to any upgrade are "bug fix" and "new feature".
The caveats to any upgrade are "new bug" and "feature change".  (PHP and
MySQL are notorious for the latter.)

If "bug fix" is 100% handled by support contract, and "new feature" is
100% not useful, what is my impetus?  

For a direct example, why should a business upgrade their desktops from
Windows XP to Windows Vista before 2011 if *none* of the new features
are needed?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Wednesday, March 21, 2007 9:29 AM
To: Naz Gassiep
Cc: Joshua D. Drake; Erik Jones; CAJ CAJ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Lifecycle of PostgreSQL releases 

Naz Gassiep <[EMAIL PROTECTED]> writes:
> Joshua D. Drake wrote:
>> Example discussion with customer:
> ...
> Finally, in the absence of security concerns or performance issues
(and 
> I mean the "we can't afford to buy better hardware" type edge of the 
> envelope type issues) there is zero *need* to upgrade.

This line of argument ignores the fact that newer versions often contain
fixes for data-loss-grade bugs.  Now admittedly that is usually an
argument for updating to x.y.z+1 rather than x.y+1, but I think it
destroys any reasoning on the basis of "if it ain't broke".

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Precision of data types and functions

2006-08-28 Thread Brandon Aiken
I'm considering migrating our MySQL 4.1 database (barf!) to PostgreSQL 8
or MySQL 5.  

The guy who originally designed the system made all the number data
FLOATs, even for currency items.  Unsurprisingly, we've noticed math
errors resulting from some of the aggregate functions.  I've learned
MySQL 5 stores numbers with the DECIMAL data type as text strings, and
does math at 64-bit precision.  Where can I find information about how
precise PostgreSQL 8 math is?
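The float-for-currency problem is easy to reproduce in a few lines (Python shown purely for illustration; SQL's NUMERIC/DECIMAL behaves like the Decimal case):

```python
from decimal import Decimal

# Binary floats can't represent most decimal fractions exactly, so
# repeated addition drifts -- the classic source of currency math errors.
float_total = sum([0.10] * 3)
print(float_total)          # 0.30000000000000004

# An exact decimal type (what SQL NUMERIC/DECIMAL provides) doesn't drift.
dec_total = sum([Decimal("0.10")] * 3)
print(dec_total)            # 0.30
```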

--
Brandon Aiken
CS/IT Systems Engineer

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Precision of data types and functions

2006-08-28 Thread Brandon Aiken
Oh, I agree.  PostgreSQL is a much more well-behaved RDBMS than MySQL
ever was.  I'm more inclined to select PostgreSQL over MySQL, but I may
not be able to convince management that it's a better choice no matter
how technically superior I can show it to be.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 28, 2006 2:21 PM
To: Brandon Aiken
Cc: pgsql general
Subject: Re: [GENERAL] Precision of data types and functions

On Mon, 2006-08-28 at 12:28, Brandon Aiken wrote:
> I'm considering migrating our MySQL 4.1 database (barf!) to PostgreSQL 8
> or MySQL 5.
> 
> The guy who originally designed the system made all the number data
> FLOATs, even for currency items.  Unsurprisingly, we've noticed math
> errors resulting from some of the aggregate functions.  I've learned
> MySQL 5 stores numbers with the DECIMAL data type as text strings, and
> does math at 64-bit precision.  Where can I find information about how
> precise PostgreSQL 8 math is?

Much the same.  I'll let the other poster's reference to numeric types
stand on its own.  Here's why I'd choose PostgreSQL over MySQL:

[EMAIL PROTECTED]:~> mysql test

mysql> select version();
+-----------------+
| version()       |
+-----------------+
| 5.0.19-standard |
+-----------------+
1 row in set (0.00 sec)

mysql> create table test (a numeric(10,2));
Query OK, 0 rows affected (0.05 sec)
 
mysql> insert into test values (123123123123123.2);
Query OK, 1 row affected, 1 warning (0.00 sec)
 
mysql> select * from test;
+-------------+
| a           |
+-------------+
| 99999999.99 |
+-------------+
1 row in set (0.00 sec)

-

psql test

test=> select version();
                                 version
--------------------------------------------------------------------------
 PostgreSQL 7.4.12 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3
 20040412 (Red Hat Linux 3.3.3-7)
(1 row)

test=> create table test (a numeric(12,2));
CREATE TABLE
test=> insert into test values (123123123123123.2);
ERROR:  numeric field overflow
DETAIL:  The absolute value is greater than or equal to 10^14 for field
with precision 12, scale 2.
test=> select * from test;
 a
---
(0 rows)


I don't trust a database that inserts something other than I told it to
insert and only gives me a warning.

For more info, take a look at these two pages and compare:

http://sql-info.de/mysql/gotchas.html
http://sql-info.de/postgresql/postgres-gotchas.html

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Precision of data types and functions

2006-08-28 Thread Brandon Aiken
To be fair, that's the fault of the previous designer, not MySQL.  You
don't blame Stanley when your contractor uses 2" plain nails when he
needed 3" galvanized.  The tool isn't to blame just because someone used
it incorrectly.

MySQL works great for what it does: high speed at a cost of data
integrity.  It's fine for discussion boards or anything non-critical
where having a database is a convenience instead of a necessity.
Never mind that MySQL really doesn't have much place between PostgreSQL
and SQLite nowadays.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Jorge Godoy [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 28, 2006 3:36 PM
To: Brandon Aiken
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Precision of data types and functions

"Brandon Aiken" <[EMAIL PROTECTED]> writes:

> Oh, I agree.  PostgreSQL is a much more well-behaved RDBMS than MySQL
> ever was.  I'm more inclined to select PostgreSQL over MySQL, but I may
> not be able to convince management that it's a better choice no matter
> how technically superior I can show it to be.

Just show them how much money they might lose with a simple bug like the
one that was shown to you in the last post :-)  Money speaks very loudly
to any manager...

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Postgresql mentioned on Newsforge MySQL article

2006-08-28 Thread Brandon Aiken
Looks like it was a design contest not a benchmark to me.  Surprise,
surprise, the team that personally designs a DBMS has the best
performing DBMS.  The second place winner, Alexander Burger, is the
author of the solution he used: Pico LISP.  The third place team,
MonetDB, used their solution, MonetDB.

The results tell me that people who write databases can write good apps
that use those same databases.  And the world collectively says "O RLY?".

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tony Caduto
Sent: Monday, August 28, 2006 4:33 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Postgresql mentioned on Newsforge MySQL article

http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml

Don't know the validity of this dvd order test they did, but the article
claims Postgresql only did 120 OPM.
Seems a little fishy to me.

-- 
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



Re: [GENERAL] PostgreSQL on system with root as only user

2006-08-30 Thread Brandon Aiken
Unless it's a read-only database, I would never recommend using flash
media for an RDBMS.

Unless it's a small database, I would never recommend using USB as a
storage interface for an RDBMS.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of AgentM
Sent: Wednesday, August 30, 2006 10:42 AM
To: PostgreSQL General ML
Subject: Re: [GENERAL] PostgreSQL on system with root as only user


On Aug 30, 2006, at 9:01 , Chris Mair wrote:

>
> If you know what you're doing and you want to disable that
> feature you need to recompile from source and disable the
> uid checks in src/backend/main/main.c.
>
> Unless you're working in the embedded space or some such thing,
> I don't think it's a good idea, anyway.

Has anyone actually used PostgreSQL on an embedded system? I am  
genuinely curious. How about db performance from a flash drive?

-M

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



Re: [GENERAL] PostgreSQL on system with root as only user

2006-08-30 Thread Brandon Aiken
I haven't got any numbers (or a USB stick I can test with at the moment)
but USB is going to be a bottleneck for read and write performance.
Unless you're accessing very small amounts of data or running small
queries, I would expect performance to be pretty poor.
 
If your data set is so small, why do you need a full RDBMS instead of
flat data/text files or SQLite?  If you're not concerned about disk
media failure, why do you need a transactional DB?  It seems like
putting a deadbolt on a screen door.

It might work just great for your device, of course, but I would not
expect it to scale well at all.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of AgentM
Sent: Wednesday, August 30, 2006 1:21 PM
To: PostgreSQL General ML
Subject: Re: [GENERAL] PostgreSQL on system with root as only user


On Aug 30, 2006, at 12:03 , Brandon Aiken wrote:

> Unless it's a read-only database, I would never recommend using flash
> media for an RDBMS.

Well, it's an embedded device, so you can pretty much be certain that  
it's not storing sales information. If the flash goes belly up, then  
the device has failed anyway, so the lost info is the least of the  
problems.

>
> Unless it's a small database, I would never recommend using USB as a
> storage interface for an RDBMS.

Why? Could you provide more details?

I would be interested in any performance numbers anyone has collected.

-M

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



Re: [GENERAL] Precision of data types and functions

2006-09-01 Thread Brandon Aiken
Oh, I'm not saying that MySQL is a full-featured database, nor saying
that I agree with the MySQL philosophy.  I don't.  That's why I'm trying
to avoid MySQL.  

However PostgreSQL isn't any more accurate with FLOATs than MySQL is.
The ANSI SQL standard for FLOAT is for an inaccurate number.  It was
never meant to be accurate, so even though MySQL has a much more liberal
philosophy it's still behaving correctly when it does the math
inaccurately.  Which is just like I would expect PostgreSQL or DB2 or
Oracle to do.  If you need numeric accuracy and you pick FLOAT for your
field, that *is* the developer's fault.  You picked a screwdriver when
you needed a chisel.
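
The point above (FLOAT is permitted to be inexact, exact decimal types
are not) is easy to demonstrate outside any particular database; a small
Python sketch, not PostgreSQL or MySQL code:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so repeated
# currency math drifts; an exact decimal type does not.
float_total = sum([0.1] * 1000)            # accumulates representation error
exact_total = sum([Decimal("0.1")] * 1000)

print(float_total)   # close to, but not exactly, 100.0
print(exact_total)   # exactly 100.0
```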

Now, MySQL's design to 9-fill fields when you try to enter a too-large
number is, in fact, stupid on MySQL's part.  I consider that silent
truncation.  Heck, MySQL lets you create a date on February 31st, or
prior to the year 1500, both of which are obviously nonsensical.
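
The February 31st point can be checked against any strict date
implementation; a quick Python illustration:

```python
from datetime import date

# A well-behaved date implementation simply refuses February 31st.
try:
    date(2006, 2, 31)
    accepted = True
except ValueError:
    accepted = False

print(accepted)  # False: the nonsensical date is rejected
```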

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ron Johnson
Sent: Monday, August 28, 2006 6:27 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Precision of data types and functions

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Brandon Aiken wrote:
> To be fair, that's the fault of the previous designer, not MySQL.
> You don't blame Stanley when your contractor uses 2" plain nails
> when he needed 3" galvanized.  The tool isn't to blame just
> because someone used it incorrectly.

Shows that you've been afflicted with the MySQL "app developer must
do everything" disease.

Just as a PK should not let you insert a duplicate record, a
NUMERIC(12,2) should not let you insert a too-big number.

Tool analogy: Pneumatic nailer says "maximum nail length 3 inches",
but it *lets* you install *4* inch nails.  So, you do what you can,
it mis-fires and you nail your hand to the deck.  Whose fault is it?
 Theirs, for making it easy to install 4 inch nails, or yours for
doing it?

That's where the analogy breaks down.  DBMSs have *always* returned
errors when the app tries to do something beyond the range of the
DB's parameters.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFE823ES9HxQb37XmcRAi2bAKDXSW7ImqWSmpYKLGKFUxkdxtdz/QCgt2RM
DiTn9wpUZoOJ8WIrFXxKmQ4=
=U6SP
-----END PGP SIGNATURE-----

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



Re: [GENERAL] Precision of data types and functions

2006-09-01 Thread Brandon Aiken
The Gregorian calendar was established in the 1500's by Pope Gregory,
so, no, those dates did not exist.

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 01, 2006 2:22 PM
To: Brandon Aiken
Cc: pgsql general
Subject: Re: [GENERAL] Precision of data types and functions

On Fri, 2006-09-01 at 10:37, Brandon Aiken wrote:
> Oh, I'm not saying that MySQL is a full-featured database, nor saying
> that I agree with the MySQL philosophy.  I don't.  That's why I'm trying
> to avoid MySQL.  
> 
> However PostgreSQL isn't any more accurate with FLOATs than MySQL is.
> The ANSI SQL standard for FLOAT is for an inaccurate number.  It was
> never meant to be accurate, so even though MySQL has a much more liberal
> philosophy it's still behaving correctly when it does the math
> inaccurately.  Which is just like I would expect PostgreSQL or DB2 or
> Oracle to do.  If you need numeric accuracy and you pick FLOAT for your
> field, that *is* the developer's fault.  You picked a screwdriver when
> you needed a chisel.
> 
> Now, MySQL's design to 9-fill fields when you try to enter a too-large
> number is, in fact, stupid on MySQL's part.  I consider that silent
> truncation.  Heck, MySQL lets you create a date on February 31st, or
> prior to the year 1500, both of which are obviously nonsensical.

What's nonsensical about a date before the year 1500???  It's not like
that didn't exist or something.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Precision of data types and functions

2006-09-01 Thread Brandon Aiken
Of course the year exists.  The date itself is nonsensical, however.

'January 3, 648' does reference a valid day, but the date itself had no
meaning at that time in the world, so there is very little point in
using Gregorian dates except to give us a relative idea of when an event
occurred.  Nevertheless, you can never speak with specificity about
anything that occurred on any given date prior to the inception of the
Gregorian calendar without doing conversions to a calendar no longer in
use while taking into account the numerous error corrections that have
been made to various calendars.  Knowing the year and season something
happened is about the best that can be expected.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 01, 2006 2:27 PM
To: Brandon Aiken
Cc: pgsql general
Subject: RE: [GENERAL] Precision of data types and functions

On Fri, 2006-09-01 at 13:24, Brandon Aiken wrote:
> > Now, MySQL's design to 9-fill fields when you try to enter a too-large
> > number is, in fact, stupid on MySQL's part.  I consider that silent
> > truncation.  Heck, MySQL lets you create a date on February 31st, or
> > prior to the year 1500, both of which are obviously nonsensical.
> 
> What's nonsensical about a date before the year 1500???  it's not like
> that didn't exist or something.
> The Gregorian calendar was established in the 1500's by Pope Gregory,
> so, no, those dates did not exist.

That makes no sense whatsoever.  Just because the calendar is a somewhat
modern invention doesn't mean that the year 45 BC doesn't exist...  How
else are we to keep track of dates from before that time?  Switch to the
Mayan calendar?  I'm pretty sure no one's made a Mayan Calendar
extension for PostgreSQL (or any other database) just yet.

test=> insert into test values ('1023-04-12 BC');
INSERT 2124397005 1
test=> insert into test values ('1023-04-12');
INSERT 2124397005 1
test=> select * from test;
           t
------------------------
 1023-04-12 00:00:00
 1023-04-12 00:00:00 BC
(2 rows)


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Database design and triggers...

2006-09-07 Thread Brandon Aiken

It's not clear to me how your data is organized or exactly what you're
counting.  If I understand you correctly, yes, you could use triggers to
maintain a table in this manner.  However, why can't you simply use a
SELECT query using the SUM() or COUNT() aggregate functions?  If the
queries are slow, do some index tuning.
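
The trigger-maintained summary table the poster describes can be
sketched end to end; here in SQLite via Python so the whole thing is
self-contained (PostgreSQL trigger syntax differs, and the table and
column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txn (part TEXT NOT NULL, qty INTEGER NOT NULL);
CREATE TABLE current_count (part TEXT PRIMARY KEY, qty INTEGER NOT NULL);

CREATE TRIGGER txn_after_insert AFTER INSERT ON txn
BEGIN
    -- Ensure a summary row exists, then apply the delta.
    INSERT OR IGNORE INTO current_count (part, qty) VALUES (NEW.part, 0);
    UPDATE current_count SET qty = qty + NEW.qty WHERE part = NEW.part;
END;
""")
conn.executemany("INSERT INTO txn VALUES (?, ?)",
                 [("X", 10), ("Y", 9), ("X", -10)])
counts = dict(conn.execute("SELECT part, qty FROM current_count"))
print(counts)  # {'X': 0, 'Y': 9}
```

Because the trigger fires in the same transaction as the insert, the
summary table cannot drift from the transaction log unless someone
bypasses the trigger.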

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Wednesday, September 06, 2006 2:05 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Database design and triggers...

Hi everybody. Hope I'm posting in the correct group.

My question is part design and part implementation.

Since we are creating an inventory system we need to have the clients
pull up current inventory. Also, we need to have the past transactions
stored for reference and billing. In our previous system in MS Access
we accomplished this by adding up all of the transactions stored in two
tables and generating a temporary table with the latest inventory
count. The problem with this approach is that it is slow because the
temporary table has to be created every time a user needs to see a
report or work on a form. Even when instead of creating a temporary
table we use a query it is still slow. With postgreSQL I found out
about triggers and I figure that instead of calculating the current
inventory count and storing it in a table every time a client needs it
I could have a triggers maintain a table with the current count by
incrementing or decreasing the amounts each time a transaction is
stored in the transaction tables. My worry is that if for some reason a
trigger were to somehow fail to execute correctly there would be an
inconsistency between the transactions table and the current inventory
count table and it would have to be calculated from scratch taking in
to account all of the past transactions in the transactions table.

Are trigger a very safe way to use in the way I describe? Or should I
try using views or stick with the temporary table solution we already
have?

My second part of the question is if there is a tutorial for triggers
and stored procedures and what is the difference between Procedures and
Functions?

Thanks Beforehand!


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



Re: [GENERAL] Database design and triggers...

2006-09-08 Thread Brandon Aiken
Actually, I don't think you need the CurrentCountTable at all.  Rather,
you can fairly easily incorporate all of its functionality into Table 2
and then use a query or a VIEW.

Say you have these two tables.  [I'm not the best at data modeling yet,
so I wouldn't necessarily take these verbatim.  I'm still a newbie.
Listed here is pseudo-SQL.]

TABLE "Transaction"
(
  "TransactionID" serial,
  "OperationID" integer,
  "OperationType" char(15) NOT NULL,
  "ClientID" integer NOT NULL,
  "TransactionDate" date NOT NULL DEFAULT 'epoch',
  "UserID" char(15) NOT NULL,
  PRIMARY KEY ("TransactionID"),
  UNIQUE ("OperationID")
)

TABLE "TransactionItem"
(
  "TransactionItemID" serial,
  "OperationID" integer NOT NULL,
  "PartID" integer NOT NULL,
  "LotID" integer NOT NULL,
  "Qty" integer NOT NULL,
  PRIMARY KEY ("TransactionItemID"),
  FOREIGN KEY ("OperationID")
  REFERENCES "Transaction" ("OperationID")
  ON UPDATE CASCADE ON DELETE RESTRICT,
  UNIQUE ("OperationID", "PartID")
)

Now, when you store Qty, you store a positive number if the inventory
increases and a negative number if it decreases.

Now, you can use a query or create a VIEW based on this query:

SELECT "OperationID"
   , "ClientID"
   , "TransactionDate"
   , "PartID"
   , "LotID"
   , "Qty"
   , "UserID"
FROM "Transaction" NATURAL JOIN "TransactionItem";

Alternately, you can continue to store the Qty as an unsigned integer
and then use this query/VIEW:

SELECT "OperationID"
   , "ClientID"
   , "TransactionDate"
   , "PartID"
   , "LotID"
   , CASE
WHEN "OperationType" = 'Incoming' THEN "Qty"
WHEN "OperationType" = 'Outgoing' THEN (-1 * "Qty")
 END AS "Qty"
   , "UserID"
FROM "Transaction" NATURAL JOIN "TransactionItem";

As far as speed, speed is always an issue.  PostgreSQL is going to
perform better than Access, but don't use good performance as a crutch
for bad design.

As far as normalization, it is possible to take it too far.  There is a
time when de-normalizing a database will significantly improve its
performance even if it involves duplicating data.  4NF is not the goal
of DB design, having a usable database is.  Knowing when and how to
de-normalize is much more difficult than learning to design a normalized
data model.
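
The signed-quantity approach can be shown working in miniature; a sketch
in SQLite via Python (names are illustrative, not the poster's schema):

```python
import sqlite3

# Receipts stored positive, issues negative; current stock derived by
# SUM() in a view rather than maintained in a separate table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transaction_item (part TEXT NOT NULL, lot TEXT NOT NULL,
                               qty INTEGER NOT NULL);
CREATE VIEW current_count AS
    SELECT part, lot, SUM(qty) AS qty
    FROM transaction_item
    GROUP BY part, lot;
""")
conn.executemany("INSERT INTO transaction_item VALUES (?, ?, ?)",
                 [("X", "a", 10), ("Y", "ds1", 9), ("X", "a", -10)])
rows = conn.execute(
    "SELECT part, lot, qty FROM current_count ORDER BY part").fetchall()
print(rows)  # [('X', 'a', 0), ('Y', 'ds1', 9)]
```

Since the view is computed from the transaction rows themselves, it can
never be inconsistent with them.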


--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Thursday, September 07, 2006 7:09 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database design and triggers...

Hi Brandon, thanks for answering.

The information regarding a transaction is stored on two separate
tables due to normalization. The first table stores general information
regarding the transaction like Transaction number, date, customer ID,
type of transaction, userID etc... The second table stores the details
of the purchase like the products and quantities that the transaction
affected. Like this:

Table 1:
Operation#  Type_of_operation  Client#  Date      UserID
1           Inbound            10       1/1/2000  Paul37
2           Outbound           10       1/2/2000  Steve04

Table 2:
Operation#  Part#  Lot#  Qty
1           X      a     10
1           Y      ds1   9
1           Z      54ad  7
2           X      a     10

Table 2 has Table 1's Operation field as a Foreign key.

Now, to obtain a current count of  Part X we have to create a temporary
table on which we can use aggregate functions.

CurrentCountTable:

Operation  Client#  Date      Part#  Lot#  Qty  UserID
1          10       1/1/2000  X      a     +10  Paul37
1          10       1/1/2000  Y      ds1   +9   Paul37
1          10       1/1/2000  Z      54ad  +7   Paul37
2          10       1/2/2000  X      a     -10  Steve04

Now, on the temporary table called CurrentCountTable we can use an
aggregate function, The problem is that creating this table is slow
with INSERT INTO, and so are aggregate functions (On MS Access). So
Naturally it occurred to me that triggers can keep a permanent version
of the CurrentCountTable up to date every time some one inserts in to
Table 1 and Table 2. But it has to be perfect to avoid inconsistencies.

So, are triggers a safe bet? Is using triggers more advisable over the
temporary table solution because on PSQL speed is not an issue? Or
should I use views?

Thanks!!! (Hope the tables got liste

Re: [GENERAL] [NOVICE] Insert Only Postgresql

2006-09-08 Thread Brandon Aiken
Sure.  Any RDBMS can do that.  Just create a user account (login role
for PostgreSQL) and only grant the INSERT privilege to them on your
tables, then connect with that account with your program.  Any DELETE or
UPDATE statements will automatically fail.
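
In PostgreSQL terms the advice above might look like the following
sketch (the role and table names are made up):

```sql
-- Hypothetical role "appender" and table "mytable":
CREATE ROLE appender LOGIN PASSWORD 'secret';
GRANT INSERT ON mytable TO appender;
-- With no UPDATE or DELETE privilege granted, those statements fail
-- for this role with a "permission denied" error.
```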

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Solomon Asare
Sent: Friday, September 08, 2006 4:51 PM
To: [EMAIL PROTECTED]
Subject: [NOVICE] Insert Only Postgresql

Hi All,
pls, is there an Insert-only version of postgresql or any other known
database?  NO deletes, no updates.  Inserts only!  Any leads, please?

Best Regards,
solomon.


[GENERAL] Database migration and redesign

2006-09-11 Thread Brandon Aiken
I’ve been tasked with the unenviable job or migrating
a MySQL 4.0 database to something more usable (namely, PostgreSQL 8).  MySQL
4.0 doesn’t even support basic things like subqueries, and in order to
emulate the effects that RULEs, TRIGGERs and VIEWs bring, they had been using
PHP scripts to replicate and build tables across the database (not from one
server to another – within the DB itself).  The database was built
across 5 separate schemata simply to organize the 50 odd tables, and all the
tables are using the MyISAM engine which means no transactions, no row-level
locking, and no foreign key constraints.

Yeah.  It’s ugly.  You should see the front-end.

My question relates to primary keys.  The vast majority
of tables have a primary key on a single char or varchar field.  Is it
considered better practice to create a serial type id key to use as the primary
key for the table, and then create a unique index on the char and varchar
fields?  Should foreign keys reference the new primary id or the old unique
key?  What about compound [primary] keys?
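
A common middle ground for this question is a serial-style surrogate
primary key plus a UNIQUE constraint that preserves the natural char
key; a runnable sketch in SQLite via Python (names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE customer (
    customer_id   INTEGER PRIMARY KEY,  -- surrogate (serial in PostgreSQL)
    customer_code TEXT NOT NULL UNIQUE  -- natural key, still enforced
)""")
conn.execute("INSERT INTO customer (customer_code) VALUES ('ACME')")
try:
    conn.execute("INSERT INTO customer (customer_code) VALUES ('ACME')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

assigned_id = conn.execute("SELECT customer_id FROM customer").fetchone()[0]
print(duplicate_allowed, assigned_id)  # False 1
```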

Also, any suggestions for good DB design books would be appreciated.  I
no longer have any good DB design books, and I don’t know what’s good
anymore.


--
Brandon Aiken
CS/IT Systems Engineer

Confidentiality Notice

This email, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity to
which it is addressed.  If the reader of this email is not the intended
recipient or his/her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this email is
prohibited.  If you have received this email in error, please notify the
sender by replying to this message and deleting this email immediately.



Re: [GENERAL] Database migration and redesign

2006-09-11 Thread Brandon Aiken
Excellent, I managed to find one of the major sticking points all by
myself!  This is exactly what I was looking for.  Thanks!

There seems to be a lot of that in the DB world.  Practical vs
theoretical.  Or pragmatic vs strict.  It seems to be whether you came
from a math background -- in which case you're concerned with logical
data sets in the most effective theory possible -- or from a software
engineering background -- in which case you're concerned about the
usability and performance of the database software, particularly in how
other applications reference and access the database.

There are advantages to both schools.  A highly theoretical database can
also be highly normalized and therefore have very agile data models
built on top of them.  However, the practical limitation of computerized
relational databases means that they can't implement all features of the
relational database model very well or very efficiently.  Additionally,
while the logical models built on top of the theoretical DB structure
can be very agile, designing those models is very complex due to the
complex nature of the highly normalized relational structure involved.
If you have to do a multiple nested join of 10 tables just to run what
will be a query of the most basic object in the final app, you've
probably normalized way too far.

I think I can draw some lines of distinction now, though.  There are
some places where the index is just a means to relate the otherwise
unrelated tables (if that makes sense), and sometimes where using
natural keys will save several table joins on many queries.  This was
what I was seeing, and I was wondering if it was acceptable to mix it up
and do both where it seems to make the most sense.



My next task is a bit more difficult simply to define what the
relationship needs to be.  The app is business management software that
manages, among other things, Jobs and Quotes.  The relationship between
Jobs and Quotes is a little odd.  Quotes are sent to customers, and if
those customers accept the Quote then a Job is created and the Quote is
assigned to it.  So Quotes exist without Jobs.  Additionally, customers
will often request additional services after the original Quote and
these services get Quoted in the same way, so multiple Quotes can be
assigned to a single Job.  

It gets worse.  We use Internal Jobs for cost tracking, and these Jobs
do not have Quotes at all.   Also, there are some Jobs that get 'verbal'
quotes (that is, quotes given outside the normal quoting system).  So
Jobs exist without Quotes.

So, one Job can be associated with many Quotes, so Quote is the child of
the relationship even though it gets created first.  There can be 0 or 1
parents, and 0, 1, or many children in any combination.  I can simply
define foreign keys normally and use NULL values where no relationship
exists, but isn't there a better way to do this?  Allowing NULLs is one
of the problems with many of these tables.

Should I create another table with two fields, one for the Quote number
and one for the Job number, and then have those two fields comprise a
compound primary key (and Quote having a unique constraint for itself as
well)?  That way I don't think I can get logically orphaned children
since both tables are parents to this third table.  Is that how it's
supposed to be done?
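
That link-table idea can be sketched concretely; here in SQLite via
Python with foreign keys enabled (table and column names are
hypothetical, and the UNIQUE on the quote column keeps each quote on at
most one job):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE job   (job_id   INTEGER PRIMARY KEY);
CREATE TABLE quote (quote_id INTEGER PRIMARY KEY);
CREATE TABLE job_quote (
    job_id   INTEGER NOT NULL REFERENCES job,
    quote_id INTEGER NOT NULL REFERENCES quote,
    PRIMARY KEY (job_id, quote_id),
    UNIQUE (quote_id)          -- a quote belongs to at most one job
);
""")
conn.execute("INSERT INTO job VALUES (1)")
conn.execute("INSERT INTO quote VALUES (100)")
conn.execute("INSERT INTO job_quote VALUES (1, 100)")
try:
    conn.execute("INSERT INTO job_quote VALUES (1, 999)")  # no such quote
    orphan_allowed = True
except sqlite3.IntegrityError:
    orphan_allowed = False

print(orphan_allowed)  # False: the link table cannot reference a missing quote
```

Jobs and quotes that have no counterpart simply have no row in the link
table, so neither parent table needs a nullable foreign key.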

There are several places across the DB where this style relationship
occurs, and I'd like to try to conform to best practices (especially
since the last guy managed to miss just about every single one).



--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Merlin Moncure [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 11, 2006 1:08 PM
To: Brandon Aiken
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database migration and redesign

On 9/11/06, Brandon Aiken <[EMAIL PROTECTED]> wrote:
> My question relates to primary keys.  The vast majority of tables have a
> primary key on a single char or varchar field.  Is it considered better
> practice to create a serial type id key to use as the primary key for the
> table, and then create a unique index on the char and varchar fields?
> Should foreign keys reference the new primary id or the old unique key?
> What about compound [primary] keys?

This is a somewhat controversial topic.  I personally am not a big fan
of Ambler or his approach to database design.  This is more or less
the surrogate/natural key debate which is a perennial flamewar in the
database world.

I tend to think natural keys are generally superior but surrogates are
useful in some situations.  Automatic use of ID column in every table
leads to lazy thinking and overcomplicated designs.  However at times
they can really optimize your database.  So I will lay down some
extremes and leave it up to you to find the middle ground.

A trivial case of when not to use a serial key is like this:
create table sex(id

Re: [GENERAL] serial, sequence, and COPY FROM

2006-09-12 Thread Brandon Aiken
Serial columns have a default value of nextval() on their sequence, but
COPY will not expand the literal word DEFAULT in a data file.  Instead,
give COPY an explicit column list naming only the 17 columns the file
contains; the serial column will then be filled from its sequence
automatically.
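
A sketch of the column-list form of COPY (the column names here are
hypothetical; substitute the table's real 17 columns):

```sql
-- List only the 17 columns the file supplies; the serial "record"
-- column is omitted and filled from its sequence.
COPY tablename (col1, col2, col3 /* ..., col17 */)
    FROM '/path/to/file/file.txt' DELIMITER '|';
```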

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Tuesday, September 12, 2006 12:28 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] serial, sequence, and COPY FROM

All,

I have a pipe delimited text file I'm trying to copy to a table. The
file has 17 fields per line. The table has 18, with that last field
(record) a serial with sequence. I have done:
select setval('sequence_name_seq', 555, 'TRUE')
but when I do: COPY tablename FROM '/path/to/file/file.txt' delimiter
'|'
the copy stops at the first row, insisting that it's missing data for
the field record. Well, yeah...
I can make this work with inserts but not with COPY FROM. What I've
been doing is dumping it into a mysql table with an auto_increment
field and then dumping that into a text file and using that for the
COPY FROM; certainly clumsy. How might this be done?

r


---(end of broadcast)---
TIP 6: explain analyze is your friend



Re: [GENERAL] program for graphical/schematical representation of relations between tables

2006-09-12 Thread Brandon Aiken
MS Paint.  ;)

Literally, there were posts about this yesterday.  Look for DBDesigner4
and Clay (an Eclipse extension/plug-in).

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Zlatko Matic
Sent: Tuesday, September 12, 2006 4:54 PM
To: PgSQL General
Subject: [GENERAL] program for graphical/schematical representation of relations between tables

Is there any free program that can graphically/schematically display
relations between all tables in a database?

Regards,

Zlatko

Re: [GENERAL] [NOVICE] INSERT does not finish except if it is carried out a

2006-09-13 Thread Brandon Aiken
Why drop and recreate the table?  Why not TRUNCATE it?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matthieu Guamis
Sent: Wednesday, September 13, 2006 6:15 AM
To: [EMAIL PROTECTED]
Subject: Re: [NOVICE] INSERT does not finish except if it is carried out a

Hello,

PostgreSQL 8.1 is running on Ubuntu 6.06 server edition.

Please trust me, when I use DELETE/INSERT/INSERT statements the job is 
done in a few seconds, whereas with DROP/CREATE AS/SELECT it takes 
several minutes (to complete the SELECT statement). But in this last 
case, if I wait a few minutes between CREATE AS and SELECT, then the 
SELECT is done in a few seconds.

Sorry for previous syntax errors (I did not paste statements but wrote 
them with simplified names for fields and tables... it may explain the 
unmatched open parenthesis).

Could you tell me more about some possible causes of the delay?

Regards


Michael Fuhr a écrit :
> [Please don't post HTML.]
>
> On Tue, Sep 12, 2006 at 02:09:40PM +0200, Matthieu Guamis wrote:
>   
>> During the execution of the following requests, INSERT does not finish
>> except if it is carried out a few minutes after the
>> creation of the table. How to explain this latency time?
>> 
> [...]
>   
>> insert into maTable (select * from
>> ((select a.id1 ,b.id2 ,0
>> from maTable a, maTable b
>> group by a.id1,b.id2
>> order by b.id2,a.id1)
>> EXCEPT
>> (select c.id1 ,c.id2 ,0
>> from maTable c
>> ))as tt;
>> 
>
> This statement isn't syntactically correct; it has an unmatched
> open parenthesis.  If I paste the statement into psql it appears
> to hang, presumably because the parser thinks it's incomplete and
> is waiting for more input.  Are you sure you've diagnosed the problem
> correctly?  If so then please post a test case without errors so
> others can attempt to duplicate the problem.
>
> What version of PostgreSQL are you running and on what platform?
> What client interface are you using?
>
>   
>> DROP and CREATE do their job but INSERT does not finish if it is
>> carried out immediately after the CREATE. On the other hand
>> if it is carried out a few minutes (~5min) later then INSERT commits
>> in a few seconds.
>> 
>
> A five-minute delay could hint at some possible causes, but first
> let's find out whether syntax is the problem.
>
>   

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org



Re: [GENERAL] [NOVICE] Question About Aggregate Functions

2006-09-13 Thread Brandon Aiken

Ah, I did not know what was in your fields, so I did not assume they were Boolean values.  It looked to me like you were trying to use IS TRUE to substitute for the lack of a GROUP BY, so I didn’t know what to do.

 

Yes, count() will include all non-NULL values.  Sorry if I sounded unclear there.

If you do typecasting, the value zero is false (and non-zero is true).  NULL in an expression always returns NULL, and many programs will interpret that result as false.  So I’m not sure what results you might get with a Boolean test against a non-Boolean field, especially if it’s an integer field.

postgres=# select 0::boolean = FALSE;
 ?column?
----------
 t
(1 row)

You should just be able to take the previous query and add in your WHERE clauses:

SELECT count(t1.fielda), count(t2.fielda), count(t2.fieldb), AVG(t2.fieldc)
FROM t1 JOIN t2 ON t1.item_id = t2.item_id
WHERE t1.fielda = TRUE AND t2.fielda = TRUE AND t2.fieldb = TRUE
GROUP BY NULL;

Now, the INNER JOIN you’re using is only selecting fields where both t1.item_id and t2.item_id exist and the respective fields are TRUE.  That is, it’s only going to run the count and average functions against the results of this query:

SELECT *
FROM t1 JOIN t2 ON t1.item_id = t2.item_id
WHERE t1.fielda = TRUE AND t2.fielda = TRUE AND t2.fieldb = TRUE;

If that’s what you want, that’s great.

However, you might want a count of each field where that field is TRUE.  In that case, I would use either temporary tables, compound queries and derived tables, or multiple simple queries.

It’s also possible that you might want a count of fields where t1.item_id and t2.item_id exist, but where only each respective field is TRUE.  That is, you want a count of t1.fielda where it is TRUE no matter what t2.fielda and t2.fieldb are, as long as t1.item_id matches t2.item_id.  In that case you have to do even more joins, and that could take a fair bit of time, especially if you haven’t indexed your item_id fields.

You really have to look at your result sets.  Sometimes it is better to run multiple simple queries instead of one big complex query, to be sure you’re getting the data you want and the query executes in a reasonable amount of time.

Also, consider that NULL values are generally considered bad to enter on purpose.  Logically, it would be better to create one table for each field and then create a record for each item_id as you need it, so you never have NULLs.  The problem with that is one of performance if you end up doing a large number of JOINs.  In that case, it might be better to use integers instead of Boolean fields, since you
have three explicit states of TRUE, FALSE, and NOT YET DETERMINED.
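[For completeness: one way to get a per-column count of only the TRUE values in a single query is to map FALSE to NULL, since count() ignores NULLs. A sketch using the hypothetical t1/t2 names from this thread:]

```sql
-- NULLIF(flag, FALSE) yields NULL for FALSE (and stays NULL for NULL flags),
-- so count() sees only the rows where each column is TRUE.
SELECT count(NULLIF(t1.fielda, FALSE)) AS t1a_true,
       count(NULLIF(t2.fielda, FALSE)) AS t2a_true,
       count(NULLIF(t2.fieldb, FALSE)) AS t2b_true
FROM t1 JOIN t2 ON t1.item_id = t2.item_id;
```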

 





--
Brandon Aiken
CS/IT Systems Engineer

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Don Parris
Sent: Tuesday, September 12, 2006 9:16 PM
To: [EMAIL PROTECTED]
Subject: Re: [NOVICE] Question About Aggregate Functions

On 9/12/06, Brandon Aiken <[EMAIL PROTECTED]> wrote:

> First, aggregate functions always have to have a GROUP BY clause.  If you want everything in a table or join, you use GROUP BY NULL.

Thanks.  I did not realize that.

> Next, IS TRUE statements will select anything that is not NULL, 0, or FALSE, so I'm not sure what you're trying to get because you're getting nearly everything, and count() already ignores NULL values.

I didn't see that in the manual's coverage, but could have overlooked it.  But count() will include the FALSE values along with the TRUE values - ignoring only those that are NULL.  At least, I think that's the case.  So, for each column I select, I need to be sure I am counting only the TRUE values.  I do have NULL, FALSE and TRUE values in each column, since I do not always know for sure whether an attribute is TRUE or FALSE when I record the item.  That may be determined later, but not in all cases.

> Next, count(x, y, z) isn't a valid function.  Count() only has one parameter, so you'll have to call it several times.

I knew my syntax was wrong - but wasn't sure about calling multiple functions since I hadn't seen any examples of that in my hunting for info.  I was trying to make a little clearer what I wanted to do.

> Depending on what you were hoping count(x, y, z) was returning, you do this:
>
> SELECT count(t1.fielda), count(t2.fielda), count(t2.fieldb), AVG(t2.fieldc)
> FROM t1 JOIN t2 ON t1.item_id = t2.item_id
> GROUP BY NULL;

This one looks more like what I am attempting to do.  However, I do need to be sure my count() functions are counting the values that are TRUE.  Is this a case where I should run a query to select the records where the values for the desired columns are true, insert that result into a temp table, and then perform the count() function as above on just 

Re: [GENERAL] [NOVICE] Question About Aggregate Functions

2006-09-13 Thread Brandon Aiken

I think I mistakenly sent this to General instead of Novice.  Oops.

Yeah, I either skipped over or forgot the bit in the OP about bools.  Mea culpa.

You should be able to use OR instead of AND in any logical expression.

> Well this sounds more like what I want.  Given t2.fielda, t2.fieldb, t2.fieldc, any one (or all three) could be true, but frequently at least one of the fields is false.  Initially, all of the fields might be unknown (thus NULL) for a given item until I am able to investigate the items to determine TRUE/FALSE.  I frequently have items that are inactive, and thus unable to determine any of the attributes in t2.
>
> My end result needs to be a count of all the values in each field where the value is TRUE, as opposed to FALSE or NULL.

Yeah, I would probably run 4 separate, simple queries.  That will get you the best performance since you’re doing no JOINs and no composite queries.

If you need to enter the results into another table, try INSERT … to insert the defaults and any primary key you have (like timestamp), then four UPDATE … SELECT statements.

The real problem with NULLs is some of the (in my mind) nonsensical results you get, especially with logical operators:

NULL AND TRUE => NULL
NULL OR TRUE => TRUE
NULL AND FALSE => FALSE
NULL OR FALSE => NULL

Plus you have to use IS instead of = since any NULL in an = expression makes the result NULL (yes, this is an error in my previous queries).  NULL just has all these special cases.  I find it much nicer to avoid it wherever possible since it has somewhat unpredictable results.

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Don Parris
Sent: Wednesday, September 13, 2006 12:50 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] [NOVICE] Question About Aggregate Functions

On 9/13/06, Brandon Aiken <[EMAIL PROTECTED]> wrote:

> Ah, I did not know what was in your fields, so I did not assume they were Boolean values.  It looked to me like you were trying to use IS TRUE to substitute for the lack of a GROUP BY, so I didn't know what to do.

That was in the first paragraph of my OP.  "How do I create a query that (1) evaluates each boolean field for TRUE/FALSE and (2) counts the number of rows where each field is TRUE?"  Maybe you just hadn't had your first cup of coffee? ;-)  Seriously, though, I really do appreciate your help.

> Yes, count() will include all non-NULL values.  Sorry if I sounded unclear there.
>
> If you do typecasting the value zero is false (and non-zero is true).  NULL in an expression always returns NULL, and many programs will interpret that result as false.  So I'm not sure of what results you might get with a Boolean test against a non-Boolean field, especially if it's an integer field.
>
> postgres=# select 0::boolean = FALSE;
>  ?column?
> ----------
>  t
> (1 row)
>
> You should just be able to take the previous query and add in your WHERE clauses:
>
> SELECT count(t1.fielda), count(t2.fielda), count(t2.fieldb), AVG(t2.fieldc)
> FROM t1 JOIN t2 ON t1.item_id = t2.item_id
> WHERE t1.fielda = TRUE AND t2.fielda = TRUE AND t2.fieldb = TRUE
> GROUP BY NULL;
>
> Now, the INNER JOIN you're using is only selecting fields where both t1.item_id and t2.item_id exist and the respective fields are TRUE.  That is, it's only going to run the count and average functions against the results of this query:
>
> SELECT *
> FROM t1 JOIN t2 ON t1.item_id = t2.item_id
> WHERE t1.fielda = TRUE AND t2.fielda = TRUE AND t2.fieldb = TRUE;
>
> If that's what you want, that's great.

Can I use OR instead of AND here?

> However, you might want a count of each field where that field is TRUE.  In that case, I would use either temporary tables, compound queries and derived tables, or multiple simple queries.
>
> It's also possible that you might want a count of fields where t1.item_id and t2.item_id exist, but where only each respective field is TRUE.  That is, you want a count of t1.fielda where it is TRUE no matter what t2.fielda and t2.fieldb are as long as t1.item_id matches t2.item_id.  In that case you have to do even more joins, and that could take a fair bit of time especially if you haven't indexed your item_id fields.

Well this sounds more like what I want.  Given t2.fielda, t2.fieldb, t2.fieldc, any one (or all three) could be true, but frequently at least one of the fields is false.  Initially, all of the fields might be unknown (thus NULL) for a given item until I am able to investigate the items to determine TRUE/FALSE.  I frequently have items that are inactive, and thus unable to determine any of the attributes in t2.

My end result needs to be a count of all the values in each field where the value is TRUE, as opposed to FALSE or NULL.

> You really have to look at your 

Re: [GENERAL] PostgreSQL slammed by PHP creator

2006-09-14 Thread Brandon Aiken
And yet this man is smart enough not to run MySQL on Windows.  Methinks
this says something...

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joshua D. Drake
Sent: Thursday, September 14, 2006 5:16 PM
To: [EMAIL PROTECTED]
Cc: PostgreSQL General ML
Subject: Re: [GENERAL] PostgreSQL slammed by PHP creator

[EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote on 15/09/2006 04:40:04 a.m.:
> 
>>> I wasn't able to find anything in the "article" worth discussing. If you
>>> give up A, C, I, and D, of course you get better performance - just like
>>> you can get better performance from a wheel-less Yugo if you slide it
>>> down a luge track.
>> I don't think a Yugo would give you better performance no matter what
>> you do ;)
> 
> I found in my youth that a mini slides better on its roof than on its
> wheels.

I can top that :). I hit a telephone pole at 135MPH in a 66 Cuda :)
If it hadn't been for the gravel around that corner, I would have beat 
the other car too.

Joshua D. Drake


-- 

=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
  http://www.commandprompt.com/



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org



Re: [GENERAL] Installation with sources or with packages

2006-09-15 Thread Brandon Aiken
I prefer Debian packages wherever possible.  That will provide you with
the easiest manageability in the long run for maintaining updates.  It's
hard to beat "sudo apt-get update && sudo apt-get upgrade && sudo
apt-get autoclean" as a complete patch script.

However, keep in mind that Debian Sarge (stable) is currently at
PostgreSQL 7.3.  If you're running Sarge, you'll either have to add Etch
(testing) repositories or download the PostgreSQL packages from Etch
repositories.  Since Etch is nearing release (which could mean anything
in the Debian world) I suspect you won't have any major problems even if
you do this.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Friday, September 15, 2006 7:26 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Installation with sources or with packages

Hello,

I need to install PostgreSQL 8.x on my Debian host.
Someone tells me I should install from the source files, and someone
else says I should use the Debian package.
Does anybody have some "theory" or opinion about installing PostgreSQL
from source versus from a Debian package?

What's the most reliable/best solution to install AND UPDATE the RDBMS
over its whole life?

Thanks for your help.



---(end of broadcast)---
TIP 6: explain analyze is your friend



Re: [GENERAL] Installation with sources or with packages

2006-09-15 Thread Brandon Aiken
Fair enough.  I've never done it before except with trivial things that
had no dependencies, and I just downloaded the packages with wget.

Another option would be to use Ubuntu server.  That's kinda Debian de
facto, and offers more current packages, IIRC.

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: Weerts, Jan [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 15, 2006 9:49 AM
To: Brandon Aiken; [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Installation with sources or with packages

> However, keep in mind that Debian Sarge (stable) is currently at
> PostgreSQL 7.3.  If you're running Sarge, you'll either have to add
> Etch (testing) repositories or download the PostgreSQL packages
> from Etch repositories.  Since Etch is nearing release (which could
> mean anything in the Debian world) I suspect you won't have any
> major problems even if you do this.

I prefer Debian on my servers too, but running a mixed mode 
system with packages from stable, testing and possibly 
unstable will give you major headaches, when software depends 
on different library versions or even different libraries than
those already installed on your machine. 

Instead try www.backports.org, which offers a lot of
backported packages for stable. Right now their top news is :)

# I'm going to remove postgresql-8.0 from the backports.org 
# archive. It's was already removed from Debian, and the last 
# version of the Debian package which was available is vulnerable 
# to CVE-2006-2313 and CVE-2006-2314, hence the backport is 
# also affected.
# Please upgrade to the postgresql-8.1 backport.

Regards
  Jan

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] basic geographic structure

2006-09-15 Thread Brandon Aiken
What about a self-referencing table?  

Region
--
region_id
country_id
parent_region_id
PRIMARY KEY ("region_id")
FOREIGN KEY ("parent_region_id") REFERENCES "Region" ("region_id") ...
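[A sketch of that design with sample data; all names and IDs are hypothetical. A country without a third level simply links clients to a top-level region:]

```sql
CREATE TABLE region (
    region_id        integer PRIMARY KEY,
    country_id       integer NOT NULL,
    parent_region_id integer REFERENCES region (region_id)
);

-- Country 100 has three levels: region 1 with sub-region 2 under it.
INSERT INTO region VALUES (1, 100, NULL);
INSERT INTO region VALUES (2, 100, 1);
-- Country 200 has only two levels: region 3 has no parent and no children.
INSERT INTO region VALUES (3, 200, NULL);

-- Clients always reference a region, at whatever depth exists.
CREATE TABLE client (
    client_id integer PRIMARY KEY,
    region_id integer NOT NULL REFERENCES region (region_id)
);
```

With this shape every country gets as many or as few levels as it needs, at the cost of a self-join (or repeated joins) when you want the full path back up to the country.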


--
Brandon Aiken
CS/IT Systems Engineer

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dario Borda
Sent: Friday, September 15, 2006 2:29 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] basic geographic structure

Hello

I need help with the data model of a basic geographic structure
Country > State/Region > Sub-Region

Most of the countries have 3 levels, so I want to do this:
_
Country
contry_id
...
_
Region
region_id
country_id

__
Sub-region
sub_region_id
region_id
_
Client
client_id
sub_region_id

My Question:

Some countries do not have a 3rd level (Sub-region). What is the best
way to manage this when normally my Client table needs to be linked to
the 3rd-level data (sub_region_id)?

Thank you, any tips will be appreciated

Dario



  


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly