Re: Creditcard Number Security was Re: [GENERAL] Encrypted column

2007-06-07 Thread Guy Fraser
On Tue, 2007-06-05 at 16:51 -0400, Andrew Sullivan wrote:
> On Tue, Jun 05, 2007 at 07:29:02PM +0100, Peter Childs wrote:
> > Unfortunately you still need to store them somewhere,  and all systems can
> > be hacked.  
> 
> Yes.  I agree, in principle, that "don't store them" is the best
> advice -- this is standard _Translucent Databases_ advice, too.  For
> the least-stealable data is the data you don't have.
> 
> But if there is a business case, you have to do the trade off.  And
> security is always a tradeoff (to quote Schneier); just do it well. 
> (Someone else's advice about hiring a security expert to audit this
> sort of design is really a good idea.)
> 
> A

Have you thought about setting up an account with PayPal and having
people pay through PayPal?

Let PayPal deal with the security and credit card info; after all,
it's what they do.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] array_to_set functions

2007-08-02 Thread Guy Fraser
On Wed, 2007-08-01 at 07:14 +0530, Merlin Moncure wrote:
> On 8/1/07, Decibel! <[EMAIL PROTECTED]> wrote:
> > David Fetter and I just came up with these, perhaps others will find
> > them useful:
> >
> > CREATE OR REPLACE FUNCTION array_to_set(anyarray, int) RETURNS SETOF 
> > anyelement LANGUAGE SQL AS $$
> > SELECT $1[i] from generate_series(array_lower($1, $2), array_upper($1, 
> > $2)) i
> > $$;
> > CREATE OR REPLACE FUNCTION array_to_set(anyarray) RETURNS SETOF anyelement 
> > LANGUAGE SQL AS $$
> > SELECT array_to_set($1, 1)
> > $$;
> 
> very nice, although IMO there is a strong justification for these
> functions to be in core and written in C for efficiency (along with
> array_accum, which I have hand burn from copying and pasting out of
> the documentation).
> 
> merlin
> 
Excellent timing, guys. :^)

I was trying to build a function to list the items of an array, but 
ran into problems and was going to post what I had been working on.

Your functions work great.

In case you don't have a function to generate an array from a set,
here is one I have been using:


CREATE AGGREGATE array_accum (
BASETYPE = anyelement,
SFUNC = array_append,
STYPE = anyarray,
INITCOND = '{}'
);
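
For what it's worth, the two go round-trip nicely. Here is a quick sketch
(the `arraytest` table and its `item` column are made up for illustration;
`array_to_set` is the function from earlier in the thread):

```sql
-- Hypothetical test table.
CREATE TABLE arraytest (item int[]);
INSERT INTO arraytest VALUES ('{4,2,3,5,6}');

-- Set -> array, using the array_accum aggregate defined above.
SELECT array_accum(i) FROM generate_series(1,5) i;  -- {1,2,3,4,5}

-- Array -> set, using Decibel!'s array_to_set.
SELECT array_to_set(item) FROM arraytest;           -- 4, 2, 3, 5, 6
```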


-- 
Guy Fraser
Network Administrator
The Internet Centre
1-888-450-6787
(780)450-6787



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] array_to_set functions

2007-08-08 Thread Guy Fraser
On Tue, 2007-08-07 at 17:46 -0500, Decibel! wrote:
> On Sun, Aug 05, 2007 at 08:18:08PM +0530, Merlin Moncure wrote:
> > On 8/3/07, Guy Fraser <[EMAIL PROTECTED]> wrote:
> > > On Wed, 2007-08-01 at 07:14 +0530, Merlin Moncure wrote:
> > > > On 8/1/07, Decibel! <[EMAIL PROTECTED]> wrote:
> > > > > David Fetter and I just came up with these, perhaps others will find
> > > > > them useful:
> > > > >
> > > > > CREATE OR REPLACE FUNCTION array_to_set(anyarray, int) RETURNS SETOF 
> > > > > anyelement LANGUAGE SQL AS $$
> > > > > SELECT $1[i] from generate_series(array_lower($1, $2), 
> > > > > array_upper($1, $2)) i
> > > > > $$;
> > > > > CREATE OR REPLACE FUNCTION array_to_set(anyarray) RETURNS SETOF 
> > > > > anyelement LANGUAGE SQL AS $$
> > > > > SELECT array_to_set($1, 1)
> > > > > $$;
> > > >
> > > > very nice, although IMO there is a strong justification for these
> > > > functions to be in core and written in C for efficiency (along with
> > > > array_accum, which I have hand burn from copying and pasting out of
> > > > the documentation).
> > > >
> > > > merlin
> > > >
> > > Excellent timing guys. :^)
> > >
> > > I was trying to build a function to list the items of an array, but
> > > ran into problems and was going to post what I had been working on.
> > >
> > > Your functions work great.
> > >
> > > In case you don't have the function to generate an array from a set
> > > here is one I have been using :
> > >
> > >
> > > CREATE AGGREGATE array_accum (
> > > BASETYPE = anyelement,
> > > SFUNC = array_append,
> > > STYPE = anyarray,
> > > INITCOND = '{}'
> > > );
> > 
> > I think that's what just about everyone uses.  Unfortunately the
> > reverse of the function (array_to_set above) AFAIK does not map
> > directly to the C array API.
> 
> Oh, cool, hadn't thought about using an aggregate to do this. That's
> probably faster than what I came up with.
You are welcome. I am glad someone appreciated my gesture.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL and AutoCad

2007-10-26 Thread Guy Fraser
On Thu, 2007-10-25 at 08:57 -0600, Josh Tolley wrote:
> On 10/24/07, Bob Pawley <[EMAIL PROTECTED]> wrote:
> > Is there any way of converting text from an AutoCad (.dwg or .dxf) file into
> > a PostgreSQL  Database??
> >
> > Bob Pawley
> 
> I know nothing of AutoCad, but your message has been sitting for a
> while without response, so I'll throw out the suggestion that you
> probably want AutoCad to export the text to some more common format
> (like a ASCII or UTF8 file or some such) and import that.
> 

Stupid list!
I guess my response went directly to the poster instead of the list.

DXF is a text format that can be parsed, if I remember correctly.

Look it up on Wikipedia; it has some info, but there are AutoDesk
manuals that detail the format.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Migrating MySQL app to postgres?

2005-04-22 Thread Guy Fraser
One of my associates swears Smarty is the best thing since sliced 
bread. I think it uses PHP and PEAR, but is more abstract. I looked 
at it and it did some cool things with only a few lines of code.

Being an old dog, I have built many different "libraries" in C 
and PHP and prefer to use them. On the last project I worked on, 
we made it so it could be used with MySQL, but it was primarily 
designed to work with PostgreSQL. I don't like MySQL, and I 
was the primary developer.

Give Smarty a look.

On Fri, 2005-22-04 at 10:27 -0700, Rich Shepard wrote:
> On Fri, 22 Apr 2005, Scott Marlowe wrote:
> 
> > Unfortunately, there is no use of an abstraction layer in SugarCRM, and the
> > few comments I found in their forum about alternate databases indicated a
> > current focus on feature enhancements and sales / services, not supporting
> > alternate databases.
> 
> Scott,
> 
>Thanks for the insight. Up until the mid-1990s (when I migrated from DOS to
> linux), I used my own CRM written in Paradox. I tuned it over time to do
> everything I wanted, and it was open and used every day. I just never did
> port that to postgres.
> 
>Guess now's the time. I'm trying to find a copy of pygresql to use, but
> it's not easy (www.pygresql.org leads to druid.net and there's nothing on the
> page about pygresql). I'm learning python to use on a major project here (but
> the core libraries will remain in C) so I'd like to use that for the UI on
> this project, too. I'm still not sure about the reports, but one step at a
> time. That is, will the pygresql display nicely formatted reports based on
> sql queries to the backend? Need to find out.
> 
> Again, thanks,
> 
> Rich



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Hot to restrict access to subset of data

2005-07-08 Thread Guy Fraser
On Sun, 2005-03-07 at 23:14 +0300, Andrus Moor wrote:
> > Does the application really need superuser privileges or is that
> > just a convenience?  It's usually a good idea to follow the "Principle
> > of Least Privilege" -- do some searches on that phrase to learn
> > more about it and the rationale for following it.
> 
> > Whether this approach is "secure and better" depends on the application
> > requirements, the threat model, how well the application is written,
> > etc.  As Bruno pointed out, if users have enough access to the
> > system that they could discover the account name and password, then
> > they could easily bypass the application's security.  Another
> > potential problem is SQL injection: if the application isn't careful
> > with how it handles user input, then specially-crafted data could
> > result in the pre-defined queries doing more than intended.  You'll
> > have to evaluate the risks and benefits of the various approaches
> > in the context of your own environment; there's no universal "this
> > way is better" answer.
> 
> My application is general purpose accounting and sales application. If 
> database does not exist, it prompts the user and creates a new database containing 
> some hundreds of tables and
> upsizes local data to the database.
> Each database can have a lot of schemas. Each schema represents a single 
> company. All those schemas have exactly the same tables, each schema 
> contains 80 tables.
> In public schema I store tables common for all companies (60 tables).
> 
> So it seems that my application needs to be run with super-user privileges in 
> Postgres.
> 
> Andrus 
I am quite sure that you can use a non-superuser account and still 
work with different schemas.

The first thing I would do in your case is determine who should have 
access to PgAdmin, and create restricted users for each of them.

Next I would revoke all privileges, then specifically grant your 
application only the actions it requires on each specific table.

If your application needs more privileges under special 
circumstances, then have a higher-privileged user defined to 
allow those changes. I have some applications that have 
a couple of user levels defined. I have also built a PHP 
interface for one customer that used PostgreSQL to store 
user accounts and session information. It should be possible 
to extend that type of system to use the authenticated user
as the application user, but depending on how many users 
connect simultaneously, you may run into a problem with 
too many open connections. If you don't use a separate PG 
user for each user, you can use views as suggested by others.
The program I wrote used a hierarchical access system, and each 
record had a userid and privilege level associated with it.
In that system, users were stored in a hierarchical lookup table 
using ids, and the specific information for each user was held 
in a contact table, so that a real person could belong to more 
than one organization without having to be redefined. The 
privilege levels were basically: private, supervisor, peer, subordinate
and public. The permission levels a user was allowed to 
access and assign were defined in the hierarchical lookup table.
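
A minimal sketch of that revoke-then-grant approach (the role, schema,
table and view names here are all hypothetical, for illustration only):

```sql
-- Hypothetical restricted account for the application.
CREATE USER app_user WITH PASSWORD 'secret';

-- Let it into the schema, but start its table privileges from nothing.
GRANT USAGE ON SCHEMA company1 TO app_user;
REVOKE ALL ON company1.invoices FROM PUBLIC, app_user;

-- Grant only the actions the application actually performs.
GRANT SELECT, INSERT, UPDATE ON company1.invoices TO app_user;

-- Read-only access can go through a view instead of the base table.
CREATE VIEW company1.invoice_summary AS
  SELECT invoice_no, total FROM company1.invoices;
GRANT SELECT ON company1.invoice_summary TO app_user;
```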

I hope that helps. I have another similar but much larger 
project I have been mulling over that will require this 
same kind of granularity; due to privacy concerns, I will 
need to use all the tricks I have used before, and maybe even 
some new ones.

Good Luck




Re: [GENERAL] Pl/PgsSQL array

2005-07-08 Thread Guy Fraser
On Thu, 2005-07-07 at 08:30 +0100, Richard Huxton wrote:
> Ben-Nes Yonatan wrote:
> > Richard Huxton wrote:
> > 
> >>> Can anyone tell me if Pl/PgSQL can support a multi dimensional array 
> >>> (of up to 5 levels top I guess) with about 100,000 values?
> >>> and does it stress the system too much?
> >>
> >> I can't imagine it being wonderful - you probably want a different 
> >> procedural language. Or a different approach come to think of it - can 
> >> I ask what you are using 5D arrays with 100,000 values for?
> > 
> > Indeed you can my good sir :)
> > 
> > Im going to insert into one table a list of about 3.5 million products 
> > each day, thouse products also contain data about their own categories.
> > 
> > Now I receive all of this data every day and I need to delete all of the 
> > previous data each time and insert the new one.
> > In order to create the category table (tree table) I want to create an 
> > array out of the categories information at the products and then insert 
> > it into the the categories table (the tree :)).
> 
> I would be tempted use an external perl script to receive the data and 
> insert it into the database, building the categories tree as it did so.
> 

"Perl and multidimensional arrays" eq "Yikes!"

I just beat down a three-dimensional hash yesterday, and it was a 
nasty blood fest. I almost considered rewriting the application in 
PHP, but figured I had enough invested to just push through. I 
think my copy of O'Reilly's Programming Perl is getting too long 
in the tooth, and I don't write in Perl often enough to know the 
CPAN modules very well.

If this is a fresh project, I personally would use PHP; it is dead 
easy to manipulate arrays there. As a matter of fact, almost everything 
I do in PHP uses arrays; it's the main reason I use it. The main 
problem I have run into is the 8MB memory limit, but that can be 
changed, and if the job is run as a serial processor it may not be a problem.




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Php abstraction layers

2005-08-31 Thread Guy Fraser
On Tue, 2005-30-08 at 21:00 -0700, Chris Travers wrote:
> Antimon wrote:
> 
> >Hi,
> >I'm working on a new web project based on php and i decided to use
> >PostgreSQL 8.x as
> >dbms. For triggers, views, stored procedures etc. I was going to write
> >a simple wrapper class and use pg_* functions. But some friends adviced
> >me to use an abstraction layer. I checked PEAR:DB and AdoDB. They look
> >pretty but i don't understand why sould i need one?
> >
> Do yourself a favor and write lightweight wrapper functions.  This means 
> that if something needs to be changed (say, a PHP API name change 
> happens) you don't have to rewrite a lot of your code.  Additionally, if 
> you do have to port someday to Interbase or even (gasp) MySQL, it 
> becomes possible thought not always straightforward.
>   *Simple* and light database abstractions are very nice because they 
> isolate your framework from the API syntax and after a few years, 
> something could change and then you don't have to rewrite a whole lot.
> 
I would have to agree with this.

As a network administrator I work with IP and MAC addresses a lot, 
so I can use PostgreSQL's extensive support for them as leverage 
when choosing a database for a project. That being said, I have 
other people to please, and sometimes they get their way and I 
have to use MySQL. I do not use PEAR or CPAN modules because I 
haven't had the time to learn their APIs and don't know how secure 
they are.

As Chris suggests, I write wrapper functions to create and 
maintain one or more connections to the database, as well as 
functions for inserting, updating, deleting and making 
different kinds of selections from the database. I have found 
it handy to store status, error and data in an array of 
associative arrays for each transaction with the database. The 
trick to making the code compatible is how and where the error 
and status data is stored. The PostgreSQL pg_ functions return 
quite different error and status information than the MySQL 
functions.

Another trick is to write wrapper functions that take a 
"driver" option to determine which "library" to use. Since the 
same array format is returned from either "library", the rest of 
the code doesn't need to know anything about the database.

One thing to take care with is making sure you escape and 
filter anything put into a database command, so that 
you don't create SQL injection vulnerabilities.



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] array_dims array_lower/upper distance

2005-09-22 Thread Guy Fraser
On Wed, 2005-21-09 at 07:48 -0700, Tony Wasson wrote:
> On 9/20/05, Matthew Peter <[EMAIL PROTECTED]> wrote:
> > Wondering if there's a way for postgres to return how
> > many elements are in a array as a single integer? For
> > instance, returning 10 (items in array) instead of
> > [-5:4]
> >
> > Also, is there a way to return the position of an item
> > in a array?
> >
> 
> Try using array_upper and specify which array dimension.
> 
> from http://www.postgresql.org/docs/current/static/arrays.htm:
> 
> "array_dims produces a text result, which is convenient for people to
> read but perhaps not so convenient for programs. Dimensions can also
> be retrieved with array_upper and array_lower, which return the upper
> and lower bound of a specified array dimension, respectively."
> 
> 
> pg8=# SELECT array_dims('{4,2,3,5,6}'::INT[]);
>  array_dims
> ------------
>  [1:5]
> (1 row)
> 
> pg8=# SELECT array_upper('{4,2,3,5,6}'::INT[],1);
>  array_upper
> -------------
>            5
> (1 row)
> 
So to answer his question, he would likely want:

SELECT
 array_upper(item,1) - array_upper(item,0) + 1 as elements
FROM
 arraytest ;






Re: [GENERAL] array_dims array_lower/upper distance

2005-09-22 Thread Guy Fraser
On Thu, 2005-22-09 at 12:43 -0400, Greg Stark wrote:
> Guy Fraser <[EMAIL PROTECTED]> writes:
> 
> > So to answer his question he would likely want :
> > 
> > SELECT
> >  array_upper(item,1) - array_upper(item,0) + 1 as elements
> > FROM
> >  arraytest ;
> 
> Note that this doesn't work for empty arrays. 
> It will return NULL instead of 0.
Your response was not at all helpful; I would like to 
encourage you to expand on what I put down off the top of my 
head.

I have not used array_upper() before, and the question was 
how to return the total number of elements, not how to 
handle NULL and empty arrays.

One could construct a fully logic-compliant routine using 
CASE and NULL checks to generate the type of response you want 
when checking empty or NULL arrays.

If you have something to add, then provide details. If you 
are just trying to seem like you know more than everyone 
else, then don't bother posting.




Re: [GENERAL] array_dims array_lower/upper distance

2005-09-23 Thread Guy Fraser
On Thu, 2005-22-09 at 21:52 -0500, Bruno Wolff III wrote:
> On Thu, Sep 22, 2005 at 14:16:48 -0600,
>   Guy Fraser <[EMAIL PROTECTED]> wrote:
> > On Thu, 2005-22-09 at 12:43 -0400, Greg Stark wrote:
> > > Guy Fraser <[EMAIL PROTECTED]> writes:
> > > 
> > > > So to answer his question he would likely want :
> > > > 
> > > > SELECT
> > > >  array_upper(item,1) - array_upper(item,0) + 1 as elements
> > > > FROM
> > > >  arraytest ;
> > > 
> > > Note that this doesn't work for empty arrays. 
> > > It will return NULL instead of 0.
> > Your response was not at all helpfull, I would like to 
> > encourage you to expand on what I put off the top of my 
> > head.
> > 
> > I have not used array_upper() before, and the question was 
> > how to return the total number of elements, not how to 
> > handle NULL and empty arrays.
> 
> I think his point was that your example was going to give the wrong answer
> for empty arrays, which is relevant to your question. The normal way around
> that is to use the COALESCE function.
OK, what I jotted down was totally wrong.

This is slightly more correct:

SELECT
 array_upper(item,1) - array_lower(item,1) + 1 as elements
FROM
 arraytest ;

Without doing a tonne of research, I cannot refine this to handle
all circumstances.
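
Following up on Bruno's COALESCE suggestion, a sketch that should make the
count come out as 0 rather than NULL (not tested against every case):

```sql
SELECT
 -- array_upper/array_lower return NULL for empty ('{}') and NULL
 -- arrays, so COALESCE maps both cases to 0.
 coalesce(array_upper(item,1) - array_lower(item,1) + 1, 0) AS elements
FROM
 arraytest ;
```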

Can someone point me to documentation that explains these functions 
better than:

"Dimensions can also be retrieved with array_upper and array_lower, which
return the upper and lower bound of a specified array dimension,
respectively."

The table "Table 9-36. array Functions" does not explain how empty 
and null arrays are handled either.

How do array_upper() and array_lower() respond to :
1) NULL
2) Empty Array
3) Nonexistent requested dimension

Also is there a function that specifies how many dimensions the 
array has?
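
As far as I can tell (worth verifying in psql on your own version), all
three cases simply yield NULL, and array_ndims() reports the number of
dimensions:

```sql
SELECT array_upper(NULL::int[], 1);          -- NULL
SELECT array_upper('{}'::int[], 1);          -- NULL (empty array)
SELECT array_upper('{1,2}'::int[], 2);       -- NULL (no such dimension)
SELECT array_ndims('{{1,2},{3,4}}'::int[]);  -- 2
```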





Re: [GENERAL] array_dims array_lower/upper distance

2005-09-23 Thread Guy Fraser
On Fri, 2005-23-09 at 09:48 -0700, Matthew Peter wrote:
> 
> --- Guy Fraser <[EMAIL PROTECTED]> wrote:
> 
...snip...
> > OK what I jotted down was totally wrong.
> > 
> > This is slightly more correct :
> > 
> > SELECT
> >  array_upper(item,1) - array_lower(item,1) + 1 as
> > elements
> > FROM
> >  arraytest ;
> > 
> > Without do a tonne of research, I can not refine
> > this to handle
> > all circumstances.
> > 
> > Can someone point me to documentation that explains
> > the function 
> > better than :
> > 
> > Dimensions can also be retrieved with array_upper
> > and array_lower, which
> > return the upper and lower bound of a specified
> > array dimension,
> > respectively.
> > 
> > The table "Table 9-36. array Functions" does not
> > explain how empty 
> > and null arrays are handled either.
> > 
> > How do array_upper() and array_lower() respond to :
> > 1) NULL
> > 2) Empty Array
> > 3) Nonexistent requested dimension
> > 
> > Also is there a function that specifies how many
> > dimensions the 
> > array has?
> > 
> 
> That was exactly the answer I was looking for when I
> posted the question. Now if there was a function to
> delete a position in the array
> 
> ie set array1 = array_delete_at(array1,5) where 5 is
> the position to delete

I hope someone else can answer that; the best I can do 
is provide a link to the docs:

http://www.postgresql.org/docs/current/static/functions-array.html

My best guess is that you need to "walk" the array and 
drop the element you don't want. The way I currently 
use arrays is to read the whole array into my application,
modify it there, then update the whole array.

Unfortunately, the arrays in PG are not associative and the 
elements must be sequential. I only use arrays in limited 
ways in PG because of earlier constraints, and have not 
needed to investigate the newer features.
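
That said, something along these lines might work as the array_delete_at()
Matthew asked about; the name is his, the body is my untested guess:

```sql
CREATE OR REPLACE FUNCTION array_delete_at(anyarray, int)
RETURNS anyarray LANGUAGE SQL AS $$
-- Rebuild the array from every position except the one being dropped.
SELECT ARRAY(
  SELECT $1[i]
  FROM generate_series(array_lower($1,1), array_upper($1,1)) i
  WHERE i <> $2
)
$$;

-- e.g. SELECT array_delete_at('{a,b,c,d,e}'::text[], 5);
```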

Good luck.






Re: [GENERAL] text file import

2005-10-06 Thread Guy Fraser
On Thu, 2005-06-10 at 16:14 +0100, Richard Huxton wrote:
> Bart van den Eijnden wrote:
> > Hi list,
> > 
> > I am trying to transfer the following from MySQL to PostgreSQL:
> > 
> > load data local 
> > infile 'D:/tmp/InterAcces- MySQL/03102005/bedrijven.txt'
> > into table bedrijven
> > fields terminated by ',' optionally enclosed by '^' 
> > lines terminated by ';\r\n';
> > 
> > Is there a way to do this without changing the file (this file is delivered 
> > by a 3rd party)? Btw, I am on the win32 platform.
> 
> Normally, I'd pipe it through a Perl filter into a COPY FROM STDIN 
> command. Not sure what you'll have installed on Windows.
> 
> Oh - and you might need to correct some of the data if MySQL isn't being 
> strict enough.
> 
I would have to second Richard on his last statement.

MySQL formats many data types in ways that may not be possible to
import directly into PostgreSQL, and it has a bad habit of using non-
standard representations for NULL, such as '0000-00-00' for dates, and
similarly invalid or wrong data values for NULL in other types.

Good Luck
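
For the record, the closest native equivalent to that LOAD DATA statement
is probably COPY, though COPY has no counterpart to the ';\r\n' line
terminator, which is why piping the file through a filter first is needed
(a sketch, assuming the filter strips the trailing ';' from each line;
the column layout of `bedrijven` is unknown here):

```sql
-- Roughly matches: fields terminated by ',' optionally enclosed by '^'
COPY bedrijven FROM STDIN WITH DELIMITER ',' CSV QUOTE '^';
```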




Re: [GENERAL] Spam from EnterpriseDB?

2007-01-19 Thread Guy Fraser
On Thu, 2007-01-18 at 17:48 -0800, Richard Troy wrote:
> On Thu, 18 Jan 2007, Joshua D. Drake wrote:
> > >
> > > Spam is spam.  I don't care what they're selling.  Anyone dumb enough to
> > > send spam in 2006 should be fired on the spot.
> >
> > That is a bit extreme. One persons SPAM is another persons interesting
> > information. Although I agree that the behavior in this particular
> > situation was a bit less than the average IQ score.
> >
> > If you are going to communicate with potential customers, especially as
> > SPAM have the integrity to do it yourself and take the heat yourself.
> > Don't use some off brand secondary service and pay them to spam for you.
> >
> > I send out email all the time to potentials. It is common practice but I
> > do it, directly as me.
> 
> Josh, under the law, that's not spam. Individually written emails are
> never spam even if they may be "unsolicited sales material." So, rest
> assured.
> 
> Richard
> 
You are oh so wrong in so many ways.

SPAM is a term used for unsolicited email of any kind, be it UCE
{unsolicited commercial email}, UBE {unsolicited bulk email} or 
anything else that is unsolicited and sent to a large number of
individuals, or cross-posted to a number of mailing lists and/or 
newsgroups.

I feel that all @en25.com and @enterprisedb.com addresses should be 
considered for banning from the PostgreSQL mailing lists without 
a better apology than has been given to date:

"EnterpriseDB recently sent out email advertising new support offerings.
Unfortunately, the email was incorrectly sent to some people who should
not have received it.  We apologize for the error, and are taking steps
to avoid such mistakes in the future."
Denis Lussier, CTO & Founder

The part about "incorrectly sent to some people who should not have
received it" 

is far too weak. I am certain that I never expressed any interest 
in EnterpriseDB on the announce list or any other PostgreSQL list.
Posting the message on the list would have been just as effective. 
I would expect EnterpriseDB to remove me, and anyone else who 
has not expressed interest, from their list. I thought that the FAQ
indicated that harvesting of emails off the list was not allowed; 
I will have to review it when I have time. As an employee of a 
Network Service Provider, I know that we do not tolerate repeat 
incidences of intentional unsolicited email, and we especially frown 
on complaints of harvested mailing lists.

I do not want any personal communication from EnterpriseDB in 
response to this posting, or ever again in the future; if they want 
to respond they can reply to this post on this mailing list.

I have verified that their NSP is in Canada and have determined 
that address harvesting may be, if it is not already, illegal in 
Canada. See Item 7 on:

http://www.e-com.ic.gc.ca/epic/internet/inecic-ceac.nsf/en/h_gv00337e.html




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] OT: Canadian Tax Database

2007-03-08 Thread Guy Fraser
On Thu, 2007-03-08 at 09:15 -0500, Ted Byers wrote:
> - Original Message - 
> From: "Joshua D. Drake" <[EMAIL PROTECTED]>
> To: "Richard Huxton" 
> Cc: 
> Sent: Thursday, March 08, 2007 8:00 AM
> Subject: Re: [GENERAL] OT: Canadian Tax Database
> 
> 
> > Richard Huxton wrote:
> >> http://www.thestar.com/News/article/189175
> >>
> >> "For instance, in some cases the field for the social insurance number 
> >> was instead filled in with a birth date."
> >>
> >> Unbelievable. Sixty years of electronic computing, fifty years use in 
> >> business and the "professionals" who built the tax system for a wealthy 
> >> democratic country didn't use data types.
> >
> > This is Unbelievable? This is commonplace.
> >
> And due at least in part to government (and other institutions operated by 
> damned fools) opting for the least expensive provider rather than paying for 
> someone who actually knows what they're doing.  Just as buying cheap junk 
> always comes back to get you, hiring incompetent fools that don't know their 
> ass from a hole in the ground will come back to get you too.
> 
Come on, they don't hire incompetent fools. They hire the people 
they need to fill their quota, regardless of how well trained 
and experienced they are. I am not saying that non-white males 
are in any way less competent than white males, but removing 
them from the pool does not make things better. The biggest 
problem with quotas is not hiring less qualified staff; it is 
that less qualified staff know why they were hired and know that 
they are very unlikely to be fired, so they have little incentive 
to work hard or attempt to do their best; they can always fail 
upwards.

...snip...

-- 
Guy Fraser
Network Administrator
The Internet Centre
1-888-450-6787
(780)450-6787





Re: [GENERAL] OT: Canadian Tax Database

2007-03-09 Thread Guy Fraser
government, that certain favoured 
> > groups are deemed to be superior to white men, even if the favoured 
> > party has no education nor experience and the latter have earned 
> > doctorates and decades of experience), but no one has said anything 
> > about such people being employed on the projects to which I referred.  
> > But this is an aspect of our present society that is bound to 
> > degenerate into a flame war, launched by the politically correct, so 
> > we ought to say little, or even leave it alone.  Those in power tend 
> > to be vicious, especially when there are no effective checks on their 
> > conduct and no consequences for what they do.
> >
> > Cheers
> >
> > Ted
> >
> >
> 
-- 
Guy Fraser
Network Administrator
The Internet Centre
1-888-450-6787
(780)450-6787





Re: [GENERAL] OT: Canadian Tax Database

2007-03-13 Thread Guy Fraser
On Sat, 2007-03-10 at 08:07 -0300, Jorge Godoy wrote:
> omar <[EMAIL PROTECTED]> writes:
> 
> > I'm curious what people think about the following statement considering the
> > database typing talk being brought up here.  My experience is that more 
> > times
> > than not I have to put data validation in my client code even when it's
> > available on the server, if for no other reason that users don't understand
> > what foreign key violation, etc messages mean.  It begs the question of
> > whether it's really necessary on the server or not.  SQLite seems to take 
> > the
> > position that it isn't since there is no referential integrity and the
> > following.  To be honest, there's a lot of power in the ability to view
> > everything as a string, with of course proper data validation.
> 
> I believe that data validation is essential at the server side.  The ideal
> situation to me is something like data validation on server, errors /
> exceptions being risen and then catched by the client code that will translate
> them to a suitable message.
> 
> Inserting data validation on the client side helps with simple input and reduces
> the average number of roundtrips needed for getting the data stored, but
> shouldn't be the only validation done.
> 
I completely agree, and would add that I also prefer to use server-
side session cookies to validate the authenticity of the remote 
user. Stopping man-in-the-middle attacks, client spoofing and SQL 
injection are all good reasons to use multiple levels of data and 
remote-user verification. One other good trick is to use table 
permissions to permit only read-only database access, and in many 
cases through a view rather than the actual table.

Paranoia and systems administration/development go well together, 
in my humble opinion. Keeping the server and data safe is a big 
part of our responsibility.

I give cheers to the PostgreSQL developers for the excellent
tools they have provided thus far, and would like to encourage them 
to keep up the excellent trend.





Re: [GENERAL] DST failing on 8.1.3

2007-03-14 Thread Guy Fraser
On Wed, 2007-03-14 at 11:16 -0400, Jaime Silvela wrote:
> I'm running a production database on Linux (select version() = 
> "PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.3")
> I read that all 8.1.* versions are DST-compliant, and sure enough, my 
> development server, which runs 8.1.0, switched fine, as did my 8.2.3 
> database at home.
> The production database was upgraded a while ago to 8.1.3 from 7.*. I'm 
> ready to upgrade to 8.2.3 to get the the benefit of all the development 
> since then, but before doing that I'd like to find out what's the 
> problem with the DST not taking.
> 
> The timezone is 'EST5EDT', and
> 
> SELECT CASE WHEN
>   timestamptz '20070401 0100'
>   + interval '1 hour'
>   >= '20070401 0300'
>   THEN 'Wrong' ELSE 'Right' END;
> 
> returns 'Wrong'.
> 

With up-to-date zone files that expression should not return 'Wrong':
01:00 plus 1 hour is 02:00, not 03:00 or more.

Under the rules in effect since 2007, the EST5EDT change is:
2nd Sunday of March @02:00:00 +01:00:00
 and 
1st Sunday of November @02:00:00 -01:00:00

Were you aware that your current President legislated changes to 
Daylight Saving Time? As a result, most of North and South America 
has had to legislate matching changes, to avoid the chaos of 
figuring out what the time would be in different parts of the 
Americas.


> Are there perhaps timezone definition files that may not have been added 
> when upgrading from 7.*?
> 
> Thanks
> Jaime
> 
> 
> ***
> Bear Stearns is not responsible for any recommendation, solicitation,
> offer or agreement or any information about any transaction, customer
> account or account activity contained in this communication.
> 
> Bear Stearns does not provide tax, legal or accounting advice.  You
> should consult your own tax, legal and accounting advisors before
> engaging in any transaction. In order for Bear Stearns to comply with
> Internal Revenue Service Circular 230 (if applicable), you are notified
> that any discussion of U.S. federal tax issues contained or referred to
> herein is not intended or written to be used, and cannot be used, for
> the purpose of:  (A) avoiding penalties that may be imposed under the
> Internal Revenue Code; nor (B) promoting, marketing or recommending to
> another party any transaction or matter addressed herein.
> ***
> 
> ---(end of broadcast)-------
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly
> 
-- 
Guy Fraser
Network Administrator
The Internet Centre
1-888-450-6787
(780)450-6787





Re: [GENERAL] DST failing on 8.1.3

2007-03-14 Thread Guy Fraser
On Wed, 2007-03-14 at 12:12 -0400, Tom Lane wrote:
> Jaime Silvela <[EMAIL PROTECTED]> writes:
> > ... before doing that I'd like to find out what's the 
> > problem with the DST not taking.
> 
> > The timezone is 'EST5EDT', and
> 
> Is it really 'EST5EDT', or 'est5edt' ?  8.1 is case sensitive about this.
> 

/usr/local/share/postgresql/timezone/EST5EDT
/usr/local/share/postgresql/timezone/SystemV/EST5EDT
/usr/share/zoneinfo/EST5EDT
/usr/share/zoneinfo/SystemV/EST5EDT

>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend





Re: [GENERAL] About PostgreSQL's limit on arithmetic operations

2004-10-08 Thread Guy Fraser
Signed integers use a number system known as "two's complement": to 
make room for negative numbers, the most significant bit indicates a 
negative value, and a negative number -x is stored as the complement 
of the bits of x plus 1 (so there is no such number as -0). This 
method is used because it makes addition and subtraction simpler for 
computers to perform on integers.

I am not sure how to declare an unsigned bigint, but if you could, it 
would produce the expected result.

Only Oracle developers could tell you why they would generate an 
invalid result when doing math on 64-bit signed integers 
(9223372036854775808 is not a valid value for a 64-bit signed 
integer). Just because Oracle gives you the number you want in no 
way makes the result correct. Since the result is outside the range 
of a 64-bit signed integer, an overflow error could be a valid 
result, or the value PostgreSQL returns could also be a valid 
result. If you are multiplying by 512 in order to perform a binary 
left shift of 9 bits on the integer, you don't want range 
validation, because if it were enforced the shift would not work 
and the operation would produce an error.

Please read the documentation on data types; it details the range 
for every data type. If you use the proper data type for your data 
set, you can reduce the amount of storage required for small numbers 
and be fairly certain that the math will work as expected for large 
numbers.
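To illustrate the overflow in question: 512 * 18014398509481984 is
exactly 2**63, one past the largest 64-bit signed value, so any
in-range answer requires wraparound. A Python sketch (Python integers
are unbounded, so the 64-bit behaviour is emulated with modular
arithmetic):

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def wrap_int64(n):
    """Emulate two's-complement wraparound of a 64-bit signed integer."""
    return ((n + 2**63) % 2**64) - 2**63

product = 512 * 18014398509481984
print(product)              # 9223372036854775808, i.e. exactly 2**63
print(product > INT64_MAX)  # True: out of range for a signed bigint
print(wrap_int64(product))  # -9223372036854775808: what wraparound yields
```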


Shridhar Daithankar wrote:
On Wednesday 29 Sep 2004 2:25 pm, Devrim GUNDUZ wrote:
 

template1=# SELECT 512*18014398509481984::numeric(20) AS result;
 result
-
9223372036854775808
(1 row)
 

Ok, I got the same result in 7.4.5... But... Why do we have to cast it
into numeric? The results from other databases shows that they can perform
it without casting...
   

Probably because the normal integer is 4 bytes long and bigint is 8 bytes 
long. The value above is exactly 2^63 at which a 8 bytes long signed bigint 
should flip sign/overflow. I am still puzzled with correct value and negative 
sign..

For arbitrary precision integer, you have to use numeric. It is not same as 
oracle. 

Furthermore if your number fit in range, then numbers like precision(4,0) in 
oracle to smallint in postgresql would buy you huge speed 
improvement(compared to postgresql numeric I mean)

Please correct me if I am wrong..
Shridhar
 

--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.



Re: [GENERAL] '1 year' = '360 days' ????

2004-10-27 Thread Guy Fraser
Tom Lane wrote:
Doug McNaught <[EMAIL PROTECTED]> writes:
 

template1=# select '1 year'::interval = '360 days'::interval;
?column? 
--
t
(1 row)
   

Yeah, if you look at interval_cmp_internal() it's fairly obvious why.
I think that this definition is probably bogus, and that only intervals
that match exactly (equal months parts *and* equal seconds parts) should
be considered "equal".  However the most obvious way to redefine it
(compare the months, and only if equal compare the seconds) would lead
to rather nonintuitive behaviors such as "'1 year' > '1000 days'".
Anybody have any thoughts about a better way to map the multicomponent
reality into a one-dimensional sorting order?
(Note also that as Bruno was just mentioning, we really ought to have
months/days/seconds components, not just months/seconds; which makes the
comparison issue even more interesting.)
			regards, tom lane
 

As any of us who have ever researched how to calculate time know:
1) The amount of time in 1 year depends on the year, due to 
leap years.
2) The amount of time in 1 month depends on the month and year, 
because a month is an arbitrary number of days.
3) A week is a theological creation, always equal to 7 days.

Under the Gregorian calendar there are 10 missing days between 
Oct. 4, 1582 and Oct. 15, 1582. Leap years occur every 4 years, 
except in years divisible by 100, unless they are also divisible 
by 400.

It is therefore not possible to define a month or year in seconds 
without knowing which day, month and year you are calculating from.

Time constants :
1 Sidereal Day = 23 hours 56 minutes 4.091 seconds
1 Sidereal Lunar Month = 27.32158 days
1 Tropical Year = 365.24219 Solar Days
1 Year in average Gregorian time is : 
365 Days 5 Hours 49 Minutes 12 Seconds
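That last figure is easy to verify: the Gregorian calendar inserts 97
leap days every 400 years, and the resulting fractional day works out
to exactly 5 hours 49 minutes 12 seconds. A quick Python check:

```python
# Average Gregorian year: 97 leap days every 400 years.
days = (400 * 365 + 97) / 400                # 365.2425 days
extra_seconds = round((days - 365) * 86400)  # fractional day in seconds

hours, rem = divmod(extra_seconds, 3600)
minutes, seconds = divmod(rem, 60)
print(days, hours, minutes, seconds)  # 365.2425 5 49 12
```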

As is now obvious, there is no simple way to convert months to 
seconds, since a month is an abstract number of days used to split 
four (13-week) seasons three ways, plus one day every non-leap year 
and two days every leap year.

When calculating any usage based on time, it is a good idea to 
store usage as days:hours:minutes:seconds, because those units are 
static and stable, if you discount the deceleration of the Earth 
and the leap-second corrections applied to atomic clocks [see 
http://tycho.usno.navy.mil/leapsec.html ]. 

Trivia: In approximately 620 million years a day will be twice as 
long as it is today.


--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.



Re: [GENERAL] '1 year' = '360 days' ????

2004-10-27 Thread Guy Fraser
Tom Lane wrote:
Bruno Wolff III <[EMAIL PROTECTED]> writes:
 

Wikipedia gives 365.242189670 days (of 86400 seconds) as the length of
the mean solar year in 2000. To give you some idea of how constant
that value is, Wikipedia claims that 2000 years ago the mean solar
year was about 10 seconds longer.  Using the above value I get there
is an average of 2629743 seconds in a month.
   

And yet another option is to note that in the Gregorian calendar there are
400*365+97 days or 400*12 months in 400 years, which gives 2629746 seconds
per month on average.
   

I like the latter approach, mainly because it gives a defensible
rationale for using a particular exact value.  With the solar-year
approach there's no strong reason why you should use 2000 (or any other
particular year) as the reference; and any value you did use would be
subject to both roundoff and observational error.  With the Gregorian
calendar as reference, 2629746 seconds is the *exact* answer, and it's
correct because the Pope says so ;-).
(Or, for the Protestants among us, it's correct because the SQL standard
specifies use of the Gregorian calendar.)
			regards, tom lane
 

Give or take one day every 4000 years. ;-)
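Both averages quoted above can be verified in a couple of lines; for
instance the Gregorian figure:

```python
# 400 Gregorian years contain 400*365 + 97 days and 400*12 months.
days_per_400y = 400 * 365 + 97      # 146097 days
months_per_400y = 400 * 12          # 4800 months
seconds_per_month = days_per_400y * 86400 / months_per_400y
print(seconds_per_month)  # 2629746.0 seconds per month, exactly
```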
--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.



Re: [GENERAL] '1 year' = '360 days' ????

2004-10-28 Thread Guy Fraser
Gaetano Mendola wrote:
Guy Fraser wrote:
Trivia: In approximately 620 million years a day will be twice as 
long as it is today.
Do you think then that Postgres628M.0 will fix it ?  :-)
Regards
Gaetano Mendola
I just hope I don't have to work an equivalent fraction of the day for the
same pay, but with any luck I'll have all my bills paid and be retired by
then. ;-)
--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.



Re: [GENERAL] Comment on timezone and interval types

2004-10-29 Thread Guy Fraser
Yes. For example:
MST = GMT - 7 hours
MDT = GMT - 6 hours
The GMT time remains constant no matter whether it is or is not
daylight saving time.
You still want to bill someone for 1 hour of usage from
02:00 MDT to 02:00 MST, but you don't want to bill an
hour from 02:00 MST to 03:00 MDT.
Unless you are using GMT or another timezone that does not
observe daylight saving time, you should always include the timezone
with the time.
1 day should always be calculated as 24 hours, just as an hour
is calculated as 60 minutes...
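The billing arithmetic above can be sketched with fixed-offset zones
in Python (MST = UTC-7, MDT = UTC-6; the zone names here are plain
labels, not tzdata lookups):

```python
from datetime import datetime, timedelta, timezone

MST = timezone(timedelta(hours=-7), "MST")
MDT = timezone(timedelta(hours=-6), "MDT")

# Fall back: 02:00 MDT to 02:00 MST is a real hour of elapsed usage.
fall = (datetime(2004, 10, 31, 2, 0, tzinfo=MST)
        - datetime(2004, 10, 31, 2, 0, tzinfo=MDT))

# Spring forward: 02:00 MST to 03:00 MDT is no elapsed time at all.
spring = (datetime(2004, 4, 4, 3, 0, tzinfo=MDT)
          - datetime(2004, 4, 4, 2, 0, tzinfo=MST))

print(fall, spring)  # 1:00:00 0:00:00
```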
Since interval does not store an actual time range, it is not sensitive to
daylight savings.
Where problems occur is when you try to use units larger than a week
because they vary in the number of days per unit depending on the date
range.
I would prefer to see interval state times as:
Days:Hours:Minutes:Seconds.Microseconds
rather than:
Years Months Days Hours:Minutes:Seconds.Microseconds
Since months and years are not a constant number of days, it does not
seem reasonable to use them in calculations that determine days, unless
the interval is qualified with a start or stop date and time, including
the time zone.
Since I don't need to account for microseconds or durations longer
than +/- 68 years, I usually use an int4 to store time usage in seconds.
Since int4 can be cast to reltime, it is simple to calculate the
beginning or end of the interval from one timestamp with timezone and
an int4 duration. The storage required for this is 16 bytes: 12 for the
timestamp and 4 for the int4 {integer}. If you need more accuracy
you could use a timestamp and an interval, but the storage required
would be 24 bytes IIRC.
Stuart Bishop wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Bruno Wolff III wrote:
| Recently there has been some discussion about attaching a timezone to
| a timestamp and some other discussion about including a 'day' part
| in the interval type. These two features impact each other, since
| if you add a 'day' to a timestamp the result can depend on what 
timezone
| the timestamp is supposed to be in. It probably makes more sense to use
| a timezone associated with the timestamp than say the timezone GUC 
or the
| fixed timezone UTC.

If you add a 'day' to a timestamp, it should be identical to adding 24
hours. Any other interpretation leads to all sorts of weird ambiguities.
For example, what is '2am April 3rd 2004 US/Eastern + 1 day'? 2am on
April 4th 2004 didn't exist in that timezone because the clocks were put
forward and that hour skipped. If you round up to the nearest existant
time, you then have the issue that '2am April 3rd + 1 day == 3am Aril
3rd + 1 day'.
- --
Stuart Bishop <[EMAIL PROTECTED]>
http://www.stuartbishop.net/
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)
iD8DBQFBge+sAfqZj7rGN0oRAgInAJsEuYkxX6/jsaszquhjEX/PH3nXvACfVBW9
Z3sfU5XGgxSOI77vuOOOzKA=
=euY6
-END PGP SIGNATURE-
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.



Re: [GENERAL] Bug in COPY from CSV?

2005-02-17 Thread Guy Fraser
If you have command line support for PHP, it is fairly easy to 
write a program that uses "pg_escape_string" to condition 
the data. You could even use "pg_copy_from" to send the data 
to the db as well.
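The same kind of conditioning can be done with Python's stdlib csv
module, which quotes embedded newlines the way COPY ... CSV expects
(a sketch of the idea, independent of the PHP functions above):

```python
import csv
import io

rows = [(1, "Now is the time"),
        (2, "for all good men\nto come\nto the\n\naid of their party"),
        (3, "The quick brown fox")]

# The writer wraps any field containing a newline in quotes.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL, lineterminator="\n").writerows(rows)

# Reading it back preserves the embedded newlines and the blank line.
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
print(parsed[1][1] == rows[1][1])  # True
```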

On Mon, 2005-14-02 at 22:30 -0500, Bruce Momjian wrote:
> I did some research on this and it turns out it is one of our TODO
> items.  It is:
> 
> o Allow COPY FROM ... CSV to interpret newlines and carriage
>   returns in data
> 
>   This would require major refactoring of the copy source code.
> 
> We are actually careful to warn people who dump out data with newlines
> in CSV format:
> 
>   test=>  insert into t values (4, 'lkjasdf
>   test'> lkjasdf
>   test'>
>   test'>
>   test'>
>   test'> ');
>   INSERT 542038 1
>   test=> copy t to '/bjm/3' with null as '' csv quote as '"';
>   WARNING:  CSV fields with embedded linefeed or carriage return 
> characters might not be able to be reimported
>   COPY
> 
> Someone has come up with a patch which might allow this so it might work
> in 8.1.
> 
> I am attaching a reproducable case of your report.
> 
> ---
> 
> Rick Schumeyer wrote:
> > I think I've found a bug in PG 8.0 that occurs while copying from CSV files.
> > 
> > I checked the bugs list but didn't see anything similar.
> > 
> >  
> > 
> > This occurs when reading a CSV file where one of the 'text' fields has
> > 
> > a blank line.  I included an example that shows the problem below.
> > 
> > I found that if I change the blank line to have one space, pg reads
> > 
> > it just fine.
> > 
> >  
> > 
> > If this is indeed a bug, if someone could let me know the best
> > 
> > way to address it, I would appreciate it.
> > 
> >  
> > 
> > --EXAMPLE
> > 
> >  create table t (
> > 
> >   id integer,
> > 
> >   description text
> > 
> > );
> > 
> >  
> > 
> > copy t from stdin with null as '' csv quote as '"';
> > 
> > 1,"Now is the time"
> > 
> > 2,"for all good men
> > 
> > to come
> > 
> > to the
> > 
> >  
> > 
> > aid of their party"
> > 
> > 3,"The quick brown fox"
> > 
> > \.
> > 
> > --END EXAMPLE
> > 
> >  
> > 
> > The above example produces this output:
> > 
> >  
> > 
> > CREATE TABLE
> > 
> > psql:test2.sql:8: ERROR:  unterminated CSV quoted field
> > 
> > CONTEXT:  COPY t, line 5: ""
> > 
> >  
> > 
> 
> plain text document attachment (/bjm/0)
> CREATE TABLE t (
>   id INTEGER,
>   description TEXT
> );
> COPY T FROM stdin WITH NULL AS '' CSV QUOTE AS '"';
> 1,"Now is the time"
> 2,"for all good men
> to come
> to the
> 
> aid of their party"
> 3,"The quick brown fox"
> \.
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly
-- 
Guy Fraser
Network Administrator
The Internet Centre
1-888-450-6787
(780)450-6787




Re: [GENERAL] Adding another primary key to a populated table

2006-01-06 Thread Guy Fraser
Have you considered dumping the data, dropping the table, and
building the replacement table with the correct properties, 
then repopulating it with the dumped data?

On Thu, 2006-05-01 at 23:02 -0800, Daniel Kunkel wrote:
> Why do I want to include 6 fields in the primary key?  
> 
> Good question...  I don't know. It's a requirement of OFBiz, a really
> awesome ERP/CRM/Accounting/ECommerce system.
> 
> I'm upgrading the software which requires it, and need to upgrade the
> database to match.
> 
> Once I find out, I'll publish the solution in the OFBiz forums and Wiki
> so others won't come knocking.
> 
> Thanks
> 
> Daniel
> 
> On Thu, 2006-01-05 at 22:44 -0800, Aaron Koning wrote:
> > Are you trying to create a primary key composed of 6 fields? What is
> > the result you want to achieve with the constraint? If you just want
> > UNIQUE, NOT NULL values in a field, you can achieve that without
> > creating a primary key.
> > 
> > Aaron
> > 
> > On 1/5/06, Daniel Kunkel <[EMAIL PROTECTED]> wrote:
> > Hi
> > 
> > It makes sense that I can't have more than 1 primary key.
> > 
> > Postgres was trying to create another primary key instead of
> > modify the
> > existing primary key.
> > 
> > So...
> > 
> > As I understand it, a table does not always have to have a
> > primary key 
> > defined.
> > 
> > Would it work to first delete/drop the primary key, then
> > recreate the
> > primary key on all 6 columns.
> > 
> > ALTER TABLE product_price DROP CONSTRAINT  product_price_pkey;
> > 
> > I tried this, but it doesn't seem to work...  If I look at the
> > table 
> > from pgAdmin, it is still there, reindexable, I can't add a
> > new primary
> > key, etc.  But if I try to run the above command twice, it
> > says it's
> > already been removed.
> > 
> > --
> > 
> > Just for the record...  the error message I got was: 
> > 
> > ERROR:  ALTER TABLE / PRIMARY KEY multiple primary keys for
> > table
> > 'product_price' are not allowed
> > 
> > 
> > On Fri, 2006-01-06 at 05:19 +, Andrew - Supernews wrote:
> > > On 2006-01-06, Daniel Kunkel < [EMAIL PROTECTED]>
> > wrote:
> > > > Hi
> > > >
> > > > I'm trying to add another primary key to a table populated
> > with data and
> > > > a number of foreign key constraints. 
> > >
> > > You can only have one primary key on a table.
> > >
> > > You can add additional unique constraints to get the same
> > effect. (A
> > > primary key constraint is just a unique constraint that is
> > also not null, 
> > > and is the default target for REFERENCES constraints
> > referring to the table -
> > > this last factor is why there can be only one...)
> > >
> > 
> > 
> > ---(end of
> > broadcast)--- 
> > TIP 9: In versions below 8.0, the planner will ignore your
> > desire to
> >choose an index scan if your joining column's datatypes
> > do not
> >match
> > 




Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on PostgreSQL

2006-03-14 Thread Guy Fraser
On Tue, 2006-14-03 at 07:45 -0500, Andrew Rawnsley wrote:
> One doesn't 'install' oracle. That implies you have control of the
> situation. One attempts to convince it to condescend to install itself onto
> your machine.  
> 
> Of course, this is like convincing my 3 year old to go to bed on time. Such
> powers of persuasion are not common.
> 
> On 3/13/06 5:41 PM, "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
> 
> > On Mon, 2006-03-13 at 15:26, Scott Marlowe wrote:
> >> On Mon, 2006-03-13 at 15:16, Tony Caduto wrote:
> >>> Kevin Grittner wrote:
>  Overall, PostgreSQL
>  has been faster than the commercial product from which we converted.
>    
> >>> 
> >>> 
> >>> Kevin,
> >>> Are you allowed to say what commercial product you converted from?
> >> 
> >> And whether he can or not, this would make a GREAT case study for the
> >> advocacy folks.
> > 
> > Yeah, anyone who says installing postgresql is hard should have to
> > install Oracle first.  Or compile MySQL from source.  :)
> > 
I'll agree with that. We had a tech who tried for a week to install
Oracle, and he had to resort to buying third-party books just to 
figure out how to configure it. Managing it is another scary task:
when that guy left, I soon discovered the jumble of tools required 
to administer it. We stopped supporting Oracle when our last 
customer using it stopped using it.

I use PostgreSQL for most projects but have been supporting MySQL 
for customers who request it, and I usually build any new libraries 
to support either database transparently, just by changing the 
driver and user credentials in the config file. The hard part is 
usually getting MySQL to do what I expect, and what PostgreSQL 
does by default.




Re: [GENERAL] PostgreSQL scalability concerns

2006-03-17 Thread Guy Fraser
On Thu, 2006-16-03 at 13:51 -0500, Robert Treat wrote:
> On Wednesday 15 March 2006 18:14, Alen Garia - IT wrote:
> > Hi,
> >
> >   We are currently planning the deployment of our next generation
> > enterprise database and we are wondering whether or not PostgreSQL could do
> > the heavy lifting that would be required.  My post is a little bit long but
> > I hope it will provide you with information to allow someone to provide a
> > definitive answer.
> >
> 
> The definitive answer is yes, PostgreSQL can handle this. You'll need to make 
> sure you have good hardware that matches the nature of your app (oltp/olap 
> and/or web/desktop). You'll probably want something that can do connection 
> pooling.  You can get more help on the -performance list too, just make sure 
> you provide specifics.  You might also want to look into getting commercial 
> support, though choice questions to the mailing list might be enough to steer 
> you on the right path.  
Yes, this also looks like a good fit for Slony or the Java
clustering implementations. Both can provide failover recovery 
and load-sharing capabilities.




Re: [GENERAL] will slony work for this ?

2006-03-17 Thread Guy Fraser
On Thu, 2006-16-03 at 19:05 -0500, Robert Treat wrote:
> On Thursday 16 March 2006 14:46, Joshua D. Drake wrote:
> > Tony Caduto wrote:
> > > I have two Postgresql 8.1 servers each one is in a DMZ and each one
> > > has a apache server running on it.
> > > What I want to do is have real time 2 way replication between the two
> > > databases so I can use DNS fail over, i.e when the primary goes down
> > > the secondary would take over.
> > > Then when the primary comes backup it should get all the changes that
> > > happened to the secondary while it was down.
> >
> > Slony or Replicator will handle that but keep in mind they are both Async.
> >
> 
> I have some doubts. If the primary fails, how will the secondary know to take 
> over?  In slony at least, you'll have to issue a failover command (switchover 
> wont work on the downed primary) which means you downed primary will be in an 
> untrustable state. The importance of this being that there will be no way for 
> it to just "get all the changes"; you'll have to rebuild it.  My 
> understanding is the same is true for replicator, but perhaps you can 
> elaborate on that? 
I would concur, and add that DNS failover is not reliable if a server 
fails; it is only good for pseudo load balancing. It is better to use 
either a hardware load-balancing product, or a High Availability 
failover monitoring application that can take over the IP of the 
downed server. There are tons of examples on the Net just waiting to 
be perused. If you don't need redundant comparative response analysis, 
then off-the-shelf hardware and open source solutions are available.





Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-22 Thread Guy Fraser
On Wed, 2006-22-03 at 11:34 -0800, Benjamin Smith wrote:
> On Wednesday 22 March 2006 03:06, Jimbo1 wrote:
> > Hello there,
> > 
> > I'm a freelance Oracle Developer by trade (can almost hear the boos now
> > ;o)), and am looking into developing my own Snowboarding-related
> > website over the next few years. Anyway, I'm making some decisions now
> > about the site architecture, and the database I'm going to need is
> > obviously included. If my site works out, I'm expecting reasonably
> > heavy traffic, so want a database that I'm confident can cope with it.
> 
> I've built many sites based on PostgreSQL. Originally, like most, I started 
> with MySQL, but after I discovered PG in about 2000, I've switched all 
> development to it, and have never looked back. I have "enterprise" systems 
> developed with PostgreSQL with 500 users, 50 online at a time, > 100 database 
> tables. Although the data sample is still not that impressive, (71 MB sql 
> file with pg_dump) the database itself is quite complex, with multiple 
> foreign keys in a single table being the norm.
> 
> It's just been a dream. It's solid, reliable, and virtually always behaves as 
> expected. 
> 
> My only caveat is that occasionally, you really have to watch the use of 
> indexes. I had one query (nasty, with 7-8 tables involved in a combined 
> inner->outer->inner join) that was taking some 20 seconds to execute. Just 
> changing the order of some of the tables in the query, without logically 
> changing the result at all, dropped that time down to < 50 ms! 
> 
> > Regarding MySQL, I've been put off by Oracle's recent purchase of
> > InnoDB and realise this could badly impact the latest version of the
> > MySQL database. I can almost hear Larry Ellison's laughter from here
> > (allegedly)! I've also been put off by the heavy marketing propaganda
> > on the MySQL website.
> 
> Perhaps the single thing I most like about PostgreSQL is the feeling that "it 
> can't be taken away from me". The license is sufficiently open, and the 
> product is sufficiently stable, that I don't ever wonder if I'm "compliant" 
> or "paid up", nor do I wonder if my growth will be particularly limited 
> anywhere in the forseeable future. 
> 
> > "With MySQL, customers across all industries are finding they can
> > easily handle nearly every type of database workload, with performance
> > and scalability outpacing every other open source rival. As Los Alamos
> > lab (who uses MySQL to manage their terabyte data warehouse) said, "We
> > chose MySQL over PostgreSQL primarily because it scales better and has
> > embedded replication.".".
> 
> PostgreSQL has replication, as well. From what I've read, it's probably about 
> on par with MySQL in terms of manageability and reliability. 
> 
> But, truthfully, having dealt with database replication, it's a PAIN IN THE 
> ARSE and very unlikely worth it. In fact, systems that I've worked on that 
> included replication are generally less reliable than those that simply do a 
> dump/copy every hour or two, due to the increased management headaches and 
> niggling problems that invariably seem to occur. 
> 
> Consider replication if the cost of a full-time DB Admin is justified by 
> saving perhaps a few hours of uptime per year. If so, go for it. Be honest 
> about it - most people grossly overestimate the actual cost of few hours of 
> downtime every other year. 

You can dump a running DB, so unless you have a hardware failure you
should not require any downtime. I have been running a PG database
that is backed up every day and has been running for 5 years, with 
one 5-minute interruption when the server was moved to a new rack.

I am in the process of building a replacement machine, because the 
hardware is bound to fail sometime, and it will be nice to upgrade 
the OS. Unfortunately, the base program I heavily customized to 
put the collected data directly into PostgreSQL is no longer 
maintained, so I need to build a whole new management and client 
interface system around the new program, which has native support
for PostgreSQL but uses a significantly different table system.

> 
> > If any PostgreSQL devotees on this group can comment on the above and
> > its accuracy/inaccuracy, I'd really appreciate it.
> 
> PG does constraints wonderfully. It's performance is midline with simple 
> schemas. It handles very complex schemas wonderfully, and, with a little 
> tuning, can make very effective use of memory to speed performance. 

MySQL also does not properly support NULL, has glaring errors in the
range handling of some data types, and does not have robust support 
for many of the data types I use on a regular basis.

If you are only interested in varchar and blobs, MySQL may have a small 
advantage. If you need proper support for NULL and/or robust data 
types with proper range handling, MySQL would not serve you well. The 
performance and features of MySQL also depend on the type of table you
use, and if your needs change you may need to switch table types.

Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-24 Thread Guy Fraser
On Fri, 2006-24-03 at 14:53 -0600, Scott Marlowe wrote:
...snip...
> Only my most recent personal experience, when I was explaining to the
> guy from MySQL how frustrating it was that installing MySQL broke my
> build of PHP and meant I had to use the mysqli libs, not the mysql
> ones.  The answer from the guy at MySQL was that the standard fix was to
> buy the commercial version, which is generally an older, stabler
> version.

So is this older, more stable version ANSI compliant, or is it broken
like the free version?


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-28 Thread Guy Fraser
On Sat, 2006-25-03 at 10:11 -0800, Chris Travers wrote:
> Leif Jensen wrote:
> 
> >   Hello,
> >
> >  I have with great interrest been following this thread. We have a
> >(small) flame war in house about this and I'm very happy about all the
> >arguments I have seen. I'm a long time user of PostgreSQL (which possibly
> >makes me a bit biased ;-) ) and I think it's great. I'm not a big database
> >expert, but I try to make things as good and standard as I can.
> >
> >  In this respect I have 3 questions:
> >
> >1) I wonder that no one has mentioned anything about security issues in
> >those two. I know that I'm a novice and that I didn't use MySql very much,
> >but it seems to me that the PostgreSQL security is much better than MySql
> >!?
> >
> >  
> >
> Most people on the list only grudgingly use MySQL and so most are not so 
> well aware of the limitations of MySQL's security model.
> 
> MySQL has no concept of group memberships or group permissions (or the 
> more complex role permissions).  The permissions are simply at the level 
> of the individual user.  When I have coded complex apps on MySQL, I have 
> sometimes found it necessary to emulate this level of permission so that 
> the permissions can be "compiled" down to individual permissions on the 
> tables.  It is a real pain sometimes.
One thing that MySQL does have over PostgreSQL is column level 
permissions.
I rarely need them and similar effects can be achieved joining data 
from tables with different permissions.





Re: [GENERAL] Backup method

2006-04-06 Thread Guy Fraser
On Wed, 2006-05-04 at 22:29 +0100, Simon Riggs wrote:
> On Wed, 2006-04-05 at 15:42 -0400, Bob Powell wrote:
> 
> > I have a systems admin that is backing up our Linux computers
> > (postgres)  by backing up the directory structure.  This of course
> > includes all the files that pertain to my postgres databases.  I
> > maintain that using pgdump and creating a file of SQL commands for
> > restore is a better method by allowing the restore on any linux box that
> > is running postgress as opposed to having to reconstruct the directory
> > on another server. 
> > 
> > Does anyone have any thoughts on this matter.  Is one way better than
> > the other?  Thanks in advance.  
> 
> If you want to do this quickly then you should use PITR. The base backup
> is faster, plus you're covered if you crash between backups.
> 
> Archivelogmode is standard for Oracle/DB2 etc installations; PITR should
> be your standard if you run PostgreSQL too. Here's why:
> 
> pg_dump produces portable backups, but that won't help you if you took
> the backup at 04:00 and your server crashes at 14:15 - you'll still lose
> *all* the transactions your business performed in the last 10+ hours.
> You'll also have to explain that away to your boss and remember she/he's
> the one handing out the raises at the end of the year...
> 
> PITR takes more thought, but then is the purpose of a backup to make
> your life easier or to recover the data for the person paying you?
> 
> Best Regards, Simon Riggs

How do you suggest one does PITR ?

It has been a while since I read the docs, but I do not recall 
any tools that allow one to do such a thing.






Re: [GENERAL] Backup method

2006-04-06 Thread Guy Fraser
On Thu, 2006-06-04 at 15:21 -0400, Douglas McNaught wrote:
> Guy Fraser <[EMAIL PROTECTED]> writes:
> 
> > How do you suggest one does PITR ?
> >
> > It has been a while since I read the Docs, but do not recall 
> > any tools that allow one to do such a thing.
> 
> PITR went in to 8.0 (IIRC); the docs for that version will cover it.
> 
Excellent.

I checked out the docs, and will definitely be planning on using 
PITR.

Thanks a lot for bringing this to my attention.
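
For anyone else reading this in the archives: in 8.0 the pieces are a WAL archive_command in postgresql.conf, a base backup taken between pg_start_backup() and pg_stop_backup(), and a recovery.conf for the restore. A minimal sketch — the archive directory path is my own assumption, not something from the docs:

```
# postgresql.conf -- ship each completed WAL segment to an archive directory
archive_command = 'cp %p /mnt/server/wal_archive/%f'
```

```
# recovery.conf -- placed in the restored data directory to replay the archive
restore_command = 'cp /mnt/server/wal_archive/%f %p'
```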





Re: [GENERAL] How does an application recognize the death of the

2006-05-03 Thread Guy Fraser
It does not get mail for a long time. ;^)

It also can not establish a connection to the listener.

If you are on the same machine as the db, you could check
to see if the process is running. You could also set up 
an inetd listener that indicates the status of the 
postmaster. I have not done that in a long time, but it 
should be fairly simple, and using tcpwrappers you can
make it reasonably safe.




Re: [GENERAL] RAID + PostgreSQL?

2006-06-26 Thread Guy Fraser
On Mon, 2006-26-06 at 11:08 -0500, Scott Marlowe wrote:
> On Mon, 2006-06-26 at 08:59, Tony Caduto wrote:
> > MG wrote:
> > > Hello,
> > >  
> > > we are using PostgreSQL 8.0.3 together with RAID on OpenServer 6.
> > >  
> > > When we do a big SELECT-query the whole maschine becomes very very 
> > > very slowly or stands.
> > > The maschine has 3 GB RAM, so we suppose it`s the RAID.
> > >  
> > > Has anyone some experience with RAID + PostgreSQL?
> > >  
> > > Where does PostgreSQL saves his temporary files? Perhaps these are the 
> > > reason for the bad performance.
> > >  
> > > Greetings
> > >  
> > > Michaela
> > What kind of RAID?  I know if you have it set up to mirror it becomes 
> > slow as pond water.
> 
> I have to say this has NOT been my experience.  With a pair of U320
> drives on an LSI-Megaraid with battery backed cache (256M or 512M, not
> sure which it was, it's been a few years) our pg server was noticeable
> faster with a mirror set than with a single IDE drive (with cache
> disabled) on the same machine.  And faster than a single SCSI drive with
> no RAID controller as well.
> 
> 
> > I have a server that was a hand me down so I did not have a choice in 
> > the RAID and it was set up to mirror with two drives and
> > the performance on large selects was very bad.  
> 
> Wow, something's wrong then.  normally, selects are much faster on
> mirror sets rather than on a single drive.  Do you have a lot of sorts
> spilling onto disc?
> 
> > The performance would 
> > increase by 50 or more percent if
> > fsync = offis set in the postgresql.conf file.
> 
> Of selects?  Or performance in general?  I can see if for performance in
> general, but selects really shouldn't be greatly affected by fsync.
> 
> In another vein, I agree with Joshua.  There could be LOTS of causes of
> poor performance.  I wouldn't just assume it's RAID until it's been
> proven to be the cause of the problem.
> 

Confusion abounds.

Mirroring does not improve performance. Mirroring writes the same data 
to two sets of identically sized partitions/slices, reads the data
from both, and verifies the integrity of the returned data. A good RAID 
controller will add some latency on first read, but with caching it can 
improve the speed of some drive operations. Large selects may very 
well produce more data than the cache on the controller can hold, so 
the speed will be limited by the sustained throughput of the drive and 
controller, along with a number of other factors that can cause lower 
levels of performance. Some performance improvements can be made using 
RAID methods that distribute the data over multiple drives, like 
striping; but even with striping, large selects can still be larger 
than the cache, though the data can usually be accessed more quickly 
than with a single drive or straight mirroring.

The main advantage of mirroring is that it can provide redundancy 
in the event of premature drive failure, and it is usually the least 
expensive data redundancy solution.






Re: [GENERAL] How to insert .xls files into

2006-07-12 Thread Guy Fraser
On Fri, 2006-07-07 at 22:41 +, [EMAIL PROTECTED] wrote:
> > [mailto:[EMAIL PROTECTED] On Behalf Of Adrian Klaver
> > 
> > I guess the solution depends on what is a 'large amount of data'. The 
> > most time consuming part is going to be converting the single data 
> > elements at the top of each sheet into multiple elements. I would 
> > create columns for the data in the sheet. At the same time I would 
> > order the columns to match the database schema. Then it would a matter 
> > of cut and paste to fill the columns with the data. The event id's 
> > could be renumbered using Excel's series generator to create a non 
> > repeating set of id's. If the amount of data was very large it might 
> > pay to create some macros to do the work. Once the data was filled in 
> > you would have a couple of choices. One, as mentioned by Ron would be 
> > to use OpenOffice v2 to dump the data into the database. The other 
> > would be to save the data as CSV and use the psql \copy command to 
> > move the data into the table.
> > On Friday 07 July 2006 09:40 am, Parang Saraf wrote:
> 
> Evrything You described is familiar to me, except the OpenOffice v2
>  dump - could You explain this more in details pls? I tried to do it
>  many times, without success.
> 
> Thank You
> 
> Tomas
Does the "OpenOffice v2 dump" convert the date correctly when 
exporting into PostgreSQL?

A date in .xls is exported by Excel to CSV as a serial number, 
which is tricky to convert to a date. This is what I use :

date_pli('epoch'::date, date_num::integer - 25569) AS date_fmt

The number "25569" is a fudge factor that can be different between
dumps, but stays consistent through a dump. I usually adjust it 
and compare the result to the value shown in Excel until I get a 
match.
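
For example, converting a made-up serial value of 38888 from such a dump:

```sql
-- date_pli is the internal name behind date-plus-integer;
-- plain addition does the same thing
SELECT date_pli('epoch'::date, 38888 - 25569) AS date_fmt;
-- equivalent: SELECT 'epoch'::date + (38888 - 25569);
```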





Re: [GENERAL] US Telephone Number Type

2006-07-12 Thread Guy Fraser
On Mon, 2006-10-07 at 10:33 -0700, Karen Hill wrote:
> Hello,
> 
> How would one go about creating a US telephone type in the format of
> "(555)-555-" ?  I am at a loss on how it could be accomplished in
> the most correct way possible while not going into the various
> different country styles e.g. +01 (555) 555-.
> 
> Is the difficulty of creating a telephone type the reason it is not in
> postgresql already?
> 
> Should the telephone type be able to do something such as:
> 
> SELECT * from tableFOO where telephone.areacode = 555;
> 
> Or would regex be better?
> 
> 
> regards,
> 

Someone mentioned separating the NPA/NXX, but it is likely better 
to store the phone number as formatted text.

"(123) 456-7890" -> "123-456-7890"

or if you need international/North America mixes try:

"1 (123) 456-7890" -> "1-123-456-7890"
"01 (555) 333-1212" -> "01-555-333-1212"

It is fairly simple to extract the Country Code/NPA/NXX/Line from 
that format using split_part(ph,'-',#) where # is a digit from 1 to 
4. It is also fairly simple to add an extension using a decimal 
point, which can be extracted using split_part(ph,'.',2).

I normally determine the allowed number formats using data entry 
filters in the front end, then store the data as a formatted 
string :
ccc-npa-nxx-line.ext
Where ccc is the Country Code.
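
For example, against a made-up number stored in that format:

```sql
SELECT split_part('1-123-456-7890.22', '-', 1) AS ccc,  -- country code
       split_part('1-123-456-7890.22', '-', 2) AS npa,
       split_part('1-123-456-7890.22', '-', 3) AS nxx,
       split_part(split_part('1-123-456-7890.22', '.', 1), '-', 4) AS line,
       split_part('1-123-456-7890.22', '.', 2) AS ext;  -- extension
```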

Depending on your needs you may want to store the raw number
and the formatted number separately.

In many jurisdictions it is now necessary to dial 10 digit
phone numbers so that should be the minimum used.

Although the NPA/NXX can be used in many cases to determine a 
locality, there are changes to these assignments on a regular basis, 
and access to that information is quite expensive. I looked 
into accessing the data for a VOIP project I was working on 
and was quite surprised when I discovered the access costs. 
There can be other reasons why the data is unreliable as well, 
since many jurisdictions have legislated that phone companies 
make their numbers available using LNP {Local Number Portability} 
to other local phone providers. Using LNP and VOIP combinations 
can allow someone to use their local phone number anywhere 
in the world, just as they can with a satellite phone.

Best of Luck




Re: [GENERAL] How to insert .xls files into

2006-07-13 Thread Guy Fraser
On Thu, 2006-13-07 at 06:52 -0700, Adrian Klaver wrote:
> Dates don't transfer correctly. When I try it the date moves two days ahead. 

Thanks for the heads up. I will continue to avoid using 
OpenOffice for spreadsheet and DB activities. I prefer 
Gnumeric for spreadsheet work, and am very comfortable 
using psql for db activities.





Re: [GENERAL] why the need for is null?

2004-01-02 Thread Guy Fraser
Baldur Norddahl wrote:

Quoting Martijn van Oosterhout <[EMAIL PROTECTED]>:

...

You could create a new operator, but that means you'll have difficulty
moving it to any database that doesn't have that operator (which is most of
them).
   

Any commercial database vendor would be happy to make such a feature just for
that reason: to lock me in to their database :-). I do not try to stay database
neutral, and use lots of other features that will only work in postgresql.
 

There already is an operator, and it is the ANSI SQL operator "IS". Just 
because "IS" does not use punctuation characters does not mean it is not 
an operator.
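
A quick illustration of the semantics "IS" exists for — comparing anything to NULL with "=" yields NULL (unknown), never true:

```sql
SELECT NULL = NULL;   -- NULL, not true: the comparison is unknown
SELECT NULL IS NULL;  -- true
SELECT '' IS NULL;    -- false: an empty string is a value, not NULL
```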

If you want it to match perhaps you should forget NULL and use '' (zero
length string) instead.
   

An empty string is an empty string, and a NULL is the lack of there being a string; they are not the same. If you want to emulate what you have proposed, then use the function "coalesce".

Example:

select coalesce(string_column,'') from some_table ;

This will return an empty string for all records that have no data in 
string_column.

I have designed a number of realtime data collection programs, and when inserting only the available data into the proper columns of the table, I often end up with "NULL" columns because there was no data for that column. It is very useful to know whether you had data available, or whether the data was 0 or an empty string.

If you still don't understand, then use MySQL; it is messed up and allows weird things like most of what you want to do.

Happy New Year





Re: [GENERAL] subqueries vs. separate queries

2004-01-29 Thread Guy Fraser
CSN wrote:

Is there much difference between using subqueries and
separating out them into separate queries?
 

That would depend on what results you're expecting, and how you intend to use them.

For example, if you want all the data in a single statement, you can't break it up, but 
the request will likely use more resources to complete.
If your application can cut and paste the data from multiple transactions, then simple 
queries may need fewer resources to complete, but remember that your application will 
require resources to cut and paste the results as well.

I have written a number of PHP functions that use an array of the results of a query 
to generate formatted HTML tables, graphs and statistical information. It is easiest 
to use a single complex query with subqueries, but it is possible to 'fake' a complex 
query by 'stacking' multiple results into an array.

Hope that helps.







Re: [GENERAL] PHP or JSP? That is the question.

2004-03-23 Thread Guy Fraser
[EMAIL PROTECTED] wrote:

It may be added to the Mark's points bellow that PHP has more libraries in
the out of the box setup (like regular expressions) but PHP is interpreted
(right?) while JSP is compiled: when I was making decision I have chosen JSP
because of "compiled" criteria: I do like the idea to catch as many bugs as
possible at compile time and rather hate "interpreted" development.
 

Sorry, but you're not correct.

Java is an interpreted bytecode language. Although the bytecode is 
binary, it is not a native executable on most platforms. The java 
runtime interprets the bytecode and translates it into native machine 
code. Since the bytecode is tokenized it is more efficient, and its 
syntax and scopes have already been verified, so less processing is 
required, making it faster once it has been compiled.

One of the bigger advantages of JSP is the two-way data flow 
capability. With a Java applet running at the client end providing 
the interface, and a Java servlet running at the server end doing 
the I/O, JSP can make a more fluid-feeling interface. With PHP, once 
the page is rendered it is static.

I did not know that AOLserver was Tcl based, but if I remember 
correctly that is similar to JSP, though I don't remember it being 
precompiled. I experimented with a tcl module and applets years ago, 
but I didn't have an application for it back then.

I do have a DB application that needs to be able to quickly "flip" 
between a selection of records now. I have not decided which way to 
go yet.

 

In terms of installing and configuring, well PHP is just an Apache 
module, so it is reasonably easy to get going.
   

But once JSP starts running - it does run. One has to invest into this
setup/configuration job once.
Brgds, Laimis

 

-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Mark Kirkwood
Sent: 23. mars 2004 09:28
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] PHP or JSP? That is the question.

In terms of getting a page up and working, then PHP likely to be 
quicker, as its pretty easy to learn.
Java kind of demands you learn more "machinary" before doing 
much, but 
it may provide you with more capability once you get 
comfortable with it.

In terms of installing and configuring, well PHP is just an Apache 
module, so it is reasonably easy to get going.
To get JSP's going you really need to use a servlet engine 
(e.g Jakarta 
Tomcat) - you dont *actually* need Apache httpd at all in this case 
(there are reasons to have it tho, and there is an Apache 
module to link 
'em - mod_jk or mod_jk2).

To further confuse, there are a number open source J2EE engines that 
offer more functionality than Tomcat - (Jboss and Jetty come 
immediately 
to mind).

Any of these would actually be a good choice - hope I have helped :-)

regards

Mark

Andrew Falanga wrote:

   

So, what I'm wondering is, which platform offers better 
 

(more stable, 
   

ease of use, ease of maintainability, etc.) support for developing 
such an application.  One consideration that has been brought to my 
attention is that PHP is natively supported by Apache and 
 

JSP requires 
   

a separate interpreter (Jakarta or whatever).

 


   


.

 

--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.






Re: [GENERAL] PHP or JSP? That is the question.

2004-03-24 Thread Guy Fraser
This is an example of what I mean :

1) Fill out a form to get data.
2) The server responds with Java and a set of records.
3) A Java applet displays one of the records, and without further 
intervention from the server, the applet can be used to scroll back 
and forth through the other records.

Because data from the set of records is now at the client side, 
manipulation of the data seems faster to the end user than retrieving 
each record one at a time every time they want to move back or forth 
through the search results. This also allows the records to be nicely 
formatted and displayed one at a time rather than as a table.

Bas Scheffers wrote:
...snip...
Guy,
I have no idea what you mean when you say: "With the java application
running at the client end providing the interface, and the java servlet
running at the server end doing the I/O, JSP can make a more fluid feeling
interface." How do you mean? A JSP page on the browser is as static as a
PHP one.
As for Tcl, ever since 8.0, it compiles the code into bytecode at runtime,
which does speed up the use of procedures, but it being a higher level
language than java and not as strictly typed, it is not as fast in all
cases. But for a web app in a fast enviroment like AOLserver, it is
unlikely you will see any performance difference, there are too many
factors influencing that and your actual code on the page is a tiny
fraction. (database access being the worst offender)
 

...snip...

--
Guy Fraser




Re: [GENERAL] PHP or JSP? That is the question.

2004-03-26 Thread Guy Fraser
scott.marlowe wrote:

On Thu, 25 Mar 2004, Bas Scheffers wrote:

 

Guy,

   

1) Fill out form to get data.
2) Server responds with java and set of records.
3) Java applet displays one of the records and with out further
 

I see what you mean now, a business application in a browsers. It's
usefull for that - my company has written various trading/market data apps
that way for banking clients - but for a public website, html and forms
are a better solution.
   

Note that if you want to write straight client-server apps, tcl/tk and 
php-gtk both work pretty well too.
 

I have never written a GUI based app for Windows or Macs, and since the
application would have to be cross-platform compatible it would have to
run on Windows and Macs as well as Linux and other Unix variants. That is
too much work for me to do by myself. About the only alternative would
be to write the client in Java {not that MS cruft, but real Sun Java};
then anything with a Java VM would work. I could then write the server
in whatever I wanted {probably C}.
This is way off topic, though. The question was about PHP and JSP, not
gtk client/server application efficiencies.
--
Guy Fraser






Re: [GENERAL] COPY TO order

2004-04-12 Thread Guy Fraser
Christopher Browne wrote:

Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Clodoaldo Pinto Neto) would write:
 

How to make sure COPY TO writes the table lines to the file in the same order
they were inserted?
   

You probably want to rewrite PostgreSQL then.

 

I'm producing html pages in pl/pgsql and using COPY TO to write then
to file.  Occasionaly, about once in 7 or 9, the lines are copied to
the file out of the order they were inserted in the table.
   

If you need to maintain data in some order, then you need to add a key
field that indicates that ordering, and use ORDER BY in order to
select the data in that order.
That will involve not using COPY TO.
 

Not really.

If you have a 'serial' or 'bigserial' field like this :

create table test_table (
test_id bigserial,
data integer,
comment text
);
and you use :

copy test_table (data,comment) 
from '/wherever/the/file/is' 
using delimiters ',';

to insert data like this :

27,some kind of entry
32,another kind of entry
16,yet another entry
...
Assuming this is the first set of data entered, the table will be populated with :

1 | 27 | some kind of entry
2 | 32 | another kind of entry 
3 | 16 | yet another entry 
...

I have used this in the past and it works well.
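
With the bigserial key in place, the insertion order can then be recovered explicitly on the way back out:

```sql
SELECT data, comment
FROM test_table
ORDER BY test_id;  -- rows come back in the order COPY inserted them
```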





Re: [GENERAL] 7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massivly

2004-04-22 Thread Guy Fraser
Dann Corbit wrote:

A following VACCUM brings back return times to 'start' - 
 

but I cannot 
   

run VACUUM any other minute (?). And it exactly vaccums as 
 

many tuples 
   

as I updated.. sure thing:
 

Why not? You only have to vacuum this one table. Vacuuming it 
once a minute should be doable.
   

Shouldn't the Database server be the entity that decides when vacuum is
needed?
How is the database supposed to know when you want to purge records?
Once a vacuum has been run, the table can not be rolled back or time
traveled.
Something is very, very strange about the whole PostgreSQL maintenance
model.
Giving the administrator full control over database management is a good 
thing.
If you want to write a cron job to purge records automatically, that's your
prerogative. Not everyone needs to, nor wants to, constantly purge records.
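
For what it's worth, such a cron job for the earlier example might look like this (the database name "mydb" and the schedule are assumptions):

```
# crontab entry: vacuum one heavily-updated table every 5 minutes
*/5 * * * *  vacuumdb --table tname mydb
```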

Most of my databases collect information, and changing information in them
would be taboo. Since records are not updated or deleted there is no reason
to vacuum the collection tables, and they collect between 400 K and 40 M
records per period.
Oracle uses MVCC and I do not have to UPDATE STATISTICS constantly to
keep the system from going into the toilet.
Does Oracle purge records automatically?
If so, how do you configure it, and what are the default parameters?
Also, I should be able to do an update on every row in a database table
without causing severe problems.  Every other database system I know of
does not have this problem.
If I have a million row table with a column called is_current, and I do
this:
UPDATE tname SET is_current = 0;
Horrible things happen.
Just an idea:
Why not recognize that more rows will be modified than the row setting
can support and actually break the command into batches internally?
It sounds like you have significant hardware limitations.

I have a database I use for traffic analysis that has over 40,000,000 
records. I have done some complicated queries with multiple subselects 
and joins. The complicated queries take a long time to complete, but 
they work. I have also done updates that affected at least 5% of the 
records, then vacuumed the table shortly thereafter.

The bigger the table, the more "scratch pad" disk space and memory you need.





Re: [GENERAL] 7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massivly

2004-04-26 Thread Guy Fraser
Philipp Buehler wrote:
On 22/04/2004, Guy Fraser <[EMAIL PROTECTED]> wrote To [EMAIL PROTECTED]:
 

Shouldn't the Database server be the entity that decides when vacuum is
needed?
 

How is the database supposed to know when you want to purge records?
Once a vacuum has been run, the table can not be rolled back or time
traveled.
   

Hmm, if the UPDATE is in a transaction block. After this block is 
committed, the deleted tuple could be purged if there is a flag. Like, 
either in the schema 'purge-on-commit', or as an option like 'UPDATE PURGE'?

Just an idea.. (Brainfart?) :)
ciao
 

What if you're not the only person accessing the database, and someone else 
has an active transaction that was initiated before your transaction was 
committed?

If you delete the 'stale' data, then you would have to abort their 
transaction, or their transaction would have data with mixed results from 
before your update and after your update. When to remove 'stale' data is 
not a simple thing to determine in an active database. In order to 
preserve transactional integrity, all transactions that are in progress 
during the transaction that modifies the data must be completed 
before the data can be considered 'stale', and once the 'stale' data is 
purged, rollbacks cannot be permitted for any transaction begun before 
the purge. Eventually the automatic purging of 'stale' data will be 
supported, but hopefully it will be configurable to allow 'time travel' 
when required, and to allow a reasonable window for rolling back 
transactions.

I am not an RDBMS developer and don't claim to know how the inner 
workings work, but I have been using RDBMS applications for many years 
and have worked with a number of different products. I prefer PostgreSQL 
over all others, but I do use others for some projects where their 
support or special features outweigh those of PostgreSQL.



Re: [GENERAL] BLOB help needed...

2004-04-27 Thread Guy Fraser
If you are using php, the two functions below should help.
http://ca.php.net/manual/en/function.pg-escape-bytea.php
http://ca.php.net/manual/en/function.pg-unescape-bytea.php
Taber, Mark wrote:
We’re implementing our first PostgreSQL database, and enjoying it very 
much. However, we have a table that will store binary image files (pie 
charts, etc.) for later display on a dynamic webpage. While we’re 
putting together our prototype application, I’ve been asked by the 
programmers (I’m the DBA) to “put the images in the database.” I can 
see how to do this using Large Objects, but then getting them out 
again seems problematic, and the documentation is a bit sketchy. Would 
BYTEA columns be better? However, it seems to me that there is no easy 
way using psql to load images into a BYTEA column. Any help would be 
greatly appreciated.

Regards,
Mark Taber
State of California
Department of Finance
Infrastructure & Architecture Unit
916.323.3104 x 2945


Re: [GENERAL] copy a database

2004-08-26 Thread Guy Fraser
Have you tried using "copy" to export the data from each table?
As a last resort you could try this, since you said you can still
select data from the tables.
Make a list of tables, then :
sed -e "s/^.*/copy & TO '&.sql';/" table.list | psql database
This should create a file for each table, ending with ".sql".
Then re-create the tables on the new DB and use "copy" to
import the data from the files.
David Suela Fernández wrote:
El mié, 25-08-2004 a las 20:54, Tom Lane escribió:
 

David Suela =?ISO-8859-1?Q?Fern=E1ndez?= <[EMAIL PROTECTED]> writes:
   

The problem is that pg_dump always give me the next error:
 

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  relation "pg_user" does not exist
 

Have you tried re-creating that view?
CREATE VIEW pg_catalog.pg_user AS 
   SELECT 
   usename, 
   usesysid, 
   usecreatedb, 
   usesuper, 
   usecatupd, 
   ''::text as passwd, 
   valuntil, 
   useconfig 
   FROM pg_shadow;

			regards, tom lane
   

It return:
ERROR:  permission denied to create "pg_catalog.pg_user"
DETAIL:  System catalog modifications are currently disallowed.
How can i change this permissions?
 

--
Guy Fraser



Re: [GENERAL] division by zero issue

2004-09-15 Thread Guy Fraser
Maybe try something like this :
SELECT
 task_id,
 CASE
  WHEN task_count = 0
  THEN 0
  ELSE (task_duration * 
   task_duration_type / 
   task_count)
 END AS hours_allocated
FROM
 (SELECT
   tasks.task_id,
   task_duration,
   task_duration_type,
   count(user_tasks.task_id) AS task_count
 FROM tasks
 LEFT JOIN user_tasks
   ON tasks.task_id = user_tasks.task_id
 WHERE tasks.task_milestone = '0'
 GROUP BY
   tasks.task_id,
   task_duration,
   task_duration_type
 ) AS intermediate
;

This was done off the cuff so it may not work as is.
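
An alternative sketch that avoids the CASE entirely (untested, using the same column names as the original query): nullif() turns a zero divisor into NULL, division by NULL yields NULL rather than an error, and coalesce() maps that NULL back to zero:

```sql
SELECT
  tasks.task_id,
  coalesce(
    tasks.task_duration * tasks.task_duration_type
      / nullif(count(user_tasks.task_id), 0),
    0) AS hours_allocated
FROM tasks
LEFT JOIN user_tasks ON tasks.task_id = user_tasks.task_id
WHERE tasks.task_milestone = '0'
GROUP BY tasks.task_id, tasks.task_duration, tasks.task_duration_type;
```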
Greg Donald wrote:
Converting some MySQL code to work with Postgres here.
I have this query:
SELECT
 tasks.task_id,
 (tasks.task_duration * tasks.task_duration_type /
count(user_tasks.task_id)) as hours_allocated
FROM tasks
LEFT JOIN user_tasks
 ON tasks.task_id = user_tasks.task_id
WHERE tasks.task_milestone = '0'
GROUP BY
 tasks.task_id,
 task_duration,
 task_duration_type
;
The problem is that sometimes count(user_tasks.task_id) equals zero,
so I get the division by zero error.  Is there a simple way to make
that part of the query fail silently and just equal zero instead of
dividing and producing the error?
TIA..
 


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Bad news for Open Source databases, according to survey

2001-07-12 Thread Guy Fraser

Blah Blah Blah

Who cares if GMC or some other big company wants to throw a huge
amount of computer resources and money at support personnel and
contracts!

I have worked with Oracle. It is a pig on resources, and a nightmare
and daymare to administer. Like all proprietary software, bug fixes
and enhancements take forever, and the prices for service contracts
and software are large. I have migrated all our databases into
PostgreSQL and we have never been happier. We no longer need to
spend countless hours on the phone to get help, and have turned
our Sun equipment into DNS and mail servers. We did not see a
performance increase at the time we switched, but there have been
speed improvements since. The official Oracle documentation is
horrible, but there are some good third-party books. The
documentation for PostgreSQL when I first switched was incomplete,
but sufficient to get me up and running in a lot less time than
Oracle did.

Since then, new Linux distributions can configure and install the
PostgreSQL server and clients from the initial install, so the job
of getting a server up and going is simplified. Webmin now by
default comes with an interface for PostgreSQL, but it is NOT perfect.

There are more and more add-on modules for server software that
allow PostgreSQL connectivity, and I have built some of my own
without too much difficulty.

With all the APIs for different programming languages and
connectivity solutions {ODBC, JDBC ...}, PostgreSQL is an excellent
solution for self-sufficient people and organizations. If you
like throwing money around, donate it to PostgreSQL, and/or go
out and pay a fair bit for Red Hat Database, which is just Linux
and PostgreSQL optimized for each other and supported by a single
vendor {not enough to convince me}.

Guy Fraser

PS :

I would like to thank Tom and all the rest of the people at PostgreSQL
for their excellence in support and creation of great software.

-- 
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] PostgreSQL Log Analyzer 1.0 is out

2001-08-29 Thread Guy Fraser

Gilles DAROLD wrote:
> 
> Hi all,
> 
> The first release of the PostgreSQL Log Analyzer is out; you can take a look
> at a screenshot at:
> 
> http://www.samse.fr/GPL/pg_analyzer/sample/
> 
> Documentation and download have a link here:
> 
> http://www.samse.fr/GPL/
> 
> This is the first version based on detailed log parsing (take a look at the
> screenshot);
> the next release will include the following features:
> 
> - Hourly statistics + graph
> - Seconds statistics
> - I/O statistics
> - Most heavy SQL query
> - etc
> 
> I need reports to know if it does the work for you. Let me know about any parsing
> problems... When the parsing process is OK I will develop the next features.
> 
> To learn how it works, read the README and the pg_analyzer.conf file.
> 
> I have included the elephant diamond logo, let me know if this is a bad idea!
> 
> Regards,
> 
> Gilles Darold
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html
Hi

So far so good.

Guy Fraser

-- 
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] Re: Run scripts

2001-08-29 Thread Guy Fraser

Lee Harr wrote:
> 
> On 24 Aug 2001 05:43:37 -0700, Christoph <[EMAIL PROTECTED]> wrote:
> > Working with oracle I used the command
> > @/home/lange/textfile.sql
> > to run scripts. Does anyone happen to know how this works with postgres?
> >
> 
> In psql, you could use:
> 
> \i /home/lange/textfile.sql
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://www.postgresql.org/search.mpl

>From the command line, you can do this :

psql -f /home/lange/textfile.sql database

-- 
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[GENERAL] mx is needed by postgresql-python-7.1.3-1PGDG

2001-08-30 Thread Guy Fraser

Hi

I had to "rpm --rebuild postgresql-7.1.3-1PGDG.src.rpm" on a RedHat 6.2
machine.

After I rebuilt the software I attempted to install all the compiled
packages but got the error message :

error: failed dependencies:
mx is needed by postgresql-python-7.1.3-1PGDG

This "/root/.cpan/build/Net-DNS-0.12/demo/mx" is the only mx I could
find on my system. Being a simple program name, I have no idea where to
get the required package. If the file is supposed to be a program to
find DNS MX RR's then this program may be OK.

Below is taken from the output from the build process.

 Begin output from build 

Requires: python >= 1.5 mx ld-linux.so.2 libcrypt.so.1 libc.so.6
libpq.so.2 /usr/bin/env libc.so.6(GLIBC_2.0)
Conflicts: python >= 1.6

 End 

Can anyone tell me what it is and where I can find it?

Guy

-- 
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] Appology : MySQL threads belong else where.

2001-08-30 Thread Guy Fraser

Sorry if I insulted the wrong people.

Like I said in the message, "The odd comparison is OK" was poorly
stated; I was tired of having to go through all the posts from people
who appear to be close to anti-PostgreSQL. I was trying to indicate that
suggesting feature enhancements that are in SQL92 and in other databases
is appropriate. I am not saying in any way that MySQL's features are
bad, but I would rather see PostgreSQL specification-compliant than a
MySQL knock-off.

Of course, the management programs may not be required to meet any
specifications. If the suggestions are aimed at making the management
system better, they should be welcome.

I am not against discussing comparisons between PostgreSQL and other
RDBMS systems. Discussions are often a good way of sowing the seeds of
ideas for the developers. When discussions appear to become arguments it
seems appropriate that somebody steps in to calm down the thread.

Again, I am sorry for being harsh, I was upset, but that is no excuse.

Guy

PS: I hope this thread can stop soon.

--
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] SQL Loader?

2001-09-10 Thread Guy Fraser

Joel Pang wrote:
> 
> Hi Fraser,
> 
> I've read your mail on the postgresql web site saying that you've a procedure that will
> do bulk loading of records into the database.  I've been looking for a utility just like
> the sql loader of Oracle for the Postgres database.  I was wondering if you will share
> your procedure with me? or give me some URL links where I can get reference on how to
> write my own sql loader procedure?
> 
> Thanks in advance for your help.
> 
> Cheers,
> Joel

Please note: the script below uses a special program to modify RADIUS
accounting files for import as an ordered tab-delimited file format.

The script also shows a way to import into an intermediate table for
separating different unique records into separate files, then moving the
duplicate entries to a table for data integrity.

This also demonstrates a method of using tables determined from the data
being imported.

I wrote this software a long time ago and have spent little time
patching or rewriting it. Since the script was initially written, I
have learned some better ways of performing some of these tasks. I
don't have time to rewrite the script, and it has worked for over 3
years, so it is low priority.

Guy Fraser

 begin radimport 
#!/bin/bash
ORIG=`pwd`
WORKDIR='/usr/local/pgsql'

cd $WORKDIR
echo Start collecting files
wget -P $WORKDIR/ -t 10 -T 30 -c \
ftp://username:[EMAIL PROTECTED]/path/*.acct.gz  # modified for security
echo Finished collecting files
echo Start decompressing files
gunzip -v *.acct.gz
echo Finished decompressing files

for fname in `ls *.acct`;
do {
[ -f $fname ] && {
date
echo Start inputing $fname
# parserad - convert radius detail file to tab delimited format.
/usr/local/sbin/parserad -i $fname \
| /usr/local/pgsql/bin/psql -c "\
copy detail from stdin;" radius
echo Finished inputing $fname
echo Start compressing $fname
gzip -v $fname
echo Finished compressing $fname
#
#
# Clean up detail
#
/usr/local/bin/psql -c "vacuum detail;" radius
#
# If more than one month determine Current and Previous, Month and Year.
#
#
MINTIME=`/usr/local/pgsql/bin/psql -c "\
select date_trunc('month',min(\"Time-Stamp\")::abstime) from detail;\
" -A -t radius`
MINMON=`echo $MINTIME | awk '{print $2}' -`
MINYEAR=`echo $MINTIME | awk '{print $5}' -`
MAXTIME=`/usr/local/pgsql/bin/psql -c "\
select date_trunc('month',max(\"Time-Stamp\")::abstime) from detail;\
" -A -t radius`
MAXMON=`echo $MAXTIME | awk '{print $2}' -`
MAXYEAR=`echo $MAXTIME | awk '{print $5}' -`
[ "$MAXYEAR" = "" ] && { echo "Exiting: No Data in detail table."; exit 1; }
echo Moving $fname
mv $fname.gz /mnt/sdb3/done/$MAXYEAR
echo Start processing data from $fname
#
# Process records in detail file and create a unique record table called radius.
#
#
echo Creating lookup table
/usr/local/bin/psql -c "\
select min(oid) as recnum, max(\"Acct-Session-Time\"), \
\"Acct-Status-Type\", \"Acct-Session-Id\", \"NAS-IP-Address\", \"NAS-Port-Id\", \
\"User-Name\", \"Realm\", \"Framed-IP-Address\", \"Calling-Station-Id\" \
into radius \
from detail \
group by \
\"Acct-Session-Id\", \"NAS-IP-Address\", \"NAS-Port-Id\", \"User-Name\", \
\"Realm\", \"Framed-IP-Address\", \"Calling-Station-Id\", \"Acct-Status-Type\";\
" radius
/usr/local/bin/psql -c "vacuum radius;" radius

#
#
# Move stop records to stop table
#
echo Creating unique stop record table
/usr/local/bin/psql -c "\
select recnum into radstop from radius where \"Acct-Status-Type\" = 'Stop';\
" radius
echo Filtering stop records
/usr/local/bin/psql -c "\
select a.* into stop from detail as a, radstop as b where a.oid = b.recnum;\
" radius
/usr/local/bin/psql -c "select count(*) as \"Filtered\" from stop;" radius
echo Cleaning stop records
/usr/local/bin/psql -c "\
delete from detail where oid = radstop.recnum;\
" radius
echo Cleaning temporary tables
/usr/local/bin/psql -c "drop table radstop;" radius
/usr/local/bin/psql -c "vacuum detail;" radius

#
# Move start and alive records to start table
#
echo Creating unique start record table
/usr/local/bin/psql -c "\
select recnum into radstart from radius where \"Acct-Status-Type\" != 'Stop';\
" radius
  

Re: [GENERAL] 7.3.4 RPM

2003-08-19 Thread Guy Fraser
Before anyone can make an RPM for you, they will need some more information.

What type of CPU are you using ? {SPARC, ALPHA, Pentium ...}

What kernel, and libraries are you using?

Good luck

Guy

Vilson farias wrote:

Hi again

I'm still using RedHat 6.2. I would be happy if I could find some PostgreSQL
7.3.4 RPMs for this old version of RedHat Linux. Are you planning to release
them? Maybe you should start a new User Survey at the postgresql.org main page
asking people that use Linux which version/release they use. This information
could be valuable to coordinate the RPM generation effort.
Best regards,



José Vilson de Mello de Farias
Software Engineer
Dígitro Tecnologia Ltda - www.digitro.com.br
APC - Customer Oriented Applications
E-mail: [EMAIL PROTECTED]
Tel.: +55 48 281 7158
ICQ 11866179


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html

 

--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.




---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] The ..... worm

2003-08-22 Thread Guy Fraser
Running Mozilla on Linux and having my mail processed by Postini,
http://www.postini.com, I haven't had any problems other than lots of
quarantined mail at Postini. Having the mail quarantined off site saves
bandwidth as well.

I work at an ISP and we use Postini for all email that ends up on our
mail servers.

We used to run SpamAssassin for all our mail, but since we moved to
Postini our bandwidth savings have been great. I still have SpamAssassin
running for my account, but Postini is so good that only about 1% of the
UE {unsolicited email} makes it through, and SpamAssassin usually
catches about half of the ones that do. In a week I usually get about
one UE to my inbox, 5 into my SpamAssassin mailbox and about 500
quarantined at Postini. Now that I have my white lists set up at Postini
I only get one or two legitimate messages captured per week. I normally
get around 5000-10,000 messages a week, so having all UE quarantined off
site, where I can delete it without downloading it, saves a lot of time
and bandwidth.

I don't work for, or get kickbacks from, Postini. They are worthwhile
looking into, especially for medium to large organizations, because they
keep their virus checkers and UE algorithms up to date, and most medium
to large organizations can recoup their Postini costs through savings in
bandwidth and in the lost productivity of staff having to download and
pick out the UE from the real mail, possibly getting infected by a virus
while doing so.

NOTE: I use UE so that Hormel {http://www.spam.com/} doesn't get upset
with me ;-)

Guy

PS Keep your worm to your self :-D

Dennis Gearon wrote:


Jeessh, a lot of people have my email address.
I have received about 500 copies of the worm in the last 24 hours. My
mail spool at work was so full I couldn't get out or relay or
anything. The weird part is that it's my work address, and I'm
subscribed to almost all my lists through the address above or my
previous home address. YEARS ago I was using the work address for
lists, but not for a LONG time.




---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Picture with Postgres and Delphi

2003-09-10 Thread Guy Fraser
Thanks, that is extremely helpful.

Guy

Jonathan Bartlett wrote:

What is the size limit of bytea, I thought it was 8K?
 

No limit that I've found.  Some are several meg.

 

How do you dump your database when you have bytea, do you need to do a
binary dump?
 

Nope.  pg_dump automagically escapes everything.

 

What are you using to insert the binary data?
 

Perl example:

my $COMPLETED_TEMPLATE_VARS_INSERT = <
   $sth = $dbh->prepare($COMPLETED_TEMPLATE_VARS_INSERT);
   $value = undef;
   $binvalue = $field->{BINANS};
   $value = $field->{ANS} unless $binvalue;
   $sth->bind_param(1, $self->getOID);
   $sth->bind_param(2, $name);
   $sth->bind_param(3, $value);
   $sth->bind_param(4, $binvalue, DBI::SQL_BINARY);
   $sth->execute || die("DBERROR:${DBI::errstr}:");
Note that I explicitly set DBI::SQL_BINARY.

Now, for php, you do the following:

$logodata = pg_escape_bytea($tmpdata);
$tmpsql = "update advertisements set $column_name = '$logodata'::bytea where object_id = $advertisement_oid";
$tmp = $db->query($tmpsql);
I never got it to work with parameterized queries, but this works fine for
me.  To select it back out, you need to do:
$q = $db->query("select teaser_logo_gif_image from advertisements where object_id = ?::int8", array($_GET['advertisement']));
$row = $q->fetchrow();
$data = pg_unescape_bytea($row[0]);
NOTE that many versions of PHP include pg_escape_bytea but NOT
pg_unescape_bytea.  Look in the docs to see which function appeared in
which version.
Jon
 



---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Picture with Postgres and Delphi

2003-09-10 Thread Guy Fraser
What is the size limit of bytea, I thought it was 8K?

How do you dump your database when you have bytea, do you need to do a 
binary dump?

What are you using to insert the binary data?

Thank you in advance.

Guy

Jonathan Bartlett wrote:

For the education of me and maybe others too, why was
that?  i.e. what problems did you run into, that bytea avoids?
   

Compared to the filesystem, bytea provides data integrity.

Bytea gives you remote access, which you can cache if needed.

Bytea gives you the same permissions as anything else in Postgres, so you
don't have to worry about that separately.
Compared to BLOBs, bytea's are just simpler.  You can select them with a
single statement, you don't have to worry about leaving unreferenced
BLOBs, and, after 4 billion inserts, byteas are still meaningful while
BLOBs might not be. (due to OID problems).
Jon

 

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com
   



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html

 

--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.




---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] detecting a NULL box

2003-09-22 Thread Guy Fraser
I don't know why you want to list a NULL with no other info, but here 
you go:

SELECT coords FROM dlg_control WHERE coords IS NULL LIMIT 1;

[EMAIL PROTECTED] wrote:

Does anybody know how to detect a NULL in a geometric box type?

When I execute the following sql statement (coords is a box type)
autotest=# select coords from dlg_control limit 1
autotest-# \g
coords

(1 row)

So, I have data that contains a "NULL" in the coords column, but when I
test on box(NULL) I don't get any data...
autotest=# select coords from dlg_control where coords=box(NULL) limit 1
autotest-# \g
coords

(0 rows)
Any ideas?
Thanks,
Dennis




---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
 

--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.




---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: VS: [GENERAL] Backup/dump of huge tables and performance

2000-08-02 Thread Guy Fraser

Hi

I have been using this method for over a year now.

NOTE:   Beware of "\" (escape) characters.
If you have to use them, they must be changed to "\\".
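A one-line sketch of that escaping rule (the sed filter is an assumption; anything that doubles backslashes before the data reaches COPY will do):

```shell
# COPY FROM stdin treats "\" as an escape character, so literal
# backslashes in the data must be doubled before piping it to psql.
printf 'C:\\temp\\file.txt\n' | sed -e 's/\\/\\\\/g'
# Prints: C:\\temp\\file.txt
```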


This is a sample of something similar to what I do:

/bin/cat tab-delimited.txt | /usr/bin/psql -c "copy table1 from stdin;"
database1

The "/bin/cat tab-delimited.txt" represents a program that generates the
required tab-delimited text.

Example file.

tab-delimited.txt => 7713857 bytes, 69423 lines

Example table.

Table = table1
+----------+---------+--------+
| Field    | Type    | Length |
+----------+---------+--------+
| Time     | abstime |      4 |
| Type     | text    |    var |
| Id       | text    |    var |
| Duration | int4    |      4 |
| Server   | inet    |    var |
| User     | text    |    var |
| Location | text    |    var |
| Port     | int4    |      4 |
| Service  | text    |    var |
| Remote   | inet    |    var |
| Caller   | text    |    var |
| Input    | int4    |      4 |
| Output   | int4    |      4 |
| State    | text    |    var |
| Delay    | int4    |      4 |
+----------+---------+--------+

Resources required to import data :
21.82 seconds elapsed
10% CPU utilization
(183 major + 29 minor) page faults
0 swaps

Machine has 128 MB Total memory, 8.5 MB shared memory on a 
Pentium Pro 200 with 256 KB L2 cache
and a 9 GB SEAGATE ST19101W Ultra Wide SCSI Drive
running postgresql-6.5.3.

This translates to approximately 3182 records per second,
averaging 111 bytes per record, using only 10% CPU.



[GENERAL] How can I do an UPDATE OR CREATE ?

1999-07-21 Thread Guy Fraser

Hi

I am trying to figure out how I can create an entry if one does not
exist when attempting an update.

This is the kind of thing I want to do:

if (select "User-Name","Realm" from details;) {
update details set "Time-Used" = old."Time-Used" +
new."Time-Used";
} else {
insert into details "User-Name","Realm","Time-Used";
}

I have not seen any documentation specifying what programmatic operators
are available, if any.
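Without a built-in upsert, one common workaround is to attempt the UPDATE first and INSERT only when no row matched. This is a sketch only: the literal values are placeholders, and the pair of statements should run inside a transaction to narrow the race window:

```sql
BEGIN;

UPDATE details
   SET "Time-Used" = "Time-Used" + 42
 WHERE "User-Name" = 'bob' AND "Realm" = 'example.net';

INSERT INTO details ("User-Name", "Realm", "Time-Used")
SELECT 'bob', 'example.net', 42
 WHERE NOT EXISTS (SELECT 1 FROM details
                    WHERE "User-Name" = 'bob'
                      AND "Realm" = 'example.net');

COMMIT;
```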

Thank You In Advance

Guy Fraser
mailto:[EMAIL PROTECTED]