[SQL] Re: [PHP] a script that queries database periodically

2000-11-30 Thread Jason

> I was thinking of writing a PHP script and putting it into crontab, which
> is somehow easier than a shell script, but PHP is an Apache module, so I
> cannot execute PHP under crontab (it has to be executed in a browser,
> right?).  I guess a shell script is necessary.  So, is it possible to
> call 'psql', get its query result back, and then use sendmail to
> email the result?  Any other ideas?

If you know Perl, you should check out DBI.  Read all about it at
www.cpan.org.  It's pretty similar to the PHP database interface.
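
For the shell route described in the question, a crontab entry along these lines should do it; database name, query, and address are all placeholders, and it assumes psql and mail are on cron's PATH:

```
# run daily at 06:00; -t suppresses column headers and row counts in the output
0 6 * * *  psql -d mydb -t -c "select * from orders where created > now() - '1 day'::interval" | mail -s "daily report" admin@example.com
```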



Re: [PHP-DB] Re: [SQL] a script that queries database periodically

2000-12-03 Thread Jason

On Mon, Nov 27, 2000 at 02:56:59PM -0700, Roberto Mello wrote:
> 
>   PHP does not have a scheduling facility? AOLserver (the web/application
> server that powers AOL) has had such facility (and many many others for
> db-backed websites) since 1995. ns_schedule_proc.
>   http://www.aolserver.com

AOLserver is a web/application server.  PHP is a server-side scripting
language.  Why exactly *should* it have a job scheduler?

Some (such as myself) might also ask why should a web server have a job
scheduler, but that's a thread for a different list :)




[SQL] Date: the day before yesterday.

2001-08-14 Thread Jason

I am trying to retrieve records generated in the past two days and
have run into difficulties dealing with dates in Oracle.
Here is the query I am trying to form:

select * from Table where InputDate>=[the day before yesterday]

I tried "sysdate-2", but it didn't work.
Any suggestions?
Thanks!

Jason
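
For reference, one common gotcha here: SYSDATE carries a time-of-day component, so "sysdate-2" compares against a point exactly 48 hours ago rather than a calendar day boundary. A sketch of the usual Oracle fix, truncating to midnight first (the table name is a stand-in, since TABLE itself is a reserved word):

```sql
-- rows stamped since midnight two days ago (Oracle syntax)
SELECT *
  FROM my_table
 WHERE InputDate >= TRUNC(SYSDATE) - 2;
```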




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[SQL] copy DELETES to audit table

2000-08-21 Thread jason watkins

I'd like to have a trigger that does something like serialize any row to a
text string, then copies it into an audit table before allowing the delete.
Having a restore function that can deserialize would be nice as well, but
not critical.

The goal would be to allow a couple of people who need the ability to remove
rows to do so, but not to lose the information for a week or so, so that
the choice can be double-checked by someone else.

jason watkins
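
A minimal sketch of the idea in PL/pgSQL, with a hypothetical items table; the audit copy keeps every column of the deleted row plus a deletion timestamp (on very old servers the trigger return type was spelled opaque rather than trigger):

```sql
-- audit table: same columns as items, no rows, plus a timestamp
CREATE TABLE items_audit AS SELECT * FROM items WHERE false;
ALTER TABLE items_audit ADD COLUMN deleted_at timestamp;

CREATE FUNCTION items_audit_fn() RETURNS trigger AS '
BEGIN
    -- copy the doomed row into the audit table before it goes away
    INSERT INTO items_audit SELECT OLD.*, now();
    RETURN OLD;   -- returning OLD allows the delete to proceed
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER items_audit_trg BEFORE DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE items_audit_fn();
```

Restoring a row is then an ordinary INSERT ... SELECT from items_audit, and a periodic job can purge audit rows older than the review window.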




Re: [SQL] Birthday search.

2001-03-22 Thread Jason Earl

Here's an example of something that works with some of
the data that I have lying around.  Replace exit_dt
with "birthday" (or whatever you have named the field
that holds the birthday information) and replace
ticket_no with name (or whatever) and you should be on
the right track.

SELECT exit_dt, ticket_no FROM tscales 
   WHERE (
 date_part('month', exit_dt) > 4 AND 
 date_part('month', exit_dt) < 12) 
 OR (
   date_part('month', exit_dt) = 4 AND 
   date_part('day', exit_dt) >= 15
   ) 
 OR (
   date_part('month', exit_dt) = 12 AND 
   date_part('day', exit_dt) <= 20)
   ORDER BY exit_dt;

I don't consider myself enough of a SQL guru to know
if this is the "correct" way to do this, but this
query selects some 21000 records out of a 12
record table on my desktop test machine (PII 400 96M
Ram standard IDE hard drive running Linux) without any
indexes on exit_dt in a couple of seconds.

For more information check out the user guide section
on date/time functions.  It is quite interesting.  The
extract function would have served as well, but it
requires more typing.
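
For comparison, the same predicate spelled with extract (equivalent to the date_part form above):

```sql
SELECT exit_dt, ticket_no FROM tscales
 WHERE (extract(month FROM exit_dt) > 4 AND extract(month FROM exit_dt) < 12)
    OR (extract(month FROM exit_dt) = 4 AND extract(day FROM exit_dt) >= 15)
    OR (extract(month FROM exit_dt) = 12 AND extract(day FROM exit_dt) <= 20)
 ORDER BY exit_dt;
```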

Jason

--- Antti Linno <[EMAIL PROTECTED]> wrote:
> I have a table with personal data, including date of
> birth in date format.
> What i need is a select that searches birthdays in
> certain range.
> E.g. I need to find all birthdays starting from
> 15.april until 20th
> december. As you all understand, year doesn't matter
> :P
> Any simple and elegant solutions to it?
> 
> Greetings,
>   Antti
> 
> 
> 


__
Do You Yahoo!?
Get email at your own domain with Yahoo! Mail. 
http://personal.mail.yahoo.com/




Re: [SQL] Sorting and then...

2001-04-10 Thread Jason Earl

SELECT name FROM test ORDER BY id DESC LIMIT 10;

Take care,
Jason


--- Wei Weng <[EMAIL PROTECTED]> wrote:
> Suppose I have a table 
> 
> create table test
> (
> id integer,
> name text
> );
> 
> And I want to get the names of the largest 10 "id"s.
> How can I do that in 
> sql?
> 
> Thanks!
> 
> 






Re: [SQL] system time

2001-05-15 Thread Jason Earl

Try using:

SELECT now()

It should do what you want.  If not there are a whole
pile of date functions.  Look them up in the Users
Guide.
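
A few of those functions side by side, for the curious:

```sql
SELECT now();              -- timestamp at the start of the transaction
SELECT CURRENT_TIMESTAMP;  -- SQL-standard spelling of the same thing
SELECT CURRENT_DATE;       -- just the date portion
SELECT timeofday();        -- wall-clock time as text; advances within a transaction
```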

Hope this is helpful.
Jason

--- Seema Noor <[EMAIL PROTECTED]> wrote:
> is there any function from which i can get system
> time?
> 
>







Re: [SQL] batch file

2001-05-25 Thread Jason Earl


I just put the commands that I want in a file
like "foo.sql" and then do:

psql mydatabase -f foo.sql

Is this what you are looking for?
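
For example, a hypothetical foo.sql might read as follows; psql executes the statements top to bottom:

```sql
-- foo.sql: run with  psql mydatabase -f foo.sql
BEGIN;
CREATE TABLE log (note text);
INSERT INTO log VALUES ('first');
INSERT INTO log VALUES ('second');
COMMIT;
```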

Jason


--- Milen <[EMAIL PROTECTED]> wrote:
> Hi all!,
> 
> I want to execute several SQL statements one after
> another.
> Does anyone know if it is possible to create
> something like a "batch file" 
> which contains all SQL commands I want.
> 
> Best Regards:
> Milen
> 






[SQL] drop table if exists

2001-07-03 Thread Jason Watkins

How can I duplicate the behavior of:

DROP TABLE IF EXISTS mytable;

CREATE TABLE mytable (
blah,
blah
);

INSERT INTO mytable
(blah)
VALUES
(blah);

in other words, so that I have a single sql file that restores the database
to a known state.





[SQL] Re: drop table if exists

2001-07-03 Thread Jason Watkins

Doesn't work.  I like wrapping the entire file in a transaction so that if
I make a stupid syntax error or the like, I can just roll back.  Because of
that, the failed DROP puts the transaction into the aborted state.  I
suppose I can just stop using transactions and use this method.
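
For reference, PostgreSQL 8.2 and later support the syntax natively, which avoids the aborted-transaction problem entirely:

```sql
BEGIN;
DROP TABLE IF EXISTS mytable;   -- just a NOTICE if mytable is absent (8.2+)
CREATE TABLE mytable (
    blah text
);
COMMIT;
```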





[SQL] Updated rowcount

2001-07-10 Thread Jason Alexander

Hello all,


I'm attempting to find the property (if it exists) that lets me check
how many rows were affected by an UPDATE.  The equivalent would be
@@ROWCOUNT in SQL Server, etc...

As an example:

UPDATE tablefoo
SET foo1 = 'spork'
WHERE foo2 = 'goomba'

IF @@ROWCOUNT = 0
  INSERT INTO tablefoo
  .
  .
  .
  .

And, so on...

TIA,
-Jason
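
For reference, the closest PostgreSQL equivalent lives in PL/pgSQL, where the FOUND variable (or GET DIAGNOSTICS ... ROW_COUNT) reports on the last statement executed. A sketch of the update-else-insert idiom using the names from the example, with foo1/foo2 assumed to be text columns:

```sql
CREATE FUNCTION upsert_tablefoo() RETURNS void AS '
BEGIN
    UPDATE tablefoo SET foo1 = ''spork'' WHERE foo2 = ''goomba'';
    IF NOT FOUND THEN                -- the @@ROWCOUNT = 0 case
        INSERT INTO tablefoo (foo1, foo2) VALUES (''spork'', ''goomba'');
    END IF;
END;
' LANGUAGE 'plpgsql';
```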






[SQL] Re: [NOVICE] Knowing new item in table...

2001-08-03 Thread Jason Earl


--- macky <[EMAIL PROTECTED]> wrote:
> im still in the dark when it comes to postgres.
> is there a way i can
> know if there was a new entry on a table...

That's a pretty common problem.

> scenario...
> 
>   i have a script that runs every 5 minutes... that
> script checks if there's
> a new entry in that table... and if YES there's a
> new entry ill do some
> processing...

Probably the most straightforward thing to do is to
simply insert a timestamp in each row as it is added
to the database.  That way you can do a simple select
to figure out which rows have been added:

SELECT * FROM my_table WHERE insert_time > now() - '5 minutes'::interval

or alternatively

SELECT * FROM my_table WHERE insert_time > '2001-08-03 11:30'

The best part is that creating a column that includes
a timestamp automagically is fairly easy to do. 
Simply define your table like this:

CREATE TABLE foo (
    insert_time timestamp DEFAULT CURRENT_TIMESTAMP,
    name        text
);

Once your table is created you simply insert into
table foo ignoring the insert_time column like so:

INSERT INTO foo (name) VALUES ('Jason');

and your timestamp automagically gets inserted:

processdata=> SELECT * FROM foo;
      insert_time       | name
------------------------+-------
 2001-08-03 11:32:48-06 | Jason
(1 row)

Pretty neat, huh?

> is there an internal utility that i can use in
> postgres that can tell me
> that this row is a newcomer... hehehe

Nope, you have to come up with the logic yourself. 
However, PostgreSQL has all kinds of tools that are
really helpful.

> if someone has other ideas on how to deal with this
> speak out...
> 
> thanks in advance..
> 
> btw..
>my idea is that that table will have an addtional
> column as reference
> lets say column "READ" 1 for yes 0 for NO
> 

That would work too, but it would be a lot harder. 
For example, you would have to first select all the
rows where READ is 0, do your processing, and then
update all of those rows to 1.  You would almost
certainly want to do all of this in a transaction so
that you could roll READ back to 0 if something went
wrong, and you would probably want to lock the table
to boot as you would have to worry about your
processing step taking more than 5 minutes.  If it
did, the second transaction would see the last 10
minutes of inserts as being unread even though the
first transaction was still working on them.
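
A sketch of that flag-based variant with the locking it would need (table and column names hypothetical):

```sql
BEGIN;
LOCK TABLE my_table IN SHARE ROW EXCLUSIVE MODE;  -- keep a second worker out
SELECT * FROM my_table WHERE read = 0;            -- ... do the processing ...
UPDATE my_table SET read = 1 WHERE read = 0;
COMMIT;                                           -- or ROLLBACK to leave read = 0
```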

I hope this is helpful,
Jason






[SQL] Re: [NOVICE] Arithmetic operation on DATE

2001-08-14 Thread Jason Wong

I know you can do it for days, thus:

   expires = CURRENT_TIMESTAMP + 10

would be 10 days from now. Don't know about months though.

HTH
-- 
Jason Wong
Gremlins Associates
www.gremlins.com.hk
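
For the months case, PostgreSQL's interval type does handle the arithmetic directly; a quick sketch:

```sql
SELECT date '2001-08-20' + interval '6 months';   -- 2002-02-20 00:00:00
SELECT CURRENT_TIMESTAMP + interval '10 days';    -- the "+ 10" example, spelled out
```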


- Original Message - 
From: macky <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, August 13, 2001 5:48 PM
Subject: [NOVICE] Arithmetic operation on DATE


> is it possible to add a date datatype column to a number resulting to a
> date..
> 
> 
> theoretically it should do this,,,
> 
> X is in months
> 
> date + X = date
> 
> -->  2001-08-20 + 6 = 2002-02-20
> 





Re: [SQL] changes to table creation syntax in 7.1.2?

2001-08-29 Thread Jason Earl

It looks like it works here :(.  Do you have an error
message?

processdata=# select version();
                           version
---------------------------------------------------------------
 PostgreSQL 7.1.2 on i686-pc-linux-gnu, compiled by GCC 2.95.4
(1 row)

processdata=# CREATE TABLE category
processdata-# (
processdata(# uid int4 PRIMARY KEY,
processdata(# description text NOT NULL,
processdata(# parent int4 NULL REFERENCES
category(uid)
processdata(# )
processdata-# 
processdata-# ;
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit
index 'category_pkey' for table 'category'
NOTICE:  CREATE TABLE will create implicit trigger(s)
for FOREIGN KEY check(s)
CREATE

--- Jayson Callaway <[EMAIL PROTECTED]> wrote:
> In postgres 7.0.x I had some working code that
> lookes something like:
> 
> CREATE TABLE category
> (
> uid int4 PRIMARY KEY,
> description text NOT NULL,
> parent int4 NULL REFERENCES category(uid)
> )
> 
> After upgrading to postgres 7.1.2 however this
> syntax is not accepted
> anymore. I receive an error that says it can not
> create the reference
> because the table category does not exist.
> 
> How do I setup this type of reference structure in
> 7.1.2? Did the syntax
> change?
> 
> I am running under Linux.
> --
> Jayson Callaway
> 
> 
> 






[SQL] HELP: Trend reporting

2001-08-30 Thread Jason Charette

Hello newsgroup buddies,

I am looking for information on trend reporting. We run Linux, PostgreSQL,
Apache and PHP4. We have a series of scripts that track customer usage
through our sites, and give us similar information to what is found in the
Apache logs, but with more data.

I now find myself in the position of having to create "trend reports" or
"common routes through our site".

Any search I have done has yielded many commercial products that will do
this for me, but, unfortunately, I am looking more for a tutorial on how to
do this, rather than a product that will do it for me.

Essentially, I would like a tutorial or articles that discuss this
topic, preferably something along the lines of my aforementioned setup (I
would like to minimize the amount of converting I have to do).

If anyone can help, in any way, shape or form, it would be greatly
appreciated.

Thanks in advance,

Jason






[SQL] to_date/to timestamp going to BC

2001-10-03 Thread jason . servetar

Can someone tell me if this is a bug in the date functions, or am I using
them incorrectly?
If anyone has a workaround, I could use some help with this data
conversion.

dev=> create table test_date (dt varchar(100));
CREATE

dev=> insert into test_date values ('March 11, 1997');
INSERT 706020 1

dev=> select dt, to_date(dt, 'Month dd, YYYY'), to_timestamp(dt, 'Month dd, YYYY') from test_date;
       dt       |    to_date    | to_timestamp
----------------+---------------+---------------
 March 11, 1997 | 0001-03-19 BC | 0001-03-19 BC

drw_dev=> select version();
                           version
---------------------------------------------------------------
 PostgreSQL 7.1 on sparc-sun-solaris2.6, compiled by GCC 2.95.2
(1 row)







Re: [SQL] system maintained keys

2001-10-19 Thread Jason Earl

Check out the SERIAL type.  It does precisely what you
want.  Here is an idea of how it is used:

CREATE TABLE foo (
  prim_key SERIAL PRIMARY KEY,
  bar  text
);

I tend to create sequences by hand like this:

CREATE SEQUENCE my_sequence_seq;

And then I create my table with a definition like
this:

CREATE TABLE foo (
  prim_key int DEFAULT nextval('my_sequence_seq')
               PRIMARY KEY,
  bar      text
);

But that's just because I have been using PostgreSQL
long enough that it didn't have the SERIAL type when I
started.  The SERIAL type is just syntactic sugar for
what I generally do the long way.

Either way you simply pretend that the column isn't
there when you do inserts (unless you know what you
are doing) like so:

INSERT INTO foo (bar) VALUES ('hello');
INSERT INTO foo (bar) VALUES ('goodbye');

And then when you select you get:

processdata=> SELECT * FROM foo;
 prim_key |   bar
----------+---------
        1 | hello
        2 | goodbye
(2 rows)

I hope that is helpful,
Jason Earl

--- Stefan Lindner <[EMAIL PROTECTED]> wrote:
> Is there any way to get system maintained keys from
> postgres? e.g. to
> have a table with a primary key column (varchar or
> int) and let postgres
> chose the next unique value for this column?
> 
> 
> 






Re: [SQL] ORDER BY case insensitive?

2001-10-04 Thread Jason Earl

SELECT * FROM MyTable ORDER BY lower(Name);

Should do the trick.

Jason Earl

--- Bob Swerdlow <[EMAIL PROTECTED]> wrote:
> How do I get the rows sorted in a case insensitive
> way?
> 
> I have some queries that basically fit the form:
>  SELECT * FROM MyTable ORDER BY Name;
> When I view the results, all of the Name's that
> start with an upper case
> letter precede all of the Name's that start with a
> lower case letter.  I
> want them all in alphabetic order, regardless of
> case.
> 
> I've looked in PostgreSQL Introduction and Concepts
> by Bruce Momjian and in
> the FAQ at
> http://postgresql.overtone.org/docs/faq-english.html
> 
> Thanks for your help.
> 
> --
> Bob Swerdlow
> Chief Operating Officer
> Transpose, LLC
> [EMAIL PROTECTED]
> 
> 
> 






Re: [SQL] to_date/to timestamp going to BC

2001-10-04 Thread jason . servetar

Thanks Karel,
Good call, you guessed it. I was just applying my Oracle knowledge of
to_date and expecting Oracle's results in pgsql. I guess I should not
make those assumptions.

-Original Message-
From: Karel Zak [mailto:[EMAIL PROTECTED]]
Sent: Thursday, October 04, 2001 2:42 AM
To: Tom Lane
Cc: Servetar, Jason; [EMAIL PROTECTED]
Subject: Re: [SQL] to_date/to timestamp going to BC


On Wed, Oct 03, 2001 at 05:14:02PM -0400, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > Can someone tell me if this is a bug with the date functions or am I
using
> > them incorrectly?
> 
> I get the right thing when I use the right format:
> 
> regression=# select dt, to_timestamp(dt, 'FMMonth dd, YYYY') from test_date;
>        dt       |      to_timestamp
> ----------------+------------------------
>  March 11, 1997 | 1997-03-11 00:00:00-05
> (1 row)
> 
> However, I'd agree that this shows a lack of robustness in to_timestamp;
> it's not objecting to data that doesn't match the format.

 The manual is transparent about this.  I could add a feature that
checks everything, but users who know the manual and use already
debugged queries would then spend CPU on unwanted code.

Hmm.. I looked at Oracle, and it parses queries like:

SVRMGR> select to_date('March 11, 1997', 'Month dd, YYYY') from dual;
TO_DATE('
-
11-MAR-97
1 row selected.

.. well, I will add it to my TODO for 7.3 (I plan to rewrite several
things in the to_* functions).

Karel

-- 
 Karel Zak  <[EMAIL PROTECTED]>
 http://home.zf.jcu.cz/~zakkr/
 
 C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz




Re: [SQL] ORDER BY case insensitive?

2001-10-04 Thread Jason Earl

My guess is that compared to the task of sorting
millions of names the fact that you have to lowercase
them first is not going to be a particular burden.  No
matter what you do you are going to get a table scan
(unless you qualify your select with a where clause).

You can, however, create an index like:

create index MyTable_lower_idx on MyTable
(lower(name));

It won't help with your particular query, but it
certainly would help for queries like:

SELECT * FROM MyTable WHERE lower(name) = 'jason';

It is also possible to create a trigger that would
automatically lowercase information when it is
inserted or updated.  Or even less high tech you can
make sure that all of your INSERTS and UPDATES use the
lower command like this:

INSERT into MyTable (name) VALUES (lower('VALUE'));

Jason

--- Bob Swerdlow <[EMAIL PROTECTED]> wrote:
> Thanks for the suggestion, Jason.
> 
> A co-worker of mine, however, had this response:
> Yes, but my guess is that that will first
> convert all million (or
> whatever) records to upper case, and then physically
> sort them.  It won't be
> able to make use of the index.
> 
> To make this efficient, do we need to uppercase all
> of the data before
> adding to the table? (yuk)
> 
> - Bob
> 
> 
> - Original Message -
> From: "Jason Earl" <[EMAIL PROTECTED]>
> To: "Bob Swerdlow" <[EMAIL PROTECTED]>;
> <[EMAIL PROTECTED]>
> Sent: Thursday, October 04, 2001 3:47 PM
> Subject: Re: [SQL] ORDER BY case insensitive?
> 
> 
> > SELECT * FROM MyTable ORDER BY lower(Name);
> >
> > Should do the trick.
> >
> > Jason Earl
> >
> > --- Bob Swerdlow <[EMAIL PROTECTED]> wrote:
> > > How do I get the rows sorted in a case
> insensitive
> > > way?
> > >
> > > I have some queries that basically fit the form:
> > >  SELECT * FROM MyTable ORDER BY Name;
> > > When I view the results, all of the Name's that
> > > start with an upper case
> > > letter precede all of the Name's that start with
> a
> > > lower case letter.  I
> > > want them all in alphabetic order, regardless of
> > > case.
> > >
> > > I've looked in PostgreSQL Introduction and
> Concepts
> > > by Bruce Momjian and in
> > > the FAQ at
> > >
> http://postgresql.overtone.org/docs/faq-english.html
> > >
> > > Thanks for your help.
> > >
> > > --
> > > Bob Swerdlow
> > > Chief Operating Officer
> > > Transpose, LLC
> > > [EMAIL PROTECTED]
> > >
> > >
> > >
> >
> 






Re: [SQL] to_date/to timestamp going to BC

2001-10-03 Thread jason . servetar

Thanks Tom, that worked great.
I guess I should not have skipped the
FM prefix section of the date conversion doc.

drw_dev=> select dt, to_date(dt, 'FMMonth dd, YYYY'),
drw_dev-> to_timestamp(dt, 'FMMonth dd, YYYY') from test_date;
       dt       |  to_date   |      to_timestamp
----------------+------------+------------------------
 March 11, 1997 | 1997-03-11 | 1997-03-11 00:00:00-07

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 03, 2001 3:14 PM
To: Servetar, Jason
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] to_date/to timestamp going to BC 


[EMAIL PROTECTED] writes:
> Can someone tell me if this is a bug with the date functions or am I using
> them incorrectly?

I get the right thing when I use the right format:

regression=# select dt, to_timestamp(dt, 'FMMonth dd, YYYY') from test_date;
       dt       |      to_timestamp
----------------+------------------------
 March 11, 1997 | 1997-03-11 00:00:00-05
(1 row)

However, I'd agree that this shows a lack of robustness in to_timestamp;
it's not objecting to data that doesn't match the format.

regards, tom lane




Re: [SQL] Triggers do not fire

2001-10-17 Thread Jason Earl

I can vouch for that.  I have several tables with 10
to 16 million entries in much the same sort of setup
as you are describing (primary key, timestamp, value).

PostgreSQL will quite happily use the timestamp
indexes when accessing this table, and it doesn't
fall back to a sequential scan until a considerable
number of tuples would be searched.  For example:

processdata=> explain select count(*) from
caseweights1 where dt > '2001-10-11'; 
NOTICE:  QUERY PLAN:

Aggregate  (cost=255053.37..255053.37 rows=1 width=0)
  ->  Index Scan using caseweights1_dt_idx on caseweights1
        (cost=0.00..254827.01 rows=90544 width=0)

EXPLAIN
processdata=> select count(*) from caseweights1 where
dt > '2001-10-11';
 count
--------
 146773
(1 row)


processdata=> select count(*) from caseweights1;
  count
----------
 14984087
(1 row)


As you can see, even though my table is fairly large
PostgreSQL will happily use indexes for queries even
when there is a significant number of tuples that are
to be accessed.  The count command with the index took
perhaps a second on my 400MHz 128M ram normal IDE hard
drive test server.  The count of all the records, on
the other hand, triggered a sequential scan that took
a long time to complete.

In other words, chances are good that PostgreSQL will
handle your data without special modification.

Jason

--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Reiner Dassing <[EMAIL PROTECTED]> writes:
> > I have a table which has a lot of entries (some x
> millions) of the kind
> > (id, timestamp, value)
> > The access (selects) is concentrated to the timely
> last some thousands
> > entries. 
> > To adapt this fact I want to setup a "virtual"
> table - test in my
> > example - which
> > is accessed by the clients but in reality the
> entries are separated to
> > different small
> > tables. These table are dynamically created to
> hold the values
> > distinguished by years.
> 
> Why bother?  Seems like you are just making life
> complicated for
> yourself.  One big table with a suitable index ought
> to work fine.
> 
>   regards, tom lane
> 






Re: [SQL] serial column vs. explicit sequence question

2002-06-13 Thread Jason Earl

Charlie Toohey <[EMAIL PROTECTED]> writes:

> I'm having a problem and there seem to be two solutions. It is simple
> and straightforward, but will take several paragraphs to explain.
> 
> I have a schema with a master-detail design. The master table does
> not have an explicit id, so I have a column of type serial.
> 
> Lets say I need to insert a row into the master table and N rows
> into the detail table. After inserting a row into master, and before
> detail, I need to read the master table to obtain the value of the
> id for the row just inserted, so I can insert this id as the foreign
> key value for the N rows in the detail table.
> 
> This seems like a poor solution because I have to write and then
> read the master table each time. With lot of activity on these
> tables, I don't know how well this will scale. Additionally, the
> only way that I can guarantee that I am getting the id of the most
> recent row inserted into master is to SET TRANSACTION ISOLATION
> LEVEL SERIALIZABLE --- because otherwise, if other processes are
> inserting rows into master/detail concurrently, I may pick up the id
> from an incorrect row (the technique used to get the correct id is
> to include a timestamp column on the insert into master and then
> query for the latest row).
> 
> A better solution would seem to use a sequence explicitly, rather
> than a id column of type serial. I would obtain the id value from
> the sequence, and then insert this id into the master table and into
> the detail table. This way, I wouldn't be writing/reading the same
> table constantly -- I would only be writing to it, and, I would
> guarantee that I would be using the correct id in both master and
> detail without have to SET TRANSACTION ISOLATION LEVEL
> SERIALIZEABLE.
> 
> Any comments on which solution you would choose, or is there a
> better solution ?
> 
> Thanks,
> Charlie

The SERIAL type is a thin veneer over an underlying conglomeration of
a unique index and a sequence, nothing more, nothing less.  I still
prefer to use the old syntax that spelled this out explicitly (mostly
because it reminded me that I needed to drop the sequences as well as
the table if I made changes during the development phases of my
project).  Instead of using a serial type I have a whole pile of
scripts that contain bits that look like this:

DROP TABLE prod_journal;
DROP SEQUENCE prod_journal_id_seq;

CREATE SEQUENCE prod_journal_id_seq;

CREATE TABLE prod_journal (
   id   int PRIMARY KEY 
DEFAULT nextval('prod_journal_id_seq'),
   ...
);

The SERIAL type does precisely the same sort of thing.  The only
difference is that PostgreSQL thinks up the sequence name for you
(currently PostgreSQL tries to choose a name that looks precisely like
the one I have chosen in this example).  The reason that I bring this
up is A) it makes me happy to think that I have been using PostgreSQL
long enough that my PostgreSQL memories predate the SERIAL type, and
B) to point out that there is not really a difference between using
the SERIAL type and using sequences explicitly.

What you *really* need is to get acquainted with the nifty sequence
functions currval and nextval.  They hold the secret to sequence
Nirvana.  See Chapter 4 Section 11 of the PostgreSQL User's Guide for
the full scoop.  The short story is that currval gives the current
value of the sequence (for whichever backend you are connected to) and
nextval will give you the next value of the sequence.

Now let's say that you had two simple tables foo for the master record
and bar for the detail records.

test=# create table foo (id serial primary key, name text);
NOTICE:  CREATE TABLE will create implicit sequence 'foo_id_seq' for SERIAL column 'foo.id'
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 'foo_pkey' for table 'foo'
CREATE

test=# create table bar (master int references foo, detail text);
NOTICE:  CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)
CREATE

You could then insert into these tables using something like this:

test=# begin;
BEGIN
test=# insert into foo (name) values ('Jason');
INSERT 67423220 1
test=# insert into bar (master, detail) values (currval('foo_id_seq'), 'Does this work');
INSERT 67423221 1
test=# insert into bar (master, detail) values (currval('foo_id_seq'), 'Apparently So!');
INSERT 67423222 1
test=# commit;
COMMIT

As long as you hang onto your connection to the back end you don't
even have to wrap this as one transaction.  Currval is connection
dependent, and so as long as you have the same connection currval will
give the correct answer, and currval is very very fast.

Hope this was helpful,
Jason




[SQL] Cross tables Like%

2002-09-09 Thread Jason Davis

I have 2 tables, one is products and one is manufactors.

The products table has a column for product descriptions and a manufactor_id
column that is a foreign key to the manufactors table.

The manufactors table has a manufactor_description column for each unique
manufactor_id.

I want to search (using like) on both tables (columns:
Products.Product_description and Manufactors.Manufactor_description).

Can anyone suggest a cross-table %Like% Query for it?

Thanks!
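
The usual shape is a join with the pattern tested against both columns; a sketch, with '%term%' standing in for the search string (ILIKE is PostgreSQL's case-insensitive LIKE):

```sql
SELECT p.product_description, m.manufactor_description
  FROM products p
  JOIN manufactors m ON m.manufactor_id = p.manufactor_id
 WHERE p.product_description ILIKE '%term%'
    OR m.manufactor_description ILIKE '%term%';
```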




[SQL] corresponding primary key for max(column)?

2002-12-23 Thread Jason Pyeron

Is there a better way?

For reasons not defined by me, I cannot constrain the data in the
locations table with unique(ref, ts).

That said, Microsoft MapPoint craps out if there are duplicate values in
the ref column.

So I need to create a view which returns locations.* for the row that
contains the max(ts), grouped by ref.

-jason pyeron

assetdb=# \d locations
Table "locations"
 Column |   Type   |  Modifiers
+--+--
 id | integer  | not null default 
nextval('"locations_id_seq"'::text)
 ref| integer  | not null
 ts | timestamp with time zone |
 lat| numeric(12,10)   | not null
 lon| numeric(13,10)   | not null
 disp   | character varying(63)|
Primary key: locations_pkey

assetdb=# select * from locations ;
 id | ref |  ts   |  lat  |  lon   |   
  disp
+-+---+---++--
  1 |   1 | 2002-12-22 05:47:26.863774-05 | 39.337307 | -76.6245916667 | 0.00MPH 
038.6 degrees
  2 |   1 | 2002-12-22 05:52:57.211416-05 | 39.337307 | -76.6245916667 | 0.00MPH 
038.6 degrees
  3 |   1 | 2002-12-22 05:54:18.125055-05 | 39.337307 | -76.6245916667 | 0.00MPH 
038.6 degrees
  4 |   2 | 2002-12-22 06:04:48.348906-05 | 40.337307 | -76.2459166670 | 0.00MPH 
038.6 degrees
  5 |   2 | 2002-12-22 06:04:48.348906-05 | 99.99 | -99.99 | Foo Data 
Point 0.00MPH 038.6 degrees
  6 |   1 | 2001-01-01 00:00:00-05| 38.00 | -76.90 | dfdsfsd
(6 rows)

assetdb=# SELECT
assetdb-#  l0.id, l0.ref, l0.ts, l0.lat, l0.lon, l0.disp
assetdb-#
assetdb-# FROM
assetdb-#  locations l0
assetdb-#
assetdb-# WHERE
assetdb-#  l0.id =
assetdb-#  ANY (
assetdb(#   SELECT
assetdb(#min(l1.id) AS max
assetdb(#
assetdb(#   FROM
assetdb(#(
assetdb(# SELECT
assetdb(#  l2.id, l2.ref, l2.ts, l2.lat, l2.lon, l2.disp
assetdb(#
assetdb(# FROM
assetdb(#  locations l2
assetdb(#
assetdb(# WHERE
assetdb(#  l2.ts =
assetdb(#  (
assetdb(#   SELECT
assetdb(#max(l3.ts) AS max
assetdb(#   FROM
assetdb(#locations l3
assetdb(#   WHERE
assetdb(#l3.ref = l2.ref
assetdb(#  )
assetdb(#) as l1
assetdb(#
assetdb(#   GROUP BY
assetdb(#l1.ref
assetdb(#  )
assetdb-# ;
 id | ref |  ts   |  lat  |  lon   | 
disp
+-+---+---++---
  3 |   1 | 2002-12-22 05:54:18.125055-05 | 39.337307 | -76.6245916667 | 0.00MPH 
038.6 degrees
  4 |   2 | 2002-12-22 06:04:48.348906-05 | 40.337307 | -76.2459166670 | 0.00MPH 
038.6 degrees
(2 rows)

Seq Scan on locations l0  (cost=0.00..22535105.55 rows=500 width=98)
  SubPlan
->  Materialize  (cost=22535.08..22535.08 rows=1 width=8)
  ->  Aggregate  (cost=22535.06..22535.08 rows=1 width=8)
->  Group  (cost=22535.06..22535.07 rows=5 width=8)
  ->  Sort  (cost=22535.06..22535.06 rows=5 width=8)
->  Seq Scan on locations l2  (cost=0.00..22535.00 rows=5 
width=8)
  SubPlan
->  Aggregate  (cost=22.51..22.51 rows=1 width=8)
  ->  Seq Scan on locations l3  
(cost=0.00..22.50 rows=5 width=8)
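For what it's worth, PostgreSQL's non-standard DISTINCT ON can express "the row
with the latest ts per ref" much more compactly (a sketch; the trailing id sort
key breaks same-ts ties toward the lowest id, matching the min(id) subquery
above):

```sql
CREATE VIEW latest_locations AS
SELECT DISTINCT ON (ref)
       id, ref, ts, lat, lon, disp
FROM   locations
ORDER  BY ref, ts DESC, id;
```

DISTINCT ON (ref) keeps only the first row for each ref in the ORDER BY order,
so sorting ts descending yields one latest row per ref.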





-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
-   -
- Jason Pyeron   http://www.pyerotechnics.com   -
- Owner & Lead  Pyerotechnics Development, Inc. -
- +1 410 808 6646 (c)   500 West University Parkway #1S -
- +1 410 467 2266 (f)   Baltimore, Maryland  21210-3253 -
-   -
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

This message is for the designated recipient only and may contain 
privileged, proprietary, or otherwise private information. If you
have received it in error, purge the message from your system and 
notify the sender immediately.  Any other use of the email by you 
is prohibited.







Re: [SQL] Cross-database references

2003-01-28 Thread Jason Turner
On Tue, Jan 28, 2003 at 05:30:06PM -0200, Pedro Igor Craveiro e Silva wrote:
> Does anyone know how the implementation of cross-database references in
> pgsql is going?  Will some future release have this great functionality?

This is usually left up to the front-end.  Usually for me, perl DBI.  This way, I
can connect to multiple databases, possibly all of different types (PgSQL, Oracle,
Informix, ...).

I'm sure the SQL gurus can correct me, but I was under the impression that 
cross-database references in the backend would be capital-D Difficult to 
implement with transaction support.   Then there's the further tearing asunder
of the ANSI SQL standard...

Jason
-- 
Indigo Industrial Controls Ltd.
64-21-343-545
[EMAIL PROTECTED]




Re: [SQL] Home-brewed table syncronization

2003-07-09 Thread Jason Earl
Michael A Nachbaur <[EMAIL PROTECTED]> writes:

> On Wednesday 09 July 2003 02:28 pm, Cliff Wells wrote:
>> On Wed, 2003-07-09 at 14:14, Michael A Nachbaur wrote:
>> > So, I'm looking at syncronizing 4 tables from one master database to
>> > several child databases. I'm thinking of doing the following with
>> > DBD::Multiplex:
>> >
>> > DELETE FROM TableA;
>> > INSERT INTO TableA (..) VALUES (...);
>> > 
>> >
>> > on all the child databases, but I'm not sure what kind of impact
>> > this would have on my servers.  My impression is that this would
>> > hammer the indexes, and might blow any memory optimization out
>> > the window.  Only a few records in my dataset will change from
>> > time-to-time, but just the process of determining what is
>> > different may take more effort than simply rebuilding.
>>
>> Keep a timestamp associated with each record.  Only update the records
>> with timestamps later than your last sync.
>
> I'm dealing with an existing database structure that, though I can
> change it, has a lot of impact on the rest of my infrastructure.  If
> I can find a way of doing this without resorting to timestamps, I'd
> much rather do it that way.

Would it be possible to add another table (changelog) that contained
the primary key of the record that has changed and a timestamp?  The
changelog table could be maintained via triggers (on update, insert,
or delete).  Your synchronization software could then be relatively
simple.  It would simply need to check your changelog table for rows
that have changed.

Jason
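A minimal sketch of that idea, assuming the synchronized tables each have an
integer primary key named id (the dollar-quoted function body needs
PostgreSQL 8.0+; on older servers the body would be a single quoted string):

```sql
-- One shared log of which rows changed, and how
CREATE TABLE changelog (
    tablename  text        NOT NULL,
    pkey       integer     NOT NULL,
    op         char(1)     NOT NULL,              -- 'I', 'U' or 'D'
    changed_at timestamptz NOT NULL DEFAULT now()
);

-- Generic trigger procedure: records the primary key of the affected row
CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO changelog (tablename, pkey, op)
        VALUES (TG_RELNAME, OLD.id, 'D');
        RETURN OLD;
    END IF;
    INSERT INTO changelog (tablename, pkey, op)
    VALUES (TG_RELNAME, NEW.id, substr(TG_OP, 1, 1));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- One such trigger per synchronized table
CREATE TRIGGER tablea_changelog
AFTER INSERT OR UPDATE OR DELETE ON tablea
FOR EACH ROW EXECUTE PROCEDURE log_change();
```

The sync job then selects from changelog the keys with changed_at later than
its last run and copies (or deletes) only those rows on the child databases.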



[SQL] Changing location of ORDER BY has large effect on performance, but not results...

2005-11-03 Thread Jason Turner
I have two queries that return the same results, but one is six times
slower than the other.  Can anyone enlighten me as to why?

My initial guess is that it is not able to utilize the index on
foo.tracktitle to sort the result set after foo has been joined with
other tables. This seems kind of broken to me. I am running 8.0.4 on
Gentoo Linux.

Thanks,
Jason

--- First Query ---

select
  foo.*,
genre.genrename,
album.albumtitle,
(select performer.performername from performer as p, trackperformers as tp
where p.performerid = tp.performerid
and tp.trackid = foo.trackid
limit 1) AS performername,

  (SELECT coverartid
FROM trackcoverart
WHERE trackcoverart.trackid = foo.trackid
LIMIT 1) as trackcoverart,

 (SELECT albumcoverart.coverartid
  FROM albumcoverart, track
 WHERE foo.trackid = trackid
  AND  albumcoverart.albumid = foo.albumid
 LIMIT 1) as albumcoverart

FROM  (select * from track order by tracktitle) as foo, album, genre

where foo.albumid = album.albumid
and foo.genreid = genre.genreid

offset 2000
limit 20;

--- First Explain Analyze ---

Limit  (cost=20915.07..21123.71 rows=20 width=338) (actual
time=184.997..186.417 rows=20 loops=1)
  ->  Hash Join  (cost=50.81..131860.75 rows=12635 width=338) (actual
time=5.085..185.202 rows=2020 loops=1)
Hash Cond: ("outer".albumid = "inner".albumid)
->  Hash Join  (cost=13.07..938.94 rows=12635 width=318)
(actual time=1.317..34.143 rows=2020 loops=1)
  Hash Cond: ("outer".genreid = "inner".genreid)
  ->  Subquery Scan foo  (cost=0.00..736.34 rows=12635
width=288) (actual time=0.021..16.317 rows=2020 loops=1)
->  Index Scan using track_tracktitle on track 
(cost=0.00..609.99 rows=12635 width=332) (actual time=0.012..4.266
rows=2020 loops=1)
  ->  Hash  (cost=11.66..11.66 rows=566 width=34) (actual
time=1.267..1.267 rows=0 loops=1)
->  Seq Scan on genre  (cost=0.00..11.66 rows=566
width=34) (actual time=0.004..0.737 rows=566 loops=1)
->  Hash  (cost=33.59..33.59 rows=1659 width=24) (actual
time=3.646..3.646 rows=0 loops=1)
  ->  Seq Scan on album  (cost=0.00..33.59 rows=1659
width=24) (actual time=0.012..2.194 rows=1659 loops=1)
SubPlan
  ->  Limit  (cost=0.00..7.53 rows=1 width=4) (actual
time=0.021..0.021 rows=1 loops=2020)
->  Nested Loop  (cost=0.00..7.53 rows=1 width=4)
(actual time=0.019..0.019 rows=1 loops=2020)
  ->  Index Scan using albumcoverart_albumid on
albumcoverart  (cost=0.00..3.01 rows=1 width=4) (actual
time=0.007..0.007 rows=1 loops=2020)
Index Cond: (albumid = $1)
  ->  Index Scan using track_pkey on track 
(cost=0.00..4.51 rows=1 width=0) (actual time=0.007..0.007 rows=1
loops=2020)
Index Cond: ($0 = trackid)
  ->  Limit  (cost=0.00..2.78 rows=1 width=4) (actual
time=0.006..0.006 rows=0 loops=2020)
->  Index Scan using trackcoverart_trackid on
trackcoverart  (cost=0.00..27.80 rows=10 width=4) (actual
time=0.004..0.004 rows=0 loops=2020)
  Index Cond: (trackid = $0)
  ->  Limit  (cost=0.00..0.03 rows=1 width=17) (actual
time=0.028..0.028 rows=1 loops=2020)
->  Nested Loop  (cost=0.00..64.89 rows=2142 width=17)
(actual time=0.025..0.025 rows=1 loops=2020)
  ->  Nested Loop  (cost=0.00..6.05 rows=2
width=0) (actual time=0.019..0.019 rows=1 loops=2020)
->  Index Scan using
trackperformers_trackid on trackperformers tp  (cost=0.00..3.01 rows=1
width=4) (actual time=0.007..0.007 rows=1 loops=2020)
  Index Cond: (trackid = $0)
->  Index Scan using performer_pkey on
performer p  (cost=0.00..3.02 rows=1 width=4) (actual
time=0.007..0.007 rows=1 loops=2020)
  Index Cond: (p.performerid =
"outer".performerid)
  ->  Seq Scan on performer  (cost=0.00..18.71
rows=1071 width=17) (actual time=0.002..0.002 rows=1 loops=2020)
Total runtime: 186.706 ms

--- Second Query ---

select
  foo.*,
genre.genrename,
album.albumtitle,
(select performer.performername from performer as p, trackperformers as tp
where p.performerid = tp.performerid
and tp.trackid = foo.trackid
limit 1) AS performername,

  (SELECT coverartid
FROM trackcoverart
WHERE trackcoverart.trackid = foo.trackid
LIMIT 1) as trackcoverart,

 (SELECT albumcoverart.coverartid
  FROM albumcoverart, track
 WHERE foo.trackid = trackid
  AND  albumcoverart.albumid = foo.albumid
 LIMIT 1) as albumcoverart

FROM  track as foo, album, genre

where foo.albumid = album.albumid
and foo.genreid = genre.genreid

order by foo.tracktitle

offset 2000
limit 20;

--- Second Explain Ana

[SQL] Using In Clause For a Large Text Matching Query

2006-06-29 Thread Jason Farmer

Hello all, my first post to the pgsql mailing list!

There may be a completely better way to do this; if so please help point 
me in the right direction!


What I'm trying to do is run a query to partially match 300+ text fields 
to a PG table of about 200,000 rows. The idea is to pull out a portion 
of text from the original text fields and do a partial text match on 
anything in my PG table containing that substring.


I liked the idea of using a where IN(group) to do my comparisons, as in

select col1 from table1 where col1 in ('text1','text2')

however, this requires an exact string match. Is there any way to do a 
substring match inside of my IN group? Or can anyone think of a better 
way to do something like this?


Heres an example of something of how I'd like this to work:

Portion of 300 Original Text fields:
"brown cat"
"green dog"

2 rows of 200k+ Database table:
"brown kitty"
"green doggy"

We can assume a certain portion of the text is included in the DB table,
so I want to do a substring match on "brown" and "green" and in this case
return both "brown kitty" and "green doggy". The problem is that running
a separate query for each of my 300 fields, scanning the 200,000 rows in
my DB each time, is entirely too slow. So I was hoping to use the IN
clause to create an IN group of about 300 items and scan the DB only once.


I hope this makes sense, but if anything sounds confusing please let me 
know, and I will be sure to clarify! Thanks for any help or direction 
anyone can provide!!


- Jason Farmer



Re: [SQL] Using In Clause For a Large Text Matching Query

2006-06-30 Thread Jason Farmer
Ah, I do think that sounds about like what I want! Let me play with this 
one some, thanks so much!!


Richard Broersma Jr wrote:
> > Well, there is also:
> > http://www.postgresql.org/docs/8.1/interactive/functions-comparisons.html#AEN13377
> >
> > expression operator ANY (array expression).  So, if you have a way to
> > preprocess your input text fields that you want matched, you could
> > build a regex for each and feed them in an array to a '~ ANY'
> > expression like so (or use ~* for case-insensitive matching):
> >
> > SELECT col1
> > FROM table
> > WHERE col1 ~ ANY (ARRAY['regex1', 'regex2', ...]);
>
> Good point, but don't forget to include the list in your response. :-)
>
> Regards,
>
> Richard Broersma Jr.
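Applied to the brown/green example from the original post, the same ANY form
might look like this (a sketch with the poster's hypothetical table and column
names; LIKE works with ANY just as ~ does):

```sql
SELECT col1
FROM   table1
WHERE  col1 LIKE ANY (ARRAY['%brown%', '%green%']);
```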



[SQL] Error: out of memory in create table as

2006-08-03 Thread jason nordwick
I'm having memory problems. I needed to materialize a couple views 
(create table t as select * from v) to get acceptable performance from a 
small data storage system I'm building, but now I get out of memory 
issues on that statement. It is near the bottom of about a 40 line 
function, and it completes up to that point, but when I try to run the 
create with only about a million rows, it fails:


ERROR:  out of memory
DETAIL:  Failed on request of size 856.
CONTEXT:  SQL statement "create table gwt as select * from gworktick"
PL/pgSQL function "processadd" line 30 at SQL statement

I don't see any temp files being generated on disk. Attached is our
postgresql.conf file. We have little Postgres experience (our background
is mostly Oracle, Sybase IQ, etc.).


-jason

P.S., lack of an upsert is really killing us.
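Regarding the P.S.: one common workaround in this era, before any native
upsert existed, was an update followed by a conditional insert (a sketch with
hypothetical names; note it is race-prone under concurrency without a lock or
retry loop):

```sql
-- Try to update first; if the key was absent, insert it instead
UPDATE ticks SET price = 42.0 WHERE sym = 'ABC';
INSERT INTO ticks (sym, price)
SELECT 'ABC', 42.0
WHERE  NOT EXISTS (SELECT 1 FROM ticks WHERE sym = 'ABC');
```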

# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the postmaster.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have 
# to SIGHUP the postmaster for the changes to take effect, or use 
# "pg_ctl reload". Some settings, such as listen_addresses, require
# a postmaster shutdown and restart to take effect.


#---
# FILE LOCATIONS
#---

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'   # use data in another directory
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file

# If external_pid_file is not explicitly set, no extra pid file is written.
#external_pid_file = '(none)'   # write an extra pid file


#---
# CONNECTIONS AND AUTHENTICATION
#---

# - Connection Settings -

listen_addresses = '*'
#listen_addresses = 'localhost' # what IP address(es) to listen on; 
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
#port = 5432
max_connections = 100
# note: increasing max_connections costs ~400 bytes of shared memory per 
# connection slot, plus lock space (see max_locks_per_transaction).  You
# might also need to raise shared_buffers to support more connections.
#superuser_reserved_connections = 2
#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal
#bonjour_name = ''  # defaults to the computer name

# - Security & Authentication -

#authentication_timeout = 60# 1-600, in seconds
#ssl = off
#password_encryption = on
#db_user_namespace = off

# Kerberos
#krb_server_keyfile = ''
#krb_srvname = 'postgres'
#krb_server_hostname = ''   # empty string matches any keytab entry
#krb_caseins_users = off

# - TCP Keepalives -
# see 'man 7 tcp' for details

#tcp_keepalives_idle = 0# TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0# TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0   # TCP_KEEPCNT;
# 0 selects the system default


#---
# RESOURCE USAGE (except WAL)
#---

# - Memory -

# varlena says to set to ~8% of RAM
# shared_buffers = 30   # min 16 or max_connections*2, 
8KB each
shared_buffers = 8  # min 16 or max_connections*2, 8KB each
#temp_buffers = 1000# 

Re: [SQL] Change of data type

2006-08-07 Thread jason nordwick

On pg8, to change tab.col to type t:

 alter table tab alter column col type t

so:
 alter table tab alter column material type text
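When the old and new types have no implicit conversion, ALTER ... TYPE also
accepts a USING expression to compute the new values (not needed for
character-to-text; shown only as a sketch with hypothetical names):

```sql
ALTER TABLE tab ALTER COLUMN col TYPE integer USING col::integer;
```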

-j


Judith wrote:

   Hello everybody.  Excuse me, how can I change the data type of a
field?  I currently have:
 material character(30)
   but I now want the field to be of type text, like this:
 material text

   Does anybody know if ALTER TABLE has some option to do this, or how
can I do it?





[SQL] Recursively traversing a partially ordered set

2007-05-29 Thread Jason Grout
ults into a table (incurring the cost of an INSERT)
rather than recurse?

2. Is there a big difference in speed between using an array versus
using a SELECT in a WHERE condition?  In other words, which is generally
going to be faster:

SELECT * from table where field IN (some function returning a SETOF);

or

SELECT * from table where field = ANY(some function returning an array);

3. Is there a strong reason I should strip out duplicates in either of
the two cases in question 2?  Or is the performance about the same when
doing the queries whether or not the SETOF or arrays contain duplicates?

4. Can you see any obvious optimizations to the above functions
(particularly the last one)?
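Written out concretely, the two forms in question 2 look like this (a sketch;
the function names are hypothetical, standing in for a SETOF-returning and an
array-returning function respectively):

```sql
SELECT * FROM some_table WHERE field IN (SELECT descendants_setof(42));
SELECT * FROM some_table WHERE field = ANY (descendants_array(42));
```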

Thanks for your help. Thanks for the absolutely wonderful database and
solid documentation.  I originally did this project in MySQL and had the
weirdest errors (the errors turned out to be due to the default
case-insensitive collation of MySQL!).  That's when I decided to move to
postgresql when I updated the project.

Thanks,

Jason



[SQL] SQL Server and C++ Data Types

2000-06-01 Thread Jason . Weiss



I am trying to convert my C++ data types into SQL Server data types. I
found a short list on Microsoft's website, but it did not list all of the
types. I was wondering if you could give me a list of the conversions or
could direct me where to go.