Re: [GENERAL] 50 MB Table

2000-03-07 Thread Howie

On Mon, 6 Mar 2000, JB wrote:

> [SNIP]
> CREATE TABLE info (
>   lastname char(50),
>   street_name char(50),
>   street_number char(5),
>   ... (a bunch of other stuff that works fine with '=')
> );
> 
> CREATE INDEX nx_info1 ON info (lastname);
> CREATE INDEX nx_info2 ON info (street_name);
> 
> The select is as simple as this in most cases...
> 
> SELECT * FROM info WHERE street_name LIKE 'MAIN%';
> [SNIP]
Might want to try CLUSTERing the table based on one of the indexes.
That'll put everything in order (physically) and should speed it up a
bit.

I'd also suggest grabbing more RAM while it's (relatively) inexpensive.
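For reference, the CLUSTER suggestion above might look like this, using the table and index names from the quoted post (a hedged sketch: the old-style `CLUSTER indexname ON table` syntax of that era is assumed, and CLUSTER rewrites the table, so it's best run while the database is quiet):

```sql
-- Physically reorder the rows of "info" to match the street_name index,
-- so rows with nearby street_name values sit close together on disk.
CLUSTER nx_info2 ON info;

-- Refresh planner statistics for the rewritten table.
VACUUM ANALYZE info;
```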

> [SNIP]

---
Howie <[EMAIL PROTECTED]>   URL: http://www.toodarkpark.org
"You do not have the right to free health care.  That would be nice, but 
 from the looks of public housing, we're just not interested in health care."






Re: Re: [GENERAL] Regular expressions syntax: is \ the escape character ?

2000-03-07 Thread Gabriel Fernandez

Thanks a million to you all. Finally, as someone on the list suggested, I'm
using the double backslash (\\) and it seems to work OK.

The only problem I have is when I want to escape a single quote (') or the
backslash (\).

For example, I have one row with the value 'ONE\SECOND'

I try to retrieve it like this (from the psql frontend):

select field1 from table1 where field1 ~* 'ONE\\\'  ;

But it doesn't work, and the parser seems to be confused (it asks me to
close the quote ' again).

The problem is the same with the single quote '. I've tried this:

select field1 from table1 where field1 ~* 'D\\'ALEMA' ;

But it doesn't work either.
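For the single-quote case specifically, standard SQL doubles the quote inside the literal rather than backslash-escaping it; a sketch using the field and table names from the post:

```sql
-- Two adjacent single quotes inside a string literal stand for one
-- literal quote, so this pattern matches the string D'ALEMA:
SELECT field1 FROM table1 WHERE field1 ~* 'D''ALEMA';
```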

Thanks for your help.

Gabi :-)







Re: [GENERAL] FOREIGN KEY syntax

2000-03-07 Thread Oliver Elphick

Bruce Momjian wrote:
  >Oliver Elphick wrote:
  >
  >> Andrzej Mazurkiewicz wrote:
  >>   >> And what is MATCH FULL?
  >>   >[Andrzej Mazurkiewicz] I would not like to speculate perhaps
  >>   >somebody else knows exact answer.
  >> 
  >> MATCH FULL:
  >>   Either all referencing columns must be null, or all must have values.
  >> 
  >> MATCH PARTIAL:
  >>   "if any columns are non-null, there must be at least one row in the 
  >>referenced table that could satisfy the constraint if the other null
  >>values were correctly substituted."
  >
  >As I read the SQL3 spec draft, it's that if at least one
  >of the columns is NULL, no check is done at all.
  
What I quoted was based on SQL92.  Perhaps it has been changed?
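As an illustration of the MATCH FULL rule quoted above, a minimal sketch (hypothetical table and column names; a multi-column key, since the distinction only matters when some referencing columns are null and others are not):

```sql
CREATE TABLE master (
    a integer,
    b integer,
    PRIMARY KEY (a, b)
);

CREATE TABLE detail (
    a integer,
    b integer,
    -- MATCH FULL: either a and b are both NULL, or both are non-null
    -- and the pair (a, b) must exist in master. A row with a set and
    -- b NULL is rejected outright.
    FOREIGN KEY (a, b) REFERENCES master (a, b) MATCH FULL
);
```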

  >Could be wrong, will lookup at Thursday when back in
  >Hamburg.
  >
  >
  >Jan from Philly

Is this really you, Bruce?


-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
   PGP key from public servers; key ID 32B8FAA1
 
 "Go ye therefore, and teach all nations, baptizing them
  in the name of the Father, and of the Son, and of the 
  Holy Ghost; Teaching them to observe all things  
  whatsoever I have commanded you; and, lo, I am with 
  you alway, even unto the end of the world. Amen." 
 Matthew 28:19,20 







[GENERAL] locale & mb support

2000-03-07 Thread Gabriel Fernandez

Hi all:

I'm using Postgres-6.5.3 with Red Hat 6.0 and I have a problem using
LC_TYPE and LC_COLLATE ...

I compiled Postgres with --with-locale --with-mb=LATIN1.

I did initdb -e LATIN1

And finally I createdb -E LATIN1 test.

I also have defined

LC_TYPE=LATIN1 and
LC_COLLATE=LATIN1

when postmaster starts.

So now, I try to select * from test1 order by ...

The problem is that I do not get the result sorted as I would expect.
What's wrong? What should I do?

Thanks to all,

jjprat





Re: [GENERAL] locale & mb support

2000-03-07 Thread Oleg Broytmann

On Tue, 7 Mar 2000, Gabriel Fernandez wrote:
> The problem is that I do not get the result sorted as I would.
> What's wrong? What should I do?

   First, test your locale. Look in the src/test/locale directory. There
you'll find some tests. You should create a similar test for your locale (and
I'd be happy if you sent your test to me).

   If your locale tests OK - stop postmaster and rerun initdb -E LATIN1.
I've heard of this solution a few times, and once it helped me.

   BTW, you don't need to pass LC_CTYPE to postmaster - it can be defined
on a per-client basis - so define it before invoking psql or another client
(similar to what you'll find in the locale tests)...

Oleg.
 
Oleg Broytmann http://members.xoom.com/phd2/ [EMAIL PROTECTED]
   Programmers don't die, they just GOSUB without RETURN.






[GENERAL] Trigger

2000-03-07 Thread Raigo Lukk

Hi

I am trying to create a trigger for a detail table to check that a value
being entered exists in the master table.
For example, an order could not be entered into the CustomerOrder table
for a Customer that does not exist.

How is this best done in PostgreSQL?

I believe a trigger is what I need.
I looked at the triggers section of the documentation, and there was an
example containing:
execute procedure procedurename
Does this mean that I should create a procedure for each trigger?
Going further, I found no Create Procedure in the documentation ... so how
does one create procedures in PostgreSQL?


Thanks
Raigo
[EMAIL PROTECTED]






[GENERAL] Trigger

2000-03-07 Thread Raigo Lukk

Hi

I am trying to create a trigger for a detail table to check that a value
being entered exists in the master table.
For example, an order could not be entered into the CustomerOrder table
for a Customer that does not exist.

How is this best done in PostgreSQL?

I believe a trigger is what I need.
I looked at the triggers section of the documentation, and there was a
trigger example containing:
execute procedure procedurename
Does this mean that I should create a procedure for each trigger?
Going further, I found no Create Procedure in the documentation ... so
how does one create procedures in PostgreSQL?


Maybe somebody could send me an example trigger for checking that a detail
record's value exists in the master table.

Sorry for the simple question, but I have just started using PostgreSQL.


Thanks
Raigo
[EMAIL PROTECTED]
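No concrete example appears in this thread, so here is a hedged sketch of such a trigger in PL/pgSQL, using the CustomerOrder/Customer tables from the question (the customer_id column is hypothetical, and the sketch has not been tested against 6.5.3). Note that PostgreSQL has no CREATE PROCEDURE; a trigger calls a function defined with CREATE FUNCTION, which answers the question above:

```sql
-- Trigger function: reject a CustomerOrder row whose customer_id has
-- no matching row in Customer. The body is a quoted string, so single
-- quotes inside it are doubled (old-style function quoting).
CREATE FUNCTION check_customer_exists() RETURNS opaque AS '
DECLARE
    dummy integer;
BEGIN
    SELECT customer_id INTO dummy
      FROM Customer
     WHERE customer_id = NEW.customer_id;
    IF NOT FOUND THEN
        RAISE EXCEPTION ''customer % does not exist'', NEW.customer_id;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

-- Fire the check before every INSERT or UPDATE on the detail table.
CREATE TRIGGER customerorder_check
    BEFORE INSERT OR UPDATE ON CustomerOrder
    FOR EACH ROW EXECUTE PROCEDURE check_customer_exists();
```

PL/pgSQL must be installed in the database (createlang plpgsql) before the function can be created.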






RE: [GENERAL] Trigger

2000-03-07 Thread Andrzej Mazurkiewicz

Switch to v7.0 beta1 and use FOREIGN KEY. You will save a lot of time.
Before you really finish your development, 7.0 will be in production (this
is my private estimate).
Andrzej Mazurkiewicz
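Under 7.0 the suggestion amounts to declaring the constraint inline; a sketch with the table names from the question (column names are hypothetical):

```sql
CREATE TABLE Customer (
    customer_id integer PRIMARY KEY,
    name        text
);

CREATE TABLE CustomerOrder (
    order_id    integer PRIMARY KEY,
    -- An INSERT with a customer_id that has no matching row in
    -- Customer is rejected by the database, no hand-written trigger
    -- required.
    customer_id integer REFERENCES Customer (customer_id)
);
```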

> -Original Message-
> From: Raigo Lukk [SMTP:[EMAIL PROTECTED]]
> Sent: 7 marca 2000 13:55
> To:   [EMAIL PROTECTED]
> Subject:  [GENERAL] Trigger
> 
> Hi
> 
> I am trying to create a trigger for a detail table to check that value
> what is entered exists in master table.
> For example into CustomerOrder table could not be enterd order for
> Customer what does not exists.
> 
> How is it best done in PostgreSQL?
> 
> I believe a trigger is what I must do.
> I looked the documentation in the part of triggers and there was an
> trigger example what had a part:
> execute procedure procedurename
> Does this mean that I should create a procedure for each trigger?
> Going further I found no Create Procedure in the documentation ... so
>  how to create procedures in PostgreSQL?
> 
> 
> Maybe, somebody could send me some example trigger for checking detail
> record value's existance in master table.
> 
> Sorry, for simple question, but I just started using PostgreSQL.
> 
> 
> Thanks
> Raigo
> [EMAIL PROTECTED]
> 
> 
> 





Re: [GENERAL] Trigger

2000-03-07 Thread Alfred Perlstein

* Raigo Lukk <[EMAIL PROTECTED]> [000307 05:36] wrote:
> Hi
> 
> I am trying to create a trigger for a detail table to check that value
> what is entered exists in master table.
> For example into CustomerOrder table could not be enterd order for
> Customer what does not exists.
> 
> How is it best done in PostgreSQL?
> 
> I believe a trigger is what I must do.
> I looked the documentation in the part of triggers and there was an
> trigger example what had a part:
> execute procedure procedurename
> Does this mean that I should create a procedure for each trigger?
> Going further I found no Create Procedure in the documentation ... so
>  how to create procedures in PostgreSQL?
> 
> 
> Maybe, somebody could send me some example trigger for checking detail
> record value's existance in master table.
> 
> Sorry, for simple question, but I just started using PostgreSQL.

I'm pretty new to postgresql as well, but I think what you want to
look at is the RULES section of the documentation; it seems that
would be what you're looking for.

I haven't done it myself, but there are some decent examples there.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]





[GENERAL] RE: Trigger

2000-03-07 Thread Raigo Lukk

Hi

On Tue, 7 Mar 2000, Andrzej Mazurkiewicz wrote:

> Switch to v7.0 beta1 and use FOREIGN KEY. You will save a lot of time.
> Before you really finish your development there will be 7.0 production (this
> is my private estimation).
> Andrzej Mazurkiewicz

I am now using PostgreSQL 6.5.3 on RedHat 5.2.
In my case an upgrade to v7.0 is not possible because I am doing this for
a university project, and on university computers I have no rights to
install stuff :-(

Still, I would appreciate some examples on triggers and procedures as 
I will need to write some procedures during this project.


Raigo






RE: [GENERAL] 50 MB Table

2000-03-07 Thread Culberson, Philip

JB,

The 20 seconds definitely sounds excessive.  Have you done the following?

1) Run "vacuum analyze info"?  If you have not, Postgres will not make use
of any indices.

2) Run an explain plan on your query to see what Postgres thinks it's going
to do?


In another post, Howie suggested more RAM.  If speed is a primary concern, I
second his recommendation.  Bump up your RAM and crank up the number of
shared memory buffers.  Here is an example of the command I use to start up
Postgres:

postmaster -i -B 12000 -d 2 -o "-F -S 4096 -s" >&! server.log &

The "-B 12000" tells Postgres to set aside 12,000 8k buffers.  With the size
of your table, you should easily be able to fit the whole thing into memory.
Keep in mind that your OS must have its shared memory segment defined large
enough to handle all the buffers...  in this case, about 100 Meg.
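The two checks suggested above, sketched against the table from the thread:

```sql
-- 1) Gather statistics; without this the planner will not use indices.
VACUUM ANALYZE info;

-- 2) Ask the planner for its plan. An "Index Scan" on nx_info2 is what
--    you want to see here, rather than a "Seq Scan" over the table.
EXPLAIN SELECT * FROM info WHERE street_name LIKE 'MAIN%';
```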


Phil Culberson
DAT Services

-Original Message-
From: JB [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 06, 2000 5:52 PM
To: Paul Condon
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] 50 MB Table


Thanks for taking the time to reply. I think that I wasn't as clear as I
could be. This table is normalized and as far as I understand, what I'm
doing with it is not extraordinary. The schema is basically...

CREATE TABLE info (
  lastname char(50),
  street_name char(50),
  street_number char(5),
  ... (a bunch of other stuff that works fine with '=')
);

CREATE INDEX nx_info1 ON info (lastname);
CREATE INDEX nx_info2 ON info (street_name);

The select is as simple as this in most cases...

SELECT * FROM info WHERE street_name LIKE 'MAIN%';

...the table is about 50MB worth, about 70,000 records. I have an index on
'lastname' and 'street_name' and I need to search on each of these with
'LIKE'. So I was wondering about ways to speed this up. It's very slow.
It takes about 20 seconds for the above query. I even uppercased all the
names, hoping that would help. I wondered if I'd used the wrong index
type (btree), or if there were some flags that would help. Is there a
way to bust the indexes out alpha on the first letter say, or some other
such scheme. BTW the machine is RH6.1 with 128mb ram, 27 GB, P350, no X
and no users (except me ;)

Paul Condon wrote:
> 
> JB wrote:
> 
> > I have a 50 MB +- table in postgres. The data is normalized so there's
> > not much I can do about the size. The tuples are about 512 bytes so
> > there's a pile of 'em. I need searching on of several fields, a couple
> > in particular are text fields that needs 'LIKE'. The problem is, the
> > thing is way too slow. So, I was wondering, before I go hunting for some
> > other solution, could anyone here point me to some ways to (hand)
> > optimize the searching in postgres? Different indexes, hashing and LIKE?
> > I'm not sure where to go with this.
> >
> > The basic criteria are:
> >  - sizes of indexes, etc, is not an issue. There's lot's of room on the
> > box.
> >  - the data is basically static so a read-only (if such a thing) is
> > fine.
> >  - it needs to be FAST
> >
> > cheers
> > jb
> >
> > 
> 
> It sounds as if you have several different kinds of information encoded in
> a single column using special words or letter combinations. This is a
> violation of the ideal that data items should be "atomic." You should make
> a catalog of all the things that you want to be able to say about each
> tuple, and design a relational schema in which atomic assertion is given
> its own column (attribute). Then you will be able to create indices on
> each, and you won't have to use LIKE in your WHERE clauses.
> 
> Paul

-- 
I'm in direct contact with many advanced fun CONCEPTS.


Re: [GENERAL] Accounting/inventory systems

2000-03-07 Thread kaiq



On Mon, 6 Mar 2000, Ron Atkins wrote:

> > > Does somebody know some REALLY working accounting or inventory system on LAN
> > > (not via Internet)??? - where server is PostgreSQL. Because I have a feeling
> > > that PostgreSQL is used only with regards to the Internet.
> 
> Sure, Kontor... it's GPLed and picking up steam! Go pick up a copy here:
> 
> http://www.Linux-Kontor.de/en/
You did not put it in http://www.pgsql.com/app-index/ .
BTW, it seems the URL is not active now. Why is it still not linked from the
official site? Too ugly? Who cares? -- and it is not ugly at all: the beauty
of simplicity :-)
 






[GENERAL] Compiling 7.0 on Solaris

2000-03-07 Thread Henk van Lingen

Hi,

Has anyone managed to compile the 7.0 beta on a Solaris 2.6 system?

I keep getting errors. Lots of warnings concerning ecpglib.c and 
finally:

ecpglib.c:1259: `ECPG_UNKNOWN_DESCRIPTOR' undeclared (first use in this function)
gmake[3]: *** [ecpglib.o] Error 1   


I used this:

april:postgresql.snapshot-7maart/src-: cat myconfigure
./configure --prefix=/sw/pkg/postgresql --with-tcl --with-includes=/sw/include

This is the latest snapshot. Same problem with the 'original' beta.
GCC=gcc-2.95.2; the old egcs (2.91.66) gives the same problems, although
that one works perfectly on my Redhat 6.1 system.
The same configure options do work on the same Solaris system with
postgresql version 6.5.3.

Where to look now?

Regards,

PS: no ideas on the moddatetime problem I mentioned yesterday?

+---+
| Henk van Lingen, Systems Administrator, <[EMAIL PROTECTED]> |
| Dept. of Computer Science, Utrecht University.  phone: +31-30-2535278 |
+- http://www.cs.uu.nl/people/henkvl/ --+






Re: [GENERAL] Trigger

2000-03-07 Thread Bruce Momjian

> Switch to v7.0 beta1 and use FOREIGN KEY. You will save a lot of time.
> Before you really finish your development there will be 7.0 production (this
> is my private estimation).
> Andrzej Mazurkiewicz
> 
> > I am trying to create a trigger for a detail table to check that value
> > what is entered exists in master table.
> > For example into CustomerOrder table could not be enterd order for
> > Customer what does not exists.


Not only that - he'll save more than time. Without the deferred trigger
queue manager, which ensures that one and the same row cannot be modified
twice inside one transaction (WRT key changes), it's possible to
fool the trigger and end up with inconsistent data in a multiuser
environment.

And don't forget that the INSERT of the FK and the DELETE of the PK
can happen simultaneously. Due to MVCC, both transactions don't
know about each other (thanks to Vadim).


Jan from Philly

-- 
  Bruce Momjian|  http://www.op.net/~candle
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026





Re: [GENERAL] 50 MB Table

2000-03-07 Thread JB

Thank you both for the suggestions. I did not realize that 'vacuum
analyze' was necessary to get the indexes working. That alone cut the
search time almost in half. I'll do the RAM bit as well.

cheers
jb

"Culberson, Philip" wrote:
> 
> JB,
> 
> The 20 seconds definitely sounds excessive.  Have you done the following?
> 
> 1) Run "vacuum analyze info"?  If you have not, Postgres will not make use
> of any indices.
> 
> 2) Run an explain plan on your query to see what Postgres thinks it's going
> to do?
> 
> In another post, Howie suggested more RAM.  If speed is a primary concern, I
> second his recommendation.  Bump up your RAM and crank up the number of
> shared memory buffers.  Here is an example of the command I use to start up
> Postgres:
> 
> postmaster -i -B 12000 -d 2 -o "-F -S 4096 -s" >&! server.log &
> 
> The "-B 12000" tells Postgres to set aside 12,000 8k buffers.  With the size
> of your table, you should easily be able to fit the whole thing into memory.
> Keep in mind that your OS must have it's shared memory segment defined large
> enough to handle all the buffers...  in this case, about 100 Meg.
> 
> Phil Culberson
> DAT Services
> 
> -Original Message-
> From: JB [mailto:[EMAIL PROTECTED]]
> Sent: Monday, March 06, 2000 5:52 PM
> To: Paul Condon
> Cc: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] 50 MB Table
> 
> Thanks for taking the time to reply. I think that I wasn't as clear as I
> could be. This table is normalized and as far as I understand, what I'm
> doing with it is not extraordinary. The schema is basically...
> 
> CREATE TABLE info (
>   lastname char(50),
>   street_name char(50),
>   street_number char(5),
>   ... (a bunch of other stuff that works fine with '=')
> );
> 
> CREATE INDEX nx_info1 ON info (lastname);
> CREATE INDEX nx_info2 ON info (street_name);
> 
> The select is as simple as this in most cases...
> 
> SELECT * FROM info WHERE street_name LIKE 'MAIN%';
> 
> .,,the table about 50MB worth, about 70,000 records. I have an index on
> 'lastname' and 'street_name' and I need to search on each of these with
> 'LIKE'. So I was wondering about ways to speed this up. It's very slow.
> It takes about 20 seconds for the above query. I even uppercased all the
> names, hoping tht would help. I wondered if I'd used the wrong index
> type (btree), or if there were some flags that would help. Is there a
> way to bust the indexes out alpha on the first letter say, or some other
> such scheme. BTW the machine is RH6.1 with 128mb ram, 27 GB, P350, no X
> and no users (except me ;)

...etc...
-- 
If everything is coming your way then you're in the wrong lane.





Re: [GENERAL] Regular expressions syntax: is \ the escape character ?

2000-03-07 Thread Chris Jones

Gabriel Fernandez <[EMAIL PROTECTED]> writes:

> The only problem i have is when i want to escape a single quote (') or the
> backslash (\).
> 
> For example: i have one row with the value 'ONE\SECOND'
> 
> I try to recover it doing (from psql frontend):
> 
> select field1 from table1 where field1 ~* 'ONE\\\'  ;

Heh.  This will probably work if you use *four* backslashes.  The
first time it gets parsed, 'ONE\\\\' -> 'ONE\\', because each '\\'
sequence reduces to a single '\'.  Then the second time it gets
parsed, the '\\' -> '\', which is what you want.

(Haven't tested it, but I've seen similar things in other places.)
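Spelled out against the query from the original post, the four-backslash version would look like this (an untested sketch, following the doubling logic above):

```sql
-- The string parser collapses each '\\' to '\', leaving 'ONE\\' for
-- the regex engine, which in turn collapses '\\' to a literal
-- backslash - so four backslashes in the source match one in the data:
SELECT field1 FROM table1 WHERE field1 ~* 'ONE\\\\';
```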

Chris

-- 
 [EMAIL PROTECTED]
Chris Jones
   System Administrator, Right Now Technologies, Inc.
"Is this going to be a stand-up programming session, sir, or another bug hunt?"





[GENERAL] Re: DHCP and pg_hba.conf

2000-03-07 Thread Ron Atkins

Bryan Henderson wrote:

> So just specify wildcards for the IP address: *.*.*.*  .

I'll have to include that, looks like a nice blanket statement. I will also
look into the password stuff. My intention is to switch to DHCP so I can
test this stuff myself.

Thank you for your help.

-Ron







Re: [GENERAL] Accounting/inventory systems

2000-03-07 Thread Ron Atkins

*** IMPORTANT NOTE *** : I made an error; Kontor is not GPL'ed, it is
LGPL'ed... commercial developers might be interested in that :-)

[EMAIL PROTECTED] wrote:

> > Sure, Kontor... it's GPLed and picking up steam! Go pick up a copy here:
> > http://www.Linux-Kontor.de/en/

> you did not put it in:  http://www.pgsql.com/app-index/

Huh... didn't know about it, thanks! I'll forward this to the original Kontor team...

>  and it is not ugly at all, beauty of simplicity :-)

I take it you linked it to the site?

-Ron






Re: [GENERAL] 50 MB Table

2000-03-07 Thread Paul Condon

The example you give, LIKE 'MAIN%', should be able to use the index, but I'm
not an expert on the internals of PostgreSQL. I was speaking from a general
knowledge of SQL. I was supposing you were trying to do something such as
LIKE '%BLVD%'. In that latter case, an index will bring you no benefit.

Perhaps it is working faster than it would have, had there been no index. Have
you tried dropping the index and seeing if the performance gets worse? To make
this test more powerful, choose a pattern that matches very few rows in your
table.


JB wrote:

> Thanks for taking the time to reply. I think that I wasn't as clear as I
> could be. This table is normalized and as far as I understand, what I'm
> doing with it is not extraordinary. The schema is basically...
>
> CREATE TABLE info (
>   lastname char(50),
>   street_name char(50),
>   street_number char(5),
>   ... (a bunch of other stuff that works fine with '=')
> );
>
> CREATE INDEX nx_info1 ON info (lastname);
> CREATE INDEX nx_info2 ON info (street_name);
>
> The select is as simple as this in most cases...
>
> SELECT * FROM info WHERE street_name LIKE 'MAIN%';
>
> .,,the table about 50MB worth, about 70,000 records. I have an index on
> 'lastname' and 'street_name' and I need to search on each of these with
> 'LIKE'. So I was wondering about ways to speed this up. It's very slow.
> It takes about 20 seconds for the above query. I even uppercased all the
> names, hoping tht would help. I wondered if I'd used the wrong index
> type (btree), or if there were some flags that would help. Is there a
> way to bust the indexes out alpha on the first letter say, or some other
> such scheme. BTW the machine is RH6.1 with 128mb ram, 27 GB, P350, no X
> and no users (except me ;)
>
> Paul Condon wrote:
> >
> > JB wrote:
> >
> > > I have a 50 MB +- table in postgres. The data is normalized so there's
> > > not much I can do about the size. The tuples are about 512 bytes so
> > > there's a pile of 'em. I need searching on of several fields, a couple
> > > in particular are text fields that needs 'LIKE'. The problem is, the
> > > thing is way too slow. So, I was wondering, before I go hunting for some
> > > other solution, could anyone here point me to some ways to (hand)
> > > optimize the searching in postgres? Different indexes, hashing and LIKE?
> > > I'm not sure where to go with this.
> > >
> > > The basic criteria are:
> > >  - sizes of indexes, etc, is not an issue. There's lot's of room on the
> > > box.
> > >  - the data is basically static so a read-only (if such a thing) is
> > > fine.
> > >  - it needs to be FAST
> > >
> > > cheers
> > > jb
> > >
> > > 
> >
> > It sounds as if you have several different kinds of information encoded in
> > a single column using special words or letter combinations. This is a
> > violation of the ideal that data items should be "atomic." You should make
> > a catalog of all the things that you want to be able to say about each
> > tuple, and design a relational schema in which atomic assertion is given
> > its own column (attribute). Then you will be able to create indices on
> > each, and you won't have to use LIKE in your WHERE clauses.
> >
> > Paul
>
> --
> I'm in direct contact with many advanced fun CONCEPTS.
>
> Paul Condon wrote:
> >
>






Re: [GENERAL] Accounting/inventory systems

2000-03-07 Thread kaiq



On Tue, 7 Mar 2000, Ron Atkins wrote:

> *** IMPORTANT NOTE *** : I made an error, Kontor is not GPL'ed it is
> LGPL'ed... commercial developers might be interested in that :-)
> 
> [EMAIL PROTECTED] wrote:
> 
> > > Sure, Kontor... it's GPLed and picking up steam! Go pick up a copy here:
> > > http://www.Linux-Kontor.de/en/
> 
> > you did not put it in:  http://www.pgsql.com/app-index/
> 
> Huh... didn't know about it, thanks! I'll forward this to the original Kontor team...
> 
> >  and it is not ugly at all, beauty of simplicity :-)
> 
> I take it you linked it to the site?
No, I do not have the authority ;-). Before the real PG guys do it, we need
to keep the precious URL in our notes.







Re: [GENERAL] 50 MB Table

2000-03-07 Thread Marten Feldtmann

> CREATE INDEX nx_info1 ON info (lastname);
> CREATE INDEX nx_info2 ON info (street_name);
> 
> The select is as simple as this in most cases...
> 
> SELECT * FROM info WHERE street_name LIKE 'MAIN%';
> 
> .,,the table about 50MB worth, about 70,000 records. I have an index on
> 'lastname' and 'street_name' and I need to search on each of these with
> 'LIKE'. So I was wondering about ways to speed this up. It's very slow.
> It takes about 20 seconds for the above query. I even uppercased all the

 I think that PostgreSQL cannot use any indices when doing queries like
this. The indices would work if you compared with "=".

 Actually, I don't even know how one could create a simple index to
help with searches like this.

 Marten









Re: [GENERAL] 50 MB Table

2000-03-07 Thread JB

I've been shuffling thru the 'LIKE' code in pg, but whilst perusing, it
occurred to me that I could use a function for this. Since I'm
only looking at the first part of the string, I use...

SELECT * FROM info WHERE substring(street_name from 1 for 4) = 'MAIN'

...and the query completes in under 2 secs. Stupidly obvious now, but I
thought I would share this blinding insight with the list ;)

Thanks again for all your help.

cheers
jb

Paul Condon wrote:
> 
> The example you give, LIKE 'MAIN%', should be able to use the index, but I'm
> not that expert on the internals of PostgreSQL. I was speaking from a general
> knowledge of SQL. I was supposing you were trying to do something, such as,
> LIKE '%BLVD%'. In this latter case, an index will bring you no benefit.
> 
> Perhaps it is working faster than it would have, had there been no index. Have
> you tried dropping the index and seeing if the performance gets worse? To make
> this test more powerful, choose a pattern that matches very few rows in your
> table.
> 
> JB wrote:
> 
> > Thanks for taking the time to reply. I think that I wasn't as clear as I
> > could be. This table is normalized and as far as I understand, what I'm
> > doing with it is not extraordinary. The schema is basically...
> >
> > CREATE TABLE info (
> >   lastname char(50),
> >   street_name char(50),
> >   street_number char(5),
> >   ... (a bunch of other stuff that works fine with '=')
> > );
> >
> > CREATE INDEX nx_info1 ON info (lastname);
> > CREATE INDEX nx_info2 ON info (street_name);
> >
> > The select is as simple as this in most cases...
> >
> > SELECT * FROM info WHERE street_name LIKE 'MAIN%';
> >
> > .,,the table about 50MB worth, about 70,000 records. I have an index on
> > 'lastname' and 'street_name' and I need to search on each of these with
> > 'LIKE'. So I was wondering about ways to speed this up. It's very slow.
> > It takes about 20 seconds for the above query. I even uppercased all the
> > names, hoping tht would help. I wondered if I'd used the wrong index
> > type (btree), or if there were some flags that would help. Is there a
> > way to bust the indexes out alpha on the first letter say, or some other
> > such scheme. BTW the machine is RH6.1 with 128mb ram, 27 GB, P350, no X
> > and no users (except me ;)
> >
> > Paul Condon wrote:
> > >
> > > JB wrote:
> > >
> > > > I have a 50 MB +- table in postgres. The data is normalized so there's
> > > > not much I can do about the size. The tuples are about 512 bytes so
> > > > there's a pile of 'em. I need searching on of several fields, a couple
> > > > in particular are text fields that needs 'LIKE'. The problem is, the
> > > > thing is way too slow. So, I was wondering, before I go hunting for some
> > > > other solution, could anyone here point me to some ways to (hand)
> > > > optimize the searching in postgres? Different indexes, hashing and LIKE?
> > > > I'm not sure where to go with this.
> > > >
> > > > The basic criteria are:
> > > >  - sizes of indexes, etc, is not an issue. There's lot's of room on the
> > > > box.
> > > >  - the data is basically static so a read-only (if such a thing) is
> > > > fine.
> > > >  - it needs to be FAST
> > > >
> > > > cheers
> > > > jb
> > > >
> > > > 
> > >
> > > It sounds as if you have several different kinds of information encoded in
> > > a single column using special words or letter combinations. This is a
> > > violation of the ideal that data items should be "atomic." You should make
> > > a catalog of all the things that you want to be able to say about each
> > > tuple, and design a relational schema in which atomic assertion is given
> > > its own column (attribute). Then you will be able to create indices on
> > > each, and you won't have to use LIKE in your WHERE clauses.
> > >
> > > Paul
> >

-- 
If everything is coming your way then you're in the wrong lane.





[GENERAL] International Address Format Standard

2000-03-07 Thread Ron Peterson

Is there any such thing as a standard schema for international
addresses?  Maybe I'm grasping at straws, but one can always hope.  I
can find information about individual countries easily enough.  But how
about a general solution?  Or is this just pie in the sky?

Ron Peterson
[EMAIL PROTECTED]





Re: [GENERAL] 50 MB Table

2000-03-07 Thread Bruce Bantos

> I've been shuffling thru the 'LIKE' code in pg, but whilst perusing, it
> occurred to me that I could use a function for this thing. Since i'm
> only looking at the first part of the string I use...
>
> SELECT * FROM info WHERE substring(street_name from 1 to 4) = 'MAIN'
>
> ...and the query completes in under 2 secs. Stupidly obvious now but I
> thought I would share this blinding insight with the list ;)

It was not so obvious to me. I was struggling with the same thing. This is a
good workaround for the case of "searchstring%". I wonder if the LIKE code
could be enhanced to perform the query in the same way.
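One way to get the same effect without a function call is to rewrite the anchored pattern by hand as a range test the btree index can use directly; a hedged sketch (this relies on plain byte-wise string ordering, so it may misbehave in some locales):

```sql
-- Equivalent to street_name LIKE 'MAIN%' for ASCII data: everything
-- at or above the prefix 'MAIN' and strictly below the next prefix
-- 'MAIO' (last character bumped by one).
SELECT * FROM info
 WHERE street_name >= 'MAIN'
   AND street_name <  'MAIO';
```

This is essentially the transformation one would want the LIKE code to perform automatically for anchored patterns.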

>
> Thanks again for all your help.
>
> cheers
> jb
>
> Paul Condon wrote:
> >
> > The example you give, LIKE 'MAIN%', should be able to use the index, but I'm
> > not that expert on the internals of PostgreSQL. I was speaking from a general
> > knowledge of SQL. I was supposing you were trying to do something, such as,
> > LIKE '%BLVD%'. In this latter case, an index will bring you no benefit.
> >
> > Perhaps it is working faster than it would have, had there been no index. Have
> > you tried dropping the index and seeing if the performance gets worse? To make
> > this test more powerful, choose a pattern that matches very few rows in your
> > table.
> >
> > JB wrote:
> >
> > > Thanks for taking the time to reply. I think that I wasn't as clear as I
> > > could be. This table is normalized and as far as I understand, what I'm
> > > doing with it is not extraordinary. The schema is basically...
> > >
> > > CREATE TABLE info (
> > >   lastname char(50),
> > >   street_name char(50),
> > >   street_number char(5),
> > >   ... (a bunch of other stuff that works fine with '=')
> > > );
> > >
> > > CREATE INDEX nx_info1 ON info (lastname);
> > > CREATE INDEX nx_info2 ON info (street_name);
> > >
> > > The select is as simple as this in most cases...
> > >
> > > SELECT * FROM info WHERE street_name LIKE 'MAIN%';
> > >
> > > ...the table, about 50MB worth, about 70,000 records. I have an index on
> > > 'lastname' and 'street_name' and I need to search on each of these with
> > > 'LIKE'. So I was wondering about ways to speed this up. It's very slow.
> > > It takes about 20 seconds for the above query. I even uppercased all the
> > > names, hoping that would help. I wondered if I'd used the wrong index
> > > type (btree), or if there were some flags that would help. Is there a
> > > way to bust the indexes out alpha on the first letter say, or some other
> > > such scheme. BTW the machine is RH6.1 with 128mb ram, 27 GB, P350, no X
> > > and no users (except me ;)
> > >
> > > Paul Condon wrote:
> > > >
> > > > JB wrote:
> > > >
> > > > > I have a 50 MB +- table in postgres. The data is normalized so there's
> > > > > not much I can do about the size. The tuples are about 512 bytes so
> > > > > there's a pile of 'em. I need searching on several fields, a couple
> > > > > in particular are text fields that need 'LIKE'. The problem is, the
> > > > > thing is way too slow. So, I was wondering, before I go hunting for some
> > > > > other solution, could anyone here point me to some ways to (hand)
> > > > > optimize the searching in postgres? Different indexes, hashing and LIKE?
> > > > > I'm not sure where to go with this.
> > > > >
> > > > > The basic criteria are:
> > > > >  - sizes of indexes, etc, is not an issue. There's lots of room on the
> > > > > box.
> > > > >  - the data is basically static so a read-only (if such a thing) is
> > > > > fine.
> > > > >  - it needs to be FAST
> > > > >
> > > > > cheers
> > > > > jb
> > > > >
> > > > > 
> > > >
> > > > It sounds as if you have several different kinds of information encoded in
> > > > a single column using special words or letter combinations. This is a
> > > > violation of the ideal that data items should be "atomic." You should make
> > > > a catalog of all the things that you want to be able to say about each
> > > > tuple, and design a relational schema in which each atomic assertion is
> > > > given its own column (attribute). Then you will be able to create indices
> > > > on each, and you won't have to use LIKE in your WHERE clauses.
> > > >
> > > > Paul
> > >
>
> --
> If everything is coming your way then you're in the wrong lane.
>
> 
>






Re: [GENERAL] database corruption?

2000-03-07 Thread Bruce Momjian

Drop index and recreate.
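
[Editor's note: a minimal sketch of that fix, using SQLite as a stand-in for PostgreSQL and the table/index names from the report below; on PostgreSQL this would be plain `DROP INDEX` / `CREATE INDEX` statements in psql:]

```python
import sqlite3

# Recreate the reported table and its two suspect indexes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE error (interface TEXT, ewhen INTEGER)")
conn.execute("CREATE INDEX error_interface_idx ON error (interface)")
conn.execute("CREATE INDEX error_ewhen_idx ON error (ewhen)")

# Drop and rebuild each index from the heap, which discards any
# stale index entries.
for name, col in (("error_interface_idx", "interface"),
                  ("error_ewhen_idx", "ewhen")):
    conn.execute(f"DROP INDEX {name}")
    conn.execute(f"CREATE INDEX {name} ON error ({col})")

idx = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' ORDER BY name")]
print(idx)  # ['error_ewhen_idx', 'error_interface_idx']
```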


> Hi, all.
> 
> I'm relatively new to PostgreSQL, but I've been quite impressed with
> it so far.  This may be due to too much experience with MySQL.  :)
> 
> I'm currently getting this error on my nightly vacuum.  These two
> indices (as you may have guessed already) are on columns named
> interface and ewhen, on a table named error.  The error table is
> constantly being updated.  (No comments about the implications of
> that, please.)
> 
> NOTICE:  Index error_interface_idx: NUMBER OF INDEX' TUPLES (226766) IS NOT THE SAME AS HEAP' (226765)
> NOTICE:  Index error_ewhen_idx: NUMBER OF INDEX' TUPLES (226766) IS NOT THE SAME AS HEAP' (226765)
> NOTICE:  Index error_interface_idx: NUMBER OF INDEX' TUPLES (226766) IS NOT THE SAME AS HEAP' (226765)
> NOTICE:  Index error_ewhen_idx: NUMBER OF INDEX' TUPLES (226766) IS NOT THE SAME AS HEAP' (226765)
> 
> Is this indicative of a serious problem?  If this were MySQL, I'd
> assume I had a corrupt table, and I'd run isamchk on it.  But I don't
> see a similar utility for PgSQL.
> 
> AFAIK, nothing really bad has happened to the host; it hasn't crashed
> or anything, though it was rebooted ungracefully a few weeks ago.
> 
> This is on an i386 running NetBSD 1.4.1, if that makes a difference.
> 
> TIA for any help.
> 
> Chris
> 
> -- 
>  [EMAIL PROTECTED]
> Chris Jones
>System Administrator, Right Now Technologies, Inc.
> "Is this going to be a stand-up programming session, sir, or another bug hunt?"
> 
> 
> 
> 


-- 
  Bruce Momjian|  http://www.op.net/~candle
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026