[GENERAL] Huge backends / self joins / long queries

1999-08-19 Thread Dirk Lutzebaeck


Hi,

I don't know if this fits into the current thread, but I also have a
problem with backends growing to 100MB and more. This is on 6.5:


CREATE TABLE docobj (
envelope OID NOT NULL,
docspec  OID,
docindex INTEGER,
attrid  INTEGER,
val_int INTEGER,
val_str TEXT
);

This table stores values of different base types for doing self
joins. A typical select looks like the following:

SELECT DISTINCT d0.envelope, d0.docindex
FROM envelope e0, docobj d0,
 envelope e1, docobj d1,
 envelope e2, docobj d2,
 envelope e3, docobj d3
WHERE d0.docspec=734675 AND d0.envelope=e0.oid AND d0.attrid=0 AND
  d0.val_int='700050' AND d0.docindex=d0.docindex AND e0.oid=e0.oid AND

  d1.docspec=734675 AND d1.envelope=e1.oid AND d1.attrid=1 AND
  d1.val_str='01' AND d1.docindex=d0.docindex AND e1.oid=e0.oid AND

  d2.docspec=734675 AND d2.envelope=e2.oid AND d2.attrid=5 AND
  d2.val_str='00' AND d2.docindex=d0.docindex AND e2.oid=e0.oid AND

  d3.docspec=734675 AND d3.envelope=e3.oid AND d3.attrid=6 AND
  d3.val_str='UG' AND d3.docindex=d0.docindex AND e3.oid=e0.oid

ORDER BY boxinfo.time DESC

Indices are generated on all joined attributes.
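For reference, the definitions behind the index names that appear in the
EXPLAIN plan below would look something like this (a sketch; the exact
column lists are assumed, not taken from the schema):

-- Assumed index definitions, inferred from the index names in the plan.
CREATE INDEX docobj_spec_index ON docobj (docspec);
CREATE INDEX envelope_oid_index ON envelope (oid);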

This particular select needs 38MB on the backend, and there is not much
data: docobj has 1300 rows. This query is a bit of a fake; the real
query takes some additional joins with other tables, which is how the
backend reaches 100MB.

I have also set pg_geqo to

Pool_Size128
Effort   low
Generations  200
Random_Seed  830518260
Selection_Bias   1.75


EXPLAIN on the query above gives:

Unique  (cost=59.40 rows=1000 width=52)
 -> Sort  (cost=59.40 rows=1000 width=52)
   -> Nested Loop  (cost=59.40 rows=1000 width=52)
     -> Nested Loop  (cost=16.40 rows=1 width=48)
       -> Index Scan using docobj_spec_index on docobj d2  (cost=2.05 rows=1 width=8)
       -> Seq Scan  (cost=14.35 rows=2 width=40)
         -> ???  (cost=14.35 rows=2 width=40)
           -> Nested Loop  (cost=14.35 rows=2 width=40)
             -> Nested Loop  (cost=12.30 rows=1 width=36)
               -> Index Scan using docobj_spec_index on docobj d3  (cost=2.05 rows=1 width=8)
               -> Seq Scan  (cost=10.25 rows=2 width=28)
                 -> ???  (cost=10.25 rows=2 width=28)
                   -> Nested Loop  (cost=10.25 rows=2 width=28)
                     -> Nested Loop  (cost=8.20 rows=1 width=24)
                       -> Nested Loop  (cost=6.15 rows=1 width=20)
                         -> Index Scan using docobj_spec_index on docobj d0  (cost=2.05 rows=1 width=8)
                         -> Seq Scan  (cost=4.10 rows=2 width=12)
                           -> ???  (cost=4.10 rows=2 width=12)
                             -> Nested Loop  (cost=4.10 rows=2 width=12)
                               -> Index Scan using docobj_spec_index on docobj d1  (cost=2.05 rows=1 width=8)
                               -> Index Scan using envelope_oid_index on envelope e1  (cost=2.05 rows=1000 width=4)
                       -> Index Scan using envelope_oid_index on envelope e0  (cost=2.05 rows=101 width=4)
                     -> Index Scan using envelope_oid_index on envelope e3  (cost=2.05 rows=1000 width=4)
                   -> Index Scan using envelope_oid_index on envelope e2  (cost=2.05 rows=1000 width=4)
     -> Seq Scan on boxinfo  (cost=43.00 rows=1000 width=4)


So I guess it has something to do with the optimizer. What should I do?

Thanks for help,

Dirk





[GENERAL] Character Constants

1999-08-19 Thread Bruce Tong

I found in the documentation where you can use \t for a tab, and from playing
I see I can use \n in a string and \ooo to give an octal value, but I've not
found a way to include a character based on a hex representation of its ASCII
code...

\x03
\X03
\0x03
\0X03

...all failed in cases like this ...

INSERT INTO mytable ( mystring ) VALUES ( 'This is a \x03 character.' );

... so what's the trick? I hope I'm not missing something obvious.
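For now I can fall back on the octal form, since hex 0x03 is octal 003:

-- Workaround sketch: the \ooo octal escape does work, and hex 0x03
-- is octal 003, so this embeds the intended control character.
INSERT INTO mytable ( mystring ) VALUES ( 'This is a \003 character.' );

... but a hex escape would be nicer.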

--

Bruce Tong |  Got me an office; I'm there late at night.
Systems Programmer |  Just send me e-mail, maybe I'll write.
Electronic Vision / FITNE  |  
[EMAIL PROTECTED]|  -- Joe Walsh for the 21st Century







[GENERAL] Trigger documentation? Need more examples.. pleeeze.. ;-)

1999-08-19 Thread Henrique Pantarotto

Hello PostgreSQL friends!

I'm new to PostgreSQL, but I've been using MySQL/mSQL for a long time now.  I
switched to PostgreSQL mostly because of its features: triggers, especially.

I've studied, compiled and tried the examples from the contrib/spi directory.
They all worked, but without some kind of documentation it is really hard to
understand how things really work and how I can create my own stuff.

In the "Programmer's Guide", there's a whole chapter documenting the SPI
library, but I didn't find any documentation of the functions directly related
to TRIGGERS (library "triggers.h"), like: "TRIGGER_FIRED_BY_UPDATE()",
"CurrentTriggerData->tg_newtuple", etc...

Is this documentation available somewhere?

-

I have this table:

CREATE TABLE users (
userid INT4 NOT NULL,
fullname CHAR(40),
username CHAR(20),
password CHAR(13)
)


Using the examples from contrib/spi, I guess I can manage to create an INSERT
and DELETE trigger.  But I'm not sure how to create an UPDATE one.

Let's suppose someone updates both the "username" and "password" fields.  How
do I know, from within the trigger, which fields changed?  Is it possible to
get the old and new values and compare them?

This UPDATE trigger doesn't need to modify its own newly inserted data, but it
has to modify external data (create/delete files on my Linux filesystem,
rename directories, etc.).

Is such a thing possible?  Does someone have an example of this?  ;-)
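For the comparison part, something like this is what I have in mind: a rough
PL/pgSQL sketch (assuming the plpgsql language is installed in the database;
I gather a trigger function is declared RETURNS OPAQUE):

-- Hypothetical sketch: compare old and new tuples in an UPDATE trigger.
CREATE FUNCTION users_update_check() RETURNS OPAQUE AS '
BEGIN
    IF NEW.username <> OLD.username THEN
        RAISE NOTICE ''username changed'';
    END IF;
    IF NEW.password <> OLD.password THEN
        RAISE NOTICE ''password changed'';
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER users_update_trig BEFORE UPDATE ON users
    FOR EACH ROW EXECUTE PROCEDURE users_update_check();

Though I suppose the file-system work itself would still need a C trigger
function.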


Thanks a lot!

Regards from Brazil!


-- 
Henrique Pantarotto
CEPAnet Internet Provider
webmaster / analista de sistemas
Email: [EMAIL PROTECTED]
Tel: (011) 5506-8477
Cel: (011) 9706-3444
LINUX FRIEND





[GENERAL] Error during 'vacuum analyze'

1999-08-19 Thread Mark Dalphin

Hi,

Running PostgreSQL 6.5.1 on SGI Irix 6.5.

When I ran vacuum analyze over my database, I had an unexpected error occur:

htg=> vacuum analyze;
NOTICE:  AbortTransaction and not in in-progress state
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
We have lost the connection to the backend, so further processing is
impossible.  Terminating.

I wondered about the "AbortTransaction and not in in-progress state" notice,
so I repeated the operation from within an explicit transaction:

htg=> begin;
BEGIN
htg=> vacuum analyze;
NOTICE:  Rel pg_statistic: TID 0/72: DeleteTransactionInProgress 48643 - can't
shrink relation
NOTICE:  Rel pg_statistic: TID 0/73: DeleteTransactionInProgress 48643 - can't
shrink relation
NOTICE:  Rel pg_statistic: TID 0/74: DeleteTransactionInProgress 48643 - can't
shrink relation

 ... many of these lines deleted ...

NOTICE:  Rel pg_statistic: TID 1/85: InsertTransactionInProgress 48643 - can't
shrink relation
NOTICE:  Rel pg_statistic: TID 1/86: InsertTransactionInProgress 48643 - can't
shrink relation
NOTICE:  Rel pg_statistic: TID 1/87: InsertTransactionInProgress 48643 - can't
shrink relation
VACUUM
htg=> commit;
END

I looked at my log file and for the first vacuum, it contains the following
lines just before it died:

Aug 19 12:35:42 5V:mahunui postgres: DEBUG:  --Relation pga_reports--
Aug 19 12:35:42 5V:mahunui postgres: DEBUG:  Pages 0: Changed 0, Reapped 0,
Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen
0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.
Aug 19 12:35:42 5V:mahunui postgres: DEBUG:  --Relation pga_layout--
Aug 19 12:35:42 5V:mahunui postgres: DEBUG:  Pages 1: Changed 0, Reapped 0,
Empty 0, New 0; Tup 3: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 87, MaxLen
135; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0
sec.
Aug 19 12:35:42 5V:mahunui postgres: ERROR:  vacuum: can't destroy lock file!
Aug 19 12:35:42 5V:mahunui postgres: NOTICE:  AbortTransaction and not in
in-progress state

Looking at the second vacuum, I see many normal-looking DEBUG notices, with
some of these NOTICE lines in the middle:

Aug 19 13:20:49 5V:mahunui postgres: DEBUG:  --Relation pg_index--
Aug 19 13:20:49 5V:mahunui postgres: DEBUG:  Pages 1: Changed 0, Reapped 1,
Empty 0, New 0; Tup 51: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 2, MinLen 116,
MaxLen 116; Re-using: Free/Avail. Space 1852/0; EndEmpty/Avail. Pages 0/0.
Elapsed 0/0 sec.
Aug 19 13:20:49 5V:mahunui postgres: DEBUG:  --Relation pg_statistic--
Aug 19 13:20:49 5V:mahunui postgres: NOTICE:  Rel pg_statistic: TID 0/72:
DeleteTransactionInProgress 48643 - can't shrink relation
Aug 19 13:20:49 5V:mahunui postgres: NOTICE:  Rel pg_statistic: TID 0/73:
DeleteTransactionInProgress 48643 - can't shrink relation
... many lines deleted ...

Any ideas about what is happening?  Should I be worried? It looks to me like a
lock was created on a table in order to vacuum it, and then when the lock could
not be removed, the vacuum crashed, leaving the system table, pg_statistic,
corrupted. Is this correct, and if so, what should I do about it?  There were no
other users accessing the DB when I ran VACUUM.

Also, should VACUUM ANALYZE be run within a BEGIN/COMMIT transaction? I would
have thought that it didn't matter, as PostgreSQL would create a default
transaction for me.
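If pg_statistic really is the damaged piece, one recovery approach I have seen
suggested (please correct me if this is unsafe) is to empty it and let VACUUM
ANALYZE rebuild it, since it only holds statistics derived from the data:

-- Hedged recovery sketch, to run as the database superuser:
-- pg_statistic reportedly just caches ANALYZE output, so its rows
-- can be deleted and then regenerated.
DELETE FROM pg_statistic;
VACUUM ANALYZE;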

TIA,
Mark

--
Mark Dalphin  email: [EMAIL PROTECTED]
Mail Stop: 29-2-A phone: +1-805-447-4951 (work)
One Amgen Center Drive   +1-805-375-0680 (home)
Thousand Oaks, CA 91320 fax: +1-805-499-9955 (work)








Re: [GENERAL] Trigger documentation? Need more examples.. pleeeze.. ;-)

1999-08-19 Thread Henrique Pantarotto

Yeah, I really need to do "external stuff" (messing with external files,
executing programs), and if I understood it correctly, PL/pgSQL can't do
that, right?
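From the contrib examples, I gather the C route would be wired up with
something like this sketch (the shared object path and function name here are
made up):

-- Hypothetical wiring for a C trigger function that can touch files.
CREATE FUNCTION do_external_stuff() RETURNS OPAQUE
    AS '/usr/local/pgsql/lib/mytrigger.so' LANGUAGE 'c';

CREATE TRIGGER users_external_trig AFTER UPDATE ON users
    FOR EACH ROW EXECUTE PROCEDURE do_external_stuff();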

Thanks!

On Thu, 19 Aug 1999, you wrote:
> Perhaps using PL/pgSQL is easier? In the docs there are examples;
> also, the one in /test is rather complex (too complex?).
> 
> However, PL/pgSQL may not be able to do external stuff, though C
> can do that. -- I have the same problem; anybody have ideas?
-- 
Henrique Pantarotto
CEPAnet Internet Provider
webmaster / analista de sistemas
Email: [EMAIL PROTECTED]
Tel: (011) 5506-8477
Cel: (011) 9706-3444
LINUX FRIEND





[GENERAL] long object names vs object description

1999-08-19 Thread Yury Don

Hi All.

I asked about a similar subject once already but haven't got any reply, so
let me try to formulate my question another way.
I would like to show table names and field names to users who are not
familiar with databases, programming, etc. Therefore I need to show them
something understandable. Postgres allows creating tables and fields with
names which contain national-specific (e.g. Russian) characters and spaces,
something like

create table "table number one"(
"field number one" serial,
"field number two" text
)

Also, the postgres database has the table pg_description, so I can give an
object a "normal" name (consisting of English characters and digits) and then
put the readable name into pg_description. But in this case I will be forced
to use joins (sketched below) in order to show the description to the user.
So, my questions: Is it safe to create objects with names containing spaces?
And is it safe to use national-specific characters in names? Or is it better
to use pg_description for solving my task?
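The join I have in mind would be something like this sketch (assuming
pg_description's layout of objoid and description, and a made-up table name):

-- Sketch: fetch the stored description for a table by joining
-- pg_description to pg_class on the table's oid.
SELECT c.relname, d.description
  FROM pg_class c, pg_description d
 WHERE d.objoid = c.oid
   AND c.relname = 'mytable';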
Thank you in advance.

Sincerely yours, Yury.
don.web-page.net, ICQ 11831432





Re: [GENERAL] Huge backends / self joins / long queries

1999-08-19 Thread Jérome Knöbl

Hello,

I had the same trouble, and that's why I installed version 6.5 of postgres.
In 6.4 you cannot use INTERSECT.
Try using INTERSECT; it's very powerful.
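For your query, the rewrite might look roughly like this sketch (I am assuming
the (envelope, docindex) pair is what identifies a match; the boxinfo ordering
is left out):

-- Rough INTERSECT rewrite of the self-join (needs 6.5 or later).
-- Assumes (envelope, docindex) identifies a matching document.
SELECT envelope, docindex FROM docobj
 WHERE docspec = 734675 AND attrid = 0 AND val_int = 700050
INTERSECT
SELECT envelope, docindex FROM docobj
 WHERE docspec = 734675 AND attrid = 1 AND val_str = '01'
INTERSECT
SELECT envelope, docindex FROM docobj
 WHERE docspec = 734675 AND attrid = 5 AND val_str = '00'
INTERSECT
SELECT envelope, docindex FROM docobj
 WHERE docspec = 734675 AND attrid = 6 AND val_str = 'UG';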

JK
