Re: [GENERAL] Finding uniques across a big join

2005-12-01 Thread Marcin Inkielman
John D. Burger wrote:

>
> select v1.pkey1, v1.field2, v1.field3, v1.field4
>   from view as v1
>   join
>   (select v2.field2, v2.field3, v2.field4
> from view as v2
> group by v2.field2, v2.field3, v2.field4
> having count(*) = 1) as uniq
>   using (field2, field3, field4);
>
> This is the one that takes eight hours. :(  Another way to express
> what I want is this:
>
> select v1.pkey1, v1.field2, v1.field3, v1.field4
>   from view as v1
>   where not exists
> (select true from view as v2
>   where v1.field2 = v2.field2
>and v1.field3 = v2.field3
>and v1.field4 = v2.field4
>and v1.pkey1 <> v2.pkey1);
>
> That looks like a horrible nested loop, but I suppose I should try it
> to make sure it is indeed slower than the previous query.
>
Hi!

Did you try the second query? I guess it should take considerably less
time than the first one. Usually I do "these things" like this...
The NOT EXISTS form is the only way for the planner to use indexes here.
The query plan you sent us shows that mostly seq scans are used.
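
For example, an index like the one below on the table behind the view would
let each NOT EXISTS probe be a simple index scan. This is only a sketch -
"base_tbl" is an invented name, and it assumes field2, field3 and field4
all come from one base table:

create index base_tbl_f234_idx on base_tbl (field2, field3, field4);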

Regards,

Marcin Inkielman



[GENERAL] alter table schema in 8.0

2005-12-10 Thread Marcin Inkielman
Hi!

I am migrating a relatively huge database from postgres 7.2 to 8.0. I
need to alter the schema for about 500 tables and twice as many other
objects. Do I need to do anything more on a new database than:

create schema ... ;

update pg_class
   set relnamespace = $oid_of_new_schema
 where relnamespace = $oid_of_old_schema;

update pg_proc
   set pronamespace = $oid_of_new_schema
 where pronamespace = $oid_of_old_schema
   and proname not in (select a.proname
                         from pg_proc a, pg_proc b
                        where a.pronamespace = $oid_of_new_schema
                          and b.pronamespace = $oid_of_old_schema
                          and a.proname = b.proname);

where $oid_of_old_schema is the oid of the public namespace.
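
(For reference, the two placeholder OIDs come straight from pg_namespace -
"newschema" below is just an example name:

select oid from pg_namespace where nspname = 'public';     -- $oid_of_old_schema
select oid from pg_namespace where nspname = 'newschema';  -- $oid_of_new_schema

I plan to run the whole thing in a single transaction, with a fresh pg_dump
at hand, since such direct catalog updates bypass any dependency checking.)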

I haven't found any other reference to pg_catalog.pg_namespace.oid.
Did anybody try something like this?

I examined the source of AlterTableNamespace() defined in tablecmds.c,
but it wasn't really helpful.
I will appreciate any help.

Best regards,

-- 
Marcin Inkielman



[GENERAL] index problem

2000-06-01 Thread Marcin Inkielman


hi!


I created an index using pgaccess recently. The name of the index was
long:

"oceny_stud_numer_albumu_protokoloceny_stud"

Now I am unable to vacuum my database. I get something like this when
I try:

NOTICE:  Pages 310: Changed 0, reaped 2, Empty 0, New 0; Tup 48611: Vac 3,
Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 48, MaxLen
48; Re-using: Free/Avail. Space 3096/116; EndEmpty/Avail. Pages 0/1. CPU
0.02s/0.01u sec.
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
connection to server was lost
vacuumdb: vacuum failed

pg_dump works, so I am able to back up my db and restore it under
another name.

I tried to delete my index:

nat=# drop index oceny_stud_numer_albumu_protokoloceny_stud;
NOTICE:  identifier "oceny_stud_numer_albumu_protokoloceny_stud" will be
truncated to "oceny_stud_numer_albumu_protoko"
ERROR:  index "oceny_stud_numer_albumu_protoko" nonexistent

Then I did:

nat=# select * from pg_class where
relname='oceny_stud_numer_albumu_protoko';
 relname | reltype | relowner | relam | relpages | reltuples | rellongrelid | relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | relhaspkey | relhasrules | relacl
---------+---------+----------+-------+----------+-----------+--------------+-------------+-------------+---------+----------+-----------+-------------+----------+----------+---------+------------+-------------+--------
(0 rows)

however:

nat=# select * from pg_class where
relname~'oceny_stud_numer_albumu_protoko';
               relname               | reltype | relowner | relam | relpages | reltuples | rellongrelid | relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | relhaspkey | relhasrules | relacl
-------------------------------------+---------+----------+-------+----------+-----------+--------------+-------------+-------------+---------+----------+-----------+-------------+----------+----------+---------+------------+-------------+--------
 oceny_stud_numer_albumu_protokol_id | 6580575 |       32 |   403 |      122 |     48611 |            0 | f           | f           | i       |        2 |         0 |           0 |        0 |        0 |       0 | f          | f           |
(1 row)


Then I tried:

nat=# drop index oceny_stud_numer_albumu_protokol_id;
NOTICE:  identifier "oceny_stud_numer_albumu_protokol_id" will be
truncated to "oceny_stud_numer_albumu_protoko"
ERROR:  index "oceny_stud_numer_albumu_protoko" nonexistent


My question is:
~~~~~~~~~~~~~~~
How can I delete this index in my original database?
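
(One workaround I am tempted to try, though it is untested and pokes at the
catalog directly: rename the pg_class row to a short name by hand, then drop
the index normally:

update pg_class set relname = 'idx_to_drop'
 where relname = 'oceny_stud_numer_albumu_protokol_id';
drop index idx_to_drop;

Is that safe, or does the relation's on-disk file name make this a bad idea?)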


thanks for any help..

marcin inkielman




[GENERAL] vacuumdb problem

2000-07-02 Thread Marcin Inkielman


Hi!
I have a problem with vacuumdb on one of my tables.

(spiral:3)-[~]$vacuumdb -t osoby -v dziekanat
NOTICE:  --Relation osoby--
NOTICE:  Pages 229: Changed 0, reaped 16, Empty 0, New 0; Tup 4427: Vac 5,
Keep/VTL 0/0, Crash 0, UnUsed 70, MinLen 64, MaxLen
616; Re-using: Free/Avail. Space 18176/2288; EndEmpty/Avail. Pages
1/10. CPU 0.02s/0.00u sec.
NOTICE:  Index osoby_Imię_key: Pages 44; Tuples 4427: Deleted 0. CPU
0.01s/0.01u sec.
NOTICE:  Index osoby_pkey: Pages 15; Tuples 4427: Deleted 0. CPU
0.00s/0.00u sec.
NOTICE:  Rel osoby: Pages: 229 --> 228; Tuple(s) moved: 0. CPU 0.01s/0.00u
sec.
NOTICE:  FlushRelationBuffers(osoby, 228): block 223 is referenced
(private 0, global 1)
FATAL 1:  VACUUM (vc_repair_frag): FlushRelationBuffers returned -2
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
connection to server was lost
vacuumdb: vacuum failed


This table is referenced in my db by a tree of FOREIGN KEYs.

However, my db seems to work and I am able to do pg_dump.
Recently, I dumped and restored it, and for a few days I was able to
run vacuumdb. Today the problem is here again.

Any ideas?

Thanks for any help.


(I use Postgres ver. 7.0.2 on a Debian Potato / dual PII box.)




Re: [GENERAL] table count limitation

2000-08-27 Thread Marcin Inkielman

On Sun, 27 Aug 2000, Jurgen Defurne wrote:

> Any time your design is heading in this direction, take a good hard look
> at
> it. Proper organization with the appropriate indexes is the way to go.
> 
> With tens of hundreds of tables, how will you decide which to use?
> How will you write your queries? Customize them for the different
> tables?
> Will you be generating a lot of data, thereby creating a lot of tables?
> How
> long will they take to create and populate?
> 
> With fewer, large tables you are appending data at the end, and
> maintaining
> indexes. An inherently simpler operation. Queries are written to a known
> 
> design and structure. You will, admittedly, have large index files, but
> you
> will not have hundreds to thousands of tables, each with indexes.
> 
> The Fishcart ecommerce system, which can be implemented in PostgreSQL,
> has
> only 20 tables, four of which have any degree of traffic.
> 
> A proprietary system done here in Halifax for the employer's
> association
> has about 16 core tables, two of them are regularly updated, the rest
> contain
> relatively static information on members, rates, tax rates, piers, etc.
> 
> Rethink your design, talk it over with the fencepost, draw little
> pictures,
> ask "what if", do some rough data storage calculations -- but the
> general rule
> of thumb, with proper normalization, is "fewer is better".
> 
> Regards - Miles Thompson

Thank you for the comments!

I think you are right in most situations; however, do you think that "fewer
is better" in a situation like this:

I have to construct a catalogue of several types of products (about 1000
types). Each type is defined by a different set of features.

If I do it like you suggest, I have to create a table:

CREATE TABLE products (
    product_id int4,
    product_type_id int4,
    feature_id int4,
    value text  -- or float, date, ... depending on the feature
);

Of course, it is relatively simple to describe any number of products
of different types in this table.

However... how can I select a set of products of the same type from this
table EFFICIENTLY? For example, I have to select:
 - all PCs with a PII/450MHz and 128MB
or
 - all red Renault Megane / 1600GL

Note that each product is described by several rows in the table (each
product type is characterised by a different number of features), and I
never need to compare (select) products of different types (i.e. cars and
computers in one query).

If I could have 1000 tables - one table for each product type - these
queries would be a lot simpler - don't you think?


PS. Sorry for my English - I hope I was understood.


-- 
Marcin Inkielman




Re: [GENERAL] table count limitation

2000-08-27 Thread Marcin Inkielman

On Sun, 27 Aug 2000, Miles Thompson wrote:
> 
> Each table you would create for a different type of product can be replaced
> with a single column, say "product_type", which you could use to broadly
> classify your items: cars, boats, computers, appliances, etc.

Don't you think that this is exactly the 2nd column of the table I
previously defined?

> CREATE TABLE products(
> product_id int4,
> product_type_id int4,
> feature_id int4,
> value text/float/...
> )

> Other columns could simply be labelled "descrip1", "descrip2", "descrip3", as
> many as you need, for the different attributes of each item. So "descrip1" may
> contain what you may describe as the brand, or the make, or the model, or the
> processor type of an item in your user interface.

That's a possibility; however, remember that I don't know right now how
many descriptors I will have for each product type, or how many types I
will have. Also, if I use a generic form for a descriptor I will have to
choose a text field, and it is not very efficient to examine that type of
field when it contains numbers, dates, etc. Indexes put on the descriptor
columns won't be very effective either, as one column will describe
different products' features - don't you think?

> The difficult questions to answer are: How many types of products?

That's exactly what I don't know! I hope not to exceed 1000 ;)

> How many
> meaningful attributes for each? Base your table design on the answers, and I'd
> bet you start to run out at about 6 or so. How fine grained do you want to
> make your search?

Agreed. 6 to 10 should suffice.
 
> You could take this further, and have a "product_type" table. For each major
> item, you would have a standard set of attributes you could use to populate
> pick lists, so that users would not ask for porcelain sailboats with PIII
> processors of the Ming dynasty. These same lists would be used in the
> maintenance forms you use to add items, so you have consistency.

In fact, the problem I presented is only part of a bigger one... I have to
construct a full system of product classification. Of course I will have
to construct a table of product types and a table of product-type
descriptors.
 
> This table could also contain the labels, the applicable descriptions for each
> product_type. When a user selected a particular product you could fetch the
> descriptive text for each field and load it into your interface as the
> descriptor for the field.

That is exactly what my system is doing now... ;)

> Contact managers do this all the time - the "user
> definable" fields have standard names and the user just supplies the label.
> 
> This saves a lot of work - one or two standard forms displayed with "custom"
> labels, standard queries which run when you click on "Submit", and standard
> results forms. (I'm calling them forms, they could be web pages.) Even
> background colours and logos could change according to product type.
> 
> > If I could have 1000 tables - one table for each product type - these
> > queries would be a lot simpler - don't you think?

My question to the mailing list was whether it is possible for postgres to
work with a lot of tables. In fact, I am planning to use these tables only
for selects, much like multi-cross tables in M$ Access, and internally to
use a table of product types, a table of descriptors of product types
(feature vectors), and a table of products as defined above.
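
Roughly like this - just a sketch, with invented names:

CREATE TABLE product_types (
    product_type_id int4 PRIMARY KEY,
    type_name       text
);

CREATE TABLE type_features (
    product_type_id int4 REFERENCES product_types,
    feature_id      int4,
    feature_label   text,  -- e.g. 'processor', 'colour'
    PRIMARY KEY (product_type_id, feature_id)
);

-- plus the products table from my previous mail,
-- keyed by (product_id, feature_id)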
 
> You're doing great. 
Thx ;))
> I had a similar discussion a short while ago with someone
> who wanted to do classified ads for agricultural products: livestock, feed,
> tractors, etc. Thinking out a good design for your database is hard work.

I am not sure that SQL is the best language for these types of problems;
I use it because SQL databases are the most developed and reliable, but I
think a hierarchical db might be more appropriate.
 
> PS You are in Poland? What part? My wife's grandparents emigrated from around
> Lodz in 1904~06 and settled in Winnipeg, Manitoba in Western Canada. /mt

I live near Warsaw. Łódź (Lodz - if your e-mail reader does not support
iso-8859-2 ;) ) is ~100km from here. You must be a very happy man if your
wife is partially from Poland. Polish women are really beautiful! ;))

regards,

Marcin Inkielman






Re: [GENERAL] ERROR: nodeRead: Bad type 0

1999-09-27 Thread Marcin Inkielman


I have a similar problem; I think it worked with previous PostgreSQL
versions (6.4.x). When I typed:



[marn@mi marn]$ createdb pr
[marn@mi marn]$ psql pr
Welcome to the POSTGRESQL interactive sql monitor:
Please read the file COPYRIGHT for copyright terms of POSTGRESQL
[PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3]

type \? for help on slash commands
type \q to quit
type \g or terminate with semicolon to execute query
You are currently connected to the database: pr
pr=> create table "Jêzyk obcy"(a text);
CREATE
pr=> select * from "Jêzyk obcy";
a
-
(0 rows)

pr=> create view "Jêzyk obcy1" as select * from "Jêzyk obcy";
CREATE
pr=> select * from "Jêzyk obcy1";
ERROR:  nodeRead: Bad type 0
pr=> vacuum;
ERROR:  nodeRead: Bad type 0
pr=> drop view "Jêzyk obcy1";
ERROR:  nodeRead: Bad type 0  

---

Note that the name of the view contains a space and a national character.

There is no problem with:

---
[marn@mi marn]$ destroydb pr;
[marn@mi marn]$ createdb pr
[marn@mi marn]$ psql pr
Welcome to the POSTGRESQL interactive sql monitor:
Please read the file COPYRIGHT for copyright terms of POSTGRESQL
[PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3]

type \? for help on slash commands
type \q to quit
type \g or terminate with semicolon to execute query
You are currently connected to the database: pr

pr=> create table "Jêzyk_obcy"(a text);
CREATE
pr=> select * from "Jêzyk_obcy";
a
-
(0 rows)

pr=> create view "Jêzyk_obcy1" as select * from "Jêzyk_obcy";
CREATE
pr=> select * from "Jêzyk_obcy1";
a
-
(0 rows)


So is it just the space that kills it??
-- 

Marcin Inkielman


 sibi omino similis?







Re: [GENERAL] Postgres concurrency : urgent

1999-11-11 Thread Marcin Inkielman

On Fri, 12 Nov 1999, V Krishnaraj wrote:

> Hi,
> 
> I'm having a postgres database accessed from perl using DBI and 
> DBD.
> 
> This application works fine when we are in single user. When we 
> go into multi-user, the app has concurrency problems.
> 
> The application has a sql query and updates onto the base table in 
> the sql query. We are using Select FOR UPDATE to open the 
> cursor and multiple update statements to update values on different 
> base tables. When multiple users are running the program, after a 
> few iterations postmaster starts putting out messages that there are
> concurrency problems, due to which the execute before the fetch
> fails.
> What I want to do is to wait for the other transactions updates to 
> get over and retry opening the cursor. This does not happen.
> The moment I try to open the cursor again (using execute), 
> postmaster says that there is no connection !!
> 
> If I try reconnecting again, it starts giving pqReadData() problems.
> After some time the server crashes !!
> 
> We are using postgres 6.5.1, DBI-1.13, DBD 0.92
> 
> Please somebody help.
> 
> Thanks and regards,
> 
> Kimi

I'm using:

LOCK <tablename> IN ACCESS EXCLUSIVE MODE;

to control concurrent transactions - it works for me
(postgres 6.5.2).
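
That is, each transaction takes the lock up front, before the SELECT ... FOR
UPDATE - a sketch with an invented table name:

BEGIN;
LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE;
-- the SELECT ... FOR UPDATE and the UPDATEs go here;
-- concurrent transactions queue on the lock instead of failing
UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1;
COMMIT;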

 

Marcin Inkielman


   .~.
   /V\
  // \\
 /(   )\
  ^`~'^

 powered by Linux
   

sibi omino similis?







[GENERAL] a problem with single char recoding

1999-12-17 Thread Marcin Inkielman

hi all!

I have 2 boxes on my local net:
192.168.1.1 - linux 2.2.12/postgres 6.5.3
192.168.1.2 - windows 98+postgres_odbc+MS Access 97 

I want to use my postgres database in 3 ways:
- via psql,
- with libpq, CGI, HTTP -> MS Explorer on my Win box,
- with ODBC from MS Access.

The problem is that my linux box runs with LATIN2 and the Windows box
with the 1250 charset.

I know that Postgres offers a method of character recoding.
Following docs and hints from this list, I configured my Postgres with the
appropriate options, and this recoding works quite well, but...

When I start postmaster and access my database
1. with libpq, CGI: LATIN2 in my browser works fine - OK
2. with ODBC+Access: WIN1250 - OK

and then I retry it with

3. libpq, CGI, browser - WIN1250 is still there - WRONG
4. psql - also gets WIN1250 !!!

When I restart postmaster, psql/libpq work fine again.

I tried connecting libpq via the lo and eth0 interfaces with the same
result. My charset.conf is:

--
BaseCharset iso

RecodeTable iso win iso-win.tab
RecodeTable win iso win-iso.tab

HostCharset 192.168.1.1 iso
HostCharset 192.168.1.2 win
HostCharset localhost   iso
--

I think the files iso-win.tab/win-iso.tab are OK.

thanks for any hint...





    Marcin Inkielman








[GENERAL] logs

2000-01-11 Thread Marcin Inkielman

hi!

Sorry for my stupid question...
Is it possible to view the postmaster's logs?

I noticed a file named pg_log - what is it for?



Marcin Inkielman



   .~.
   /V\
  // \\
 /(   )\
  ^`~'^

 powered by Linux
   

sibi omino similis?









Re: [GENERAL] row numbering

2000-05-10 Thread Marcin Inkielman

Hi!

Thanks for your response!

Sorry if I was not very clear with my question...

On Wed, 10 May 2000, Peter Eisentraut wrote:

> Marcin Inkielman writes:
> 
> > how may i easyly obtain row numbers in a query:
> 
> In the SQL data model, rows don't have numbers, because rows aren't
> ordered -- a query may return the rows in any order. So if you are

Sure, I know that.

> building on that assumption you might have to rethink slightly.

> 
> If you need to number your records for some reason, say invoice numbers,
> you can use a sequence:

That is exactly what I want to obtain. I would like to have:

1 | first item
2 | second,
3 | ... and so on.

each time I do a select on a table, always starting from "1".

> 
> create table my_tbl (
>   nr serial, -- creates implicit sequence
>   ... more fields ...
> );
> 
> Then you can select them like any other field. You say you had
> unsatisfying results, would you care to explain why?

The main problem I have with sequences (or temporary tables - as I think
you suggest) is that I have multiple concurrent transactions at the same
time. As they are generated by a single cgi script, if I use a sequence it
will have the same name for each transaction, and therefore only the first
transaction will work. Of course I could lock it just like a simple table,
but I am not sure that is the simplest way. I am just looking for something
simpler.
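
(One thing I may try - untested, and assuming temporary sequences are
supported in my version: a per-session temp sequence would avoid the name
clash between concurrent cgi requests:

create temp sequence rownum_seq;
select nextval('rownum_seq') as nr, t.* from my_tbl t;

since each connection would see its own rownum_seq, no locking would be
needed.)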

> 
> Lastly, there is the OID which every row gets automatically assigned by
> the system. This is a unique number across the whole installation. You can
> access it as
>   select oid, your, fields, here from table ...
> like a regular column. The OID may be good enough to get some sort of
> number on a row but be aware that it is wildly non-portable.

I thought about using OIDs, but I am not sure it is possible to obtain the
result I require.

> 
> 
> 

Marcin Inkielman




[GENERAL] PG 7.0 vacuum problem

2000-05-25 Thread Marcin Inkielman

Hi!

I recently upgraded my system from PG 6.5.3 to PG 7.0. After a few days of
work I am unable to vacuum one of my tables:

nat=# VACUUM verbose analyze osoby;
NOTICE:  --Relation osoby--
NOTICE:  Pages 189: Changed 0, reaped 24, Empty 0, New 0; Tup 3436: Vac
29, Keep/VTL 0/0, Crash 0, UnUsed 180, MinLen 68, MaxLen
608; Re-using: Free/Avail. Space 67568/4196; EndEmpty/Avail. Pages
7/14. CPU 0.01s/0.95u sec.
NOTICE:  Index osoby_Imię_key: Pages 33; Tuples 3436: Deleted 0. CPU
0.00s/0.01u sec.
NOTICE:  Index osoby_pkey: Pages 12; Tuples 3436: Deleted 0. CPU
0.00s/0.01u sec.
NOTICE:  Rel osoby: Pages: 189 --> 182; Tuple(s) moved: 0. CPU 0.00s/0.00u
sec.
NOTICE:  FlushRelationBuffers(osoby, 182): block 186 is referenced
(private 0, global 3)
FATAL 1:  VACUUM (vc_repair_frag): FlushRelationBuffers returned -2
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.


Everything else (including pg_dump) works fine.


Any ideas, anybody?

Do I risk anything if I do:

pg_dump nat > tmp
dropdb nat
createdb nat
psql nat < tmp


Re: [GENERAL] PG 7.0 vacuum problem

2000-05-25 Thread Marcin Inkielman

On Thu, 25 May 2000, Tom Lane wrote:

> Date: Thu, 25 May 2000 19:49:00 -0400
> From: Tom Lane <[EMAIL PROTECTED]>
> To: Marcin Inkielman <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] PG 7.0 vacuum problem 
> 
> Marcin Inkielman <[EMAIL PROTECTED]> writes:
> > I recently upgraded my system from PG 6.5.3 to PG 7.0. After a few days of
> > work I am unable to vacuum one of my tables:
> 
> > nat=# VACUUM verbose analyze osoby;
> > NOTICE:  FlushRelationBuffers(osoby, 182): block 186 is referenced
> > (private 0, global 3)
> > FATAL 1:  VACUUM (vc_repair_frag): FlushRelationBuffers returned -2
> 
> Hmm.  Have you had any backend crashes?  What seems to be happening here
> is that there are some leftover reference counts on one of the shared
> disk buffers for that relation.  That should never be true while VACUUM
> is running, because no other backend is supposed to be referencing that
> table.
> 
> > do i risk anything if i do:
> 
> > pg_dump nat > tmp
> > dropdb nat
> > createdb nat
> > psql nat < tmp
> Probably won't work either.  Instead, try stopping and restarting the
> postmaster --- if my theory is right, that should get rid of the
> leftover reference counts.  But the real question is how did it get
> into this state in the first place...

Thanks, it worked! Before this, I tried recreating my database under
another name (without destroying the old one) - that worked too!
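
(For the archives: the restart itself is just a stop/start of the
postmaster, e.g. via pg_ctl - the data directory path below is only an
example:

pg_ctl -D /usr/local/pgsql/data restart

which, if Tom's theory is right, clears the leftover reference counts.)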

-- 
mi