Re: [GENERAL] PostgreSQL slow after VACUUM

2004-11-26 Thread gnari
From: "Nikola Milutinovic" <[EMAIL PROTECTED]>


> There is one DB and one DB user. The DB is cleared and loaded with
> data of the same volume each month (monthly report). The volume is not small,
> and it usually takes 3 hours to load. Loading is done with SQL files
> which use transactions, 10,000 SQL statements per transaction.
> 
> A couple of days ago, the disk became full, since we were not doing VACUUM
> on the DB at all. So, I deleted all records from the 3 tables the DB has
> and performed "VACUUM FULL ANALYZE". This reclaimed the space.
> 
> My problem is that the load is now taking (by my estimate) 20 times longer!

It sounds from your description that you are not analyzing the tables
after the batch loads.
An ANALYZE on an empty table effectively destroys all statistics.
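
A minimal sketch of the fix, assuming a hypothetical table name report_data:

  -- run after each monthly batch load completes, not while the table is empty
  ANALYZE report_data;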

gnari






Re: [GENERAL] [PERFORM] HELP speed up my Postgres

2004-11-26 Thread Jerome Macaranas
It did, thanks. Generally, a week's process turned out to take less than a
day.



On Thursday 25 November 2004 15:06, Christopher Kings-Lynne wrote:
> > update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in 
> > (select
> > mobile_num from LOADED_MOBILE_NUMBERS)
>
> Change to:
>
> update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where exists (select 1 from
> LOADED_MOBILE_NUMBERS lmn where
> lmn.mobile_num=SUBSCRIPTIONTABLE.mobile_num);
>
> That should run a lot faster.
>
> Make sure you have indexes on both mobile_num columns.
>
> Chris
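
For concreteness, a sketch of the suggested indexes; the index names are
made up, while the table and column names come from the query above:

  CREATE INDEX subs_mobile_num_idx ON SUBSCRIPTIONTABLE (mobile_num);
  CREATE INDEX loaded_mobile_num_idx ON LOADED_MOBILE_NUMBERS (mobile_num);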


-- 

Jerome Macaranas
Systems/Network Administrator
GMA New Media, Inc.
Phone: (632) 9254627 loc 202
Fax: (632) 9284553
Mobile: (632) 918-9336819
[EMAIL PROTECTED]

Sanity is the playground for the unimaginative.






Re: [GENERAL] PostgreSQL slow after VACUUM

2004-11-26 Thread Arjen van der Meijden
Hi Nix,
The problem is that while doing the VACUUM FULL ANALYZE the table was
empty. It therefore gathered statistics for a situation which isn't there
anymore once you fill up the table. In an empty or small table it is
normal to do sequential scans, which you usually don't want in a large
table.
My suggestion is to VACUUM (FULL) the table after you've deleted the
data. Then fill up the table and do an ANALYZE when you're done filling it.

That way, the analysis of the data will be much more accurate. My guess
is it'll use indexes much sooner and be much faster. Have a look at the
output of EXPLAIN ANALYZE for your statement before doing the ANALYZE and
after.
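
A hedged sketch of that order of operations, again assuming a hypothetical
table name report_data:

  DELETE FROM report_data;    -- clear out last month's rows
  VACUUM FULL report_data;    -- reclaim space while the table is empty
  -- ... run the monthly load here ...
  ANALYZE report_data;        -- gather statistics on the filled table
  EXPLAIN ANALYZE SELECT count(*) FROM report_data;  -- compare plans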

Best regards,
Arjen
On 26-11-2004 7:35, Nikola Milutinovic wrote:
Hi all.
I have the following:
- Mandrake Linux 9.1
- PostgreSQL 7.3.2 MDK5
There is one DB and one DB user. The DB is cleared and loaded with
data of the same volume each month (monthly report). The volume is not small,
and it usually takes 3 hours to load. Loading is done with SQL files
which use transactions, 10,000 SQL statements per transaction.

A couple of days ago, the disk became full, since we were not doing VACUUM
on the DB at all. So, I deleted all records from the 3 tables the DB has
and performed "VACUUM FULL ANALYZE". This reclaimed the space.

My problem is that the load is now taking (by my estimate) 20 times longer!
Anything I could do to find out what's going on? There is nothing in the
logs that I can see.

Nix.


Re: [GENERAL] PostgreSQL slow after VACUUM

2004-11-26 Thread Greg Stark

Arjen van der Meijden <[EMAIL PROTECTED]> writes:

> My suggestion is to VACUUM (FULL) the table after you've deleted the data.
> Then fill up the table and do an ANALYZE when you're done filling it.

For now you might be able to improve matters by doing an ANALYZE even while
it's busy doing the import. Depending on how the import is coded, it may or
may not have an immediate effect.

VACUUM and ANALYZE (and VACUUM ANALYZE) are fairly quick and don't block
access to the tables, though they can slow it down somewhat. You probably want
to be running them frequently so you don't have to resort to VACUUM FULL.

-- 
greg




[GENERAL] PostgreSQL Config.

2004-11-26 Thread Ramesh Patel
Dear All,

I have Red Hat 9.0 and I use PostgreSQL and PHP. This database has been
running for the last two years; now I have some speed problems. I am new to
PostgreSQL, so how do I configure the postgresql.conf file? My hardware is a
P IV server, 1 GB RAM, 72 GB SCSI HDD, and it serves a web application. If
possible, please give me a config file.

Thanking You,
Ramesh Patel
System Dept., Banasdairy, Palanpur (B.K.)
E-Mail: [EMAIL PROTECTED]
Mobile: 9426 515251


Re: [GENERAL] PostgreSQL Config.

2004-11-26 Thread Richard Huxton
Ramesh Patel wrote:
Dear All,
I have Red Hat 9.0 and I use PostgreSQL and PHP.
This database has been running for the last two years; now I have some
speed problems. I am new to PostgreSQL, so how do I configure the
postgresql.conf file? My hardware is a P IV server, 1 GB RAM, 72 GB SCSI
HDD, and it serves a web application.
If possible, please give me a config file.
The best I can do is to point you to:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
That's a good introduction to performance tuning, and just the first 
section can help with lots of problems.

Before that though, how often do you run vacuum/analyse? Have you
noticed any problems there?
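
For flavor, a hedged sketch of the kind of settings discussed there; the
values are only illustrative for a 1 GB RAM machine of that era, not
recommendations:

  # postgresql.conf excerpts (7.x-era parameter names)
  shared_buffers = 8192          # ~64 MB with 8 kB buffers
  sort_mem = 8192                # per-sort memory, in kB
  effective_cache_size = 65536   # expected OS cache, in 8 kB pages (~512 MB)
  max_fsm_pages = 50000          # free space map; matters for VACUUM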

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] PostgreSQL slow after VACUUM

2004-11-26 Thread Pierre-Frédéric Caillaud
It seems this issue has been mentioned several times lately...
I'd propose something to avoid it:

    * When TRUNCATE'ing a table:
        - the row count is reset to 0 (of course!)
        - however, the column stats are kept, on the basis that the data which
will be inserted later in the table will most likely have at least some
correlation to the data that was in it before. Of course, VACUUM ANALYZE
will update the statistics with real ones when it is run.
    This would make TRUNCATE behave more like DELETE FROM...

    * When INSERT'ing into a table:
    The OP's problem was that the query was planned as if the table were
almost empty (hence the seq scan); well, it is in fact empty, but many rows
are inserted. Of course, the query can't be replanned in the middle of its
execution (although that'd be a nice feature to add to the wishlist for
postgres version 2020)...
    When planning for inserts, the planner currently uses the estimated
number of rows in the target table, taken from the statistics.
    However, at that time the planner already has an estimate of how many
rows it's going to insert:
        - INSERT ... SELECT, or CREATE TABLE ... AS SELECT:
            The SELECT from which the rows will come is planned, so there
            should be an estimated number of rows readily available.
        - COPY FROM...
            The number of rows could be estimated by peeking at the first N
            rows in the file (not on COPY FROM stdin though), or an additional
            parameter to COPY could be given.

    Then, the INSERT could be planned as if the target table contained
(estimated number of rows in the target table) + (estimated number of rows
to be inserted), and it would choose a good plan for populating empty
tables...

What do you think?


The problem is, that while doing the vacuum full ANALYZE the table was  
empty. It therefore gathered statistics of a situation which isn't there  
anymore when you fill up the table. In an empty or small table, it is  
normal to do sequential scans. Which you most of the time don't want in  
a large table.
My suggestion is to VACUUM (FULL) the table after you've deleted the  
data. Then fill up the table and do a ANALYZE when you're done filling  
it.

That way, the analysis of the data will be much more accurate. My guess  
is, it'll use indexes much sooner and be much faster. Have a look at the  
output of EXPLAIN ANALYZE yourstatement; before doing ANALYZE and after.

Best regards,
Arjen


Re: [GENERAL] PostgreSQL slow after VACUUM

2004-11-26 Thread Martijn van Oosterhout
On Fri, Nov 26, 2004 at 02:00:48PM +0100, Pierre-Frédéric Caillaud wrote:
> 
>   It seems this issue has been mentioned several times lately...
>   I'd propose something to avoid it:
>
>   * When TRUNCATE'ing a table:
>   - the row count is reset to 0 (of course!)
>   - however, the column stats are kept, on the basis that the
>   data which will be inserted later in the table will most
> likely have at

Currently, TRUNCATE doesn't affect the statistics, nor does it set the
row count to zero. Which means that when new data is inserted it will
plan as if the table were still full. So no change required here.

>   * When INSERT'ing into a table:
>   The OP's problem was that the query was planned as if the table was
>   almost empty (hence the seq scan); well, it is in fact empty, but many
> rows are inserted. Of course, the query can't be replanned in the middle
> of its execution (although that'd be a nice feature to add to the wishlist
> for postgres version 2020)...

The size of the table you are inserting into is irrelevant to the
planner. All that matters is where the data is coming from. Think about
it: UPDATE, DELETE and INSERT are just fancy wrappers around SELECT that
do something special with the rows that are finally selected. Which one
you use has very little effect on the plan finally chosen. I guess as a
special case, DELETE can exploit the fact that no data need be
returned, only a list of rows...

The original user's problem stemmed from the fact that they were running
ANALYZE on an empty table; *that* was what killed the statistics. Stop
doing that and the statistics will remain as if the table were full. The
VACUUM will set the row count back to zero, but that's it.

Seems PostgreSQL is already doing what you suggest anyway...
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




[GENERAL] Inserting greek letters

2004-11-26 Thread Adam Witney

Hi,

I have a database fronted with PHP. I want the user to be able to enter
scientific-notation characters (Greek letters, really) and store them in the
database.

Do I have to have created the database with UNICODE encoding to do this?
Also, is there anything in the PHP I have to adjust to be able to store the
data and display the data?

Thanks for any help

Adam






Re: [GENERAL] Inserting greek letters

2004-11-26 Thread Martijn van Oosterhout
On Fri, Nov 26, 2004 at 02:52:18PM +, Adam Witney wrote:
> Do I have to have created the database with UNICODE encoding to do this?
> Also, is there anything in the PHP I have to adjust to be able to store the
> data and display the data?

Obviously, your database needs to be able to store the characters.
Unicode is good for this because it can store almost anything, but
depending on the specific character, the encoding you're already using may
be fine.

As for PHP, you need to have the browser and PHP agree on what
character set they're going to use. Then you set the client encoding
appropriately and PostgreSQL will make sure you get the information you
expect.

As a final note, you could always store your fancy characters as HTML
entities; then you could store them anywhere, but then PostgreSQL will
also not be able to do any conversion for you...

hope this helps,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] Inserting greek letters

2004-11-26 Thread Adam Witney

Hi Martijn, thanks for your quick reply...

> As for PHP, you need to have the browser and PHP agree on what
> character set they're going to use. Then you set the client encoding
> appropriately and PostgreSQL will make sure you get the information you
> expect.

I'm not sure; where do I set the client encoding?

Thanks

adam






Re: [GENERAL] Inserting greek letters

2004-11-26 Thread Martijn van Oosterhout
On Fri, Nov 26, 2004 at 03:16:25PM +, Adam Witney wrote:
> 
> Hi Martijn, thanks for your quick reply...
> 
> > As for PHP, you need to have the browser and PHP agree on what
> > character set they're going to use. Then you set the client encoding
> > appropriately and PostgreSQL will make sure you get the information you
> > expect.
> 
> I'm not sure; where do I set the client encoding?

Execute the command:

SET CLIENT_ENCODING = <encoding>;

Maybe one of:

LATIN1
LATIN9
UNICODE
WIN1250

You need to configure one; don't ask me how though, I don't know PHP or
HTML.

-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




[GENERAL] Query on exception handling in PL/pgSQL

2004-11-26 Thread diya das
Hi,

  I am just a beginner in PostgreSQL and am writing some
functions in PL/pgSQL. I use the libpq interface in
the client to call these functions. My pgsql function
does an insert into a table, and I have opened a
transaction block before calling my function from the
client. When the insert operation fails due to a unique
key violation, the whole transaction aborts. Is there a
way I can handle this error in my pgsql function rather
than aborting and not executing the rest of the
operations? I have a workaround, but I was wondering
if there is an inexpensive way of doing it.

Another query I have is related to returning values
from functions. How can I return multiple values from a
function? Is there any support for OUT variables in
pgsql functions?
Any pointers on this are greatly appreciated.

Thanks
Diya.






[GENERAL] Errors when deleting cords and droping table at the same time

2004-11-26 Thread Cheng Shan


Hello, everyone.

   I ran into some difficult questions while developing an application.
   We have an old table "object" whose definition is:

  CREATE TABLE object (
 name name, -- object's name
 pid oid,   -- parent's object ID
 PRIMARY KEY(oid),
 FOREIGN KEY(pid)
REFERENCES object(oid)
ON DELETE SET NULL
  );

The table "object" is used as a base table, and there are many tables (over
1500) derived from it, because we treat almost everything as an object.
Though the idea isn't bad and it really does simplify our conceptual model,
it also results in low performance: when we want to find an object in table
"object", the search runs over 1500 tables, and the long wait is
insufferable. And the request that, given a pid, finds the children's oids
is called frequently.

So we decided to reimplement the system, and now we treat the parent-child
relationship as an ordinary relationship, named "containment". Since the
"pid" field is widely used by other programmers, we can't delete it, so we
have the task of keeping the two tables consistent. To reduce the
possibility of mistakes, we defined triggers on every table.

Now the system works well for "insert", "delete" and "update" operations.
But when I drop a table that inherits from "object", or the table "object"
itself, the triggers on the table do not fire. To handle such cases, I
wrote a function "_test_delete_and_drop()" that deletes all records from
table "object" and then drops the table. But I got puzzling errors when I
did that.

I have two questions:
   1. Why don't the triggers fire when a table is dropped?
   2. I can do the "delete" and the "drop" separately without any problem.
Why does the error come out when I combine the two operations?
Note: The source code is in the attachment (test.sql).

   Could anyone help?
   Any effort will be appreciated.
   Thanks very much!
Function Description:
_test_setup(): Initialize the environment
_test_teardown(): Clean up
_test_object_insert(): Insert records into "object"
_test_object_update(): Update records of "object"
_test_object_delete(): Delete records from "object"
_test_containment_insert(): Insert records into "containment"
_test_containment_delete(): Delete records from "containment"
_trigger_update_containment(): The trigger on table "object" 
that is used to update table "containment"
_trigger_update_object(): The trigger on table "containment" 
that is used to update table "object"
_test_delete_and_drop(): Delete records from "object" and drop table 
"object"

Usage:
Before running any test, you should reload test.sql using the command
"\i test.sql", because a function is analysed and cached once it is run;
if the table has been deleted, there will be errors.
See the results using the commands "SELECT * FROM object;" and
"SELECT * FROM containment;" to examine the records in table "object"
and table "containment".

Test inserting records into table "object":
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT * FROM object;
SELECT * FROM containment;

Test updating table "object":
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT _test_object_update();
SELECT * FROM object;
SELECT * FROM containment;

Test deleting records from "object":
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT _test_object_delete();
SELECT * FROM object;
SELECT * FROM containment;

Test inserting records into table "containment":
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT _test_containment_insert();
SELECT * FROM object;
SELECT * FROM containment;

Test deleting records from table "containment":
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT _test_containment_delete();
SELECT * FROM object;
SELECT * FROM containment;

!!  All above test cases can pass. But the following is an exception:
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT _test_delete_and_drop();


-
-- Tests
-

-- Initialize the test environment
CREATE OR REPLACE

[GENERAL] Error after adding Foreign Keys

2004-11-26 Thread j0rd1 adame
Hi,

When I add a FK to a table and then try to do \d table,
I get this:

ERROR:  Function pg_catalog.pg_get_triggerdef(oid) does not exist
Unable to identify a function that satisfies the given argument types
You may need to add explicit typecasts

i have googled for this without any luck

thanx for any help

-- 
Jordi Adame V
jordi (.a.) jordi.ent
http://www.lawaloca.com




Re: [GENERAL] pg_dump and languages

2004-11-26 Thread Tom Lane
Adrian Klaver <[EMAIL PROTECTED]> writes:
> I used the pg_dump from Postgres 8.0beta5 to dump the data from a version 
> 7.4.0 database. Both databases are located on SuSE Linux machines. The 
> pg_restore to the 8.0 version went very well with the exception of the 
> languages. I have plpgsql and plsh installed in the 7.4.0 database. pg_dump 
> handled the dependencies for plpgsql alright but not for plsh. Searching the 
> archives showed a previous problem with languages installed to pg_catalog 
> which is where plsh is installed. However, I thought this had been solved. Is
> this correct? The other thought that occurred to me is that plsh is an 
> untrusted language. I am dumping and restoring as user postgres so my 
> understanding is that this should overcome any permissions issues. Again am I 
> correct in thinking this?

This would be a more useful report had you said exactly what the problem
was ...

regards, tom lane



Re: [GENERAL] PostgreSQL slow after VACUUM

2004-11-26 Thread Tom Lane
Nikola Milutinovic <[EMAIL PROTECTED]> writes:
> - PostgreSQL 7.3.2 MDK5
> ...
> A couple of days ago, the disk became full, since we were not doing VACUUM
> on the DB at all. So, I deleted all records from the 3 tables the DB has
> and performed "VACUUM FULL ANALYZE". This reclaimed the space.

The subsequent discussion pointed out that you probably shouldn't have
ANALYZEd right at that point, but I didn't see anyone suggest that you
should have done TRUNCATEs rather than delete all/vacuum full.  The
TRUNCATE way is a good deal faster, and it will also eliminate index
bloat while vacuum full won't.
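
A sketch of that approach with hypothetical table names (on 7.3, TRUNCATE
takes one table per statement):

  TRUNCATE TABLE table_one;
  TRUNCATE TABLE table_two;
  TRUNCATE TABLE table_three;
  -- load the new month's data, then:
  ANALYZE;   -- regather statistics once the tables are full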

BTW, 7.3.2 has several known serious bugs; I'd recommend an update to
7.3.8, if not upgrading to 7.4.*.

regards, tom lane



Re: [ok] [GENERAL] Error after adding Foreign Keys

2004-11-26 Thread j0rd1 adame
Never mind; I just upgraded to 7.4 and everything is just fine now.

thanx anyway

On Fri, 26 Nov 2004 at 12:24, j0rd1 adame wrote:
> Hi,
>
> When I add a FK to a table and then try to do \d table,
> I get this:
>
> ERROR:  Function pg_catalog.pg_get_triggerdef(oid) does not exist
> Unable to identify a function that satisfies the given argument
> types You may need to add explicit typecasts
>
> i have googled for this without any luck
>
> thanx for any help

-- 
Jordi Adame V
jordi (.a.) jordi.ent
http://www.lawaloca.com




Re: [GENERAL] Query on exception handling in PL/pgSQL

2004-11-26 Thread Tom Lane
diya das <[EMAIL PROTECTED]> writes:
> My pgsql function
> does an insert into a table and I have opened a
> transaction block before calling my function from the
> client. When the insert operation fails due to a unique
> key violation the whole transaction aborts. Is there a
> way I can handle this error in my pgsql function rather
> than aborting and not executing the rest of the
> operations?

Not before PG 8.0.  In earlier versions you might be able to work around
the problem by testing for existence of the target key before you
insert; though this has obvious race-condition problems if you expect
multiple clients to be doing it simultaneously.
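
A minimal sketch of that pre-check workaround in PL/pgSQL; the table and
column names are hypothetical, and it is still racy under concurrency, as
noted above:

  CREATE OR REPLACE FUNCTION insert_if_absent(integer, text)
  RETURNS boolean AS '
  DECLARE
      p_id  ALIAS FOR $1;
      p_val ALIAS FOR $2;
  BEGIN
      PERFORM 1 FROM mytable WHERE id = p_id;
      IF FOUND THEN
          RETURN false;   -- key already present; skip the insert
      END IF;
      INSERT INTO mytable (id, val) VALUES (p_id, p_val);
      RETURN true;
  END;
  ' LANGUAGE plpgsql;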

regards, tom lane



[GENERAL] Problems setting up slony-1

2004-11-26 Thread Glen Eustace
I am trying to set up Slony-I and need some off-list assistance from
someone who has got things going (or at least knows more than me :-).

I am running 7.4.6 on both master and slave, but when I run the slonik
script, I get this error:

:5: Initializing the cluster
:7: PGRES_FATAL_ERROR create schema "_GZ"; - ERROR:  permission
denied for database admin
:9: Could not initialize the cluster!

I am using a 'slony' user which has createdb rights, but the database
'admin' was created by a different user.

Any assistance, as always, much appreciated.
-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Glen and Rosanne Eustace,
GodZone Internet Services, a division of AGRE Enterprises Ltd.,
P.O. Box 8020, Palmerston North, New Zealand 5301
Ph/Fax: +64 6 357 8168, Mob: +64 27 5 424 015, Web: www.godzone.net.nz

"A Ministry specialising in providing low-cost professional Internet
Services to NZ Christian Churches, Ministries and Organisations"




Re: [GENERAL] Inserting greek letters

2004-11-26 Thread Keary Suska
on 11/26/04 8:16 AM, [EMAIL PROTECTED] purportedly said:

>> As for PHP, you need to have the browser and PHP agree on what
>> character set they're going to use. Then you set the client encoding
>> appropriately and PostgreSQL will make sure you get the information you
>> expect.
> 
> Im not sure, where do I set the client encoding?

AFAIK, you can't and/or don't need to set character encoding. The issues
would be display and data submission. Your best bet for display is to set
the "default_charset" directive, so PHP will always tell the browser the
correct character set to use. PHP shouldn't care about form data content,
and should handle data literally--i.e. decode it into an octet stream. You
may, however, need to call pg_set_client_encoding() (or set the appropriate
environment variable, as the case may be), to ensure that PHP talks to PG
properly. This might mean that all queries must be properly encoded (i.e.
including keywords), and there may be implications here as well. I am not
sure how PG handles commands vs data.

Best,

Keary Suska
Esoteritech, Inc.
"Demystifying technology for your home or business"




Re: [GENERAL] pg_dump and languages

2004-11-26 Thread Adrian Klaver
On Friday 26 November 2004 10:26 am, Tom Lane wrote:
> Adrian Klaver <[EMAIL PROTECTED]> writes:
> > I used the pg_dump from Postgres 8.0beta5 to dump the data from a version
> > 7.4.0 database. Both databases are located on SuSE Linux machines. The
> > pg_restore to the 8.0 version went very well with the exception of the
> > languages. I have plpgsql and plsh installed in the 7.4.0 database.
> > pg_dump handled the dependencies for plpgsql alright but not for plsh.
> > Searching the archives showed a previous problem with languages installed
> > to pg_catalog which is were plsh is installed. However, I thought this
> > had been solved. Is this correct? The other thought that occurred to me
> > is that plsh is an untrusted language. I am dumping and restoring as user
> > postgres so my understanding is that this should overcome any permissions
> > issues. Again am I correct in thinking this?
>
> This would be a more useful report had you said exactly what the problem
> was ...
>
>   regards, tom lane
>
Sorry, I should have been more explicit. The dump loaded functions depending
on plsh before the function that created the language. In fact it did not
install the language at all. I have installed both plpgsql and plsh into the
7.4 database as well as the template1 database for the 7.4 cluster. Running
createlang -l shows both installed, but only plpgsql is installed by the
dump/restore procedure into the 8.0beta cluster. I compiled plsh on the
machine with the 8.0beta, so the files were in place on the new installation.
At this point only two functions depend on plsh, and Peter Eisentraut
includes a script to load the language, so this is more of a wondering-why
question than a 'problem'.
-- 
Adrian Klaver   
[EMAIL PROTECTED]



[GENERAL] Invalid Character Data Problem

2004-11-26 Thread Hunter Hillegas
I am having some trouble pulling some data out of the database. I have
tried against 7.4.5, 8.0b4 and 8.0b5 and get the same result.

I have used both the v308 and CVS HEAD of the JDBC driver.

When I SELECT from a certain table, I see this JDBC exception:

"Invalid character data was found.  This is most likely caused by stored
data containing characters that are invalid for the character set the
database was created in.  The most common example of this is storing 8bit
data in a SQL_ASCII database."

The database is indeed of type SQL_ASCII. The table stores mailing list data
and has about 400,000 rows.

Looking at the data via psql, I see that some of the rows have strange
characters in them, such as question marks where I would not expect them,
etc...

What are my options? Is there a way to identify the 'bad' records, or the
ones causing trouble?

Thanks,
Hunter





[GENERAL] Bulk data insertion

2004-11-26 Thread Jonathan Daugherty
Hello,
I have a PL/PgSQL function that I need to call with some ARRAY 
parameters.  These array values are very large -- typically thousands of 
elements.  Each element is a 4-element array.  This function is called 
to do some sanity checking on the array data and use the individual 
elements to do inserts where appropriate.

The problem is that I don't want to spend a lot of time and memory 
building such a query (in C).  I would like to know if there is a way to 
take this huge chunk of data and get it into the database in a less 
memory-intensive way.  I suppose I could use COPY to put the data into a 
table with triggers that would do the checks on the data, but it seems 
inelegant and I'd like to know if there's a better way.

Thoughts?  Thanks for your time.
--
  Jonathan Daugherty
  Command Prompt, Inc. - http://www.commandprompt.com/
  PostgreSQL Replication & Support Services, (503) 667-4564


Re: [GENERAL] Regexp matching: bug or operator error?

2004-11-26 Thread Kenneth Tanzer
OK.  I've been trying to get my mind around this, and think about ways
to improve the documentation (more about that below).  I'm pretty sure
that I can see the general concept now, and am almost convinced that it
really does work as described.  I guess I really don't like the whole RE
inheriting its preference from the first preference it encounters.  It
seems like it would be better to use a switch preceding the whole
expression, as is done with (?i) for case-insensitivity.  Designating
(?g) for greedy and (?G) for non-greedy seems like it would be less
inviting of human error.  But that's mostly a quibble, as long as it
works consistently as described!

I've still got a few problems, though.  Per 9.6.3.5, "An RE consisting 
of two or more branches connected by the | operator prefers longest match."

Apply that to this example:
   SELECT substring('abc' FROM '.*?|.*?');
which returns a greedy 'abc'.  In this case, the whole RE is then 
supposed to be greedy (because of the "|").  So I suppose that _might_ 
be said to work as described, even though the | in this case overrides 
the expressed preference of both of its components (in which case I'm 
just griping about not liking the way this was implemented. :) )

But what about these two queries:
   SELECT substring('a' FROM 'a?|a?');
This returns a greedy 'a', similar to the example above.  But then why does
   SELECT substring('ba' FROM 'a?|a?');
return a non-greedy empty string?  Using the logic from the previous 
examples, this should do a greedy match and return 'a' as well.  If this 
isn't a bug, please explain why!

With regard to the documentation, after re-reading it many times I'd 
have to say the information is all there, but it's hard to absorb.  I 
think the main problem is that the term "preference" is used to discuss 
greedy/non-greediness, as well as the words greedy & non-greedy.  This 
makes it easier for humans to fail to connect the dots.  People are more 
likely to be familiar with greedy & non-greedy (especially those who've 
used other regex's before), whereas preference isn't as clear.  Perhaps 
the term "greediness preference" could be used in place of "preference", 
or "preference" could be dropped altogether.

In general, I think "prefers shortest match" could be replaced with "is 
non-greedy", and "prefers longest match" could be replaced with "is 
greedy".  A little bit more context might be helpful as well, as in 
example c) below.

As an example, here's a couple of different possibilities for the second 
sentence of the section:

a) If the RE could match more than one substring starting at that point, 
its choice is determined by its greediness /preference/: either the 
longest substring (greedy), or the shortest (non-greedy).

b) If the RE could match more than one substring starting at that point, 
the match can be either greedy (matching the longest substring) or 
non-greedy (matching the shortest substring).  Whether an RE is greedy 
or not is determined by the following rules...

c)  Like individual components of an RE, the entire RE can be either 
greedy (matching the longest substring) or non-greedy (matching the 
shortest substring). 

Do you think an edit along these lines would be helpful?  If so, I'd be 
willing to take a shot at re-writing that section.  Let me know.  Thanks.

Ken Tanzer

Tom Lane wrote:
Ken Tanzer <[EMAIL PROTECTED]> writes:
 

Thanks for the quick responses yesterday.  At a minimum, it seems like 
this behavior does not match what is described in the Postgres 
documentation (more detail below).
   

After looking at this more, I think that it is actually behaving as
Spencer designed it to.  The key point is this bit from the fine print
in section 9.6.3.5:
A branch has the same preference as the first quantified atom in it
which has a preference.
("branch" being any regexp with no outer-level | operator)
What this apparently means is that if the RE begins with a non-greedy
quantifier, then the matching will be done in such a way that the whole
RE matches the shortest possible string --- that is, the whole RE is
non-greedy.  It's still possible for individual items within the RE to
be greedy or non-greedy, but that only affects how much of the shortest
possible total match they are allowed to eat relative to each other.
All the examples I've looked at seem to work "properly" when seen in
this light.
I can see that this behavior could have some usefulness, and if need be
you can always override it by writing (...){1,1} around the whole RE.
So at this point I'm disinclined to vary from the Tcl semantics.
This does leave us with a documentation problem though, because this
behavior is surely not obvious from what it says in 9.6.3.5.  If you've
got any thoughts about a better explanation, I'm all ears.
 

Here's the actual regex we're working on--any help 
reformulating this would be great!
   

 

select substring('Searching for log 5376, referenced in this text'
  

Re: [GENERAL] Problems setting up slony-1

2004-11-26 Thread Christopher Browne
A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] (Glen Eustace) 
wrote:
> I am trying to set up Slony-I and need some off-list assistance from
> someone who has got things going (or at least knows more than me :-).
>
> I am running 7.4.6 on both master and slave, but when I run the slonik
> script, I get this error:
>
> :5: Initializing the cluster
> :7: PGRES_FATAL_ERROR create schema "_GZ"; - ERROR:  permission
> denied for database admin
> :9: Could not initialize the cluster!
>
> I am using a 'slony' user which has createdb rights, but the database
> 'admin' was created by a different user.

The problem likely falls into one of two things:

 a) pg_hba.conf is configured to deny access from the host you are
running slonik on;

 b) The 'slony' user is not a superuser, which, as is quite clearly
documented, it must be...
-- 
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))

Outside of a dog,  a book is man's best friend. Inside  of a dog, it's
too dark to read. -Groucho Marx



[GENERAL] UnSubEscrib

2004-11-26 Thread Carlos Roberto Chamorro Mostacilla
Please remove me from the group.




[GENERAL] Calling function dynamic

2004-11-26 Thread Carlos Roberto Chamorro Mostacilla
Can anyone help me get the result of a dynamic call to a function?
This is the case: I have three functions,
F1(), F2() and F3(), and I want to execute one of them dynamically.
Initially I built an UPDATE statement against a table
and then retrieved the value from the table like this:

  -- Determine the function
  MyFunction := ''F1'';

  -- Build the UPDATE
  sbSQL := ''UPDATE MyTabla SET MyCampo = '' ||
MyFunction || ''();'';

  -- Execute the UPDATE
  EXECUTE sbSQL;

  -- Retrieve the value
  SELECT MyCampo INTO MyValor FROM MyTabla;

The problem is that when this is done massively,
as in my case, and one of the functions also
updates MyTabla, the table is locked and the
UPDATE does not apply.

Does anyone know how I could do the same without using
tables to retrieve the function's value, or how to
unlock the table?
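
A hedged sketch of one way to capture a dynamic function result without the
intermediate table, using PL/pgSQL's FOR ... IN EXECUTE loop; the wrapper
name is made up, and it assumes the called functions return integer:

  CREATE OR REPLACE FUNCTION call_dynamic(text) RETURNS integer AS '
  DECLARE
      MyFunction ALIAS FOR $1;
      MyValor    integer;
      rec        RECORD;
  BEGIN
      -- run the chosen function via a dynamic SELECT and capture the result
      FOR rec IN EXECUTE ''SELECT '' || MyFunction || ''() AS result'' LOOP
          MyValor := rec.result;
      END LOOP;
      RETURN MyValor;
  END;
  ' LANGUAGE plpgsql;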











Re: [GENERAL] pg_dump and languages

2004-11-26 Thread Tom Lane
Adrian Klaver <[EMAIL PROTECTED]> writes:
> Sorry should have been more explicit. The dump loaded functions depending on 
> plsh before the function that created the language. In fact it did not 
> install the language at all.

Ah.  Looking back at your first message, I see you'd installed plsh into
the pg_catalog schema rather than a user schema.  pg_dump specifically
doesn't dump anything that appears in pg_catalog; it assumes all of that
stuff is supplied by the system.  So the above is expected behavior.

I haven't looked at plsh, but if it installs stuff directly into
pg_catalog, I'd call that a mistake.  At least it shouldn't be the
default behavior.

(Note for pedantry's sake: a language doesn't really belong to any
particular schema; but its support functions do, and pg_dump treats
the language as belonging to the same schema as the support functions.)

regards, tom lane



Re: [GENERAL] Regexp matching: bug or operator error?

2004-11-26 Thread Tom Lane
Kenneth Tanzer <[EMAIL PROTECTED]> writes:
> But what about these two queries:
> SELECT substring('a' FROM 'a?|a?');
> This returns a greedy 'a', similar to the example above.  But then why does
> SELECT substring('ba' FROM 'a?|a?');
> return a non-greedy empty string?

You're ignoring the first rule of matching: when there is more than one
possible match, the match starting earliest in the string is chosen.
The longer-or-shorter business only applies when there are multiple
legal ways to form a match starting at the same place.  In this case
'a?' can form a legal match at the beginning of the string (ie, match
to no characters) and so the fact that a longer match is available later
in the string doesn't enter into it.
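
Two illustrations of that rule, consistent with the behavior reported
earlier in the thread:

  SELECT substring('ba' FROM 'a?|a?'); -- '' : an empty match at the start wins
  SELECT substring('ba' FROM 'a');     -- 'a': no match is possible earlier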

> With regard to the documentation, after re-reading it many times I'd 
> have to say the information is all there, but it's hard to absorb.

I'd agree.  This section was taken nearly verbatim from Henry Spencer's
man page for the regexp package, and with all due respect to Henry,
it's definitely written in geek reference-page-speak.  Maybe a few
examples would help.

On the other hand, I don't want to try to turn the section into a regexp
tutorial --- there are entire books written about regexps (I quite like
the O'Reilly one, btw).  So there's a bulk-vs-friendliness tradeoff to
be made.

> I think the main problem is that the term "preference" is used to
> discuss greedy/non-greediness, as well as the words greedy &
> non-greedy.

Good point.  It would help to use only one term.

> As an example, here's a couple of different possibilities for the second 
> sentence of the section:

I like this one:

> b) If the RE could match more than one substring starting at that point, 
> the match can be either greedy (matching the longest substring) or 
> non-greedy (matching the shortest substring).  Whether an RE is greedy 
> or not is determined by the following rules...

Given that intro, there's no need to use the word "preference" at all.
Or almost --- what term will you use for "RE with no preference"?
Perhaps you can avoid the question by pointing out that greediness
only matters for quantifiers, since unquantified REs can only match
fixed-length strings.

The point you make here:

> c)  Like individual components of an RE, the entire RE can be either 
> greedy (matching the longest substring) or non-greedy (matching the 
> shortest substring). 

is also important, but probably needs to be a completely separate
paragraph containing its own example.

> Do you think an edit along these lines would be helpful?  If so, I'd be 
> willing to take a shot at re-writing that section.  Let me know.  Thanks.

Fire away.  Please send whatever you come up with to the pgsql-docs
list.

regards, tom lane



Re: [GENERAL] Bulk data insertion

2004-11-26 Thread Tom Lane
Jonathan Daugherty <[EMAIL PROTECTED]> writes:
> The problem is that I don't want to spend a lot of time and memory 
> building such a query (in C).  I would like to know if there is a way to 
> take this huge chunk of data and get it into the database in a less 
> memory-intensive way.  I suppose I could use COPY to put the data into a 
> table with triggers that would do the checks on the data, but it seems 
> inelegant and I'd like to know if there's a better way.

Actually I'd say that is the elegant way.  SQL is fundamentally a
set-oriented (table-oriented) language, and forcing it to do things in
an array fashion is just misusing the tool.
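
A hedged sketch of that COPY-into-a-checked-table approach; every name here
is hypothetical:

  CREATE TABLE staging (a integer, b integer, c integer, d integer);

  CREATE OR REPLACE FUNCTION check_staging_row() RETURNS trigger AS '
  BEGIN
      -- sanity-check each 4-element row as it arrives
      IF NEW.a IS NULL OR NEW.a < 0 THEN
          RAISE EXCEPTION ''bad value in column a: %'', NEW.a;
      END IF;
      INSERT INTO target_table (a, b, c, d)
          VALUES (NEW.a, NEW.b, NEW.c, NEW.d);
      RETURN NEW;
  END;
  ' LANGUAGE plpgsql;

  CREATE TRIGGER staging_check BEFORE INSERT ON staging
      FOR EACH ROW EXECUTE PROCEDURE check_staging_row();

  COPY staging FROM '/tmp/bulk.dat';   -- the checks run per row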

regards, tom lane



Re: [GENERAL] row-level deadlock problem

2004-11-26 Thread Tom Lane
Kamil Kaczkowski <[EMAIL PROTECTED]> writes:
> I have problems with deadlocks caused by(at least I think so) row-level
> locks and I can't find the reason.

The failure seems clearly a deadlock on row-level locks.  Are you
certain you've removed all relevant FKs (those pointing to the table
as well as out of it)?  Another possible explanation is if the UPDATE
command can update more than one row --- in that case different backends
might happen to reach the target rows in different orders.

regards, tom lane



Re: [GENERAL] row-level deadlock problem

2004-11-26 Thread Kamil Kaczkowski
On Fri, 26 Nov 2004, Tom Lane wrote:

> Kamil Kaczkowski <[EMAIL PROTECTED]> writes:
> > I have problems with deadlocks caused by(at least I think so) row-level
> > locks and I can't find the reason.
>
> The failure seems clearly a deadlock on row-level locks.  Are you
> certain you've removed all relevant FKs (those pointing to the table
> as well as out of it)?
Yes, I browsed the whole database schema; all FKs have been dropped.
> Another possible explanation is if the UPDATE
> command can update more than one row --- in that case different backends
> might happen to reach the target rows in different orders.
This could be it.
Yes, this UPDATE changes several rows; I didn't know this could be a
problem.
My understanding was that the row-level locking of an UPDATE statement is
somehow atomic and locks all matched rows at once.
But what's the solution? How can I force UPDATEs to lock rows in the same
order? There's no ORDER BY clause for UPDATE.
Thanks for help.
--
Kamil Kaczkowski
[EMAIL PROTECTED]



Re: [GENERAL] pg_dump and languages

2004-11-26 Thread Adrian Klaver
On Friday 26 November 2004 03:11 pm, Tom Lane wrote:
> Adrian Klaver <[EMAIL PROTECTED]> writes:
> > Sorry should have been more explicit. The dump loaded functions depending
> > on plsh before the function that created the language. In fact it did not
> > install the language at all.
>
> Ah.  Looking back at your first message, I see you'd installed plsh into
> the pg_catalog schema rather than a user schema.  pg_dump specifically
> doesn't dump anything that appears in pg_catalog; it assumes all of that
> stuff is supplied by the system.  So the above is expected behavior.
>
> I haven't looked at plsh, but if it installs stuff directly into
> pg_catalog, I'd call that a mistake.  At least it shouldn't be the
> default behavior.
>
> (Note for pedantry's sake: a language doesn't really belong to any
> particular schema; but its support functions do, and pg_dump treats
> the language as belonging to the same schema as the support functions.)
>
>   regards, tom lane
>
Problem solved. The language handler function was in both the pg_catalog and
public schemas. The pg_catalog version was masking the public version.
Removing the pg_catalog version allowed the pg_dump program to sort things
out properly, and the restore completed without a problem. Thanks for the
insight.
-- 
Adrian Klaver   
[EMAIL PROTECTED]



Re: [GENERAL] row-level deadlock problem

2004-11-26 Thread Tom Lane
Kamil Kaczkowski <[EMAIL PROTECTED]> writes:
>> Another possible explanation is if the UPDATE
>> command can update more than one row --- in that case different backends
>> might happen to reach the target rows in different orders.

> Yes, this UPDATE changes several rows, I didn't know this can be a
> problem.

> My understanding was that the row-level locking of an UPDATE statement is
> somehow atomic and locks all matched rows at once.

Nope.

> But what's the solution? How can I force UPDATEs to lock rows in the same
> order? There's no ORDER BY clause for UPDATE.

Change things so you don't need to update more than one row per query,
perhaps?  The lack of any primary key on that table was already pretty
disturbing from a database-theory point of view.  Maybe you should
rethink the table layout.
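
One common workaround, sketched here as a hedged illustration rather than
advice from the thread: take the row locks in a deterministic order before
the multi-row UPDATE (assuming a table like test(id, val), as defined in
the follow-up below):

  BEGIN;
  -- lock the target rows in id order, so concurrent runs can't deadlock
  SELECT id FROM test WHERE val <> 1 ORDER BY id FOR UPDATE;
  UPDATE test SET val = 1 WHERE val <> 1;
  COMMIT;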

regards, tom lane



Re: [GENERAL] row-level deadlock problem

2004-11-26 Thread Kamil Kaczkowski
On Fri, 26 Nov 2004, Tom Lane wrote:

> > My understanding was that the row-level locking of an UPDATE statement
> > is somehow atomic and locks all matched rows at once.
>
> Nope.
>
> > But what's the solution? How can I force UPDATEs to lock rows in the same
> > order? There's no ORDER BY clause for UPDATE.
>
> Change things so you don't need to update more than one row per query,
> perhaps?  The lack of any primary key on that table was already pretty
> disturbing from a database-theory point of view.  Maybe you should
> rethink the table layout.
Yes, I know. I'm not the developer of this application; my job is
only to find the reasons behind those deadlocks and suggest a solution.
Anyway, I'm surprised that UPDATE statements are so sensitive to lock
ordering. Consider a more general case with the following table:
CREATE TABLE test(id serial not null primary key, val integer not null);

Does it mean that there's no simple way (without explicit locking) to run:
UPDATE test SET val=1;
safely in concurrent transactions (let's name them T1, T2), assuming that
other transactions are modifying the table frequently, e.g.:
T1: UPDATE started
... some other activity on table test ...
T2: UPDATE started (with a different row order than T1's update)
DEADLOCK

If it works like this, I'm probably lucky it didn't bite me before.
Best regards.
--
Kamil Kaczkowski
[EMAIL PROTECTED]



[GENERAL] Front Ends

2004-11-26 Thread mstory


I am new to the database field and am currently working on a complex
database that relies heavily on dynamically created tables for data storage. My
partner and I have therefore been trying to find a front-end utility that can
handle using tables with names that aren't known at compile time. A few
questions:

1. Is this architecture (relying heavily on dynamically created tables) sound
enough to yield a production-quality database?
2. What front-end language/utility could handle the dynamic environment of this
database (my partner and I have looked into Ruby on Rails/Zope/zul and Java),
and still yield a production-quality product? Are any of the above proposed
solutions better than the others? And if anyone has experience with these, are
they capable of handling multiple dynamically created tables?

Thanks

Matthew Story



Re: [GENERAL] Front Ends

2004-11-26 Thread Richard Welty
On Fri, 26 Nov 2004 22:35:45 -0600 [EMAIL PROTECTED] wrote:
> 1. Is this architecture (relying heavily on dynamically created tables) sound
> enough to yield a production-quality database?

i've seen this method used in _extremely large_ production databases, such
as the ones at my current contracting gig (at the firm which handles NYS
and much NYC tax processing on an outsourced basis.)

so yes, i'd say you can make this sort of stuff work well in production.

richard
-- 
Richard Welty [EMAIL PROTECTED]
Averill Park Networking 518-573-7592
Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security

