[GENERAL] ECPG problem with 8.3

2008-01-11 Thread Peter Wilson
I've just tried compiling our project against the 8.3RC1 code. This is the first 
time I've tried any release of 8.3.


Several components use ECPG, and I'm now getting an ECPG error; compiling against 
8.2.3 is fine. I've checked the 8.3 release documentation and there don't seem to 
be any changes to the ECPG interface - all the changes appear to be behind the 
scenes, the biggest being a change to the backend protocol.


# gcc --version
gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

# /usr/local/pgsql/bin/ecpg --version
ecpg (PostgreSQL 8.3RC1) 4.4.0

--- The error:
# make
make[1]: Entering directory `/var/build/whitebeam/templates/pgsql/common'
/usr/local/pgsql/bin/ecpg -t -o crbembsql.cxx -I /usr/local/pgsql/include 
crbembsql.pgC

g++ -c crbembsql.cxx -o crbembsql.o
g++ -c -g -O2 -I/var/build/whitebeam/include -DHAVE_CONFIG_H 
-I/usr/local/pgsql/include -I/usr/local/pgsql/include/server 
-I/var/build/whitebeam/common -I/var/build/whitebeam/presentation 
-I/usr/include/openssl -I/usr/local/pgsql/include -I 
/var/build/whitebeam/templates -I ../common  crbembsql.cxx -o crbembsql.o
crbembsql.pgC: In member function `BOOL crbembsql::gensql(const char*, int, 
BOOL, CrbString*)':

crbembsql.pgC:254: error: invalid conversion from `int' to `ECPG_statement_type'
crbembsql.pgC:254: error:   initializing argument 6 of `bool ECPGdo(int, int, 
int, const char*, char, ECPG_statement_type, const char*, ...)'

make[1]: *** [crbembsql.o] Error 1
make[1]: Leaving directory `/var/build/whitebeam/templates/pgsql/common'
make: *** [all] Error 2

--- The relevant lines in crbembsql.pgC - the second is line 254:
EXEC SQL PREPARE U1 FROM :sl_sql;
EXEC SQL EXECUTE U1;

--- The output for those lines from the ECPG preprocessor :
{ ECPGprepare(__LINE__, NULL, 0, "u1", sl_sql);}
#line 253 "crbembsql.pgC"

{ ECPGdo(__LINE__, 0, 1, NULL, 0, 1, "u1", ECPGt_EOIT, ECPGt_EORT);}
#line 254 "crbembsql.pgC"

--- PROTOTYPE for ECPGdo taken from /usr/local/pgsql/include/ecpglib.h:
bool ECPGdo(const int, const int, const int, const char *, const char, 
const enum ECPG_statement_type, const char *, ...);

--
The changes to the ECPGdo prototype were made during 8.3 development 
(REL8_2_STABLE) and were checked in 2007/08/14 (version 1.71 of ecpglib.h) by 
user 'meskes'.
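
As a stand-alone illustration of the int/enum issue (not code from the
project - the names below are invented): C++ will not implicitly convert an
int to an enum type, so the literal 1 that ecpg generates for argument 6 no
longer compiles under g++, although a C compiler would accept it.

enum statement_type { st_normal, st_execute };        // stand-in for ECPG_statement_type

static void takes_enum(statement_type st) { (void)st; }  // stand-in for ECPGdo's 6th argument

int main()
{
    takes_enum(st_execute);                           // fine: an enum value
    // takes_enum(1);                                 // g++: invalid conversion from 'int'
    takes_enum(static_cast<statement_type>(1));       // an explicit cast compiles
    return 0;
}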


--
Any suggestions very much appreciated!

Peter Wilson
--
http://www.whitebeam.org - OpenSource Web Application Server



Re: [GENERAL] ECPG problem with 8.3

2008-01-13 Thread Peter Wilson

Michael Meskes wrote:

On Fri, Jan 11, 2008 at 11:51:08PM +, Peter Wilson wrote:
I've just tried compiling our project against the 8.3RC1 code. This is 
the first time I've tried any release of 8.3.

...
crbembsql.pgC:254: error: invalid conversion from `int' to `ECPG_statement_type'
crbembsql.pgC:254: error:   initializing argument 6 of `bool ECPGdo(int, 
int, int, const char*, char, ECPG_statement_type, const char*, ...)'


It seems that some compilers don't like int/enum aliasing here. I
changed this in CVS; could you please re-try?


Michael

Thank you Michael,
that fixes that problem. My build now gets further, but I get an error and a 
seg-fault later in the build.


I have a file that contains the following statement:


EXEC SQL FETCH ABSOLUTE :count SEARCHCURSOR INTO
   :db.contact_id, :db.uname, :db.type, :db.parent,
   :db.name, :db.phone, :db.fax, :db.email, :db.description,
   :db.custom_data, :db.value, :db.relevance,
   :db.parentName :vl_parentName,
   :db.keywords :vl_keywords,
   :membOfRecordCount;

This has worked in every version of ECPG since 7.4 (when we started using 
Postgres). I now get the following error:


$ /usr/local/pgsql/bin/ecpg -t -o contactRecord.cxx -I 
/usr/local/pgsql/pgsql/include contactRecord.pgC


Starting program: /usr/local/pgsql.8.3.rc1.patch/bin/ecpg -t -o 
contactRecord.cxx -I /usr/local/pgsql/include contactRecord.pgC

contactRecord.pgC:1338: ERROR: fetch/move count must not be a variable.
gmake[1]: *** [contactRecord.cxx] Segmentation fault
gmake[1]: *** Deleting file `contactRecord.cxx'
gmake[1]: Leaving directory 
`/var/build/whitebeam/templates/pgsql/contacts-pgsql'
gmake: *** [all] Error 2

-
Running under GDB gives a stack trace as :
Program received signal SIGSEGV, Segmentation fault.
0x00bd0da3 in strlen () from /lib/tls/libc.so.6
(gdb) i s 5
#0  0x00bd0da3 in strlen () from /lib/tls/libc.so.6
#1  0x080494b1 in cat2_str (str1=0x969bae0 "fetch", str2=0x0) at preproc.y:105
#2  0x0804955e in cat_str (count=4) at preproc.y:128
#3  0x0805027e in base_yyparse () at preproc.y:2299
#4  0x08067f12 in main (argc=7, argv=0xfef93284) at ecpg.c:457

---
Apart from the seg-fault, is there any particular reason I can't use a variable 
in the FETCH anymore? It's always worked in the past and would seem to be an 
important capability.


Pete



Re: [GENERAL] ECPG problem with 8.3

2008-01-14 Thread Peter Wilson

Michael Meskes wrote:

On Sun, Jan 13, 2008 at 03:01:04PM +, Peter Wilson wrote:
that fixes that problem. My build now gets further, but I get an error 
and a seg-fault later in the build.


Wow, you're really stress-testing it. Thanks a lot! This is what we
need.


I have to say I didn't write the original code - so I'm not particularly an 
expert in this area. I just get to maintain it and keep it working with newer 
releases of Postgres!




Apart from the seg-fault, is there any particular reason I can't use a
variable in the FETCH anymore? It's always worked in the past and would
seem to be an important capability.


The segfault is fixed in CVS. The reason was that on finding the variable it
set an error message but not the normal return value, and then tried to
proceed anyway.


Yes. ECPG moved to the latest backend protocol version to be able to
prepare statements correctly. However, with this protocol my own
addition to the standard, namely a variable as fetch count, is not
supported anymore. But there is a simple workaround: sprintf the
statement to a string, thereby replacing the count variable with its
content, and then EXEC SQL EXECUTE the string variable. That should do
the job.


Fetch with a variable seems to be almost the only useful way of using FETCH 
ABSOLUTE (or any of the variants that have a count parameter).


For backwards compatibility, wouldn't it be better to do the sprintf in the ECPG 
preprocessor if the count is a variable, rather than generate an error? That way 
none of the existing applications would break. I think it's always better to keep 
the interface compatible with existing applications, even if that means a little 
behind-the-scenes glue!




Hope this helps.

Michael

Thanks again for your help :-)

Pete





Re: [GENERAL] Replication Using Triggers

2008-01-18 Thread Peter Wilson

[EMAIL PROTECTED] wrote:



On Fri, 18 Jan 2008, Erik Jones wrote:


This is what I have in mind:

Have a plperl function that creates connections to all servers in the 
cluster (replication partners), and issues the supplied write query 
to them, possibly with a tag of some sort to indicate it is a 
replicated query (to prevent circular replication).


Have a post execute trigger that calls the above replication function 
if the query was issued directly (as opposed to replicated), and 
passes it the query it just executed if it was successful.


Not sure here if you mean literally the SQL query that was executed - in which 
case you have all sorts of problems with sequences and functions returning 
different values.




If the replication failed on any node, the whole thing gets rolled back.




This would effectively give star topology synchronous replication 
with very little effort, and no need for any external code. Am I 
missing something obvious that would prevent this from working? It 
would give replication capabilities better than MySQL's (which can 
only handle ring based multi-master replication) for the sake of 
about 100 lines of code. None of the required functionality required 
is new to PostgreSQL, either.


But there are plenty of solutions that do a lot better than this. Slony-I is the 
most popular. My favourite is a spin on the old db_mirror that used to be part 
of the Postgres distribution.


I can't talk about how Slony works, but db_mirror uses a very fast 'C' function 
to capture changes in a set of simple replication tables. A replication process 
then takes data from those tables and replicates (using actual values not the 
SQL statement) to any number of other servers. If one of the servers is down, 
the data remains in the replication tables until that node returns (or is removed).


The problem with db_mirror was that the replication process was written in Perl. 
This worked fine for simple tests but was ridiculously slow for replicating 
tables holding big BYTEA structures. I re-wrote the replication code in 'C' and 
it can replicate just about arbitrarily complex transactions in close to real-time.


You seem to be re-inventing the wheel, and the re-invention is not quite as 
round as the existing wheel :-)




Is there an existing implementation of this? Perhaps a perl program 
that creates the required triggers and stored procedures from looking 
at a schema?


What you've described here would be pretty simple to implement.  
However, I think you've greatly underestimated the performance issues 
involved.  If you need to push data to multiple databases before each 
transaction commits I think you'll find that pretty slow.


Only if transactions are used. I'm basing the requirements on "at least 
as good as MySQL", which this would meet without transactions. If 
transactions are wanted they could be enabled, otherwise it could just 
be fire and forget asynchronous replication a-la MySQL. Having a choice 
between transactions and speed is good. :-)


Synchronous replication tends to imply it works on all servers simultaneously or 
not on any. If any server fails a transaction it's rolled back on all servers. 
What you're describing sounds asynchronous to me.




One thing I haven't quite thought of a good way to do with this approach 
is the equivalent of the useful (albeit deprecated) LOAD DATA FROM 
MASTER command, that gets the server in sync by dropping and re-loading 
all the tables from the master(s) (or rather, peers in a multi-master 
star replication), and enables it in the replication. It would be neater 
than requiring downtime or global write locks. But I guess that could 
wait until version 2. :)


That's one thing. The other problem that most trigger-based replication systems 
have is propagating schema changes - because (I think) you can't attach triggers 
to schema changes.




Thanks.
Gordan



Pete
--
Peter Wilson : http://www.whitebeam.org



Re: [GENERAL] postgresql book - practical or something newer?

2008-01-30 Thread Peter Wilson

Joshua D. Drake wrote:

Tom Hart <[EMAIL PROTECTED]> wrote:

[...]

I find that the best way to get what you need, is to read the fine
manual from postgresql. Yes, its massive, unwieldy and in a lot of
ways counter-intuitive (to a newbie) but if you have the terminology
down you aren't going to find a more comprehensive text.
  
I find the manual answers just about everything I've needed. Personally, though, 
I find the on-line version somewhat slow and cumbersome for finding what I'm 
looking for.

Using Windows as my desktop machine (servers running Linux), I found the most 
accessible form of the manual was the one distributed with pgAdmin III. Until 
recently they shipped a fully searchable Windows Help version of the latest 
manual, which was fantastic.

Unfortunately pgAdmin has now removed that section of the manual and simply 
links to the Postgres web-site. I can understand it was some work to include it 
each time - but it was very useful. So much so that when I get a chance I intend 
to find an older copy of pgAdmin and install just the manual from it.

Has anyone else generated a Windows Help version of the manual?

Is there a source version of the files used to generate it (pgAdmin people?)? 
I'd be interested in the amount of work needed to create the file - if it's not 
excessive I might volunteer to get it done again, if people other than me would 
find it useful.

Pete

Plus, when you find things that don't quite make sense you can submit
a doc patch to make the docs that much better.

Sincerely,


Joshua D. Drake
 









Re: [GENERAL] postgresql book - practical or something newer?

2008-01-30 Thread Peter Wilson

Dave Page wrote:

On Jan 30, 2008 1:34 PM, Peter Wilson <[EMAIL PROTECTED]> wrote:
  

 Dave Page wrote:

 On Jan 30, 2008 12:45 PM, Peter Wilson <[EMAIL PROTECTED]> wrote:


 Has anyone else generated a Windows Help version of the manual?


Is it only distributed with the Windows distribution?



Yes, at present. I guess it's something we could add to the website though.

/D
  

That would get my vote - maybe as an extra column on the following page
   http://www.postgresql.org/docs/manuals/

Pete






Re: [GENERAL] ECPG problem with 8.3

2008-01-30 Thread Peter Wilson

Michael Meskes wrote:

On Mon, Jan 14, 2008 at 10:57:45AM -0500, Tom Lane wrote:
  

I'm concerned about this too.  We'll at least have to call this out as
an incompatibility in 8.3, and it seems like a rather unnecessary step
backwards.



Given that people seem to use this feature I'm more than willing to
implement it, although it might become a bit hackish. Given that fetch
is not a preparable statement we can live with the slightly inconsistent
variable handling I think.

Expect a patch soon.

Michael
  
I've just tested my original un-tweaked ECPG application code against 8.3RC2 
and everything compiles and runs fine - including the variable count argument.

Thanks very much Michael

Pete




Re: [GENERAL] postgresql book - practical or something newer?

2008-01-30 Thread Peter Wilson

Dave Page wrote:

On Jan 30, 2008 12:45 PM, Peter Wilson <[EMAIL PROTECTED]> wrote:

  

Has anyone else generated a Windows Help version of the manual?



We distribute it with PostgreSQL - it's just not integrated with the
pgAdmin help any more. You can even tell pgAdmin to use that if you
don't wish to use the online help.
  

Hi Dave,
good to know it still exists.

Is it only distributed with the Windows distribution?

I only run Postgres on Linux boxes, but use a Windows desktop machine.

Is there a place where I can just download the .chm file without having 
to install Postgres on Windows?


All the best
Pete

Regards, Dave.





Re: [GENERAL] Is PostGreSql's Data storage mechanism "inferior"?

2008-01-30 Thread Peter Wilson

Swaminathan Saikumar wrote:

Hi all,

I'm new to PostGreSql.

http://searchyourwebhost.com/web-hosting/articles/insight-database-hosting-using-sql
What a wonderful article - it's almost worth keeping a copy. It's so bad 
it's difficult to know where to start. I think my favourite has to be :
+ MSSql being massive is considered to have the maximum scalability for 
database hosting.


And I just love the comment on both MySQL and Postgres about GPL and BSD 
being 'too open' - meaning?


Pete


Check out the link. I am starting out on a new personal project & had 
zeroed in on PostGreSql with Mono-ASP.NET as ideal for my needs, 
mainly owing to a PostGreSql whitepaper.
Now, I chanced upon the article above. I've pasted the cons as 
mentioned in the article, and would like the community feedback on it, 
especially with regards to the "inferior Data Storage mechanism".


The cons of PostgreSql Hosting
* Performance considerations: Inserts and Updates into the PostgreSql 
database is much slower compared to MySql. PostgreSql hosting thus 
might slow down the display of the web page online.
* BSD license issues: Since PostgreSql comes under the Berkeley 
license scheme, this is again considered to be too open.
* Availability of inferior Data Storage mechanism: PostgreSql uses 
Postgres storage system, which is not considered to be transaction sae 
during PostgreSql hosting.
* Its not far-flung: While MySql hosting and MSSql hosting have deeply 
penetrated into the market, PostgreSql hosting still remains to be 
passive in the database hosting market.
* Non-availability of required assistance for PostgreSql hosting: 
Assistance is being provided via mailing lists. However there is no 
guarantee that the issue faced during PostgreSql hosting would be 
resolved.


Thanks!





Re: [GENERAL] Storing images as BYTEA or large objects

2008-02-13 Thread Peter Wilson

Koen Vermeer wrote:

Hi,

I would like to store binary data in a PostgreSQL database. The size of
the data is about 2 to 20 MB and is always stored or retrieved as a
block (i.e., I do not need to get only part of the data). As I
understand, I have two options for storing this data: As BYTEA or as
large objects. As I understand, the disadvantage of the first method is
that I need to write a custom routine to escape some binary values - or
is there some code available to do this? The disadvantage of large
objects is their existence outside of the main database, which may be
problematic when backing up a database. In addition, I need special
routines to store/retrieve the data.
  
I've used both methods. The only real problem is that none of the trigger-based 
replication schemes, such as Slony, can deal with large objects.

Depending on what programming language you're using, you do *not* need to escape 
the binary data for BYTEA. Using libpq from C/C++ you can pass the binary data 
straight into the database. See PQexecParams:

http://www.postgresql.org/docs/8.3/interactive/libpq-exec.html#LIBPQ-EXEC-MAIN


We use server-side JavaScript here that makes use of that interface, so I can 
take an image directly from the web-server and move it into the database with 
no escape overhead.
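
As a minimal sketch of that (the 'images' table, the 'data' column and the
function itself are illustrative, not Whitebeam code), a binary-format
parameter passed to PQexecParams goes straight into a BYTEA column with no
escaping:

#include <libpq-fe.h>
#include <cstdio>

// Insert a binary buffer into a BYTEA column without any escaping by
// sending it as a binary-format parameter.
bool store_image(PGconn *conn, const char *buf, int len)
{
    const char *values[1]  = { buf };
    int         lengths[1] = { len };
    int         formats[1] = { 1 };    // 1 = binary parameter format

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO images (data) VALUES ($1)",
                                 1,      // one parameter
                                 NULL,   // type inferred from the target column
                                 values, lengths, formats,
                                 0);     // ask for a text-format result

    bool ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    if (!ok)
        std::fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
    PQclear(res);
    return ok;
}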

My preference: if I don't need the file-like interface to large objects, I'd use 
BYTEA every time.


Pete
--
http://www.whitebeam.org - JavaScript web application server.
http://www.yellowhawk.co.uk

My two questions are: Is this summary correct? And: Which method should
I choose?

Best,
Koen







Re: [GENERAL] Storing images as BYTEA or large objects

2008-02-13 Thread Peter Wilson

Koen Vermeer wrote:

On Wed, 2008-02-13 at 09:35 +, Peter Wilson wrote:
  
My preference : if I don't need the file-like interface to large objects 
I'd use BYTEA every time.



Right, so that basically means that when 'large objects' are files,
which should be saved and restored as a whole, it may be more natural to
use the large objects. I guess that applies to some uses of media
storage (music, photos, video).
  
No - I meant the Postgres large object interface allows you to read and write 
sections of a large object. It provides a seek/read/write interface.

If you're only going to read or write the whole contents as a single block then 
use BYTEA. In my case I store uploaded images as BYTEA - I only ever need to 
read/write the image as a whole. If you were dealing with very large 
images/music/video in a web environment then I could see a web application 
wanting to read a chunk, write it to the web client, read the next chunk and so 
on, and thus avoid the overhead of having the entire contents in memory at one 
time. That probably doesn't help with upload though.
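
A rough sketch of that chunked read (the chunk size and the hypothetical
send_to_client() consumer are illustrative; large object access has to happen
inside a transaction):

#include <libpq-fe.h>
#include <libpq/libpq-fs.h>   // INV_READ
#include <cstdio>

// Stream a large object to some consumer one chunk at a time, so the
// whole value never has to sit in memory.
void stream_large_object(PGconn *conn, Oid loid)
{
    PQclear(PQexec(conn, "BEGIN"));

    int fd = lo_open(conn, loid, INV_READ);
    if (fd >= 0)
    {
        char buf[8192];
        int  n;
        while ((n = lo_read(conn, fd, buf, sizeof buf)) > 0)
        {
            // send_to_client(buf, n);   // hypothetical consumer
            std::fwrite(buf, 1, n, stdout);
        }
        lo_close(conn, fd);
    }

    PQclear(PQexec(conn, "COMMIT"));
}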

Pete
--
http://www.whitebeam.org - JavaScript web application server.
http://www.yellowhawk.co.uk

The large-objects-are-actually-files thing applies to my situation, so
unless there is some 'large objects are / will be deprecated' argument,
I guess I stick with large objects.

Thanks!

Koen







Re: [GENERAL] Postgres on shared network drive

2008-04-12 Thread Peter Wilson

Pavan Deolasee wrote:
[...]




I am not suggesting one read-write and many read-only architecture. I am
rather suggesting all read-only systems. I would be interested in this
setup if I run large read-only queries on historical data and need easy
scalability. With read-only setup, you can easily add another machine to
increase computing power. Also, we may come up with cache-sharing
systems so that if a buffer is cached on some other node, that can
be transfered on a high speed interconnect, rather than reading from a
relatively slower disk.

You can have infinite scalability of a read-only database simply by copying the 
database to each system. If it's historical data it's not "up to the minute". If 
you want to periodically update the read-only databases then that's pretty 
straightforward - with various options trading speed against ease - depending on 
your system requirements.







Yes. I was mostly assuming read-only scalability. What are the other
better ways to do so ?


 A known implementation of such a set up would be Oracle RAC, where
 you have a shared storage and N machines using it.



Oracle RAC is a multi-master kind of architecture where each node has
access to the shared storage and can directly read/write data.

Thanks,
Pavan





Re: [GENERAL] SQL injection, php and queueing multiple statement

2008-04-12 Thread Peter Wilson

paul rivers wrote:

Ivan Sergio Borgonovo wrote:

Yeah... but how can I effectively enforce the policy that ALL input
will be passed through prepared statements?
  


Code reviews are about the only way to enforce this.


That's not entirely true. If you have a policy that says thou-shalt-not-use 
non-prepared statements, then an automated (nightly) grep of the source tree is 
going to find calls that don't follow that policy - these can be automatically 
emailed to you or your project enforcer. Couple that with a process step that 
enforces a grep for offending statements as part of transferring files to the 
live servers, and you're about done.


I'm assuming that the current  policy would not allow just anyone to upload any 
code to critical live systems anyway.






If I can't, and I doubt there is a system that will let me enforce
that policy at a reasonable cost, why not providing a safety net that
will at least raise the bar for the attacker at a very cheap cost?
  


How do you do this? Disallow string concatenation and/or variable 
interpolation for any string that's going to be shipped off to the 
database? Do you parse the SQL string according to the rules of any 
backend database you might be talking to, to see if you have a where 
clause not using a prepared statement? i.e. - Nothing is going to work 
here.


You're stuck with making sure developers know the most rudimentary 
things about talking to a database.








[GENERAL] DBmirror replication - replacement for DBMirror.pl

2005-05-02 Thread Peter Wilson
Not sure whether this is any use to anyone, or whether this is the right 
list to post to but...

I've just released a C++ implementation of the DBMirror.pl script as 
part of Whitebeam (http://www.whitebeam.org). We had *real* performance 
issues with the Perl implementation replicating large fields. It's an 
almost drop-in replacement - except the configuration file format is 
different. It's also more fault tolerant than the Perl implementation 
and has some parameters to help distribute the replication load over 
time (if you delete 10,000 rows from a table, you don't really want 
your main application to grind to a halt as replication soaks up most 
of your resources!)

I needed to do this quickly - so it links to the utility classes in 
Whitebeam, things like a string class etc. It wouldn't be too difficult 
to decouple these and if there is any interest I'll do that when I get a 
little spare time. Once it's built though it's entirely generic and 
doesn't use anything else from Whitebeam.

If anyone has contact information for the original DBMirror author then 
I'd like to get in touch.

Would like to get feedback from anyone using DBmirror or thinks this 
would be useful..

Background

Our Whitebeam application server uses PostgreSQL for back-end data 
storage. We originally used IBM DB2, but ported to Postgres about 3 
years ago, but we never sorted out replication (partly because we were 
using Large Objects). I recently sorted that out and looked around for a 
replication scheme to use - first choice being Slony-I. I couldn't get 
it to work - the configuration scheme just kept claiming half our schema 
tables didn't have primary keys (they did!). Also the documentation was 
virtually non-existent.

Dropping back to DBmirror (which comes with the PostgreSQL distribution 
in the 'contrib' directory) worked fine-ish. Unfortunately the 
replication Perl script was unusably slow, taking minutes to replicate 
a 100K BYTEA field, which we used to store images. The replication Perl 
script seemed to be rather inefficient, using a lot of regular 
expressions to decode field values etc. Perl isn't something I felt too 
confident in - and I needed a solution quickly, hence the C++ 
implementation.

Pete
--
Peter Wilson
YellowHawk : http://www.yellowhawk.co.uk
Whitebeam : http:/www.whitebeam.org


Re: [GENERAL] DBmirror replication - replacement for DBMirror.pl

2005-05-02 Thread Peter Wilson




Just to add - the replacement for DBMirror.pl automatically works out
the schema in use and will replicate using DBmirror from 7.x or from
8.x (table names all changed!). This is done transparently.

I also wrote a short article at
http://www.whitebeam.org/library/guide/TechNotes/replicate.rhtm

Pete


Re: [GENERAL] postgresql replication

2005-05-05 Thread Peter Wilson
Vlad wrote:
Hello,
in need to increase reliability of the service (and perhaps eventually
offload main DB server) we are looking to setup replication for the
database server. I found two solutions:
Slony ( http://gborg.postgresql.org/project/slony1/projdisplay.php )
PGCluster ( http://pgfoundry.org/projects/pgcluster )
I found that PgCluster supports multi-master mode, which we can
benefit from, but it's not required for "backup" which is #1 goal at
the moment.
If anyone used those solutions, compared performance, reliability, etc
- please share your experience / thoughts.
There is a 3rd option - DBMirror, which comes as part of the Postgres 
distribution (look in the contrib/dbmirror directory).

I looked at Slony, which seems to be a current favourite - but I couldn't 
get it working on my database (it claimed my tables didn't have relevant 
keys - which they do). Slony-I had almost non-existent documentation, 
which I always find pretty unacceptable unless everything goes very 
smoothly.

I then found DBMirror - which I'm now using to replicate from the master to a 
slave database, and then cascade-replicate from my slave to a 3rd off-site 
server.

Performance of DBMirror.pl (a Perl script that does the actual 
replication) was very poor - so I re-wrote it in C++. Some details at:
http://www.whitebeam.org/library/guide/TechNotes/replicate.rhtm

DBmirror doesn't replicate schema changes, which wasn't a problem for me 
since our schema changes very infrequently.

Hope that helps
Pete
--
http://www.yellowhawk.co.uk
http://www.whitebeam.org


Re: [GENERAL] Postgres vs Firebird?

2005-05-05 Thread Peter Wilson
Joshua D. Drake wrote:
Benjamin Smith wrote:
As a long-time user of Postgres, (First started using it at 7.0) I'm 
reading recently that Firebird has been taking off as a database.
Perhaps this is not the best place to ask this, but is there any 
compelling advantage to using Firebird over Postgres? 
Firebird is a nice database but I don't think it can scale as well as 
PostgreSQL. IIRC they just added support for SMP. Also, although their 
community is very active I do not believe it is as large nor does it 
have the commercial backing like PostgreSQL.

Sincerely,
Joshua D. Drake
Command Prompt, Inc.
Haven't used it, but your question sparked a little interest so I spent 
some time trawling through their web-site.

Like many open-source projects, documentation seems to be lagging a 
*very* long way behind the code. One of the things that has really 
impressed me with Postgres is the quality and thoroughness of the 
documentation. As a developer I need API documentation, not just a quick 
start guide and list of errors. I tend to avoid projects that don't 
bother with the documentation (I gave up on Slony-I very quickly for 
exactly the same reason!)

Pete
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk


Re: [GENERAL] postgresql replication

2005-05-05 Thread Peter Wilson
Christopher Browne wrote:
Martha Stewart called it a Good Thing when Peter Wilson <[EMAIL PROTECTED]> 
wrote:
I looked at Slony, which seems to be a current favourite -but I
couldn't get it working on my database (claimed my tables didn't have
relevant keys - which they do). Slony-I had almost non-existent
documentation which I always find prety unacceptable unless everything
goes very smoothly..

I updated the copy of the documentation that I keep online (URL below)
to reflect the latest CVS updates just yesterday.  I have to say
"nonsense!"
There are things I would like to be better documented, but the notion
that the documentation is "almost nonexistent" is just nonsense.  And
the problem you describe is indeed discussed in the documentation.
Admittedly, it is not all in the version 1.0.5 tarball, but that's
because a lot of it was written after that release.
Nonsense? Hmm.
Without wanting to be hyper-critical, documentation that's hidden isn't 
a whole lot of use. The link in your email to
	http://linuxdatabases.info/info/slony.html

does indeed seem to have some more documentation, but why isn't it 
linked from the main Slony site 
(http://gborg.postgresql.org/project/slony1)?

Why is this documentation on Christopher B. Browne's homepage rather than 
on the Slony web pages? The 'official' Slony documentation I had available 
was at:
	http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx

and it *really* didn't help with the problems I had.
Having now taken a look at the documentation you reference, it's still 
not wonderfully comprehensive. The problem I had was that, despite the 
fact that my tables had primary keys, the Slony configuration refused to 
recognise them. The documentation says simply that primary or candidate 
primary keys are a requirement.

So - as a potential 'Slony user' it is *not* nonsense that there was 
negligible documentation - from my perspective it was a fact, and I found 
an alternative solution.

Pete


[GENERAL] Slony v. DBMirror

2005-05-05 Thread Peter Wilson
Vlad's thread on Slony against PGcluster made me go back to take another 
look at Slony. I'd tried to get it going back in February when I needed 
to build some replicated databases. Slony was my first choice because it 
seemed to be the current 'hot topic'.

I couldn't get it to work - and having tried another couple of solutions 
I settled on DBMirror which comes with Postgres in the 'contrib' directory.

Looking at Slony now, can someone tell me what the benefits of Slony are 
over DBmirror? As far as I can see:
+ both are async Master->multiple slaves
+ both (I think) can do cascaded replication
+ neither replicate large objects
+ both require all tables to have primary keys
+ neither replicate schema changes
+ neither do automatic switch-over

All Slony seems to offer is a different configuration system and the 
ability to automatically propagate configuration changes. It seems this 
could be added to DBmirror pretty easily, so why a whole new project?

Pete
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk


Re: [GENERAL] Slony v. DBMirror

2005-05-06 Thread Peter Wilson
Grant McLean wrote:
On Thu, 2005-05-05 at 14:16 -0400, Jeff - wrote:
One of the biggest things for Slony is that you can install slony,  
set things up and it will bring the slave(s) "up to speed".  You  
don't need to do an initial data dump (I think you still need to load  
the schema on the slaves, but not the data).  That is a BIG win for  
us folks who can't take a machine down while pg_dump runs (and while  
it is restored on the slave)

Why would you need to take anything down to run pg_dump?  And surely
bringing a slave up to speed using Slony would be much slower than
dump/restore?
You don't need to take Postgres down to use pg_dump - it works just fine.
The problem with replication (with DBmirror at least) is that you have 
to create a backup in a very specific order to make sure your new backup 
ends up in sync and transactions are neither replicated more than once 
nor missed entirely:

1. Stop client access to the database (so you don't create any more 
transactions to replicate)
2. Stop the replication script once the dbmirror tables are empty
3. pg_dump the master
4. pg_restore the slave
5. Restart client apps and replication (doesn't matter which order)

If you don't do this then there is a chance of missing, or more likely 
duplicating, transactions, which can obviously cause problems.

Having said that - it would be fairly straight-forward to write a 
recover script that avoided these problems by taking note of the 
transaction sequence IDs in the replication tables. If I get a chance 
I'll look into doing that - doesn't feel like a huge job!

Pete


Re: [GENERAL] Slony v. DBMirror

2005-05-06 Thread Peter Wilson
Andrew Sullivan wrote:
On Thu, May 05, 2005 at 03:35:27PM +0100, Peter Wilson wrote:
Looking at Slony now, can someone tell me what the benefits of Slony are 
over DBmirror? As far as I can see:
+ both are async Master->multiple slaves
+ both (I think) can do cascaded replication

This isn't really true of dbmirror.  You can do it, but if you lose
the intermediate system, you also lose the cascaded ones (the
downstream one doesn't know about the state of the top-level origin). 
Slony is designed to cover that case (it was one of several criteria
we had to satisfy).
That's true. The alternative is to simply replicate from master to both 
slaves, but this places an additional load on the master.

+ neither replicate schema changes

This is sort of false, too.  Slony has a facility for injecting the
schema changes at just the right spot in the replication sets, so
that you really can push your schema changes through Slony.  (This
isn't to say you should regularly do this -- it's designed for
production systems, where schema changes should be relatively rare.)
I got the lack of schema changes from the Slony documentation. I guess 
there must be some manual intervention though, to say 'make these schema 
changes now'?


+ nether do automatic switch-over

Any async replication system which routinely does automatic _fail_
over is, in my opinion, a product not suited for production.  This is
a decision that generally needs to be made at Layer 9 or so -- when
you kill a data source, you are potentially walking away from data. 
That wasn't meant to be a criticism of either :-)
(Naturally, some special cases will call for such fail over anyway. 
It's possible to set up Slony for this, of course, just dangerous. 
You'd need some external scripts, but they're not impossible to
build.  There's been a recent discussion of this topic on the slony
lists.)  Slony _does_ have pretty good facilities for controlled
switch over (as well as a good mechanism for fail over, where you
abandon the old origin).  With the right arrangements with pgpool,
you ought to be able to do a controlled switch over of a data origin
without a client outage.  This is part of the system by design.  That
design turns out to be harder than you'd think.

Slony also replicates sequences, has extensive guards against data
loss, and can deal with triggers that need to be working or not
depending on the current use of a table in a set.  The data loss
problem due to unreliable nodes is quite a bit nastier than it might
seem.  The problem isn't just to replicate.  The problem is to
replicate in a provably reliable way.
FYI DBmirror with Postgres version 8 also replicates sequences.

All slony seems to offer is a different configuration system and the 
ability to automatically propogate configuration changes. It seems this 
could be added to DBmirror pretty easily so why a whole new project?

We looked at the options when we launched the project, believe me. At
the time, we were using erserver, the development of which we'd also
subsidized (starting in 2001).  We learned a lot from the troubles we
had with that system (some of which were addressed in what is the
current commercial erserver code), and the result was that we
concluded we could not "backport" several of the features we wanted
into either erserver or dbmirror (aside from the
frustrating-but-mostly-works original Java engine in the
first-released community erserver, there isn't much to distinguish
dbmirror and the community erserver).  If you want to see the results
of our investigation, and (by inference) the criteria we used to
decide what would satisfy our requirements, you can see Jan's concept
paper; it's at
<http://developer.postgresql.org/~wieck/slony1/Slony-I-concept.pdf>.
A
Fair enough - thanks Andrew. When I get some less busy time I'll have 
another go with Slony - you've made a good case for it, even though I 
probably don't need it for my applications right now (not running a 
domain name registry!). It would probably ease my admin overhead when 
things do go wrong though, which is good.

thanks
Pete


Re: [GENERAL] Hosting options on Postgres - what's best?

2005-05-11 Thread Peter Wilson




Just Someone wrote:

  Hi,

I'm looking into creating a hosted application with Postgres as the
SQL server. I would like to get some ideas and oppinions about the
different ways to separate the different clients, using postgres.

The options I had in mind:

1) Create a different database per client. How much overhead will this add?

2) Use schemas and authentication. So each client is a different
schema on the database.

3) Use application level security (per object security maintained by the app).

4) 

Any ideas? Opinnions?

js.


We use a single schema for multiple clients in a web environment using
Whitebeam. We rigidly enforce secure access through a client_id model.
In a web environment with lots of clients this has the benefit over
other approaches that you have far fewer database connections than
otherwise, and that the connections can safely be persistent.

If you can't safely control the access, of course (because you're not
writing the application, maybe), then other solutions are more secure.

Pete
--
http://www.whitebeam.org



Re: [GENERAL] Indexes

2005-08-02 Thread Peter Wilson

Jake Stride wrote:

Hi,

I have a table set up:

\d companycontactmethod
  Table "public.companycontactmethod"
  Column   |   Type|   Modifiers
---+---+
 tag   | character varying | not null
 contact   | character varying | not null
 type  | character(1)  | not null
 companyid | bigint| not null
 name  | character varying | not null
 main  | boolean   | not null default false
 billing   | boolean   | not null default false
 shipping  | boolean   | not null default false
 payment   | boolean   | not null default false
 technical | boolean   | not null default false
Indexes:
"companycontactmethod_pkey" PRIMARY KEY, btree (companyid, name, "type")
"companycontactmethod_companyid" btree (companyid)
"companycontactmethod_main_type" btree (main, "type")
Foreign-key constraints:
"$1" FOREIGN KEY (companyid) REFERENCES company(id) ON UPDATE
CASCADE ON DELETE CASCADE

and am running the following:

explain analyse SELECT companycontactmethod.tag,
companycontactmethod.contact, companycontactmethod."type",
companycontactmethod.companyid FROM companycontactmethod WHERE
companycontactmethod.main AND companycontactmethod.type = 'E';
   QUERY PLAN

 Seq Scan on companycontactmethod  (cost=0.00..181.10 rows=2079
width=40) (actual time=0.027..17.068 rows=2134 loops=1)
   Filter: (main AND ("type" = 'E'::bpchar))
 Total runtime: 25.965 ms

why is it not using the companycontactmethod_main_type index on the
query? Am I missing something obvious here?

Thanks

Jake




The index is of no use when you specify no value for main. You want any 
row that has any value for main, and a value of 'E' for type. Because 
you haven't specified a value for 'main' the only solution is to scan 
the entire set.


Pete
--
Peter Wilson, YellowHawk Ltd, http://www.yellowhawk.co.uk



Re: [GENERAL] feeding big script to psql

2005-08-02 Thread Peter Wilson

Dan Sugalski wrote:

At 1:57 PM +0200 8/2/05, Havasvölgyi Ottó wrote:

Hi,

Now I am at 7 MB, and the reading speed is 3-4KB/sec.


Have you checked to see if you're swapping as this goes on, either in 
the client or on the server?


- Original Message - From: "Havasvölgyi Ottó" 
<[EMAIL PROTECTED]>

To: 
Sent: Tuesday, August 02, 2005 1:31 PM
Subject: Re: [GENERAL] feeding big script to psql


Hi,

The effect is the same even if I redirect the output to file with the 
-o switch.

At the beginning 200 KB/sec, at 1.5 MB the speed is less than 20 KB/sec.

Best Regards,
Otto



- Original Message - From: "Havasvölgyi Ottó" 
<[EMAIL PROTECTED]>

To: "Tom Lane" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, August 02, 2005 11:24 AM
Subject: Re: [GENERAL] feeding big script to psql


Tom,

Thanks for the suggestion. I have just applied both switch , -f (I 
have applied this in the previous case too) and -n, but it becomes 
slow again. At the beginning it reads about 300 KB a second, and 
when it has read 1.5 MB, it reads only about 10 KB a second, it 
slows down gradually. Maybe others should also try this scenario. 
Can I help anything?


Best Regards,
Otto


- Original Message - From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Havasvölgyi Ottó" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, August 02, 2005 3:54 AM
Subject: Re: [GENERAL] feeding big script to psql


Havasvölgyi Ottó <[EMAIL PROTECTED]> writes:
I know it would be faster with COPY, but this is extremly slow, 
and the

bottleneck is psql.
What is the problem?


Hmm, does the Windows port have readline support, and if so does 
adding

the "-n" switch to the psql invocation fix the problem?  Or you could
try feeding the script with -f switch or \i rather than "psql 

Readline adds a fair amount of overhead, which is completely invisible
at human typing speeds but can be annoying when reading scripts.

regards, tom lane













--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Have you tried inserting VACUUM commands into the script every now and then?

I found a while ago that after inserting a lot of rows into a clean 
Postgres table it would take several minutes just to analyse a command, 
not even starting the execution. That was on version 8.0. On version 
7.4.x the query never returned at all.


Pete
--
Peter Wilson - YellowHawk Ltd : http://www.yellowhawk.co.uk



Re: [GENERAL] feeding big script to psql

2005-08-02 Thread Peter Wilson
I was a little busy with deadlines at the time, but I saved the database 
in its slow configuration so I could investigate during a quieter period.


I'll do a restore now and see whether I can remember back to April when 
I came across this issue.


Pete


Tom Lane wrote:

Peter Wilson <[EMAIL PROTECTED]> writes:
I found a while ago that after inserting a lot of rows into a clean 
Postgres table it would take several minutes just to analyse a command, 
not even starting the execution.


Oh?  Could you provide a test case for this?  I can certainly believe
that the planner might choose a bad plan if it has no statistics, but
it shouldn't take a long time to do it.

regards, tom lane






Re: [GENERAL] feeding big script to psql

2005-08-02 Thread Peter Wilson

Tom Lane wrote:
> Peter Wilson <[EMAIL PROTECTED]> writes:
>> I found a while ago that after inserting a lot of rows into a clean
>> Postgres table it would take several minutes just to analyse a command,
>> not even starting the execution.
>
> Oh?  Could you provide a test case for this?  I can certainly believe
> that the planner might choose a bad plan if it has no statistics, but
> it shouldn't take a long time to do it.
>
>regards, tom lane
>
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
>

Hi Tom,
I've spent half an hour restoring the database from the backup I took. This was back in December '04. The database dump is about 95Mbytes, none of the 
tables are particularly big.


On investigation the problem occurs on 'EXPLAIN ANALYZE' - which is what pgAdmin III does when you press the explain button. According to the manual, 
this actually executes the command and gives actual times (I think pgAdmin is wrong here - if you ask it to explain a command you want the plan it's 
going to use - not wait for it to finish!)
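
(For reference, the distinction in SQL terms - EXPLAIN only shows the plan, EXPLAIN ANALYZE also runs the statement; using the contacts table defined later in this message:)

   EXPLAIN SELECT count(*) FROM contacts;           -- shows the chosen plan only, the query is not executed
   EXPLAIN ANALYZE SELECT count(*) FROM contacts;   -- executes the query and reports actual times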


That said - I'll include the details here, at least partly because the tables are small in database terms, and I don't understand how this query, even 
with lots of scanning, can take 100% CPU for over 10 minutes (654 seconds for explain/analyze, 651 seconds for execute).


OK - the query is:

SELECT DISTINCT c.client_id, c.instance, c.contact_id, c.uname FROM contacts c
   WHERE c.client_id = 'gadget'
 AND c.instance = '0'
 AND (c.type = 'COMMUNITY'
   OR c.type = 'OU'
   OR c.type = 'INDIVIDUAL'
 )
 AND c.contact_id in (
SELECT subb.community_id  FROM contact_att subb
   WHERE subb.client_id = 'gadget'
 AND subb.instance = '0'
 AND subb.contact_id = 3854.00
) ;

By itself the sub-select executes in 235ms and yields a set that does not vary depending on the outer select - it's a constant set in effect. It would 
seem that in the worst case, assuming no index or size information, the planner should spot the invariance of the sub-select, and a sequential scan of 
the 'contacts' table would be the worst result I would expect.


There are two tables involved in this query.

'contacts' contains 3728 records.
'contact_att' contains 19217 records.

The query plan yields :

Unique  (cost=12.05..12.06 rows=1 width=90) (actual time=654491.967..654491.984 
rows=3 loops=1)
  ->  Sort  (cost=12.05..12.05 rows=1 width=90) (actual 
time=654491.958..654491.959 rows=3 loops=1)
Sort Key: c.client_id, c.instance, c.contact_id, c.uname
->  Nested Loop IN Join  (cost=0.00..12.04 rows=1 width=90) (actual 
time=577763.884..654491.864 rows=3 loops=1)
  Join Filter: ("outer".contact_id = "inner".community_id)
  ->  Index Scan using cos_uk on contacts c  (cost=0.00..6.02 
rows=1 width=90) (actual time=0.227..123.862 rows=2791 loops=1)
Index Cond: ((instance = '0'::bpchar) AND 
((client_id)::text = 'gadget'::text))
Filter: ((("type")::text = 'COMMUNITY'::text) OR (("type")::text = 
'OU'::text) OR (("type")::text = 'INDIVIDUAL'::text))
  ->  Index Scan using ca_pk on contact_att subb  (cost=0.00..6.01 
rows=1 width=8) (actual time=0.207..234.423 rows=3 loops=2791)
Index Cond: ((instance = '0'::bpchar) AND 
((client_id)::text = 'gadget'::text))
Filter: ((contact_id)::numeric = 3854.00)
Total runtime: 654492.320 ms

Definitions for the two relevant tables are:

create table contacts (
INSTANCE CHARACTER (1)  NOT NULL ,
client_id varchar (50) not null ,
contact_id bigint default nextval('contacts_contact_id_seq'::text),
UNAME VARCHAR (32)  NOT NULL ,
TYPE VARCHAR (20)  NOT NULL DEFAULT 'INDIVIDUAL',
parent bigint,
NAME VARCHAR (240) ,
PHONE VARCHAR (32) ,
FAX VARCHAR (32) ,
EMAIL VARCHAR (240) ,
BASIC_PW VARCHAR (128) ,
DESCRIPTION VARCHAR (240),
custom_data varchar(8192),
CONSTRAINT COS_PK PRIMARY KEY(INSTANCE,CLIENT_ID,CONTACT_ID)
)

create table contact_att(
instance character(1),
client_id varchar(50) not null,
contact_id bigint,
community_id bigint,
inherited smallint,
CONSTRAINT CA_PK PRIMARY KEY(INSTANCE,CLIENT_ID,CONTACT_ID,COMMUNITY_ID)
)

CREATE INDEX CO_PA_IND ON CONTACTS (PARENT);
CREATE INDEX CO_TY_IND ON CONTACTS (TYPE);

--
Hope you find that useful! If there is anything else I can provide you with 
then please do let me know.

Pete



Peter Wilson - YellowHawk Ltd, http://www.yellowhawk.co.uk  



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] feeding big script to psql

2005-08-02 Thread Peter Wilson

Tom Lane wrote:

Peter Wilson <[EMAIL PROTECTED]> writes:
I found a while ago that after inserting a lot of rows into a clean 
Postgres table it would take several minutes just to analyse a command, 
not even starting the execution.


Oh?  Could you provide a test case for this?  I can certainly believe
that the planner might choose a bad plan if it has no statistics, but
it shouldn't take a long time to do it.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


After a vacuum the query plan becomes:

Unique  (cost=1438.65..1438.66 rows=1 width=39) (actual time=260.473..260.489 
rows=3 loops=1)
  ->  Sort  (cost=1438.65..1438.65 rows=1 width=39) (actual 
time=260.468..260.471 rows=3 loops=1)
Sort Key: c.client_id, c.instance, c.contact_id, c.uname
->  Nested Loop  (cost=1434.14..1438.64 rows=1 width=39) (actual 
time=260.007..260.306 rows=3 loops=1)
  ->  HashAggregate  (cost=1434.14..1434.14 rows=1 width=8) (actual 
time=259.666..259.686 rows=3 loops=1)
->  Index Scan using ca_pk on contact_att subb  
(cost=0.00..1433.95 rows=78 width=8) (actual time=0.367..259.617 rows=3 loops=1)
  Index Cond: ((instance = '0'::bpchar) AND 
((client_id)::text = 'gadget'::text))
  Filter: ((contact_id)::numeric = 3854.00)
  ->  Index Scan using cos_pk on contacts c  (cost=0.00..4.48 
rows=1 width=39) (actual time=0.178..0.182 rows=1 loops=3)
Index Cond: ((c.instance = '0'::bpchar) AND ((c.client_id)::text = 
'gadget'::text) AND (c.contact_id = "outer".community_id))
Filter: ((("type")::text = 'COMMUNITY'::text) OR (("type")::text = 
'OU'::text) OR (("type")::text = 'INDIVIDUAL'::text))
Total runtime: 260.886 ms
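
(The 'vacuum' here was presumably a VACUUM ANALYZE - it is the ANALYZE part that collects the statistics Tom discusses in his follow-up; a minimal sketch:)

   VACUUM ANALYZE contact_att;
   VACUUM ANALYZE contacts;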

whitebeam_slow=> \d contacts
  Table "public.contacts"
   Column|  Type   | Modifiers
-+-+---
 instance| character(1)| not null
 client_id   | character varying(50)   | not null
 contact_id  | bigint  | not null default 
nextval('contacts_contact_id_seq'::text)
 uname   | character varying(32)   | not null
 type| character varying(20)   | not null default 
'INDIVIDUAL'::character varying
 parent  | bigint  |
 name| character varying(240)  |
 phone   | character varying(32)   |
 fax | character varying(32)   |
 email   | character varying(240)  |
 basic_pw| character varying(128)  |
 description | character varying(240)  |
 custom_data | character varying(8192) |
Indexes:
"cos_pk" PRIMARY KEY, btree (instance, client_id, contact_id)
"cos_uk" UNIQUE, btree (instance, client_id, uname)
"co_pa_ind" btree (parent)
"co_ty_ind" btree ("type")

whitebeam_slow-> \d contact_att
Table "public.contact_att"
Column| Type  | Modifiers
--+---+---
 instance | character(1)  | not null
 client_id| character varying(50) | not null
 contact_id   | bigint| not null
 community_id | bigint| not null
 inherited| smallint  |
Indexes:
"ca_pk" PRIMARY KEY, btree (instance, client_id, contact_id, community_id)
Foreign-key constraints:
"ca_cos_comm" FOREIGN KEY (instance, client_id, community_id) REFERENCES contacts(instance, client_id, contact_id) ON UPDATE RESTRICT ON DELETE 
RESTRICT

"ca_cos_fk" FOREIGN KEY (instance, client_id, contact_id) REFERENCES 
contacts(instance, client_id, contact_id) ON UPDATE RESTRICT ON DELETE RESTRICT



Peter Wilson - YellowHawk Ltd, http://www.yellowhawk.co.uk

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] feeding big script to psql

2005-08-03 Thread Peter Wilson

Tom Lane wrote:

Peter Wilson <[EMAIL PROTECTED]> writes:

Tom Lane wrote:

Oh?  Could you provide a test case for this?  I can certainly believe
that the planner might choose a bad plan if it has no statistics, but
it shouldn't take a long time to do it.



On investigation the problems occurs on 'EXPLAIN ANALYZE' - which is
what pgadminIII does when you press the explain button.


Ah.  Well, this is an ideal example of why you need statistics ---
without 'em, the planner is more or less flying blind about the number
of matching rows.  The original plan had


   ->  Index Scan using ca_pk on contact_att subb  (cost=0.00..6.01 
rows=1 width=8) (actual time=0.207..234.423 rows=3 loops=2791)
 Index Cond: ((instance = '0'::bpchar) AND 
((client_id)::text = 'gadget'::text))
 Filter: ((contact_id)::numeric = 3854.00)


while your "after a vacuum" (I suppose really a vacuum analyze) plan has


 ->  Index Scan using ca_pk on contact_att subb  
(cost=0.00..1433.95 rows=78 width=8) (actual time=0.367..259.617 rows=3 loops=1)
   Index Cond: ((instance = '0'::bpchar) AND 
((client_id)::text = 'gadget'::text))
   Filter: ((contact_id)::numeric = 3854.00)


This is the identical scan plan ... but now that the planner realizes
it's going to be pretty expensive, it arranges the join in a way that
requires only one scan of contact_att and not 2791 of 'em.

The key point here is that the index condition on instance/client_id
is not selective --- it'll pull out a lot of rows.  All but 3 of 'em are
then discarded by the contact_id condition, but the damage in terms
of runtime was already done.  With stats, the planner can realize this
--- without stats, it has no chance.

Looking at your table definition, I suppose you were expecting the
contact_id condition to be used with the index, but since contact_id is
bigint, comparing it to a numeric-type constant is not considered indexable.
You want to lose the ".00" in the query.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Thanks for that Tom - especially the bit about removing the .00 from
the numbers. I'm pretty new to some of this database stuff - even newer at
trying to optimise queries and 'think like the planner'. It never occurred to
me that the number format would have that effect.

Removing the zeroes actually knocked a few ms off the execution times in
real-life queries :-)
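
(For reference, the sub-select from earlier in the thread with the literal Tom pointed at corrected - the bigint column is now compared with a plain integer constant, so the comparison is indexable:)

   SELECT subb.community_id
     FROM contact_att subb
    WHERE subb.client_id = 'gadget'
      AND subb.instance = '0'
      AND subb.contact_id = 3854;   -- was 3854.00, a numeric literal, which defeated the index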

Just out of interest - is there an opportunity for the planner to realise
the sub-select is basically invariant for the outer query and execute it once,
regardless of stats? It seems like the loop-invariant optimisation in a 'C'
compiler. If you have to do something once v. doing it 2791 times then
I'd plop for the once!

Thanks again Tom, much appreciated for that little nugget
Pete
--
Peter Wilson. YellowHawk Ltd, http://www.yellowhawk.co.uk

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] bytea or large objects?

2005-08-26 Thread Peter Wilson

Howard Cole wrote:

Hi,

I am going to create binary objects in a database which are compressed 
eml files (1K - 10 Mbytes in size). Am I better using the bytea or large 
objects?


Is there still an issue with backup and restore of databases using large 
objects with pg_dump/restore?


Thanks in advance.
Howard Cole
www.selestial.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq



I've just re-written our Whitebeam code to drop large-objects in favour of 
BYTEA fields.

All the old problems of large objects in backups exist, but the killer for us was that none of the current replication systems, at least that I could 
find, would replicate large objects. This became a mandatory requirement for us.


I'd have to have a *very* good reason to use large objects over BYTEA now.
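
(A minimal sketch of the BYTEA approach for compressed .eml files like those above - table and column names are illustrative only:)

   CREATE TABLE messages (
       id       bigserial PRIMARY KEY,
       filename varchar(255) NOT NULL,
       body     bytea NOT NULL          -- the compressed .eml content, stored inline
   );
   -- rows in a bytea column come out in an ordinary pg_dump and replicate like any other table data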

Pete
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk
-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] bytea or large objects?

2005-08-26 Thread Peter Wilson

Joshua D. Drake wrote:


I've just re-written our Whitebeam code to drop large-objects in 
favour of BYTEA fields.


All the old problems of large objects in backups exist, but the killer 
for us was that none of the current replication systems, at least that 
I could find, would replicate large objects. This became a mandatory 
requirements for us.


Mammoth Replicator has always replicated Large Objects. The only 
"backup" issue to large objects is that you have to pass a separate flag

and use the custom or tar format to dump them.

Bytea has its own issues mostly based around memory usage.

I am not saying you should or shouldn't switch as it really depends on 
your needs but the information above just isn't quite accurate.


Sincerely,

Joshua D. Drake


I should have added that my search was limited to open source/free replication
systems.





I'd have to have a *very* good reason to use large objects over BYTEA 
now.


Pete
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk
-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Wordpress & PostgreSQL ...

2006-10-29 Thread Peter Wilson




Dawid Kuroczko wrote:
On 10/26/06, Robert Treat <[EMAIL PROTECTED]> wrote:

the wordpress guys have basically said they do not want to support postgres,
which is mainly why we swapped to s9y on planetpg.  you can read some more
info here:

http://people.planetpostgresql.org/xzilla/index.php?/archives/13-One-Good-Port.html

Suppose one would like to create a WordPress workalike, i.e. a blogging
engine sharing the look & feel of WordPress but written from scratch.  What
language/framework do you think would be the best?

I mean -- one could code that in PHP and simply use PostgreSQL, but
PHP is probably the most popular language, but I don't think it's the best.

Ruby on Rails perhaps?  I have not written a single line in Ruby, but
I've read on the list here that it has a very good PostgreSQL interface...

Python?  Perl?  Something different?

The reason I'm asking this question is that I would like to hear personal
experiences with alternative to PHP environments, how do they "feel",
how do they scale, in order to boost my knowledge a bit.

If you want a mature alternative to PHP then take a look at Whitebeam
(http://www.whitebeam.org). This uses the Mozilla 'SpiderMonkey'
JavaScript engine for server-side JS and uses Postgres as the back-end
database. At the moment it doesn't interface with any other databases -
but then there's not really much of a need :-)

I've used Whitebeam for my own blogging software as well as other
communications functions (newsletters, discussion forums, e-zines,
e-commerce) and I know others that have used the Postgres interface for
some equally sophisticated applications. Take a look at
http://www.gadgetspeak.com as an example of what's possible. The
latest release of SpiderMonkey even gives you a native 'XML' type so
you can do things like:
   var myDoc = <doc><title>my title</title></doc>;

Brilliant for putting together things like RSS feeds and for
implementing the SOAP/ XML-RPC blogging APIs

Pete
Regards,
   Dawid

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

--
Peter Wilson
T: 01414 160505
M: 07796 656566
http://www.yellowhawk.co.uk

The information in this email is confidential and is intended for the
addressee/s only. Access to this email by anyone else is unauthorised. If you
are not the intended recipient, you must not read, use or disseminate the
information contained in or attached to this email.


Re: [GENERAL] Replicating changes

2006-10-30 Thread Peter Wilson
I'd start with something fairly straightforward. dbmirror
is very simple but does a lot. There are two parts:

A trigger function and a set of database tables to collect
replication data. The trigger function is written in 'C' and
performance is good.

The second part is a Perl script that attaches to the source
and target databases and replicates all the changes. The
Perl script isn't too efficient, but it should be pretty easy
to modify it to send the data somewhere other than a
database.

I had performance issues with dbmirror replicating big
BYTEA fields so I wrote a 'C++' version which is now
part of Whitebeam (http://www.whitebeam.org) so if
you're likely to be doing the same then you might want to
re-write the C++ version rather than the Perl version.

dbmirror is in the Postgres contrib directory for releases
up until 8.1 at least.
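
(As a rough sketch of the trigger side of this: if memory serves, the contrib/dbmirror
setup script installs its trigger function as recordchange(), and each replicated table
then gets a per-table trigger along these lines - the table name is illustrative, and the
exact function name should be checked against MirrorSetup.sql for your release:)

   CREATE TRIGGER orders_mirror_trig
       AFTER INSERT OR UPDATE OR DELETE ON orders
       FOR EACH ROW EXECUTE PROCEDURE recordchange();

Every change to the table is then queued in dbmirror's pending tables for the
replication script (Perl or the C++ version) to pick up.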

Pete


Alban Hertroys wrote:
> Taras Kopets wrote:
>> Hi!
>>
>> I think you should try to use triggers with
>> dbi-linkto achieve this.
>> AFAIK there is no such replication with other RDBMS as you need.
>>
>> Taras Kopets
>
> Seems like I introduced a small misunderstanding here. The data
> doesn't necessarilly go to a database; at least, none that we know
> about. The goal here is to export (semi-)real time changes from our
> database to a small number of interested parties[1]. It'll probably
> have to be in some kind of XML format.
>
> So far we've received some interesting suggestions. I particularly
> like the idea of extending Slony-I with a "custom" frontend - though
> that'll probably involve more work than a custom solution, it seems
> much more powerful.
>
> Thanks for the suggestions so far, everyone.
>
> [1] Those parties probably store our data in a database (they'd be
> nuts if they didn't), but that's something beyond our control.
>
> Regards,

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_dump

2006-11-23 Thread Peter Wilson

Bob Pawley wrote:
> No joy
>
> pg_dump aurel > aurel.out
>
> Returns -
> ERROR:  syntax error at or near "pg_dump" at character 8
Looks like you're trying to run it from psql. pg_dump is a command line
utility. Run it directly from the shell command prompt (or cmd window on
windows).

Pete



>
> I've had a bit of trouble with the PostgreSQL system if that helps.
> (access violation with a reinstall)
>
> Bob
>
> - Original Message - From: "Konrad Neuwirth"
> <[EMAIL PROTECTED]>
> To: "Bob Pawley" <[EMAIL PROTECTED]>
> Cc: "Postgresql" 
> Sent: Thursday, November 23, 2006 1:45 PM
> Subject: Re: [GENERAL] pg_dump
>
>
>>> To dump a database: $ pg_dump mydb > db.out"
>>> The following - $ pg_dump aurel > aurel.out - gives me
>>> "ERROR:  syntax error at or near "$" at character 1"
>>> What am I missing???
>>
>> The $ character is there to indicate the prompt your shell gives you.
>> If you leave it off at the beginning of your command, things should
>> just run fine.
>>
>> Konrad
>>
>> ---(end of broadcast)---
>> TIP 5: don't forget to increase your free space map settings
>>
>
>
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
>


-- 

Peter Wilson
T: 01414 160505
M: 07796 656566
http://www.yellowhawk.co.uk The information in this email is
confidential and is intended for the addressee/s only. Access to this
email by anyone else is unauthorised. If you are not the intended
recipient, you must not read, use or disseminate the information
contained in or attached to this email.



[GENERAL] pg_restore error

2006-11-27 Thread Peter Wilson
I've just got the following message while trying to restore a database :

pg_restore : [custom archiver] Dumping a specific TOC data block out of order is
not supported without ID on this input stream (fseek required).

The command was :
pg_restore -L /tmp/toc --dbname=whitebeam --disable-triggers --username=postgres
/var/backups/restore-db.psql

/tmp/toc is a re-orderd output from :

pg_restore -l -a /var/backups/restore-db.psql > /tmp/toc

Using Postgres 8.1.4 on
Linux version 2.6.15-1.2054_FC5

The dump itself was made on another machine running 8.1.4 on Fedora core 4.

I've run similar commands on this machine before with no problems. Not sure why
I should start to have these problems now?

Anyone else seen anything similar? Thoughts much appreciated!

Pete

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Storing blobs in PG DB

2007-04-06 Thread Peter Wilson
Nikolay Moskvichev wrote:
> Hi All!
> 
> The question is: how suitable is PG for storing about 2,000,000 binary files
> of 0.5-2.0 MB each? A big number of clients or a lot of updates is not
> planned. Like a photo album on a local host.

In our CMS we store all page data in the database - either text for HTML pages
or the images and other truly binary data.

I agree with most of the comments on why *not* to store binary application data
in the file system, but I'd add the following :

For scalability, we use a very high performance database server (battery backed
RAID controller, lots of disks, lots of RAM, processors etc). Virtually the only
thing it runs is Postgres. This connects to a number of 'front-end' web servers.
 These are lower performance, cheaper boxes than the database. Generally 1U,
single disk, not RAID etc. The web-servers run Apache and the processor
intensive stuff like server-side JavaScript.

A load balancer shares web traffic across the front-end servers and can detect
if any of those fail and switch them out of servicing requests.

The front-end servers connect to the database server over gigabit Ethernet to
cut latency to a minimum.

We've found that the single high-spec database server is more than capable of
servicing quite a lot of front-end web servers.

Now in that environment, if you were to choose to store things in the file
system, not only have you got the issue of synchronising file system with
database, but of synchronising all the file systems.

You could get round some of those issues by keeping all the files on one system
and using an NFS share or some complex FTP system or something, but actually
it's a lot easier to hold the data in the database.

So - when you're developing your application it's worth thinking about what
happens as it starts to get busier. What's your route to scaling? Many web
applications are written to work on a single machine with no thought to what
happens when that reaches the limit, other than get a bigger server.

All the best
Peter Wilson
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk
---

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Re: programmatic way to fetch latest release for a given major.minor version

2007-04-10 Thread Peter Wilson
Jorge Godoy wrote:
> Listmail <[EMAIL PROTECTED]> writes:
> 
 Yeah yeah, but terminology aside, having 2 or three digits in each
 attribute is just wrong!
>>> Terminology aside, why? The unit is "8.1" not "8" and "1". It makes no
>>> sense to say you're on version 8, in the given context, so why should the
>>> XML data pretend there is?
>>>
>>> //Magnus
>>  Just pretend that :
>>
>>  - version = a tuple of integers (a, b, c, ...)
>>  - major = (a, b)
>>  - minor = (c, ...)
>>
>>  Besides, that is sortable (unlike strings where 15 < 2) :
> 
> But then, floats are as sortable as integers and 8.3 < 15.1...
> 
> 

8.7, 8.9, 8.10 - oops. 8.10 < 8.9

The 'period' is effectively a field separator, with the unfortunate side-effect
that it looks like a number.

Unless of course, the version after 8.9 *will* be 9.0, in which case it is a
number and you're right.

Pete

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Windows Vista Support

2007-05-17 Thread Peter Wilson

Dave Page wrote:

[EMAIL PROTECTED] wrote:

Can you confirm that you don't provide support for Windows Vista for any
release of Postgres. I'm dumbfounded an it appears that you don't
support Vista. If so, are you planning any releases. I have a major
project and was hoping to use Postgres.


PostgreSQL will run on Vista, however to run the installer you will have
to temporarily disable User Account Control.

This is because the current stable version of PostgreSQL was released
some time before Vista so obviously couldn't be properly tested with it.

The installer for PostgreSQL 8.3 has already been rewritten as required
to support Vista.


I've not tried Postgres on Vista yet - I run it on Linux - however I have found 
that most Windows XP software that won't run by default on Vista will do so provided 
you mark the executable to run in XP-compatible mode - you don't have to turn off 
'User Account Control'.


Worth trying : right mouse click on the Postgres installer, select the 
'compatibility' tab and then Windows XP (Service Pack 2) from the drop down.


Pete
--
http://www.yellowhawk.co.uk
http://www.whitebeam.org




Regards, Dave

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] pg_dump --data-only: is dump ordered to keep foreign-key-relations

2005-11-24 Thread Peter Wilson

Harald Armin Massa wrote:

I migrated one database from 8.0 to 8.1

I used that to add "without oids" to all tables.

First step so:

pg_dump --schema-only  -U user database

the file was edited, all tables set to "without oids", and reloaded in 8.1

After that I

pg_dump --data-only -U user database

and tried to reload the data. But it fails on foreign keys: depending 
tables are being dumped before the tables they depend on.


I solved it by manually dumping the relevant tables and reloading them,

Now I cannot find documentation
- if pg_dump is supposed to produce an "ordered dump", so that not doing so 
is a bug and I need to present a showcase

- or if it is simply not implemented and an enhancement request;
- or if it is even on a theoretical basis impossible to derive the 
correct order. [circular foreign keys came to my mind]


You can do this by creating a table of contents from your dump:
pg_restore -l -a ... > toc_file ...

Edit this to re-order the contents such that they restore in a safe
order and then restore using the modified TOC:
pg_restore -L toc_file ...

That works for me at least. It is a bit of a pain to have to do this.
Maybe there is an easier way?

Disabling triggers only affects triggers - not keys - so it doesn't help.

Hope that helps
Pete
--
Whitebeam Web Application Server
http://www.whitebeam.org
--

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] DBMirror.pl performance change

2006-01-23 Thread Peter Wilson

The Whitebeam implementation of DBMirror.pl :
http://www.whitebeam.org/library/guide/TechNotes/replicate.rhtm

is a complete re-write in 'C' which avoids a lot of the text processing, and what text processing is required is done using a state machine rather 
than repeated regular expressions. Before I wrote the 'C' implementation I did look at optimising the Perl version. One of my big concerns was the 
time taken escaping and re-escaping the strings. I can't remember the details now, but a lot of that escaping seemed unnecessary. There 
seemed to be an unescape of the data and then a re-escape to the target database. In practice the data was already in the correct format.


We make quite heavy use of both BYTEA and large varchar fields in our database. I did some load testing at the time and found the new version could 
replicate tens of file objects per second - where the Perl version took 10 minutes to replicate a 120K BYTEA field (both on a slowish machine, but the 
Perl version wasn't much better on a fast machine *and* took 97% CPU).


I also took the opportunity to make the 'C' version much more tolerant to lost DB connections without having to restart and added a few other tweaks 
to make it more flexible.


It's released under the BSD license now as well

Pete
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk


Achilleus Mantzios wrote:

I discovered a problem in DBMirror.pl, performance wise.

pending.c stores data in a way
very similar to the PgSQL input "\" escaped format.

When the field is of type bytea, and the source of data is binary, then
this produces 2 additional backslashes for every unprintable
char.

The performance of the function extractData in DBMirror.pl really suffers
from this condition, since it breaks the data into chunks of "\" delimited
strings.

Informally speaking, performance tends to be O(n) where n is the size
of the data.

This can be remedied if we break data in chunks of "'" rather than "\".
"'" happens much more infrequently in common binary files (bz2, tiff, jpg, 
pdf etc..), and if we notice that odd number of contained "\", signals an
intermidiate "'", whereas even number of "\" signals the final "'", 
then we can make this routine run much faster.


I attach the new extractData function.

Now replicating a 400 k tiff takes 3 seconds instead of 12 minutes
it used to do.

I am wondering about the state of 
http://www.whitebeam.org/library/guide/TechNotes/replicate.rhtm


Please feel free for any comments.

Pete could you test this new DBMirror.pl, to see how it behaves
in comparison with your C++ solution?





sub extractData($$) {
  my $pendingResult = $_[0];
  my $currentTuple = $_[1];
  my $fnumber;
  my %valuesHash;
  $fnumber = 4;
  my $dataField = $pendingResult->getvalue($currentTuple,$fnumber);
  my $numofbs;

  while(length($dataField)>0) {
# Extract the field name that is surronded by double quotes
$dataField =~ m/(\".*?\")/s;
my $fieldName = $1;
$dataField = substr $dataField ,length($fieldName);
$fieldName =~ s/\"//g; #Remove the surronding " signs.

if($dataField =~ m/(^= )/s) {
  #Matched null
$dataField = substr $dataField , length($1);
  $valuesHash{$fieldName}=undef;
}
elsif ($dataField =~ m/(^=\')/s) {
  #Has data.
  my $value;
  $dataField = substr $dataField ,2; #Skip the ='
LOOP: {  #This is to allow us to use last from a do loop.
 #Recommended in perlsyn manpage.
  do {
my $matchString;
my $matchString2;
#Find the substring ending with the first ' or first \
	$dataField =~ m/(.*?[\'])?/s; 
	$matchString = $1;


$numofbs = ($matchString =~ tr/\\//) % 2;   

if ($numofbs == 1) { #// odd number of \, i.e. intermediate '
$matchString2 = substr $matchString,0, length($matchString)-2;
$matchString2 =~ s/\\\\/\\/g;   # collapse doubled backslashes back to single ones
$value .= ($matchString2 . "\'");
$dataField = substr $dataField,length($matchString);
}
else { #// even number of \, i.e. found end of data
$matchString2 = substr $matchString,0, length($matchString)-1;
$matchString2 =~ s/\\\\/\\/g;   # collapse doubled backslashes back to single ones
$value .= $matchString2;
$dataField = substr $dataField,length($matchString)+1;
last;
}

	   
  } until(length($dataField)==0);

  }
  $valuesHash{$fieldName} = $value;
  
  
  }#else if 
	  else {
	
	logErrorMessage "Error in PendingData Sequence Id " .

$pendingResult->getvalue($currentTuple,0);
die;
  }



  } #while

  return %valuesHash;

}






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire 

Re: [GENERAL] DBMirror.pl performance change

2006-01-23 Thread Peter Wilson

Achilleus Mantzios wrote:

Peter,
It is much more convenient for you to make a test,
(just change the last function in DBmirror.pl), than for me
(grab whitebeam, compile for FreeBSD, etc...)

Of course you would need to use the original .conf format
than the one you are using now.

It would be interesting to see some numbers.

P.S.

Please include my address explicitly, pgsql-general comes
to me in digest mode.


I'll take a look into this when I get a chance. Right now the only replicated systems I have are for live commercial clients - my development systems 
aren't replicated, just backed-up periodically.


It is worth looking through the Perl version some more though. I'm pretty sure I worked around most of the escaping/unescaping when I looked at the 
'C' version. I'm pretty sure some of the same approach could be used to improve performance of the Perl version. The main thing I found was that the 
data table is un-escaped when read from the table and then re-escaped before being sent to the slave database. In practice the data doesn't have to be 
touched.


My own preference right now is to stick with the C version now I have it. Replication is just about simultaneous with negligible CPU usage. When I get 
a chance, I'm intending decoupling the 'C' version from the whole of Whitebeam so it can be built by itself. At the time I needed a solution quickly 
so making use of a few Whitebeam utility classes got me there.


Pete

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Does this look ethical to you?

2006-01-24 Thread Peter Wilson

Tony Caduto wrote:

Thank you, however I'm more concerned with:

"PGLA has many advanced features not found in pgAdmin III,".

Aside from it being slightly misleading (not only are there not many
'advanced' things PGLA can do that pgAdmin can't, there are a similar
number that pgAdmin can, that PGLA can't), it is still attempting to
sell your product on our name. I would therefore ask that you either:

- Not mention pgAdmin at all, or
- Mention not only pgAdmin, but EDB Studio, Navicat, EMS PostgreSQL
Manager and other comparable products *as well*.


I changed it to say PGLA has many advanced GUI features not found in 
other admin tools.


I play fair Dave, and expect the same from you.



Because the installer is not letting it be known that there are 
alternatives available, I have had many people tell me they had no idea 
there was anything else available.


It does not matter that pgAdmin is open source, and letting users know 
about alternatives is not free advertising, free advertising would be 
you paying for my Google addwords account.


You guys are doing the same thing as Microsoft did with Internet 
Explorer, let's include it so our browser/admin tool is all the user 
knows about or sees when they install the OS, or in this case the SQL 
server.


A link or blurb should be mentioned that there are other admin tools 
available or pgAdmin should not be installed either.


I am not saying you should put specifics about mine or anyone else's product, 
commercial or open source, but I think links back to the commercial and
open source pages on the postgresql site would be a fair thing to do.


You are not letting the user make a choice about which admin tool to use 
or even try...


Just my opinion on the whole fair competition thing,



I'm not sure I understand some of these arguments, and I don't know the 
history, so as an uninformed third party who can't resist adding my 
tupennyw'th...

a) pgAdmin happens to be an admin tool that undercuts other tools in terms of 
price (free) and for some features

b) It has negotiated a distribution channel with partner organisations - something that any other organisation presumably is free to do. You just have 
to have the right proposition (OSS) to entice that partner to work with you.


c) If you would like pgAdmin to mention that there are more expensive alternative products - would your product before completing a sale recommend 
that people go take a look at pgAdmin first and see whether that might be a better alternative?


Just out of interest - which product came first? I've been aware of pgAdmin for a long time - if it was there first you'd have to look closely at 
whether there was a commercial business case for trying to get into that market with a broadly similar product.


A valid business case would obviously include making sure there were suitable accessible channels to market and sufficient funds to finance those 
channels. OSS projects don't often have a cash generating base to fund those channels so they are always at a disadvantage to commercial ventures.


Personally I'd be a little uneasy trying to build a commercial product that piggybacks on an OSS product simply because if it's something useful and 
important, as opposed to niche, then someone will add an OSS version and, if they do their work properly, destroy my market.


Pete
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk
-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Oracle purchases Sleepycat - is this the "other shoe"

2006-02-14 Thread Peter Wilson

Marc G. Fournier wrote:

On Tue, 14 Feb 2006, Scott Marlowe wrote:


On Tue, 2006-02-14 at 10:51, Leonel Nunez wrote:

Rich Shepard wrote:

On Tue, 14 Feb 2006, Randal L. Schwartz wrote:

Oracle purchases Sleepycat.  From what I understand, BerkeleyDB was 
the

"other" way that MySQL could have transactions if Oracle decided to
restrict InnoDB tables (after purchasing Innobase last year).


  From what I read a few days ago, Oracle is negotiating with
Sleepycat, Zope
(is that the PHP developer's name?), and one other OSS developer.
Nothing is
yet signed, and they could all fall through.

Rich



Zope is a Python  framework
Zend is for  php


Also, given the license of PHP, which is NOT like the GPL, but much
closer to the BSD license, I doubt Oracle could manage to buy it and
kill it or hide it or whatever.


As of this moment, if Oracle buys Zend, they could effectively kill PHP 
... the core engine that PHP is built around is a Zend engine, so if 
they were to revoke the license for that, PHP would be dead ... kinda 
like MySQL with InnoDB ... now, there was talk at one point in time of 
replacing that engine with Parrot, so I'm not sure how hard/long it 
would take for them to do so if Zend got pulled out from under them ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Why not replace the whole of PHP/mySQL with Whitebeam (unashamed plug)/PostgreSQL,
have a complete BSD-licensed solution and avoid all this uncertainty :-) ?

--
Peter Wilson
http://www.whitebeam.org
http://www.yellowhawk.co.uk


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Oracle purchases Sleepycat - is this the "other shoe" for MySQL

2006-02-15 Thread Peter Wilson

Randal L. Schwartz wrote:

Oracle purchases Sleepycat.  From what I understand, BerkeleyDB was the
"other" way that MySQL could have transactions if Oracle decided to
restrict InnoDB tables (after purchasing Innobase last year).

Does this mean the other shoe has dropped for MySQL AB?


I think the thing being missed in all this isn't the impact of Oracle
buying innodb and sleepycat on open-source mySQL. The open-source license
for existing versions of this code I believe cannot be revoked - and
if Oracle tried it they would get a ridiculous amount of bad press.

Assuming Oracle are making these purchases to 'get at' mySql, then why?

One thing it does do is give them the power to squeeze mySQL's business
model - the one where they make money out of their commercial license
for their database. The open-source version doesn't directly make money
for the company - instead you are encouraged to buy a commercial license
for that.

To do a commercial license for mySQL the company has to negotiate a
commercial licence for innodb and Berkeley DB. They have to pay those
companies. The agreements are probably periodically re-negotiated (I
seem to remember something about this when innodb were acquired).

When it's time to renegotiate Oracle could say add a (modest) 10-15-25%
onto the cost. mySql then have a problem - they pass that cost onto
their customers and probably lose a number of them to the open-source
version, or they swallow the cost. Either way that reduces the
amount of revenue they make. Less revenue means less resource to improve
mySql - in the worst case mySql have to use all their revenue to
support existing releases.

Stunting mySql development resource means less new features and keeps a
healthy functional gap between 'Enterprise class DB' Oracle and 'poor mans
option' of mySql. The bigger Oracle can keep that gap the fewer Enterprise
customers they loose to mySql.

Of course that can then all be offset by a knight in shining armour that,
for their own reasons, decide to donate some money or resource to mySql.

ho-hum - conspiracy theories abound!

Just about the only thing that can be said is that Oracle doesn't
need the technology they have bought!

my t'pennyw'th

Pete
--
http://www.whitebeam.org - open source web application server


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] a web framework for postgresql?

2006-03-01 Thread Peter Wilson

falcon wrote:

Hi,
Most of the web applications I work on are nothing more than front-ends
to postgresql.  I have used Perl (CGI), Java, C# and am now looking at
Django.  Each generation of frameworks lessens the pain of donig
web-apps, but it still seems redundant.

Does any one know of a framework where the database server (or a
web/app server integrated with a DB server) serves web pages?  The
database contains the data-model; names of tables, names of attributes,
their types, foreign key relationships among tables...that's A LOT of
information.  Sql query, views or stored procs could serve as 'reports'
served off the data.  Perhaps the only thing that needs to be outside a
database is something that describes how the data is to be displayed
(CSS).  There could be some java/c#/python/ruby/whatever engine which
takes all the information provided in the database and generates
html/xhtml, default css, javascript validation, etc... but all that
should be invisible to the user.

Any one know of such a framework?

(I'm asking this in pgsql because such a framework will have to be
fairly closely linked to a database...and I mainly use pgsql).



I don't know of one - to be honest, I think I'd be a little uneasy with that kind of 
model on several levels.

Complex web applications involve a lot of complex logic, which tends to be very processor intensive. It's good to be able to scale such sites by 
adding more processing engines and so partitioning the logic from the database is a good idea.


We use (and wrote) Whitebeam which goes some of the way to providing a framework. It cuts down the amount of database knowledge you need by having a 
defined schema that represents data in a very flexible way (although you are free to develop SQL level applications if you want to). Whitebeam uses 
Postgres to model arbitrary things and the relationships between those things. The things can be CMS web pages or product catalogues. JavaScript 
(server side) is then used to add the semantic meaning to those things for each application.


The Whitebeam abstraction is higher than SQL (for example it represents arbitrarily deep directories of information and provides mechanisms for 
searching those) while at the same time being flexible enough to allow a wide range of data-driven applications (CMS, discussion forums, surveys, 
document repositories etc).


The programme files that drive the application are still stored in the standard file system. You could store those in the database as well if you 
wanted to but you'd be adding more contention for a central resource. To get the best performance you have to have a pragmatic approach to building 
your system.


Most applications also of course have to use other services such as sendmail, spell checkers, image tools. The datbase is only one part of the 
solution, although admittedly one you have to get right!


Pete
--
Whitebeam :  (http://www.whitebeam.org)
YellowHawk : (http://www.yellowhawk.co.uk)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] plpgsql replication stored procedure

2006-04-17 Thread Peter Wilson

[EMAIL PROTECTED] wrote:

Hi,

I'm trying to write a stored procedure that can capture all the changes
to a table and record the changes based on the table's primary key.

I can almost get there but the sticking point is being able to access
the primary key field of the NEW/OLD record in the trigger stored
procedure without knowing it's name.

The stored procedure is below and what I'm trying to do is find a way
to get the column from the NEW record that has the name
constraintColName. If I could do that I would replace NEW.oid with the
equivalent of NEW[constraintColName] and remove the need for oid's on
the table.

create or replace function replicate() returns trigger as
$$

 declare
  constraintName varchar;
  constraintColName varchar;
  keyId varchar;
  slaves record;

 begin
select into constraintName constraint_name from
information_schema.table_constraints where table_name = TG_RELNAME and
constraint_type = 'PRIMARY KEY';
select into constraintColName column_name from
information_schema.key_column_usage where constraint_name =
constraintName;
-- execute 'select ' || constraintColName || ' from ' || NEW into
keyId;

for slaves in
 select slaveid from replicationslaves
loop
 insert into replicationentries values(default, slaves.slaveid,
TG_OP, TG_RELNAME , NEW.oid, default);
end loop;
return NULL;
end;$$
language 'plpgsql';

Aaron

Why not use or adapt the 'C' function in the dbmirror implementation shipped 
with Postgres? The Perl script to replicate to the slave database is very 
inefficient but the trigger function itself is very fast.


I've also got a C++ implementation of the dbmirror replication perl script as 
well if it's any use


Pete
--
www.whitebeam.org
www.yellowhawk.co.uk
---


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Announce: GPL Framework centered on Postgres

2006-05-17 Thread Peter Wilson

Tim Allen wrote:

Kenneth Downs wrote:

GPL is to spread it as far and wide as possible as fast as possible.


LGPL?

My concern would be, I can't use this toolkit for a closed source 
application if it is GPL.


That may be your intent (which I actually don't have a business 
problem with), I was just curious as to your decision.


If it turns out that nobody can release a closed source app, I will 
definitely reconsider and look again at LGPL, but I am not convinced 
you cannot do so.


If you seek to provide a closed source app that is built upon 
Andromeda, you are required to provide the source code to Andromeda 
itself.  However, your app is not a derivative work in the strict 
sense because your code is not mixed in with mine in any sense.  You 
never modify a file, and your files and mine are actually in separate 
directories.


I greatly appreciate your asking the question though because I'd like 
to make sure that people feel safe with the project.  My goal is to 
provide the freedoms typically associated with the "plain old GPL", 
and certainly not to restrict the creation of closed apps.  I just 
don't want anybody closing *my* app.


Then it sounds like LGPL is exactly what you want. That forbids people 
closing your code, but allows linking of it to closed apps. Cf Tom's 
comments, it's quite difficult for anyone to release code that depends 
on GPL'd code without incurring the terms of the GPL for their code (and 
that is clearly the way the FSF want it to be).


But as Joshua was implying, a common business model is to release some 
code under GPL, which means it can be used only for GPL'd apps, and then 
also be willing to sell other sorts of licences for it to be used with 
commercial apps. If that's the sort of business model you have in mind, 
then GPL is probably what you want.


We've been through similar discussions recently with our web application server, 
Whitebeam (http://www.whitebeam.org).


We'd originally released this under a variant of the Mozilla licence - which I 
think is not unlike GPL. We started down that route because we make use of 
Mozillas JavaScript engine (SpiderMonkey). We did get a number of comments 
though, and we never managed to get our licence adopted by the OSS (quite 
rightly so!)


The outcome of the discussion was to release the project under a BSD license.

A good deal of the discussion centred around the fact that we make heavy use of 
Postgres and so we'd be a much more natural choice of development environment if 
we had a similar licence. It helped that the discussions took place during the 
uncertainty around mySQL licensing coupled with Oracle's buyout of the innodb 
company. The clincher was that Postgres+Whitebeam+Apache (1.3.29 before they 
changed their licence) provided a complete BSD based web development 
environment. The only external dependancy being SpiderMonkey which we link to 
under the LGPL.


My suggestion would be: a) if you want to keep the option of selling/licencing 
your code for commercial gain, do something like mySQL and release under GPL 
with lots of warnings and offer people a 'commercial' licence; b) if you want to 
see your project used in the widest possible audience go with BSD.


The BSD license does allow others to create a closed-source project from your 
code - but my view is that isn't too important. You'd be the natural port of 
call if they wanted consultancy on how to do that.


Pete
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk
--

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL and Apache

2006-06-13 Thread Peter Wilson

louis gonzales wrote:
PHP is one alternative, another is PERL with CGI to write web based 
programs that can GET/POST with input/output from the browser, and to 
interface with *SQL - i.e. postgresql - you can use PERL's DBI interface


Leif B. Kristensen wrote:


On Tuesday 13. June 2006 15:39, jqpx37 wrote:
 


I'm working on a project involving PostgreSQL and Apache.

Anyone know of any good books or online how-to's on getting PostgreSQL
and Apache to work together?  (I'm also using PHP.)
  


AFAIK, there are no dependencies beween Apache and PostgreSQL. PHP is 
what you'll use as the glue between them.


I've worked with PHP and MySQL for some years, and found the 
transition to PostgreSQL rather painless, but still I've considered 
buying the "Beginning PHP and PostgreSQL 8: From Novice to 
Professional" by W. Jason Gilmore and Robert H. Treat.
Another one is 'Whitebeam' - which is primarily targeted at developing 
web-applications using server-side JavaScript, XML and PostgreSQL. It runs as an 
Apache module (like PHP).


The Whitebeam interface to PostgreSQL allows you to do just about anything you 
could do in libpq and in a slightly more object orientated way than the raw PHP 
interface (although for PHP there are a number of wrappers).


Whitebeam at http://www.whitebeam.org

The Postgres interface is documented at:
http://www.whitebeam.org/library/guide/TechNotes/postgres.rhtm

Best regards
Pete

 




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Database connectivity using a unix shell

2006-06-29 Thread Peter Wilson
Are you just asking random questions? What do you actually want to do? You've
asked how to access Postgres from a shell - now you're using 'C'. Are you going
to work your way through Java, Perl and a host of others?

All of this information is *very* clearly available in the manual at:
  http://www.postgresql.org/docs/8.1/interactive/index.html

It is an *excellent* manual. Apart from how to access Postgres in a multitude of
ways it includes a good reference on SQL itself.

Go have a read.

Jasbinder Bali wrote:
> What if I don't have a shell script and instead of that I have C code
> and need to connect to the postgres database.
> How do I accomplish this? Do I still need to call this psql client
> interface or is there some other way to do it?
> 
> ~Jas
> 
> On 6/29/06, *Scott Marlowe* <[EMAIL PROTECTED]
> > wrote:
> 
> On Thu, 2006-06-29 at 13:29, Jasbinder Bali wrote:
> > this is what i've included in my shell script
> >
> > query="select * from films";
> > a=`echo $query|psql -tq postgres`;
> >
> >
> > it gives an error:
> > role root doesn't exist.
> >
> > where do i have to specify the role?
> 
> OK, unless this script NEEDS to be run as root, it's a good idea to run
> it as an unprivileged account.  Create a new one if you can.  If it
> has
> to be run as root, so be it.
> 
> You use -U, so it'd look like:
> 
> a=`echo $query|psql -tqU myname postgres`;
> 
> then you need a password in ~/.pgpass following the format I posted in
> the previous message, unless your server is running in trust mode, in
> which case you don't need to worry about the .pgpass file.
> 
> >
> > On 6/29/06, Scott Marlowe <[EMAIL PROTECTED]
> > wrote:
> > On Thu, 2006-06-29 at 11:29, Jasbinder Bali wrote:
> > > isn't my normal bash script different from psql.
> > > In a bash script how wud u specify the db parameters
> >
> > Look at how I'm doing it here:
> >
> > > > > query="select * from sometable";
> > > > > a=`echo $query|psql -tq dbname`;
> >
> > Note that I'm calling psql from within a bash script. So, the
> > connection
> > params are the same as for psql, cause that's what I'm using.
> >
> 
> 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Ajax/PostgreSQL

2006-08-05 Thread Peter Wilson
Paul M Foster wrote:
> I'm doing some massive (internal company) applications using PHP, which
> query extensive PostgreSQL tables. This is fine, but obviously it often
> requires multiple web pages to get something done. Supposedly, AJAX
> promises to make web pages more interactive. But from what I understand,
> I'd have to have bindings from Javascript into PostgreSQL to make this
> work.
> 
> Here's an example: The user wants to enter a bill (accounts payable)
> into the system. He first has to pick a vendor. Normally, this would
> entail a PHP page that generates a PostgreSQL query. The user would then
> get a second page with various vendor information (like number of due
> days for that vendor), and various other payable info. But wouldn't it
> be nice to have vendor information filled in on the original page,
> directly after the user picks a vendor? Theoretically, AJAX might allow
> something like this. But from what I can see, it would require
> PostgreSQL bindings in Javascript, and some way to pass the data back so
> that PHP could use it.
> 
> Is this even possible? Is it being worked on? Is there a different
> solution I don't know about? I can see where Javascript can alter the
> look of a page, but I can't work out how it would allow interactive use
> of a PostgreSQL table.
> 
No - AJAX is really talking about the ability to use client-side JavaScript to
request XML documents from a server.

In this case if you want to have a page show vendor information when a user
selects a specific vendor, then your client-side JavaScript would request a
resource on your server that will return that information in XML.

If you're using PHP then the request you make might be something like:
   http://myserver.com/getVendorInfo.php

The PHP request on the server goes to Postgres and retrieves the appropriate
information, formats it as XML and writes it out like any other page.

The client-side Javascript gets the XML response, uses client-side DOM to decode
the response and changes the page HTML to include the appropriate information
without having the page do a refresh.

Obviously this is often not much faster than getting a different page, and is
one of the fundamental problems with things like AJAX and SOA - each remote
request is remote - it has to go over the internet and suffers not only the
processing time at the database but also the network latency.

But to answer your question - there is *no* coupling between JavaScript and
Postgres. The JavaScript runs on the client, Postgres on the server, and PHP
acts as your intermediary.

Best regards
Pete
--
Peter Wilson
YellowHawk Ltd (http://www.yellowhawk.co.uk)
Server Side XML and Javascript Web Application Server - http://www.whitebeam.org

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster