[GENERAL] wiki.postgresql.org down?

2011-07-27 Thread Yan Chunlu
I tried several times to visit the site recently but the domain always
cannot be resolved. Is there something wrong?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Chris Travers
On Tue, Jul 26, 2011 at 8:13 PM, Karl Nack  wrote:
>> The current svn trunk (to be 1.3) does.
>
> So how far do you take this? I've been playing around with plpgsql a bit
> and am pretty encouraged by what I'm discovering; now I'm at a point
> where I'm thinking, "how far do/can I go with this?"

Here are the limitations I have discovered:

1)  Localization of exception strings is a bit of a problem.  Hence
exceptions need to be aimed at communicating to the application rather
than the user.

2)  Difficulties passing complex data structures back and forth and
properly parsing them in the application.  Currently we do a lot with
two dimensional arrays but will probably shift to more arrays of
complex types as we drop support for older versions of PostgreSQL and
DBD::Pg.

There are queries which do a lot of things in the db in a single SQL
statement.  The longest single SQL statement I have found thus far is
a bit over 100 lines long (due to complex requirements and some
shortcomings in the db schema we have inherited that we are working on
replacing).  It's still pretty easy to read and understand at that
length, at least when compared to a function in a more general purpose
language.

Menu data is also stored in the database (application settings and
menu argument data are the two areas where key/value modelling is
used).

The result is that the Perl codebase is shrinking in absolute terms,
being replaced in part by SQL.  However, a rigorous separation of
named queries and lightweight application logic has allowed us to shrink
the total amount of code in the project while significantly adding
functionality.

>
> Probably the best example is input validation. Constraints and triggers
> on the database will (or at least should) prevent bad data from being
> added to the database, but the UI generally needs to provide more
> informative messages than errors thrown by the database, and provide
> errors messages for every invalid field, whereas the database will
> fail/stop on the first error. Consequently, I find that much of the data
> logic ends up being duplicated outside of the database to enhance the
> user experience. Might there be a way to move these validation routines
> into the database as well, and unify all the data logic into one place?

The best option is to use exceptions to communicate to the application
what went wrong and then allow the application to handle those
exceptions in many cases.  In other cases, the application may need to
know which inputs are mandatory.

In general what we do is side with the exception trapping and
handling.  This means that if the query fails, we take the sql state,
detect the type of error, and display an appropriate message.  In some
cases ("Access denied") we are terse.  In other cases we are adding
the full SQL error message to the message simply because the
combination of an easy to read description of what happened ("Required
input not provided") and the sql message mentioning the field is enough
for many users to figure out what they did wrong.  It's still not
ideal.
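For illustration only, a rough sketch of the database side of that pattern
(a hypothetical function, not the actual LedgerSMB code; RAISE ... USING
needs 8.4+). The application then maps the SQLSTATE and message text to
friendlier wording:

CREATE OR REPLACE FUNCTION check_required(in_value text, in_field text)
RETURNS text LANGUAGE plpgsql AS $$
BEGIN
    IF in_value IS NULL OR in_value = '' THEN
        -- terse and machine-readable; the client decides how to present it
        RAISE EXCEPTION 'Required input not provided: %', in_field
              USING ERRCODE = 'not_null_violation';
    END IF;
    RETURN in_value;
END;
$$;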

>
>> > Yes, but I'd implement the constraint "all transactions must balance" as
>> > a trigger that fires when the transaction is complete. This would
>> > enforce data integrity regardless of whether or not the database API is
>> > used, which I think is also important.
>>
>> That's problematic to do in PostgreSQL because statement-level
>> triggers don't have access to statement args, and I don't believe they
>> can be deferred.
>
> In another email I did provide a simple example of how I might implement
> this; I don't know if you saw it. Of course, that assumes your database
> isn't designed by  :-)

I didn't see it even looking back (I saw the initial attempt and the
PHP sample).

The ideal interface at the moment is something like

SELECT create_transaction(1, current_date, 'Transaction 1', '{(1, 50),
(2, -50)}');

This would allow you to do something like:
SELECT create_transaction(1, current_date, 'Transaction 1', '{(1, 50),
(2, -30), (3, -20)}'); as well since we are now talking about arrays of records.

But this is a pain to code to/from SQL in a robust way.  Good db
drivers sometimes handle this automatically though.
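To make that concrete, here is a minimal sketch (made-up names, not the real
function) of how such an interface can be declared around an array of a
composite type; note the quoting an array-of-record literal actually needs:

CREATE TYPE journal_line AS (account_id int, amount numeric);

CREATE OR REPLACE FUNCTION create_transaction(
    in_reference   int,
    in_date        date,
    in_description text,
    in_lines       journal_line[]
) RETURNS int LANGUAGE plpgsql AS $$
DECLARE
    imbalance numeric;
BEGIN
    -- unnest() (8.4+) turns the array back into rows inside the function
    SELECT sum(l.amount) INTO imbalance FROM unnest(in_lines) AS l;
    IF imbalance IS DISTINCT FROM 0 THEN
        RAISE EXCEPTION 'transaction does not balance (off by %)', imbalance;
    END IF;
    -- a real implementation would insert the header and line rows here
    RETURN in_reference;
END;
$$;

SELECT create_transaction(1, current_date, 'Transaction 1',
                          '{"(1,50)","(2,-30)","(3,-20)"}');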

Best Wishes,
Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] wiki.postgresql.org down?

2011-07-27 Thread John R Pierce

On 07/26/11 11:35 PM, Yan Chunlu wrote:

I tried several times to visit the site recently but the domain always
can not be resolved. is there something wrong?


per the guys on the IRC channel, there was a DNS server issue a little 
while ago, hopefully its reserved, but if your server has cached bad 
entries, it might take awhile longer to flush



--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] wiki.postgresql.org down?

2011-07-27 Thread John R Pierce

On 07/27/11 12:48 AM, John R Pierce wrote:

...hopefully its reserved...


errr.  i meant to type RESOLVED.  hah.

--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy load-high cpu itilization

2011-07-27 Thread Filippos
thx a lot for your answer. i have already bought your book and have read most
of the chapters.
 (in the near future i will study it more thoroughly)

i asked here for tips in case there was something i had missed and there was
a "magic" way to improve the performance even a little. 

as far as the cpu usage is concerned, i have monitored the situation closely
(using top and other tools), and it comes from postgres. you also told me
about the logs and i am aware of this "technique" but i didn't use it so far,
to avoid the extra load of data, but i will do it soon. 

i will also schedule an update to deal with the autovacuum bug.

thx again :)

P.S i will send a message to the admins, to ask them to move the topic to
the sub-forum of performance



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/heavy-load-high-cpu-itilization-tp4635696p4637997.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Vincent Veyron
On Tuesday, 26 July 2011 at 11:23 -0500, Merlin Moncure wrote:
>  I think I've been cursed due
> to have discovered the secret to fast, efficient programming while
> continually being constrained from developing that way.   Then again,
> most programmers probably feel like that :-).
> 


I think there is a very good reason for that: contrary to the official
discourse, I believe executives in organizations actively maintain
complexity, consciously or not.

See the accomplishments of free/open source software in sometimes very
complex projects, without the need for a hierarchical management
structure. 

I know I'm astounded.

-- 
Vincent Veyron
http://marica.fr/
Claims and litigation management software for legal departments


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] mac installer on Lion

2011-07-27 Thread Grzegorz Jaśkiewicz
Alternatively you can get the 'server' package from the App Store; it has
postgresql (9.0) already in it :)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Karl Nack
> The current svn trunk (to be 1.3) does.

So how far do you take this? I've been playing around with plpgsql a bit
and am pretty encouraged by what I'm discovering; now I'm at a point
where I'm thinking, "how far do/can I go with this?"

Probably the best example is input validation. Constraints and triggers
on the database will (or at least should) prevent bad data from being
added to the database, but the UI generally needs to provide more
informative messages than errors thrown by the database, and provide
errors messages for every invalid field, whereas the database will
fail/stop on the first error. Consequently, I find that much of the data
logic ends up being duplicated outside of the database to enhance the
user experience. Might there be a way to move these validation routines
into the database as well, and unify all the data logic into one place?

> > Yes, but I'd implement the constraint "all transactions must balance" as
> > a trigger that fires when the transaction is complete. This would
> > enforce data integrity regardless of whether or not the database API is
> > used, which I think is also important.
> 
> That's problematic to do in PostgreSQL because statement-level
> triggers don't have access to statement args, and I don't believe they
> can be deferred.

In another email I did provide a simple example of how I might implement
this; I don't know if you saw it. Of course, that assumes your database
isn't designed by  :-)


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] repmgr problem with registering standby

2011-07-27 Thread Toby Corkindale

Hi,
I have repmgr working to some degree on a couple of servers, but am 
having trouble with the "register" part on the slave.


On the master, I run:
# repmgr -f /etc/repmgr/validator/repmgr.conf \
  --verbose --force master register

Opening configuration file: /etc/repmgr/validator/repmgr.conf
repmgr connecting to master database
repmgr connected to master, checking its state
finding node list for cluster 'validator'
Master node correctly registered for cluster validator with id 0 
(conninfo: host=10.133.54.2 port=5432 user=repmgr dbname=repmgr)



So that looks good, but then I try this on the slave:
# repmgr -f /etc/repmgr/validator/repmgr.conf \
  --verbose standby register

Opening configuration file: /etc/repmgr/validator/repmgr.conf
repmgr connecting to standby database
repmgr connected to standby, checking its state
repmgr connecting to master database
finding node list for cluster 'validator'
A master must be defined before configuring a slave



I can query the database like so though, and it seems like it's all good:
repmgr=# select * from repmgr_validator.repl_nodes;
 id |  cluster  |                       conninfo
----+-----------+-------------------------------------------------------
  0 | validator | host=10.133.54.2 port=5432 user=repmgr dbname=repmgr
(1 row)


Does anyone have an idea of what might be going wrong here?

Thanks,
Toby

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] variant column type

2011-07-27 Thread salah jubeh
Thanks for the help, and for the different design options; it really helped me.
I had a look at vertical design and horizontal design, and here are some pros
and cons of the vertical design in general:

Advantages:
• Avoid null values and make better use of storage
• Avoid constant schema changes due to adding columns
• Avoid performance issues which may be encountered, for specific queries,
when the table is very wide

Disadvantages:
• Queries written against the vertical design become cumbersome.
• Multiple joins are needed to pull back each attribute.
• Data needs to be converted to the horizontal design in many cases, so you
need contrib modules such as tablefunc.
• Data domains are problematic, and hacks such as fixing your attributes or
using many tables may be needed.

 

 


For example:

the car speed is 240
the car has an airbag

Here the first value is an integer and the second value is a boolean. Consider
that I have this table structure:

feature (feature id, feature name)
car (car id, ...)
car_feature (car id, feature id, value)

The value attribute might have different domains. How can I model this using
postgres and an ANSI-compliant design?
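One common way to keep the values typed in such a design, shown only as a
sketch with made-up names (other options include a text value plus a type tag,
or the hstore contrib module):

CREATE TABLE feature (
    feature_id   int  PRIMARY KEY,
    feature_name text NOT NULL
);

CREATE TABLE car (
    car_id int PRIMARY KEY
);

-- one typed column per supported domain, with a CHECK that exactly one is set
-- (the boolean-to-int cast is a PostgreSQL shortcut; use CASE WHEN for strict ANSI)
CREATE TABLE car_feature (
    car_id     int REFERENCES car,
    feature_id int REFERENCES feature,
    int_value  int,
    bool_value boolean,
    text_value text,
    PRIMARY KEY (car_id, feature_id),
    CHECK ((int_value  IS NOT NULL)::int
         + (bool_value IS NOT NULL)::int
         + (text_value IS NOT NULL)::int = 1)
);

INSERT INTO car (car_id) VALUES (1);
INSERT INTO feature VALUES (1, 'speed'), (2, 'airbag');
INSERT INTO car_feature VALUES (1, 1, 240, NULL, NULL);   -- the car speed is 240
INSERT INTO car_feature VALUES (1, 2, NULL, true, NULL);  -- the car has an airbag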
 
Regards 

Re: [GENERAL] repmgr problem with registering standby

2011-07-27 Thread Simon Riggs
On Wed, Jul 27, 2011 at 10:36 AM, Toby Corkindale
 wrote:
> Hi,
> I have repmgr working to some degree on a couple of servers, but am having
> trouble with the "register" part on the slave.
>
> On the master, I run:
> # repmgr -f /etc/repmgr/validator/repmgr.conf \
>  --verbose --force master register
>
> Opening configuration file: /etc/repmgr/validator/repmgr.conf
> repmgr connecting to master database
> repmgr connected to master, checking its state
> finding node list for cluster 'validator'
> Master node correctly registered for cluster validator with id 0 (conninfo:
> host=10.133.54.2 port=5432 user=repmgr dbname=repmgr)
>
>
> So that looks good, but then I try this on the slave:
> # repmgr -f /etc/repmgr/validator/repmgr.conf \
>  --verbose standby register
>
> Opening configuration file: /etc/repmgr/validator/repmgr.conf
> repmgr connecting to standby database
> repmgr connected to standby, checking its state
> repmgr connecting to master database
> finding node list for cluster 'validator'
> A master must be defined before configuring a slave
>
>
>
> I can query the database like so though, and it seems like it's all good:
> repmgr=# select * from repmgr_validator.repl_nodes;
>  id |  cluster  |                       conninfo
> +---+--
>  0 | validator | host=10.133.54.2 port=5432 user=repmgr dbname=repmgr
> (1 row)
>
>
> Does anyone have an idea of what might be going wrong here?

Hi, thanks for using repmgr.

What version of repmgr are you using? What version of PostgreSQL?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] wiki.postgresql.org down?

2011-07-27 Thread Yan Chunlu
thanks! working now

On Wed, Jul 27, 2011 at 3:54 PM, John R Pierce  wrote:
> On 07/27/11 12:48 AM, John R Pierce wrote:
>>
>> ...hopefully its reserved...
>
> errr.  i meant to type RESOLVED.  hah.
>
> --
> john r pierce                            N 37, W 122
> santa cruz ca                         mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Backup complete

2011-07-27 Thread Adarsh Sharma

Dear all,

I take backups of postgres databases with the pg_dump command,
but it didn't take a backup of all sequences, views & functions in
the different databases.


I want to know how we can take a complete backup so we can get complete data
after restoring from a single file.


I faced errors when loading a dump from one system into another, if the
same sequence doesn't exist on the other system.



Thanks

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup complete

2011-07-27 Thread Raymond O'Donnell
On 27/07/2011 10:51, Adarsh Sharma wrote:
> Dear all,
> 
> I take backup of postgres databases with pg_dump command.
> But it didn't take backup of all sequences,views & functions in
> different databases.

That's because you told it to backup only the postgres database. pg_dump
only dumps a single database. If you want to backup all databases in the
cluster, use pg_dumpall.

You should read up on the two commands in the documentation:

  http://www.postgresql.org/docs/9.0/static/reference-client.html

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup complete

2011-07-27 Thread Adarsh Sharma

Raymond O'Donnell wrote:

On 27/07/2011 10:51, Adarsh Sharma wrote:
  

Dear all,

I take backup of postgres databases with pg_dump command.
But it didn't take backup of all sequences,views & functions in
different databases.



That's because you told it to backup only the postgres database. pg_dump
only dumps a single database. If you want to backup all databases in the
cluster, use pg_dumpall.

You should read up on the two commands in the documentation:

  http://www.postgresql.org/docs/9.0/static/reference-client.html

Ray.

  
Thanks Raymond, I know that, but does it take backups of procedures &
views too?



Adarsh


Re: [GENERAL] Backup complete

2011-07-27 Thread C. Bensend

> I take backup of postgres databases with pg_dump command.
> But it didn't take backup of all sequences,views & functions in
> different databases.
>
> I want to know how we can complete backup so we can get complete data
> after restoring from a single file.
>
> I faced errors when dumping data from one system to another , if the
> same sequence doesn't exist in the other system.

Check out pg_dumpall:

http://www.postgresql.org/docs/current/interactive/app-pg-dumpall.html

Benny


-- 
"Open your door, or I open your wall."
 -- Seen on an image on fukung.net


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup complete

2011-07-27 Thread Raymond O'Donnell
On 27/07/2011 11:31, Adarsh Sharma wrote:
> Raymond O'Donnell wrote:
>> On 27/07/2011 10:51, Adarsh Sharma wrote:
>>   
>>> Dear all,
>>>
>>> I take backup of postgres databases with pg_dump command.
>>> But it didn't take backup of all sequences,views & functions in
>>> different databases.
>>> 
>>
>> That's because you told it to backup only the postgres database. pg_dump
>> only dumps a single database. If you want to backup all databases in the
>> cluster, use pg_dumpall.
>>
>> You should read up on the two commands in the documentation:
>>
>>   http://www.postgresql.org/docs/9.0/static/reference-client.html
>>
>> Ray.
>>
>>   
> Thanks Raymond, I know that but did it take the backups of procedures &
> views too.
> 
> 
> Adarsh

Hi Adarsh,

Yes, pg_dump backs up everything - schema and data. If you need to
convince yourself, use the -s option to dump just the schema and examine
the output.

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [ANN] ODB C++ ORM adds support for PostgreSQL

2011-07-27 Thread Boris Kolpackov
I am pleased to announce the release of ODB 1.5.0 with support for
PostgreSQL.

ODB is an open-source object-relational mapping (ORM) system for C++. It
allows you to persist C++ objects to a relational database without having
to deal with tables, columns, or SQL and without manually writing any of
the mapping code.

Major new features in this release:

  * Support for the PostgreSQL database, including updates to the Boost
and Qt profiles.

  * Support for per-class database operations callbacks.

  * New NULL handling mechanism.

  * Ability to specify database default values and additional column
definition options.

A more detailed discussion of the new features as well as some performance
numbers for the new PostgreSQL support can be found in the following blog
post:

http://www.codesynthesis.com/~boris/blog/2011/07/26/odb-1-5-0-released/

For the complete list of new features in this version see the official
release announcement:

http://www.codesynthesis.com/pipermail/odb-announcements/2011/06.html

ODB is written in portable C++ and you should be able to use it with any
modern C++ compiler. In particular, we have tested this release on GNU/Linux
(x86/x86-64), Windows (x86/x86-64), Mac OS X, and Solaris (x86/x86-64/SPARC)
with GNU g++ 4.2.x-4.5.x, MS Visual C++ 2008 and 2010, and Sun Studio 12.
The currently supported database systems are MySQL, SQLite, and PostgreSQL.

More information, documentation, source code, and pre-compiled binaries are
available from:

http://www.codesynthesis.com/products/odb/

Enjoy,
Boris

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Merlin Moncure
On Wed, Jul 27, 2011 at 1:41 AM, Chris Travers  wrote:
> On Tue, Jul 26, 2011 at 8:13 PM, Karl Nack  wrote:
>>> The current svn trunk (to be 1.3) does.
>>
>> So how far do you take this? I've been playing around with plpgsql a bit
>> and am pretty encouraged by what I'm discovering; now I'm at a point
>> where I'm thinking, "how far do/can I go with this?"
>
> Here are the limitations I have discovered:
>
> 1)  Localization of exception strings is a bit of a problem.  Hence
> exceptions need to be aimed at communicating to the application rather
> than the user.
>
> 2)  Difficulties passing complex data structures back and forth and
> properly parsing it in the application.  Currently we do a lot with
> two dimensional arrays but will probably shift to more arrays of
> complex types as we drop support for older versions of PostgreSQL and
> DBD::Pg.
>
> There are queries which do a lot of things in the db in a single SQL
> statement.  The longest single SQL statement I have found thus far is
> a bit over 100 lines long (due to complex requirements and some
> shortcomings in the db schema we have inherited that we are working on
> replacing).  It's still pretty easy to read and understand at that
> length, at least when compared to a function in a more general purpose
> language.
>
> Menu data is also stored in the database (application settings and
> menu argument data are the two areas where key/value modelling is
> used).
>
> The result is that the Perl codebase is shrinking in absolute terms,
> being replaced in part by SQL.  However, a rigorous separation of
> named query and lightweight application logic has allowed us to shrink
> the amount of code total in the project while significantly adding
> functionality.
>
>>
>> Probably the best example is input validation. Constraints and triggers
>> on the database will (or at least should) prevent bad data from being
>> added to the database, but the UI generally needs to provide more
>> informative messages than errors thrown by the database, and provide
>> errors messages for every invalid field, whereas the database will
>> fail/stop on the first error. Consequently, I find that much of the data
>> logic ends up being duplicated outside of the database to enhance the
>> user experience. Might there be a way to move these validation routines
>> into the database as well, and unify all the data logic into one place?
>
> The best option is to use exceptions to communicate to the application
> what went wrong and then allow the application to handle those
> exceptions in many cases.  In other cases, the application may need to
> know which inputs are mandatory.
>
> In general what we do is side with the exception trapping and
> handling.  This means that if the query fails, we take the sql state,
> detect the type of error, and display an appropriate message.  In some
> cases ("Access denied") we are terse.  In other cases we are adding
> the full SQL error message to the message simply because the
> combination of an easy to read description of what happened "Required
> input not provided" and the sql message mentioning the field is enough
> for many users to figure out what they did wrong,  It's still not
> idea.
>
>>
>>> > Yes, but I'd implement the constraint "all transactions must balance" as
>>> > a trigger that fires when the transaction is complete. This would
>>> > enforce data integrity regardless of whether or not the database API is
>>> > used, which I think is also important.
>>>
>>> That's problematic to do in PostgreSQL because statement-level
>>> triggers don't have access to statement args, and I don't believe they
>>> can be deferred.
>>
>> In another email I did provide a simple example of how I might implement
>> this; I don't know if you saw it. Of course, that assumes your database
>> isn't designed by  :-)
>
> I didn't see it even looking back (I saw the initial attempt and the
> PHP sample).
>
> The ideal interface at the moment is something like
>
> SELECT create_transaction(1, current_date, 'Transaction 1', '{(1, 50),
> (2, -50)}');
>
> This would allow you do do something like:
> SELECT create_transaction(1, current_date, 'Transaction 1', '{(1, 50),
> (2, -30), (3, -20)}'); as well since we are now talking about arrays of 
> records.
>
> But this is a pain to code to/from SQL in a robust way.  Good db
> drivers sometimes handle this automatically though.

Note, we wrote libpqtypes (http://libpqtypes.esilo.com/) precisely to
deal with this problem -- first class handling of arrays and
composites in the client.  It's not much help for a perl client, but I
think similar methodologies can be made for most languages. Sending
rich data structures directly to procedures in the database transforms
the way the application/database communications work for the better.
It's new and weird to many developers, especially those trained on ORM
usage patterns, but is also entirely effective.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)

[GENERAL] does slony use work_mem?

2011-07-27 Thread Willy-Bas Loos
Hi,

Does slony use work_mem?
We are considering limiting work_mem on a per-task basis so that
demanding tasks can get more with a low risk of going into swap.

(work_mem is used for:
ORDER BY, DISTINCT, merge joins, hash joins, hash-based aggregation,
and hash-based processing of IN subqueries. )
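For the per-task limiting mentioned above, work_mem can also be scoped per
role or per transaction rather than globally; a sketch with example names
and values only:

ALTER ROLE reporting SET work_mem = '64MB';   -- hypothetical role name

BEGIN;
SET LOCAL work_mem = '256MB';                 -- applies only to this transaction
-- run the demanding query here
COMMIT;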

Cheers,

WBL

-- 
"Patriotism is the conviction that your country is superior to all
others because you were born in it." -- George Bernard Shaw

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Chris Travers
On Wed, Jul 27, 2011 at 7:01 AM, Merlin Moncure  wrote:

> Note, we wrote libpqtypes (http://libpqtypes.esilo.com/) precisely to
> deal with this problem -- first class handling of arrays and
> composites in the client.  It's not much help for a perl client, but I
> think similar methodologies can be made for most languages. Sending
> rich data structures directly to procedures in the database transforms
> the way the application/database communications work for the better.
> It's new and weird to many developers, especially those trained on ORM
> usage patterns, but is also entirely effective.
>
Cool :-)

As I understand it DBD::Pg has excellent handling of both these things
too.  The reason we are not doing more with the composite types yet is
because we currently support versions of DBD::Pg which support arrays
well but not the composite types, though that will probably change in
1.4.

I wonder which other languages have first class support for these areas of Pg?

Best Wishes,
Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Karsten Hilbert
> I wonder which other languages have first class support for these areas of
> Pg?

While already supporting most if not all standard PG datatypes the
psycopg2 interface lets you write in/out wrappers of arbitrary
complexity mapping PG <-> Python datatypes and insert them into
the driver at runtime.

Karsten


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] repmgr problem with registering standby

2011-07-27 Thread Toby Corkindale
- Original Message -
> On Wed, Jul 27, 2011 at 10:36 AM, Toby Corkindale wrote:
> > Hi,
> > I have repmgr working to some degree on a couple of servers, but am
> > having
> > trouble with the "register" part on the slave.
> >
> > On the master, I run:
> > # repmgr -f /etc/repmgr/validator/repmgr.conf \
> >  --verbose --force master register
> >
> > Opening configuration file: /etc/repmgr/validator/repmgr.conf
> > repmgr connecting to master database
> > repmgr connected to master, checking its state
> > finding node list for cluster 'validator'
> > Master node correctly registered for cluster validator with id 0
> > (conninfo:
> > host=10.133.54.2 port=5432 user=repmgr dbname=repmgr)
> >
> >
> > So that looks good, but then I try this on the slave:
> > # repmgr -f /etc/repmgr/validator/repmgr.conf \
> >  --verbose standby register
> >
> > Opening configuration file: /etc/repmgr/validator/repmgr.conf
> > repmgr connecting to standby database
> > repmgr connected to standby, checking its state
> > repmgr connecting to master database
> > finding node list for cluster 'validator'
> > A master must be defined before configuring a slave
> >
> >
> >
> > I can query the database like so though, and it seems like it's all
> > good:
> > repmgr=# select * from repmgr_validator.repl_nodes;
> >  id | cluster | conninfo
> > +---+--
> >  0 | validator | host=10.133.54.2 port=5432 user=repmgr
> >  dbname=repmgr
> > (1 row)
> >
> >
> > Does anyone have an idea of what might be going wrong here?
> 
> Hi, thanks for using repmgr.
> 
> What version of repmgr are you using? What version of PostgreSQL?

Hi Simon,
We're using version 1.1.0 of repmgr, against PostgreSQL 9.0.4-1~bpo60+1 (ie. 
the version from backports) on Debian squeeze.

To complicate matters, we have several postgresql instances per machine, using 
Debian's pg cluster stuff. This seems to work elsewhere with repmgr though (as 
long as we make sure the ports are specified in the repmgr configs).
However, I'm not having any success even with just the default/single instance 
of pg either at the moment.

Cheers,
Toby

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy load-high cpu itilization

2011-07-27 Thread Greg Smith

On 07/27/2011 04:37 AM, Filippos wrote:

P.S i will send a message to the admins, to ask them to move the topic to
the sub-forum of performance
   


Don't do that; it will just waste their time.  pgsql-general is a mailing 
list, and the "forum" view you're seeing at Nabble is just a web 
interface to it.  They can't move things around there because they don't 
really own the list; they just make a copy of all its messages.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Suggested enhancement to pg_restore

2011-07-27 Thread Michael Nolan
On Tue, Jul 26, 2011 at 6:10 PM, Chris Travers wrote:

> On Tue, Jul 26, 2011 at 3:48 PM, Michael Nolan  wrote:
> > I suggest adding the following parameter to pg_restore:
> >
> > --rename-table=
> >
> > When used in conjunction with the --data-only, --schema and -t options
> (all
> > three of which would be necessary),
> > it would allow restoring a table (without indexes) to a different table
> name
> > (which would need to already exist
> > and match the structure of the table which is being restored, of course.)
>
> Does pg_restore allow you to specify a set of tables the same way
> pg_dump does, i.e. by -t table1 -t table2?
>
> If so how would this feature play along?
>

Not sure, the man page for pg_restore seems to imply that -t can be used to
restore just ONE table,
though it also seems to say that pg_restore can be used to affect the order
in which the tables are restored.

If it can handle multiple -t entries, presumably they must all be in the
same schema, otherwise things
could get really confused if the same table name exists in more than one
schema.

If multiple -t entries are supported, I guess we would have two options.

1.  Only allow one table to be restored using the --rename-table parameter
at a time.

2.  Require that the command have matching pairs of -t and --rename-table
entries to make sure that the tables
are restored to the intended new names.

I don't have a major preference between these, though I suspect #1 would be
easier to implement.
--
Mike Nolan
no...@tssi.com


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Karl Nack
> I wonder which other languages have first class support for these areas
> of Pg?

It'd be nice if PHP could get there. :p Maybe it's time to look at some
of these other languages. Or finally learn C and try hacking on the
extension myself.


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Peter Bex
On Wed, Jul 27, 2011 at 04:35:45PM +0200, Karsten Hilbert wrote:
> > I wonder which other languages have first class support for these areas of
> > Pg?
> 
> While already supporting most if not all standard PG datatypes the
> psycopg2 interface lets you write in/out wrappers of arbitray
> complexity mapping PG <-> Python datatypes and insert them into
> the driver at runtime.

The same is true for the Chicken Scheme PostgreSQL egg.

Cheers,
Peter
-- 
http://sjamaan.ath.cx
--
"The process of preparing programs for a digital computer
 is especially attractive, not only because it can be economically
 and scientifically rewarding, but also because it can be an aesthetic
 experience much like composing poetry or music."
-- Donald Knuth

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Karl Nack
> The best option is to use exceptions to communicate to the application
> what went wrong and then allow the application to handle those
> exceptions in many cases.  In other cases, the application may need to
> know which inputs are mandatory.

So other than sanitizing input and making sure it's an appropriate data
type, are you relying solely on the database for all your input
validation?

I guess a good generic example might be setting up a new user account,
which might require a username, password, real name, and birth date. We
might also put a reasonable constraint that, at the very least, the
birth date cannot be greater than the current date. Now if the user
submitted a blank page, ideally the page would come back with four
errors, one for each field that was left blank. If you submitted this
basically as-is to the database, it'd arbitrarily fail on the first
column that didn't meet the NOT NULL constraint, and that would be the
only error sent back to the client. So yes, this would work, but in
theory it could take four or five times before every error was
identified and the user notified.
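One way to keep that logic in the database and still report every problem at
once is a validation function that collects failures into an array instead of
raising on the first one; a rough sketch with hypothetical fields and rules:

CREATE OR REPLACE FUNCTION validate_new_user(
    in_username  text,
    in_password  text,
    in_real_name text,
    in_birthdate date
) RETURNS text[] LANGUAGE plpgsql AS $$
DECLARE
    errors text[] := '{}';
BEGIN
    IF in_username IS NULL OR in_username = '' THEN
        errors := errors || 'username is required';
    END IF;
    IF in_password IS NULL OR in_password = '' THEN
        errors := errors || 'password is required';
    END IF;
    IF in_real_name IS NULL OR in_real_name = '' THEN
        errors := errors || 'real name is required';
    END IF;
    IF in_birthdate IS NULL OR in_birthdate > current_date THEN
        errors := errors || 'birth date must not be in the future';
    END IF;
    RETURN errors;   -- an empty array means the input passed every check
END;
$$;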


> > In another email I did provide a simple example of how I might implement
> > this; I don't know if you saw it. Of course, that assumes your database
> > isn't designed by  :-)
> 
> I didn't see it even looking back (I saw the initial attempt and the
> PHP sample).

Here's a link to the archived message:
http://archives.postgresql.org/pgsql-general/2011-07/msg00631.php

Feel free to comment/praise/criticize! :-)

> SELECT create_transaction(1, current_date, 'Transaction 1', '{(1, 50),
> (2, -30), (3, -20)}'); as well since we are now talking about arrays of
> records.
> 
> But this is a pain to code to/from SQL in a robust way.  Good db
> drivers sometimes handle this automatically though.

Yes, I've coded exactly this with a plpgsql function and have been
mostly pleased by how easy it is. Unfortunately, at least with PHP, it's
not so straightforward to format user input into an SQL statement that
uses arrays and composite types. It's even worse going the other way --
just Google how to convert SQL arrays into PHP arrays. :-(


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] error when compiling a c function

2011-07-27 Thread Ioana Danes
Hello,

I built a c function that exports data from various tables into a text file but 
when I compile the function I get this error:

In file included from /usr/include/pgsql/server/postgres.h:48,
 from myfile.c:1:
/usr/include/pgsql/server/utils/elog.h:69:28: error: utils/errcodes.h: No such 
file or directory

If I remove the utils/ prefix in front of the errcodes.h include in elog.h then it
works.

The only thing I don't like (but I can live with...) is that every time I
install a new version of postgres I have to edit the
/usr/include/pgsql/server/utils/elog.h file and change the line:
#include "utils/errcodes.h"
into
#include "errcodes.h"

Is it possible to get this fixed in a future release? I am using Suse-compiled
versions of Postgres.

Here is the "very" simplified version of my function:

#include "/usr/include/pgsql/server/postgres.h"
#include "/usr/include/pgsql/server/fmgr.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif


PG_FUNCTION_INFO_V1(initializefile);

Datum
initializefile(PG_FUNCTION_ARGS)
{
PG_RETURN_INT32(1);
}

 
Thank you,
Ioana

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] problems viewing information_schema.schemata

2011-07-27 Thread Little, Douglas
Hi,
Some of my users are using sqldbx and it's not working due to a dependence on
displaying the content of information_schema.schemata.
When I query the view I see content.  When they do it, they get nothing - 0 rows -
from sqldbx, pgadmin or any other client.

I suspect that they don't have access to a catalog table, but I can't find which one.
What is the definition of schemata?

Thanks


Doug Little

Sr. Data Warehouse Architect | Business Intelligence Architecture | Orbitz 
Worldwide
500 W. Madison, Suite 1000  Chicago IL 60661| Office 312.260.2588 | Fax 
312.894.5164 | Cell 847-997-5741
douglas.lit...@orbitz.com
orbitz.com | 
ebookers.com | 
hotelclub.com | 
cheaptickets.com | 
ratestogo.com | 
asiahotels.com


Re: [GENERAL] repmgr problem with registering standby

2011-07-27 Thread Jaime Casanova
On Wed, Jul 27, 2011 at 4:36 AM, Toby Corkindale
 wrote:
>
> So that looks good, but then I try this on the slave:
> # repmgr -f /etc/repmgr/validator/repmgr.conf \
>  --verbose standby register
>

can you show the content of /etc/repmgr/validator/repmgr.conf?

[...]
>
> I can query the database like so though, and it seems like it's all good:
> repmgr=# select * from repmgr_validator.repl_nodes;
>  id |  cluster  |                       conninfo
> +---+--
>  0 | validator | host=10.133.54.2 port=5432 user=repmgr dbname=repmgr
> (1 row)
>

this is on the master or the slave?

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] error when compiling a c function

2011-07-27 Thread Sebastian Jaenicke
On Wed, Jul 27, 2011 at 09:34:20AM -0700, Ioana Danes wrote:
[..]
> #include "/usr/include/pgsql/server/postgres.h"
> #include "/usr/include/pgsql/server/fmgr.h"

#include "postgres.h"
#include "fmgr.h"

> #ifdef PG_MODULE_MAGIC

#ifndef

> PG_MODULE_MAGIC;
> #endif

...and compile with -I`pg_config --includedir-server`.

- Sebastian

-- 
A: Maybe because some people are too annoyed by top-posting.
Q: Why do I not get an answer to my question(s)?
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?




Re: [GENERAL] error when compiling a c function

2011-07-27 Thread Ioana Danes
Thanks a lot Sebastian

--- On Wed, 7/27/11, Sebastian Jaenicke  
wrote:

> From: Sebastian Jaenicke 
> Subject: Re: [GENERAL] error when compiling a c function
> To: "Ioana Danes" 
> Cc: "PostgreSQL General" 
> Received: Wednesday, July 27, 2011, 2:12 PM
> On Wed, Jul 27, 2011 at 09:34:20AM
> -0700, Ioana Danes wrote:
> [..]
> > #include "/usr/include/pgsql/server/postgres.h"
> > #include "/usr/include/pgsql/server/fmgr.h"
> 
> #include "postgres.h"
> #include "fmgr.h"
> 
> > #ifdef PG_MODULE_MAGIC
> 
> #ifndef
> 
> > PG_MODULE_MAGIC;
> > #endif
> 
> ...and compile with -I`pg_config --includedir-server`.
> 
> - Sebastian
> 
> -- 
> A: Maybe because some people are too annoyed by
> top-posting.
> Q: Why do I not get an answer to my question(s)?
> A: Because it messes up the order in which people normally
> read text.
> Q: Why is top-posting such a bad thing?
> 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] error when compiling a c function

2011-07-27 Thread Rob Sargent


On 07/27/2011 12:38 PM, Ioana Danes wrote:
> Thanks a lot Sebastian
>
> --- On Wed, 7/27/11, Sebastian Jaenicke  
> wrote:
>
>> From: Sebastian Jaenicke 
>> Subject: Re: [GENERAL] error when compiling a c function
>> To: "Ioana Danes" 
>> Cc: "PostgreSQL General" 
>> Received: Wednesday, July 27, 2011, 2:12 PM
>> On Wed, Jul 27, 2011 at 09:34:20AM
>> -0700, Ioana Danes wrote:
>> [..]
>>> #include "/usr/include/pgsql/server/postgres.h"
>>> #include "/usr/include/pgsql/server/fmgr.h"
>> #include "postgres.h"
>> #include "fmgr.h"
>>
>>> #ifdef PG_MODULE_MAGIC
>> #ifndef
>>
>>> PG_MODULE_MAGIC;
>>> #endif
>> ...and compile with -I`pg_config --includedir-server`.
>>
>> - Sebastian
>>
>> -- 
>> A: Maybe because some people are too annoyed by
>> top-posting.
>> Q: Why do I not get an answer to my question(s)?
>> A: Because it messes up the order in which people normally
>> read text.
>> Q: Why is top-posting such a bad thing?
>>
Apparently the OP didn't read your .sig.  I for one prefer top-posted
threads (and think the whole quote-everything-in-every-message habit has
gotten completely out of hand).


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot reproduce why a query is slow

2011-07-27 Thread John Cheng
Sorry about the long wait between replies.

On Sat, May 7, 2011 at 2:36 PM, Tomas Vondra  wrote:

> Resource usually means there's too much I/O so the query is slow, but
> when you try it later the drives are idle and query runs much faster.
> Run some monitoring, e.g. even a simple 'iostat -x' or 'dstat' output
> might be sufficient. Once the slow query happens, try to correlate it to
> the CPU / disk activity.
>

I am unsure if disk IO could be a part of the problem. iostat shows a max of
~2000kb/second write. This query gets slow during a batch insert/update
process (every 5 minutes or so, we batch process a list of 20 or so xmls,
i.e., "leads"). During the batch processing, iostat shows pretty low usage.

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.38    0.00    0.12    0.62    0.00   94.88

Device:  rrqm/s   wrqm/s    r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00     0.99   0.00    5.94     0.00    55.45     9.33     0.10   16.17   6.17   3.66
sdb        0.00   172.28   0.00   11.88     0.00  1473.27   124.00     0.03    2.42   1.08   1.29
sdc        0.00     0.00   0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd        0.00     0.00   0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           8.35    0.00    0.37    0.37    0.00   90.90

Device:  rrqm/s   wrqm/s    r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00     0.00   0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb        0.00   621.00   0.00   23.00     0.00  5152.00   224.00     0.08    3.52   1.22   2.80
sdc        0.00     0.00   0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd        0.00     0.00   0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           9.62    0.00    0.12    1.88    0.00   88.38

Device:  rrqm/s   wrqm/s    r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00     0.00   0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb        0.00    73.00   0.00    9.00     0.00   656.00    72.89     0.01    1.33   1.00   0.90
sdc        0.00   200.00  20.00  129.00   304.00  2632.00    19.70     1.22    8.16   0.68  10.20
sdd        0.00     0.00   9.00    0.00   120.00     0.00    13.33     0.04    4.67   4.56   4.10

The 'atop' command shows low disk activity ~2%, but I'm not sure how
accurate atop is. If I were to run a similar query from the command line
while the batch job is running, I still get a 50ms execution time.

CPU usage does go up a lot during this time. There are 8 cores and 1 of
the cores shoots up to ~97%. The process updates XML records in the database
and performs dozens of xpath-based queries to pull data out of the xml into
relational tables. These extractions are performed with plpgsql triggers and
Postgres xpath functions. Hence the spike in CPU usage during that time.


>
> The unexpected plan change is usually caused by autovacuum/autoanalyze
> collecting skewed data for some reason, resulting in bad plan choice.
> Then the autovacuum runs again and you get different (much better) plan.
> This can be detected using the auto_explain contrib module, as someone
> already recommended.
>

We are using Postgres 8.3 and I don't believe the auto_explain contrib module
is available for 8.3.


>
> > The query is a bit complex, as it is generated by code, but I tried to
> > format it for easier reading. I've also replaced actual data with fake
> > data to protected personal information.
>
> I generally do recommend using explain.depesz.com to post explain plans,
> especially in case of complex queries. I've posted your query and this
> is the result
>
>  http://explain.depesz.com/s/gJO
>
> Not sure if it's relevant to your issue (probably not), but the bitmap
> index scans are significantly overestimated. Not sure if the overlap
> operator affects the estimate accuracy ...
>
> BTW what postgresql version is this? How large the database is, how much
> RAM is available? What is the size of shared_buffers?
>

I am loving explain.depesz.com!

We are using Postgres 8.3. This system has 16GB of RAM. The database is
114GB. shared_buffers is set to 2000MB. A possibly related note... the
'effective_cache_size' setting. The previous admin set it to 8000MB. The
free command shows 13.8GB cache, 397MB buffers. I wonder if changing
effective_cache_size can improve query planning.
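Since effective_cache_size is only a planner hint (no memory is actually
allocated), it can be tried out per session before editing postgresql.conf;
for example (the value here is just an illustration, roughly what free
reports as cached):

SET effective_cache_size = '12GB';
-- then re-run EXPLAIN on the slow query and compare the plan with the 8000MB setting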


[GENERAL] WITH x AS (...) and visibility in UPDATE

2011-07-27 Thread Peter V

Hello all,

I am trying out PostgreSQL 9.1 Beta 3. In particular, I am very interested in 
WITH x AS (...) construction.

drop table if exists t;
create table t
(
    identifier   serial,
    title    text
);

with c as
(
    insert into t (title) values ('old') returning *
)
update t set title = 'new' from c where t.identifier = c.identifier;

select * from t;

Can someone explain why this returns 'old' instead of 'new'? Is the new row not 
yet visible when the update is evaluated?

Thanks.
  
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WITH x AS (...) and visibility in UPDATE

2011-07-27 Thread Merlin Moncure
On Wed, Jul 27, 2011 at 3:18 PM, Peter V  wrote:
>
> Hello all,
>
> I am trying out PostgreSQL 9.1 Beta 3. In particular, I am very interested in 
> WITH x AS (...) construction.
>
> drop table if exists t;
> create table t
> (
>     identifier   serial,
>     title    text
> );
>
> with c as
> (
>     insert into t (title) values ('old') returning *
> )
> update t set title = 'new' from c where t.identifier = c.identifier;
>
> select * from t;
>
> Can someone explain why this returns 'old' instead of 'new'? Is the new row 
> not yet visible when the update is evaluated?

because the update statement isn't doing anything (you could have
confirmed this by adding 'returning *' to the update).

While the 'from c' is working, you can't join back to t yet because
the statement hasn't resolved. here's a reduced form of your problem:

postgres=# with c as
(
insert into t (title) values ('old') returning *
) select * from t join c using (identifier);

The join fails because at the time it happens t isn't yet populated.
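The inserted row is visible through the CTE itself, just not through the
table; for example, this variant does return it:

with c as
(
    insert into t (title) values ('old') returning *
)
select * from c;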

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] error when compiling a c function

2011-07-27 Thread Eric Ridge
On Wed, Jul 27, 2011 at 2:12 PM, Sebastian Jaenicke
 wrote:
> On Wed, Jul 27, 2011 at 09:34:20AM -0700, Ioana Danes wrote:
>
>> #ifdef PG_MODULE_MAGIC
>
> #ifndef

Just to avoid confusion...  #ifdef *is* correct.  See:

http://www.postgresql.org/docs/current/static/xfunc-c.html

(I can't comment on the OP's actual problem)

eric

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WITH x AS (...) and visibility in UPDATE

2011-07-27 Thread Peter V




> Date: Wed, 27 Jul 2011 15:58:04 -0500
> Subject: Re: [GENERAL] WITH x AS (...) and visibility in UPDATE
> From: mmonc...@gmail.com
> To: peterv861...@hotmail.com
> CC: pgsql-general@postgresql.org
>
> On Wed, Jul 27, 2011 at 3:18 PM, Peter V  wrote:
> >
> > Hello all,
> >
> > I am trying out PostgreSQL 9.1 Beta 3. In particular, I am very interested 
> > in WITH x AS (...) construction.
> >
> > drop table if exists t;
> > create table t
> > (
> > identifier   serial,
> > title        text
> > );
> >
> > with c as
> > (
> > insert into t (title) values ('old') returning *
> > )
> > update t set title = 'new' from c where t.identifier = c.identifier;
> >
> > select * from t;
> >
>
> Can someone explain why this returns 'old' instead of 'new'? Is the
> new row not yet visible when the update is evaluated?
>
> because the update statement isn't doing anything. (you could have
> confirmed this by adding 'returning *' to the update.
>
> While the 'from c' is working, you can't join back to t yet because
> the statement hasn't resolved. here's a reduced form of your problem:
>
> postgres=# with c as
> (
> insert into t (title) values ('old') returning *
> ) select * from t join c using (identifier);
>
> The join fails because at the time it happens t isn't yet populated.
>
> merlin



This makes sense. I thought that the insert was evaluated first, before the 
join is resolved. This isn't the case apparently.



Is there another way to force this? That is, without extracting it to two 
statements ;)



Thanks.
  
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WITH x AS (...) and visibility in UPDATE

2011-07-27 Thread Merlin Moncure
On Wed, Jul 27, 2011 at 4:03 PM, Peter V  wrote:
>> On Wed, Jul 27, 2011 at 3:18 PM, Peter V  wrote:
>> >
>> > Hello all,
>> >
>> > I am trying out PostgreSQL 9.1 Beta 3. In particular, I am very interested 
>> > in WITH x AS (...) construction.
>> >
>> > drop table if exists t;
>> > create table t
>> > (
>> >     identifier   serial,
>> >     title        text
>> > );
>> >
>> > with c as
>> > (
>> >     insert into t (title) values ('old') returning *
>> > )
>> > update t set title = 'new' from c where t.identifier = c.identifier;
>> >
>> > select * from t;
>> >
>>
>  > Can someone explain why this returns 'old' instead of 'new'? Is
> the new row not yet visible when the update is evaluated?
>>
>> because the update statement isn't doing anything. (you could have
>> confirmed this by adding 'returning *' to the update.
>>
>> While the 'from c' is working, you can't join back to t yet because
>> the statement hasn't resolved. here's a reduced form of your problem:
>>
>> postgres=# with c as
>> (
>> insert into t (title) values ('old') returning *
>> ) select * from t join c using (identifier);
>>
>> The join fails because at the time it happens t isn't yet populated.
>>
>> merlin
>
>
>
> This makes sense. I thought that the insert was evaluated first, before the 
> join is resolved. This isn't the case apparently.
>
> Is there another way to force this? That is, without extracting it to two 
> statements ;)

not in the exact sense you were trying.  what is it you are trying to
do in general? I'm having trouble understanding your use-case.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WITH x AS (...) and visibility in UPDATE

2011-07-27 Thread Peter V




> Date: Wed, 27 Jul 2011 16:16:48 -0500
> Subject: Re: [GENERAL] WITH x AS (...) and visibility in UPDATE
> From: mmonc...@gmail.com
> To: peterv861...@hotmail.com
> CC: pgsql-general@postgresql.org
>
> On Wed, Jul 27, 2011 at 4:03 PM, Peter V  wrote:
> >> On Wed, Jul 27, 2011 at 3:18 PM, Peter V  wrote:
> >> >
> >> > Hello all,
> >> >
> >> > I am trying out PostgreSQL 9.1 Beta 3. In particular, I am very 
> >> > interested in WITH x AS (...) construction.
> >> >
> >> > drop table if exists t;
> >> > create table t
> >> > (
> >> > identifier   serial,
> >> > title        text
> >> > );
> >> >
> >> > with c as
> >> > (
> >> > insert into t (title) values ('old') returning *
> >> > )
> >> > update t set title = 'new' from c where t.identifier = c.identifier;
> >> >
> >> > select * from t;
> >> >
> >>
> >  > Can someone explain why this returns 'old' instead of 'new'? Is
> > the new row not yet visible when the update is evaluated?
> >>
> >> because the update statement isn't doing anything. (you could have
> >> confirmed this by adding 'returning *' to the update.
> >>
> >> While the 'from c' is working, you can't join back to t yet because
> >> the statement hasn't resolved. here's a reduced form of your problem:
> >>
> >> postgres=# with c as
> >> (
> >> insert into t (title) values ('old') returning *
> >> ) select * from t join c using (identifier);
> >>
> >> The join fails because at the time it happens t isn't yet populated.
> >>
> >> merlin
> >
> >
> >
> > This makes sense. I thought that the insert was evaluated first, before the 
> > join is resolved. This isn't the case apparently.
> >
> > Is there another way to force this? That is, without extracting it to two 
> > statements ;)
>
> not in the exact sense you were trying. what is it you are trying to
> do in general? I'm having trouble understanding your use-case.
>

I want to apply updates on a copy of a row, instead of on the row itself. The
queries above were simplified to demonstrate the problem.

So basically I want to do:

1) create the copy of the row and return the identifier
2) apply updates on the new row identified by the identifier returned in step 1

If possible, I want to write this in a single command, to avoid overhead and 
mistakes.

I tried writing a rewrite rule or a BEFORE trigger, but it quickly becomes a mess 
trying to avoid infinite loops.

Any ideas are welcome. Thanks.
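
One way to collapse both steps into a single statement (a sketch only, reusing the 
t(identifier, title) table from earlier in the thread, with a hypothetical source 
row id) is to fold the changed values into the copy itself and let RETURNING hand 
back the new identifier:

INSERT INTO t (title)
SELECT 'new'              -- changed value applied while copying
FROM t
WHERE identifier = 1      -- hypothetical: the row being copied
RETURNING identifier;

If the new values only become known after the copy exists, the CTE-visibility rule 
Merlin described still applies in 9.1, so two statements in one transaction (or a 
function wrapping both steps) seem to be the practical fallback.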
  
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WITH x AS (...) and visibility in UPDATE

2011-07-27 Thread Jack Christensen

On 7/27/2011 4:22 PM, Peter V wrote:
I want to apply updates to a copy of a row, instead of to the row itself. 
The queries above were simplified to demonstrate the problem.

So basically I want to do:

1) create the copy of the row and return the identifier
2) apply updates on the new row identified by the identifier returned in step 1

If possible, I want to write this in a single command, to avoid overhead and 
mistakes.

I tried writing a rewrite rule or a BEFORE trigger, but it quickly becomes a mess 
trying to avoid infinite loops.

Any ideas are welcome. Thanks.

Maybe I'm totally missing something, but why insert a copy and then 
update it, instead of directly inserting a mutated copy?


Something like:
INSERT INTO t (foo, bar) SELECT 'my new foo', t.bar FROM t WHERE id=123;

Wouldn't the above construction let you make a new row with some new 
values and some copied values?


--
Jack Christensen
ja...@hylesanderson.edu


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] repmgr problem with registering standby

2011-07-27 Thread Toby Corkindale

On 28/07/11 03:47, Jaime Casanova wrote:

On Wed, Jul 27, 2011 at 4:36 AM, Toby Corkindale
  wrote:


So that looks good, but then I try this on the slave:
# repmgr -f /etc/repmgr/validator/repmgr.conf \
  --verbose standby register


can you show the content of /etc/repmgr/validator/repmgr.conf?


cluster=validator
node=mel-db06
conninfo='host=10.133.54.1 port=5432 user=repmgr dbname=repmgr'


I can query the database like so though, and it seems like it's all good:
repmgr=# select * from repmgr_validator.repl_nodes;
 id |  cluster  |                       conninfo
----+-----------+-------------------------------------------------------
  0 | validator | host=10.133.54.2 port=5432 user=repmgr dbname=repmgr
(1 row)


this is on the master or the slave?


I ran that on the slave; however I've just checked now, and the same 
results are given on both nodes.

Just so you know, db06=10.133.54.1 and db07=10.133.54.2.
They each also have a second address on the 192.168.10.x network, though.


Toby

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problems viewing information_schema.schemata

2011-07-27 Thread sqldbxdeveloper
I did investigate this issue, and the problem is the pg_has_role function.

Take a look at the information_schema.schemata definition below:
SELECT current_database()::information_schema.sql_identifier AS catalog_name,
       n.nspname::information_schema.sql_identifier AS schema_name,
       u.rolname::information_schema.sql_identifier AS schema_owner,
       NULL::character varying::information_schema.sql_identifier AS default_character_set_catalog,
       NULL::character varying::information_schema.sql_identifier AS default_character_set_schema,
       NULL::character varying::information_schema.sql_identifier AS default_character_set_name,
       NULL::character varying::information_schema.character_data AS sql_path
FROM pg_namespace n,
     pg_authid u
WHERE n.nspowner = u.oid
  AND pg_has_role(n.nspowner, 'USAGE'::text);
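
As a quick check of what that predicate does for the current role, this sketch 
simply reruns it per schema; it should return true exactly for the rows that 
information_schema.schemata keeps:

SELECT n.nspname,
       pg_has_role(n.nspowner, 'USAGE') AS shown_in_schemata
FROM pg_namespace n
ORDER BY n.nspname;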



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WITH x AS (...) and visibility in UPDATE

2011-07-27 Thread David Johnston

> 1) create the copy of the row and return the identifier
> 2) apply updates on the new row identified by the identifier returned in step 
> 1
> 
> If possible, I want to write this in a single command, to avoid overhead and 
> mistakes.
> 
> I tried writing a rewrite rule or before trigger, but it becomes quickly a 
> mess to avoid infinite loops.
> 
> Any ideas are welcome. Thanks.
> 
> 
Insert into table (cols)
Select ... From table;

As mentioned previously, just modify the original row before inserting it into 
the table as a new record.

David J.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] issue with pg_restore

2011-07-27 Thread Nigel Heron

Hi list,
I'm trying to restore a backup into a database with a new name.

The dump was done on an 8.4 server with:
pg_dump -F c -f bakfile olddb

I'm trying to restore it with:
createdb newdb; pg_restore -v --jobs=4 --disable-triggers 
--no-tablespaces --dbname=newdb bakfile

or even just:
createdb newdb; pg_restore -v --dbname=newdb bakfile

It doesn't work: pg_restore claims to be creating tables, indexes, etc., and 
there are no errors in the output. It only takes a few seconds to run (the 
file is ~250MB).
In newdb, all the tables in the "public" schema are missing. All the 
functions and triggers were created, though; tables in a non-"public" schema 
were created but don't contain data. I tried this on 8.4 and on 9.0 with the 
same result.
I turned on server statement logging and don't see statements that would 
create the missing tables; there are a lot of BEGIN/COMMIT statements with 
nothing in between.


The only way I got it to work was to run:
pg_restore bakfile | psql newdb
which loads everything just fine, but I was hoping to use parallel 
restore to speed it up.


any ideas?

-nigel.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Sim Zacks

On 07/27/2011 07:18 PM, Karl Nack wrote:


The best option is to use exceptions to communicate to the application
what went wrong and then allow the application to handle those
exceptions in many cases.  In other cases, the application may need to
know which inputs are mandatory.

So other than sanitizing input and making sure it's an appropriate data
type, are you relying solely on the database for all your input
validation?

I guess a good generic example might be setting up a new user account,
which might require a username, password, real name, and birth date. We
might also put a reasonable constraint that, at the very least, the
birth date cannot be greater than the current date. Now if the user
submitted a blank page, ideally the page would come back with four
errors, one for each field that was left blank. If you submitted this
basically as-is to the database, it'd arbitrarily fail on the first
column that didn't meet the NOT NULL constraint, and that would be the
only error sent back to the client. So yes, this would work, but in
theory it could take four or five times before every error was
identified and the user notified.
You can certainly have your function do all the tests before trying to 
insert the values.
If you have 4 fields that require validation, have the function return a code of 4 
chars (e.g. '0000').
Each char stands for 1 field: if it is a 0 the check passed, if it is a 1 it 
didn't.
When you finish your tests, check the value: if it is all 0s, do your insert; 
otherwise return the value.

This is the binary method.
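
A minimal sketch of that approach, using the account example above (the accounts 
table, the function name, and the exact checks are all hypothetical):

CREATE OR REPLACE FUNCTION create_account(
    in_username  text,
    in_password  text,
    in_realname  text,
    in_birthdate date
) RETURNS text AS $$
DECLARE
    status text := '0000';   -- one char per field: '0' = ok, '1' = failed
BEGIN
    IF in_username IS NULL OR in_username = '' THEN
        status := overlay(status placing '1' from 1);
    END IF;
    IF in_password IS NULL OR in_password = '' THEN
        status := overlay(status placing '1' from 2);
    END IF;
    IF in_realname IS NULL OR in_realname = '' THEN
        status := overlay(status placing '1' from 3);
    END IF;
    IF in_birthdate IS NULL OR in_birthdate > current_date THEN
        status := overlay(status placing '1' from 4);
    END IF;

    IF status = '0000' THEN   -- every check passed, so do the insert
        INSERT INTO accounts (username, password, realname, birthdate)
        VALUES (in_username, in_password, in_realname, in_birthdate);
    END IF;

    RETURN status;            -- the client maps each '1' to a field-level error
END;
$$ LANGUAGE plpgsql;

The client can then translate each '1' back into a user-facing message for the 
corresponding field, which also sidesteps the exception-localization issue 
mentioned earlier in the thread.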

Another opinion is that validation checks should be done by the client 
and only good input should be given to the function to begin with.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Master/Slave Setup and Incremental Backup

2011-07-27 Thread Nithya Rajendran
Hi all,
  I have set up MASTER/SLAVE replication, and now I need to take incremental 
backups.

How can I do an incremental backup without affecting the existing master/slave setup?
Can anyone please help with this?

Thanks and Regards
Nithya R

