Re: [GENERAL] Enhancement proposal - detect chain of triggers from inside the trigger

2013-01-16 Thread Pavel Stehule
2013/1/16 Edson Richter :
> I was wondering: wouldn't it be a nice addition to have the ability to read
> the chain of triggers (maybe another trigger variable like TG_OP, called
> "TG_CHAIN" or something else, that would be an array with the names of the
> triggers called before the current trigger)?

-1

Building a dynamically populated array that may or may not be used could
significantly decrease performance :(

If you need it, you can use a workaround: session variables.
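A minimal sketch of that workaround, using a custom configuration variable as the session state (the variable name `app.trigger_chain` and the trigger function are invented for illustration; `current_setting(..., true)` needs PostgreSQL 9.6 or later, on older servers catch the undefined-variable error instead):

```sql
-- Each trigger appends its own name to a session variable, so nested
-- triggers can see which triggers fired before them.
CREATE OR REPLACE FUNCTION note_trigger_chain() RETURNS trigger AS $$
DECLARE
    chain text;
BEGIN
    chain := coalesce(current_setting('app.trigger_chain', true), '');
    -- third argument "true" makes the setting local to the transaction
    PERFORM set_config('app.trigger_chain', chain || '>' || TG_NAME, true);
    RAISE NOTICE 'trigger chain: %', chain || '>' || TG_NAME;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```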

Regards

Pavel Stehule

>
> Would help debug database triggers that have cascaded events.
>
> Regards,
>
> Edson
>
>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux Distribution Preferences?

2013-01-16 Thread Stuart Bishop
> On Sun, Jan 13, 2013 at 08:46:58PM -0700, Scott Marlowe wrote:
>> The reasons to NOT use ubuntu under PostgreSQL are primarily that 1:
>> they often choose a pretty meh grade kernel with performance
>> regressions for their initial LTS release.  I.e. they'll choose a
>> 3.4.0 kernel over a very stable 3.2.latest kernel, and then patch away
>> til the LTS becomes stable.  This is especially problematic the first
>> 6 to 12 months after an LTS release.

I wouldn't call it a reason not to use Ubuntu, but a reason why you
might want to use the previous LTS release. The kernel chosen needs to
be supported for 5 years, yet remain stable enough that the application
releases built on it can also be supported for 5 years.


On Wed, Jan 16, 2013 at 4:02 AM, Bruce Momjian  wrote:

> This really sums it up for me.  Do you need the most recent kernel with
> all the performance enhancements and new hardware support, and if so,
> are you willing to accept frequent updates and breakage as the bugs are
> fixed?

I hear lots of people like to wait for the .1 release of the LTS for
this sort of reason. It seems a common policy for applications too,
steering clear of .0 releases in favor of waiting for the initial
patch release.

-- 
Stuart Bishop 
http://www.stuartbishop.net/




Re: [GENERAL] plpython intermittent ImportErrors

2013-01-16 Thread Brian Sutherland
On Wed, Jan 16, 2013 at 08:10:26AM +1100, Chris Angelico wrote:
> On Tue, Jan 15, 2013 at 4:55 AM, Brian Sutherland
>  wrote:
> > I'm guessing that it's some kind of race condition, but I wouldn't know
> > where to start looking.
> 
> Look for a recursive import (A imports B, B imports A) 

I've always seen circular imports as deterministic.

But I don't think it's this, because at least some of the tracebacks
occur when importing standard library code during "import site" at
interpreter startup. It's very unlikely there's a circular import there.

> or multiple
> threads trying to import simultaneously - Python sometimes has issues
> with that. Quite a few of those issues were sorted out in recent 3.x
> versions, but you're using 2.7.

I thought Python, even in 2.7, had an import lock to prevent multiple
threads importing simultaneously:

http://docs.python.org/2/library/imp.html#imp.lock_held

But yes, that could be a lead onto the issue, if the import lock were
broken that could result in the behaviour I see.

Hmm, but checking in the various modules shows that the import lock is
being correctly acquired.

> 
> ChrisA
> 
> 

-- 
Brian Sutherland




Re: [GENERAL] Linux Distribution Preferences?

2013-01-16 Thread SUNDAY A. OLUTAYO

- Original Message -
From: "Stuart Bishop" 
To: "Bruce Momjian" 
Cc: "Scott Marlowe" , "SUNDAY A. OLUTAYO" 
, "Gavin Flower" , "Chris 
Ernst" , pgsql-general@postgresql.org
Sent: Wednesday, January 16, 2013 1:00:56 PM
Subject: Re: [GENERAL] Linux Distribution Preferences?

> On Sun, Jan 13, 2013 at 08:46:58PM -0700, Scott Marlowe wrote:
>> The reasons to NOT use ubuntu under PostgreSQL are primarily that 1:
>> they often choose a pretty meh grade kernel with performance
>> regressions for their initial LTS release.  I.e. they'll choose a
>> 3.4.0 kernel over a very stable 3.2.latest kernel, and then patch away
>> til the LTS becomes stable.  This is especially problematic the first
>> 6 to 12 months after an LTS release.

I wouldn't call it a reason not to use Ubuntu, but a reason why you
might want to use the previous LTS release. The kernel chosen needs to
be supported for 5 years, yet remain stable enough that the application
releases built on it can also be supported for 5 years.


On Wed, Jan 16, 2013 at 4:02 AM, Bruce Momjian  wrote:

> This really sums it up for me.  Do you need the most recent kernel with
> all the performance enhancements and new hardware support, and if so,
> are you willing to accept frequent updates and breakage as the bugs are
> fixed?

I hear lots of people like to wait for the .1 release of the LTS for
this sort of reason. It seems a common policy for applications too,
steering clear of .0 releases in favor of waiting for the initial
patch release.

-- 
Stuart Bishop 
http://www.stuartbishop.net/

I always wait for a year to lapse before upgrading to the latest LTS.



Thanks, 

Sunday Olutayo 





Re: [GENERAL] Independent backups of subdatabases

2013-01-16 Thread Adrian Klaver

On 01/15/2013 09:16 PM, John R Pierce wrote:

On 1/15/2013 9:02 PM, Robert James wrote:

I would do this as actual databases, but Postgres doesn't allow JOINs
and FKs between different databases.  Can I use schema for the above?
How? How do I backup and restore schema independently?


pg_dump --schema= ...

and drop schema, create schema prior to a restore of a given schema




Just be aware that a schema only dump has the following restriction:

http://www.postgresql.org/docs/9.2/interactive/app-pgdump.html

-n schema
--schema=schema

Note: When -n is specified, pg_dump makes no attempt to dump any other 
database objects that the selected schema(s) might depend upon. 
Therefore, there is no guarantee that the results of a specific-schema 
dump can be successfully restored by themselves into a clean database.
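Putting the two suggestions together, a sketch of the cycle (the database and schema names here are placeholders):

```shell
# Dump a single schema (definitions and data) from database "mydb".
pg_dump --schema=accounting --file=accounting.sql mydb

# On restore, drop the old copy first; the dump itself recreates the
# schema before loading its objects.
psql -d mydb -c 'DROP SCHEMA IF EXISTS accounting CASCADE'
psql -d mydb -f accounting.sql
```

Per the caveat above, objects in other schemas that the dumped objects depend on are not included in such a dump.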



--
Adrian Klaver
adrian.kla...@gmail.com




[GENERAL] Sample databases

2013-01-16 Thread Vraj Mohan
Is there a good sample database (with decent data volumes) for
postgresql? I am interested in one for learning and automated testing.

I looked at http://pgfoundry.org/projects/dbsamples/ (specifically at
pagila), but it seemed incomplete and unmaintained.

I also looked at http://dev.mysql.com/doc/index-other.html. Employee
data (large dataset, includes data and test/verification suite) looks
interesting. Is there anything similar for postgresql?

Is there a dataset used by postgresql development for testing?

If others are interested, I am willing to spend some time in helping
to create one.

--Vraj Mohan




Re: [GENERAL] Sample databases

2013-01-16 Thread Carlos Mennens
On Wed, Jan 16, 2013 at 9:53 AM, Vraj Mohan  wrote:
> Is there a good sample database (with decent data volumes) for
> postgresql? I am interested in one for learning and automated testing.
>
> I looked at http://pgfoundry.org/projects/dbsamples/ (specifically at
> pagila), but it seemed incomplete and unmaintained.

I spent some months searching a year ago and found the same thing as
you. All the sample databases for PG are terrible: they're old,
unmaintained, incomplete, or just useless. I too would love to have a
sample database with large data volumes and relational values. Sadly,
unless you create one yourself, I don't think there's one publicly
available that meets our expectations.

-- 
Carlos Mennens




[GENERAL] Triggers operations and log_statement = all not working?

2013-01-16 Thread Edson Richter

Hi!

I'm debugging a few triggers I wrote recently, and I can't see the
triggers' statements being logged.

Probably I'm doing something wrong.

I just enabled

log_statement = 'all'

in postgresql.conf. I can see all the statements issued by my
application, but the statements inside the triggers don't show up.
Is there anything else I need to do to see the trigger statements, or
are my triggers just not being called?


Thanks,




Re: [GENERAL] Enhancement proposal - detect chain of triggers from inside the trigger

2013-01-16 Thread Tom Lane
Pavel Stehule  writes:
> 2013/1/16 Edson Richter :
>> I was wondering: wouldn't it be a nice addition to have the ability to read
>> the chain of triggers (maybe another trigger variable like TG_OP, called
>> "TG_CHAIN" or something else, that would be an array with the names of the
>> triggers called before the current trigger)?

> -1

> Building a dynamically populated array that may or may not be used could
> significantly decrease performance :(

I agree it wouldn't do to add cycles to every trigger for a feature that
only a small minority of them would use.  However, maybe we could expose
something like this as a function you'd call to get the information if
you want it, with nil-or-close-to-nil overhead if you don't.

In the long run, maybe some of the lesser-used existing trigger
variables should be converted to that style too ...

regards, tom lane




Re: [GENERAL] SELECT * and column ordering

2013-01-16 Thread Rob Sargent

On 01/15/2013 07:08 PM, Meta Seller Dev/Admin wrote:

On Wed, Jan 16, 2013 at 11:45 AM, Rob Sargent  wrote:

What environment are you in.  In jdbc you can address the resultset by
column name.


C++, so it's quite inefficient to always use names.

Chris Angelico


Craft your own metadata/lookup for each of the tables?  Run against
well-ordered views?





Re: [GENERAL] plpython intermittent ImportErrors RESOLVED

2013-01-16 Thread Brian Sutherland
On Mon, Jan 14, 2013 at 09:05:09AM -0800, Adrian Klaver wrote:
> On 01/14/2013 08:30 AM, Brian Sutherland wrote:
> >Hi,
> >
> >I have a plpython stored procedure which sometimes fails when I run my
> >applications automated test suite. The procedure is called hundreds of
> >times during the tests but only fails a few times, often with the
> >following ImportError:
> >
> > Traceback (most recent call last):
> >   File "/Users/jinty/.buildout/eggs/setuptools-0.6c11-py2.7.egg/site.py", line 73, in <module>
> > __boot()
> >   File "/Users/jinty/.buildout/eggs/setuptools-0.6c11-py2.7.egg/site.py", line 2, in __boot
> > import sys, imp, os, os.path
> >   File "/Users/jinty/src/mp/lib/python2.7/os.py", line 49, in <module>
> > import posixpath as path
> >   File "/Users/jinty/src/mp/lib/python2.7/posixpath.py", line 15, in <module>
> > import stat
> > ImportError: No module named stat
> >
> >Changing the order in which the tests are run, or running tests
> >individually makes the error move/change or disappear. The behaviour is
> >the same with PostgreSQL versions 9.2.2 and 9.1.7.
> >
> >I have tried (but failed) to reproduce this error in a simple .sql
> >script. Outside of the tests, it always seems to work.
> >
> >Having run into a brick wall debugging this, I'm hoping there's someone
> >here who can help?
> 
> Since order seems to be important, what test is run prior to the
> function failing, versus the test run when it succeeds?

I finally got out the big hammer. I applied the attached patch to
Python/import.c and started postgres with PYTHONVERBOSE set. I
discovered that the import was failing because the fopen() call on the
module fails with:

# trying /Users/jinty/src/mp/lib/python2.7/linecache.py
Error opening file: Too many open files

So there's at least one bug here: Python should probably raise an
intelligible error message if an import fails because of too many open
files. Reported here: http://bugs.python.org/issue16981

I had a look at the files open by the process; there were not that many,
so no leaks or anything. Just an utterly insane OS X default maximum for
open file descriptors.

Running:

ulimit -n 4096

before starting PostgreSQL resolved my issue completely.
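The same check and bump can be done from Python itself with the stdlib `resource` module (Unix-only; the target of 4096 mirrors the ulimit call above):

```python
import resource

# Soft and hard limits on open file descriptors for this process;
# this is what `ulimit -n` reports and adjusts.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft=%d hard=%d" % (soft, hard))

# A process may raise its own soft limit up to the hard limit
# without special privileges.
target = 4096
if soft < target and (hard == resource.RLIM_INFINITY or target <= hard):
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```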

Many thanks to all who helped out!

-- 
Brian Sutherland
--- ./Python/import.c.orig  2013-01-16 13:37:49.0 +0100
+++ ./Python/import.c   2013-01-16 14:03:04.0 +0100
@@ -7,6 +7,7 @@
 #undef Yield /* undefine macro conflicting with winbase.h */
 #include "pyarena.h"
 #include "pythonrun.h"
+#include "errno.h"
 #include "errcode.h"
 #include "marshal.h"
 #include "code.h"
@@ -1478,7 +1479,13 @@
 if (filemode[0] == 'U')
 filemode = "r" PY_STDIOTEXTMODE;
 fp = fopen(buf, filemode);
+if (fp == NULL && Py_VerboseFlag > 1) {
+PySys_WriteStderr("Error opening file: %s\n", strerror(errno));
+}
 if (fp != NULL) {
+if (Py_VerboseFlag > 1) {
+PySys_WriteStderr("# OPENED\n");
+}
 if (case_ok(buf, len, namelen, name))
 break;
 else {   /* continue search */



Re: [GENERAL] Enhancement proposal - detect chain of triggers from inside the trigger

2013-01-16 Thread Edson Richter

Em 16/01/2013 14:18, Tom Lane escreveu:

Pavel Stehule  writes:

2013/1/16 Edson Richter :

I was wondering: wouldn't it be a nice addition to have the ability to read
the chain of triggers (maybe another trigger variable like TG_OP, called
"TG_CHAIN" or something else, that would be an array with the names of the
triggers called before the current trigger)?

-1
Building a dynamically populated array that may or may not be used could
significantly decrease performance :(

I agree it wouldn't do to add cycles to every trigger for a feature that
only a small minority of them would use.  However, maybe we could expose
something like this as a function you'd call to get the information if
you want it, with nil-or-close-to-nil overhead if you don't.

In the long run, maybe some of the lesser-used existing trigger
variables should be converted to that style too ...

regards, tom lane


Sounds perfect to me! My concern was to have the additional information
during debugging only, with as little performance penalty as possible in
production.
There is one workaround (using RAISE NOTICE), but I proposed the feature
because I have felt this pain myself.


Regards,

Edson





Re: [GENERAL] plpython intermittent ImportErrors RESOLVED

2013-01-16 Thread Adrian Klaver

On 01/16/2013 08:20 AM, Brian Sutherland wrote:

On Mon, Jan 14, 2013 at 09:05:09AM -0800, Adrian Klaver wrote:

On 01/14/2013 08:30 AM, Brian Sutherland wrote:




I had a look at the files open by the process, there were not that many,
so no leaks or anything. Just an utterly insane OSX default maximum open
file descriptors.

Running:

 ulimit -n 4096

before starting PostgreSQL resolved my issue completely.

Many thanks to all who helped out!



Well, that came from left field. That was some detective work. Thanks for
the follow-up.


--
Adrian Klaver
adrian.kla...@gmail.com




[GENERAL] Case insensitive collation

2013-01-16 Thread Marcel van Pinxteren
The subject has been discussed on this mailing list before, recently.
For me, this is essential to be able to switch from SQL Server to
PostgreSQL.

Therefore the question: are there plans to create a set of
case-insensitive, and maybe also accent-insensitive, collations in the
near future?
I have no idea how complex this is, but looking at MySQL and SQL Server,
it seems to me it's not such a strange thing to expect from a database
server.
I know I can use "lower" (even on indexes) and citext, but this feels
like patchwork, and it would mean a lot of work when converting our
database (with 122 tables).
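(For comparison, the citext route mentioned above is a per-column type change; a sketch, assuming the contrib extension is installed:)

```sql
CREATE EXTENSION IF NOT EXISTS citext;

CREATE TABLE accounts (
    email citext PRIMARY KEY   -- comparisons and the index are case-insensitive
);

INSERT INTO accounts VALUES ('Alice@Example.com');
SELECT * FROM accounts WHERE email = 'alice@example.com';  -- matches
```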

Regards,
Marcel van Pinxteren


Re: [GENERAL] Case insensitive collation

2013-01-16 Thread Tom Lane
Marcel van Pinxteren  writes:
> Therefore the question: are there plans to create a set of case
> insensitive, and maybe also accent insensitive collations in the near
> future?

Not from the Postgres project -- we just use the collations supplied by
the operating system.

regards, tom lane




Re: [GENERAL] Case insensitive collation

2013-01-16 Thread Marcel van Pinxteren
From the Microsoft site I learned
http://msdn.microsoft.com/en-us/library/ms188046(v=sql.105).aspx
that they combine collation and "ComparisonStyle" to a collation name.

I thought that case insensitivity had to be built into the collation, but
apparently MS built case sensitivity into the database engine.
This would mean that PostgreSQL would need to build case (in)sensitivity
into its engine as well.

Judging from the small amount of discussion on this subject, I am afraid
this is not going to happen anytime soon. Alas, we will stay with SQL
Server then (or maybe MySQL, but I will have to investigate).

Met vriendelijke groet,

Marcel van Pinxteren
--
Volg onze bouw op http://nieuwekampen.blogspot.com


On Wed, Jan 16, 2013 at 8:14 PM, Tom Lane  wrote:

> Marcel van Pinxteren  writes:
> > Therefore the question: are there plans to create a set of case
> > insensitive, and maybe also accent insensitive collations in the near
> > future?
>
> Not from the Postgres project -- we just use the collations supplied by
> the operating system.
>
> regards, tom lane
>


Re: [GENERAL] Best method to compare subdomains

2013-01-16 Thread Andrew Sullivan
On Wed, Jan 16, 2013 at 03:23:30PM -0500, Robert James wrote:
> Is there a recommended, high performance method to check for subdomains?
> 
> Something like:
> - www.google.com is subdomain of google.com
> - ilikegoogle.com is not subdomain of google.com
> 
> There are many ways to do this (lowercase and reverse the string,
> append a '.' if not there, append a '%', and do a LIKE).  But I'm
> looking for one that will perform well when the master domain list is
> an indexed field in a table, and when the possible subdomain is either
> an individual value, or a field in a table for a join (potentially
> indexed).

Well, the _best_ thing to do would be to convert all the labels to
wire format and compare those, because that way you know you're
matching label by label the way the DNS does.  That sounds like a lot
of work, however, and you probably need to do it in C.

You could find all the label boundaries (in the presentation format,
the dots) and then split out the labels.  I suppose you could put them
into an array and then count backwards in the array to compare the
different labels.

Reversing the string might not actually work, because it's possible that
the labels are just octets and unless you're careful about your locale
you could end up messing that reverse operation up -- oughta be safe in
"C", though.  (Contrary to popular opinion, domain name labels are
not necessarily made of ASCII.)  You can, of course, also force the
labels to be only LDH-labels.

Best,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca




[GENERAL] argument of AND must not return a set when using regexp_matches

2013-01-16 Thread Robert James
I've been getting a funny SQL error, which I've boiled down to this case.

SELECT (regexp_matches('abc', '(.)b(.)'))[1] IS NOT NULL
-- Returns true, as expected

SELECT (regexp_matches('abc', '(.)b(.)'))[1] IS NOT NULL AND true
-- Gives this error:
ERROR: argument of AND must not return a set
SQL state: 42804

Can anyone make heads or tails of it? Is it a real bug? Is there a work around?

Postgres 8.3




Re: [GENERAL] reducing number of ANDs speeds up query RESOLVED

2013-01-16 Thread T. E. Lawrence

On 15.01.2013, at 16:36, Tom Lane  wrote:
> "T. E. Lawrence"  
>> So, apparently, we need to interrupt the heavy imports at some reasonable
>> intervals and do a manual VACUUM ANALYZE?
> 
> Data import as such, no matter how "heavy", shouldn't be a problem.
> The question is what are you doing that takes access-exclusive table
> locks frequently, and do you really need to do that?
> 
> A quick look at the docs suggests that ALTER TABLE, REINDEX, or CLUSTER
> would be the most likely candidates for taking exclusive table locks.
> 
>   regards, tom lane

Thank you for this.

We will have to research into this, it will take a while.

T.




Re: [GENERAL] reducing number of ANDs speeds up query RESOLVED

2013-01-16 Thread T. E. Lawrence

On 15.01.2013, at 17:32, Jeff Janes  wrote:
> T.E., Fortunately in point releases from August 2012 (9.0.9, 9.1.5,
> etc.), the default server log settings will log both the cancel and
> the command triggering the cancel.  So if you are running an up to
> date server, you can just look in the logs to see what is happening.
> 
> Cheers,
> 
> Jeff

That's interesting, I'll check it. Thank you.
T.




Re: [GENERAL] Best method to compare subdomains

2013-01-16 Thread Steve Atkins

On Jan 16, 2013, at 12:23 PM, Robert James  wrote:

> Is there a recommended, high performance method to check for subdomains?
> 
> Something like:
> - www.google.com is subdomain of google.com
> - ilikegoogle.com is not subdomain of google.com
> 
> There are many ways to do this (lowercase and reverse the string,
> append a '.' if not there, append a '%', and do a LIKE).  But I'm
> looking for one that will perform well when the master domain list is
> an indexed field in a table, and when the possible subdomain is either
> an individual value, or a field in a table for a join (potentially
> indexed).

If you've already dealt with any punycode encoding then the lowercased,
reversed string works pretty well, either as a (probably trigger-managed)
field or as a functional index.
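A rough sketch of that approach (the table and column names are invented; `reverse()` is available from PostgreSQL 9.1):

```sql
-- Functional index on the lowercased, reversed domain name lets
-- left-anchored LIKE patterns use the index.
CREATE INDEX domains_rev_idx
    ON domains (reverse(lower(name)) text_pattern_ops);

-- All subdomains of google.com (plus the domain itself):
SELECT name
FROM domains
WHERE reverse(lower(name)) LIKE reverse(lower('google.com')) || '.%'
   OR lower(name) = 'google.com';
```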

If you need to get fancier, you might want to take a look at the approach
https://github.com/dimitri/prefix takes.

Cheers,
  Steve




[GENERAL] pg_upgrade problem from 8.4 to 9.2 problems on OSX

2013-01-16 Thread Charles Porter
I am trying to migrate from 8.4 to 9.2 on OSX 10.8 (Mountain Lion)

Both 8.4 and 9.2 have been installed with the installers for OSX from
EnterpriseDB

Both have been stopped.

As user postgres, I run the pg_upgrade command and get the error
'old and new pg_controldata alignments are invalid or do not match'

Is there anything I can do to correct this?

The 8.4 installation contains a lot of databases, so I would rather not do
pg_dump/pg_restore on every database unless necessary.

This is the actual command and output:

[/Volumes/DISK2/Postgres/9.2/data]$ /Library/PostgreSQL/9.2/bin/pg_upgrade \
 -b /Library/PostgreSQL8.4/bin \
 -B /Library/PostgreSQL/9.2/bin \
 -d /Library/PostgreSQL8.4/data \
 -D /Volumes/DISK2/Postgres/9.2/data


Performing Consistency Checks
-
Checking current, bin, and data directories ok
Checking cluster versions   ok

old and new pg_controldata alignments are invalid or do not match
Failure, exiting


Re: [GENERAL] argument of AND must not return a set when using regexp_matches

2013-01-16 Thread Tom Lane
Robert James  writes:
> SELECT (regexp_matches('abc', '(.)b(.)'))[1] IS NOT NULL AND true
> -- Gives this error:
> ERROR: argument of AND must not return a set
> SQL state: 42804

> Can anyone make heads or tails of it? Is it a real bug? Is there a work 
> around?

It's not a bug: regexp_matches returns a set of rows, not a scalar
result.

You might want to stick it into a sub-select as per the trick suggested
in the manual - then you get a NULL rather than zero rows when there's
no match.  (Or, in the particular example at hand, it's not very clear
why you're using regexp_matches at all and not a plain old ~ operator.)
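In concrete terms (an untested sketch), either of these avoids the error:

```sql
-- Sub-select trick: zero rows from regexp_matches() become NULL,
-- which is a valid scalar argument for AND.
SELECT (SELECT (regexp_matches('abc', '(.)b(.)'))[1]) IS NOT NULL AND true;

-- Plain match operator, if the captured groups are not needed:
SELECT 'abc' ~ '(.)b(.)' AND true;
```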

regards, tom lane




Re: [GENERAL] INSERT... WHERE

2013-01-16 Thread David Johnston
Robert James wrote:
> On 1/13/13, Chris Angelico wrote:
>> On Mon, Jan 14, 2013 at 3:37 PM, Robert James wrote:
>>> Thanks.  But how do I do that where I have many literals? Something
>>> like:
>>>
>>> INSERT INTO seltest (id, a, b) SELECT (1,2,3),(4,5,6),(7,8,9) WHERE b
>>> IN (SELECT ...)
>>
>> You can use WITH clauses in crazy ways with PostgreSQL. I haven't
>> actually tried it, but you should be able to put your VALUES behind a
>> WITH, then SELECT from that WHERE blah blah, and INSERT that SELECT.
>>
>> As they say, knock yourself out! :)
>>
>> ChrisA
> 
> 
> I don't quite follow - could you please elaborate?

INSERT INTO table_abc (a, b, c)
WITH values_to_insert (a, b, c) AS (
    VALUES (1,2,3), (4,5,6), (7,8,9)
)
SELECT a, b, c
FROM values_to_insert
WHERE a = 4;

See http://www.postgresql.org/docs/9.2/interactive/sql-values.html for
more detail on "VALUES".  Basically it provides a way to build an
on-the-fly table, and it can be used wherever a normal table can be used
(though usually it takes some aliasing to get meaningful names).

David J.









Re: [GENERAL] Libpq and multithreading

2013-01-16 Thread Asia
That is not possible, because the connection is a local variable in both
thread functions; no shared variables are used.

I checked that it also crashes without SSL: two threads connecting to the
same server, different databases.

Joanna




Re: [GENERAL] How to store clickmap points?

2013-01-16 Thread aasat
> convert it into a heatmap at the end of each day. 

How would I do that with PostgreSQL?







Re: [GENERAL] How to store clickmap points?

2013-01-16 Thread aasat
> Instead of storing x/y, have you considered referencing a region of pixels?
> The bigger the region, the larger your possible savings.

Good idea, but I don't always have all the points, and the regions will
not be fully filled.
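The region suggestion could be sketched in plain SQL, with empty regions simply producing no rows (the table name and cell size here are invented):

```sql
-- Collapse raw click coordinates into 16x16-pixel cells; cells with
-- no clicks simply don't appear in the output.
SELECT x / 16 AS cell_x,
       y / 16 AS cell_y,
       count(*) AS hits
FROM clicks
GROUP BY x / 16, y / 16
ORDER BY hits DESC;
```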







Re: [GENERAL] pg_upgrade problem from 8.4 to 9.2 problems on OSX

2013-01-16 Thread Bruce Momjian
On Wed, Jan 16, 2013 at 03:06:41PM -0800, Charles Porter wrote:
> I am trying to migrate from 8.4 to 9.2 on OSX 10.8 (Mountain Lion)
> 
> Both 8.4 and 9.2 have been installed with the installers for OSX from
> EnterpriseDB
> 
> Both have been stopped.
> 
> As user postgres, I run the pg_upgrade command and get the error 
> 'old and new pg_controldata alignments are invalid or do not match'
> 
> Is there anything I can do to correct this?
> 
> The 8.4 installation contains a lot of databases, so I would rather not do
> pg_dump/pg_restore on every database unless necessary. 
> 
> This is the actual command and output:
> 
> [/Volumes/DISK2/Postgres/9.2/data]$  /Library/PostgreSQL/9.2/bin/pg_upgrade  \
>  -b /Library/PostgreSQL8.4/bin \ 
>  -B /Library/PostgreSQL/9.2/bin \
>  -d /Library/PostgreSQL8.4/data \
>  -D /Volumes/DISK2/Postgres/9.2/data
> 
> 
> Performing Consistency Checks
> -
> Checking current, bin, and data directories ok
> Checking cluster versions   ok
> 
> old and new pg_controldata alignments are invalid or do not match
> Failure, exiting

OK, sure.  Run this on the old and new clusters:

$ pg_controldata /u/pg/data.old
$ pg_controldata /u/pg/data.new

You will see a mismatch here:

Maximum data alignment:   8
  ^

My quick guess is that one is 32-bit mode and the other is 64-bit mode.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [GENERAL] Triggers operations and log_statement = all not working?

2013-01-16 Thread Edson Richter

Em 16/01/2013 13:56, Edson Richter escreveu:

Hi!

I'm debugging a few triggers I wrote recently, and I can't see the
triggers' statements being logged.

Probably I'm doing something wrong.

I just enabled

log_statement = 'all'

in postgresql.conf. I can see all the statements issued by my
application, but the statements inside the triggers don't show up.
Is there anything else I need to do to see the trigger statements, or
are my triggers just not being called?


Thanks,


Don't bother - I just discovered how to enable PL/pgSQL debugging in
pgAdmin!


Thanks,

Edson




Re: [GENERAL] How to store clickmap points?

2013-01-16 Thread Shane Spencer
Whatever you did to get 1 million points a day on your site.. I want in..
the name of your marketer please!

I agree with condensing this into a heatmap or a set of RRDs.. one for X..
one for Y.. one for x * (y * max_height).  You can easily query RRDs later
on.. even multiple RRD files at once.
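The grid-bucketing part of that can be sketched in SQL (table and column
names are assumed, and the 50-pixel cell size is arbitrary):

```sql
-- collapse raw click coordinates into coarse heatmap cells
SELECT (x / 50) AS cell_x,
       (y / 50) AS cell_y,
       count(*) AS hits
FROM   clicks
GROUP  BY 1, 2
ORDER  BY hits DESC;
```

The bigger the cell, the fewer rows you keep and the cheaper the heatmap
queries get.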

Also.. don't be afraid of flat files.  They compress well.

- Shane


On Mon, Jan 14, 2013 at 5:29 AM, aasat  wrote:

> > Instead of storing x/y, have you considered referencing a region of
> pixels?
> The bigger the region, the larger your possible savings.
>
> Good idea, but I don't always have all points and regions will not be fully
> filled
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/How-to-store-clickmap-points-tp5739121p5740079.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Sample databases

2013-01-16 Thread Jeff Janes
On Wednesday, January 16, 2013, Vraj Mohan wrote:

> Is there a good sample database (with decent data volumes) for
> postgresql? I am interested in one for learning and automated testing.
>

What do you mean by decent data volumes?  Numbers and units are wonderful
things!

What things are you looking to learn and test automatically?  I like to
learn about and test performance, and I find pgbench often quite good for
doing that.
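For the archives, typical pgbench usage looks like this (scale factor and
run length are arbitrary choices):

```
$ pgbench -i -s 50 testdb         # initialize; scale 50 = ~5M accounts rows
$ pgbench -c 8 -j 2 -T 60 testdb  # 8 clients, 2 threads, 60-second run
```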

If you want to learn and test a framework like Django or Rails or whatever
the latest and greatest is, I would think that those frameworks would
provide their own sample data for doing that.



>
> I looked at http://pgfoundry.org/projects/dbsamples/ (specifically at
> pagila), but it seemed incomplete and not maintained.
>

Is there a specific thing about the completeness that concerns you?

Not being maintained doesn't bother me, unless of course it is actually
broken in the first place.  Data is data.  It isn't like programs, which
decay over time.

Have you looked at the Dell DVD store?  It is also on that site you linked
above, but it seems to be more up to date on Dell's own site.



> I also looked at http://dev.mysql.com/doc/index-other.html. Employee
> data (large dataset, includes data and test/verification suite) looks
> interesting. Is there anything similar for postgresql?
>

I don't know enough about it to say.  The data itself should probably not
be all that hard to port (easy to say for someone who doesn't have to do
it...).  The verification suite, on the other hand, might be.  Is that your
primary interest, the code and not the data?


>
> Is there a dataset used by postgresql development for testing?
>


There is a regression database used by "make check", but it is designed to
test specific features of the database system that people worried might get
broken by future work, not to provide a cohesive thing that looks like a
web application.

Cheers,

Jeff


[GENERAL] speeding up a join query that utilizes a view

2013-01-16 Thread Kirk Wythers
I am looking for advice on a performance problem. I'm pretty sure that the 
culprit of my slow performance is a view that is several hundred million 
records in size. Because it is a view, I can only index the underlying table, 
but because the view generates an "un-pivoted" version of the underlying table 
with an unnest function, I can't index the important column in the underlying 
table, because it doesn't exist until after the un-pivot or stacking step 
of the view… I know… this is all very circular.
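One common way out of that circle (a sketch only, reusing the names from
the query; it trades freshness for indexability and must be rebuilt or
refreshed when the base table changes) is to materialize the stacked view
and index the copy:

```sql
-- materialize the un-pivoted view, then index the unnested column
CREATE TABLE fifteen_min_stacked AS
    SELECT * FROM fifteen_min_stacked_view;

CREATE INDEX fifteen_min_stacked_variable_idx
    ON fifteen_min_stacked (variable);

ANALYZE fifteen_min_stacked;
```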

Here is the join query that uses the view:

SELECT
    data_key.site,
    data_key.canopy,
    data_key.measurement_interval,
    data_key.treatment_code,
    data_key.treatment_abbr,
    data_key.plot,
    fifteen_min_stacked_view.*
FROM
    data_key,
    fifteen_min_stacked_view
WHERE
    data_key.variable_channel = fifteen_min_stacked_view.variable
    AND data_key.block_name = fifteen_min_stacked_view.block_name
    AND fifteen_min_stacked_view.variable ~ 'tsoil'
I have tried adding indexes where I can on the join columns in the data_key table.
Here is the EXPLAIN:


                                   QUERY PLAN
--------------------------------------------------------------------------------
 Hash Join  (cost=195.20..548004.70 rows=196 width=192) (actual time=3.295..443523.222 rows=28779376 loops=1)
   Hash Cond: ((fifteen_min_stacked_propper.variable = (data_key.variable_channel)::text) AND ((fifteen_min_stacked_propper.block_name)::text = (data_key.block_name)::text))
   ->  Subquery Scan on fifteen_min_stacked_propper  (cost=0.00..547620.47 rows=2878 width=156) (actual time=0.247..424911.643 rows=28779376 loops=1)
         Filter: (fifteen_min_stacked_propper.variable ~ 'tsoil'::text)
         ->  Index Scan using fifteen_min_pkey on fifteen_min  (cost=0.00..525136.58 rows=1798711 width=1072) (actual time=0.034..96077.588 rows=428093218 loops=1)
   -

[GENERAL] Streaming Replication Failover

2013-01-16 Thread ning chan
Hi,
I have a cluster of 3 nodes: Standby A streams from the Primary, and
Standby B streams from Standby A.
I failed over the cluster:
1) stopped the Primary
2) promoted Standby A

Now i see from syslog on Standby B that it is complaining about the
timeline mismatch.

Replication Status from Primary
=
|Parameters   |Value|
=
|backend_start| 2013-01-16 23:05:48 |
|pid  |17851|
|usesysid |  10 |
|usename  |   postgres  |
|application_name |   StandbyA  |
|client_addr  | 10.89.94.31 |
|client_hostname  | |
|client_port  |43558|
|state|  streaming  |
|sent_location|  0/1EAC3E68 |
|write_location   |  0/1EAC3E68 |
|flush_location   |  0/1EAC3E68 |
|replay_location  |  0/1EAC3E68 |
|sync_priority|  0  |
|sync_state   |async|
=

Replication Status from Standby A
=
|Parameters   |Value|
=
|backend_start| 2013-01-16 23:06:56 |
|pid  |12320|
|usesysid |  10 |
|usename  |   postgres  |
|application_name |   StandByB  |
|client_addr  | 10.89.94.29 |
|client_hostname  | |
|client_port  |48214|
|state|  streaming  |
|sent_location|  0/1EAC3E68 |
|write_location   |  0/1EAC3E68 |
|flush_location   |  0/1EAC3E68 |
|replay_location  |  0/1EAC3E68 |
|sync_priority|  0  |
|sync_state   |async|
=

Now, failing over the Primary:
On StandByA syslog,
Jan 16 23:08:12 se032c-94-31 postgres[12316]: [3-1] 12316FATAL:
replication terminated by primary server
Jan 16 23:08:12 se032c-94-31 postgres[12312]: [5-1] 12312LOG:  redo starts
at 0/1EAC3E68

On StandByB syslog
Jan 16 23:09:48 localhost postgres[3932]: [5-1] LOG:  redo starts at
0/1EAC3E68

Now, as soon as I promoted Standby A,
I see the replication between A & B is broken; the Standby B syslog shows
the following:
Jan 16 23:11:28 localhost postgres[3945]: [2-1] FATAL:  timeline 15 of the
primary does not match recovery target timeline 14

Now my question is: while A & B are in sync, why does promoting A break the
replication?

To resolve the problem, I need to stop the engine on B, rsync from A,
and start the B engine back up.
rsync -a --progress --exclude postgresql.conf --exclude recovery.done
--exclude pg_hba.conf root@10.89.94.31:/opt/postgres/9.2/data/*
/opt/postgres/9.2/data

Do I need to sync the whole data directory from A? I have a small DB now (2
tables with only a few rows). This may take a long time if I have a much
larger DB. Any shortcut? Why do I need to do the rsync while A & B are
originally in sync?
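For context: in 9.2, streaming replication alone cannot follow a timeline
switch, which is why B stops at "timeline 15 ... does not match ... 14"
even though it was fully caught up; a standby can cross the switch by
fetching the new timeline's history and WAL from an archive. A sketch of
the relevant recovery.conf on Standby B (assumes a shared WAL archive at a
path of your choosing):

```
# recovery.conf on Standby B -- sketch; requires a WAL archive
standby_mode = 'on'
primary_conninfo = 'host=10.89.94.31 port=5432 user=postgres'
recovery_target_timeline = 'latest'
restore_command = 'cp /path/to/archive/%f %p'
```

Without an archive, re-seeding B from A (the rsync you describe, ideally
via pg_basebackup) is the standard approach on 9.2.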

Thanks~
Ning