[GENERAL] using pg's internal timezone database?

2011-12-20 Thread Louis-David Mitterrand
Hi,

To provide my forum users with a 'timezone' preference in their profile,
how can I use PostgreSQL's internal table of timezones?

I found a reference to it here:

http://www.postgresql.org/docs/7.2/static/timezones.html

but not in recent versions docs.

Thanks,



Re: [GENERAL] fsync on ext4 does not work

2011-12-20 Thread Florian Weimer
* Havasvölgyi Ottó:

> 2011/12/19 Florian Weimer 
>
>> * Havasvölgyi Ottó:
>>
> > Even though the TPS in pgbench is about 700 with 1 client.
>> > I have tried other sync methods (fdatasync, open_sync), but all are
>> similar.
>> > Should I disable write cache on HDD to make it work?
>>
>> Did you mount your ext4 file system with the nobarrier option?
>>
>> By default, ext4 is supposed to cope properly with hard disk caches,
>> unless the drive is lying about completing writes (but in that case,
>> disabling write caching is probably not going to help much with
>> reliability, either).
>>
>
> It is mounted with the defaults, no other options yet, so it should flush.
> These HDDs are 7200 rpm SATA with some low level software RAID1.
> I cannot understand why disabling HDD write cache does not help either.
> Could you explain please?

The drive appears to be fundamentally broken.  Disabling the cache won't
change that.

But you mention software RAID1---perhaps your version of the RAID code
doesn't pass down the barriers to the disk?

> There is also an InnoDB transaction log on this partition, but its commit
> time is quite longer. On the same workload PgSql's commit is about 1 ms,
> but InnoDB's is about 4-7 ms. I think 4-7 ms is also too short to flush
> something to such a disk, am I right?

Yes, it's still too low, unless multiple commits are grouped together.
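
(As a sanity check on these numbers: a 7200 RPM spindle completes 7200/60 =
120 rotations per second, so a genuinely flushed commit cannot complete more
than roughly 120 times per second, about 8.3 ms each.  A 1 ms commit on such
a drive therefore means the flush is not reaching the platter unless several
commits share one flush.)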

-- 
Florian Weimer
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



[GENERAL] pg_restore should restore the schema comments and the database properties

2011-12-20 Thread Daniel Migowski
Hi,

I know this discussion has already been had, but for us it is a real problem 
that database properties are not restored with pg_restore. For me, the comment 
on a database, as well as properties like pg_search_path, are part of the 
data of the database. They are even contained in the dumps, so please add a 
function to pg_restore that lets me restore this data as well. It is really 
annoying not to be able to just use DROP DATABASE, CREATE DATABASE and 
pg_restore to get exactly the same thing you had before.

I would also like a function that does a CREATE DATABASE on restore 
automatically, but I don't want to wish for too much here.

Thank you very much in advance,
Daniel Migowski

IKOffice GmbH | Unternehmenssoftware
Daniel Migowski
Nordstrasse 10, 26135 Oldenburg
Mail: dmigow...@ikoffice.de
Tel.: +49 (0)441 21 98 89 52
Fax:  +49 (0)441 21 98 89 55
Mob.: +49 (0)176 22 31 20 76
http://www.ikoffice.de

Geschäftsführer: Ingo Kuhlmann, Daniel Migowski
Amtsgericht Oldenburg: HRB 201467
Steuernummer: 64/211/01864





Re: [GENERAL] using pg's internal timezone database?

2011-12-20 Thread Scott Marlowe
On Tue, Dec 20, 2011 at 2:05 AM, Louis-David Mitterrand
 wrote:
> Hi,
>
> To provide my forum users with a 'timezone' preference in their profile,
> how can I use PostgreSQL's internal table of timezones?
>
> I found a reference to it here:
>
> http://www.postgresql.org/docs/7.2/static/timezones.html
>
> but not in recent versions docs.

You can view the PG internal timezone database by looking at what's in the
views pg_timezone_names and pg_timezone_abbrevs.
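
For example, a minimal sketch (these are system views, so plain SELECTs
work; the zone name is just an illustration):

select name, abbrev, utc_offset from pg_timezone_names order by name;
select 1 from pg_timezone_names where name = 'Europe/Paris';  -- validate a choice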

-- To understand recursion, one must first understand recursion.



Re: [GENERAL] fsync on ext4 does not work

2011-12-20 Thread Greg Smith

On 12/19/2011 10:52 AM, Havasvölgyi Ottó wrote:

PgSql 9.1.2
Debian, 2.6.32 kernel
WAL filesystem: ext4 with defaults


There's a pg_test_fsync program included with the postgresql-contrib 
package that might help you sort out what's going on here.  This will 
eliminate the possibility that you're doing something wrong with 
pgbench, and give an easy to interpret number relative to the drive RPM 
rate.


You said default settings, which eliminates "nobarrier" as a cause 
here.  The only other thing I know of that can screw up fsync here is 
using one of the incompatible LVM features to build your filesystem.  I 
don't know which currently work and which don't, but last I checked there 
were a few ways you could set LVM up that would prevent filesystem 
barriers from working properly.  You might check:

dmesg | grep barrier

to see if you have any kernel messages related to this.

Here's a pg_test_fsync example from a Debian system on 2.6.32 with ext4 
filesystem and 7200 RPM drive, default mount parameters and no LVM:


$ ./pg_test_fsync
2000 operations per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                     n/a
        fdatasync              113.901 ops/sec
        fsync                   28.794 ops/sec
        fsync_writethrough                n/a
        open_sync              111.726 ops/sec

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                     n/a
        fdatasync              112.637 ops/sec
        fsync                   28.641 ops/sec
        fsync_writethrough                n/a
        open_sync               55.546 ops/sec

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
        16kB open_sync write   111.909 ops/sec
         8kB open_sync writes   55.278 ops/sec
         4kB open_sync writes   28.026 ops/sec
         2kB open_sync writes   14.002 ops/sec
         1kB open_sync writes    7.011 ops/sec

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close     28.836 ops/sec
        write, close, fsync     28.890 ops/sec

Non-Sync'ed 8kB writes:
        write               112113.908 ops/sec
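
(Note how fdatasync and open_sync top out just above 110 ops/sec: that is
right at the ~120 flushes/second a 7200 RPM spindle can physically deliver,
which is what an honest fsync looks like on this class of drive.)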

--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [GENERAL] pg_restore should restore the schema comments and the database properties

2011-12-20 Thread Adrian Klaver
On Tuesday, December 20, 2011 2:48:31 am Daniel Migowski wrote:
> Hi,
> 
> I know this discussion has already been had, but for us it is a real
> problem that database properties are not restored with pg_restore. For me,
> the comment on a database, as well as properties like pg_search_path,
> are part of the data of the database. They are even contained in the
> dumps, so please add a function to pg_restore that lets me restore this
> data as well. It is really annoying not to be able to just use DROP
> DATABASE, CREATE DATABASE and pg_restore to get exactly the same thing you
> had before.

search_path is in postgresql.conf, so you can get back to your state by 
copying it over.

> 
> I would also like a function that does a CREATE DATABASE on restore
> automatically, but I don't want to wish for too much here.

http://www.postgresql.org/docs/9.0/interactive/app-pgrestore.html

-C
--create

Create the database before restoring into it. (When this option is used, the 
database named with -d is used only to issue the initial CREATE DATABASE 
command. All data is restored into the database name that appears in the 
archive.)


If you are using a plain-text dump then you will need to supply the above 
option to the pg_dump command instead.
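
For example, with a custom-format dump (database and file names here are
hypothetical):

pg_dump -Fc mydb > mydb.dump
pg_restore -C -d postgres mydb.dump

pg_restore connects to the postgres database only to issue CREATE DATABASE
mydb, then restores into the newly created database.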

> 
> Thank you very much in advance,
> Daniel Migowski
> 
d
-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] pg_restore should restore the schema comments and the database properties

2011-12-20 Thread Adrian Klaver
On Tuesday, December 20, 2011 2:48:31 am Daniel Migowski wrote:
> Hi,
> 
> I know this discussion has already been had, but for us it is a real
> problem that database properties are not restored with pg_restore. For me,
> the comment on a database, as well as properties like pg_search_path,
> are part of the data of the database. They are even contained in the
> dumps, so please add a function to pg_restore that lets me restore this
> data as well. It is really annoying not to be able to just use DROP
> DATABASE, CREATE DATABASE and pg_restore to get exactly the same thing you
> had before.

Just confirmed that the database comment was dumped and restored. This is on 
Postgres 9.0.5. How are you doing the dump/restore procedure?
To elaborate on my previous answer, search_path is in postgresql.conf because 
it is tied to the database cluster, not a particular database.


> 
> I would also like a function that does a CREATE DATABASE on restore
> automatically, but I don't want to wish for too much here.
> 
> Thank you very much in advance,
> Daniel Migowski
> 

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] segfault with plproxy

2011-12-20 Thread Marko Kreen
On Mon, Dec 19, 2011 at 01:05:20PM +0100, Filip Rembiałkowski wrote:
> On 19 December 2011 at 10:39, Marko Kreen wrote:
> > On Sat, Dec 17, 2011 at 10:25:40PM +0100, Filip Rembiałkowski wrote:
> >> Following script causes a segmentation fault. Any ideas why / how to diagnose?
> >
> >> create table part0.users( check(id%2=0) ) inherits (public.users);
> >> create table part1.users( check(id%2=1) ) inherits (public.users);
> >> create or replace function public.list_users(condition text)
> >
> >> select * from public.list_users('%xyz%'); -- crash with segfault
> >
> > It seems you are making plproxy call public.list_users() recursively.
> > Postgres probably OOM-s somewhere then.
> >
> > Either move plproxy function to some other db, or use
> > TARGET/SELECT to pick different target function.
> 
> 
> Thanks Marko,
> 
> So is this "single-database, schemas mimic nodes" setup possible to
> achieve at all?

Yes, you just need to avoid calling the same function recursively,
that's all.
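
A sketch of the TARGET approach, so the proxy dispatches to a differently
named partition function instead of itself (cluster and function names here
are hypothetical):

CREATE OR REPLACE FUNCTION public.list_users(condition text)
RETURNS SETOF public.users AS $$
    CLUSTER 'usercluster';
    RUN ON ALL;
    TARGET public.list_users_part;  -- defined on each partition, never recurses
$$ LANGUAGE plproxy;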

> 
> My intention was:
> 
> #1. client calls func()
> 
> #2. plproxy calls func() on part0. part0 is defined as 'user=part0' so
> it directs to part0.func() thanks to current_schema setting.

This won't work, plproxy always uses fully-qualified names.

> #3. plproxy calls func() on part1 (parallel to #2). logic same as #2.
> 
> #4. plproxy combines result and sends it to client.
> 
> 
> Is schema a part of function signature?

Yes.

-- 
marko




Re: [GENERAL] Cisco Systems fail

2011-12-20 Thread Christian Ramseyer
On 12/14/11 7:03 PM, Ray Stell wrote:
> I've been using a network management tool for a number of years from
> cisco to manage storage networking (fibre channel).  The thing is
> called Fabric Manager and I was thrilled that they supported pg for the
> backend when I first installed.  However, their latest and greatest is
> frozen to pg 8.2.  Sigh.  I think they tripped over the datatypes not being
> automatically cast to TEXT.  That's what spewed anyway when I tried to
> work around them.
> 
> Maybe there is porting opportunity for someone since they seem to have
> lost their way:
> http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/sw/5_0/release/notes/fm/fm_rel_notes_507.html
> 

If you're looking for a workaround: You can probably get this product to
work with >= 8.3 by re-enabling the old casting behavior:
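
A minimal sketch of what that looks like for one cast (adapted from the
widely circulated workaround for the casts removed in 8.3; adapt per type
and test carefully, since implicit casts can have surprising side effects):

CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
    LANGUAGE SQL AS 'SELECT textin(int4out($1));';
CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;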




Good luck
Christian

-- 
Christian Ramseyer
r...@networkz.ch



Re: [GENERAL] PostgreSQL DBA in SPAAAAAAAACE

2011-12-20 Thread Merlin Moncure
On Tue, Dec 6, 2011 at 10:56 AM, Joe Miller  wrote:
> You may have seen this, but RedGate software is sponsoring a contest
> to send a DBA on a suborbital space flight.
>
> And there is a PostgreSQL representative... me!
>
> https://www.dbainspace.com/finalists/joe-miller
>
> Voting is open for 7 days. Don't let one of those Oracle or SQL Server
> punks win :p

guess who won! :-D

merlin



Re: [GENERAL] pg_restore should restore the schema comments and the database properties

2011-12-20 Thread Alban Hertroys
On 20 December 2011 15:35, Adrian Klaver  wrote:
> To elaborate on my previous answer, search_path is in postgresql.conf because
> it is tied to the database cluster, not a particular database.

Not necessarily, it can also be tied to a schema or a role or (I
assume) a specific database in the cluster.

ALTER ROLE x SET search_path TO ...
ALTER SCHEMA x SET search_path TO ...

I don't have a PG instance available here, so I can't verify whether
those get dumped/restored. I assume they do though.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] PostgreSQL DBA in SPAAAAAAAACE

2011-12-20 Thread Alban Hertroys
On 20 December 2011 16:01, Merlin Moncure  wrote:
> On Tue, Dec 6, 2011 at 10:56 AM, Joe Miller  wrote:
> guess who won! :-D

Ah cool. I'll wave when I get outside :)

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] PostgreSQL DBA in SPAAAAAAAACE

2011-12-20 Thread Joe Miller
Thanks so much to everybody who voted. I really can't express my gratitude.

I'd love to head to the pub and buy everybody a drink, but I think
that might cost more than the flight.


Joe

On Tue, Dec 20, 2011 at 10:04 AM, Alban Hertroys  wrote:
> On 20 December 2011 16:01, Merlin Moncure  wrote:
>> On Tue, Dec 6, 2011 at 10:56 AM, Joe Miller  wrote:
>> guess who won! :-D
>
> Ah cool. I'll wave when I get outside :)
>
> --
> If you can't see the forest for the trees,
> Cut the trees and you'll see there is no forest.



Re: [GENERAL] pg_restore should restore the schema comments and the database properties

2011-12-20 Thread Adrian Klaver
On Tuesday, December 20, 2011 7:02:13 am Alban Hertroys wrote:
> On 20 December 2011 15:35, Adrian Klaver  wrote:
> > To elaborate on my previous answer, search_path is in postgresql.conf
> > because it is tied to the database cluster, not a particular database.
> 
> Not necessarily, it can also be tied to a schema or a role or (I
> assume) a specific database in the cluster.
> 
> ALTER ROLE x SET search_path TO ...
> ALTER SCHEMA x SET search_path TO ...

My mistake, I forgot about that.
For a database:
ALTER DATABASE name SET configuration_parameter { TO | = } { value | DEFAULT }
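
For example (database and schema names here are hypothetical):

ALTER DATABASE forum SET search_path TO app, public;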

> 
> I don't have a PG instance available here, so I can't verify whether
> those get dumped/restored. I assume they do though.

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How To Handle Hung Connections

2011-12-20 Thread Carlos Mennens
I'm attempting to delete a database whose connections I've obviously not
closed cleanly.

postgres=# DROP DATABASE filters;
ERROR:  database "filters" is being accessed by other users
DETAIL:  There are 4 other session(s) using the database.

How exactly would one manage this issue from a PostgreSQL
administration standpoint? I know there are no real users connected
to this database, but rather zombie connections. Is there a way to
force the DROP command, or can I manually view / kill connections to
this database w/o having to restart the entire PostgreSQL daemon and
impact other active databases?



Re: [GENERAL] How To Handle Hung Connections

2011-12-20 Thread Mike Blackwell
To see a list of backends for a database:

select * from pg_stat_activity where datname = 'database_in_question';

To terminate a backend:

select pg_terminate_backend(pid);

Be sure you get the right ones...

I like to revoke connect privileges for the database first, if I can't stop
the applications/users attempting to access the database, to avoid them
grabbing another connection while I'm typing.
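
A combined sketch (note the PID column in pg_stat_activity is named procpid
in 9.1 and earlier):

select pg_terminate_backend(procpid)
from pg_stat_activity
where datname = 'filters';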


__
Mike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com

On Tue, Dec 20, 2011 at 09:50, Carlos Mennens wrote:

> I'm attempting to delete a database that I've obviously not closed
> connections from cleanly.
>
> postgres=# DROP DATABASE filters;
> ERROR:  database "filters" is being accessed by other users
> DETAIL:  There are 4 other session(s) using the database.
>
> How exactly would one manage this issue from a PostgreSQL
> administration stand point? I know there are not real users connected
> to this database but rather zombie connections. Is there a way to
> force the DROP command or can I manually view / kill connections to
> this database w/o having to restart the entire PostgreSQL daemon and
> impact other active databases?


Re: [GENERAL] segfault with plproxy

2011-12-20 Thread Filip Rembiałkowski
On 20 December 2011 at 15:36, Marko Kreen wrote:

>> Is schema a part of function signature?
>
> Yes.


Thanks again, that explains everything.

In the meantime, depesz has a solution based on application_name, not
on username+schema as I tried.

http://www.depesz.com/index.php/2011/12/02/the-secret-ingredient-in-the-webscale-sauce/
- "many shards within the same database".



[GENERAL] General coding question

2011-12-20 Thread jkells
General coding question.  Can I insert a text string containing a
backslash (\) into a character varying column as-is, meaning no escaping
of the character, or is this bad practice?

I.e.:

Column       Data
===========  ============================
description  SUBDIV LOT 13 & N1\2 LOT 14

Thanks



Re: [GENERAL] General coding question

2011-12-20 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of jkells
Sent: Tuesday, December 20, 2011 12:33 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] General coding question

General coding question.  Can I insert a text string containing a backslash 
(\) into a character varying column as-is, meaning no escaping of the 
character, or is this bad practice?

I.e.:  Column       Data
       ===========  ============================
       description  SUBDIV LOT 13 & N1\2 LOT 14

Thanks

-

You never would actually store an "escaping" backslash in the data.  The need 
for an escape symbol occurs only during data entry and strictly depends on how 
you are entering the data.  As you have not provided those details, further 
advice cannot be given.

David J.





Re: [GENERAL] General coding question

2011-12-20 Thread Bill Moran
In response to jkells :
> General coding question.  Can I insert a text string containing a
> backslash (\) into a character varying column as-is, meaning no escaping
> of the character, or is this bad practice?
> 
> I.e.:  Column       Data
>        ===========  ============================
>        description  SUBDIV LOT 13 & N1\2 LOT 14

This is a moderately confusing issue because standards are involved, and
standards frequently muddle things.

According to the SQL standard, there is nothing special about \.  It's
just another character and is not treated specially in any way.

PostgreSQL, for a long while, treated the \ as starting an escape character
sequence, because this was common in many databases and generally useful for
adding things like newlines.

At some point (don't know when) the escaping syntax was added.  This made
it possible for PostgreSQL to be 100% ANSI SQL compliant while still
supporting the old method of using the \ to start an escape sequence.  The
two critical tools for working with this are the standard_conforming_strings
config parameter and the E'' syntax for strings.  Documentation is here:
http://www.postgresql.org/docs/9.1/static/runtime-config-compatible.html
http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS
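
A quick illustration of the two string forms (behavior of the plain form
depends on standard_conforming_strings, which defaults to on in 9.1):

SET standard_conforming_strings = on;
SELECT 'N1\2';    -- backslash is an ordinary character: returns N1\2
SELECT E'N1\\2';  -- E'' strings process escapes: also returns N1\2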

Unfortunately, it's just a _bit_ more complicated than that, even, because
the language you're using on the client side (which you don't mention) may
also use the \ as a special character, so it may be converted to something
before it's even transmitted to PostgreSQL.

So, the direct answer to your question is, "There's nothing wrong or bad
form about putting \ in your strings, but it can be difficult to do
correctly, depending on the circumstances."

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



[GENERAL] PostgreSQL server architecture

2011-12-20 Thread James B. Byrne
We run a small in-house data centre for our various
operations.  Currently, we are moving applications from
dedicated boxes to kvm based CentOS-6.1 virtual machines
on a single CentOS-6.1 host.  At issue is the question on
how to handle the PostgreSQL instances that we currently
have running on different machines.

As we see it presently we have the option of having one VM
host a centralized PostgreSQL server with multiple
databases or continue with each application specific VM
running its own copy of PostgreSQL with just the dedicated
application database.

Since whatever we choose we are likely to be using five
years from now, I am soliciting informed opinion on which
option is considered the better choice for the long term.

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3




[GENERAL] Postgres Logs question

2011-12-20 Thread akp geek
Hi All -

   Is there a way to write logs specific to only one
user in Postgres? What I want to do is write the log entries that are coming
from one user, or block the log entries for a particular user. Can you
please help? We have Postgres 9.0.4

Regards


Re: [GENERAL] PostgreSQL server architecture

2011-12-20 Thread Bill Moran
In response to "James B. Byrne" :
> We run a small in-house data centre for our various
> operations.  Currently, we are moving applications from
> dedicated boxes to kvm based CentOS-6.1 virtual machines
> on a single CentOS-6.1 host.  At issue is the question on
> how to handle the PostgreSQL instances that we currently
> have running on different machines.
> 
> As we see it presently we have the option of having one VM
> host a centralized PostgreSQL server with multiple
> databases or continue with each application specific VM
> running its own copy of PostgreSQL with just the dedicated
> application database.
> 
> Since whatever we choose we are likely to be using five
> years from now, I am soliciting informed opinion on which
> option is considered the better choice for the long term.

In my experience, you'll be better off using a single DB for all the
databases.  With proper roles, database permissions, and pg_hba.conf,
you'll have acceptable security.  The advantage to doing this is
better utilization of hardware, since you don't have all the overhead
of multiple VMs using up memory, CPU, and IO load.  You also have
fewer instances to monitor.

The disadvantage of doing so is a) that it doesn't scale as far, and
b) if you have one badly behaved application it can negatively affect
other databases.

In the case of both A and B, the answer when you hit that problem is
to just add another VM or physical machine and move databases off the
main server instance and onto their own instance on an as-needed basis.

In my experience, what you end up with as time goes on and you learn
how things operate is a few database servers housing many databases,
with things spread out across the multiple instances as seems most
logical based on your observation of how they behave.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] pg crash shortly after 9.1.1 -> 9.1.2 upgrade

2011-12-20 Thread Tom Lane
Joseph Shraibman  writes:
> On 12/08/2011 12:54 AM, Tom Lane wrote:
>> Joseph Shraibman  writes:
>>> All was fine until:
>>> LOG:  statement: select "_devel".cleanupEvent('10 minutes'::interval,
>>> 'false'::boolean);
>>> ERROR:  could not open file "base/16406/2072097_fsm": Permission denied

>> That's pretty weird.  What were the permissions on that file?  Was it
>> properly owned by the postgres user?

> It had no permissions at all

> ---------- 1 postgres postgres 0 Feb 14  2005 2072097_fsm

> I actually didn't notice the old date until now.  This was an 8.4.x 
> database that I upgraded to 9.1.1 a while ago using pg_upgrade (using 
> the hardlink option).  I still have a backup of the 8.4 database from 
> when I did the upgrade, and that file doesn't appear in it.

It turns out that the crash at commit + failure to restart was a bug
introduced in 8.4; it should have just unlinked the file without
complaint.  I've now applied a patch for that misbehavior.

It's still mighty weird that the file was there with that mod date in
the first place, since PG certainly wasn't using FSM forks in 2005, even
if this database is that old.  I'm guessing that the mod date and lack
of permissions are both artifacts of some non-PG operation, perhaps a
failed rsync or some such?  I would suspect pg_upgrade except I'm pretty
sure it does not mess with either permissions or mod date.

Anyway, future releases should handle such a thing a bit more
gracefully.

regards, tom lane



[GENERAL] out of memory error with loading pg_dumpall

2011-12-20 Thread Dara Olson
Greetings.
I am attempting to create an exact copy of our production database/cluster on a 
different server for development.  I created a dumpall file which is 8.7GB. 
When I attempt to run this in psql on the new server it seems okay, and then I 
get a string of "invalid command \N" lines and then "out of memory" in the 
command prompt, and then in the postgres log it states at the end,

CST LOG:  could not receive data from client: Unknown winsock error 10061
CST LOG:  unexpected EOF on client connection


I am running it on a Windows 2008 server with 8 GB RAM and dual 2GHz 
processors. I have postgresql.conf set to 1GB of shared buffers. The 
production and new server are both running PostgreSQL 8.4 with PostGIS 1.4.

Am I going about this in the correct way? How can I debug to figure out what is 
happening? Can/should I just dump each database individually and drop and 
reload each database?

Any help would be greatly appreciated.
Dara


Re: [GENERAL] General coding question

2011-12-20 Thread jkells
On Tue, 20 Dec 2011 13:32:32 -0500, David Johnston wrote:

> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of jkells Sent:
> Tuesday, December 20, 2011 12:33 PM To: pgsql-general@postgresql.org
> Subject: [GENERAL] General coding question
> 
> General coding question.  Can I insert a text string containing a
> backslash (\) into a character varying column as-is, meaning no escaping
> of the character, or is this bad practice?
> 
> I.e.:  Column       Data
>        ===========  ============================
>        description  SUBDIV LOT 13 & N1\2 LOT 14
> 
> Thanks
> 
> -
> 
> You never would actually store an "escaping" backslash in the data. 
> The need for an escape symbol occurs only during data entry and strictly
> depends on how you are entering the data.  As you have not provided those
> details, further advice cannot be given.
> 
> David J.

David, thanks.
My problem comes from 6 records containing a backslash in several columns 
out of a million-plus rows in many different tables.  I am testing some 
replication software and have found that for these 6 records the 
destination tables contain two backslashes after being replicated.

Source (master) record:

Column       Data
===========  ============================
description  SUBDIV LOT 13 & N1\2 LOT 14

Destination (slave) becomes the following:

description  SUBDIV LOT 13 & N1\\2 LOT 14

My question was more generic, since I can't see why a '\' character can't be 
used in a character string (i.e. a storage path etc.).  How would you 
escape a \ character that needs to be stored in a string, and is there 
anything special that one would have to do when retrieving it?

Regards,



Re: [GENERAL] General coding question

2011-12-20 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of jkells
Sent: Tuesday, December 20, 2011 3:42 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] General coding question

On Tue, 20 Dec 2011 13:32:32 -0500, David Johnston wrote:

> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of jkells Sent:
> Tuesday, December 20, 2011 12:33 PM To: pgsql-general@postgresql.org
> Subject: [GENERAL] General coding question
> 
> General coding question.  Can I insert a text string containing a
> backslash (\) into a character varying column as-is, meaning no escaping
> of the character, or is this bad practice?
> 
> I.e.:  Column       Data
>        ===========  ============================
>        description  SUBDIV LOT 13 & N1\2 LOT 14
> 
> Thanks
> 
> -
> 
> You never would actually store an "escaping" backslash in the data. 
> The need for an escape symbol occurs only during data entry and 
> strictly depends on how you are entering the data.  As you have not 
> provided those details, further advice cannot be given.
> 
> David J.

David, thanks.
My problem comes from 6 records containing a backslash in several columns out 
of a million-plus rows in many different tables.  I am testing some replication 
software and have found that for these 6 records the destination tables contain 
two backslashes after being replicated.

Source (master) record:

Column       Data
===========  ============================
description  SUBDIV LOT 13 & N1\2 LOT 14

Destination (slave) becomes the following:

description  SUBDIV LOT 13 & N1\\2 LOT 14

My question was more generic, since I can't see why a '\' character can't be 
used in a character string (i.e. a storage path etc.).  How would you escape 
a \ character that needs to be stored in a string, and is there 
anything special that one would have to do when retrieving it?

-

You really need to include details like "my problem comes from ... I am 
testing some replication software ..." in your original posting.  In this case 
your replication system is broken.  Most likely the issue stems from changes 
in how PostgreSQL deals with string literals.  There are two valid ways to 
write a string literal, one which escapes and one which does not.

1) E'some string with possible back-slash escapes'
2) ' some string where back-slashes are treated as literals'
 
Old Way) 'some string with back-slash escapes and log-file warnings'

Your software is assuming that when it embeds a "\" to escape a contained "\" 
PostgreSQL will process the escape and leave only the original "\" in 
place.  However, if the server is configured such that the second form's 
behavior is in effect for unadorned literals (i.e., those lacking the E prefix) 
then the added "\" will remain and the result column will have each instance 
of "\" duplicated.

You fail to mention your server versions (and any configuration changes 
thereto) but in older versions (<= 9.0) strings in the second form would be 
escaped (and logged) whereas, starting in 9.1, only strings in the first form 
have their contents analyzed and escaped.

This behavior can be changed in the configuration files of PostgreSQL but your 
replication software should be able to cope with either situation, ideally by 
querying the server for its current configuration and acting accordingly.
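
For example, a client can check the server's behavior up front (a minimal
sketch):

SHOW standard_conforming_strings;  -- 'on' means backslashes in '...' are literal
SHOW server_version;               -- useful when deciding which default applies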

David J.






Re: [GENERAL] Postgres Logs question

2011-12-20 Thread Chris Travers
On Tue, Dec 20, 2011 at 11:53 AM, akp geek  wrote:
> Hi All -
>
>    Is there a way to write logs specific to only one user
> in Postgres? What I want to do is write the log entries that are coming from
> one user, or block the log entries for a particular user. Can you please
> help? We have Postgres 9.0.4

I don't know about redirecting logs based on a specific user.  You can,
however, set which statements to log per user (alter user akp set
log_statement = 'all').

Not sure if that helps.  Also, if you log to a syslog facility, it might
be possible to get some additional capabilities that way.
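
A sketch of both directions (role names are hypothetical):

ALTER ROLE noisy_user SET log_statement = 'all';   -- log every statement from this role
ALTER ROLE quiet_user SET log_statement = 'none';  -- suppress statement logging for this role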

 Chris Travers



Re: [GENERAL] Escaping input from COPY

2011-12-20 Thread Josh Kupershmidt
On Mon, Dec 19, 2011 at 6:56 AM, Roger Leigh  wrote:

> I'd like to switch to COPY, which should be orders of
> magnitude faster.  I see that DBD::Pg has an interface for
> this, which looks just fine.  My problem is with how to
> escape the data.  I need to use whatever escaping rules
> are in use by the server, which I've seen documented in
> the manual; but in order to cope with any future changes
> to these rules, and ensure identical behaviour, are there
> any standard functions I can use to escape the data before
> loading it?

This is really a question for the DBD::Pg folks, I think. Looking at:
  http://search.cpan.org/~turnstep/DBD-Pg-2.16.1/Pg.pm#COPY_support

It doesn't look like there is support for escaping COPY data. But
incidentally, I recently brought up the same problem with psycopg2 on
the psycopg list, and it seems there's no existing solution there,
either. Going out on a limb, I'm guessing that connectors don't offer
this support because there is no function in libpq for them to wrap,
and they don't want to kludge their own.

Anyone else think it might be a good idea for libpq to offer some
function to escape text to be used by COPY?

Josh



[GENERAL] design help for performance

2011-12-20 Thread Culley Harrelson
I am bumping into some performance issues and am seeking help.

I have two tables A and B in a one (A) to many (B) relationship.  There are
1.4 million records in table A and 44 million records in table B.  In my
web application any request for a record from table A is also going to need
a count of associated records in table B.  Several years ago I added
table_b_rowcount to table A in order to minimize queries on table B.  And
now, as the application has grown, I am starting to have locking problems
on table A.  Any change to table B requires that table_b_rowcount be
updated on table A...  The application has outgrown this solution.

So... is there a common solution to this problem?

culley


Re: [GENERAL] PostgreSQL server architecture

2011-12-20 Thread Craig Ringer

On 21/12/2011 4:08 AM, Bill Moran wrote:

In response to "James B. Byrne":

We run a small in-house data centre for our various
operations.  Currently, we are moving applications from
dedicated boxes to kvm based CentOS-6.1 virtual machines
on a single CentOS-6.1 host.  At issue is the question on
how to handle the PostgreSQL instances that we currently
have running on different machines.

As we see it presently we have the option of having one VM
host a centralized PostgreSQL server with multiple
databases or continue with each application specific VM
running its own copy of PostgreSQL with just the dedicated
application database.

Since whatever we choose we are likely to be using five
years from now, I am soliciting informed opinion on which
option is considered the better choice for the long term.

In my experience, you'll be better off using a single DB for all the
databases.  With proper roles, database permissions, and pg_hba.conf,
you'll have acceptable security.  The advantage to doing this is
better utilization of hardware, since you don't have all the overhead
of multiple VMs using up memory, CPU, and IO load.  You also have
fewer instances to monitor.
While I tend to agree with this, there are some important downsides too. 
Perhaps the most important is that you can't currently use streaming or 
WAL-shipping replication to replicate only *one* database out of a 
cluster. You have to replicate all databases in the cluster. If you have 
some DBs that are small or low traffic but very important, and other DBs 
that are big or high traffic but less important, this can be a problem.


As you noted, it's also harder to isolate performance between DBs and 
protect more important DBs from response time drops caused by less 
important but heavily loaded DBs, big reporting queries on other DBs, etc.


--
Craig Ringer



Re: [GENERAL] design help for performance

2011-12-20 Thread Misa Simic
  Hi Culley,

Have you tried creating a foreign key together with an index on the FK column
on table B?

What are the results? It would be good if you could send the query and explain
analyze...

Sent from my Windows Phone
--
From: Culley Harrelson
Sent: 21 December 2011 00:57
To: pgsql-general@postgresql.org
Subject: [GENERAL] design help for performance

I am bumping into some performance issues and am seeking help.

I have two tables A and B in a one (A) to many (B) relationship.  There are
1.4 million records in table A and 44 million records in table B.  In my
web application any request for a record from table A is also going to need
a count of associated records in table B.  Several years ago I added
table_b_rowcount to table A in order to minimize queries on table B.  And
now, as the application has grown, I am starting to have locking problems
on table A.  Any change to table B requires that table_b_rowcount be
updated on table A...  The application has outgrown this solution.

So... is there a common solution to this problem?

culley


Re: [GENERAL] PostgreSQL server architecture

2011-12-20 Thread John R Pierce

On 12/20/11 11:48 AM, James B. Byrne wrote:

we are moving applications from
dedicated boxes to kvm based CentOS-6.1 virtual machines
on a single CentOS-6.1 host


Database servers often end up with suboptimal performance on virtual IO 
hardware.   This is especially true if they are sharing storage channels 
and devices with other virtual machines.   The 'big boys' 
(IBM) often give a virtual machine running a database 
server its own storage channel (typically, the HBA for use with a SAN) 
to get around this.  Ditto, they give VMs their own network adapters.


Now, if your databases aren't IO performance constrained, this very well 
may not matter much.


--
john r pierce                 N 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] design help for performance

2011-12-20 Thread David Johnston
Continued top-posting to remain consistent….

 

It isn’t that the application has outgrown the solution but rather the solution 
was never correct in the first place.  You attempted premature optimization 
and are getting burned because of it.  The reference solution is simply:

 

SELECT a.*, COUNT(*) AS b_count
FROM a
JOIN b USING (a_id)
GROUP BY a.* (expanded as needed)

Make sure table b has an index on its a_id column.

 

This is the reference solution because you never want to introduce computed 
fields that keep track of other tables WITHOUT some kind of proof that the 
maintenance nightmare/overhead you are incurring is more than offset by the 
savings during usage.

 

Any further optimization requires two things:

Knowledge of the usage patterns of the affected data

Testing to prove that the alternative solutions out-perform the reference 
solution

 

Since you already have an existing query you should implement the reference 
solution above and then test and see whether it performs better or worse than 
your current solution.  If it indeed performs better then move to it; and if it 
is still not good enough then you need to provide more information about what 
kinds of queries are hitting A and B as well as Insert/Delete patterns on Table 
B.

 

David J.

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Misa Simic
Sent: Tuesday, December 20, 2011 7:13 PM
To: Culley Harrelson; pgsql-general@postgresql.org
Subject: Re: [GENERAL] design help for performance

 

Hi Culley,

Have you tried to create fk together with index on fk column on table B?

What are results? Would be good if you could send the query and explain 
analyze...

Sent from my Windows Phone


From: Culley Harrelson
Sent: 21 December 2011 00:57
To: pgsql-general@postgresql.org
Subject: [GENERAL] design help for performance

I am bumping into some performance issues and am seeking help.

I have two tables A and B in a one (A) to many (B) relationship.  There are 1.4 
million records in table A and 44 million records in table B.  In my web 
application any request for a record from table A is also going to need a count 
of associated records in table B.  Several years ago I added table_b_rowcount 
to table A in order to minimize queries on table B.  And now, as the 
application has grown, I am starting to having locking problems on table A.  
Any change to table B requires the that table_b_rowcount be updated on table 
A...  The application has outgrown this solution.

So... is there a common solution to this problem?

culley



Re: [GENERAL] design help for performance

2011-12-20 Thread Culley Harrelson
Thanks David.  That was my original solution and it began to bog down the
website, so I resorted to denormalization 3 years ago.  This is an
extremely high volume website.


On Tue, Dec 20, 2011 at 4:27 PM, David Johnston  wrote:

> Continued top-posting to remain consistent….
>
> It isn’t that the application has outgrown the solution but rather the
> solution was never correct in the first place.  You attempted premature
> optimization and are getting burned because of it.  The reference solution
> is simply:
>
> SELECT a.*, COUNT(*) AS b_count
> FROM a
> JOIN b USING (a_id)
> GROUP BY a.* (expanded as needed)
>
> Make sure table b has an index on its a_id column.
>
> This is the reference solution because you never want to introduce computed
> fields that keep track of other tables WITHOUT some kind of proof that the
> maintenance nightmare/overhead you are incurring is more than offset by the
> savings during usage.
>
> Any further optimization requires two things:
>
> Knowledge of the usage patterns of the affected data
>
> Testing to prove that the alternative solutions out-perform the reference
> solution
>
> Since you already have an existing query you should implement the
> reference solution above and then test and see whether it performs better
> or worse than your current solution.  If it indeed performs better then move
> to it; and if it is still not good enough then you need to provide more
> information about what kinds of queries are hitting A and B as well as
> Insert/Delete patterns on Table B.
>
> David J.
>
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Misa Simic
> Sent: Tuesday, December 20, 2011 7:13 PM
> To: Culley Harrelson; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] design help for performance
>
> Hi Culley,
>
> Have you tried creating a foreign key together with an index on the FK
> column on table B?
>
> What are the results? It would be good if you could send the query and
> explain analyze...
>
> Sent from my Windows Phone
> --
>
> From: Culley Harrelson
> Sent: 21 December 2011 00:57
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] design help for performance
>
> I am bumping into some performance issues and am seeking help.
>
> I have two tables A and B in a one (A) to many (B) relationship.  There
> are 1.4 million records in table A and 44 million records in table B.  In
> my web application any request for a record from table A is also going to
> need a count of associated records in table B.  Several years ago I added
> table_b_rowcount to table A in order to minimize queries on table B.  And
> now, as the application has grown, I am starting to have locking problems
> on table A.  Any change to table B requires that table_b_rowcount be
> updated on table A...  The application has outgrown this solution.
>
> So... is there a common solution to this problem?
>
> culley


Re: [GENERAL] Escaping input from COPY

2011-12-20 Thread Adrian Klaver
On Tuesday, December 20, 2011 3:56:14 pm Josh Kupershmidt wrote:

> 
> This is really a question for the DBD::Pg folks, I think. Looking at:
>   http://search.cpan.org/~turnstep/DBD-Pg-2.16.1/Pg.pm#COPY_support
> 
> It doesn't look like there is support for escaping COPY data. But
> incidentally, I recently brought up the same problem with psycopg2 on
> the psycopg list, and it seems there's no existing solution there,
> either. 

As far as I know you did not get an answer, which is not the same as there 
being no answer. :) I think you will find that the escaping is handled for you.

> Going out on a limb, I'm guessing that connectors don't offer
> this support because there is no function in libpq for them to wrap,
> and they don't want to kludge their own.
> 
> Anyone else think it might be a good idea for libpq to offer some
> function to escape text to be used by COPY?
> 
> Josh

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] out of memory error with loading pg_dumpall

2011-12-20 Thread Tom Lane
"Dara Olson"  writes:
> I am attempting to create an exact copy of our production database/cluster on 
> a different server for development.  I created a dumpall file which is 8.7GB. 
> When I attempt to run this in psql on the new server it seems okay, and then I 
> get a string of "invalid command \N" lines and then "out of memory" in the 
> command prompt, and then in the postgres log it states at the end,

> CST LOG:  could not receive data from client: Unknown winsock error 10061
> CST LOG:  unexpected EOF on client connection

I'd suggest you need to look at the *first* message not the last one.
What it sounds like is that psql is failing on some line of COPY data
and then trying to interpret the rest of the data as SQL commands.
Why that's happening is likely to be revealed by the first few messages.
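
(One way to surface that first error instead of plowing past it is a minimal
sketch like this, with a hypothetical file name:

psql --set ON_ERROR_STOP=on -f dumpall.sql postgres

psql then aborts at the first failing statement rather than misreading the
rest of the COPY data as SQL commands.)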

regards, tom lane



Re: [GENERAL] design help for performance

2011-12-20 Thread Alban Hertroys
On 21 Dec 2011, at 24:56, Culley Harrelson wrote:

> Several years ago I added table_b_rowcount to table A in order to minimize 
> queries on table B.  And now, as the application has grown, I am starting to 
> have locking problems on table A.  Any change to table B requires that 
> table_b_rowcount be updated on table A...  The application has outgrown this 
> solution.


When you update rowcount_b in table A, that locks the row in A of course, but 
there's more going on. Because a new version of that row gets created, the 
references from B to A also need updating to that new version (creating new 
versions of rows in B as well). I think that causes a little bit more locking 
than originally anticipated - it may even be the cause of your locking problem.

Instead, if you'd create a new table C that only holds the rowcount_b and a 
reference to A (in a 1:1 relationship), most of those problems go away. It does 
add an extra foreign key reference to table A though, which means it will weigh 
down updates and deletes there some more.

CREATE TABLE C (
  table_a_id int PRIMARY KEY
 REFERENCES table_a (id) ON UPDATE CASCADE ON DELETE CASCADE,
  table_b_rowcount int NOT NULL DEFAULT 0
);

Yes, those cascades are on purpose - the data in C is useless without the 
accompanying record in A. Also, the PK makes sure it stays a 1:1 relationship.
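
A minimal trigger sketch to keep that counter current (names follow the
example above; table B's FK column is assumed to be table_a_id, and a row in
C is assumed to exist for every row in A):

CREATE OR REPLACE FUNCTION bump_b_rowcount() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE c SET table_b_rowcount = table_b_rowcount + 1
      WHERE table_a_id = NEW.table_a_id;
  ELSE  -- DELETE
    UPDATE c SET table_b_rowcount = table_b_rowcount - 1
      WHERE table_a_id = OLD.table_a_id;
  END IF;
  RETURN NULL;  -- AFTER trigger, the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_rowcount_trg
  AFTER INSERT OR DELETE ON table_b
  FOR EACH ROW EXECUTE PROCEDURE bump_b_rowcount();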

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.

