Re: [GENERAL] Problem in "Set search path"

2013-03-25 Thread Francisco Figueiredo Jr.
Did you try to set the search_path in the connection string? That way you
won't need to set the search_path manually; Npgsql will take care of it for
you.

I hope it helps.
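
For reference, the server-side equivalents mentioned later in the thread look like this (role, database, and schema names here are illustrative):

```sql
-- Default search_path for one role (takes effect in new sessions):
ALTER ROLE app_user SET search_path = myschema, public;

-- Default search_path for every connection to a given database:
ALTER DATABASE mydb SET search_path = myschema, public;

-- Per session, which is what the connection-string option automates:
SET search_path = myschema, public;
```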



On Fri, Mar 22, 2013 at 1:07 AM, Kalai R  wrote:

>
>
> -- Forwarded message --
> From: Kalai R 
> Date: Fri, Mar 22, 2013 at 9:36 AM
> Subject: Re: [GENERAL] Problem in "Set search path"
> To: Alban Hertroys 
>
>
> Hi,
>
> Is that the same connection object in .NET, the same connection from a
> connection pool, or the same database connection? Those are not necessarily
> the same.
> I use the same Npgsql connection object in .NET.
>
>
> Might try temporarily turning up the logging in postgresql.conf to 'all'
> and see what is actually being done on the server.
>
> Did you know you can set a search_path on user (role) objects and on
> database objects in the database? That might remove the need to specify it
> in your .NET application.
>
> I will try to track down my problem using your ideas. Thanks, guys.
>
> Regards
> Kalai
>
>
>


-- 
Regards,

Francisco Figueiredo Jr.
Npgsql Lead Developer
http://www.npgsql.org
http://gplus.to/franciscojunior
http://fxjr.blogspot.com
http://twitter.com/franciscojunior


Re: [GENERAL] PostgreSQL and VIEWS

2013-03-25 Thread Merlin Moncure
On Sat, Mar 23, 2013 at 9:25 PM, Misa Simic  wrote:
> HI,
>
> When I have met PostgreSQL for a first time - I have been really amazed -
> with many things...
>
> But as we started to use it - and data jumped in - we met performance
> problems...
>
> Now, it is a bit tricky... any concrete performance problem - can be solved
> on some way...
>
> However, I am more concerned, that "our way" how we do the things - is not
> best way for Postgres...
>
> The thing is - our tables are highly normalised - so lot of joins... (NULLS
> do not exist in the tables)
>
> to make things simpler we use VIEWS to "denormalise" data - though it is
> not that we care much about denormalisation - it is more to make things
> simpler and less error prone...
>
> So every "thing" (Customer, product, transaction, employee, whatever) is
> built up from more tables... I am not sure we have even one "thing" built up
> just from 1 table...
>
> Lot of "thing properties" are actually calculations: i.e. Invoice Amount, or
> for Employee:
> We have in the one table: first_name and last_name fields, but Full name is
> concatented as:
>
> last_name || ',' || first_name
>
> So, whenever we need employee full name - is it on Employee Info window - or
> in an Invoice as Salesperson, or in Payslip report...instead of to
> everywhere have above formula - we have function...
> but again, instead of to developer think about each possible function for
> concrete thing - we have made for each entity - the view... what have all
> relevant joins - and all relevant calculated properties... about the
> thing...
>
> I have had a thought - somewhere I read (but now I am more convinced I
> read it wrong) that the planner is smart enough that if we have:
>
> CREATE VIEW person_view AS
> SELECT person_id, first_name, last_name, fn_full_name(id) as full_name,
> date_of_birth, age(date_of_birth) from person LEFT JOIN
> person_date_of_birth USING (person_id)
>
> SELECT first_name FROM person_view WHERE id = 1
>
> the planner is smart and:
> - will not care about joins - the field(s) you ask for do not belong to the
> other table - both belong to table 1 - and it is the pk!
> - will not care about functions - you haven't asked for any field that is a
> function in your query...
>
>  However - how we met more and more problems with our approach... and
> spending more time on
> EXPLAIN ANALYZE - instead of on business problems... It seems things are not
> that good...
>
> for simple questions as above - results are acceptable - even looking into
> EXPLAIN ANALYZE i would not say it is the best possible plan... (i.e.
> planner spending time on Seq Scan on person_date_of_birth_table - and filter
> it - even no need to think about that table at all - LEFT JOIN (show me
> columns - if there is a matching row for pk column) - so could be check via
> index -however - there is no any column from that table in the query - I
> would simple discard that table from plan
>
> So query
>
> SELECT id FROM view WHERE id = 5 (view is SELECT * FROM table1 LEFT JOIN
> table2)
>
> I would treat the same as:
> SELECT id FROM table1 WHERE id = 5
>
> ok, an INNER JOIN requires additional confirmation - but even there, with an
> FK to PK join, that confirmation is not needed either - and in our cases it
> is always FK to PK...
>
> However - if we need to involve more "entities"/views - from some unknown
> reason to me - postgres always picks bad plan...
>
> i.e. list of employees what work in New York
>
> we have employees_contract table:
> contract_id, person_id, department_id,
>
> a lot of others tables, but to make it shorter:
>
>
> Department_view
>
> Buildings_view
>
>
> and now query:
> SELECT full_name FROM person_view INNER JOIN employee_contract USING
> (person_id) INNER JOIN department_view USING (department_id) INNER JOIN
> buildings_view USING (building_id) WHERE city_id = 'NY'
>
>
> from some unknown reason - gives bad plan - then if we "refactor" query and
> send different question - we get good result... I am pretty sure planner
> should be capable to "rephrase" my question instead of me...
>
> I would like to hear your experience with VIEWS in Postgres... and some
> kind of best practice/advice for the described situation... So far it looks
> to me like there is no way to make things ready for any arbitrary question -
> every "request" will need specific SQL syntax to drive the planner in an
> acceptable direction...


You asked some broad questions so you are going to get broad answers.

*) the query planner is very complicated and changes are very incremental.
 only a very, very small number of people (Tom especially) are capable
of making major changes to it.  some known planner issues that might
get fixed in the short term are better handling of quals through UNION
ALL and/or pushing quals through partitioned window functions.  these
are documented shortcomings -- other improvements have to be looked at
through the lens of 'what else did you break', including,
unfortunately, plan time.

*) filtering in predi

[GENERAL] Access Oracle with dbi-link (PostgreSQL) compile error

2013-03-25 Thread Emanuel Araújo
Hello!

I'm having trouble setting up a database to access Oracle via dbi-link:
when installing DBD::Oracle version 1.58, the build fails on missing files,
such as "oci.h", which is included from oracle.h.

The purpose would be to sync data between two tools for integration.

Has anyone experienced this?
Have any solution or suggestion?
There is another tool that I could be using to make this access?

The following error is returned when running "make":

make
gcc -c -D_REENTRANT -D_GNU_SOURCE \
    -I/root/perl5/lib/perl5/x86_64-linux-thread-multi/auto/DBI \
    -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include \
    -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -pipe -Wall \
    -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \
    --param=ssp-buffer-size=4 -m64 -mtune=generic -DVERSION=\"1.58\" \
    -DXS_VERSION=\"1.58\" -fPIC "-I/usr/lib64/perl5/CORE" \
    -Wall -Wno-comment -DUTF8_SUPPORT -DORA_OCI_VERSION=\"10.2.0.3\" \
    -DORA_OCI_102 Oracle.c
In file included from Oracle.xs:1:0:
Oracle.h:37:17: fatal error: oci.h: No such file or directory
compilation terminated.
make: *** [Oracle.o] Error 1

Thank you.

-- 
Best regards,

Emanuel Araújo
http://eacshm.wordpress.com/
http://www.rootserv.com.br/
Linux Certified
LPIC-1


[GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
Hi,

I have two 9.2 databases running with hot_standby replication. Today when I
was checking, I found that replication has not been working since Mar 1st.
A large database was restored on the master that day, and I believe the lag
grew after that.

SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset

431326108320

SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS
receive,   pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0')
AS replay

   receive|replay
--+--
 245987541312 | 245987534032
(1 row)
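
The raw byte offsets above are hard to read at a glance. On 9.2, where pg_size_pretty() accepts numeric, the receive/replay gap can be shown in human-readable form; a sketch to run on the standby:

```sql
-- How far WAL replay lags behind what the standby has received:
SELECT pg_size_pretty(
         pg_xlog_location_diff(pg_last_xlog_receive_location(),
                               pg_last_xlog_replay_location())) AS replay_lag;
```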

I checked pg_xlog on both servers. On the slave, the last xlog file is:
-rw--- 1 postgres postgres 16777216 Mar  1 06:02
00010039007F

On the master, the first xlog file is:
-rw--- 1 postgres postgres 16777216 Mar  1 04:45
00010039005E


Is there any way I could sync the slave quickly?

Thanks.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread Lonni J Friedman
On Mon, Mar 25, 2013 at 12:37 PM, AI Rumman  wrote:
>
> Is there any way I could sync the slave quickly?

generate a new base backup, and seed the slave with it.
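
A sketch of the low-level form of that procedure using the SQL backup-control functions (pg_basebackup automates these steps; the label is arbitrary, and the file copy itself happens outside SQL):

```sql
-- On the master, mark the start of a base backup (the second argument
-- requests an immediate checkpoint):
SELECT pg_start_backup('reseed_standby', true);

-- ... copy the master's data directory to the standby, e.g. with rsync ...

-- Then mark the end of the backup:
SELECT pg_stop_backup();
```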


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 3:40 PM, Lonni J Friedman wrote:

> generate a new base backup, and seed the slave with it.
>

OK. I am getting this error on the slave:
LOG:  invalid contrecord length 284 in log file 57, segment 127, offset 0

What is the actual reason?

Thanks.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread Lonni J Friedman
On Mon, Mar 25, 2013 at 12:43 PM, AI Rumman  wrote:
> OK. I am getting this error on the slave:
> LOG:  invalid contrecord length 284 in log file 57, segment 127, offset 0
>
> What is the actual reason?

Corruption?  What were you doing when you saw the error?




Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 3:52 PM, Lonni J Friedman wrote:

> Corruption?  What were you doing when you saw the error?
>

I don't know much about these internals; I just took over the database and
saw the error.
Is there any way to recover from this state? The master is a large database,
about 500 GB.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread Lonni J Friedman
On Mon, Mar 25, 2013 at 12:55 PM, AI Rumman  wrote:
> I don't know much about these internals; I just took over the database and
> saw the error.
> Is there any way to recover from this state? The master is a large database,
> about 500 GB.

generate a new base backup, and seed the slave with it.  if the error
persists, then i'd guess that your master is corrupted, and then
you've got huge problems.




Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 4:03 PM, AI Rumman  wrote:

>> generate a new base backup, and seed the slave with it.  if the error
>> persists, then i'd guess that your master is corrupted, and then
>> you've got huge problems.
>>
>
> Master is running fine right now showing only a warning:
> WARNING:  archive_mode enabled, yet archive_command is not set
>
> Do you think the master could be corrupted?
>
>
Hi,

I learned that there was a master DB restart on Feb 27th. Could that be the
cause of this error?

Thanks.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread Lonni J Friedman
On Mon, Mar 25, 2013 at 1:23 PM, AI Rumman  wrote:
>> Master is running fine right now showing only a warning:
>> WARNING:  archive_mode enabled, yet archive_command is not set
>>
>> Do you think the master could be corrupted?
>>
>
> Hi,
>
> I learned that there was a master DB restart on Feb 27th. Could that be the
> cause of this error?
>

restarting the database cleanly should never cause corruption.  again,
you need to create a new base backup, and seed the slave with it.  if
the problem persists, then the master is likely corrupted.




Re: [GENERAL] PostgreSQL and VIEWS

2013-03-25 Thread Misa Simic
Thanks Merlin,

Well... sorry, it could be my bad English... but let me explain things
chronologically...

I have first written concrete case...

http://postgresql.1045698.n5.nabble.com/PostgreSQL-planner-tp5749427.html

But then I recognized the pattern - there is always a problem with a JOIN to
a view...

I have written this broad, generic question because, I think, Postgres has a
problem with JOINs to views in general... So probably someone before me has
had the same problem - and if that is the case, I just wanted to hear their
solution...

But from others' examples, and some EXPLAIN ANALYZE tests I have done...

i.e. SELECT t1.a FROM t1 LEFT JOIN t2 USING (a)

the planner includes some actions related to t2 - which are not necessary at
all... again - it is just my opinion :)
(Please, don't take this - I don't know... as some most important thing...)

So those are "small" problems - in our simplified examples - which have a
big impact on performance in slightly more complex ones...

So what we have identified until now is that the solution to our problem
with views is always: "rephrase the question" (not indexes - they exist,
they are just not used...)

for example:

SELECT view.* FROM view  INNER JOIN t1 USING (col1) WHERE t1.col2 = 1

to get better performance, you need to say:

SELECT view.* FROM view WHERE col1 = (SELECT t.col1 FROM t1 WHERE t1.col2 =
1)

Logically - those are the same questions - the result is the same... just,
for some reason unknown to me, Postgres in the first case picks the wrong
plan - and we get very bad performance... :(

So the solution to our problem is to add a "rephrase the question" tier
(analyze what the "input question" is - and transform it into better SQL
for Postgres).

And fortunately we have that flexibility in our app... And the way things
are, we will need to do it...

So, if input question is: SELECT t1.a FROM t1 LEFT JOIN t2 USING (a) -
transform it to: SELECT t1.a FROM t1

etc...

But, don't you think that would be better for Postgres planner in general?


Nowhere in our examples are cases like where A || B = 'x' ... or WHERE
volatile_function(a, b) = 5... etc...


Materialisation - well, that is another reason why we use VIEWS.

So for calculated properties of the things - we use SQL stable functions...

i.e the thing: Customer - is the VIEW in postgres:

cust_id, cust_name, blablabla...columns, customer_balance

customer_balance - is calculated property of Entity: Customer - postgres
function actually...

If customer_balance performs badly - involve a materialised view - and the
function will return the value from the mat view instead of doing the
calculation... But again - we don't have a problem with that :)


We have the problem with:


SELECT c.* FROM customers_view c INNER JOIN invoices USING (customer_id)
WHERE invoice_id = 156


And solution to our problem is: "rephrase the question" :)

Kind Regards,

Misa


Re: [GENERAL] PostgreSQL and VIEWS

2013-03-25 Thread Merlin Moncure
On Mon, Mar 25, 2013 at 4:32 PM, Misa Simic  wrote:
> for example:
>
> SELECT view.* FROM view  INNER JOIN t1 USING (col1) WHERE t1.col2 = 1
>
> to get better performance, you need to say:
>
> SELECT view.* FROM view WHERE col1 = (SELECT t.col1 FROM t1 WHERE t1.col2 =
> 1)


yeah.  I understand -- it would help to see a test case there.  the
devil is always in the details.  point being, let's take your other
example,

or the supplied test case you mentioned (where you evaluate a volatile
function in a view): things are working as designed.  the only
difference between a view and a regular query is that you get pushed
down one level in terms of subquery.  so,

select * from view;

is the same as:

select * from (<view definition>) q;

so, when using volatile function, the case basically boils down to:

SELECT * FROM (select volatile_func(), stuff FROM big_table) q WHERE
key = value;

that's a *very* different query vs:
select volatile_func(), stuff FROM big_table WHERE key = value;

the slower performance there is because logically you *have* to
evaluate the volatile function first -- things are working as designed.
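
A minimal sketch of that difference (the table, column, and function names are made up for illustration):

```sql
-- A volatile function: the planner must assume each call matters.
CREATE FUNCTION vf() RETURNS double precision
  VOLATILE LANGUAGE sql AS $$ SELECT random() $$;

CREATE VIEW v AS
  SELECT key, vf() AS x FROM big_table;

-- The volatile call keeps the subquery from being flattened, so the
-- filter cannot simply be pushed inside the view:
EXPLAIN SELECT * FROM v WHERE key = 1;

-- Written directly, the filter applies before the function is evaluated:
EXPLAIN SELECT key, vf() FROM big_table WHERE key = 1;
```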

merlin




[GENERAL] pg_stat_get_last_vacuum_time(): why non-FULL?

2013-03-25 Thread CR Lender
According to the manual (9.1), pg_stat_get_last_vacuum_time() returns

timestamptz | Time of the last non-FULL vacuum initiated by the
| user on this table

Why are full vacuums excluded from this statistic? It looks like there's
no way to get the date of the last manual vacuum, if only full vacuums
are performed.
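
For context, these timestamps are also exposed per table in the statistics views; the autovacuum column is included for comparison:

```sql
SELECT relname, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY relname;
```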


regards,
crl




[GENERAL] help me to clear postgres problem

2013-03-25 Thread jayaram s
Hello,
I have installed PostgreSQL 8.4.1 on my PC. For a data migration requirement
I now want to install PostgreSQL (EnterpriseDB) 9.2.
I couldn't install it: I selected the "postgresql compatible" option in
"configuration mode", so the installer prompted me for a password. I entered
my existing postgres password, "postgres", but the installation failed with
the error message "service user account 'postgres' could not be created".
Please help me to solve the problem.

-- 
With Regards

Jayaram


[GENERAL] UNLOGGED TEMPORARY tables?

2013-03-25 Thread aasat
I tested write speed to temporary and unlogged tables and noticed that
unlogged tables were much faster.

Postgres 9.2.2

Write speed

Temporary 14.5k/s
UNLOGGED 50k/s

Before the test I was convinced that temporary tables in postgres >= 9.1
are unlogged.
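For reproducibility, a sketch of how such a comparison might be set up in psql (the row count and timing method are illustrative, not the original test):

```sql
-- Run with psql's \timing enabled to compare elapsed insert times.
CREATE TEMPORARY TABLE t_temp (id int, payload text);
CREATE UNLOGGED TABLE t_unlogged (id int, payload text);

-- Insert identical data into each table and compare the timings.
INSERT INTO t_temp     SELECT g, 'x' FROM generate_series(1, 100000) g;
INSERT INTO t_unlogged SELECT g, 'x' FROM generate_series(1, 100000) g;
```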





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/UNLOGGED-TEMPORARY-tables-tp5749477.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] PostgreSQL EXCLUDE USING error: Data type integer has no default operator class

2013-03-25 Thread Denver Timothy

On Mar 22, 2013, at 2:57 PM, Ryan Kelly  wrote:

> On Fri, Mar 03/22/13, 2013 at 10:14:45AM -0600, Denver Timothy wrote:
>> In PostgreSQL 9.2.3 I am trying to create this simplified table:
>> 
>>CREATE TABLE test (
>>user_id INTEGER,
>>startend TSTZRANGE,
>>EXCLUDE USING gist (user_id WITH =, startend WITH &&)
>>);
>> 
>> But I get this error:
>> 
>>ERROR:  data type integer has no default operator class for access method 
>> "gist"
>>HINT:  You must specify an operator class for the index or define a 
>> default operator class for the data type.
>> 
>> I've spent quite a bit of time searching for hints on figuring out how to 
>> make this work, or figuring out why it won't work. I've also been trying to 
>> understand CREATE OPERATOR and CREATE OPERATOR CLASS, but those are over my 
>> head for now. Could anyone point me in the right direction?
> 
> CREATE EXTENSION btree_gist;

That was one of the first things I tried, but going back and looking at things 
for the millionth time, I found the error was buried in the script output:

ERROR: could not open extension control file 
"/opt/local/share/postgresql92/extension/btree_gist.control": No such file or 
directory

I also assumed it was installed because at least one of the previous examples 
worked.

It turns out the contrib module (and several others) were not installed. On Mac 
OS 10.8.2 using MacPorts, I did this:

$ su
# port build postgresql92
# cd `port work postgresql92`/postgresql-/contrib
# for d in *; do test -d $d && ( echo $d; cd $d; make all && make install; cd 
.. ); done

Now it works as expected.
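With the extension in place, the original example from the thread works; a minimal sketch:

```sql
-- btree_gist supplies the gist operator class for plain integer columns.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE test (
    user_id  INTEGER,
    startend TSTZRANGE,
    -- Reject rows where the same user has overlapping time ranges.
    EXCLUDE USING gist (user_id WITH =, startend WITH &&)
);

INSERT INTO test VALUES (1, '[2013-01-01, 2013-02-01)');
-- A second overlapping range for user 1 now fails with a
-- "conflicting key value violates exclusion constraint" error.
```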



[GENERAL] Moteview database problem

2013-03-25 Thread Hana'a AL-Theiabat

When I install Moteview 2.0 on Windows XP this problem appears:

"Moteview: the database server localhost is not available. Please input a
valid server name"

The version of pgsql is PostgreSQL 8.0.0-rc1.

Does anyone have any idea how to solve this?




Re: [GENERAL] UNLOGGED TEMPORARY tables?

2013-03-25 Thread Tom Lane
aasat  writes:
> I was tested write speed to temporary and unlogged tables and noticed that
> unlogged tables was a much faster

> Postgres 9.2.2

> Write speed

> Temporary 14.5k/s
> UNLOGGED 50k/s

I think there's something skewed about your test.

Temp tables *are* unlogged.  They also live in session-private buffers,
which eliminates a great deal of synchronization overhead; at the cost
that any writing that does happen has to be done by the backend process
itself, without help from the background writer.  It's possible that
there's something about your specific test case that makes that scenario
look bad.  Another likely source of bogus results is if you were testing
a tiny temp_buffers setting versus a more appropriately sized
shared_buffers setting.
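For instance, the session-local buffer pool serving temp tables is sized by temp_buffers, and it can only be raised before the session first touches a temporary table, so a lopsided comparison is easy to make by accident. A sketch (the size shown is illustrative):

```sql
-- temp_buffers (default 8MB) must be set before the first use of
-- temporary tables in the session; unlogged tables use shared_buffers.
SET temp_buffers = '256MB';
CREATE TEMPORARY TABLE t (id int, payload text);
```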

However, this is all speculation, since you provided not a whit of
detail about your test case.  Nobody's going to take these numbers
seriously if you haven't explained how to reproduce them.

regards, tom lane




Re: [GENERAL] UNLOGGED TEMPORARY tables?

2013-03-25 Thread Michael Paquier
On Tue, Mar 26, 2013 at 8:26 AM, Lonni J Friedman wrote:

> I'm pretty sure that unlogged tables and temp tables are two separate
> & distinct features, with no overlap in functionality.  It would be
> nice if it was possible to create an unlogged temp table.
>
Temporary tables are a subtype of unlogged tables, as temporary tables are
not WAL-logged.
This article from Robert Haas will give a good summary of such differences:
http://rhaas.blogspot.jp/2010/05/global-temporary-and-unlogged-tables.html
-- 
Michael


Re: [GENERAL] UNLOGGED TEMPORARY tables?

2013-03-25 Thread Lonni J Friedman
I'm pretty sure that unlogged tables and temp tables are two separate
& distinct features, with no overlap in functionality.  It would be
nice if it was possible to create an unlogged temp table.

On Sun, Mar 24, 2013 at 1:32 PM, aasat  wrote:
> I was tested write speed to temporary and unlogged tables and noticed that
> unlogged tables was a much faster
>
> Postgres 9.2.2
>
> Write speed
>
> Temporary 14.5k/s
> UNLOGGED 50k/s
>
> Before test I was convinced that temporary tables in postgres >= 9.1 are
> unlogged
>
>
>
>




Re: [GENERAL] help me to clear postgres problem

2013-03-25 Thread Guy Rouillier

On 3/25/2013 7:35 AM, jayaram s wrote:

Hello
I have installed PostgreSQL 8.4.1 in my PC. For the requirement of data
migration I again want to install "PostgreSQL enterprise DB  9.2".
I couldn't install it because
I have select option "postgresql compatible" on "configuration mode". So
prompt wants me to enter "password". I have enter my existing postgres
password "postgres'. But I couldn't install. An error message displayed
as*"service user account 'postgres' couldnot be created". Please help me
to clear the problem*


Are you intentionally trying to install PostgresPlus Advanced Server? 
If you are working just on your PC, you should be able to use the 
PostgreSQL installer: 
http://www.enterprisedb.com/products-services-training/pgdownload#windows


The password the PPAS installer is asking you for is the password to 
your EnterpriseDB account, not a local Windows account.  You need to 
register an EnterpriseDB account before you can install PPAS.


--
Guy Rouillier




Re: [GENERAL] UNLOGGED TEMPORARY tables?

2013-03-25 Thread Lonni J Friedman
On Mon, Mar 25, 2013 at 4:49 PM, Michael Paquier
 wrote:
>
>
> On Tue, Mar 26, 2013 at 8:26 AM, Lonni J Friedman 
> wrote:
>>
>> I'm pretty sure that unlogged tables and temp tables are two separate
>> & distinct features, with no overlap in functionality.  It would be
>> nice if it was possible to create an unlogged temp table.
>
> Temporary tables are a subtype of unlogged tables, as temporary tables are
> not WAL-logged.
> This article from Robert Haas will give a good summary of such differences:
> http://rhaas.blogspot.jp/2010/05/global-temporary-and-unlogged-tables.html


Thanks, that's good to know.  The official docs don't really make it
clear that temp tables are unlogged.




Re: [GENERAL] PostgreSQL and VIEWS

2013-03-25 Thread Misa Simic
hm...

I have provided examples - table definitions and the plan for each query
(in another thread).

I am not sure I buy that those are *very* different queries. I would say
they are the same - why would you need to evaluate 100 rows and then
reduce the end result to one?

Executing the function is the most expensive step... and the most
expensive step I would do at the end - not at the beginning - after I
have applied all filters. Of course, if my function is part of the
filter, then it is unavoidable and must be executed on all rows...

And even then I would do it only if it is needed, i.e. in:

SELECT stuff FROM (select immutable_func(), stuff FROM big_table) q

I would never execute the function - even though it is immutable. As I
understand it, an immutable function just has the advantage that it can
be executed once instead of once per row - even when you want all rows.
But if it is not referenced in the top query - who cares, why execute it
at all...


I mean - maybe it is "by design" - but is there some (hidden) reason why
you must execute a volatile function on all rows - and not just on the
rows that remain after the filter?

P.S. I took a volatile function as potentially the worst possible
scenario... though I don't think that explanation holds, because:

SELECT * FROM view_with_volatile_function WHERE indexed_column = 5 - uses
the index...

but

SELECT * FROM view_with_volatile_function INNER JOIN (SELECT 5 AS
indexed_column) q USING (indexed_column) - does not!

Logically - those are the same queries...
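The pair of queries above can be checked with EXPLAIN; a sketch with hypothetical table and view definitions matching the shapes in the message:

```sql
-- Hypothetical setup mirroring the two query shapes above.
CREATE TABLE big_table (indexed_column int, stuff text);
CREATE INDEX ON big_table (indexed_column);
CREATE VIEW view_with_volatile_function AS
    SELECT indexed_column, stuff, random() AS r FROM big_table;

-- Constant in the WHERE clause:
EXPLAIN SELECT * FROM view_with_volatile_function WHERE indexed_column = 5;

-- Same constant arriving through a join; compare the resulting plans:
EXPLAIN SELECT * FROM view_with_volatile_function
    INNER JOIN (SELECT 5 AS indexed_column) q USING (indexed_column);
```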

Thanks,

Misa

2013/3/26 Merlin Moncure 

> On Mon, Mar 25, 2013 at 4:32 PM, Misa Simic  wrote:
> > Thanks Merlin,
> >
> > Well... sorry, It could be and my bad english... but let me explain
> > chronologicaly things...
> >
> > I have first written concrete case...
> >
> >
> http://postgresql.1045698.n5.nabble.com/PostgreSQL-planner-tp5749427.html
> >
> > But because of I recognized the pattern - always is problem with JOIN to
> a
> > view...
> >
> > I have written this abroad generic question Because of, I think,
> > Postgres have problem with JOIN to a view in general...So probably
> someone
> > before me have had the same problem - and if that is the case I just
> wanted
> > to hear thier solution...
> >
> >  But from others examples, and some tests EXPLAIN ANALYZE I have done...
> >
> > i.e. SELECT t1.a FROM t1 LEFTJOIN t2 USING (a)
> >
> > Planer includes some actions related to t2 - what are not necessary at
> > all... again - it is just my opinion :)
> > (Please, don't take this - I don't know... as some most important
> thing...)
> >
> > So that are "small" problems - on our simplified examples - what have big
> > impact in performance on a bit complex examples...
> >
> > So what we have indentified until know - solution to our problem with
> views
> > - is always: "rephrase the question" (not indexes - they exist - just not
> > used...)
> >
> > for example:
> >
> > SELECT view.* FROM view  INNER JOIN t1 USING (col1) WHERE t1.col2 = 1
> >
> > to get better performance, you need to say:
> >
> > SELECT view.* FROM view WHERE col1 = (SELECT t.col1 FROM t1 WHERE
> t1.col2 =
> > 1)
>
>
> yeah.  I understand -- it would help to see a test case there.  the
> devil is always in the details.  point being, let's take your other
> example
>
> or the supplied test case you mentioned (where you evaluate a volatile
> function in a view), things are working as designed.  the only
> difference between  a view and a regular query is you get pushed down
> one level in terms if subquery.  so,
>
> select * from view;
>
> is the same as:
>
> select * from () q;
>
> so, when using volatile function, the case basically boils down to:
>
> SELECT * FROM (select volatile_func(), stuff FROM big_table) q WHERE
> key = value;
>
> that's a *very* different query vs:
> select volatile_func(), stuff FROM big_table WHERE key = value;
>
> the slower performance there is because logically you *have* to
> evaluate volatile performance first -- things are working as designed.
>
> merlin
>


[GENERAL] PostgreSQL service terminated by query

2013-03-25 Thread adrian . kitchingman
I'm hoping I can get some info on a query which terminates my PostgreSQL 
service.
The query is a relatively simple PostGIS query:

SELECT en.gid, ST_LENGTH(en.geom) total_length, 
(ST_DUMP(ST_INTERSECTION(en.geom, evc.geom))).geom::geometry(Linestring, 
3111) geom
FROM en, evc
WHERE ST_INTERSECTS(en.geom, evc.geom) AND en.gid =355620;

I've run this query successfully with en.gid equal to other gid values 
(gid is the table PK). The success seems rather haphazard though so I'd 
like to find out the underlying issue. The above feature with gid = 355620 
is valid and appears no different from other features which have worked.
The log text when the service crashes is:

2013-03-26 15:49:10 EST LOG:  database system was interrupted; last known 
up at 2013-03-26 15:42:29 EST
2013-03-26 15:49:10 EST LOG:  database system was not properly shut down; 
automatic recovery in progress
2013-03-26 15:49:10 EST LOG:  record with zero length at 9A/E7AAD938
2013-03-26 15:49:10 EST LOG:  redo is not required
2013-03-26 15:49:10 EST LOG:  database system is ready to accept 
connections
2013-03-26 15:49:10 EST LOG:  autovacuum launcher started
2013-03-26 15:49:55 EST LOG:  server process (PID 3536) was terminated by 
exception 0xC005
2013-03-26 15:49:55 EST HINT:  See C include file "ntstatus.h" for a 
description of the hexadecimal value.
2013-03-26 15:49:55 EST LOG:  terminating any other active server 
processes
2013-03-26 15:49:55 EST WARNING:  terminating connection because of crash 
of another server process
2013-03-26 15:49:55 EST DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2013-03-26 15:49:55 EST HINT:  In a moment you should be able to reconnect 
to the database and repeat your command.
2013-03-26 15:49:55 EST WARNING:  terminating connection because of crash 
of another server process
2013-03-26 15:49:55 EST DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2013-03-26 15:49:55 EST HINT:  In a moment you should be able to reconnect 
to the database and repeat your command.
2013-03-26 15:49:55 EST WARNING:  terminating connection because of crash 
of another server process
2013-03-26 15:49:55 EST DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2013-03-26 15:49:55 EST HINT:  In a moment you should be able to reconnect 
to the database and repeat your command.
2013-03-26 15:49:55 EST WARNING:  terminating connection because of crash 
of another server process
2013-03-26 15:49:55 EST DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2013-03-26 15:49:55 EST HINT:  In a moment you should be able to reconnect 
to the database and repeat your command.
2013-03-26 15:49:55 EST LOG:  all server processes terminated; 
reinitializing
2013-03-26 15:50:05 EST FATAL:  pre-existing shared memory block is still 
in use
2013-03-26 15:50:05 EST HINT:  Check if there are any old server processes 
still running, and terminate them.

I suspect the entry 'server process (PID 3536) was terminated by exception 
0xC005' is the culprit. Does anyone have any suggestions on the meaning 
and cause?
This may be a PostGIS issue, but I figured I'd see whether the general 
Postgres community might have more insight first.

I'm running PostgreSQL 9.1 with PostGIS 2.0 installed on an WinXP SP3: 4GB 
RAM machine. Shared_buffers set at 50MB. Let me know if further info 
needed.

Cheers

Adrian


