Re: [GENERAL] pg_advisory_lock problem

2014-08-11 Thread Rémi Cura
2014-08-11 5:33 GMT+02:00 John R Pierce :

>
> -
>
>> PERFORM pg_advisory_lock(#1 , 2 or 3 depending on the table#,id of the
>> stuff I want to upsert) ;
>> WITH stuff_to_upsert ()
>> ,updating AS (update returning id)
>> ,inserting AS (insert if not updated)
>> PERFORM pg_advisory_unlock(same as above).
>> 
>>
>
>
> ah, you're releasing the lock before the insert is committed, since this
> is all within a function call, its entirely within a single transaction.
>
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Hey,
thanks, you two!

Oh no, I hoped it was something fixable, because I don't think I can fix
this transaction problem. Since the big query is called in one plpgsql
function, it will all run in one transaction, so even if I resorted to
plpython in the upserting functions to issue a commit, it wouldn't work.
The only way would be to rewrite the big plpgsql query function in
another PL language, but I can't really afford that :-(

Damn, it is really hard to use a table from 2 different sessions at the
same time!

Thanks for your answers, and for the much needed typo correction ;-)

Cheers,
Rémi-C


[GENERAL] weird problem on a 9.1 slave: receives WAL but does not replay it?

2014-08-11 Thread Jov
Today our monitoring reported that a PG slave instance's disk space usage
reached 96%. I logged in to the machine and found that the pg_xlog dir
takes up more than 2TB, which is abnormal.
The number of WAL files in the pg_xlog dir is more than 130k, while we set
the WAL keep number to 8192.
I thought something had stopped the replay, so I checked
pg_stat_activity, pg_prepared_statements, pg_xact, etc., but everything
looked normal.
I ran:
ps auxwww | grep postgres
and could see the WAL receiver and streaming receiver working happily,
because the WAL file names and the streaming log id kept changing.

So I have no idea.

I then restarted the slave PG and found it recovering from a very old WAL
file, one month old.
We are now setting up a new slave for the master while letting the
recovery on this slave proceed.

the PG version is 9.1.9,OS is CentOS 6 x86-64.

Jov
blog: http://amutu.com/blog 


[GENERAL] PgAgent binary distribution for Centos 6.5

2014-08-11 Thread Jen Sing Choe
Hi guys,
does anyone have a link to download a pgAgent binary distribution for
CentOS? I tried building from the tar source but got stuck resolving the
wxGTK (wxWidgets) dependencies.

many thanks



Re: [GENERAL] pg_advisory_lock problem

2014-08-11 Thread Kevin Grittner
Rémi Cura  wrote:

> 2014-08-11 5:33 GMT+02:00 John R Pierce :

>> ah, you're releasing the lock before the insert is committed,
>> since this is all within a function call, its entirely within a
>> single transaction.

> Oh no I hoped it was something fixable

Well, it might be.  Try using a transactional advisory lock and
letting it expire at the end of the transaction, rather than
explicitly releasing it before the transaction commits.  Depending 
on some other details, that might get it to do what you want.
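
As a minimal sketch of that idea, assuming a hypothetical table
stuff(id int primary key, val text): pg_advisory_xact_lock() is held
until the transaction ends, so the lock can no longer be released before
the INSERT commits.

```sql
-- Sketch only: serialize upserts per id without an explicit unlock.
-- The transaction-scoped advisory lock is released automatically at
-- COMMIT or ROLLBACK, closing the window John described.
CREATE OR REPLACE FUNCTION upsert_stuff(p_id int, p_val text)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    PERFORM pg_advisory_xact_lock(1, p_id);  -- (keyspace, id)
    UPDATE stuff SET val = p_val WHERE id = p_id;
    IF NOT FOUND THEN
        INSERT INTO stuff (id, val) VALUES (p_id, p_val);
    END IF;
END;
$$;
```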

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] pgcluu

2014-08-11 Thread Ramesh T
Hello ,

I followed the suggested method and got this error at perl Makefile.PL:
tar xzf pgcluu-2.0.tar.gz
-bash-4.1$ cd pgcluu-2.0
*-bash-4.1$ perl Makefile.PL*
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
Makefile.PL line 1.
BEGIN failed--compilation aborted at Makefile.PL line 1.
-bash-4.1$

Then I tried to install ExtUtils::MakeMaker, and in the same place perl
Makefile.PL reported an error:

 tar xzf ExtUtils-MakeMaker-6.99_07.tar.gz
-bash-4.1$ cd ExtUtils-MakeMaker-6.99_07
-bash-4.1$ perl Makefile.PL
Using included version of CPAN::Meta (2.120351) because it is not already installed.
Using included version of ExtUtils::Install (1.54) because it is not already installed.
Using included version of CPAN::Meta::YAML (0.008) because it is not already installed.
Using included version of CPAN::Meta::Requirements (2.120351) because it is not already installed.
Using included version of File::Copy::Recursive (0.38) because it is not already installed.
Using included version of Parse::CPAN::Meta (1.4405) because it is not already installed.
Using included version of JSON::PP (2.27203) because it is not already installed.
Using included version of JSON::PP::Compat5006 (1.09) because it is not already installed.
Using included version of ExtUtils::Manifest (1.60) because it is not already installed.
Generating a Unix-style Makefile
Writing Makefile for ExtUtils::MakeMaker
Writing MYMETA.yml and MYMETA.json
Can't locate Test/Harness.pm in @INC (@INC contains: bundled/CPAN-Meta
bundled/ExtUtils-Install bundled/CPAN-Meta-YAML bundled/CPAN-Meta-Requirements
bundled/File-Copy-Recursive bundled/Parse-CPAN-Meta bundled/JSON-PP
bundled/version bundled/Scalar-List-Utils bundled/JSON-PP-Compat5006
bundled/ExtUtils-Command bundled/ExtUtils-Manifest bundled/File-Temp lib .
/usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at
Makefile.PL line 142.


When I try to install Test::Harness, it also fails to install.

How can I fix this issue?

thanks,
rao


On Sat, Aug 9, 2014 at 9:43 AM, Adrian Klaver 
wrote:

> On 08/08/2014 06:40 AM, Ramesh T wrote:
>
>> Hi,
>> I want to install pgcluu on Postgres 9.3, and I'm using the putty tool to
>> connect to the pg database. When I ran:
>>tar xzf pgcluu-2.0.tar.gz
>>  cd pgcluu-2.0.tar/
>> perl Makefile.PL
>>  make && sudo make install
>>
>> it's return like..
>> bash-4.1$ tar xzf pgcluu-2.0.tar.gz
>> tar (child): pgcluu-2.0.tar.gz: Cannot open: No such file or directory
>> tar (child): Error is not recoverable: exiting now
>> tar: Child returned status 2
>> tar: Error is not recoverable: exiting now
>>
>
> Well at this point you are dead in the water, all the other steps are
> bound to fail.
>
> Are you sure pgcluu-2.0.tar.gz is there?
> If it is, then it may be corrupted, so try downloading it again.
>
>
>
>  -bash-4.1$ cd pgcluu-2.0.tar/
>> -bash: cd: pgcluu-2.0.tar/: No such file or directory
>> -bash-4.1$perl Makefile.PL
>> Can't open perl script "Makefile.PL": No such file or directory
>> -bash-4.1$ make && sudo make install
>>
>> where do i run the tar file let me know..
>> thanks in advance..
>> R
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] postgresql referencing and creating types as record

2014-08-11 Thread vpmm2007
type function is record (f1 NUMERIC, f2 NUMERIC, ...); -- this is in Oracle

Kindly tell me what the substitute for "is record" is in Postgres.

It's urgent.

thanks and rgds
vpmm



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/postgresql-referencing-and-creating-types-as-record-tp5813901.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] pgcluu

2014-08-11 Thread David Carpio

Hello

You must install the Test::Harness module.

You may do it via yum:

yum install perl-Test-Harness

or download the tar package and install it:

http://search.cpan.org/~leont/Test-Harness-3.32/lib/Test/Harness.pm

I hope this helps you.
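
Putting the whole thread together, a sketch of the steps on CentOS 6
(package names assumed from the stock CentOS repos, and sudo access
assumed):

```shell
# Install the Perl build prerequisites, then build pgcluu.
# perl-ExtUtils-MakeMaker provides Makefile.PL support;
# perl-Test-Harness provides Test/Harness.pm.
sudo yum install -y perl-ExtUtils-MakeMaker perl-Test-Harness
tar xzf pgcluu-2.0.tar.gz
cd pgcluu-2.0
perl Makefile.PL
make && sudo make install
```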

David




Re: [GENERAL] pg_advisory_lock problem

2014-08-11 Thread Merlin Moncure
On Mon, Aug 11, 2014 at 9:49 AM, Kevin Grittner  wrote:
> Rémi Cura  wrote:
>
>> 2014-08-11 5:33 GMT+02:00 John R Pierce :
>
>>> ah, you're releasing the lock before the insert is committed,
>>> since this is all within a function call, its entirely within a
>>> single transaction.
>
>> Oh no I hoped it was something fixable
>
> Well, it might be.  Try using a transactional advisory lock and
> letting it expire at the end of the transaction, rather than
> explicitly releasing it before the transaction commits.  Depending
> on some other details, that might get it to do what you want.

Better to use vanilla LOCK TABLE statement in my opinion for this purpose.
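
For reference, a sketch of what that vanilla LOCK TABLE approach could
look like (the table name stuff and its columns are placeholders, not
from the thread); SHARE ROW EXCLUSIVE conflicts with itself and with
ordinary writers, so upserting sessions serialize while plain readers
proceed:

```sql
BEGIN;
-- Only one upserting session at a time gets this lock; plain SELECTs
-- on the table are not blocked.
LOCK TABLE stuff IN SHARE ROW EXCLUSIVE MODE;

UPDATE stuff SET val = 'x' WHERE id = 42;
INSERT INTO stuff (id, val)
SELECT 42, 'x'
WHERE NOT EXISTS (SELECT 1 FROM stuff WHERE id = 42);

COMMIT;  -- the table lock is released here
```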

merlin




Re: [GENERAL] postgresql referencing and creating types as record

2014-08-11 Thread Adrian Klaver

On 08/10/2014 09:50 PM, Vaishali Maheshkar wrote:


SIR,

THANKS FOR YOUR REPLY ,


CCing list.



Actually I am trying to create record-type data in Postgres, but I can't
declare it directly as in Oracle. My purpose is to create record types;
in Oracle this was done by creating a package and declaring all the
types inside it. It was then used in other packages as:

create or replace function IT_R_A_get_se_CE_without_org
(
    in_LT_PAYMENT      in  CUM_A_TYPES.LT_PAYMENT,
    out_se_ce_hashmap  out IT_CUSTOM_RECORD_TYPES.LT_NUMBER_HASHMAP
)
as..


i want to convert this package to postgres.

it would be great if i get the solution.


The solution(s) have already been presented. Either get EDB Advanced 
Server Plus and its Oracle compatibility features or use the links below 
to craft your own solution. The basic issue is the community version of 
Postgres does not have packages, so you will need to go another route if 
you stick with it. That route would most likely revolve around using 
CREATE TYPE to create the types you had in the package. Another way 
would be to use IN and OUT parameters to a function which is shown in 
the links below. The method you choose is dependent on the bigger 
picture of what you are doing with the database/application/middleware. 
Only you have that full picture, so there is only so much we can do from 
this end.
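
As a concrete illustration of the CREATE TYPE route, here is one hedged
way the two record types from the quoted CUM_A_TYPES package could be
translated (field names are taken from the thread; the precision mapping
is an assumption):

```sql
-- Composite types standing in for the Oracle package record types.
CREATE TYPE lt_payment AS (
    basic      numeric,
    dp         numeric,
    grade_pay  numeric
);

CREATE TYPE type_calc_tax AS (
    fin_role_id        numeric(8,0),
    calc_for_role_code numeric(4,0)
);

-- A function can then take or return them, e.g.:
CREATE OR REPLACE FUNCTION payment_total(p lt_payment)
RETURNS numeric LANGUAGE sql AS $$
    SELECT (p).basic + (p).dp + (p).grade_pay;
$$;
```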




thx a lot


On Fri, Aug 8, 2014 at 7:15 PM, Adrian Klaver mailto:adrian.kla...@aklaver.com>> wrote:

On 08/07/2014 10:23 PM, vpmm2007 wrote:

   In reply to this post by vpmm2007
create or replace package CUM_A_TYPES
as
    type LT_PAYMENT is record
    (
        BASIC      number,
        DP         number,
        GRADE_PAY  number
    );
    TYPE TYPE_CALC_TAX is record
    (
        FIN_ROLE_ID        number(8),
        CALC_FOR_ROLE_CODE number(4)
    );

NEED TO CONVERT THIS TO POSTGRES, ANYBODY PLS HELP ME I M NEW
TO POSTGRES


Well you have not provided context for where and how you are using
this, so I will wing it.

 From here:


http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/record_definition.htm



I suspect it is being used in a PL/SQL function. In that case David's
previous post holds. Look in:

CREATE TYPE
http://www.postgresql.org/docs/9.3/interactive/sql-createtype.html


CREATE FUNCTION
http://www.postgresql.org/docs/9.3/interactive/sql-createfunction.html


pl/pgsql
http://www.postgresql.org/docs/9.3/interactive/plpgsql.html


In particular:

http://www.postgresql.org/docs/9.3/interactive/plpgsql-declarations.html



If you want a more specific answer, we will need more specific
information.


THX&RGDS
VPMM



--
View this message in context:

http://postgresql.1045698.n5.nabble.com/postgresql-referencing-and-creating-types-as-record-tp5813901p5814176.html


Sent from the PostgreSQL - general mailing list archive at
Nabble.com.




--
Adrian Klaver
adrian.kla...@aklaver.com 




--
Regards
Vaishali Maheshkar





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_advisory_lock problem

2014-08-11 Thread Rémi Cura
Hey,
as you (both?) suggested, it works using an advisory lock taken at the
beginning and held until the end of the transaction. This way there is no
upsert at all while the element is locked. (I used regular advisory
locks, but in the same way as transactional locks.)
The issue in this case is simple: I have to use about 100k advisory
locks, which is a big memory requirement for my hardware :-(

Merlin, I'm afraid I don't understand what a "vanilla" LOCK TABLE is.

I can't really use a table lock, because each query upserts sequentially
into 3 tables, doing lots of computing in between.
I use parallel queries to compute faster (load dividing). I guess it
would be very slow with about 8 parallel queries contending for locks.

I should test this lock approach to be sure.

Thanks both of you !
Cheers,
Rémi-C



2014-08-11 17:51 GMT+02:00 Merlin Moncure :

> On Mon, Aug 11, 2014 at 9:49 AM, Kevin Grittner  wrote:
> > Rémi Cura  wrote:
> >
> >> 2014-08-11 5:33 GMT+02:00 John R Pierce :
> >
> >>> ah, you're releasing the lock before the insert is committed,
> >>> since this is all within a function call, its entirely within a
> >>> single transaction.
> >
> >> Oh no I hoped it was something fixable
> >
> > Well, it might be.  Try using a transactional advisory lock and
> > letting it expire at the end of the transaction, rather than
> > explicitly releasing it before the transaction commits.  Depending
> > on some other details, that might get it to do what you want.
>
> Better to use vanilla LOCK TABLE statement in my opinion for this purpose.
>
> merlin
>


Re: [GENERAL] pg_advisory_lock problem

2014-08-11 Thread Kevin Grittner
Rémi Cura  wrote:

> as you (both?) suggested it works using advisory lock used at the
> beginning and end of the transaction. This way there is no upsert
> at all if the element is locked? (I used general advisory lock but
> in the same way as transactional lock)

This is too vague to comment on.

> The issue in this case is simple : I have to use about 100k
> advisory locks, which is a big memory requirement for my hardware
> :-(

... and that doesn't seem to make any sense.  Either you are not
understanding advisory locks or you are doing something very, very
unusual.

> Merlin I'm afraid I don't understand what is "vanilla" LOCK
> TABLE.

See the LOCK TABLE command.

http://www.postgresql.org/docs/current/interactive/sql-lock.html

http://www.postgresql.org/docs/current/interactive/explicit-locking.html

> I can't really use a lock table because each query upsert
> sequentially into 3 tables, doing lots of computing between.

Now *that* I understand.  :-)  It's not an unusual requirement,
but can be a challenge when using snapshot isolation (where writes
don't block reads and reads don't block anything).  There are two
main approaches -- introduce blocking to serialize some of the
operations, or use the SERIALIZABLE transaction isolation level to
ensure that the behavior of all concurrent transactions is
consistent with the behavior you would see if they were run one at
a time.  The latter approach doesn't introduce any new blocking,
but it can cause transactions to get an ERROR with a SQLSTATE of
40001 at just about any point, so you need to be prepared to
recognize that and retry those transactions from the beginning (not
just the last statement of the transaction), ignoring any data read
during the failed attempt.

You may want to read the entire chapter on concurrency control:

http://www.postgresql.org/docs/current/interactive/mvcc.html

If you are considering using SERIALIZABLE transactions, you should
probably review the examples in the Wiki, to get an idea of how it
behaves in various cases:

http://wiki.postgresql.org/wiki/SSI

> I use parallel query to compute faster (load dividing). I guess
> it would be very slow with about 8 parallel queries with locks.

Well, if you introduce blocking you reduce your parallelism, but if
you use serializable transactions and there are actually a lot of
conflicts you can see poor performance because of the errors
rolling back transactions and the need to retry them from the
start.  The techniques used to implement serializable transactions
in PostgreSQL are basically a refinement of the Optimistic
Concurrency Control (OCC) techniques, but generally with far fewer
retries needed -- the point being that it optimistically assumes
that there will not be a conflict so that concurrency is better,
but has to cancel things if that optimism proves to be unfounded.

To make related changes to multiple tables and maintain coherent data,
you will probably need to do one or the other.
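
As a sketch of the SERIALIZABLE route described above (table name t1 is
hypothetical), each worker wraps its whole unit of work like this, and
the client re-runs the transaction from BEGIN whenever any statement or
the COMMIT fails with SQLSTATE 40001:

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;

-- ... the full upsert sequence across the tables goes here ...
UPDATE t1 SET val = 'x' WHERE id = 42;
INSERT INTO t1 (id, val)
SELECT 42, 'x'
WHERE NOT EXISTS (SELECT 1 FROM t1 WHERE id = 42);

COMMIT;  -- may fail with serialization_failure (SQLSTATE 40001);
         -- on 40001 the client must retry the whole transaction,
         -- discarding anything read during the failed attempt.
```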

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[GENERAL] Streaming replica refuse to do restore_command

2014-08-11 Thread Soni M
Hello All,

This is how I set up the db:
The slave uses streaming replication.
We configured the slave to run pg_dump, which usually takes about 12 hours.
We have limited pg_xlog space on the slave.
At one point the pg_xlog on the slave filled up while pg_dump was still
in progress:

2014-08-11 09:39:23.226 CDT,,,25779,,53d26b30.64b3,2,,2014-07-25 09:35:28
CDT,,0,FATAL,53100,"could not write to file ""pg_xlog/xlogtemp.25779"": No
space left on device",""

After some restart points, the slave keeps trying to connect to the
primary and refuses to run restore_command.

Is this expected behaviour?

Full log :

*2014-08-11 09:39:23.226 CDT,,,25779,,53d26b30.64b3,2,,2014-07-25 09:35:28
CDT,,0,FATAL,53100,"could not write to file ""pg_xlog/xlogtemp.25779"": No
space left on device",""*
*2014-08-11 09:46:36.163 CDT,,,14394,,53c582ea.383a,14031,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint starting: time",""*
2014-08-11 09:49:33.786 CDT,,,14394,,53c582ea.383a,14032,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint complete: wrote 21862 buffers
(4.2%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=176.358 s, sync=1.243 s, total=177.623 s; sync files=683,
longest=0.159 s, average=0.001 s",""
2014-08-11 09:49:33.787 CDT,,,14394,,53c582ea.383a,14033,,2014-07-15
14:37:14 CDT,,0,LOG,0,"recovery restart point at 2AC5/36CE52A0","last
completed transaction was at log time 2014-08-10
23:01:55.754838-05"""
2014-08-11 09:49:33.790 CDT,,,14394,,53c582ea.383a,14034,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint starting: xlog",""
2014-08-11 09:51:37.256 CDT,,,14394,,53c582ea.383a,14035,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint complete: wrote 76725 buffers
(14.6%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=122.170 s, sync=1.284 s, total=123.465 s; sync files=770,
longest=0.424 s, average=0.001 s",""
2014-08-11 09:51:37.256 CDT,,,14394,,53c582ea.383a,14036,,2014-07-15
14:37:14 CDT,,0,LOG,0,"recovery restart point at 2AC5/BE0D9590","last
completed transaction was at log time 2014-08-11
01:10:49.220537-05"""
2014-08-11 09:51:37.258 CDT,,,14394,,53c582ea.383a,14037,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint starting: xlog",""
2014-08-11 09:54:23.552 CDT,,,14394,,53c582ea.383a,14038,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint complete: wrote 49243 buffers
(9.4%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=164.859 s, sync=1.428 s, total=166.293 s; sync files=794,
longest=0.157 s, average=0.001 s",""
2014-08-11 09:54:23.552 CDT,,,14394,,53c582ea.383a,14039,,2014-07-15
14:37:14 CDT,,0,LOG,0,"recovery restart point at 2AC6/571657C8","last
completed transaction was at log time 2014-08-11
04:43:16.23604-05"""
2014-08-11 09:54:23.561 CDT,,,14394,,53c582ea.383a,14040,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint starting: xlog",""
2014-08-11 09:57:51.829 CDT,,,14394,,53c582ea.383a,14041,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint complete: wrote 215682 buffers
(41.1%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=206.897 s, sync=1.364 s, total=208.267 s; sync files=697,
longest=0.110 s, average=0.001 s",""
2014-08-11 09:57:51.829 CDT,,,14394,,53c582ea.383a,14042,,2014-07-15
14:37:14 CDT,,0,LOG,0,"recovery restart point at 2AC7/FBE0D48","last
completed transaction was at log time 2014-08-11
06:42:22.051994-05"""
2014-08-11 09:57:51.832 CDT,,,14394,,53c582ea.383a,14043,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint starting: xlog",""
2014-08-11 10:00:18.828 CDT,,,14394,,53c582ea.383a,14044,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint complete: wrote 127358 buffers
(24.3%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=145.721 s, sync=1.263 s, total=146.995 s; sync files=905,
longest=0.038 s, average=0.001 s",""
2014-08-11 10:00:18.828 CDT,,,14394,,53c582ea.383a,14045,,2014-07-15
14:37:14 CDT,,0,LOG,0,"recovery restart point at 2AC7/E80EDE30","last
completed transaction was at log time 2014-08-11
07:15:48.30725-05"""
2014-08-11 10:00:18.833 CDT,,,14394,,53c582ea.383a,14046,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint starting: xlog",""
2014-08-11 10:03:27.063 CDT,,,14394,,53c582ea.383a,14047,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint complete: wrote 168653 buffers
(32.2%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=186.808 s, sync=1.409 s, total=188.229 s; sync files=886,
longest=0.151 s, average=0.001 s",""
2014-08-11 10:03:27.063 CDT,,,14394,,53c582ea.383a,14048,,2014-07-15
14:37:14 CDT,,0,LOG,0,"recovery restart point at 2AC8/6D003160","last
completed transaction was at log time 2014-08-11
07:40:58.390313-05"""
2014-08-11 10:03:27.065 CDT,,,14394,,53c582ea.383a,14049,,2014-07-15
14:37:14 CDT,,0,LOG,0,"restartpoint starting: xlog",""
2014-08-11 10:06:43.148 CDT,,,14394,,53c582ea.383a,14050,,2014-07-15
14:37:

Re: [GENERAL] Streaming replica refuse to do restore_command

2014-08-11 Thread Michael Paquier
On Tue, Aug 12, 2014 at 2:10 PM, Soni M  wrote:
> This is how i set up the db :
> Slave using streaming replica.
> We configure slave to run pg_dump which usually last for about 12 hours.
> We have limited pg_xlog on slave.
> Once the pg_xlog on slave is full while pg_dump still in progress.
What is the version of Postgres you are using?

> after some restart point, the slave is trying to connect to primary and
> refuse to do restore_command.
> Is this an expected behaviour?
> 2014-08-11 10:15:53.298 CDT,,,29141,,53e8de29.71d5,2,,2014-08-11 10:15:53
> CDT,,0,FATAL,XX000,"could not receive data from WAL stream: FATAL:
> requested WAL segment 00032ACC00A6 has already been removed
> ",""
> 2014-08-11 10:15:58.550 CDT,,,29146,,53e8de2e.71da,1,,2014-08-11 10:15:58
> CDT,,0,LOG,0,"streaming replication successfully connected to
> primary",""

A restart point is created each time a standby server shuts down during
WAL replay. It seems that you stopped it for too long, and the WAL files
that this server expected to find are gone because it was out of sight
for too long. When a standby starts and enters recovery, it first tries
to recover the necessary WAL files from the archives with
restore_command, and then switches to streaming mode if it cannot find
what it needs. Are you sure that you kept enough WAL history in your
archives and that restore_command is pointing to the right path/host?
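
For reference, a sketch of the standby's recovery.conf for this kind of
setup (9.1-era syntax; the host name and archive path are placeholders,
not taken from the thread):

```
# recovery.conf on the standby (PostgreSQL 9.1)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
# restore_command is consulted before falling back to streaming;
# it must point at the same archive the primary writes to.
restore_command = 'cp /path/to/wal_archive/%f "%p"'
```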
Regards,
-- 
Michael




Re: [GENERAL] Streaming replica refuse to do restore_command

2014-08-11 Thread Soni M
On Tue, Aug 12, 2014 at 12:37 PM, Michael Paquier  wrote:

> On Tue, Aug 12, 2014 at 2:10 PM, Soni M  wrote:
> > This is how i set up the db :
> > Slave using streaming replica.
> > We configure slave to run pg_dump which usually last for about 12 hours.
> > We have limited pg_xlog on slave.
> > Once the pg_xlog on slave is full while pg_dump still in progress.
> What is the version of Postgres you are using?
>

Pg 9.1, from the RHEL packages.

> after some restart point, the slave is trying to connect to primary and
> > refuse to do restore_command.
> > Is this an expected behaviour?
> > 2014-08-11 10:15:53.298 CDT,,,29141,,53e8de29.71d5,2,,2014-08-11 10:15:53
> > CDT,,0,FATAL,XX000,"could not receive data from WAL stream: FATAL:
> > requested WAL segment 00032ACC00A6 has already been removed
> > ",""
> > 2014-08-11 10:15:58.550 CDT,,,29146,,53e8de2e.71da,1,,2014-08-11 10:15:58
> > CDT,,0,LOG,0,"streaming replication successfully connected to
> > primary",""
>
> A restart point is created each time a standby server shuts down
> during WAL replay. It seems that you stopped it for a too long time,
> and that WAL files that this server expected to find are done because
> it was out of sight for a too long time. Then, when a standby starts
> and enters in recovery, it tries first to recover necessary WAL files
> from the archives with restore_command and then switches to streaming
> mode if it cannot find what is necessary. Are you sure that you kept
> enough WAL history in your archives and that restore_command is
> pointing to the right path/host?
> Regards,
> --
> Michael
>

The WAL files needed are still in the archives, and the path is right.

I tried this and it's solved:
I moved all pg_xlog files to bigger storage, manually copied the needed
WAL files from the archive, and restarted the server; it then ran
smoothly.

-- 
Regards,

Soni Maula Harriz