It seems it was a Postgres bug on the replica; after upgrading the minor
version to 9.1.21 on replica1, the corruption went away.
Thanks, everyone, for the help.
On Tue, Apr 5, 2016 at 1:32 AM, Soni M wrote:
> Hello Adrian, thanks for the response.
>
> The master data is also located on the SAN.
>
messages detected.
On Sun, Apr 3, 2016 at 11:23 PM, Adrian Klaver
wrote:
> On 04/02/2016 08:38 PM, Soni M wrote:
>
>> Hello Everyone,
>>
>> We face TOAST table corruption.
>>
>> One master and two streaming replicas. The corruption happens only on
>> both
, Joshua D. Drake
wrote:
>
> What version of PostgreSQL and which OS?
>
>
> On 04/02/2016 08:38 PM, Soni M wrote:
>
>
>> How can the corruption occurs ? and how can I resolve them ?
>>
>> Thank so much for the help.
>>
>> Cheers \o/
>>
>>
Hello Everyone,
We are facing TOAST table corruption.
We have one master and two streaming replicas. The corruption happens only on
the two streaming replicas.
We did find the corrupted rows. Selecting such a row returns (on both
replicas): unexpected chunk number 0 (expected 1) for toast value
1100613112 in pg
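A minimal sketch of one way to locate such rows (table and column names here
are hypothetical, not our actual schema): force detoasting row by row and
report the rows that error out.

DO $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT id FROM my_table LOOP
        BEGIN
            -- touching the TOASTed column forces detoasting
            PERFORM length(payload::text) FROM my_table WHERE id = r.id;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'corrupted row: id = %', r.id;
        END;
    END LOOP;
END;
$$;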
This is hard to tell, but you can get an estimate.
1. Estimate the WAL rate from the pg_xlog/ dir, i.e. how many WAL segments are
generated per minute.
2. Estimate how long the pg_basebackup will take. Let's say 3 hours.
Then multiply the values from #1 and #2 to get a rough estimate (sketch below).
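As a rough sketch of #1 (assumes 9.2+ for pg_xlog_location_diff; on older
versions you can count the 16 MB segment files appearing in pg_xlog/ instead):

-- sample the WAL position twice, a known interval apart
SELECT pg_current_xlog_location();   -- note the value, e.g. '1A2B/3C4D5E6F'
-- ... wait, say, 10 minutes ...
SELECT pg_xlog_location_diff(pg_current_xlog_location(), '1A2B/3C4D5E6F')
       / (1024 * 1024) AS wal_mb_in_10_min;   -- literal = your first sample

-- then for a 3 hour pg_basebackup:
--   (wal_mb_in_10_min / 10) * 180  ~=  MB of WAL generated during the backup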
Hope this would h
On Sat, Aug 23, 2014 at 2:18 AM, Joseph Kregloh
wrote:
>
>
> On Fri, Aug 22, 2014 at 2:21 PM, Jerry Sievers
> wrote:
>
>> Joseph Kregloh writes:
>>
>> > Hi,
>> >
>> > Currently I am doing asynchronous replication from master to
>> > slave. Now if I restart the slave it will fall out of sync wit
Here are the explain analyze results: http://explain.depesz.com/s/Mvv and
http://explain.depesz.com/s/xxF9
It seems that I need to dig more into the query planner parameters.
BTW, thanks all for the help.
On Sat, Aug 23, 2014 at 4:33 PM, Alban Hertroys wrote:
> On 23 Aug 2014, at 4:34, Soni
On Fri, Aug 22, 2014 at 9:10 PM, Alban Hertroys wrote:
> On 22 August 2014 14:26, Soni M wrote:
> > Currently we have only latest_transmission_id as FK, described here :
> > TABLE "ticket" CONSTRAINT "fkcbe86b0c6ddac9e" FOREIGN KEY
> > (latest_tran
On Thu, Aug 21, 2014 at 9:26 AM, David G Johnston <
david.g.johns...@gmail.com> wrote:
> Soni M wrote
> > Hi Everyone,
> >
> > I have this query :
> >
> > select t.ticket_id ,
> > tb.transmission_id
> > from ticket t,
> >
Hi Everyone,
I have this query:
select t.ticket_id ,
tb.transmission_id
from ticket t,
transmission_base tb
where t.latest_transmission_id = tb.transmission_id
and t.ticket_number = tb.ticket_number
and tb.parse_date > ('2014-07-31');
Execution plan: http://explain.depesz.com/s/YAak
Indexes on
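For reference, a sketch of the kind of indexes I would expect this date filter
and join to use (the names are made up and may duplicate what already exists;
my actual index list above got cut off):

CREATE INDEX transmission_base_parse_date_idx
    ON transmission_base (parse_date);
CREATE INDEX ticket_latest_tx_ticket_number_idx
    ON ticket (latest_transmission_id, ticket_number);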
it is creating blank log file on
> pg_log.
>
>
>
> *From:* Soni M [mailto:diptat...@gmail.com]
> *Sent:* 13 August 2014 15:02
> *To:* M Tarkeshwar Rao
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] Can I see the detailed log of query fired by
> p
In each session created by the client, run SET log_statement TO 'all'
before firing your query.
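For example (the role-level variant and the role name are just an
illustration, not something specific to your setup):

-- per-session: log every statement from this session (needs superuser)
SET log_statement = 'all';

-- or persist it for every new session of a given client role
ALTER ROLE app_user SET log_statement = 'all';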
On Wed, Aug 13, 2014 at 4:21 PM, M Tarkeshwar Rao <
m.tarkeshwar@ericsson.com> wrote:
> Hi all,
>
>
>
> Can I see the detailed log of query fired by particular Postgres client
> on Postgres serve
Do you run intensive read queries on the slave?
If yes, query conflicts can cause that:
http://www.postgresql.org/docs/9.1/static/hot-standby.html#HOT-STANDBY-CONFLICT
On a conflict, the incoming xlog stream is saved in the xlog dir on the slave
instead of being replayed. This happens until the slave has the opportunity to write all
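The standby-side knobs involved are roughly these (a sketch; the values in the
comments are illustrative, both are postgresql.conf settings on the slave):

SHOW max_standby_streaming_delay;  -- how long replay waits for conflicting queries
SHOW hot_standby_feedback;         -- if on, the master avoids removing rows that
                                   -- the slave's queries still need
-- e.g. in postgresql.conf on the slave:
--   max_standby_streaming_delay = 300s
--   hot_standby_feedback = on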
General advice is to set shared_buffers to 25% of total RAM, leaving 75% of RAM
for the OS cache.
In my case (1.5 TB database, 145 GB RAM), setting shared_buffers bigger
than 8GB gave no significant performance impact.
In some cases, setting it low can be an advantage:
http://www.depesz.com/2007/12/05/
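As a worked sketch against those numbers (the postgresql.conf value is only
illustrative):

-- 25% rule on a 145 GB box: 145 GB * 0.25 ~= 36 GB,
-- but here anything above ~8 GB showed no measurable gain, so:
--   shared_buffers = 8GB       (postgresql.conf, restart required)
SHOW shared_buffers;            -- verify the running value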
On Tue, Aug 12, 2014 at 12:37 PM, Michael Paquier wrote:
> On Tue, Aug 12, 2014 at 2:10 PM, Soni M wrote:
> > This is how i set up the db :
> > Slave using streaming replica.
> > We configure slave to run pg_dump which usually last for about 12 hours.
> > We ha
Hello All,
This is how I set up the db:
The slave uses streaming replication.
We configure the slave to run pg_dump, which usually lasts for about 12 hours.
We have limited pg_xlog space on the slave.
Once, the pg_xlog on the slave filled up while pg_dump was still in progress:
2014-08-11 09:39:23.226 CDT,,,25779,,53d26b30.64b3,
I think you could try the pg_basebackup tool. It has options to achieve the same
thing as you wanted, but it needs pgdata on the destination to be empty. If you
really need to do the exact thing as you stated, then you need to set Postgres to
keep a high enough number of xlog files on the master to ensure that the needed xlog
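A sketch of that master-side setting (the number is only illustrative):

-- wal_keep_segments makes the master retain this many 16 MB WAL segments so a
-- standby that falls behind can still catch up (postgresql.conf, reload is enough)
--   wal_keep_segments = 512    -- ~8 GB of retained WAL
SHOW wal_keep_segments;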