Vladimir,
Just curious, if your goal is to reduce checkpoint overhead, shouldn’t you
decrease shared_buffers instead of increasing it?
With bigger shared_buffers, you can accumulate more dirty buffers for the
checkpoint to take care of.
I remember that in early versions (around 8.4), when checkpoint_completion_target …
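For what it's worth, pg_stat_bgwriter shows how much work checkpoints are
doing, and checkpoint_completion_target spreads their writes out. A minimal
sketch, with illustrative values (not a recommendation for any particular
workload):

    -- Timed vs. WAL-forced checkpoints, and their write/sync cost in ms:
    SELECT checkpoints_timed, checkpoints_req,
           checkpoint_write_time, checkpoint_sync_time,
           buffers_checkpoint
    FROM pg_stat_bgwriter;

    -- Spread each checkpoint's writes over ~90% of the interval
    -- (illustrative values; needs a config reload to take effect):
    ALTER SYSTEM SET checkpoint_completion_target = 0.9;
    ALTER SYSTEM SET checkpoint_timeout = '30min';
    SELECT pg_reload_conf();

If checkpoints_req dominates checkpoints_timed, checkpoints are being forced
by WAL volume rather than the timeout, which is usually the first thing to fix.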
Sorry, here are the missing details, if it helps:
Postgres 9.6.5 on CentOS 7.2.1511
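For reference, each resync brackets the rsync with the usual backup calls; a
rough sketch of that pattern (the label is illustrative, and the rsync itself
runs from the shell between the two calls):

    -- On the master, before starting the rsync (9.6 exclusive backup;
    -- the second argument requests an immediate checkpoint):
    SELECT pg_start_backup('replica_resync', true);

    -- ... rsync the data directory to the replica here ...

    -- On the master, once the rsync has finished:
    SELECT pg_stop_backup();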
> On Sep 27, 2017, at 10:56, Igor Polishchuk wrote:
>
> Hello,
> I have a multi-terabyte streaming replica on a busy database. When I set it
> up, repetitive rsyncs take at least 6 hours each.
> …, which I may not notice for a while on such a huge database.
> Any educated opinions on the subject here?
> Thank you
> Igor Polishchuk
Actually, what works is
set search_path = 'crabdata', 'public';
On 2/23/12 1:10 PM, "Adrian Klaver" wrote:
> On 02/23/2012 01:08 PM, Willem Buitendyk wrote:
>> I have it set in postgresql.conf and I've also used:
>> alter user postgres set search_path = crabdata,public;
>>
>
> Well search_path …
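For the archives: ALTER USER ... SET only takes effect for sessions started
after the change, so an already-open connection keeps its old search_path. A
quick way to see what a session is actually using:

    -- Per-role default; applies to *new* sessions only:
    ALTER USER postgres SET search_path = crabdata, public;

    -- In the current session:
    SHOW search_path;                     -- what is in effect right now
    SET search_path = crabdata, public;   -- override for this session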
It seems that having synchronized transaction IDs between, let's say,
streaming-replicated databases would give some advantages if done properly.
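Even without such a feature, the two sides can at least be compared by hand; a
rough sketch (pre-10 function names; txid_current() must run on the primary,
since it assigns a new id):

    -- On the primary: the current transaction id counter:
    SELECT txid_current();

    -- On the standby: how far WAL replay has progressed:
    SELECT pg_last_xlog_replay_location(),
           pg_last_xact_replay_timestamp();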
Regards
Igor Polishchuk
I'm in a similar position, cloning a database multiple times to provide
development and qa databases for multiple groups. A feature desired by
Clark would greatly help many people with their development and qa
databases.
On the other hand, I imagine a feature like this would not be easy to
develop.
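In the meantime, the closest built-in shortcut is template cloning, though it
only works within one cluster and the source database must have no other
active connections while it runs; names here are just examples:

    -- Clone a database within the same cluster for a QA group:
    CREATE DATABASE qa_copy TEMPLATE prod_db;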
I saw such behavior a few years ago on multiple very busy databases
connected to the same EMC SAN. The SAN's cache got overwhelmed by the
databases' I/O, and the storage latency went up significantly. I don't
remember now what the latency was, but it was above 40ms.
Is everything OK with your storage?
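If you want to measure it from inside Postgres (9.2 or later), track_io_timing
makes cumulative block I/O times visible per database, at the cost of some
timing overhead; a rough way to estimate average read latency:

    -- Enable block I/O timing (superuser setting; ALTER SYSTEM needs
    -- 9.4+, or edit postgresql.conf; reload to apply):
    ALTER SYSTEM SET track_io_timing = on;
    SELECT pg_reload_conf();

    -- Average milliseconds per block read, per database:
    SELECT datname,
           round((blk_read_time / NULLIF(blks_read, 0))::numeric, 2)
               AS avg_read_ms
    FROM pg_stat_database;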