> On 27/07/16 18:54, Chris Travers wrote:
>> Another one I think they obliquely referred to (in the subtle problems
>> section) was the fact that if you have longer-running queries on the
>> replica with a lot of updates, you can get funny auto-vacuum-induced
>> errors (writes from autovacuum on th
On 5 August 2016 at 09:16, Tatsuo Ishii wrote:
>> On 27/07/16 18:54, Chris Travers wrote:
>>> Another one I think they obliquely referred to (in the subtle problems
>>> section) was the fact that if you have longer-running queries on the
>>> replica with a lot of updates, you can get funny auto-va
On 04/08/2016 18:15, Eduardo Morras wrote:
[...]
a) As others said, max_connections = 200 is too high. Set it to your number of
cores (I use number of cores - 1).
Excuse me for cross-threading, but I need to make things clearer to myself.
That's one of the things I find hard to understand how to
Hi, friends,
Our phone number format is 11 digits; I want to replace the last 4 digits
with stars. For example:
Replace this string containing 11-digit phone numbers:
'1891018,1391018,232,abc'
into:
'1891018,1391018,232,abc'
I tried to use:
select
regexp_replace('
You should group the first 7 digits to keep them, "([0-9]{7})", and
replace the last 4 digits, \d{4}:
postgres@postgres=# select
regexp_replace('1891018,1391018,232,abc','([0-9]{7})\d{4}',E'\\1','g')
;
regexp_replace
--------------------------
1891018,1391018,232,abc
(1 row)
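For completeness, here is a sketch of the variant that actually masks with stars
rather than dropping the digits: keep the captured first 7 digits and append four
literal stars in the replacement. The sample phone numbers below are made up,
purely for illustration.

postgres@postgres=# select regexp_replace('18910181234,13910185678,232,abc',
                                          '([0-9]{7})[0-9]{4}', E'\\1****', 'g');
          regexp_replace
----------------------------------
 1891018****,1391018****,232,abc
(1 row)

Note the pattern matches any run of 11 digits; if longer digit runs can appear in
the field, you may want to add word-boundary constraints (\y in Postgres regexes).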
>>> https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK
>>
>> I wonder if their problem could be fixed by using
>> hot_standby_feedback. I have encountered similar problem but it seems
>> hot_standby_feedback was not any help in this case:
>>
>>
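For anyone wanting to try it, hot_standby_feedback is a reload-only setting, so a
minimal sketch (assuming it has just been added to the standby's postgresql.conf) is:

-- on the standby, after setting hot_standby_feedback = on in postgresql.conf
select pg_reload_conf();
show hot_standby_feedback;  -- should now report 'on'

The trade-off is that long queries on the standby can then cause bloat on the
primary, since vacuum holds back cleanup on their behalf.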
Hello!
As far as I can see, the files are delivered not with a delay but with a time shift.
1. Can you show me the restore_command on the slave?
2. Also, can you check the archived WAL creation time on the slaves in the archive
location after you copy them with archive_command? Is it near the WAL
creation time in pg_xlog? Or does it differ
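For reference, a typical file-based setup on the standby looks something like the
lines below (recovery.conf; the archive path and host are placeholders, not taken
from the original post):

# recovery.conf on the standby
standby_mode = 'on'
restore_command = 'cp /mnt/wal_archive/%f "%p"'
# or, when pulling from a remote archive host:
# restore_command = 'scp archiver:/wal_archive/%f "%p"'

Comparing the mtime of files in the archive directory with the matching files in
pg_xlog on the primary is usually enough to tell a slow archive_command from a
delayed restore.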
On 5 August 2016 at 14:06, Tatsuo Ishii wrote:
> On primary:
> test=# vacuum verbose t1;
> INFO: vacuuming "public.t1"
> INFO: "t1": removed 3 row versions in 1 pages
> INFO: "t1": found 3 removable, 0 nonremovable row versions in 1 out of 1
> pages
> DETAIL: 0 dead row versions cannot be re
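In the same vein, a quick way to see whether a standby is actually hitting recovery
conflicts is the standby-side statistics view (a sketch; run it on the replica):

select datname, confl_snapshot, confl_lock, confl_tablespace,
       confl_bufferpin, confl_deadlock
from pg_stat_database_conflicts;

Non-zero confl_snapshot counts correspond to the vacuum-induced cancellations
discussed above.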
Hello, everyone!
I've noticed an interesting aspect of how temp tables work. It appears
postgres is trying to reserve space on disk for temp tables even before
temp_buffers overflows.
test4=# show temp_buffers ;
temp_buffers
--------------
8MB
test4=# create temp table t(a int, b int);
strace:
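Without strace, the same behaviour can be observed from SQL. A small sketch,
reusing the example table above (the file path and exact size will of course
differ); the point is that the relation file exists and grows on disk even while
the data still fits in temp_buffers:

test4=# create temp table t(a int, b int);
CREATE TABLE
test4=# insert into t select i, i from generate_series(1, 100000) i;
INSERT 0 100000
test4=# select pg_relation_filepath('t') as file_on_disk,
               pg_size_pretty(pg_relation_size('t')) as size_on_disk;
-- returns a path under base/<db-oid>/ and a size of a few MB, well below the
-- 8MB temp_buffers setting shown above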
Grigory Smolkin writes:
> I've noticed an interesting aspect of how temp tables work. It appears
> postgres is trying to reserve space on disk for temp tables even before
> temp_buffers overflows.
Sure. Just like it reserves space for ordinary tables right away,
long before there's any need to push
Thank you for your answer.
But it's a temporary table, so it's equivalent to saying 'I don't care about
this data', and I can get 'out of disk space' regardless of whether I use
temporary tables.
What are we winning here?
On 08/05/2016 05:51 PM, Tom Lane wrote:
Grigory Smolkin writes:
I've noticed intere
On 05.08.2016 17:51, Tom Lane wrote:
Grigory Smolkin writes:
I've noticed an interesting aspect of how temp tables work. It appears
postgres is trying to reserve space on disk for temp tables even before
temp_buffers overflows.
Sure. Just like it reserves space for ordinary tables right away,
lon
Grigory Smolkin writes:
> Thank you for your answer.
> But it's a temporary table, so it's equivalent to saying 'I don't care about
> this data', and I can get 'out of disk space' regardless of whether I use
> temporary tables.
> What are we winning here?
Sane behavior. Would you really want this:
=>
Alex Ignatov writes:
> On 05.08.2016 17:51, Tom Lane wrote:
>> Sure. Just like it reserves space for ordinary tables right away,
>> long before there's any need to push the data out of shared_buffers.
>> Otherwise, you might find yourself having to throw an "out of disk
>> space" error after havi
I can get the error anyway on the first SELECT, because of hint bit writes, no?
On 08/05/2016 06:54 PM, Tom Lane wrote:
Alex Ignatov writes:
On 05.08.2016 17:51, Tom Lane wrote:
Sure. Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shar
On Fri, Aug 05, 2016 at 06:02:08PM +0300, Grigory Smolkin wrote:
> But it's a temporary table, so it's equivalent to saying 'I don't care about this
> data', and I can get 'out of disk space' regardless of whether I use temporary
> tables.
>
> What are we winning here?
Surely, that the transaction operates in a p
On 08/05/2016 07:18 PM, Andrew Sullivan wrote:
On Fri, Aug 05, 2016 at 06:02:08PM +0300, Grigory Smolkin wrote:
But it's a temporary table, so it's equivalent to saying 'I don't care about this
data', and I can get 'out of disk space' regardless of whether I use temporary
tables.
What are we winning here?
Su
On 05.08.2016 18:54, Tom Lane wrote:
Alex Ignatov writes:
On 05.08.2016 17:51, Tom Lane wrote:
Sure. Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shared_buffers.
Otherwise, you might find yourself having to throw an "out of
Hi,
I'm trying to find a way to have Postgres tell me if the current
transaction would modify the database if I committed it now. I can live with a
conservative approximation (sometimes – ideally, rarely – get a "yes" even
though nothing would be modified, but never get a "no" even though there
are p
Hi! Make a trigger function.
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company
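A rough sketch of what that could look like (all object names here are invented,
and the trigger has to be attached to every table of interest): a statement-level
trigger that flags, in a transaction-local setting, that something was written.

-- hypothetical helper that remembers "this transaction wrote something"
create or replace function note_write() returns trigger
language plpgsql as $$
begin
  -- is_local = true scopes the setting to the current transaction
  perform set_config('app.tx_dirty', 'true', true);
  return null;  -- return value of an AFTER ... FOR EACH STATEMENT trigger is ignored
end $$;

create trigger some_table_dirty
  after insert or update or delete or truncate on some_table
  for each statement execute procedure note_write();

-- before COMMIT (the two-argument current_setting needs 9.6; on older versions,
-- initialize app.tx_dirty at transaction start instead):
select current_setting('app.tx_dirty', true) is not distinct from 'true' as tx_dirty;

This is conservative in the sense the original post asked for: a statement-level
trigger fires even if the statement ends up touching zero rows.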
On Fri, Aug 5, 2016 at 10:25 PM +0300, "Christian Ohler"
wrote:
Hi,
I'm trying to find a way to have Postgres tell me if the current transaction
would modify the database
On 8/4/2016 9:15 AM, Eduardo Morras wrote:
If you set max_connections too high, those connections will compete/fight for
the same resources: CPU, I/O to disks, memory and caches, locks; and postgres
will spend more time managing the resources than doing real work.
Believe me (or us), set
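A quick way to see whether those 200 slots are doing real work or mostly sitting
idle (a sketch; pg_stat_activity has had the state column since 9.2):

select state, count(*)
from pg_stat_activity
group by state
order by count(*) desc;

If most of them are 'idle', a connection pooler in front of a much smaller
max_connections usually works out better.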
Thanks, fair point. I should have mentioned that I know about triggers but
was hoping to find a less invasive mechanism (IIUC, I'd have to install a
trigger on every table) – it seems to me that Postgres should just be able
to tell me whether COMMIT will do anything, it obviously has to track that
On 08/05/2016 01:48 PM, Christian Ohler wrote:
Thanks, fair point. I should have mentioned that I know about
triggers but was hoping to find a less invasive mechanism (IIUC, I'd
have to install a trigger on every table) – it seems to me that
Postgres should just be able to tell me whether CO
On Fri, Aug 5, 2016 at 12:55 PM, Rob Sargent wrote:
>
> What sort of interface are you looking for? Where/when would you grab the
> information? Do what with it? Log triggers are the typical pattern here
> (with packages just for that sort of thing).
I'm looking for a statement (or sequence o
Christian Ohler writes:
> Thanks, fair point. I should have mentioned that I know about triggers but
> was hoping to find a less invasive mechanism (IIUC, I'd have to install a
> trigger on every table) – it seems to me that Postgres should just be able
> to tell me whether COMMIT will do anyth
On 08/05/2016 02:15 PM, Christian Ohler wrote:
On Fri, Aug 5, 2016 at 12:55 PM, Rob Sargent wrote:
What sort of interface are you looking for? Where/when would you grab the
information? Do what with it? Log triggers are the typical pattern here (with
packages just for that sort of thing).
On Fri, Aug 5, 2016 at 1:24 PM, Tom Lane wrote:
> Christian Ohler writes:
>> Thanks, fair point. I should have mentioned that I know about triggers but
>> was hoping to find a less invasive mechanism (IIUC, I'd have to install a
>> trigger on every table) – it seems to me that Postgres should ju
On Fri, Aug 5, 2016 at 1:26 PM, Rob Sargent wrote:
> On 08/05/2016 02:15 PM, Christian Ohler wrote:
>>
>> I'm looking for a statement (or sequence of statements) that, when run
>> within a transaction, tells the client if any writes are happening in
>> that transaction – basically an interface sim
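One non-trigger approximation, if the server is 9.6 or later (an assumption about
the version): a transaction is only assigned a transaction ID once it first writes,
so a NULL from txid_current_if_assigned() means nothing has been modified so far.
The table name below is a placeholder.

begin;
select txid_current_if_assigned();  -- NULL: no xid assigned, no writes yet
insert into some_table values (1);
select txid_current_if_assigned();  -- now non-NULL: the transaction has written
rollback;

It errs on the conservative side the same way triggers do: things like
SELECT ... FOR UPDATE also assign an xid without changing any data.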
Hi,
I need advice regarding RAID Controller configuration for PostgreSQL data.
Known common best practices:
- RAID cache at least 512MB
- BBU is a must-have
- cache mode = write back
- disk write cache mode = disabled for HDD and low-end SSD, and disk write
cache mode = enabled for SSD enterprise l
> Yes, the VACUUM truncation is still an issue. But statements are
> retryable, just like deadlocks.
>
> Unfortunately the truncation logic always kicks in for small tables of less than
> 16 blocks. It's more forgiving on bigger tables.
Oh, I didn't know that. Thanks for the info.
> Maybe we could defer t
Hey all,
I'm using logical decoding in my application to capture change streams and
ship them to Kafka. However, logical replication slots aren't included in
the WAL and thus don't make it to replicas. In the case of a failover, it's
not clear what (if anything) can be done to continue receiving a
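As far as I know there is no built-in way to carry a slot over to a replica, so
after promotion the slot has to be re-created by hand on the new primary, and the
application has to cope with the gap (or overlap) in changes itself. A minimal
sketch, with a made-up slot name and test_decoding standing in for whatever output
plugin feeds Kafka:

-- on the newly promoted primary
select * from pg_create_logical_replication_slot('kafka_stream', 'test_decoding');
-- then resume consuming, e.g.:
select * from pg_logical_slot_get_changes('kafka_stream', NULL, NULL);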
Hi guys,
I can now confirm that removing the IONICE command solved my problem.
Thanks a lot !
Patrick
Hi all,
We’ve got a 16TB database that’s run up against the wrap-around tx id issue
(despite running auto-vacuum):
ERROR: database is not accepting commands to avoid wraparound data loss in
database "public"
HINT: Stop the postmaster and use a standalone backend to vacuum that
database.
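For what it's worth, the usual sequence is roughly the following; the data
directory path and database name are placeholders. The age() query helps show how
close each database is to the limit before and after the vacuum.

-- while the cluster is still up: which databases are holding back the oldest xid?
select datname, age(datfrozenxid) from pg_database order by 2 desc;

Then, with the postmaster stopped, vacuum that database in a standalone backend,
as the HINT says:

postgres --single -D /path/to/data yourdbname
backend> VACUUM;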