Hi!
Basically, after upgrading from version 10.6 to 11.5, I see error messages
on the streaming replica host: "FATAL: terminating connection due to conflict with
recovery" and "ERROR: canceling statement due to conflict with recovery". There
are no changes to vacuuming on the master, nor to max_standby_
Koen De Groote writes:
>> Index expressions are relatively expensive to maintain, because the derived
>> expression(s) must be computed for each row upon insertion and whenever it
>> is updated
> I'd like to get an idea on "relatively expensive".
It's basically whatever the cost of evaluating th
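A hypothetical illustration of the cost being discussed (table and index names made up):

create table users (id bigint primary key, email text);
create index users_email_lower_idx on users (lower(email));
-- every INSERT, and every UPDATE that produces a new row version,
-- now also pays one lower() evaluation to build the index entry
insert into users values (1, 'Alice@Example.com');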
Greetings all.
The following page:
https://www.postgresql.org/docs/11/indexes-expressional.html
states the following:
> Index expressions are relatively expensive to maintain, because the derived
> expression(s) must be computed for each row upon insertion and whenever it
> is updated
>
I'd like
Ok, thanks a lot! Got it.
On Tue, 16 Jun 2020 at 17:12, Tom Lane wrote:
> Eugene Pazhitnov writes:
> > xbox=> \d herostat
> > ...
> > "herostat_pkey" PRIMARY KEY, btree (xuid, titleid, heroid) INCLUDE
> (valfloat)
>
> > eugene@dignus:/var/www/html/health$ sudo -u postgres pg_repack -t
> herostat
>
On Tuesday, 16 June 2020 at 17:59:37, Jim Hurne <jhu...@us.ibm.com> wrote:
We have a cloud service that uses PostgreSQL to temporarily store binary
content. We're using PostgreSQL's Large Objects to store the binary
content. Each large object lives anywhere from a few hundred millisecon
On Tue, Jun 16, 2020 at 1:45 PM Jim Hurne wrote:
> Thanks Michael,
>
> Here are our current autovacuum settings:
>
> autovacuum | on
> autovacuum_analyze_scale_factor | 0.1
> autovacuum_analyze_threshold | 50
> autovacuum_freeze_max_age | 2000
Thanks Michael,
Here are our current autovacuum settings:
autovacuum | on
autovacuum_analyze_scale_factor | 0.1
autovacuum_analyze_threshold | 50
autovacuum_freeze_max_age | 2
autovacuum_max_workers | 3
autovacuum_multixact_
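A quick way to inspect the effective autovacuum configuration, including any per-table overrides (a generic sketch, not from the thread):

select name, setting from pg_settings where name like 'autovacuum%';
select relname, reloptions from pg_class where reloptions is not null;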
On Tue, Jun 16, 2020 at 10:01 AM Jim Hurne wrote:
> Other than the increasing elapsed times for the autovacuum, we don't see
> any other indication in the logs of a problem (no error messages, etc).
>
> We're currently using PostgreSQL version 10.10. Our service is JVM-based
> and we're using the
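One hedged suggestion for the growing elapsed times: watch a run live through pg_stat_progress_vacuum (available since PostgreSQL 9.6) to see whether it is stuck or just slow:

select p.pid, c.relname, p.phase, p.heap_blks_scanned, p.heap_blks_total
from pg_stat_progress_vacuum p
join pg_class c on c.oid = p.relid;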
Thanks for the comment.
From what I was able to monitor, memory usage was almost stable and there
were about 20 GB allocated as cached memory. Memory overcommit is disabled
on the database server. Might it be a memory issue, since it was
synchronizing newly added tables with a sum of 380 GB of data
We have a cloud service that uses PostgreSQL to temporarily store binary
content. We're using PostgreSQL's Large Objects to store the binary
content. Each large object lives anywhere from a few hundred milliseconds
to 5-10 minutes, after which it is deleted.
Normally, this works just fine and w
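For reference, a minimal psql sketch of that lifecycle; the create/delete churn lands in the pg_largeobject catalog, which is what autovacuum has to keep up with:

select lo_create(0) as new_oid \gset
-- ... the client writes and reads the object via the lo_* API ...
select lo_unlink(:new_oid);  -- delete the object when no longer needed
-- orphaned large objects can be cleaned up with the contrib tool: vacuumlo <dbname>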
On 6/16/20 7:59 AM, Pepe TD Vo wrote:
Just noticed you cross-posted to the pgsql-admin list. FYI, that is not
good practice.
I can run \copy in Linux with individual csv file into the table fine
and run import using pgadmin into AWS instance. I am trying to run \copy
all csv files import int
I can run \copy in Linux with an individual csv file into the table fine, and can
run the import using pgAdmin into the AWS instance. I am trying to run \copy for
all csv files, importing each into its own table, in Linux and in the AWS instance.
All csv files into one table would be fine, but here it is one csv per table. Should I cre
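One possible shape for this, assuming the file and table naming from the question (names hypothetical): put one \copy per table/file pair into a script and run it with psql -f import_all.sql:

\copy table1 from 'table1_20200616.csv' csv header
\copy table2 from 'table2_20200616.csv' csv header
-- ... one line per table/file pair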
Hello,
I am having some issues setting/using my PSQL console encoding to UTF-8
under Windows 10.
I have a Windows server and client. The Postgres 12 database contains
tables with content in multiple languages (ex: English, French (with
characters such as é), Korean (with characters such
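The usual Windows 10 recipe, offered as an assumption rather than a confirmed fix: switch the console code page to UTF-8 before starting psql, then make sure the client encoding matches:

-- in cmd.exe, before starting psql:
--   chcp 65001
--   set PGCLIENTENCODING=UTF8
-- then verify inside psql:
\encoding
show client_encoding;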
On 6/16/20 7:20 AM, Pepe TD Vo wrote:
good morning experts,
I need to set up a batch script to import multiple csv files into
Postgres tables. Each csv file will be named table1_todaydate.csv,
table2_todaydate.csv, etc... tablen_todaydate.csv. Each csv file will
import to its tabl
good morning experts,
I need to set up a batch script to import multiple csv files into Postgres
tables. Each csv file will be named table1_todaydate.csv,
table2_todaydate.csv, etc... tablen_todaydate.csv. Each csv file will import
into its own table; how do I execute the script to call
So when I first started working with PostgreSQL, I was using the latest version
(11.2). I don't want to move to 12 yet, but I would like to get my 11.2 up to
11.8. Because my servers are not connected to the Internet, I ended up
downloading the libraries and building the files locally. My que
Eugene Pazhitnov writes:
> xbox=> \d herostat
> ...
> "herostat_pkey" PRIMARY KEY, btree (xuid, titleid, heroid) INCLUDE
> (valfloat)
> eugene@dignus:/var/www/html/health$ sudo -u postgres pg_repack -t herostat
> -N -d xbox
> INFO: Dry run enabled, not executing repack
> WARNING: relation "p
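If the warning really is pg_repack refusing the INCLUDE-style primary key, one hedged workaround sketch is to give it a plain unique index over the not-null key columns (index name made up):

create unique index concurrently herostat_repack_key
on herostat (xuid, titleid, heroid);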
On Tue, Jun 16, 2020, 4:52 AM Eugene Pazhitnov wrote:
> xbox=> \d herostat
>Table "public.herostat"
> Indexes:
> "herostat_pkey" PRIMARY KEY, btree (xuid, titleid, heroid) INCLUDE
> (valfloat)
>
> WARNING: relation "public.herostat" must have a primary key or not-null
> un
On Tue, 2020-06-16 at 00:28 +0200, Peter wrote:
> On Mon, Jun 15, 2020 at 09:46:34PM +0200, Laurenz Albe wrote:
> ! On Mon, 2020-06-15 at 19:00 +0200, Peter wrote:
> ! > And that is one of a couple of likely pitfalls I perceived when
> ! > looking at that new API.
> !
> ! That is a property of my
On Mon, Jun 15, 2020 at 09:46:34PM +0200, Laurenz Albe wrote:
! On Mon, 2020-06-15 at 19:00 +0200, Peter wrote:
! > And that is one of a couple of likely pitfalls I perceived when
! > looking at that new API.
!
! That is a property of my scripts, *not* of the non-exclusive
! backup API...
Then ho
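For reference, the non-exclusive API under discussion looks like this on PostgreSQL 9.6 through 14; the session that calls pg_start_backup must stay connected until pg_stop_backup:

select pg_start_backup('nightly', false, false);  -- label, fast=false, exclusive=false
-- ... copy the data directory with an external tool ...
select * from pg_stop_backup(false);  -- returns the lsn plus backup_label contents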
On Sun, Jun 14, 2020 at 03:05:15PM +0200, Magnus Hagander wrote:
! > You can see that all the major attributes (scheduling, error-handling,
! > signalling, ...) of a WAL backup are substantially different to that
! > of any usual backup.
!
! > This is a different *Class* of backup object, therefo
Hello everyone!
eugene@dignus:/var/www/html/health$ psql xbox
Timing is on.
psql (12.3 (Ubuntu 12.3-1.pgdg20.04+1))
Type "help" for help.
xbox=> \d herostat
Table "public.herostat"
 Column | Type | Collation | Nullable | Default
--------+------+-----------+----------+---------
On Tue, Jun 16, 2020 at 11:49:15AM +0200, Koen De Groote wrote:
> Alright, I've done that, and that seems to be a very good result:
> https://explain.depesz.com/s/xIph
>
> The method I ended up using:
>
> create or replace function still_needs_backup(shouldbebackedup bool,
> backupperformed bool
Alright, I've done that, and that seems to be a very good result:
https://explain.depesz.com/s/xIph
The method I ended up using:
create or replace function still_needs_backup(shouldbebackedup bool,
backupperformed bool)
returns BOOLEAN as $$
select $1 AND NOT $2;
$$
language sql immutable;
An
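Presumably (the message is cut off) the function backs a partial index along these lines, which is why it is declared immutable; the table and column names here are guesses:

create index on backup_table (id)
where still_needs_backup(shouldbebackedup, backupperformed);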