On 8/21/23 20:50, Adrian Klaver wrote:
On 8/21/23 09:31, Rihad wrote:
On 8/21/23 20:17, Adrian Klaver wrote:
On 8/21/23 09:09, Rihad wrote:
On 8/21/23 20:00, Adrian Klaver wrote:
Sorry, they are all as per default, commented out in the config.
There are no long-running queries.
On 8/21/23 20:17, Adrian Klaver wrote:
On 8/21/23 09:09, Rihad wrote:
On 8/21/23 20:00, Adrian Klaver wrote:
Thanks for the detailed reply, no tables have custom settings.
I need to make it clear once again that all autovac/analyze work as
expected when n_live_tup matches reality, i.e
On 8/21/23 20:00, Adrian Klaver wrote:
On 8/20/23 22:31, Rihad wrote:
On 8/21/23 00:15, Adrian Klaver wrote:
On 8/20/23 12:10, Rihad wrote:
On 8/20/23 20:22, Adrian Klaver wrote:
On 8/18/23 22:35, Rihad wrote:
On 8/17/23 13:01, rihad wrote:
Hard to say without seeing the actual
On 8/17/23 13:01, rihad wrote:
Hi, all. After calling pg_stat_reset all statistics used by autovacuum
got zeroed, and started accumulating from scratch. Some tables get
acted upon properly, some don't.
Self-replying: yup, it seems there's an arbitrary limit of 100K of
n_live
Hi, all. After calling pg_stat_reset all statistics used by autovacuum
got zeroed, and started accumulating from scratch. Some tables get acted
upon properly, some don't.
foo=> select
relname,n_live_tup,n_dead_tup,cast(round(n_live_tup*cast(current_setting('autovacuum_vacuum_scale_factor')
a
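The query above is cut off mid-word; a plausible reconstruction of the threshold check it appears to be doing, using standard pg_stat_user_tables columns (the exact original text is unknown, and the query needs a live server to run):

```sql
-- Compare accumulated dead tuples against each table's autovacuum trigger
-- point (scale_factor * n_live_tup + vacuum_threshold).
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(n_live_tup
             * current_setting('autovacuum_vacuum_scale_factor')::numeric
             + current_setting('autovacuum_vacuum_threshold')::numeric)
         AS av_threshold
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

Tables where n_dead_tup exceeds av_threshold but no autovacuum arrives are the ones worth investigating.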
On 6/13/23 12:57, Magnus Hagander wrote:
On Tue, Jun 13, 2023 at 10:35 AM rihad wrote:
Hi, all. When pg_upgrading from PG 11 to 13, a file analyze_new_cluster.sh was
generated in the current directory which ran vacuumdb --all --analyze-in-stages
When upgrading from 13 to 15.3 no such file
Hi, all. When pg_upgrading from PG 11 to 13, a file
analyze_new_cluster.sh was generated in the current directory which ran
vacuumdb --all --analyze-in-stages
When upgrading from 13 to 15.3 no such file was generated, which made me
believe it was no longer necessary. Alas, it wasn't the case.
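From PostgreSQL 14 on, pg_upgrade apparently just prints the recommended command instead of writing analyze_new_cluster.sh, but the statistics rebuild is still needed; the script's body was essentially this one command, which can be run by hand against the running new cluster:

```shell
# The post-upgrade statistics rebuild formerly scripted by
# analyze_new_cluster.sh; requires the new cluster to be up.
vacuumdb --all --analyze-in-stages
```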
On 9/23/21 7:58 PM, Garfield Lewis wrote:
The way I normally get this info is by setting the following:
log_statement = 'all'
then the arguments will be printed in the postgres.log file. There could be
some other way but that is what I know.
Thanks, those queries are logged after their comp
Hi, is it possible to view the list of currently running queries with
$1, $2 etc replaced with the actual arguments?
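For context on why this is hard: pg_stat_activity shows the statement text as sent by the client, so extended-protocol queries keep their $1, $2 placeholders, and the bound values are only visible via server logging (log_statement = 'all'), as the reply above notes. A minimal look at what the view does expose:

```sql
-- Placeholders are NOT substituted here; for the actual parameter values
-- you need the log (log_statement = 'all' or log_min_duration_statement).
SELECT pid, state, query
FROM pg_stat_activity
WHERE state = 'active';
```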
On 11/05/2019 10:05 PM, Kevin Brannen wrote:
From: rihad
Hi, all. Why is it normally suggested to stop the server, upgrade it,
then start it? Wouldn't it be easier & quicker to simply upgrade the
package in-place and restart the service? On OSen that allow
modification of currentl
From: rihad
Hi, all. Why is it normally suggested to stop the server, upgrade it,
then start it? Wouldn't it be easier & quicker to simply upgrade the
package in-place and restart the service? On OSen that allow
modification of currently running binaries, which is most Unix OS,
Hi, all. Why is it normally suggested to stop the server, upgrade it,
then start it? Wouldn't it be easier & quicker to simply upgrade the
package in-place and restart the service? On OSen that allow
modification of currently running binaries, which is most Unix OS, M$
Windows being a notable e
On 10/25/2019 05:49 PM, Tom Lane wrote:
rihad writes:
Hi, we frequently run many queries involving XML that use a smallish
temporary file, despite having increased local work_mem in that
transaction to 16GB. FreeBSD's top shows that the memory isn't actually
being used - it re
Hi, we frequently run many queries involving XML that use a smallish
temporary file, despite having increased local work_mem in that
transaction to 16GB. FreeBSD's top shows that the memory isn't actually
being used - it remains free. Basically many such queries are run within
a single transactio
On 08/14/2019 11:42 AM, Laurenz Albe wrote:
rihad wrote:
Sorry, I just decreased work_mem back to 256MB, reloaded, and
instantly started seeing 82MB temp file creation, not 165MB as was
usual with work_mem=512MB.
So it indeed was applied immediately.
Really weird figures )
Increased work_mem
On 08/13/2019 09:04 PM, rihad wrote:
On 08/13/2019 08:44 PM, rihad wrote:
On 08/13/2019 08:22 PM, Luca Ferrari wrote:
On Tue, Aug 13, 2019 at 5:59 PM rihad wrote:
[dbname] LOG: temporary file: path
"base/pgsql_tmp/pgsql_tmp93683.257381", size 594
The setting 'work_mem
On 08/13/2019 08:44 PM, rihad wrote:
On 08/13/2019 08:22 PM, Luca Ferrari wrote:
On Tue, Aug 13, 2019 at 5:59 PM rihad wrote:
[dbname] LOG: temporary file: path
"base/pgsql_tmp/pgsql_tmp93683.257381", size 594
The setting 'work_mem' is within context 'user
On 08/13/2019 08:22 PM, Luca Ferrari wrote:
On Tue, Aug 13, 2019 at 5:59 PM rihad wrote:
[dbname] LOG: temporary file: path
"base/pgsql_tmp/pgsql_tmp93683.257381", size 594
The setting 'work_mem' is within context 'user', that means it will
affect running sess
On 08/13/2019 07:41 PM, Peter Eisentraut wrote:
On 2019-08-13 17:16, rihad wrote:
If I increase it in postgresql.conf and SIGHUP the master server, will
the change be applied to all running backends, or only to the ones
started after the change? Thanks.
It will be applied to all running
If I increase it in postgresql.conf and SIGHUP the master server, will
the change be applied to all running backends, or only to the ones
started after the change? Thanks.
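A quick way to answer this yourself is the setting's context in pg_settings; work_mem has context 'user' (a point made later in the thread), meaning a reload makes the new default visible to already-running backends, except in sessions that have explicitly overridden it:

```sql
-- context = 'user': the reloaded default reaches running sessions;
-- only an explicit SET work_mem in a session takes precedence over it.
SELECT name, setting, unit, context
FROM pg_settings
WHERE name = 'work_mem';
```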
On 06/30/2019 09:45 PM, Andrew Gierth wrote:
"rihad" == rihad writes:
rihad> There's a weird problem, even when the time is initially set by
rihad> openntpd it keeps lagging by one second every few seconds:
rihad> $ sudo /usr/local/etc/rc.d/openntpd restart
Wha
On 06/30/2019 10:59 AM, rihad wrote:
Damn. Sorry, and please disregard my post. The master server had the
wrong time. Not wrong TZ, simply wrong time.
$ date
Sun Jun 30 08:34:52 +04 2019
while it's currently 10:58
There's a weird problem, even when the time is initially set by op
Damn. Sorry, and please disregard my post. The master server had the
wrong time. Not wrong TZ, simply wrong time.
$ date
Sun Jun 30 08:34:52 +04 2019
while it's currently 10:58
Current time is 10:44. pg_controldata shows this on both the master and the
slave server, which uses streaming replication:
Time of latest checkpoint: Sun Jun 30 07:49:18 2019
So it was almost 3 hours ago. There are always some heavy writes and a
new WAL file in the pg_wal/ directory is created
On 05/03/2019 05:35 PM, Daniel Verite wrote:
For non-English text, I would recommend C.UTF-8 over "C" because of
BTW, there's no C.UTF-8 inside pg_collation, and running select
pg_import_system_collations('pg_catalog') doesn't bring it in, at least
not on FreeBSD 11.2.
On 05/03/2019 05:35 PM, Daniel Verite wrote:
rihad wrote:
Thanks, I'm a bit confused here. AFAIK indexes are used for at least two
things: for speed and for skipping the ORDER BY step (since btree
indexes are already sorted). Will such an "upgrade-immune" C.UTF-8 i
On 05/02/2019 05:36 PM, Daniel Verite wrote:
rihad wrote:
Thanks for the reply. Do you know what would a "decent" ICU collation be
to bind to a field's schema definition so it would mimic a UTF-8
encoding for a multilingual column? Maybe und-x-icu? We aren't as
On 05/02/2019 12:26 AM, Peter Geoghegan wrote:
On Mon, Apr 29, 2019 at 7:45 AM rihad wrote:
Hi. Today we ran pg_ctl promote on a slave server (10.7) and started
using it as a master. The OS also got upgraded FreeBSD 10.4 -> FreeBSD
11.2. And you guessed it, most varchar indexes got corrup
Hi. Today we ran pg_ctl promote on a slave server (10.7) and started
using it as a master. The OS also got upgraded FreeBSD 10.4 -> FreeBSD
11.2. And you guessed it, most varchar indexes got corrupted because
system locale changed in subtle ways. So I created the extension amcheck
and reindexed
Hi. Say I have column A.b_id which references B.id (which is a primary
key) and as such it is declared as a foreign key constraint. A.b_id has
no index because it doesn't need one. What happens when table B's rows
are modified (but never deleted)? Will PG still have to scan A to
find A.b_id
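A sketch of the trade-off, reusing the names from the question: the row-level referential-integrity triggers only probe A when a B.id value is updated or a B row is deleted; if that ever starts happening, each such change forces a scan of A unless the referencing column is indexed:

```sql
-- Hypothetical index name; only needed if B.id values are ever updated or
-- B rows deleted - modifying other columns of B never touches A.
CREATE INDEX CONCURRENTLY a_b_id_idx ON a (b_id);
```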
Hi. Say there are 2 indexes:
"foo_index" btree (foo_id)
"multi_index" btree (foo_id, approved, expires_at)
foo_id is an integer. Some queries involve all three columns in their WHERE
clauses, some involve only foo_id.
Would it be ok from a general performance standpoint to remove foo_index
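Since multi_index has foo_id as its leading column, a btree scan can serve equality lookups on foo_id alone, so foo_index is usually redundant. One low-risk way to verify before dropping for real, sketched with a hypothetical table name bars:

```sql
-- Drop inside a transaction, inspect the plan, then roll back. Note the
-- DROP takes an ACCESS EXCLUSIVE lock on bars until the ROLLBACK.
BEGIN;
DROP INDEX foo_index;
EXPLAIN SELECT * FROM bars WHERE foo_id = 42;  -- expect multi_index here
ROLLBACK;
```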
On 04/12/2019 08:39 AM, Michael Lewis wrote:
Way too many indexes. I'm going to have a hard time convincing our
programmers to get rid of any of them )
You can create (concurrently) an identical index with a new name, then
drop old version concurrently and repeat for each. It doesn't h
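The rebuild pattern Michael describes can be sketched like this (the table name bars is a placeholder; on PostgreSQL 12 and later, a single REINDEX CONCURRENTLY does the same in one step):

```sql
-- Build a fresh copy without blocking writes, swap it in, keep the name.
CREATE INDEX CONCURRENTLY foo_index_new ON bars (foo_id);
DROP INDEX CONCURRENTLY foo_index;
ALTER INDEX foo_index_new RENAME TO foo_index;
```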
On 04/11/2019 10:13 PM, Jeff Janes wrote:
On Thu, Apr 11, 2019 at 12:18 PM rihad <ri...@mail.ru> wrote:
On 04/11/2019 08:09 PM, Jeff Janes wrote:
On Thu, Apr 11, 2019 at 11:44 AM rihad <ri...@mail.ru> wrote:
Since we dump&restore produ
On 04/11/2019 08:09 PM, Jeff Janes wrote:
On Thu, Apr 11, 2019 at 11:44 AM rihad <ri...@mail.ru> wrote:
On 04/11/2019 07:40 PM, Jeff Janes wrote:
The disk usage doesn't reach a steady state after one or two
autovacs? Or it does, but you are just unhappy a
On 04/11/2019 07:40 PM, Jeff Janes wrote:
On Thu, Apr 11, 2019 at 10:28 AM rihad <ri...@mail.ru> wrote:
Yup, it's just that n_dead_tuples grows by several hundred
thousand (the
table sees much much more updates than inserts) and disk usage grows
cons
On 04/11/2019 07:04 PM, Alvaro Herrera wrote:
On 2019-Apr-11, rihad wrote:
On 04/11/2019 06:41 PM, Alvaro Herrera wrote:
Perhaps it'd be better to vacuum this table much more often.
Each run took 5-6 hours, now it takes 2-3 hours after I've tweaked some
cost-based vacuum knobs
On 04/11/2019 06:41 PM, Alvaro Herrera wrote:
On 2019-Apr-11, rihad wrote:
On 04/11/2019 06:20 PM, Tom Lane wrote:
rihad writes:
Thanks! Our autovacuum_work_mem = 1GB, so this probably means any space
would be available for reuse only at the end of the vacuum?
It's six bytes per dead
On 04/11/2019 06:20 PM, Tom Lane wrote:
rihad writes:
Thanks! Our autovacuum_work_mem = 1GB, so this probably means any space
would be available for reuse only at the end of the vacuum?
It's six bytes per dead tuple, last I checked ... you do the math.
Are there
any downsides in decre
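Doing the math on Tom's figure: at roughly six bytes of memory per dead tuple, a 1GB autovacuum_work_mem tracks about 179 million dead tuples per index-cleanup pass; a smaller setting just means more passes over the indexes, not lost work:

```sql
-- ~6 bytes per dead tuple => tuples trackable in one pass with 1GB:
SELECT pg_size_bytes('1GB') / 6 AS dead_tuples_per_pass;  -- 178956970
```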
On 04/11/2019 06:09 PM, Alvaro Herrera wrote:
On 2019-Apr-11, rihad wrote:
Thanks! Our autovacuum_work_mem = 1GB, so this probably means any space
would be available for reuse only at the end of the vacuum? Are there any
downsides in decreasing it to, say, 64MB? I see only pluses )
Yes, each
On 04/11/2019 05:48 PM, Tom Lane wrote:
rihad writes:
If an autovacuum job on a huge table runs for 5-6 hours, do its freed
pages/tuples become available for reuse immediately when they are marked
as free, or only at the end of the multi-hour vacuum operation?
They'll be freed in ba
If an autovacuum job on a huge table runs for 5-6 hours, do its freed
pages/tuples become available for reuse immediately when they are marked
as free, or only at the end of the multi-hour vacuum operation?
Hi. When the master server receives a smart shutdown request (TERM), does it
exit after making sure it has sent all received writes to the slave
server(s), or does it exit leaving the slave in an inconsistent state? What
about during fast shutdown (SIGINT)? I know that it asks current
requests to terminate i
And future updates can reuse it, too (an update is very similar to an
insert+delete).
Hm, then it's strange our DB takes 6 times as much space compared to a
freshly restored one (only the public schema is considered).
Not if autovacuum has a chance to run between updates.
Ours is run regularly,
On 04/03/2019 06:40 PM, Michael Lewis wrote:
"Sometimes a table's usage pattern involves much more updates than
inserts, which gradually uses more and more unused space that is never
used again by postgres, and plain autovacuuming doesn't return it to the
OS."
Can you expound on that? I thought
Does *every* table have *so much* free space that it's impractical to just
let the files get refilled by normal usage?
Ideally VACUUM FULL should not require a giant lock on the table.
Sometimes a table's usage pattern involves much more updates than
inserts, which gradually uses more an
On 04/01/2019 08:30 PM, Michel Pelletier wrote:
On Sun, Mar 31, 2019 at 10:49 PM David Rowley
<david.row...@2ndquadrant.com> wrote:
Perhaps a bunch of processes waiting on the access exclusive lock on
the materialized view being released?
log_lock_waits might help you i
On 04/01/2019 06:17 PM, Adrian Klaver wrote:
On 3/31/19 10:08 PM, rihad wrote:
What exactly do you mean by "running processes"? I don't think I've
ever
seen a Unix with only 1 to 3 running processes in total, so you are
probably referring to processes in a certai
What exactly do you mean by "running processes"? I don't think I've ever
seen a Unix with only 1 to 3 running processes in total, so you are
probably referring to processes in a certain state. Runnable (R)?
Uninterruptible sleep (D)? Both? Something else?
Just that, 250-300 running processes in
Postgres 10.3 on FreeBSD 10.3 is almost idle: disk i/o about 5-10%, the
number of running processes about 1-3, cpu about 90% idle. Then we run an
i/o heavy job like "refresh materialized view"; cpu & disk i/o are still not
maxed out, yet all of a sudden the number of running processes increases to
about 250-30
Hi, since pg_import_system_collations() that would solve all this
doesn't exist in 9.6.x, is there a way to properly map it to a CREATE
COLLATION call? Specifically I need to have this collation (from 10.x) in
9.6.x so it works on FreeBSD:
collname | collnamespace | collowner |
On 01/06/2019 08:05 PM, rihad wrote:
On 01/06/2019 07:57 PM, rihad wrote:
On 01/06/2019 07:51 PM, Tom Lane wrote:
rihad writes:
Hi. on a freshly installed 10.6 I can see all ICU collations added in
pg_collation schema. Is there a way to have them for an existing
database cluster?
There
On 01/06/2019 07:57 PM, rihad wrote:
On 01/06/2019 07:51 PM, Tom Lane wrote:
rihad writes:
Hi. on a freshly installed 10.6 I can see all ICU collations added in
pg_collation schema. Is there a way to have them for an existing
database cluster?
There's a function called something
On 01/06/2019 07:51 PM, Tom Lane wrote:
rihad writes:
Hi. on a freshly installed 10.6 I can see all ICU collations added in
pg_collation schema. Is there a way to have them for an existing
database cluster?
There's a function called something like pg_import_system_collations
Hi. on a freshly installed 10.6 I can see all ICU collations added in
pg_collation schema. Is there a way to have them for an existing
database cluster?
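The function Tom is reaching for is pg_import_system_collations(), added in PostgreSQL 10; run it per database as superuser (this matches the call mentioned elsewhere in these threads):

```sql
-- Imports the collations the OS (libc/ICU) makes available into the given
-- schema; returns the number of collations added.
SELECT pg_import_system_collations('pg_catalog');
```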
On 08/02/2018 05:34 PM, rihad wrote:
On 08/02/2018 04:38 PM, rihad wrote:
Hi all. We see such errors on one of our own (non-system) tables on
Postgres 10.3. They stop both automatic vacuum & automatic analyze
not only on that table, but on all tables & databases (manual
vacuuming work
On 08/02/2018 04:38 PM, rihad wrote:
Hi all. We see such errors on one of our own (non-system) tables on
Postgres 10.3. They stop both automatic vacuum & automatic analyze not
only on that table, but on all tables & databases (manual vacuuming
works though). Luckily it's a small (
Hi all. We see such errors on one of our own (non-system) tables on
Postgres 10.3. They stop both automatic vacuum & automatic analyze not
only on that table, but on all tables & databases (manual vacuuming
works though). Luckily it's a small (but important) table - less than
1000 rows. Any qui
Here's the prompt response I got from the maintainer of FreeBSD package,
girgen@
Forwarded Message
Subject: Re: pg_upgrading to 10.1 breaks some indices maybe due to
changing providers for ICU
Date: Fri, 22 Dec 2017 17:44:41 +0100
From: Palle Girgensohn
To:
On 12/22/2017 08:07 PM, Peter Geoghegan wrote:
Please report this as a bug to the FreeBSD package maintainer.
You mean they would have to bring back ICU support as an option? Ok,
I'll try to report the problem.
On 12/22/2017 07:43 PM, Peter Geoghegan wrote:
I have no idea. Maybe something to do with not actually using the same
collation provider?
The FreeBSD package contains modifications to make ICU work, or at
least did. Possibly, you just stopped using the same collation
provider, without even re
I wonder why the errors don't show up under 9.6.6 and only started
appearing after the pg_upgrade to 10.1?
Both servers naturally used the same OS environment.
On 12/22/2017 07:01 PM, rihad wrote:
[snip]
foo=# select bt_index_check('index_users_on_email_and_type');
ERROR: item order invariant violated for index
"index_users_on_email_and_type"
DETAIL: Lower index tid=(3,25) (points to index tid=(26,1)) higher
index tid=(3,26)
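After a collation change like this, every affected btree needs the same treatment; a sketch that checks all btree indexes in the current database with amcheck (it stops with an error like the one above at the first corrupted index it finds):

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;

-- Run the lightweight invariant check against each btree index.
SELECT c.relname, bt_index_check(c.oid)
FROM pg_class c
JOIN pg_am am ON am.oid = c.relam
WHERE c.relkind = 'i' AND am.amname = 'btree';
```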
On 12/22/2017 05:32 PM, Peter Geoghegan wrote:
On Fri, Dec 22, 2017 at 11:56 AM, rihad wrote:
I forgot to mention the real problem: the mentioned unique constraint didn't
work and allowed duplicate rows to get inserted into the table until the
duplicates were manually removed and the inde
On 12/22/2017 05:09 PM, Laurenz Albe wrote:
rihad wrote:
Hi there, this is a reproducible error. We recently pg_upgraded our
production database to 10.1 from 9.6.6. The upgrade runs fine with the
suggestion to analyze all data.
[...]
Trying to find data using the specified indices fails to
On 12/22/2017 05:09 PM, Magnus Hagander wrote:
What platform are you on
FreeBSD 10.3-RELEASE-p26 amd64
, how was PostgreSQL installed
pkg install.
Additionally, 10.1 needed for pg_upgrade to run was installed using
portmaster -m PREFIX=/10.1, because FreeBSD doesn't allow more than one
maj
I forgot to mention the real problem: the mentioned unique constraint
didn't work and allowed duplicate rows to get inserted into the table
until the duplicates were manually removed and the index was rebuilt.
Hi there, this is a reproducible error. We recently pg_upgraded our
production database to 10.1 from 9.6.6. The upgrade runs fine with the
suggestion to analyze all data.
[rihad@postgres-10-test]$ cat analyze_new_cluster.sh
#!/bin/sh
echo 'This script will generate minimal opti