Hi,
Basically I had the same topic recently, and based on observation I would say
that the configuration parameter hot_standby_feedback disables not only some
vacuuming operations but something else as well. I played through the same
scenario where I disabled hot_standby_feedback and vacuuming, but
The version you are running is neither up-to-date for its major version
(9.4), nor is that major version still supported.
https://www.postgresql.org/support/versioning/
Though a functioning backup is good to have before upgrading, especially
across major versions.
On Wednesday, June 24, 2020, Sri Linux w
Hi All,
I am trying to do a PITR (point-in-time recovery) backup and recovery for a
standalone database which is not in a cluster.
recovery.done is created; after a restart the database fails to start with errors: "
< 2020-06-22 16:34:08.280 CDT >LOG: starting archive recovery
< 2020-06-22 16:34:08.456 CDT >LOG: restored log
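For reference, a minimal recovery.conf for archive recovery on a pre-12 server would look something like the sketch below (the archive path and target time are hypothetical placeholders, not taken from the thread):

```
# recovery.conf (PostgreSQL 9.x/10/11; merged into postgresql.conf in v12+)
restore_command = 'cp /mnt/wal_archive/%f %p'    # hypothetical archive location
recovery_target_time = '2020-06-22 16:00:00 CDT' # hypothetical target
recovery_target_action = 'promote'               # available since 9.5
```

Once recovery completes, the server renames recovery.conf to recovery.done, as seen above.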
Peter J. Holzer writes:
> On 2020-06-24 13:55:00 -0400, Bee.Lists wrote:
>> On Jun 24, 2020, at 6:47 AM, Peter J. Holzer wrote:
> Does "I have 37 queries" mean you have seen 37 queries of this type in
> some time window (e.g. the last day or hour) or does it mean you are
> currently seeing 37
Bee.Lists writes:
>> On Jun 23, 2020, at 8:09 PM, Tim Cross wrote:
>>
>> Sounds like your web app may not be closing connections once it has
>> finished with them? The fact you're seeing idle connections would seem to
>> support this. I would be verifying there isn't a code path in your
>> clie
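Idle connections like those described can be inspected via pg_stat_activity; a generic sketch (not from the thread):

```sql
SELECT pid, usename, application_name, state,
       now() - state_change AS idle_for
  FROM pg_stat_activity
 WHERE state = 'idle'
 ORDER BY idle_for DESC;
```

Sessions that stay in this list for a long time are candidates for a connection leak in the client.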
On Wed, Jun 24, 2020, 2:35 PM Peter J. Holzer wrote:
> Yes, estimating the number of distinct values from a relatively small
> sample is hard when you don't know the underlying distribution. It might
> be possible to analyze the sample to find the distribution and get a
> better estimate. But I'm
[Please keep replies on the list]
On 2020-06-24 11:02:22 +, Klaudie Willis wrote:
> Holzer, thanks for your feedback. Yes, your guess is very good. The
> data consists of millions of instruments that occur in only a handful of
> cases (rare), and instruments that are very common.
>
> I am still a
Michael Lewis wrote on 06/23/2020 04:44:51 PM:
> Long running transactions are the common one that I see. You might
> be dealing with replication slots or prepared transactions.
> Basically, if some process might see that "old truth", then it can't
> be vacuumed away yet.
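Each of the blockers listed above can be checked against the standard catalogs; a sketch:

```sql
-- Long-running transactions still holding an old snapshot:
SELECT pid, xact_start, state
  FROM pg_stat_activity
 WHERE xact_start IS NOT NULL
 ORDER BY xact_start;

-- Replication slots pinning the xmin horizon:
SELECT slot_name, xmin, catalog_xmin FROM pg_replication_slots;

-- Forgotten prepared transactions:
SELECT gid, prepared FROM pg_prepared_xacts;
```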
Thanks, those links yo
On 2020-06-24 13:55:00 -0400, Bee.Lists wrote:
> On Jun 24, 2020, at 6:47 AM, Peter J. Holzer wrote:
> > The default is 100. What was your reason for reducing it to such a low
> > value?
>
> “PostgreSQL 9 High Availability” recommended core count * 3.
I suspected something like that.
I don't
>
> But how can I set the edited value = 1 on the objects (line segments) that
> are not deleted (in the current buffer) at the same time so it won’t be
> deleted in the next run with an adjacent buffer?
>
You might want to create a temporary table to hold unique identifiers of
all records that you
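As a sketch of that idea (all table and column names here are hypothetical, since the schema isn't shown in the thread):

```sql
-- Remember which segments were kept (edited = 1) in the current buffer...
CREATE TEMP TABLE kept_ids AS
SELECT seg_id FROM line_segments WHERE edited = 1;

-- ...and exclude them when processing the adjacent buffer:
DELETE FROM line_segments s
 WHERE NOT EXISTS (SELECT 1 FROM kept_ids k WHERE k.seg_id = s.seg_id);
```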
On 2020-06-24 13:45:21 -0400, Bee.Lists wrote:
> > Sounds like your web app may not be closing connections once it has
> > finished with them? The fact you're seeing idle connections would seem to
> > support this. I would be verifying there isn't a code path in your
> > client application which is f
Hi list, the question is at the end of the mail.
I have a problem with adjacent polygons when converting them to lines,
as I would like to only have one line in the boundaries between the former
polygons. Now it could be up to 5.
I’ve been thinking of:
1. Creating a thin buffer around the l
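One common PostGIS approach to this (not necessarily what the poster ended up using; table and column names are hypothetical) is to take the boundaries of all polygons and union the resulting linework, which nodes and deduplicates shared edges so each boundary segment appears only once:

```sql
SELECT (ST_Dump(ST_LineMerge(ST_Union(ST_Boundary(geom))))).geom AS line
  FROM polygons;
```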
> On Jun 24, 2020, at 6:47 AM, Peter J. Holzer wrote:
>
> The default is 100. What was your reason for reducing it to such a low
> value?
“PostgreSQL 9 High Availability” recommended core count * 3.
>
>> I’ve installed pg_stat_activity and pg_stat_statements.
>>
>> I access this server th
> On Jun 23, 2020, at 8:09 PM, Tim Cross wrote:
>
> Sounds like your web app may not be closing connections once it has
> finished with them? The fact you're seeing idle connections would seem to
> support this. I would be verifying there isn't a code path in your
> client application which is fa
Le mer. 24 juin 2020 à 16:24, Jim Hurne a écrit :
> "Daniel Verite" wrote on 06/24/2020 10:08:27 AM:
> > Unfortunately it [pg_repack] can't work with pg_largeobject
>
> That is unfortunate, and potentially leaves us in a difficult spot.
>
> Is it possible to configure PostgreSQL to use a user tab
I wrote:
> David Rowley writes:
>> I don't often do much with pgbench and variables, but there are a few
>> things that surprise me here.
>> 1) That pgbench replaces variables within single quotes, and;
>> 2) that we still think it's a variable name when it starts with a digit, and;
>> 3) We repla
"Daniel Verite" wrote on 06/24/2020 10:08:27 AM:
> Unfortunately it [pg_repack] can't work with pg_largeobject
That is unfortunate, and potentially leaves us in a difficult spot.
Is it possible to configure PostgreSQL to use a user table for large
objects instead of a system table?
We're findi
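There is no configuration switch to relocate pg_largeobject, but a common alternative (a sketch) is to store the binary data in an ordinary user table with a bytea column, which can then be maintained with regular tools such as pg_repack:

```sql
CREATE TABLE app_blobs (
    id      bigserial PRIMARY KEY,
    content bytea NOT NULL   -- up to 1 GB per value
);
```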
David Rowley writes:
> On Wed, 24 Jun 2020 at 20:41, Jaime Soler wrote:
>> I don't know why pgbench use timestamp: «2006-03-01 00$1$2» instead of
>> timestamp '2006-03-01 00:00:00'
> I've not debugged it, but it looks like pgbench thinks that :00 is a
> pgbench variable and is replacing each i
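Consistent with that explanation, one workaround (a sketch; the poster's actual fix was to truncate to the date part) is simply to avoid literal colons in the script, e.g. by writing midnight timestamps without a time-of-day component:

```sql
SELECT timestamp '2005-09-01'
     + random() * (timestamp '2006-03-01' - timestamp '2005-09-01');
```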
Jim Hurne wrote:
> "Daniel Verite" wrote on 06/22/2020 06:00:37 PM:
> > If you can arrange a maintenance window, a faster way to rebuild
> > pg_largeobject when it contains mostly empty pages can be to:
>
> Thanks Daniel. If we cannot arrange for a maintenance window, is this the
> sort
"Daniel Verite" wrote on 06/22/2020 06:00:37 PM:
> If you can arrange a maintenance window, a faster way to rebuild
> pg_largeobject when it contains mostly empty pages can be to:
Thanks Daniel. If we cannot arrange for a maintenance window, is this the
sort of thing that pg_repack[1] could be u
Hi,
Thanks for your comments. I worked around the problem because I was able
to truncate the timestamp and use only the date part; it might also
work to use to_timestamp. But I would like to understand what is
happening. I realized that pgbench is erroneously identifying the minutes
and se
On Wed, 24 Jun 2020 at 20:41, Jaime Soler wrote:
>
> Hi, does anybody know what is wrong with pgbench in this case? Here is a
> simple query to generate a random date in a interval time.sql:
>
> (select timestamp '2005-09-01' + random() * ( timestamp '2006-03-01
> 00:00:00' - timestamp '2005
On 2020-06-23 16:29:46 -0400, Bee.Lists wrote:
> I have an issue with a server (v10) that’s seeing increasing
> connections until it’s maxed out.
>
> max_connections for my 4-core server is set to 12.
The default is 100. What was your reason for reducing it to such a low
value?
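For context, the current load against the limit can be checked with:

```sql
SHOW max_connections;
SELECT count(*) AS open_connections FROM pg_stat_activity;
```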
> I’ve inst
On 2020-06-24 07:30:05 +, Klaudie Willis wrote:
> show default_statistics_target; --> 500
> ALTER TABLE public.bigtable ALTER COLUMN instrumentid_ref SET STATISTICS 5000;
>
> Here is the output of the "ANALYZE VERBOSE bigtable;"
> INFO: analyzing "public.bigtables" inheritance tree
[...]
> IN
Hi, does anybody know what is wrong with pgbench in this case? Here is a
simple query to generate a random date in a interval time.sql:
(select timestamp '2005-09-01' + random() * ( timestamp '2006-03-01
00:00:00' - timestamp '2005-09-01 00:00:00' ));
query executed successfully with psql
Hello all,
Your expertise is needed on this. I was going through previous mails
regarding the same topic and was able to set up the slave with
hot_standby_feedback enabled.
Queries are running fine, with bloat occurring on the master.
I tried the below scenario, where I disabled hot_standby_feedback an
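For reference, hot_standby_feedback is a reload-level (sighup) parameter, so toggling it on the standby can be sketched as (assuming ALTER SYSTEM is available, i.e. 9.4+):

```sql
ALTER SYSTEM SET hot_standby_feedback = off;
SELECT pg_reload_conf();   -- takes effect without a restart
```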
show default_statistics_target; --> 500
ALTER TABLE public.bigtable ALTER COLUMN instrumentid_ref SET STATISTICS 5000;
Here is the output of the "ANALYZE VERBOSE bigtable;"
INFO: analyzing "public.bigtables" inheritance tree
INFO: "bigtable_y2018": scanned 622250 of 10661013 pages, containing 1199
Hi Wolf,
> On 23. Jun, 2020, at 22:59, Wolff, Ken L wrote:
>
> https://www.netapp.com/us/media/tr-4770.pdf
I will check this out. Thank you very much.
Cheers,
Paul