Hi Tom
Thanks for replying so fast. You are absolutely right.
I changed the code file but failed to install it. :-( I am sorry for
not checking the obvious.
Kind regards
Thiemo
Quoting Tom Lane:
Thiemo Kellner writes:
> I created a function that ought to return a timestamptz (likewise with
> another timestamp type), but calling it I get the mentioned error. What
> do I miss?
Whether that:
> return '294277-01-01 00:59:59.99'::timestamptz;
is in range or not depends on your timezone.
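For illustration, a minimal sketch of that point (the specific zone is only an
example): the literal is parsed in the session time zone and then checked
against the UTC upper limit of roughly 294276 AD, so the same string can be in
or out of range depending on the timezone setting.

SET timezone = 'Europe/Zurich';  -- UTC+1 in January
SELECT '294277-01-01 00:59:59.99'::timestamptz;
-- ok: becomes 294276-12-31 23:59:59.99 UTC, which is within range
SET timezone = 'UTC';
SELECT '294277-01-01 00:59:59.99'::timestamptz;
-- ERROR:  timestamp out of range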
Hi all
I created a function that ought to return a timestamptz (likewise with
another timestamp type), but calling it I get the mentioned error. What do I
miss? I tried to return a timestamp of the year 2000 to no avail.
Call: select utils.get_max_timestamptz();
--
Function
--
create or replace functio
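The function body is cut off above; a hypothetical reconstruction that is
consistent with the call and the error (only the name comes from the call,
the rest is an assumption):

create or replace function utils.get_max_timestamptz()
    returns timestamptz
    language plpgsql
    immutable
as
$$
begin
    -- assumed body: the literal that is out of range at UTC
    return '294277-01-01 00:59:59.99'::timestamptz;
end;
$$;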
On 8/23/19 3:47 PM, PegoraroF10 wrote:
This week we added just 5 new customers. Every schema has 100 tables, 300
indices, 400 triggers.
I cannot imagine our script doing 75000 updates just for adding those
schemas.
Very rough calculation:
https://www.postgresql.org/docs/11/catalog-pg-class.html
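One way to sanity-check that arithmetic is to count the catalog rows a single
customer schema actually occupies (the schema name is a placeholder):

select (select count(*) from pg_class c
          join pg_namespace n on n.oid = c.relnamespace
         where n.nspname = 'customer_0001') as pg_class_rows,
       (select count(*) from pg_attribute a
          join pg_class c on c.oid = a.attrelid
          join pg_namespace n on n.oid = c.relnamespace
         where n.nspname = 'customer_0001') as pg_attribute_rows,
       (select count(*) from pg_trigger t
          join pg_class c on c.oid = t.tgrelid
          join pg_namespace n on n.oid = c.relnamespace
         where n.nspname = 'customer_0001') as pg_trigger_rows;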
Can you manually execute vacuum analyze on all three involved tables and
again share the plan for the same query? If it is significantly improved,
it would seem like following the recommendation to tune autovacuum (and
analyze) to be more frequent would be prudent.
You haven't seemed to change fro
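For reference, a manual run against the catalogs the posted query touches
(which three tables are meant is an assumption, based on the query below
joining pg_class, pg_namespace and pg_index):

vacuum analyze pg_catalog.pg_class;
vacuum analyze pg_catalog.pg_namespace;
vacuum analyze pg_catalog.pg_index;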
This week we added just 5 new customers. Every schema has 100 tables, 300
indices, 400 triggers.
I cannot imagine our script doing 75000 updates just for adding those
schemas.
autovacuum_analyze_threshold = 50
autovacuum_analyze_scale_factor = 0.1
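If the advice to analyze more often is followed, one cluster-wide way to do it
(the values are illustrative, not a recommendation from this thread):

alter system set autovacuum_analyze_scale_factor = 0.02;
alter system set autovacuum_analyze_threshold = 50;
select pg_reload_conf();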
https://explain.depesz.com/s/5Rrd
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT
current_database() AS TABLE_CATALOG, n.nspname AS TABLE_SCHEMA, t.relname AS
TABLE_NAME, t.oid AS TABLE_OID, current_database() AS INDEX_CATALOG,
n.nspname AS INDEX_SCHEMA, c.relname AS INDEX_NAME, c.oid AS INDEX_O
On 8/23/19 2:47 PM, PegoraroF10 wrote:
Coming back to my problem: today it happened again, that search became slow.
Explaining again, autovacuum and autoanalyze for the pg_* tables are not
configured individually, so they use the defaults of scale_factor = 0.2
and threshold = 50. Today, when that pro
Might the issue be with pg_index or pg_attribute rather than pg_class? Is
the query still slow? Do you have the output of explain analyze for use on
https://explain.depesz.com/ or the output of EXPLAIN (ANALYZE, COSTS,
VERBOSE, BUFFERS, FORMAT JSON) for use on
http://tatiyants.com/pev/#/plans/new w
Coming back to my problem: today it happened again, that search became slow.
Explaining again, autovacuum and autoanalyze for the pg_* tables are not
configured individually, so they use the defaults of scale_factor = 0.2 and
threshold = 50. Today, when that problem of speed came back, the view
pg_stat_s
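To see whether autovacuum/autoanalyze actually ran on the catalogs, something
like this (assuming the truncated view name above is pg_stat_sys_tables):

select relname, n_live_tup, n_dead_tup, n_mod_since_analyze,
       last_autovacuum, last_autoanalyze
  from pg_stat_sys_tables
 where relname in ('pg_class', 'pg_index', 'pg_attribute');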
What is default_statistics_target set to? Or is there a custom statistics
value set for this table/column? Perhaps the planner is making sub-optimal
choices because it only has a vague idea about the data and the histogram
is not very informative. Planning time will increase when statistics target
is
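For example (the table and column names are placeholders, not from the thread):

show default_statistics_target;   -- 100 unless it has been changed
alter table some_table alter column some_column set statistics 500;
analyze some_table;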
Hi Bikram,
On 23.08.2019 at 22:10, Bikram Majumdar wrote:
Hi George,
Thank you for your response.
But my question is: how does it get the IP address 172.17.0.2?
And how does one run the psql command to connect/log in to the test
database (the pgwatch configuration database) to add any data
Hi George,
Thank you for your response.
But my question is: how does it get the IP address 172.17.0.2?
And how does one run the psql command to connect/log in to the test database
(the pgwatch configuration database) to add any database?
Thanks and regards,
Bikram
On Fri, Aug 23, 2019 at 2:0
On Thu, Aug 22, 2019 at 1:46 PM Jeremy Finzel wrote:
> Good afternoon!
>
> I am finding it difficult to understand how to maintain my BRIN index from
> the docs. Specifically, this is the documentation on the
> function brin_summarize_range which isn't clear to me:
>
> brin_summarize_range(index
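For what it's worth, the maintenance calls in question look roughly like this
(index name and block number are placeholders):

select brin_summarize_new_values('my_brin_idx');  -- summarize all unsummarized page ranges
select brin_summarize_range('my_brin_idx', 0);    -- summarize the range containing block 0
select brin_desummarize_range('my_brin_idx', 0);  -- drop that summary so it can be rebuilt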
On 8/23/19 3:38 AM, Devrim Gündüz wrote:
Hi,
v12 is in testing repos only. Find
[pgdg12-updates-testing]
in the repo file, and enable that one, please.
You may want to include the above here:
https://yum.postgresql.org/news-12snapshot-ready-for-testing.php
Regards, Devrim
On 22 August 2
Hi,
v12 is in testing repos only. Find
[pgdg12-updates-testing]
in the repo file, and enable that one, please.
Regards, Devrim
On 22 August 2019 21:29:44 GMT+03:00, "Igal @ Lucee.org" wrote:
>I have installed the CentOS 7 PGDG file from
>https://yum.postgresql.org/repopackages.php
>
>If I `c
On 21/08/2019 22:15, stan wrote:
I have a situation where we need to import data, as an interim measure,
from spreadsheets.
I have read up on \copy and COPY, but I do not see that either of these can
use the header from a CSV file to define a new table. Am I missing
something?
Also downloaded s
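As far as I know, neither COPY nor \copy will create the table from the CSV
header; the table has to exist first. A minimal sketch (file path, table and
column names are assumptions):

create table import_raw (col_a text, col_b text, col_c text);
\copy import_raw from 'data.csv' with (format csv, header true)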
Hello Bikram,
On 22.08.2019 at 23:50, Bikram MAJUMDAR wrote:
Hi,
Need help from anyone in the team who has worked on pgwatch2.
Yesterday we installed the pgwatch2 Docker image and started the container.
We are running the pgwatch2 container from the database server itself.
Now we want to add our