something
changed in the underlying Postgres build.
On Wed, Sep 20, 2023 at 7:11 PM Erik Wienhold wrote:
> On 2023-09-20 17:53 -0400, Michael Corey wrote:
> > To make matters even more strange. I checked the permissions of
> > rds_superuser in 15 and 14
> >
> >
t sure that this is the correct path to do
so or that in some cases forcing the hand of the user was incorrect.
It was also creating a penalty in some of the hot loops of area:
commit: dbab0c07e5ba1f19a991da2d72972a8fe9a41bda
committer: Michael Paquier
date: Mon, 14 Jun 2021 09:25:50 +0900
Remove force
lation if you are too aggressive with the updates. You
could do that in periodic steps, as well.
--
Michael
signature.asc
Description: PGP signature
les, but should I take an exclusive lock on the table,
> switch it to unlogged,
> rewrite, and switch it back to logged?
Switching a table back to be logged requires all its 8k blocks to be
WAL-logged, so that would be roughly the same as a plain UPDATE.
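As a rough sketch of the round trip being discussed (table and column names are hypothetical), both ALTER steps rewrite or WAL-log the whole table, which is why it buys little here:

```sql
-- Hypothetical sketch: the unlogged/logged round trip.
ALTER TABLE big_table SET UNLOGGED;          -- rewrites the table
UPDATE big_table SET counter = counter + 1;  -- the bulk update, no WAL for the data
ALTER TABLE big_table SET LOGGED;            -- rewrites again, WAL-logging every block
```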
--
Michael
y to them. I have an example of this kind of
idea in one of my plugin modules here:
https://github.com/michaelpq/pg_plugins/tree/main/jsonlog
--
Michael
I'm not clear which rows are duplicates.
It looks like limb_id, branch_id and twig_id make up a unique key.
If there was a row (2, 2, 7,103.7) would it be a duplicate of rows
(1,1,2,103.7) and (1,1,3,103.7)?
--
Mike Nolan
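For what it's worth, either reading of "duplicate" can be checked with a grouped query; this is a sketch assuming a hypothetical table twigs(limb_id, branch_id, twig_id, val):

```sql
-- Duplicates of the apparent unique key:
SELECT limb_id, branch_id, twig_id, count(*)
FROM twigs
GROUP BY limb_id, branch_id, twig_id
HAVING count(*) > 1;

-- Or, if "duplicate" means a repeated measurement value:
SELECT val, count(*)
FROM twigs
GROUP BY val
HAVING count(*) > 1;
```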
To add to what Steve has said, see if you can find a room with
equipment similar to what you'll be using for the actual presentation
to practice in. Put up your most complex slide, then take a seat in
the back of the room and imagine what it'll look like with a room full
of people in front of you.
Sorry for the top posting, I forget that Gmail does that by default.
Mike Nolan
or older snapshots held by concurrent sessions when the
index build is finished and when the index built gets validated.
--
Michael
plicated to the standby.
- Promote the standby.
- Enable checksums on the previous primary.
- Start the previous primary to be a standby of the node you failed
over to.
--
Michael
have here is that we've lost track of the
patch proposed, so I have added a CF entry for now:
https://commitfest.postgresql.org/46/4720/
--
Michael
I'm trying to install mysql_fdw_16 and it requires the mariadb-devel module.
There is no mariadb-devel module for Almalinux 9. There is a
MariaDB-devel module, but when I install it, the install for
mysql_fdw_16 still tells me that there is no mariadb-devel module
available.
How do I work around
tails, and work on fixing the issue.
--
Michael
This is on AlmaLinux 9.3, installing postgresql from source code.
In PG 16.1 when I try to install pgcrypto, the modules compile but I
get this error when running checks:
CREATE EXTENSION pgcrypto;
+ERROR: could not load library
"/home/postgres/src/postgresql-16.1/tmp_install/usr/local/pgsql/li
Sorry for the delay in responding, network issues kept me offline for
several days.
These are the openssl packages installed from the Almalinux 9 repositories:
apr-util-openssl.x86_64 1.6.1-23.el9 @appstream
openssl.x86_64 1:3.0.7-24.el9
On Wed, Jan 17, 2024 at 5:32 PM Daniel Gustafsson wrote:
>
> > On 18 Jan 2024, at 00:24, Michael Nolan wrote:
> >
> > Sorry for the delay in responding, network issues kept me offline for
> > several days.
> >
> > These are the openssl packages inst
On Thu, Jan 18, 2024 at 5:03 AM Daniel Gustafsson wrote:
>
> > On 18 Jan 2024, at 00:59, Michael Nolan wrote:
> > On Wed, Jan 17, 2024 at 5:32 PM Daniel Gustafsson wrote:
> >>
>
> That's surprising, I expected that it would require the legacy provider be
On Thu, Jan 18, 2024 at 9:51 AM Tom Lane wrote:
>
> Looks like we have aarch64 and ppc64 machines running Alma 8 and 9.
> No x86 though, which might matter for such a low-level failure
> as this.
So I guess that'll be on the list to add to the build farm at some
point? (My Xanthian 'talent' of f
Writing or debugging makefiles is something I haven't done much of,
but as best I can figure out the problem is that the libcrypto.so file
isn't being linked in, though this line in the Makefile in pgcrypto
seems to say should be:
SHLIB_LINK += $(filter -lcrypto -lz, $(LIBS))
I'm guessing it is s
Sorry, I meant 'make check'. :sigh:
No, it wasn't there, because I hadn't included --with-openssl in the
configure. Looking at my history, I had done that once earlier but
dropped it for the reason noted below.
Including --with-openssl does include the crypto library, but if I
don't do a 'make clean' before doing a make, I get erro
I'm in the process of updating a server from postgresql 10.4 on Centos
7 to 16.1 on AlmaLinux 9, and am reminded of the old line about when
you're up to your neck in alligators, it is difficult to remember
that your goal was to drain the swamp.
We have several materialized views that are refreshe
On Thu, Jan 25, 2024 at 3:38 PM Tom Lane wrote:
>
> Michael Nolan writes:
> > On the 10.4 server this materialized view works, but on the 16.1
> > server it fails:
>
> > uscf=# refresh materialized view uscf_vip;
> > refresh materialized view uscf_vip;
&
Followup: The problem turned out to be a field in the mysql server
that was not size-limited to 255 characters and had several rows with
as many as 299 characters in them. Apparently when using an FDW and
materialized view in postgresql 10.4, field size limits aren't
checked, because a query on t
RangeVar, so there is no "easy" way to
tackle that :/
--
Michael
ations of this ALTER TABLE statement and none seem
to work. What is the proper way to reattach the identity and the sequence
bearing in mind that I will load the new table with the data from the old
table?
--
Michael Corey
sync?
On Fri, Feb 16, 2024 at 11:35 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Fri, Feb 16, 2024 at 9:24 AM Michael Corey <
> michael.corey...@nielsen.com> wrote:
>
>> If I run these two statements I get an error
>> ERROR: relation "part_t
two .signal files when promotion is achieved would be
a problem to me because we'd reenter recovery again at a follow-up
startup. ArchiveRecoveryRequested should be set if there was either
recovery.signal or standby.signal found at startup, meaning that we
should have a TLI jump at promotion
FOREIGN KEY (par_file_id) REFERENCES
par_log_file(par_file_id);
Version Postgres 13.13
Any help would be appreciated.
--
Michael Corey
My production server has a materialized view that is refreshed from a
mysql_fdw several times a day.
What I'd like to be able to do is refresh the materialized view on the
testbed server from the one on the production server so that they are
the same. (Refreshing it from the MySQL server will res
d for it today by me down to v15. This will
show up in the next round of minor releases.
--
Michael
uld
begin some sort of investigation, for example, to know what is
happening to your system and figure out if there is even something
wrong going on.
Thanks,
--
Michael
I don't have any direct experience with Yugabyte (the databases I work
with are way too small to be on Yugabyte) but my older son does work
for them as an SRE, sometimes remotely when he's visiting us, so we've
talked about it a bit. (It's actually the first time in 20 years I've
had much of a clu
In the following short code, the out parameter d3 is also declared, so
its value will be null in the output.
create or replace function public.test_function(in varchar, in char, in date, ou
t d1 text, out d2 integer, out d3 char, out d4 date )
stable as $$
DECLARE
wk_intxt alias for $1;
wk
On Sun, Jul 7, 2024 at 4:13 AM Pavel Stehule wrote:
>
> but looks so there are false alarms related to using an alias. It is
> interesting so I have not any report about this issue, so probably using
> aliases is not too common today.
I'm not sure why there's a warning about using an alias.
On Thu, Jul 18, 2024 at 4:38 AM sivapostg...@yahoo.com
wrote:
>
> Hello,
> PG V11
>
> Select count(*) from table1
> Returns 10456432
>
> Select field1, field2 from table1 where field3> '2024-07-18 12:00:00'
> Times out
>
> The above query was working fine for the past 2 years.
>
> Backup was taken
Hello Experts,
Our application has a database with a large number of partitioned tables
used to store time series data.
It is partitioned by time: new data is populated into the current
partition, old partitions eventually get dropped, and the partitions in
between are largely static.
I had not
Many thanks David for the comprehensive response.
> I think the complaint was about no autovacuum on the partitioned
> table, not the partitions.
Yes, exactly.
One other piece of information: these tables contain a lot of columns, of which
only 4 are normally used for WHERE clauses or joins. The
E.
Regards
Mike
On Wed, 7 Aug 2024 at 18:09, David Rowley wrote:
>
> On Wed, 7 Aug 2024 at 19:20, Michael Harris wrote:
> > I found that running an ANALYZE specifying only those 4 columns only took
> > 5 minutes, compared to the 30 minutes for the whole table.
> >
> > T
I've got two Postgres 13 databases on AWS RDS.
* One is a master, the other a slave using logical replication.
* Replication has fallen behind by about 350Gb.
* The slave was maxed out in terms of CPU for the past four days because of
some jobs that were ongoing so I'm not sure what l
Why not just do a plain update, relying on row-level locking to serialize
requests properly, and then an insert where not exists? Is there
value in doing the delete? I don't see it.
Note: the ON CONFLICT clause is supported from 9.5 onward, and that
version is already past EOL. Upgrading to at least v10 is recom
I just meant a regular update (which might impact 0 rows) and then insert
(not exists) like you are doing already.
--transaction still ways. Should manual locking used or is there better
method.
I don't follow what you mean.
ies in the logs if the internal unlink() failed
(see RemoveXlogFile() in xlog.c).
--
Michael
t; safe to kill it during database usage.
I don't understand what you mean here.
--
Michael
ill need to figure out what's causing those files to remain around
on your side, perhaps we should improve the situation in the backend
itself.
--
Michael
I can't find a page on the website right now that specifies it, but the
feature freeze is around April each year then general availability in
October. Minor releases are scheduled quarterly I believe. Major versions
are EOL'd after 5 or so years.
;t have the pg_xact
> files for that around. Should pg_resetwal create the files in the gap
> between the old NextXID and the new one?
I think that you should add this patch to the next commit fest to
track it properly:
https://commitfest.postgresql.org/33/
--
Michael
On Sat, Mar 13, 2021 at 12:03:04PM -0800, Adrian Klaver wrote:
> So, the 10 instance is not running and the 9.5 instance is listening on the
> default port. At this point I would leave things as they are.
Robert, you may want to know that 9.5 has been EOL'd by the community.
Just saying.
an environment where this is easily
reproducible is a key thing.
--
Michael
same error as yours.
In one of those servers, do you have in pg_wal/ some files named
xlogtemp.N? N is an integer that would be the PID of the process that
generated it.
--
Michael
From 961f9a03d4c27220c33e88402d5ef274424a0ab2 Mon Sep 17 00:00:00 2001
From: Michael Pa
ly
test. I would not send that to the lists as an installation is rather
large, but I could just send you links from where you could download
both of them. Then you would just need to stop the Postgres service,
do a drop-in deplacement of the binaries, and start again the Postgres
service.
, LDAP, uuid-ossp, xml2, PL/Perl
PL/Python, or the XML datatype for your applications there? It may be
better if those custom builds have a minimum number of dependencies
filled, while still being compatible with what you do on those servers
so as they can still have some load applied
version cutoff is wrong? Maybe we have to do this with
> a run-time version check, instead of statically compiling it?
All the reports received are on 13.1 and 13.2. This code is new as of
bed9075, no?
--
Michael
, 13.2 unpatched and 13.2 patched.
--
Michael
g
CreateHardLinkA()+_unlink() instead of a single rename when attempting
to recycle a segment. And I am just in a mood to build things by
myself and send some links to people to be able to download and test
that, so one more is fine..
--
Michael
On Fri, Mar 19, 2021 at 09:00:10AM +0200, Andrus wrote:
> I replaced files in 13.1 server with ones from your patched version. There
> are no errors in log file now for 8 hours.
Yippee. Thanks.
Have you tested the unpatched builds? And did you see some errors
with them?
--
M
n 13.3~).
--
Michael
gments. This
requires a separate investigation.
--
Michael
Hi,
My application is getting the following exception:
InternalError: (psycopg2.InternalError) MultiXactId 808263738 has not been
created yet -- apparent wraparound
I read over
https://www.postgresql.org/docs/9.6/routine-vacuuming.html#VACUUM-FOR-MULTIXACT-WRAPAROUND
and I suspect the issue was
I don't believe you can determine a date/time that it happened, but
querying the age of the table is simple to do and then you can compare that
with the freeze age parameters. A periodic manual vacuum freeze may
preclude the system ever needing to perform the emergency autovacuum
freeze, and as an
Etiquette on these lists is to reply in line or below the relevant portion,
not top-post with full quoting like default gmail behavior.
On Wed, Mar 31, 2021 at 9:18 AM Mohan Radhakrishnan <
radhakrishnan.mo...@gmail.com> wrote:
> But we don't search using UUIDs always. Only when data from another
You can have an updatable view.
Joins are allowed though.
postgresql.org/docs/current/sql-createview.html
My apologies. It seems INSTEAD OF triggers are required to implement
updates across multiple tables. I thought not if all were simple joins. My
mistake.
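A minimal sketch of an INSTEAD OF trigger on a join view (all names are hypothetical):

```sql
CREATE VIEW person_v AS
SELECT p.id, p.name, a.city
FROM person p
JOIN address a ON a.person_id = p.id;

CREATE FUNCTION person_v_update() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    -- Route each column back to its base table.
    UPDATE person  SET name = NEW.name WHERE id = OLD.id;
    UPDATE address SET city = NEW.city WHERE person_id = OLD.id;
    RETURN NEW;
END $$;

CREATE TRIGGER person_v_update
INSTEAD OF UPDATE ON person_v
FOR EACH ROW EXECUTE FUNCTION person_v_update();
```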
pgrade but I would need to justify it somehow, so if I am missing
something please let me know.
On Tue, Mar 23, 2021 at 11:54 AM Michael Lewis wrote:
> I don't believe you can determine a date/time that it happened, but
> querying the age of the table is simple to do and then you ca
If you share example schema and desired output (like a dummy table or even
pseudo code SQL), then I'm sure many people could help you. Right now, the
description of your desired result seems a bit unclear, at least to me.
If you wanted to run this hourly for the last 1 hour, it sounds a bit like
Your query and explain analyze output do not seem to match.
Filter: (cred_id = '1001344096118566254'::bigint)
I don't see anything like that in your query, nor an index that would
support accomplishing that without filtering after fetching the 184k rows
initially like the planner does.
>
On Fri, Apr 23, 2021, 2:32 AM Mohan Radhakrishnan <
radhakrishnan.mo...@gmail.com> wrote:
> I execute this on the AWS RDS instance. Is there something in the plan I
> should pay attention to ? I notice the Execution Time.
>
>
> " -> Bitmap Index Scan on "testtab_date_brin_idx " (cost=0.00..20.0
On Sat, Apr 24, 2021, 1:27 AM Mohan Radhakrishnan <
radhakrishnan.mo...@gmail.com> wrote:
> What's your question exactly? If you have confidence that correlation will
> remain high (insert only table, or occasional cluster/repack with cluster
> is done), then BRIN can be a good fit. If you do upda
Without knowledge of the structure of your data, or even real table/field
names to be able to read the query with some sense of what the tables might
be like, offering advice is rather difficult. I didn't notice any nodes
with huge estimation problems at least.
I will suggest that DISTINCT is ofte
and
discussions that this would easily get lost if you don't register it.
And from what I can see having a discussion on this matter looks
adapted to me.
--
Michael
> What can and does happen is that a new version is released under a
different license while the old version is made obsolete.
It is often more than just 'made obsolete', updates to other parts of the
OS (that are almost impossible to avoid installing) can make it so those
older products either d
It seems like there is some flaw here. From my reading, on insert of any
row, you are updating ALL rows in the same table to just remove an
underscore if it matches the pattern of 'US_' at the beginning. That
doesn't seem likely to be what you want. I'd think you would want something
like the below
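One way a per-row version might look (purely illustrative; the prices table and code column are hypothetical) is a BEFORE trigger that touches only the incoming row instead of the whole table:

```sql
CREATE FUNCTION strip_us_underscore() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    -- Only rewrite the row being inserted, not every row in the table.
    IF NEW.code LIKE 'US\_%' THEN
        NEW.code := 'US' || substr(NEW.code, 4);
    END IF;
    RETURN NEW;
END $$;

CREATE TRIGGER prices_strip_us
BEFORE INSERT ON prices
FOR EACH ROW EXECUTE FUNCTION strip_us_underscore();
```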
Are you wanting to minimize the locking time, or ensure the whole process
completes as soon as possible? If the former, you can insert into a temp
table like the real one (which is maybe what pricelistnew is already),
delete the rows where they don't have a valid reference, and then insert
all remai
On Sat, May 29, 2021 at 9:15 AM Laura Smith <
n5d9xq3ti233xiyif...@protonmail.ch> wrote:
> Hi
>
> I've got a bit of a puzzle that I'm not quite sure how to approach.
>
> Let's say I've got a table of bios, so :
>
> create table bios (
> first_name text not null,
> last_name text not null,
> person
On Sat, May 29, 2021, 4:40 AM Lionel Bouton wrote:
> The last time I had to use this setting to solve this kind of problem I
> ended with :
>
> default_statistics_target = 500
>
> But obviously the value suited to your case could be different (I'd
> increase it until the planner uses the correct
Why not change the defaults? How many tables would hit this new threshold
and you would NOT want autovacuum to process them?
>
Greetings!
I'm trying to insert a record returning column value using the following query:
INSERT INTO t_load (load_id, load_type) VALUES (:id, :typ) returning
load_idx INTO :idx
But when I try to prepare the query I'm getting the following error:
PREPARE p1515544c0001 AS insert into
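The INTO :idx clause is ECPG/embedded-SQL host-variable syntax; in plain server-side SQL, as sent through PQexec(), the returned value comes back as an ordinary result set instead. A hedged sketch, with assumed parameter types:

```sql
PREPARE p1 (bigint, text) AS
INSERT INTO t_load (load_id, load_type)
VALUES ($1, $2)
RETURNING load_idx;

-- The RETURNING value arrives as a one-row result set to be read
-- by the client (e.g. via PQgetvalue), not stored INTO a host variable.
EXECUTE p1 (42, 'batch');
```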
Hallo, no.
I am using postgres c library, namely I'm calling PQexec()
Best regards,
On 24.06.2021 16:14, Laurenz Albe wrote:
> On Thu, 2021-06-24 at 12:31 +0300, Michael Ivanov wrote:
>> I'm trying to insert a record returning column value using the following
>> qu
r clarification!
On 24.06.2021 17:13, Laurenz Albe wrote:
> On Thu, 2021-06-24 at 16:25 +0300, Michael Ivanov wrote:
>>>> I'm trying to insert a record returning column value using the following
>>>> query:
>>>>
>>>>INSERT INTO t_loa
Do you have an index on the "updated_at" field and found that the query is
too slow? Do you have an example query?
I am unclear exactly what you want to do with modified_date. Can you write
pseudo code perhaps?
It sounds like you are wanting to union all the results of the query you
gave, with the results of a similar query where modified_date >=
'2021-03-01 08:16:13.589' and created_at < '2021-03-01 08:16:13.
Are vacuum and analyze happening regularly on the live system? What's an
example query that uses indexes on test and does not on live? Does the live
system show poor estimates when executing 'explain analyze select...' and
the test system show semi-accurate row estimates?
50 million seems to be a
Other than rows being frozen on test (and not on live), I'm not aware of
anything that would push the planner to choose to do an index scan on an
entire table. Maybe someone else will chime in. Or, if you try running
vacuum freeze on live and can verify if that changes the result.
I'm not sure why
It sounds like you are wanting to run 'explain analyze [query]' inside a
loop inside a DO block. That isn't possible as far as I know, but
auto_explain and log_nested_statements should be able to let you profile
the whole thing and perhaps you can pick out the part you want from the
logs.
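A minimal sketch of that auto_explain setup at the session level (the thresholds are illustrative):

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;       -- log every statement's plan
SET auto_explain.log_analyze = on;           -- include actual run times
SET auto_explain.log_nested_statements = on; -- capture statements inside the DO block
-- Now run the DO block; the per-statement plans appear in the server log.
```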
Why are you using hash partitioning?
((current_setting('env.groupid'::text))::integer)::numeric
If you pass this value in directly as part of the query string, how does it
perform? It seems like calling the function to get this value may be
creating a planning problem with the value unknown at plan time. If you
were able to put the r
Have you tried setting autovacuum to run quite aggressively, perhaps just
on this table? Have you tried an index on the status column only, rather
than partial?
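Per-table autovacuum tuning is done with storage parameters; a sketch, with illustrative values and a hypothetical table name:

```sql
ALTER TABLE queue_table SET (
    autovacuum_vacuum_scale_factor = 0.0,  -- ignore table size...
    autovacuum_vacuum_threshold    = 1000, -- ...vacuum after ~1000 dead rows
    autovacuum_vacuum_cost_delay   = 0     -- don't throttle the vacuum worker
);
```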
lemented the same?
Hard to answer with so little detail, but if you are referring to the
backend server, aren't you looking for the fact that SSL contexts and
its surrounding applications can be reloaded? That would apply after
a simple pg_ctl "reload" for example.
--
Michael
Hello Experts
We have a large-ish (16T) database cluster which we are performing the
following sequence on.
- First we upgrade the whole cluster from pg11 to pg13, using pg_upgrade
(this succeeds)
- Next we run a migration script on each database in the cluster. The
migration script
e, but
maybe that's not possible with a database of this size.
I am surprised by the error message though - I thought that if we'd hit some
limit on a transaction we would get a more specific error.
Cheers
Mike
-Original Message-
From: Tom Lane
Sent: Thursday, August 12, 2021 1:41
On Fri, Aug 13, 2021 at 3:02 AM Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:
> ... use binary split for large partitions, to avoid large row movements.
>
Would you expound on this?
FROM pg_class WHERE relname='my_table';
Best,
Michael
huge number. Probably it needs to be
broken into multiple smaller databases, but that's easier said than done.
Thanks again
Cheers
Mike
-Original Message-
From: Simon Riggs
Sent: Thursday, August 12, 2021 7:19 PM
To: Michael Harris
Cc: Tom Lane ; pgsql-general@lists.postgresql.org
S
If you want to update many rows after updating N rows, you want an after
STATEMENT trigger which is executed after insert, update or delete. You
also want to ensure that only that function maintains sort_order field and
that you don't update recursively, perhaps by executing that function when
NEW.
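A rough sketch of such an AFTER ... FOR EACH STATEMENT trigger, guarded against recursion with pg_trigger_depth() (all names hypothetical):

```sql
CREATE FUNCTION resequence_sort_order() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    IF pg_trigger_depth() > 1 THEN
        RETURN NULL;  -- fired by our own UPDATE below; don't recurse
    END IF;
    UPDATE items i
    SET sort_order = r.rn
    FROM (SELECT id, row_number() OVER (ORDER BY sort_key) AS rn FROM items) r
    WHERE i.id = r.id
      AND i.sort_order IS DISTINCT FROM r.rn;
    RETURN NULL;
END $$;

CREATE TRIGGER items_resequence
AFTER INSERT OR UPDATE OR DELETE ON items
FOR EACH STATEMENT EXECUTE FUNCTION resequence_sort_order();
```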
This is my first post so if I’m in the wrong place please help me get to the
correct list.
Issue:
I installed PostgreSQL on my Mac over a year ago and since upgraded my disk
drive and OS to Big Sur. I am a retired Software Engineer and still like to
program and am learning PostgreSQL and have
0/10,250/25,250/250,300/20,300/30,300/7,300/300,300/10,300/50,400/20,400/25,450/50,500/500,500/50,500/35,500/25,600/40,1000/20,1000/40,1000/1000,1000/35,1000/50,1000/500
*/
If you don't need the order maintained, it becomes a much simpler problem
and you can strip off some of this complexity.
*
Btw- My apologies for top posting. I think my caffeine wore off.
INSERT INTO person VALUES ('Roe', 'ok');
SELECT * FROM person order by current_mood;
SELECT * FROM person order by current_mood desc;
Note- using enum may complicate other things in your usage, so I am not
suggesting this is ideal, just one option.
*Michael Lewis | Database Engineer*
*Entrata*
This is showing many false positives from the index scan that get removed
when the actual values are examined. With such a long search parameter,
that does not seem surprising. I would expect a search on "raj nagar
ghaziabad 201017" or something like that to yield far fewer results from
the index s
601 - 700 of 889 matches