On Thu, Mar 31, 2022 at 12:58 Marc wrote:
> On 29 Mar 2022, at 17:17, Stephen Frost wrote:
> > Greetings,
> > * Alvaro Herrera (alvhe...@alvh.no-ip.org) wrote:
> > > On 2022-Mar-22, Shukla, Pranjal wrote:
> > > > Are there any disadvantages of increasing the “wal_keep_segments” to a
> > > > higher number say, 500? Will it have any impact on performance of
> > > > streaming replication, on primary or secondary servers?
On 2022-Mar-22, Shukla, Pranjal wrote:
> Team,
> Are there any disadvantages of increasing the “wal_keep_segments” to a
> higher number say, 500? Will it have any impact on performance of
> streaming replication, on primary or secondary servers?
No. It just means WAL will occupy more disk space.
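For scale, a sketch of the disk arithmetic behind that answer (assuming the default 16 MB WAL segment size; note that `wal_keep_segments` was replaced by `wal_keep_size` in PostgreSQL 13):

```
# postgresql.conf -- illustrative value, not from the thread
wal_keep_segments = 500     # 500 segments x 16 MB = ~8 GB retained in pg_wal
```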
Team,
Are there any disadvantages of increasing the “wal_keep_segments” to a higher
number say, 500? Will it have any impact on performance of streaming
replication, on primary or secondary servers?
Thanks & Regards
Pranjal Shukla
On 3/14/22 23:42, Shukla, Pranjal wrote:
Thanks Adrian,
Can we say that, "Despite an informational Error, entire data got imported with sanity in the original case"?
Yes.
To verify see that the public schema is there and has tables and other
objects in it.
Also, can we say that either of the approaches mentioned i.e. Approach 1 & 2
are equally good to do migration from PG 10 to PG 12?
Thanks Adrian,
Can we say that, "Despite an informational Error, entire data got imported with
sanity in the original case"? Also, can we say that either of the approaches
mentioned i.e. Approach 1 & 2 are equally good to do migration from PG 10 to PG
12?
Thanks & Regards
Pranjal Shukla
On 3/14/22 06:39, Shukla, Pranjal wrote:
Hello,
We tried importing into an empty database in PG 12 from the dump that
was created in PG 10. Import was successful but we got an message that
an error was ignored. We again imported with -e option and the following
message was printed:
pg_restore -h 127.0.0.1 -p 5432 -U postgres -d mydb -v "/var/mydata/dbbackup"
To mitigate the same, we took SQL (.sql) dump of the above database in PG10 and
restored in an empty database in PG12, it worked. Using this process, we
migrated our database from PG10 to 12.
Hello,
starting with an explanation first... i have some sort of 'cold
backup' system: it rsyncs a postgres dump (size: ~1.5TB) from the main
system (created with 'pg_dump $dbname > db.sql') to the backup system
(10TB hdds in a raid10), and then automatically imports that dump
(psql $dbname < db.sql)
Thanks to Imre Samu's help, I found out that this is an unwarranted
interference of the JIT compilation. When it is disabled, the short queries
work stably. Before the problem started, I purposely increased the amount
of surrogate data to evaluate performance. Perhaps the logic for enabling
JIT compilation
#jit_above_cost = 100000		# perform JIT compilation if available
					# and query more expensive than this;
					# -1 disables
jit = off				# allow JIT compilation
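For anyone landing here, JIT need not be disabled cluster-wide in postgresql.conf; these are standard alternatives (the database name is illustrative):

```
-- disable for the current session only
SET jit = off;

-- disable for one database (applies to new connections)
ALTER DATABASE mydb SET jit = off;

-- or keep JIT but raise the cost threshold so short queries skip it
SET jit_above_cost = 1000000;
```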
--
Regards, Dmitry!
Sat, 11 Dec 2021 at 09:12, Imre Samu wrote:
Hi Dmitry,
pg12:
> Execution Time: 44.123 ms
pg14:
> JIT:
>   Functions: 167
>   Options: Inlining true, Optimization true, Expressions true, Deforming true
>   Timing: Generation 9.468 ms, Inlining 55.237 ms, Optimization 507.548 ms, Emission 347.932 ms, Total 920.185 ms
Yes, I did.
Step1
sudo /usr/lib/postgresql/14/bin/pg_dump --file
"/home/dismay/uchet/Uchet.backup" --host "server" --port "5999" --username
"back" --no-password --verbose --format=c --quote-all-identifiers --blobs
--disable-triggers --encoding="UTF8" "Uchet"
Step2
Manual DROP/CREATE BASE from tem
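Step 2 is truncated in the archive; a restore invocation matching the dump from Step 1 might look like this (hypothetical, mirroring Step 1's connection flags):

```
sudo /usr/lib/postgresql/14/bin/pg_restore --host "server" --port "5999" \
  --username "back" --no-password --verbose \
  --dbname "Uchet" "/home/dismay/uchet/Uchet.backup"
```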
On 12/10/21 17:51, Дмитрий Иванов wrote:
Yes, I did.
I reset table statistics, did (VACUUM) ANALYZE, recreated index. Nothing
changes.
I've deleted the database many times, dozens of times. Maybe something
is broken?
How did you do the upgrade?
Yes, I did.
I reset table statistics, did (VACUUM) ANALYZE, recreated index. Nothing
changes.
I've deleted the database many times, dozens of times. Maybe something is
broken?
--
Regards, Dmitry!
Sat, 11 Dec 2021 at 06:13, Adrian Klaver wrote:
> On 12/10/21 17:00, Дмитрий Иванов wrote:
Afternoon. I was able to make the necessary changes to my base needed to
migrate win_pg12 to debian pg14.
But there is a new problem, which was not there at the initial stage so I
checked:
win_pg12:
-> Index Scan using index_class_tree_full on class c (cost=0.28..2.50
rows=1 width=235) (actual t
Thanks Laurenz, will try these flags.
Regards,
Subhrajyoti
On Fri, Oct 1, 2021 at 5:19 PM Laurenz Albe wrote:
On Fri, 2021-10-01 at 14:22 +0530, Subhrajyoti Senapati wrote:
> Was running a few sysbench tests in Postgres12.
> Sysbench Test Config
> oltp-readwrite-custom
> Threads - 500
> Machine: 16 core 64G
>
> In PG server:
> shared_buffers: 16GB
> maintenance_work_memory: 16GB
> checkpoint_timeout: 1h
>
On Wed, Feb 3, 2021 at 10:07:03PM -0500, Craig McIlwee wrote:
(replying to the entire list instead of Bruce only this time...)
> This doesn't make sense to me. Since we hard-linked, why would 12 be so
> much smaller? If it was symlinks, I could imagine that, but it doesn't
> use symlinks, just hard links, so it should be similar. Please look at
> the siz
the issue relates to the PGDATA/postgresql.auto.conf file being just
copied from the original 11/main with data_directory being set to
/var/lib/postgresql/11/main.
If I were to run pg_dropcluster 11 main to remove the old database and
conf files, will this destroy my running Pg12 database with hard linked
files in 11/main and 12/main? In theory it s
On Sat, Mar 28, 2020 at 05:53:59PM +0900, Michael Paquier wrote:
> And I'll follow up there with anything new I find. Please let me know
> if there are any objections with the revert though, this will address
> the problem reported by Justin.
Okay. Done with this part now as of dd9ac7d. Now for
On Sat, Mar 28, 2020 at 11:29:41AM -0700, Andres Freund wrote:
> I assume you're still trying to track the actual cause of the problem
> further?
That's the plan, and I'll try to spend some time on it next week. Any
new information I have will be added to the thread you have begun on
-hackers a c
Hi,
On 2020-03-28 17:47:19 +0900, Michael Paquier wrote:
> On Fri, Mar 27, 2020 at 05:10:03PM -0500, Justin King wrote:
> > This is encouraging. As I mentioned, we have a workaround in place for
> > the moment, but don't hesitate if you need anything else from me.
> > Thanks for jumping in on the
On Fri, Mar 27, 2020 at 08:23:03PM +0100, Julien Rouhaud wrote:
> FTR we reached the 200M transaction earlier, and I can see multiple logs
> of the form "automatic vacuum to prevent wraparound", so non-aggressive
> antiwraparound autovacuum, all on shared relations.
Thanks Julien for sharing
On Fri, Mar 27, 2020 at 05:10:03PM -0500, Justin King wrote:
> Sounds great. I will email you directly with a link!
Thanks. From the logs, the infinite loop on which autovacuum jobs are
stuck is clear. We have a repetitive number of anti-wraparound and
non-aggressive jobs happening for 7 shared
On Thu, Mar 26, 2020 at 09:46:47AM -0500, Justin King wrote:
> Nope, it was just these tables that were looping over and over while
> nothing else was getting autovac'd. I'm happy to share the full log
> if you'd like.
Thanks, that could help. If that's very large, it could be a problem
to send
On Wed, Mar 25, 2020 at 07:59:56PM -0700, Andres Freund wrote:
> FWIW, this kind of thing is why I think the added skipping logic is a
> bad idea. Silently skipping things like this (same with the "bogus"
> logic in datfrozenxid computation) is dangerous. I think we should
> seriously consider back
Hi,
On 2020-03-26 10:43:36 +0900, Michael Paquier wrote:
> On Wed, Mar 25, 2020 at 10:39:17AM -0500, Justin King wrote:
> > Mar 25 14:48:26 cowtn postgres[39875]: [35298-1] 2020-03-25
> > 14:48:26.329 GMT [39875] DEBUG: skipping redundant vacuum to prevent
> > wraparound of table "postgres.pg_cat
On Wed, Mar 25, 2020 at 10:39:17AM -0500, Justin King wrote:
> This started happening again. DEBUG1 is enabled:
Thanks for enabling DEBUG1 logs while this happened.
> Mar 25 14:48:26 cowtn postgres[39875]: [35298-1] 2020-03-25
> 14:48:26.329 GMT [39875] DEBUG: skipping redundant vacuum to preve
All-
This started happening again. DEBUG1 is enabled:
Mar 25 14:48:03 cowtn postgres[39720]: [35294-1] 2020-03-25
14:48:03.972 GMT [39720] DEBUG: autovacuum: processing database
"template0"
Mar 25 14:48:06 cowtn postgres[39735]: [35294-1] 2020-03-25
14:48:06.545 GMT [39735] DEBUG: autovacuum:
Hi,
On 2020-03-24 15:12:38 +0900, Michael Paquier wrote:
> > Well, there's no logging of autovacuum launchers that don't do anything
> > due to the "skipping redundant" logic, with normal log level. If somehow
> > the horizon logic of autovacuum workers gets out of whack with what
> > vacuumlazy.c
Hi,
On 2020-03-24 14:26:06 +0900, Michael Paquier wrote:
> > Could you share what the config of the server was?
>
> Nothing really fancy:
> - autovacuum_vacuum_cost_delay to 2ms (default of v12, but we used it
> in v11 as well).
> - autovacuum_naptime = 15s
> - autovacuum_max_workers = 6
> - log_
Hi,
On 2020-03-23 16:31:21 -0500, Justin King wrote:
> This is occurring in our environment right now (started about 30 min
> ago). Here 's the latest logs (grepped by vacuum):
>
> Mar 23 20:54:16 cowtn postgres[15569]: [12-1] 2020-03-23 20:54:16.542
> GMT [15569] LOG: automatic vacuum of table
Hi,
On 2020-03-23 20:47:25 +0100, Julien Rouhaud wrote:
> > - relfrozenxid, age(relfrozenxid) for the oldest table in the oldest
> > database
> > SELECT oid::regclass, age(relfrozenxid), relfrozenxid FROM pg_class WHERE
> > relfrozenxid <> 0 ORDER BY age(relfrozenxid) DESC LIMIT 1;
>
> The vm
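Alongside the per-table query quoted above, the per-database horizon can be checked with a standard catalog query (not quoted from the thread):

```
SELECT datname, datfrozenxid, age(datfrozenxid)
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```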
> If it's actually stuck on a single table, and that table is not large,
> it would be useful to get a backtrace with gdb.
FTR, we're facing a very similar issue at work (adding Michael and Kevin in Cc)
during performance tests since a recent upgrade to pg12.
What seems to be happening is that after reaching 200M transaction a first pass
of autovacuum freeze is being run, bumping pg_database.datfrozenxid by ~ 800k
(age(datfrozenxid) still being more than autovacuum_freeze_max_age afterwards).
After
>
> We haven't isolated *which* table it is blocked on (assuming it is),
> but all autovac's cease running until we manually intervene.
>
> When we get into this state again, is there some other information
> (other than what is in pg_stat_statement or pg_stat_activity) that
> would be useful for f
Hi,
On 2020-03-20 12:42:31 -0500, Justin King wrote:
> When we get into this state again, is there some other information
> (other than what is in pg_stat_statement or pg_stat_activity) that
> would be useful for folks here to help understand what is going on?
If it's actually stuck on a single table, and that table is not large,
it would be useful to get a backtrace with gdb.
Hi,
On 2020-03-19 10:23:48 -0500, Justin King wrote:
> > From a single stats snapshot we can't actually understand the actual xid
> > consumption - is it actually the xid usage that triggers the vacuums?
>
> We have looked at this and the xid consumption averages around 1250
> xid/sec -- this is
> > if reducing it to 50% or even 20% would allow many more HOT updates that
> > would reduce bloat.
> I don't believe we have a default fillfactor, but I'm still trying to
> understand why autovacs would completely stop -- that seems like a
> bug. Especially since there was no change between PG10 and PG12 and
> this problem never existed there.
Is there
> Also, is there any period of lower activity on your system that you could
> schedule a vacuum f
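For reference, the fillfactor suggestion quoted above is applied per table (the table name here is hypothetical); it only affects newly written pages until the table is rewritten:

```
ALTER TABLE my_hot_table SET (fillfactor = 50);
-- new pages are filled to 50%, leaving room for HOT updates on the same page
```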
On Wed, Mar 18, 2020 at 10:13 AM Adrian Klaver
wrote:
>
> On 3/18/20 6:57 AM, Justin King wrote:
> Please reply to list also
> Ccing list
>
>
> >>> Here are the settings, these are the only ones that are not set to
> >>> default with the exception of a few tables that have been overridden
> >>> wi
Hi Andres-
Thanks for the reply, answers below.
On Tue, Mar 17, 2020 at 8:19 PM Andres Freund wrote:
>
> Hi,
>
> On 2020-03-17 17:18:57 -0500, Justin King wrote:
> > As you can see in this table, there are only ~80K rows, but billions
> > of updates. What we have observed is that the frozenxid
having frozen pages
would also mean all the autovacuums would be able to skip more pages and
therefore be faster.
>> autovacuum_vacuum_cost_delay = 20
This was changed to 2ms in PG12. You should reduce that most likely.
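In postgresql.conf terms (the v12 default mentioned above; v11 and earlier defaulted to 20ms):

```
autovacuum_vacuum_cost_delay = 2ms
```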
On 3/18/20 6:57 AM, Justin King wrote:
Please reply to list also
Ccing list
Here are the settings, these are the only ones that are not set to
default with the exception of a few tables that have been overridden
with a different value due to lots of updates and few rows:
And those values are?
when the 'postgres' and/or 'template1' databases hit the
freeze_max_age, there are cases where it kicks off an aggressive
autovac of those tables which seems to prevent autovacs from running
elsewhere. Oddly, this is not consistent, but that condition seems to
be required. We have observed this across multiple PG12 servers (dev,
test, staging, production) all with similar workloads.
$ grep -i vacuum /var/log/postgresql/postgres.log | cut -b 1-9 | uniq -c
     17 Mar 17 06
     34 M
Is there anything in postgres and template1 besides what
Hello Devrim.
Thank you for your help!
I have been able to install PostgreSQL 12.1 (rpm) on CentOS 8.0 (without
python2).
It is also possible to install on CentOS 7.6 (without python3).
Best Regards,
Keisuke Kuroda
Sun, 17 Nov 2019 at 10:40, Devrim Gündüz wrote:
Hi,
On Fri, 2019-09-27 at 09:38 -0400, Tom Lane wrote:
> Another idea might be to bundle them into the plpython package
> instead of contrib (and similarly for the plperl transforms).
This went into the last week's minor updates.
Regards,
--
Devrim Gündüz
Open Source Solution Architect, Red Ha
Hi,
On Fri, 2019-09-27 at 10:50 +0900, keisuke kuroda wrote:
> CentOS8 does not have python2 installed by default, But PostgreSQL is
> dependent on python2.
>
> Do we need to install python2 when we use PostgreSQL on CentOS8?
For the archives: I fixed this in 12.1 packages. Core package do not
> Users of these (now contrib) modules need to have
> postgresql12-plpython3 installed anyway, so it's unlikely you'd be
> breaking anyone's installation.
I agree.
To use these EXTENSION(hstore_plpython,jsonb_plpython, and ltree_plpython),
we need to install plpythonu anyway.
I don't think it woul
On 10/3/19 9:27 AM, Igor Neyman wrote:
Main page (https://www.postgresql.org/) announces new release, but
Downloads for Windows page
(https://www.postgresql.org/download/windows/) doesn’t list PG12.
Any clarification?
It is available:
https://www.enterprisedb.com/downloads/postgres
From: Igor Neyman [mailto:iney...@perceptron.com]
Sent: Thursday, October 03, 2019 12:27 PM
To: pgsql-general@lists.postgresql.org
Subject: PG12
Main page (https://www.postgresql.org/) announces new release, but Downloads
for Windows page (https://www.postgresql.org/download/windows/) doesn't list
PG12.
Any clarification?
Regards,
Igor Neyman
Re: Devrim Gündüz 2019-09-30
<21705bb57210f01b559ec2f5de8550df586324e2.ca...@gunduz.org>
> I think postgresql-contrib-py3 is really the best idea at this point,
> otherwise
> I cannot see a clean way to make this without breaking existing installations.
Users of these (now contrib) modules need to have
postgresql12-plpython3 installed anyway, so it's unlikely you'd be
breaking anyone's installation.
Hi,
On Fri, 2019-09-27 at 09:38 -0400, Tom Lane wrote:
> It doesn't surprise me so much that the contrib package does, though.
> Most likely, that includes the plpython transform modules
> (hstore_plpython, jsonb_plpython, etc), which are certainly going to
> depend on whichever libpython PG was
Thank you for the reply.
I understand that contrib package depend libpython.
> Another idea might be to bundle them into the plpython package
> instead of contrib (and similarly for the plperl transforms).
I think that this idea sounds good.
If I don't use plpython, it is happy for me
that don't
Re: Tom Lane 2019-09-27 <19495.1569591...@sss.pgh.pa.us>
> Another idea might be to bundle them into the plpython package
> instead of contrib (and similarly for the plperl transforms).
Fwiw, the Debian packages do that.
Christoph
keisuke kuroda writes:
> Even if I don't need to install plpythonu,
> RPM package includes "CONFIGURE = --with-python".
> Therefore I think that I need to install python2 when RPM install.
> Is my understanding correct?
The core server package shouldn't have any python dependency.
It doesn't surprise me so much that the contrib package does, though.
Thank you for your reply!
Even if I don't need to install plpythonu,
RPM package includes "CONFIGURE = --with-python".
Therefore I think that I need to install python2 when RPM install.
Is my understanding correct?
Best Regards.
Keisuke Kuroda
Fri, 27 Sep 2019 at 13:03, Adrian Klaver wrote:
On 9/26/19 6:50 PM, keisuke kuroda wrote:
Hi.
I tried to install PostgreSQL12 RC1 on CentOS8.
# dnf install postgresql12-server postgresql12-contrib
(dnf package list output truncated)