On 2023-May-23, Ron wrote:
> We'd never hardlink. Eliminates the ability to return to the old system if
> something goes wrong.
If you'd never hardlink, then you should run your test without the -k
option. Otherwise, the timings are meaningless.
--
Álvaro Herrera PostgreSQL Developer
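[Not part of the thread: to make concrete why --link forecloses falling back, here is a sketch with plain ln on throwaway files; the file names are illustrative, not actual PostgreSQL paths. A hard link is a second name for the same inode, so once the new cluster rewrites a relation file, the old cluster's "copy" is rewritten too.]

```shell
# A hard link is a second directory entry for the same inode; there is
# only one copy of the data, so a write through either name hits both.
demo=$(mktemp -d)
cd "$demo"
echo "block v1" > old_relfile
ln old_relfile new_relfile      # what pg_upgrade -k/--link does per file
echo "block v2" > new_relfile   # the "new cluster" rewrites its file
cat old_relfile                 # prints "block v2": the old copy is gone
```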
On 5/23/23 13:58, Christoph Moench-Tegeder wrote:
## Ron (ronljohnso...@gmail.com):
> We'd never hardlink. Eliminates the ability to return to the old
> system if something goes wrong.
That's why you get yourself a recent XFS and use clone mode (still
sticks you to the same filesystem, but gets you up and running much
faster).
Regards,
Christoph
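[Not part of the thread: for contrast with the hard-link case, a reflink ("clone") copy shares data blocks copy-on-write but is a separate inode, which is why pg_upgrade --clone (available since PostgreSQL 12) leaves the old cluster intact. A sketch with GNU cp on throwaway files; --reflink=auto clones on filesystems that support it (XFS formatted with reflink=1, Btrfs) and silently falls back to an ordinary copy elsewhere.]

```shell
# A reflink copy shares blocks until one side writes (copy-on-write),
# so it is near-instant like a hard link but leaves the source untouched.
demo=$(mktemp -d)
cd "$demo"
echo "block v1" > old_relfile
cp --reflink=auto old_relfile new_relfile   # CoW clone where supported
echo "block v2" > new_relfile               # new cluster rewrites its copy
cat old_relfile                             # still "block v1"
```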
On 5/23/23 12:19, Peter J. Holzer wrote:
> On 2023-05-22 21:10:48 -0500, Ron wrote:
> > On 5/22/23 18:42, Tom Lane wrote:
> > > It looks like the assumption was that issuing link()
> > > requests in parallel wouldn't help much but just swamp your disk
> > > if they're all on the same filesystem.
> > > Maybe that could use rethinking, not sure.
On 2023-05-22 21:10:48 -0500, Ron wrote:
> On 5/22/23 18:42, Tom Lane wrote:
> > It looks like the assumption was that issuing link()
> > requests in parallel wouldn't help much but just swamp your disk
> > if they're all on the same filesystem.
> > Maybe that could use rethinking, not sure.
>
> I [...]
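[Not part of the thread: the point about link() is that it is a metadata-only operation. No table data is read or written, which is why a 1400G cluster "copies" in minutes and why parallelizing the calls was assumed to buy little. A quick illustration on throwaway files, nothing PostgreSQL-specific:]

```shell
# Hard-linking a thousand files touches only directory metadata;
# it completes in a fraction of a second regardless of file sizes.
demo=$(mktemp -d)
cd "$demo"
for i in $(seq 1 1000); do touch "src_$i"; done   # stand-ins for relation files
for f in src_*; do ln "$f" "lnk_$f"; done         # the pg_upgrade -k step
ls lnk_src_* | wc -l                              # 1000 links created
```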
On 5/22/23 5:43 PM, Adrian Klaver wrote:
> From docs:
> https://www.postgresql.org/docs/current/pgupgrade.html
> The --jobs option allows multiple CPU cores to be used for
> copying/linking of files and to dump and restore database schemas in
> parallel; a good place to start is the maximum of the number of CPU
> cores and tablespaces.
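[Not part of the thread: putting the docs' advice together with link mode, an invocation for a 10-to-15 upgrade might look like the sketch below. The bindir/datadir paths are illustrative only, not taken from the thread.]

```shell
# Link mode plus parallel jobs; per the docs, --jobs parallelizes the
# copy/link of files and the per-database schema dump and restore.
pg_upgrade \
  -b /usr/pgsql-10/bin      -B /usr/pgsql-15/bin \
  -d /var/lib/pgsql/10/data -D /var/lib/pgsql/15/data \
  -k -j 40 --check          # drop --check to perform the real upgrade
```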
On 5/22/23 5:42 PM, Tom Lane wrote:
> Jeff Ross writes:
> > On 5/22/23 5:24 PM, Adrian Klaver wrote:
> > > So is the 1400G mostly in one database in the cluster?
> > Yes, one big database with about 80 schemas and several other smaller
> > databases so -j should help, right?
> AFAICT from a quick look at the code, you won't get any meaningful [...]
On 5/22/23 16:29, Jeff Ross wrote:
> On 5/22/23 5:24 PM, Adrian Klaver wrote:
> > On 5/22/23 16:20, Jeff Ross wrote:
> > > Hello!
From docs:
https://www.postgresql.org/docs/current/pgupgrade.html
The --jobs option allows multiple CPU cores to be used for
copying/linking of files and to dump and restore database schemas in
parallel; a good place to start is the maximum of the number of CPU
cores and tablespaces.
Jeff Ross writes:
> On 5/22/23 5:24 PM, Adrian Klaver wrote:
>> So is the 1400G mostly in one database in the cluster?
> Yes, one big database with about 80 schemas and several other smaller
> databases so -j should help, right?
AFAICT from a quick look at the code, you won't get any meaningful [...]
On 5/22/23 5:24 PM, Adrian Klaver wrote:
> On 5/22/23 16:20, Jeff Ross wrote:
> > Hello!
> > We are moving from 10 to 15 and are in testing now.
> > Our development database is about 1400G and takes 12 minutes to
> > complete a pg_upgrade with the -k (hard-links) version. This is on a
> > CentOS 7 server with 80 cores.
> So is the 1400G mostly in one database in the cluster?
On 5/22/23 16:20, Jeff Ross wrote:
> Hello!
> We are moving from 10 to 15 and are in testing now.
> Our development database is about 1400G and takes 12 minutes to complete
> a pg_upgrade with the -k (hard-links) version. This is on a CentOS 7
> server with 80 cores.
> Adding -j 40 to use half of those cores also finishes in 12 minutes [...]
Hello!
We are moving from 10 to 15 and are in testing now.
Our development database is about 1400G and takes 12 minutes to complete
a pg_upgrade with the -k (hard-links) version. This is on a CentOS 7
server with 80 cores.
Adding -j 40 to use half of those cores also finishes in 12 minutes [...]