On Sat, May 20, 2023 at 4:43 PM Marcos Pegoraro wrote:
> I have a table like pg_settings, so records have name and value.
>
Hi. Maybe I'm missing something, but why aren't you simply doing:
select name, varvalue from sys_var where name = any($1)
and binding your 4 (in your examples) or 10
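A minimal sketch of that suggestion, assuming the sys_var(name, varvalue) table from the quoted query (the prepared-statement name get_vars is only illustrative):

    PREPARE get_vars(text[]) AS
        SELECT name, varvalue
        FROM sys_var
        WHERE name = ANY($1);

    -- bind however many names are needed as one array parameter
    EXECUTE get_vars(ARRAY['var_a', 'var_b', 'var_c', 'var_d']);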
On 5/20/23 09:09, Pedro Gonçalves wrote:
Hi.
Good afternoon.
I’m having difficulties with the localhost DBeaver postgres training account.
I had access to it and changed the password, which I presently don’t remember.
What can I do to get access again?
From DBeaver I was told to address this request to P
On Fri, May 19, 2023 at 9:50 PM Laurenz Albe
wrote:
> Yes, that's what I would expect. There is only one "backend_xmin" in
> "pg_stat_replication", which corresponds to the snapshot held by the oldest
> query in any database on the standby server.
>
Thanks for the pointer to pg_stat_replication
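For reference, a hedged example of looking at that value on the primary; backend_xmin, state, and application_name are standard pg_stat_replication columns:

    SELECT application_name,
           state,
           backend_xmin
    FROM pg_stat_replication;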
Hi pgsql-general,
Will there be any option to attend the upcoming PG conference remotely?
If not, will the talks and papers be posted online following the conference?
Thank you,
Joe Hammerman
On 5/22/23 04:38, Rajmohan Masa wrote:
Hi Adrian,
I found one thing in my base directory.
Generally each object in the base directory has a unique OID, but
on my machine I found some object IDs in a sequence like
121193051, 121193051.1, 121193051.1200, and each file having the same
s
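As a hedged aside on those file names: in a PostgreSQL base directory, files named <oid>, <oid>.1, <oid>.2, ... are normally 1GB segments of a single large relation rather than separate objects. One way to map a relation to its base file (the table name below is only a placeholder):

    -- returns something like base/16384/121193051
    SELECT pg_relation_filepath('some_big_table');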
On 5/20/23 07:09, Pedro Gonçalves wrote:
Hi.
Good afternoon.
I’m having difficulties with the localhost DBeaver postgres training account.
I had access to it and changed the password, which I presently don’t remember.
What can I do to get access again?
From DBeaver I was told to address this request to P
Thanks all for the discussions. It sounds like there are several
questions to clear up before we can reach a conclusion on whether a
per-database KEK is possible.
First question - do we, as a community, see the value of the proposal and
do we believe that value is big enough for us to make any nec
> On May 22, 2023, at 11:02, Tony Xu wrote:
> there are still some shared area between clusters.
That's not quite right. A PostgreSQL cluster (in the traditional sense, which
means one PostgreSQL server handling a particular endpoint) is isolated from
any other clusters on the same machine.
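As a hedged illustration of where sharing does happen: the global catalogs are shared by all databases within one cluster, while separate clusters on the same machine share nothing at the SQL level:

    -- visible from any database in the same cluster
    SELECT datname FROM pg_database;
    -- cluster-wide roles (reading pg_authid requires superuser)
    SELECT rolname FROM pg_authid;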
Thanks Christophe for the clarification.
> That's not quite right. A PostgreSQL cluster (in the traditional sense,
> which means one PostgreSQL server handling a particular endpoint) is
> isolated from any other clusters on the same machine.
>
Thanks. I think I had a misunderstanding over the "clu
On 5/22/23 14:22, Tony Xu wrote:
Thanks Christophe for the clarification.
That's not quite right. A PostgreSQL cluster (in the traditional
sense, which means one PostgreSQL server handling a particular
endpoint) is isolated from any other clusters on the same machine.
Thanks. I thi
(please read http://deb.li/quote and don’t top-post)
On Mon, 22 May 2023, Tony Xu wrote:
>First question - do we, as a community, see the value of the proposal and
>do we believe that value is big enough for us to make any necessary changes
I’d rather like to see the energy, if there’s some inve
On 5/22/23 12:38, Ron wrote:
On 5/22/23 14:22, Tony Xu wrote:
RDS Postgresql would do the job just fine. And since you can't get to
the files (only access it via port 5432 and the aws cli/web), there's no
need for TDE.
As I understand TDE, whether you can get to the files is not really the
poin
> On May 22, 2023, at 13:06, Adrian Klaver wrote:
> As I understand TDE, whether you can get to the files is not really the point.
> It is that someone/something can, and if they do, the files are encrypted. Pretty
> sure RDS is not magical enough to have no access from any source to the file
> sys
On 5/22/23 15:06, Adrian Klaver wrote:
On 5/22/23 12:38, Ron wrote:
On 5/22/23 14:22, Tony Xu wrote:
RDS Postgresql would do the job just fine. And since you can't get to
the files (only access it via port 5432 and the aws cli/web), there's no
need for TDE.
As I understand TDE, whether you can
On Sat, May 20, 2023 at 9:43 AM Marcos Pegoraro wrote:
> I have a table like pg_settings, so records have name and value.
> This select is really fast, just 0.1 or 0.2 ms, but it runs millions of
> times a day, so ...
>
> Then all the time I have to select up to 10 of these records but the
> resu
Hello!
We are moving from 10 to 15 and are in testing now.
Our development database is about 1400G and takes 12 minutes to complete
a pg_upgrade with the -k (hard-links) version. This is on a CentOS 7
server with 80 cores.
Adding -j 40 to use half of those cores also finishes in 12 minutes
On 5/22/23 16:20, Jeff Ross wrote:
Hello!
We are moving from 10 to 15 and are in testing now.
Our development database is about 1400G and takes 12 minutes to complete
a pg_upgrade with the -k (hard-links) version. This is on a CentOS 7
server with 80 cores.
Adding -j 40 to use half of thos
On 5/22/23 5:24 PM, Adrian Klaver wrote:
On 5/22/23 16:20, Jeff Ross wrote:
Hello!
We are moving from 10 to 15 and are in testing now.
Our development database is about 1400G and takes 12 minutes to
complete a pg_upgrade with the -k (hard-links) version. This is on a
CentOS 7 server with 80
Jeff Ross writes:
> On 5/22/23 5:24 PM, Adrian Klaver wrote:
>> So is the 1400G mostly in one database in the cluster?
> Yes, one big database with about 80 schemas and several other smaller
> databases so -j should help, right?
AFAICT from a quick look at the code, you won't get any meaningful
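A hedged way to see how that 1400G is spread out, since pg_upgrade's --jobs parallelism works across databases and tablespaces rather than within a single database (see the docs excerpt quoted below):

    SELECT datname,
           pg_size_pretty(pg_database_size(datname)) AS size
    FROM pg_database
    ORDER BY pg_database_size(datname) DESC;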
On 5/22/23 16:29, Jeff Ross wrote:
On 5/22/23 5:24 PM, Adrian Klaver wrote:
On 5/22/23 16:20, Jeff Ross wrote:
Hello!
From docs:
https://www.postgresql.org/docs/current/pgupgrade.html
The --jobs option allows multiple CPU cores to be used for
copying/linking of files and to dump and res
> Hi pgsql-general,
>
> Will there be any option to attend the upcoming PG conference remotely?
>
> If not, will the talks and papers be posted online following the conference?
You can contact the conference organizer via email.
https://www.pgcon.org/2023/contact.php
Best regards,
--
Tatsuo I
On 5/22/23 18:42, Tom Lane wrote:
Jeff Ross writes:
On 5/22/23 5:24 PM, Adrian Klaver wrote:
So is the 1400G mostly in one database in the cluster?
Yes, one big database with about 80 schemas and several other smaller
databases so -j should help, right?
AFAICT from a quick look at the code,
Hi!
The price list of main products, vordlusajuhinnak, contains 3 prices per
product (column toode) and has 39433 products:
create table vordlusajuhinnak( toode varchar(60), n2 numeric(8,2),
n3 numeric(8,2), n4 numeric(8,2) );
The toode column is unique, may be a primary key in the table, and contains u
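Given the statement that toode is unique, a minimal sketch of promoting it to the primary key (assuming the table above is already loaded and free of duplicates):

    ALTER TABLE vordlusajuhinnak
        ADD CONSTRAINT vordlusajuhinnak_pkey PRIMARY KEY (toode);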