"David G. Johnston" writes:
> On Wednesday, August 6, 2025, Rumpi Gravenstein wrote:
>> Here's a reproducible test case that causes the problem in different
>> schemas. The issue appears to be related to creating a table and a
>> function that has the same name as the table with a prepended unde
That having a table and function with similar names causes this problem is
a bug.
Going forward I'll be sure to avoid the problem.
On Wed, Aug 6, 2025 at 5:35 PM David G. Johnston
wrote:
> On Wednesday, August 6, 2025, Rumpi Gravenstein
> wrote:
>
>> Here's a reproducible test case that causes
On Wednesday, August 6, 2025, Rumpi Gravenstein wrote:
> Here's a reproducible test case that causes the problem in different
> schemas. The issue appears to be related to creating a table and a
> function that has the same name as the table with a prepended underscore.
>
Don’t do that. Naming
Here's a reproducible test case that causes the problem in different
schemas. The issue appears to be related to creating a table and a
function that has the same name as the table with a prepended underscore.
rumpi_test -- table name
_rumpi_test -- function name
Here's the test case:
SELECT v
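A minimal sketch of why these two names collide (assuming a fresh database; the rumpi_test name comes from the test case above): creating a table also creates a composite type of the same name, plus an array type named with a prepended underscore, so a function called _rumpi_test shares its name with the array type for rumpi_test.
regression=# CREATE TABLE rumpi_test (v text);
CREATE TABLE
regression=# SELECT typname FROM pg_type WHERE typname = '_rumpi_test';
   typname
-------------
 _rumpi_test
(1 row)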
On 8/6/25 1:17 PM, Rumpi Gravenstein wrote:
Just coming back to this. Don't know how to interpret this:
xxx_pub_dev_2_db=# select proname, pronamespace, oid from pg_proc where
proname like '%sa_setup%';
    proname     | pronamespace |   oid
----------------+--------------+---------
_sa_s
Just coming back to this. Don't know how to interpret this:
xxx_pub_dev_2_db=# select proname, pronamespace, oid from pg_proc where
proname like '%sa_setup%';
    proname     | pronamespace |   oid
----------------+--------------+---------
_sa_setup_role | 7038406 | 7869125
(1 row)
xxx_pub_
Puzzling.
I'd do:
pg_dump --schema-only xxx_pub_dev_2_db | grep -i -A5 _sa_setup_role
Note the -i. That _might_ be important.
On Wed, Aug 6, 2025 at 4:18 PM Rumpi Gravenstein wrote:
> Just coming back to this. Don't know how to interpret this:
>
> xxx_pub_dev_2_db=# select proname, pronamesp
Also, Neon [0] and Aurora [1] pricing is so high that it seems to make most
use-cases impractical (well, if you want a managed offering...). Neon's top
public tier does not even match what a single modern dedicated server (or virtual
machine) can provide. I would have thought decoupling compute and stor
Ah, by "shared storage" I mean that each node can acquire exclusivity, not that
they can both R/W to it at the same time.
> Some pretty well-known cases of storage / compute separation (Aurora, Neon)
> also share the storage between instances,
That model is cool, but I think it's more of a solu
Sorry, I was referring to this:
> But when PostgreSQL instances share storage rather than replicate:
> - Consistency seems maintained (same data)
> - Availability seems maintained (client can always promote an accessible
node)
> - Partitions between PostgreSQL nodes don't prevent the system from
What you describe doesn’t look like something very useful for the vast majority
of projects that need a database. Why would you even want that if you can
avoid it?
If your “single node” can handle tens or hundreds of thousands of requests per
second, and still have very durable and highly available s
Shared storage would require a lot of extra work. That's essentially what
AWS Aurora does.
You would have to have functionality to sync in-memory state between nodes,
because all the instances will have cached data that can easily become
stale on any write operation.
That alone is not that simple
And finally, some read only benchmarks with the same postgres build.
9P:
postgres@zerofs:/mnt_9p$ pgbench -vvv -c 100 -j 40 -t 1 bench -S
pgbench (16.9 (Ubuntu 16.10-1))
starting vacuum...end.
starting vacuum pgbench_accounts...end.
transaction type:
scaling factor: 50
query mode: simple
num
I built postgres (same version, 16.9) but with --with-blocksize=32 (I'd really love
it if this were an initdb-time flag!) and did some more testing:
synchronous_commit = off
postgres@zerofs:~$ pgbench -vvv -c 100 -j 40 -t 1 bench
pgbench (16.9 (Ubuntu 16.10-1))
starting vacuum...end.
starting va
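For reference, a sketch of such a build (assumption: the autoconf flag is spelled --with-blocksize and takes the table block size in kB):
$ ./configure --with-blocksize=32
$ make && make install
# every cluster initdb'ed from this build then uses 32 kB blocks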
Laurenz Albe writes:
> On Fri, 2025-07-25 at 14:10 -0400, Tom Lane wrote:
>> So ... any chance you have a data type named _sa_setup_role?
> ... it could also be a type "sa_setup_role", and "_sa_setup_role"
> is interpreted as the corresponding array type:
Oh, of course --- that's a good deal mor
On Fri, 2025-07-25 at 14:10 -0400, Tom Lane wrote:
> After looking at the code I remembered that the parser might be taking
> this as a type coercion request. With that idea, I can duplicate the
> observed behavior like so:
>
> regression=# select _sa_setup_role('af_repo_app');
> INFO: af_repo_a
I wrote:
> Well, there's got to be *something* different about that database.
After looking at the code I remembered that the parser might be taking
this as a type coercion request. With that idea, I can duplicate the
observed behavior like so:
regression=# select _sa_setup_role('af_repo_app');
Rumpi Gravenstein writes:
> Our databases are deployed with automation tools. They should all be
> created the same. They all have the same 17 extensions. I've asked a DBA
> to confirm.
Well, there's got to be *something* different about that database.
> This issue only appears in the functio
Now I'm wondering about stray entries in pg_cast. Also,
do you have any extensions loaded in that DB that aren't
in your other ones?
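A hedged way to look for stray pg_cast entries involving a given type (text here is only an example source/target):
regression=# SELECT castsource::regtype, casttarget::regtype,
regression-#        castfunc::regproc, castcontext
regression-# FROM pg_cast
regression-# WHERE castsource = 'text'::regtype
regression-#    OR casttarget = 'text'::regtype;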
Our databases are deployed with automation tools. They should all be
created the same. They all have the same 17 extensions. I've asked a DBA
to confirm.
This
Rumpi Gravenstein writes:
> No ... just one version:
D'oh, actually this would have complained if there was more
than one match, so that theory is wrong:
> _pub_dev_2_db=# DROP FUNCTION if exists _sa_setup_role;
> DROP FUNCTION
Now I'm wondering about stray entries in pg_cast. Also,
do you
There is really only one function with this name. A rerun of my test
script with the suggested change:
_pub_dev_2_db=# SELECT version();
version
---
On 7/25/25 09:59, Rumpi Gravenstein wrote:
No ... just one version:
_pub_dev_2_db=# \df _sa_setup_role
Do:
\df *._sa_setup_role
--
Rumpi Gravenstein
--
Adrian Klaver
adrian.kla...@aklaver.com
No ... just one version:
_pub_dev_2_db=# SHOW server_version;
 server_version
----------------
 16.9
(1 row)
_pub_dev_2_db=# DROP FUNCTION if exists _sa_setup_role;
DROP FUNCTION
_pub_dev_2_db=# CREATE OR REPLACE FUNCTION _sa_setup_role(
p_role_to_be_granted varchar)
_pub_dev_2_d
Rumpi Gravenstein writes:
> I've been confounded by the following behavior that I see in one of our
> PostgreSQL 16 instances. In this case I am running this script from psql.
I'd bet there is another function named _sa_setup_role() that takes
some kind of array, and the parser is resolving the am
On 7/25/25 09:36, Rumpi Gravenstein wrote:
PostgreSQL Experts,
I've been confounded by the following behavior that I see in one of our
PostgreSQL 16 instances. In this case I am running this script from psql.
-
Hi,
I went ahead and did that test.
Here is the postgresql config I used for reference (note the wal options
(recycle, init_zero) as well as full_page_writes = off, because ZeroFS cannot
have torn writes by design).
https://gist.github.com/Barre/8d68f0d00446389998a31f4e60f3276d
Test was runni
> This then begs the obvious question of how fast is this with
> synchronous_commit = on?
Probably not awful, especially with commit_delay.
I'll try that and report back.
Best,
Pierre
On Fri, Jul 25, 2025, at 00:03, Jeff Ross wrote:
> On 7/24/25 13:50, Pierre Barre wrote:
>
>> It’s not “safe”
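A minimal sketch of that commit_delay idea (the values are illustrative, not tuned recommendations):
ALTER SYSTEM SET synchronous_commit = on;
ALTER SYSTEM SET commit_delay = 1000;   -- microseconds to wait so concurrent commits share one WAL flush
ALTER SYSTEM SET commit_siblings = 5;   -- only delay when at least 5 other transactions are open
SELECT pg_reload_conf();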
Hi Marco,
Thanks for the kind words!
> and potentially elaborate on other projects for an active/active cluster! I
> applaud you.
I wrote an argument there:
https://github.com/Barre/ZeroFS?tab=readme-ov-file#cap-theorem
I definitely want to write a proof of concept when I get some time.
Best
My humble take on this project: well done! You are opening the doors to
work on a much-needed endeavor, decoupling compute from storage, and
potentially elaborating on other projects for an active/active cluster! I
applaud you.
On Thu, Jul 17, 2025, 4:59 PM Pierre Barre wrote:
> Hi everyone,
>
> I w
On 7/24/25 13:50, Pierre Barre wrote:
It’s not “safe” or “unsafe”; there are mountains of valid workloads which don’t
require synchronous_commit. synchronous_commit doesn’t make your system
automatically safe either, and if that’s a requirement, there are many
workarounds, as you suggested, it cert
It’s not “safe” or “unsafe”; there are mountains of valid workloads which don’t
require synchronous_commit. synchronous_commit doesn’t make your system
automatically safe either, and if that’s a requirement, there are many
workarounds; as you suggested, it certainly doesn’t make the setup useless.
Be
On Fri, Jul 18, 2025 at 12:57:39PM +0200, Pierre Barre wrote:
> - Postgres configured accordingly memory-wise as well as with
> synchronous_commit = off, wal_init_zero = off and wal_recycle = off.
Bingo. That's why it's fast (synchronous_commit = off). It's also why
it's not safe _unless_ you
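One such workaround, sketched with a hypothetical table name: leave the cluster default at synchronous_commit = off and opt into durability only where it matters.
CREATE TABLE critical_writes (id int, payload text);  -- hypothetical example table
BEGIN;
SET LOCAL synchronous_commit = on;  -- this transaction alone waits for the WAL flush
INSERT INTO critical_writes VALUES (1, 'must survive a crash');
COMMIT;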
On Fri, Jul 18, 2025 at 06:40:58AM +0200, Laurenz Albe wrote:
> On Fri, 2025-07-18 at 00:57 +0200, Pierre Barre wrote:
> > Looking forward to your feedback and questions!
>
> I think the biggest hurdle you will have to overcome is to
> convince notoriously paranoid DBAs that this tall stack
> prov
> "NFS" is a key word that does not inspire confidence in
PostgreSQL circles...
Coming back to this, I just implemented 9P, which should translate to proper
fsync semantics.
mount -t 9p -o trans=tcp,port=5564,version=9p2000.L,msize=65536,access=user
127.0.0.1 /mnt/9p
Best,
Pierre
On Fri,
On Mon, 2025-07-21 at 10:47 +, Klaus Darilion wrote:
> (Note: I have also attached the whole email for better readability of the
> logs)
Your mail looks good enough the way it is:
https://postgr.es/m/DBAPR03MB6358854AD71C8ABA5CA10A8DF15DA%40DBAPR03MB6358.eurprd03.prod.outlook.com
> Our setup
> The interesting thing is, a few searches about the performance return mostly
> negative impressions about their object storage in comparison to the original
> S3.
I think they had a rough start, but it's quite good now from what I've
experienced. It's also dirt-cheap, and they don't bill for
Thanks, I learned something else: I didn't know Hetzner offered S3
compatible storage.
The interesting thing is, a few searches about the performance return
mostly negative impressions about their object storage in comparison to the
original S3.
Finding out what kind of performance your benchmark
Now, I'm trying to understand how CAP theorem applies here. Traditional
PostgreSQL replication has clear CAP trade-offs - you choose between
consistency and availability during partitions.
But when PostgreSQL instances share storage rather than replicate:
- Consistency seems maintained (same dat
Hi Seref,
For the benchmarks, I used Hetzner's cloud service with the following setup:
- A Hetzner s3 bucket in the FSN1 region
- A virtual machine of type ccx63 (48 vCPU, 192 GB memory)
- 3 ZeroFS nbd devices (same s3 bucket)
- A ZFS striped pool with the 3 devices
- 200GB zfs L2ARC
- Postgres con
Sorry, this was meant to go to the whole group:
Very interesting! Great work. Can you clarify how exactly you're running
postgres in your tests? A specific AWS service? What's the test
infrastructure that sits above the file system?
On Thu, Jul 17, 2025 at 11:59 PM Pierre Barre wrote:
> Hi eve
Hi Laurenz,
> I think the biggest hurdle you will have to overcome is to
> convince notoriously paranoid DBAs that this tall stack
> provides reliable service, honors fsync() etc.
Indeed, but that doesn't have to be "sudden." I think we need to gain
confidence in the whole system gradually by st
On Fri, 2025-07-18 at 00:57 +0200, Pierre Barre wrote:
> Looking forward to your feedback and questions!
I think the biggest hurdle you will have to overcome is to
convince notoriously paranoid DBAs that this tall stack
provides reliable service, honors fsync() etc.
Performance is great, but it i
On Wed, 2025-07-02 at 20:12 +0530, Gaurav Aradhya wrote:
> Can you please let me know when Postgresql 17.x shall be supported for
> Windows Server 2025? Greatly appreciated your feedback.
If you want to know if it is working, the best answer is "as soon as
someone donates a Windows 2025 buildfarm
On 7/2/25 07:42, Gaurav Aradhya wrote:
Greetings,
Can you please let me know when Postgresql 17.x shall be supported for
Windows Server 2025? Greatly appreciated your feedback.
The Windows packaging is done by EDB, someone from there will need to
see this and respond or you could contact th
On Wednesday, July 2, 2025, Gaurav Aradhya wrote:
>
> Can you please let me know when Postgresql 17.x shall be supported for
> Windows Server 2025? Greatly appreciated your feedback.
>
Impossible to guess when someone may choose to set up a build farm member
running that OS.
David J.
Hi,
Thanks for the good point:
$ sysctl vm.overcommit_memory
vm.overcommit_memory = 0
That is a difference; the old pg11 running on Ubuntu 18.04 had
disabled overcommit (vm.overcommit_memory = 2).
Anyway, on a dedicated DB server box with 123GB RAM running only vacuum (14
parallel processes (2GB
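For reference, a sketch of re-enabling strict overcommit on a dedicated DB host (the ratio is illustrative; persist the settings in /etc/sysctl.conf to survive reboots):
$ sysctl -w vm.overcommit_memory=2
$ sysctl -w vm.overcommit_ratio=90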
On 6/21/25 23:09, Aleš Zelený wrote:
> Hello,
> ...
>
> The application benefits from parallel queries, so despite the first
> temptation to disable parallel queries (based on log-entry correlation
> only, but is that the root cause?) I did not want to disable parallel
> queries, if there is anot
Hello,
Belatedly, I came back to this issue in the release notes. Here's a
query for correctly reporting the problem and not reporting the cases
where there isn't a problem:
SELECT conrelid::pg_catalog.regclass AS "constrained table",
conname AS constraint,
confrelid::pg_catalog.re
Alvaro Herrera writes:
> I'm going to fix the query in the release notes for all past branches
> now, to avoid confusing people upgrading in the future ... hopefully not
> many, but I don't think it's going to be zero people.
OK, thanks.
regards, tom lane
On 12/5/25 15:50, Achilleas Mantzios wrote:
Dear All, Dear Tom
On 5/11/25 16:20, Tom Lane wrote:
Achilleas Mantzios writes:
We use our own version of DBmirror; we run our replication in a highly
fine-grained manner. So every upgrade I have to make the code compile
and test. Up to PostgreSQL
Dear All, Dear Tom
On 5/11/25 16:20, Tom Lane wrote:
Achilleas Mantzios writes:
We use our own version of DBmirror; we run our replication in a highly
fine-grained manner. So every upgrade I have to make the code compile
and test. Up to PostgreSQL 17, I only got minor compilation problems
that
Achilleas Mantzios writes:
> We use our own version of DBmirror; we run our replication in a highly
> fine-grained manner. So every upgrade I have to make the code compile
> and test. Up to PostgreSQL 17, I only got minor compilation problems
> that I managed to resolve fairly easily. However t
On 2025-01-15 11:42:30 +, Sri Mrudula Attili wrote:
> The max_connections =200 and max_files_per_process =1000 as you mentioned.
>
> So would a max_files_per_process = 500 help?
>
>
> I could see that the number of connections to the database is not exceeding
> 20. But still these 20 are cau
-- Forwarded message -
From: Sri Mrudula Attili
Date: Wed, 15 Jan, 2025, 17:12
Subject: Re: Postgresql database terminates abruptly with too many open
files error
To: Tom Lane
Cc:
Hello Tom,
The max_connections =200 and max_files_per_process =1000 as you mentioned.
So
Hello Tom,
The max_connections =200 and max_files_per_process =1000 as you mentioned.
So would a max_files_per_process = 500 help?
I could see that the number of connections to the database is not
exceeding 20. But still these 20 are causing all the damage or using up
all the open files quot
Sri Mrudula Attili writes:
> We have a postgresql VDB (virtual database - Delphix) that keeps
> terminating due to "too many open files".
What do you have max_connections set to, and how many actually-live
server processes are there typically?
> The number of allowed open files at the OS level is 6
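A sketch of the suggested change (note that max_files_per_process can only be set at server start, so a restart is required):
ALTER SYSTEM SET max_files_per_process = 500;
-- then restart the server, e.g.: pg_ctl restart -D "$PGDATA"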
On Tue, Jan 14, 2025 at 7:58 AM Sri Mrudula Attili wrote:
> Hello Team,
>
> We have a postgresql VDB (virtual database - Delphix) that keeps
> terminating due to "too many open files".
>
> Below are few alerts that we could see from the postgresql.log
>
>
> < 2025-01-14 11:37:20.724 GMT >LOG: ou
Hello,
Have you checked something like lsof to see the open file descriptors?
Cheers,
frank
Am 14.01.25 um 13:58 schrieb Sri Mrudula Attili:
Hello Team,
We have a postgresql VDB (virtual database - Delphix) that keeps terminating due to
"too many open files".
Below are few alerts that w
Hi Alvaro,
> On 30 Nov 2024, at 08:41, Alvaro Herrera wrote:
>
> Only if you have self-referencing FKs in partitioned tables. It
> would be an interesting data point to verify whether this reports
> anything else. Also, I'd be really curious if your databases include
> the case I'm suspicious
Hello,
On 2024-Nov-29, Paul Foerster wrote:
> > On 29 Nov 2024, at 18:15, Alvaro Herrera wrote:
> > This reports case 2 as OK and case 1 as bogus, as should be. I tried
> > adding more partitions and this seems to hold correctly. I was afraid
> > though that this would fail if we create an FK
Hi Alvaro,
> On 29 Nov 2024, at 18:15, Alvaro Herrera wrote:
>
> This all was to say that the query in the release notes is undoubtedly
> wrong. After thinking some more about it, I think the fix is to add 1
> to the number of constraints:
>
> SELECT conrelid::pg_catalog.regclass AS "constrain
On 2024-Nov-27, Tom Lane wrote:
> I doubt that there's anything actually wrong with the catalog state at
> this point (perhaps Alvaro would confirm that). That leads to the
> conclusion that what's wrong is the release notes' query for fingering
> broken constraints, and it needs some additional
Hi Tom, hi Alvaro,
> On 27 Nov 2024, at 19:52, Tom Lane wrote:
>
> Okay, so I was able to reproduce this from scratch on HEAD:
great, thanks.
> I doubt that there's anything actually wrong with the catalog state at
> this point (perhaps Alvaro would confirm that). That leads to the
> conclusi
Paul Foerster writes:
> On 26 Nov 2024, at 22:25, Tom Lane wrote:
>> I'm suspicious that our repair recipe might not have accounted
>> for self-reference FKs fully, but that's just a gut feeling at
>> this point.
> Of course, it contains no secret data. Please find the full log below.
> Accordi
Hi Tom,
> On 26 Nov 2024, at 22:25, Tom Lane wrote:
>
> I would have expected an empty result too. Can you confirm that
> p_ci_pipelines used to be a partition of something? Can you show us
> the full DDL (or psql \d+ output) for the partitioned table it
> used to be part of, and for that matt
Paul Foerster writes:
>> On 26 Nov 2024, at 17:56, Adrian Klaver wrote:
>> When you run the query again do you get the same two statements?
> Yes. I can repeat the above 4 steps as much as I want. The result remains the
> same. I would have expected to have an empty result doing the final repea
Hi Adrian,
> On 26 Nov 2024, at 17:56, Adrian Klaver wrote:
>
> Did you commit the statements?
Yes. I have autocommit on, the psql default.
> Are you using concurrent sessions to do this?
No. I do this in one session. 1. select, 2. drop, 3. add, 4. select.
> When you run the query again do y
On 11/26/24 01:59, Paul Foerster wrote:
Hi,
I have a question regarding the recent security update for PostgreSQL 15.
We have a gitlab database. It used to run on the PostgreSQL 15.8 software. I
updated from 15.8 to 15.10 and executed the corrective actions as outlined in:
https://www.postgre
On Fri, Nov 22, 2024 at 9:26 AM Jethish Jethish
wrote:
> Sorry guys, what I meant was to log the returned-tuple count, not to get
> the values printed in the logs.
>
That too is a no. Though more because it just doesn't seem useful enough
to expend effort on designing and implementing something.
The
On 11/21/24 21:02, Jethish Jethish wrote:
Hi Everyone,
Is it possible to log the rows returned by a query in the PostgreSQL logs?
https://github.com/pgaudit/pgaudit/blob/main/README.md
"pgaudit.log_rows
Specifies that audit logging should include the number of rows retrieved
or affected by
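A hedged configuration sketch, assuming the pgaudit extension is installed and shared_preload_libraries = 'pgaudit' is already set (that part needs a restart):
ALTER SYSTEM SET pgaudit.log = 'read';   -- audit SELECT and COPY statements
ALTER SYSTEM SET pgaudit.log_rows = on;  -- include the retrieved/affected row count
SELECT pg_reload_conf();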
Sorry guys, what I meant was to log the returned-tuple count, not to get
the values printed in the logs.
On Fri, Nov 22, 2024, 9:46 PM David G. Johnston
wrote:
> On Fri, Nov 22, 2024 at 12:32 AM Jethish Jethish
> wrote:
>
>> If a select query is fired, I need the query's returned values to be
>>
On Fri, Nov 22, 2024 at 12:32 AM Jethish Jethish
wrote:
> If a select query is fired, I need the query's returned values to be
> logged in my PostgreSQL log file.
>
Then the answer is no, and unlikely to become a feature anyone would
develop or that we'd accept.
A more innovative solution is
On 11/21/24 23:32, Jethish Jethish wrote:
Hi David,
If a select query is fired, I need the query's returned values to be
logged in my PostgreSQL log file.
For example, if a select query returns 5 rows I need the same in the log
file
What if the query returns 500,000 rows?
What is the
Am Fri, Nov 22, 2024 at 01:02:27PM +0530 schrieb Jethish Jethish:
> If a select query is fired, I need the query's returned values to be
> logged in my PostgreSQL log file.
How big do you expect your log file to be after, say, a
day of activity?
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6
On Fri, Nov 22, 2024 at 01:02:27PM +0530, Jethish Jethish wrote:
> Hi David,
>
> If a select query is fired, I need the query's returned values to be
> logged in my PostgreSQL log file.
>
>
> For example, if a select query returns 5 rows I need the same in the log
> file
Please note that you
Isn't "I need the query returned values" the purpose of the client
application?
On Fri, Nov 22, 2024 at 2:32 AM Jethish Jethish
wrote:
> Hi David,
>
> If a select query is fired, I need the query's returned values to be
> logged in my PostgreSQL log file.
>
>
> For example, if a select query
Hi David,
If a select query is fired, I need the query's returned values to be
logged in my PostgreSQL log file.
For example, if a select query returns 5 rows I need the same in the log
file
On Fri, Nov 22, 2024, 10:49 AM David G. Johnston
wrote:
> On Thursday, November 21, 2024, Jethish J
On Thursday, November 21, 2024, Jethish Jethish
wrote:
>
> Is it possible to log the rows returned by a query in the PostgreSQL logs?
>
As a practical matter, no. That said, given one can get row contents into
the server log in various ways, what is the use case for why this
capability is though
On 11/19/24 08:11, Paul Foerster wrote:
Hi,
the PostgreSQL 15.9 release notes instruct one to look out especially for detached
partitions with foreign key constraints. I'm in the process of updating our
databases from 15.8 to 15.9 now and found a case where the select statement
returns a constrai
Hi Alvaro,
> On 19 Nov 2024, at 17:34, Alvaro Herrera wrote:
>
> It doesn't really matter when you do it, because the constraint only
> gets broken by running DETACH with the old server code. You have
> already run the DETACH sometime in the past (that's how the constraint
> got broken), which
Hi Adrian,
> On 19 Nov 2024, at 17:17, Adrian Klaver wrote:
>
> Read this:
>
> https://www.postgresql.org/about/news/out-of-cycle-release-scheduled-for-november-21-2024-2958/
>
> and hold off awhile.
Thanks very much. I will.
Cheers,
Paul
Hi Tom,
> On 19 Nov 2024, at 17:25, Tom Lane wrote:
>
> Generally speaking, our release notes are addressed to someone who's
> already installed the update (or a later one).
Thank you for the advice.
Cheers,
Paul
Hi Paul,
On 2024-Nov-19, Paul Foerster wrote:
> the PostgreSQL 15.9 release notes instruct one to look out especially for
> detached partitions with foreign key constraints. I'm in the process
> of updating our databases from 15.8 to 15.9 now and found a case where
> the select statement returns a co
Paul Foerster writes:
> the PostgreSQL 15.9 release notes instruct one to look out especially for
> detached partitions with foreign key constraints. I'm in the process of
> updating our databases from 15.8 to 15.9 now and found a case where the
> select statement returns a constraint.
> The relea
On 11/14/24 01:05, JOLAPARA Urvi (SAFRAN) wrote:
C2 - Confidential
Hello Klaver,
I have added below the log where setup is failing on a FR-language machine.
Log started 11/14/2024 at 09:04:33
Preferred installation mode : unattended
Trying to init installer in mode unattended
Mode unattended suc
On Tue, Nov 12, 2024 at 7:47 PM Ron Johnson wrote:
> The documentation tells you, down in the Notes section.
>
> https://www.postgresql.org/docs/16/sql-createpublication.html
>
Thank you.
On 11/11/24 22:09, JOLAPARA Urvi (SAFRAN) wrote:
C2 - Confidential
This is a publicly readable list, the above has no meaning in that context.
Hello Klaver,
We are using PSADT for creating a script and installing through SCCM.
1) I don't work with Windows so I have no idea what the above
The documentation tells you, down in the Notes section.
https://www.postgresql.org/docs/16/sql-createpublication.html
On Tue, Nov 12, 2024 at 5:46 AM Jayadevan M
wrote:
> Hello all,
>
> I am using PostgreSQL 16.4. I tried to set up logical replication with
> "postgres" user and all worked fine.
On 11/10/24 07:11, JOLAPARA Urvi (SAFRAN) wrote:
C2 - Confidential
Hello Team,
I am from Application Packaging team. we have created the package of
PostgreSQL 17.0-1 Application.
You are going to need to provide more detail on the package creation
process.
We have used the command line
On 10/21/24 04:06, Sunil Kumar wrote:
Classification: *Confidential*
If the rest of the list won't tell, neither will I :)
Hi Team,
Can you please provide list of the supported hypervisors to run the
PostgreSQL Database.
Wouldn't the OS be the important factor? From the database's standpoi
On Monday, October 21, 2024, Sunil Kumar wrote:
>
> Can you please provide list of the supported hypervisors to run the
> PostgreSQL Database.
>
>
If you can run one of the build farm operating systems on your hypervisor
you should be fine. But to the extent I do not believe any build farm
member
I think the correct answer will be "all of them" :D, maybe besides DOSBOX
etc., because there are no DOS packages.
Also, I do not think you should send "Confidential" emails to public
mailing lists :)
Cheers,
Mateusz
On Mon, 21 Oct 2024 at 13:33, Sunil Kumar wrote:
> Classification: *Confident
On 9/9/24 18:40, Tom Lane wrote:
Adrian Klaver writes:
On 9/9/24 03:24, Achilleas Mantzios - cloud wrote:
And the thing is that this creation via DDL is inside our design.
Certain users create some backup tables of the public data in their own
schema (via our app), then do some manipula
Adrian Klaver writes:
> On 9/9/24 03:24, Achilleas Mantzios - cloud wrote:
>> And the thing is that this creation via DDL is inside our design.
>> Certain users create some backup tables of the public data in their own
>> schema (via our app), then do some manipulations on the public data,
>> t
On 9/9/24 03:24, Achilleas Mantzios - cloud wrote:
On 9/8/24 23:46, Adrian Klaver wrote:
On 9/8/24 13:04, Achilleas Mantzios wrote:
Hi
for remote DDL execution (such as CREATE TABLE) is dblink my only
option?
You will need to define in what context you are considering options.
For instanc
On 9/8/24 23:46, Adrian Klaver wrote:
On 9/8/24 13:04, Achilleas Mantzios wrote:
Hi
for remote DDL execution (such as CREATE TABLE) is dblink my only
option?
You will need to define in what context you are considering options.
For instance you can do remote DDL operations by passing a com
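A minimal dblink sketch for remote DDL (connection parameters are placeholders; assumes the dblink extension is installed locally):
CREATE EXTENSION IF NOT EXISTS dblink;
SELECT dblink_exec(
    'host=remotehost dbname=mydb user=app password=secret',
    'CREATE TABLE backup_t (id int, payload text)'
);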