Re: Was my question inappropriate for postgres?
> > 3) "...compatible to Microsoft's M365." I, who am not a Windows user,
> > knew what the OP meant. And if you go here:
> >
> > https://en.wikipedia.org/wiki/Microsoft_365#As_%22Microsoft_365%22
> >
> > You will see:
> >
> > "On March 30, 2020, Microsoft announced that the consumer plans of
> > Office 365 would be rebranded as "Microsoft 365" (a brand also used by
> > Microsoft for an enterprise subscription bundle of Windows, Office 365,
> > and security services) on April 21, 2020, succeeding existing consumer
> > plans of Office 365."
> >
> > So M365 is actually more correct than O365.

I asked a colleague who manages M365 licenses at MS, and he agrees with your statement, Adrian, that M365 is more correct than O365. He also mentioned that the case is a bit more complicated and that in practice both O365 and M365 are fine to use.

pozdrawiam, best regards, mit freundlichen Grüßen,
Alicja Kucharczyk
*Warsaw PostgreSQL User Group* <https://www.meetup.com/pl-PL/Warsaw-PostgreSQL-Users-Group/>
Re:
Wed, 27 Jul 2022 at 08:08, hubert depesz lubaczewski wrote:
> On Tue, Jul 26, 2022 at 10:48:47AM -0700, Adrian Klaver wrote:
> > On 7/26/22 9:29 AM, Ron wrote:
> > > On 7/26/22 10:22, Adrian Klaver wrote:
> > > > On 7/26/22 08:15, Rama Krishnan wrote:
> > > > > Hi Adrian
> > > > >
> > > > > > What is size of table?
> > > > >
> > > > > I m having two Database example
> > > > >
> > > > > 01. Cricket 320G
> > > > > 02. badminton 250G
> > > >
> > > > So you are talking about an entire database not a single table, correct?
> > >
> > > In a private email, he said that this is what he's trying:
> > > Pg_dump -h endpoint -U postgres Fd - d cricket | aws cp - s3://dump/cricket.dump
> > >
> > > It failed for obvious reasons.
> >
> > From what I gather it did not fail, it just took a long time. Not sure
> > adding -j to the above will improve things, pretty sure the choke point is
> > still going to be aws cp.
>
> It's really hard to say what is happening, because the command, as shown,
> wouldn't even work.
>
> Starting from Pg_dump vs. pg_dump, space between `-` and `d`, "Fd" as
> argument, or even the idea that you *can* make -Fd dumps to stdout and
> pass it to aws cp.
>
> depesz

I believe it's worth looking at this project: https://github.com/dimitri/pgcopydb since it tries to solve exactly this problem.

pozdrawiam, best regards, mit freundlichen Grüßen,
Alicja Kucharczyk
*Warsaw PostgreSQL User Group* <https://www.meetup.com/pl-PL/Warsaw-PostgreSQL-Users-Group/>
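For reference, a minimal sketch of what a working single-stream version of that pipeline might look like: the custom format (-Fc) can be written to stdout, unlike -Fd, and the aws subcommand is `s3 cp`. The host, database, and bucket names are taken from the quoted (broken) command; this obviously needs a reachable server and configured AWS credentials, so treat it as a sketch, not a tested recipe.

```shell
# -Fd (directory format) cannot go to stdout; -Fc (custom format) can.
# Note "aws s3 cp -", not "aws cp -": reading from stdin needs the s3
# subcommand with "-" as the source.
pg_dump -h endpoint -U postgres -Fc -d cricket | aws s3 cp - s3://dump/cricket.dump
```

The resulting custom-format dump can later be restored with pg_restore, which also accepts -j for parallel restore even though the dump itself was single-stream.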
Re: Feature request(?): Proxy User
Thu, 28 Jul 2022 at 14:18, Mateusz Henicz wrote:
> Hi,
> Did you check "SET SESSION AUTHORIZATION" or "SET ROLE" commands?
> I think that is what you are looking for.
>
> Cheers,
> Mateusz
>
> Thu, 28 Jul 2022 at 11:29, Wiwwo Staff wrote:
>> Sorry to post solutions and links about alternative DBMSs, but I miss
>> this sort of "sudo" at database user level:
>> https://oracle-base.com/articles/misc/proxy-users-and-connect-through
>>
>> Is there any approach to achieve the same result in PostgreSQL?
>> Does anyone share the need for this functionality?
>>
>> Tnx!

Can you elaborate on what exact problem you are trying to solve here? Looking at the article you sent:

"Some DBA tasks, like creating private database links or setting up jobs using the DBMS_JOB package, require the administrator to log in as a specific user. This can present a problem if the administrator doesn't know the password."

As Mateusz mentioned, SET ROLE is probably the solution. If you are looking for something more sophisticated, you might want to look at what the INHERIT attribute means: https://www.postgresql.org/docs/current/role-membership.html or at functions with SECURITY DEFINER: https://www.postgresql.org/docs/current/sql-createfunction.html

"You have multiple developers working in a shared schema. Letting multiple people share the same credentials represents a security risk. Instead you create a separate proxy user for each individual, allowing them to connect to the schema owner with their own credentials. If a user leaves a project, you simply lock or drop their user, and they no longer have access to the shared schema."

This is simply not needed in Postgres: the architecture and the concepts of schemas and users are different, and there are no credentials attached to a schema, which is just a namespace in Postgres.
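As a minimal sketch of the SET ROLE approach (all role names here are hypothetical, not from the thread):

```sql
-- Hypothetical roles: app_owner holds the shared objects and cannot
-- log in; each developer has their own login and is a member of it.
CREATE ROLE app_owner NOLOGIN;
CREATE ROLE alice LOGIN PASSWORD 'change-me' IN ROLE app_owner;

-- Connected as alice:
SET ROLE app_owner;        -- "sudo" to the shared owner, no shared password
CREATE TABLE t (id int);   -- this table is owned by app_owner
RESET ROLE;                -- back to alice
```

If a developer leaves, you drop or disable only their own login role; nothing owned by app_owner is affected.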
Re: Regd. the Implementation of Wallet (in Oracle) config equivalent in postgreSQL whilst the database migration
Wed, 21 Dec 2022 at 18:33, Chetan Kosanam wrote:
> TCS Confidential
>
> Hi *Support*@*PostgreSQL* Team,
>
> *Greetings!!*
>
> We are operating within one of the *AWS business* units of *TCS*. Our
> team is working on the *database migration from Oracle (on Amazon EC2) to
> PostgreSQL*. The reason for this e-mail is to seek your support on the
> *implementation of Wallet configuration* (which is in Oracle) by its
> *equivalent* configuration on the *PostgreSQL* side.
>
> Thanks & Regards,
> Chetan Kosanam
> TCS, HYDERABAD
> Contact : 9502753544
> Mailto : chetan.kosa...@tcs.com

It seems you are looking for a consultant: https://www.postgresql.org/support/professional_support/
Use case for enabling log_duration other than benchmarking
Do you know any use case for enabling log_duration? Like 3rd-party tools, for instance. I find this parameter pretty much useless (as opposed to log_min_duration_statement) as it does not show the query text, so besides having just the timing logged it is of no use in troubleshooting, and it often causes huge overhead. Am I missing something?

pozdrawiam, best regards,
Alicja Kucharczyk
Re: Use case for enabling log_duration other than benchmarking
Tue, 10 Jan 2023 at 14:57, Ron wrote:
> On 1/10/23 07:14, Alicja Kucharczyk wrote:
> > Do you know any use case for enabling log_duration? Like 3rd-party tools,
> > for instance. I find this parameter pretty much useless (as opposed to
> > log_min_duration_statement) as it does not show the query text, so besides
> > having just the timing logged it is of no use in troubleshooting and often
> > causes huge overhead. Am I missing something?
>
> https://www.postgresql.org/docs/current/runtime-config-logging.html
>
> Note
>
> The difference between enabling log_duration and setting
> log_min_duration_statement
> <https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT>
> to zero is that exceeding log_min_duration_statement forces the text of
> the query to be logged, but this option doesn't. Thus, if log_duration is
> on and log_min_duration_statement has a positive value, all durations are
> logged but the query text is included only for statements exceeding the
> threshold. *This behavior can be useful for gathering statistics in
> high-load installations.*

Thank you, Ron. My question is a bit more practical: does anyone really find it useful? What value is there in knowing that 20% of my queries are under 1 ms and 10% are over 1 minute? If it is checked once and then turned off, I can understand wanting more visibility into the overall characteristics. But let's say someone has it enabled on a production system all the time: what could be the reason for that?
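In postgresql.conf, the combination described in the quoted note would look like this (the 250 ms threshold is an arbitrary example value, not a recommendation):

```
# All statement durations are logged; the query text appears only for
# statements that exceed the threshold.
log_duration = on
log_min_duration_statement = 250ms
```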
create database hangs forever on WSL - autovacuum deadlock?
Hi All,
this is a problem from the Slack channel; it looks like a deadlock. CREATE DATABASE hangs indefinitely, showing a lock in pg_stat_activity and being mutually blocked by autovacuum workers. Executing pg_reload_conf() a couple of times released the lock. Why does SIGHUP help in this situation? Is it a problem with Windows memory management?

This is Postgres running on Windows WSL: PostgreSQL 14.1 (Ubuntu 14.1-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit

2022-01-08 07:24:06.892 CET [2347] LOG: process 2347 still waiting for RowExclusiveLock on object 1 of class 1262 of database 0 after 1000.918 ms
2022-01-08 07:24:06.892 CET [2347] DETAIL: Processes holding the lock: 2308, 2318. Wait queue: 2117, 2347.

pid             | 2117
wait_event_type | Lock
wait_event      | object
state           | active
backend_xid     |
backend_xmin    | 749
query_id        |
query           | create database mytest1;
backend_type    | client backend

postgres 2347 0.0 0.0 221280 2388 ? Ss 07:24 0:00 postgres: 14/main: autovacuum worker waiting
postgres 2308 0.0 0.0 221284 2420 ? Ss 07:15 0:00 postgres: 14/main: autovacuum worker waiting
postgres 2318 0.0 0.0 221284 2420 ? Ss 07:16 0:00 postgres: 14/main: autovacuum worker waiting

2022-01-08 07:36:44.359 CET [2409] postgres@postgres LOG: duration: 0.183 ms statement: SELECT pg_reload_conf();
2022-01-08 07:36:44.359 CET [1561] LOG: received SIGHUP, reloading configuration files
2022-01-08 07:36:44.361 CET [2117] postgres@postgres LOG: process 2117 acquired ShareLock on object 1 of class 1262 of database 0 after 759697.185 ms
2022-01-08 07:36:44.361 CET [2117] postgres@postgres STATEMENT: create database mytest1;
2022-01-08 07:36:44.361 CET [2347] LOG: process 2347 still waiting for RowExclusiveLock on object 1 of class 1262 of database 0 after 758470.143 ms
2022-01-08 07:36:44.361 CET [2347] DETAIL: Process holding the lock: 2117. Wait queue: 2347.
2022-01-08 07:36:46.191 CET [2409] postgres@postgres LOG: duration: 0.146 ms statement: SELECT pg_reload_conf();
2022-01-08 07:36:46.191 CET [1561] LOG: received SIGHUP, reloading configuration files
2022-01-08 07:36:46.193 CET [2347] LOG: process 2347 still waiting for RowExclusiveLock on object 1 of class 1262 of database 0 after 760301.845 ms
2022-01-08 07:36:46.193 CET [2347] DETAIL: Process holding the lock: 2117. Wait queue: 2347.
2022-01-08 07:36:47.532 CET [2409] postgres@postgres LOG: duration: 0.113 ms statement: SELECT pg_reload_conf();
2022-01-08 07:36:47.533 CET [1561] LOG: received SIGHUP, reloading configuration files
2022-01-08 07:36:47.534 CET [2347] LOG: process 2347 still waiting for RowExclusiveLock on object 1 of class 1262 of database 0 after 761643.434 ms
2022-01-08 07:36:47.534 CET [2347] DETAIL: Process holding the lock: 2117. Wait queue: 2347.
2022-01-08 07:36:52.496 CET [2409] postgres@postgres LOG: duration: 0.141 ms statement: SELECT pg_reload_conf();
2022-01-08 07:36:52.496 CET [1561] LOG: received SIGHUP, reloading configuration files
2022-01-08 07:36:52.499 CET [2347] LOG: process 2347 still waiting for RowExclusiveLock on object 1 of class 1262 of database 0 after 766607.664 ms
2022-01-08 07:36:52.499 CET [2347] DETAIL: Process holding the lock: 2117. Wait queue: 2347.
2022-01-08 07:36:52.542 CET [2117] postgres@postgres LOG: duration: 767877.903 ms statement: create database mytest1;

pozdrawiam, best regards, mit freundlichen Grüßen,
Alicja Kucharczyk
Re: create database hangs forever on WSL - autovacuum deadlock?
Sat, 8 Jan 2022 at 22:40, Thomas Munro wrote:
> On Sat, Jan 8, 2022 at 8:11 PM Alicja Kucharczyk wrote:
> > this is postgres running on windows wsl: PostgreSQL 14.1 (Ubuntu
> > 14.1-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
> > 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
>
> Is this WSL1 (some kind of Linux system call emulator running on an NT
> kernel) or WSL2 (some kind of virtual machine running a Linux kernel)?

Looks like WSL1:

  NAME    STATE    VERSION
* Ubuntu  Running  1
Re: huge_pages=on cause could not map anonymous shared memory: Cannot allocate memory
Hi Bogdan,

The root cause here is that the number of huge pages you've configured (vm.nr_hugepages = 980) is not sufficient. Each huge page on your system is 2 MB in size, so 980 pages give you roughly 1.96 GB of memory (980 × 2 MB). However, PostgreSQL is clearly requesting about 2.2 GB of shared memory (specifically, 2204106752 bytes, as shown in the error message you provided), which exceeds what's available through huge pages. That's why PostgreSQL fails to start when huge_pages = on: it requires the entire shared memory segment to come from huge pages and refuses to fall back to regular ones.

Earlier, you had the huge_pages setting commented out, which means PostgreSQL used the default value: huge_pages = try. In that mode, it first attempts to use huge pages, but if that fails (as in your case, due to insufficient allocation), it falls back to standard memory pages, which is why the instance started without issues then.

To fix the issue, you should increase vm.nr_hugepages to at least 1100 to fully cover the shared memory request (you can go a bit higher to be safe and then reduce it as described in the article I'm pasting the link to).

Also, a side note: max_connections = 1000 is quite high for an instance with 8 GB of RAM and only 2 vCPUs. Even if huge pages are properly allocated, such a high number of connections can lead to performance issues. You might want to consider lowering it or using a connection pooler like PgBouncer.

If you'd like to understand how huge pages work in PostgreSQL, including how to calculate memory needs and configure the OS properly, I wrote a detailed article some time ago (still valid). It's in Polish, which I assume is fine for you: https://linuxpolska.com/pl/baza-wiedzy/blog/postgres-pamieci-ram-tipstricks/

best regards,
Alicja Kucharczyk
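The sizing above can be sketched as a quick shell calculation; the byte count is the one from the error message, and 2 MiB is the huge page size assumed throughout this thread.

```shell
# Round the requested shared memory (2204106752 bytes, from the error
# message) up to whole 2 MiB huge pages to get the minimum vm.nr_hugepages.
REQUESTED_BYTES=2204106752
HUGEPAGE_BYTES=$((2 * 1024 * 1024))
PAGES=$(( (REQUESTED_BYTES + HUGEPAGE_BYTES - 1) / HUGEPAGE_BYTES ))
echo "$PAGES"   # prints 1051; vm.nr_hugepages=1100 leaves some headroom
```

The value can then be applied with `sysctl -w vm.nr_hugepages=1100` and persisted in /etc/sysctl.conf.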
Re: Azure Database for PostgreSQL flexible server: password authentication failed
Hi Alex,

Fri, 28 Feb 2025 at 16:05, Alexander Farber wrote:
> Now the connection from a container in AKS works for me:
>
> $ psql "postgresql://postgresql:password12345!!@172.21.32.4:5432/postgres?sslmode=require"
> psql (17.4, server 16.6)
> SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off, ALPN: none)
> Type "help" for help.
>
> postgres=>

Good it finally worked ;)

> I must find now how to add a non-super user to it next.

You add users there as in any other Postgres instance: just use CREATE ROLE/USER from the SQL prompt, or use the createuser utility.
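For example, a minimal sketch (role and database names here are hypothetical, not from the thread):

```sql
-- Hypothetical names: a non-superuser login that owns its own database,
-- exactly as on any self-managed Postgres instance.
CREATE ROLE app_user LOGIN PASSWORD 'change-me';
CREATE DATABASE app_db OWNER app_user;
```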
Re: Azure Database for PostgreSQL flexible server vs other Azure offerings
Hi Alex,

As Christoph already said, go with Flexible Server; Single Server was deprecated some time ago. You don't need to worry about availability zones, AI, or replication if you don't want to use them. Just create a single instance in the region of your choice (an availability zone will be assigned automatically) and try it out; there is a free tier for a year: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-deploy-on-azure-free-account Maybe you can add it to your PoC.

There are no 3 options: "Azure Database for PostgreSQL" is just an umbrella name, not a product; the product is "Azure Database for PostgreSQL - Flexible Server". Basically, it's just one for now.

regards,
Alicja