Is there still password max length restrictions in PG?
Hi All,

I just noticed in the definition of pg_shadow:

    postgres=# \d pg_shadow
    ...
     usebypassrls | boolean |           |          |
     passwd       | text    | C         |          |
    ...

that there appears to be no length restriction on a user's password. In the code history, commit 67a472d71c ("Remove arbitrary restrictions on password length.", 2020-09-03) seems to have removed the limit (earlier versions enforced 100 characters, and at one point a maximum of 1024).

So, just a minor question: can we assume there is now no maximum password length for a user? Should the documentation clarify this or offer some guidance to users?

BR,
Sean He (iih...@qq.com)
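For what it's worth, the behavior is easy to check on a scratch cluster. A minimal sketch (the role name longpass_test and the 10,000-character password are made up for illustration; reading pg_authid requires superuser):

    DO $$
    BEGIN
      -- create a login role with a 10,000-character password; on versions that
      -- include commit 67a472d71c this should succeed with no length complaint
      EXECUTE format('CREATE ROLE longpass_test LOGIN PASSWORD %L', repeat('x', 10000));
    END$$;

    -- what actually gets stored is the hashed password/verifier, not the cleartext
    SELECT rolname, length(rolpassword) FROM pg_authid WHERE rolname = 'longpass_test';

    DROP ROLE longpass_test;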
Re: Is there still password max length restrictions in PG?
Hi,

Thanks for the information. Even with SCRAM, when a password is specified there is still the basic question of how long it may be. From the source code it seems there is no restriction at all, right? Is that reasonable?

BR,
Sean He

-- Original --
From: "Daniel Gustafsson"
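One relevant point here is that with SCRAM the server stores only a fixed-size salted verifier, so the cleartext length does not affect what ends up on disk. A small illustration (the role name scram_test and the example password are invented; reading pg_authid requires superuser):

    SET password_encryption = 'scram-sha-256';
    CREATE ROLE scram_test LOGIN PASSWORD 'correct horse battery staple';

    -- the stored value is a SCRAM-SHA-256 verifier of roughly constant size,
    -- regardless of how long the cleartext password was
    SELECT substr(rolpassword, 1, 14) AS prefix, length(rolpassword) AS stored_len
    FROM pg_authid WHERE rolname = 'scram_test';

    DROP ROLE scram_test;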
Re: Is there still password max length restrictions in PG?
Thanks Daniel. That's a big help to me! -- Original -- From: "Daniel Gustafsson"
PG13 fails to startup even though the current transaction is equal to the target transaction
We've noticed what may be a regression / bug in PG13.

I work at Heroku on the Data team, where we manage a fleet of Postgres services. This change has broken the UX we offer customers for managing their PG services. In particular, 'forks' and 'point in time restores' appear broken for PG13.

I believe it is related to this patch:
https://www.postgresql.org/message-id/993736dd3f1713ec1f63fc3b653839f5%40lako.no

For PG12, we expect:

-- We create a new Postgres service from archive and provide a recovery_target_xid
-- PG replays the archive until the end of the archive is reached, and the current transaction == recovery_target_xid
-- We measure the current transaction via the query SELECT pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot())
-- Since the current transaction is exactly equal to the target transaction, we perform the promotion

For PG12, what we get:

-- This process completes smoothly, and the new Postgres service is up and running

For PG13, we expect:

-- We create a new Postgres service from archive and provide a recovery_target_xid
-- PG replays the archive until the end of the archive is reached, and the current transaction == recovery_target_xid
-- We measure the current transaction via the query SELECT pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot())
-- Since the current transaction is exactly equal to the target transaction, we perform the promotion

For PG13, what we get:

-- On promotion we see the postgres process exit with the following log lines:

Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]: [18-1] sql_error_code = 0 LOG: promote trigger file found: /etc/postgresql/wal-e.d/pull-env/STANDBY_OFF
Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]: [19-1] sql_error_code = 0 LOG: redo done at 0/60527E0
Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]: [20-1] sql_error_code = 0 LOG: last completed transaction was at log time 2021-03-17 14:42:44.901448+00
Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]: [21-1] sql_error_code = XX000 FATAL: recovery ended before configured recovery target was reached
Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[5]: [8-1] sql_error_code = 0 LOG: startup process (PID 7) exited with exit code 1

Even though the transaction IDs are identical. It seems the end of the archive was reached (in fact the last transaction was replayed), and although we arrived at the correct transaction ID, PG somehow decided it wasn't done replaying. Perhaps because the timestamps don't line up? (As far as I can tell we do not set recovery_target_time, only recovery_target_xid.)

We have `recovery_target_inclusive` set to true, which is the default. The intent of that setting really seems to be that if the target equals the current transaction ID, recovery should be marked as complete. However, we're seeing the opposite: while the current txn ID == the target transaction ID, the server exits. This is surprising and doesn't line up with the behavior we expected.

We have a workaround. Right before promotion, we increment the transaction ID of the leader database (the original PG service we're forking from) by running `SELECT pg_catalog.txid_current()`, wait 120 seconds (double our archive timeout, to allow the WAL segment to be written / uploaded / read), and wait until the current transaction is strictly greater than the target transaction. With that in place, the promotion works fine every time for PG13.
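For reference, the moving parts described above look roughly like this (a sketch only; the target xid 123456 and the restore_command are placeholders, not our real values):

    -- postgresql.conf (or postgresql.auto.conf) on the node being restored:
    --   restore_command           = '...'
    --   recovery_target_xid       = '123456'
    --   recovery_target_inclusive = on        # the default
    --
    -- on the leader, the value we capture and later use as the target:
    SELECT pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot());

    -- the workaround: advance the leader's xid so the restored node can observe
    -- a current xid strictly greater than the target before we promote
    SELECT pg_catalog.txid_current();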
But this seems like an off-by-one error? What do you think? Is this a bug? Is this expected? Is this user error on our end?

Thanks!
Sean
Re: PG13 fails to startup even though the current transaction is equal to the target transaction
Hi Kyotaro -

Thanks for the response. I think it boils down to your comment:

> I'm not sure. The direct cause of the "issue" is a promotion trigger
> that came before reaching recovery target. That won't happen if the
> "someone" doesn't do that.

I think the question is: under what conditions is it safe to do the promotion? What is your recommendation in this case?

The end of the archive has been reached. All transactions have been replayed. And in fact the current transaction ID is exactly equal to the target recovery transaction ID. So by every indicator I can see, this recovery is in fact done. All the data that should be there is there. All the transactions I want replayed have been replayed. (In fact, all the transactions in the archive have been replayed.)

If we stop and wait before hitting the promotion trigger, we could wait indefinitely (if the parent service has no more incoming transactions, there is no more WAL to replay). Are you recommending that we wait until another transaction happens on the parent DB?

Thanks,
Sean

On Mon, Mar 22, 2021 at 3:59 AM Kyotaro Horiguchi wrote:
> At Thu, 18 Mar 2021 17:59:34 -0400, Sean Jezewski <sjezew...@salesforce.com> wrote in
> > We've noticed what may be a regression / bug in PG13.
> >
> > I work at Heroku on the Data team, where we manage a fleet of Postgres
> > services. This change has resulted in breaking the UX we offer to customers
> > to manage their PG services. In particular, ‘forks’ and ‘point in time
> > restores’ seem broken for PG13.
> >
> > I believe it is related to this patch (
> > https://www.postgresql.org/message-id/993736dd3f1713ec1f63fc3b653839f5%40lako.no
> > )
> >
> > For PG12, we expect:
> >
> > -- We create a new Postgres service from archive and provide a
> > recovery_target_xid
> > -- PG replays the archive until the end of the archive is reached, and the
> > current transaction == recovery_target_xid
> > -- We measure the current transaction via the query SELECT
> > pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot())
> > -- Since the current transaction is exactly equal to the target
> > transaction, we perform the promotion
> >
> > For PG12, what we get:
> >
> > -- This process completes smoothly, and the new postgres service is up and
> > running
> ...
> > For PG13, what we get:
> >
> > -- On promotion we see the postgres process exit with the following log
> > lines:
> >
> > Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:
> > [18-1] sql_error_code = 0 LOG: promote trigger file found:
> > /etc/postgresql/wal-e.d/pull-env/STANDBY_OFF
>
> This means someone other than the server itself has placed that file
> to cause the promotion, perhaps before reaching the target point of
> the recovery. Even if that happened on PG12, the server is uninterested
> in the cause of the recovery stop and happily proceeds to
> promotion. Thus, it is likely that the configured target xid actually
> had not been reached at promotion, at least in the PG13 case.
>
> > Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:
> > [19-1] sql_error_code = 0 LOG: redo done at 0/60527E0
> ...
> > [21-1] sql_error_code = XX000 FATAL: recovery ended before configured
> > recovery target was reached
>
> In PG13, the startup process complains like this even if recovery is
> stopped by an operational (or manual) promotion. There might be other
> behaviors, but it seems reasonable to give priority to the
> configuration in postgresql.conf over on-the-fly operations like
> promotion triggering.
>
> > Even though the transaction IDs are identical. It seems like the end of the
> > archive was reached (in fact the last transaction), and while we arrived at
> > the correct transaction id, somehow PG decided it wasn’t done replaying?
> > Perhaps because somehow the timestamps don’t line up? (Afaict we do not set
> > the recovery_target_time, just the recovery_target_xid)
> >
> > We have the `recovery_target_inclusive` set to true, which is the default.
> > It really seems like the intent of that setting means that if the target
> > equals the current transaction ID, recovery should be marked as complete.
> > However we're seeing the opposite. While the current txn id == the target
> > transaction id, the server exits. This is surprising, and doesn't line up
> > with our expected behavior.
>
> So at least the issue raised here doesn't seem relevant to how
> xid-targeted PITR works.
>
> > We have a workaround. [...]
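A concrete version of the "is it safe to promote yet" check implied by the workaround upthread might look like this when run on the hot standby (a sketch; 123456 stands in for whatever recovery_target_xid was configured):

    -- still in recovery?
    SELECT pg_is_in_recovery();

    -- has the standby's visible snapshot advanced strictly past the target?
    -- (mirrors the "strictly greater than the target" condition described above)
    SELECT pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot()) > 123456
           AS safe_to_promote;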
Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash (segfault)
Not sure if this is the right avenue to follow up on this. The patch works fine. However, we're working on a modified version of postgres_fdw in which we're trying to push as much as possible to the remote nodes, including ordering and limits. The patch causes the upper paths for the ordering and limit to be rejected because they have no relids. I've had a quick look at how one might carry relids in the FDW private data, but it's not obvious. Obviously this isn't mainstream Postgres, but I'm just wondering whether anyone has looked into issues with pushing ORDER BY / LIMIT to remote nodes for FDWs.

On Sat, Apr 27, 2019 at 3:47 PM Tom Lane wrote:
> Etsuro Fujita writes:
> > On Sat, Apr 27, 2019 at 2:10 AM Tom Lane wrote:
> >> If we don't want to change what the core code does with fdw_exprs,
> >> I think the only way to fix it is to hack postgres_fdw so that it
> >> won't generate plans involving the problematic case.
>
> > Seems reasonable.
>
> >> See attached.
>
> > I read the patch. It looks good to me. I didn't test it, though.
>
> Thanks for looking! Have a good vacation ...
>
> regards, tom lane
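As an aside on verifying that kind of push-down: recent stock postgres_fdw can already ship ORDER BY and LIMIT to the remote side in some cases, and EXPLAIN VERBOSE shows exactly what is sent in the "Remote SQL" line. A sketch, assuming a hypothetical foreign table ft (with a column id) defined through postgres_fdw:

    -- inspect the Remote SQL line to see whether the ORDER BY / LIMIT
    -- made it into the query sent to the remote server
    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT * FROM ft ORDER BY id LIMIT 10;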
Re: WAL prefetch
* Drop a VACUUM FULL FREEZE into any pgbench test (or a pg_repack) and it's trivial to see the effects, even on SSD. Similarly, performing a fast shutdown of a replica and amassing a large backlog of unreceived, unapplied WAL pages is pretty demonstrative.

* "In this case I see that without prefetching, the replay process uses about 20% of a CPU. With prefetching increases this to ~60%, which is nice." With the pg_prefaulter, the IO should hit 100% utilization. Like I mentioned above, Tomas, I'd like to make sure you get this working so you can compare and improve as necessary. :~] I never got CPU utilization to 100%, but I did get disk IO utilization to 100%, and that to me was the definition of success. CPU utilization of the apply process could reach 100% with fast enough disks, but in production I didn't have anything that wasn't spinning rust.

* It looks like we're still trying to figure out the nature of this problem and the cost of various approaches. From a rapid-prototyping perspective, feel free to suggest changes to the Go pg_prefaulter and toss the various experiments behind a feature flag.

* "> But it is implemented in GO and using pg_waldump. Yeah, that's not too good if we want it in core." I fail to see the problem with a side-car in Go. *checks calendar* :~]

* In pg_prefaulter all IOs are converted into full-page reads.

* pg_prefaulter will not activate if the number of unapplied WAL pages is less than the size of one WAL segment (i.e. 16MB). This could be tuned further, but this low-water mark seems to work well.

* pg_prefaulter won't read ahead more than 100 WAL segments into the future. I made the unverified assumption that PostgreSQL could not process more than 1.6GB of WAL (~1/2 of the memory bandwidth available to a single process) in less than the rotational latency of a random IO (i.e. ~10ms), and that pg_prefaulter could therefore in theory stay ahead of PG. PostgreSQL normally overtakes pg_prefaulter's ability to fault in random pages due to disk IO limitations (PostgreSQL's cache hit rate is ~99.9%, not 100%, for this very reason). In practice this has worked out, but I'm sure there's room for improvement with regards to setting the high-water mark and reducing this value. #yoloconstants

* I contemplated not reading in FPWs, but this would have been detrimental on ZFS because ZFS is a copy-on-write filesystem (vs. a block filesystem). For ZFS, we use a 16K record size, compressed down to ~8K. We have to read the entire record in before we can modify half of the page. I suspect eliding prefaulting of FPWs will always be a performance loss on nearly all hardware.

* If there is sufficient interest in these experiences, contact me offline (or via PostgreSQL Slack) and I can set up a call to answer questions in a higher-bandwidth setting such as Zoom or Google Hangouts.

I'm sorry for being late to the reply party; I've been watching posts in this thread accumulate for a while and haven't had time to respond until now.

Cheers. -sc

--
Sean Chittenden
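For anyone wanting to observe the "large backlog of unreceived, unapplied WAL" symptom that motivates all of this, the replay gap on a streaming replica can be eyeballed with a query along these lines (a sketch; assumes PG 10 or later where these function names exist):

    -- how far apply lags behind receive on this replica, in bytes;
    -- a growing gap is the backlog a prefaulter tries to hide
    SELECT pg_last_wal_receive_lsn() AS received,
           pg_last_wal_replay_lsn()  AS replayed,
           pg_wal_lsn_diff(pg_last_wal_receive_lsn(),
                           pg_last_wal_replay_lsn()) AS replay_gap_bytes;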