Getting away from Oracle APEX, recommendations for PostgreSQL?
Hi all, if someone wants to get away from developing with Oracle APEX to become more flexible regarding the underlying database system (yes, rebuilding everything is the case here), what kind of web technology or programming language would you recommend when using PostgreSQL? Or is there anything similar as an option? Thanks and best regards Dirk
Migration of DB2 Java stored procedures to PostgreSQL
Hi, for a potential migration from DB2 on z/OS to PostgreSQL I have to take care of a whole bunch of Java stored procedures. Would that be a showstopper here, or is a migration somehow possible, or is it much the same anyway because Java = Java? Any experiences/recommendations or helpful resources? Many thanks and best regards Dirk
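PostgreSQL has no built-in Java stored procedure support, but the PL/Java extension offers a similar model: a static Java method exposed through CREATE FUNCTION. A minimal sketch, assuming PL/Java is installed and the jar has been loaded; class and method names are invented for illustration:

```sql
-- Assumes the PL/Java extension is installed and a jar containing
-- com.example.Orders has been registered with sqlj.install_jar().
-- The Java side is an ordinary static method, which is why much of the
-- DB2 Java code can often be carried over with moderate changes.
CREATE FUNCTION order_total(order_id integer)
  RETURNS numeric
  AS 'com.example.Orders.orderTotal'
  LANGUAGE java;
```

The main migration effort is usually not the Java logic itself but the JDBC calls against DB2-specific SQL and the deployment/packaging workflow.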
Most effective and fast way to load a few TB of data from flat files into PostgreSQL
Hi, what would be the fastest or most effective way to load a few (5-10) TB of data from flat files into a PostgreSQL database, including some 1 TB tables and blobs? There is the COPY command, but it has no native parallelism, right? I have found pg_bulkload but haven't tested it yet. As far as I can see, EDB has its EDB*Loader as a commercial option. Anything else to recommend? Thanks and best regards Dirk
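A single COPY statement is indeed single-threaded, but a common workaround is to split the input files and run one COPY per session in parallel. A sketch, with invented file paths:

```sql
-- Session 1:
COPY big_table FROM '/data/big_table_part1.csv' WITH (FORMAT csv);

-- Session 2, run concurrently in a separate connection:
COPY big_table FROM '/data/big_table_part2.csv' WITH (FORMAT csv);

-- Dropping indexes and foreign keys before the load and recreating them
-- afterwards usually speeds up a bulk load of this size considerably.
```

Loading into partitioned tables (one session per partition) parallelizes the same way and avoids contention on a single heap.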
High Availability, guarantee to use sync nodes
Hi all, suppose there are several PostgreSQL nodes replicating synchronously, and via synchronous_standby_names I configure a guarantee that, for example, 3 of 5 nodes have to be in sync before a commit completes. Now I want to implement a load-balancing scenario with active standby DBs: how, or with what tool, can I ensure that my read clients only use nodes that are in sync? A check against pg_stat_replication beforehand is certainly possible, but is there something in the parameters, or in solutions like repmgr, Patroni, etc., to do this? Thanks and best regards Dirk
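The pg_stat_replication check mentioned above could be sketched like this; it would typically be run by a router or health check on the primary, not by the read clients themselves:

```sql
-- On the primary: list standbys currently considered synchronous.
-- With a quorum-based synchronous_standby_names (ANY n (...)),
-- look for sync_state = 'quorum' instead of 'sync'.
SELECT application_name, client_addr, sync_state
FROM pg_stat_replication
WHERE sync_state = 'sync';
```

A load balancer could refresh this list periodically and route read traffic only to the standbys it returns.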
UUID with variable length
Hi, I have a situation here with the usage of UUIDs. The database user allows UUIDs with less than 16 bytes (please don't ask :-) ). Of course there are technical ways to pad the unused bytes, but I hope there is a better solution. This UUID is used as primary key and for indexing. CHAR(16) is not preferred because of the risk of mistakenly changing the binary data. Another option would be e.g. bytea, but how well would that work from an I/O and latency perspective? Or do you have other ideas for a primary-key datatype like UUID but with variable length? Thanks Dirk
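The two options discussed above can be sketched as follows; table and column names are invented for illustration:

```sql
-- Option 1: variable-length binary key with bytea.
CREATE TABLE docs (
    id bytea PRIMARY KEY,
    payload text
);

-- Option 2: zero-pad a short binary id b (< 16 bytes) on the right
-- and store it in the native fixed-length uuid type.
SELECT encode(b || decode(repeat('00', 16 - length(b)), 'hex'), 'hex')::uuid
FROM (VALUES ('\x0011223344'::bytea)) AS t(b);
```

bytea keys index and compare fine; the cost is mostly the per-value length header and losing uuid's fixed 16-byte layout, which rarely dominates I/O or latency compared to overall index size.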
Date Format 9999-12-31-00.00.00.000000
Hi, because of a migration from DB2 we have a lot of timestamps like 9999-12-31-00.00.00.000000. What would be the best way to handle this in Postgres, also with regard to overhead and performance (index usage?)? Or is TO_TIMESTAMP('9999-12-31-00.00.00.000000', 'YYYY-MM-DD-HH24.MI.SS.US') the only way? And isn't it possible to define this system-wide, like the NLS parameters in Oracle? Thanks Dirk
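For a one-time migration, converting the DB2-formatted text once at load time is usually cheaper than calling to_timestamp in every query, and it lets ordinary timestamp indexes work. A sketch, with invented table and column names:

```sql
-- Rewrite a text column holding DB2-style values like
-- '9999-12-31-00.00.00.000000' into a native timestamp column.
ALTER TABLE orders
  ALTER COLUMN created_at TYPE timestamp
  USING to_timestamp(created_at, 'YYYY-MM-DD-HH24.MI.SS.US')::timestamp;
```

After the conversion, comparisons and index scans use the native type directly; the DB2 format only has to be parsed once.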
Re: Question on postgres certified OS platforms
Hi Raj, I am not aware of any official platform compatibility matrix, but the best way to find out is the status page of the build farm: https://buildfarm.postgresql.org/cgi-bin/show_status.pl Here you can search for your requested OS and check the test results for the last 5 releases. Some details are also given in the official documentation: https://www.postgresql.org/docs/13/supported-platforms.html HTH Cheers Dirk From: Raj Iyer Sent: 15 October 2020 19:23 To: pgsql-gene...@postgresql.org Subject: Question on postgres certified OS platforms Hi, Is there information available on what versions of Postgres are certified and supported on what operating system platforms? Thanks, Raj
Backup Restore from other node after switchover/failover
Hi, I haven’t tested it yet, but maybe I can get a quick answer here. We have discussed the following scenario: a few nodes in a streaming replication cluster, all in sync, with backups taken only from one dedicated node. Now the node responsible for the backups goes down. I still have a full backup taken from that crashed node some time ago, but the node itself is gone and I have to handle a restore for some reason. Can I do a full restore with that last backup from the offline node onto any other still-existing node, recovering the archived WAL from the newly selected node, provided I have activated archive_mode as “always”? Or is it strictly necessary to create a new full backup from the node that becomes responsible for backups after the failover, or maybe after a switchover, too? Thanks and best regards Dirk
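The setting in question, as it would appear in postgresql.conf on every node; the archive destination is invented for illustration:

```
# archive_mode = always archives WAL on standbys too, not only while the
# node is primary, so each node keeps a complete archive for recovery.
archive_mode = always
archive_command = 'cp %p /wal_archive/%f'
```

With archiving on every node, a base backup plus the surviving node's WAL archive is in principle enough to restore, as long as the archive covers the full range since that backup.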
Is there something similar like flashback query from Oracle planned for PostgreSQL
Hi, just a curious question... Is there anything planned to get behaviour like Oracle's flashback query, based on the old row versions that still exist before vacuum removes them? So a feature to recreate old versions of rows if they are still there? Or are there any related extensions or tools doing this? Thanks Dirk
Experience with node-pg-migrate
Hi, does anyone have good or bad experience with node-pg-migrate as a schema management/migration tool? Or maybe do you have some better recommendations? Cheers Dirk