Re: Upgrade problem
On 2023-09-11 17:04:54 +0100, Graeme wrote:
> Preparing to upgrade my small cluster from Mageia 8/Pg 9 to Mageia 9/Pg 15.
> I'm at the point of running pg_upgrade but have received an error message:
>
> /mga8/usr/bin/postgres: error while loading shared libraries: libssl.so.1.1:
> cannot open shared object file: No such file or directory
> no data was returned by command ""/mga8/usr/bin/postgres" -V"
>
> However:
>
> [root@bach lib64]# cd /mga8/usr/lib64

I'm not familiar with the Mageia update process, but the paths look strange.
Is it normal that old binaries and libraries are moved into a directory named
after the old version (I assume "mga8" is short for "Mageia version 8"), or is
this something you have done?

In any case, /mga8/usr/lib64 would not normally be on the library search path.
Have you somehow told /mga8/usr/bin/postgres to look there?

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
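A quick way to check that, assuming the Mageia 8 tree really was just moved
under /mga8, is to run the old binary with the loader pointed explicitly at
the old library directory. The data-directory paths below are placeholders,
not taken from the report:

  # does the old server binary start once it can find its own libssl?
  LD_LIBRARY_PATH=/mga8/usr/lib64 /mga8/usr/bin/postgres -V

  # if that prints a version, exporting the same variable before pg_upgrade
  # should let it run the old binaries (old/new data dirs are placeholders)
  export LD_LIBRARY_PATH=/mga8/usr/lib64
  pg_upgrade -b /mga8/usr/bin -B /usr/bin \
             -d /mga8/var/lib/pgsql/data -D /var/lib/pgsql/data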
Re: Upgrade problem
On 11/09/2023 17:04, Graeme wrote:
> Preparing to upgrade my small cluster from Mageia 8/Pg 9 to Mageia 9/Pg 15.
> I'm at the point of running pg_upgrade but have received an error message:
>
> /mga8/usr/bin/postgres: error while loading shared libraries: libssl.so.1.1:
> cannot open shared object file: No such file or directory
> no data was returned by command ""/mga8/usr/bin/postgres" -V"
>
> However:
>
> [root@bach lib64]# cd /mga8/usr/lib64
> [root@bach lib64]# ls -l | grep libssl
> -rwxr-xr-x 1 root root 426192 Jul  5 23:07 libssl3.so*
> lrwxrwxrwx 1 root root     13 Jun  1 09:35 libssl.so -> libssl.so.1.1*
> -r-xr-xr-x 1 root root 442424 Feb 27  2021 libssl.so.1.0.0*
> -rwxr-xr-x 1 root root 666496 Jun  1 09:36 libssl.so.1.1*
>
> Can someone suggest my next move please?
>
> Ta
> Graeme Gemmill

Tom, Adrian, Ray: thanks for your comments. I see the problem now: trying to
get a Mga8 module to link against Mga9 libraries. I suspect a carefully placed
sym-link or two will sort the problem.
Is data import from Oracle table to Postgres table in Ora2pg consecutive or random?
Hello friends,

I want to migrate a large Oracle table to PostgreSQL, which is approximately
200GB in size and includes a LOB segment. This table has a primary key column
named "ID," which increments by one unit (similar to a sequence). During the
migration of this large table, sometimes the process terminates prematurely
(due to network issues, etc.). As a result, I have to restart the import
process either from the beginning or by using a WHERE clause in the ora2pg
configuration file to import the remaining data from where it was previously
terminated.

However, I've noticed that the import process in ora2pg doesn't seem to follow
a consecutive order, starting from a small ID number and progressing to a
larger ID number. Instead, it appears to import data randomly from various
rows of the table. This makes it impossible to continue importing data from
the point where it was terminated using a WHERE clause in ora2pg.

My questions are:

- Is it correct that ora2pg imports data from the Oracle table to the
  PostgreSQL table randomly and not sequentially from a smaller ID to a
  larger ID?
- Or can we add additional parameters to the ora2pg.conf file to control this
  process and ensure that the data is imported sequentially, following the
  primary key from smallest to largest?

Thank you,
Hans
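For reference, a hedged sketch of what such a filter can look like in
ora2pg.conf; the table name, column and resume value are invented, and the
exact directive syntax should be checked against the ora2pg documentation:

  # export the table data with COPY, restricted to rows past the last imported ID
  TYPE   COPY
  WHERE  MY_BIG_TABLE[ID > 1500000]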
pg_update and encoding
Preparing to use pg_update, I used initdb to create the new pgsql/data, but
pg_update has exited with

  encodings for database "template1" do not match: old "UTF8", new "SQL_ASCII"

Should I delete pgsql/data and re-create with initdb -E "UTF8"?

Thanks, Graeme
Re: Is data import from Oracle table to Postgres table in Ora2pg consecutive or random?
On 2023-09-08 17:19:01 +0700, duc hiep ha wrote:
> I want to migrate a large Oracle table to PostgreSQL, which is approximately
> 200GB in size and includes a LOB segment. This table has a primary key column
> named "ID," which increments by one unit (similar to a sequence). During the
> migration of this large table, sometimes the process terminates prematurely
> (due to network issues, etc.). As a result, I have to restart the import
> process either from the beginning or by using a WHERE clause in the ora2pg
> configuration file to import the remaining data from where it was previously
> terminated.
>
> However, I've noticed that the import process in ora2pg doesn't seem to follow
> a consecutive order, starting from a small ID number and progressing to a
> larger ID number. Instead, it appears to import data randomly from various rows
> of the table. This makes it impossible to continue importing data from the
> point where it was terminated using a WHERE clause in ora2pg.
>
> My questions are:
>
> - Is it correct that ora2pg imports data from the Oracle table to the
> PostgreSQL table randomly and not sequentially from a smaller ID to a larger
> ID?

Oracle doesn't in general order tables by primary key (it does offer "index
organized tables", but the default is "heap organized"), so a simple
"select * from tablename" will return rows in semi-random order.

I think the rowid is in ascending order (but I can't test that at the moment),
so you may be able to use the rowid in your WHERE clause.

> - Or can we add additional parameters to the ora2pg.conf file to control this
> process and ensure that the data is imported sequentially, following the
> primary key from smallest to largest?

I can't answer that.

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
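If an ordered, resumable extraction is the goal, one option (independent of
ora2pg) is to drive the export from an explicitly ordered query on the primary
key. The table name and resume value below are invented:

  -- Oracle side: only rows past the last successfully imported ID, in ID order
  SELECT *
    FROM my_big_table
   WHERE id > 1500000
   ORDER BY id;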
Re: Is data import from Oracle table to Postgres table in Ora2pg consecutive or random?
On Tuesday, September 12, 2023 at 02:19:19 +0200, Peter J. Holzer wrote:
> > - Or can we add additional parameters to the ora2pg.conf file to control this
> > process and ensure that the data is imported sequentially, following the
> > primary key from smallest to largest?

AFAIK, a simple SELECT in PostgreSQL without an ORDER BY will return the rows
in random order.

        matthias

--
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
Re: Is data import from Oracle table to Postgres table in Ora2pg consecutive or random?
> On 12 Sep 2023, at 14:26, Matthias Apitz wrote:
>
> On Tuesday, September 12, 2023 at 02:19:19 +0200, Peter J. Holzer wrote:
>
>>> - Or can we add additional parameters to the ora2pg.conf file to control this
>>> process and ensure that the data is imported sequentially, following the
>>> primary key from smallest to largest?
>
> AFAIK, a simple SELECT in PostgreSQL without an ORDER BY will return the rows
> in random order.

It will return the rows in some order, without guarantees about randomness or
asc/desc ordering. Any query which relies on ordering, whichever is wanted,
should include an ORDER BY clause.

--
Daniel Gustafsson
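In other words (table and column names invented):

  -- order is guaranteed only when asked for explicitly
  SELECT id, payload FROM t ORDER BY id;   -- deterministic ordering
  SELECT id, payload FROM t;               -- some order, no guarantee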
Re: Is data import from Oracle table to Postgres table in Ora2pg consecutive or random?
I have not tried this in a while, but I think a SELECT with a "hint" will
return rows in the order of the index in the hint. This does NOT work for
distributed queries.

> On 09/12/2023 9:10 AM EDT Daniel Gustafsson wrote:
>
>> On 12 Sep 2023, at 14:26, Matthias Apitz wrote:
>>
>> On Tuesday, September 12, 2023 at 02:19:19 +0200, Peter J. Holzer wrote:
>>
>>>> - Or can we add additional parameters to the ora2pg.conf file to control this
>>>> process and ensure that the data is imported sequentially, following the
>>>> primary key from smallest to largest?
>>
>> AFAIK, a simple SELECT in PostgreSQL without an ORDER BY will return the rows
>> in random order.
>
> It will return the rows in some order, without guarantees about randomness or
> asc/desc ordering. Any query which relies on ordering, whichever is wanted,
> should include an ORDER BY clause.
>
> --
> Daniel Gustafsson
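The kind of Oracle optimizer hint presumably meant above looks like the sketch
below (table, alias and index names are invented). Note that a hint only
influences the access path; unlike an ORDER BY, it gives no guarantee about
result order:

  -- ask the optimizer to use the primary-key index for this access
  SELECT /*+ INDEX(t my_big_table_pk) */ *
    FROM my_big_table t
   WHERE id > 1500000;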
Re: pg_update and encoding
On 12/09/2023 15:52, Adrian Klaver wrote:
> On 9/12/23 04:49, Graeme wrote:
>> Preparing to use pg_update, I used initdb to create the new pgsql/data, but
>> pg_update has exited with
>
> I'm guessing that is actually pg_upgrade:
> https://www.postgresql.org/docs/current/pgupgrade.html
>
>> encodings for database "template1" do not match: old "UTF8", new "SQL_ASCII"
>>
>> Should I delete pgsql/data and re-create with initdb -E "UTF8"?
>
> Yes for two reasons:
>
> 1) The upgrade will not happen.
>
> 2) From here:
> https://www.postgresql.org/docs/current/multibyte.html
>
> "The SQL_ASCII setting behaves considerably differently from the other
> settings. When the server character set is SQL_ASCII, the server interprets
> byte values 0–127 according to the ASCII standard, while byte values 128–255
> are taken as uninterpreted characters. No encoding conversion will be done
> when the setting is SQL_ASCII. Thus, this setting is not so much a declaration
> that a specific encoding is in use, as a declaration of ignorance about the
> encoding. In most cases, if you are working with any non-ASCII data, it is
> unwise to use the SQL_ASCII setting because PostgreSQL will be unable to help
> you by converting or validating non-ASCII characters."
>
>> Thanks, Graeme

Thanks, done. Data now accessible.

Graeme
Re: pg_update and encoding
On 9/12/23 04:49, Graeme wrote:
> Preparing to use pg_update, I used initdb to create the new pgsql/data, but
> pg_update has exited with

I'm guessing that is actually pg_upgrade:
https://www.postgresql.org/docs/current/pgupgrade.html

> encodings for database "template1" do not match: old "UTF8", new "SQL_ASCII"
>
> Should I delete pgsql/data and re-create with initdb -E "UTF8"?

Yes for two reasons:

1) The upgrade will not happen.

2) From here:
https://www.postgresql.org/docs/current/multibyte.html

"The SQL_ASCII setting behaves considerably differently from the other
settings. When the server character set is SQL_ASCII, the server interprets
byte values 0–127 according to the ASCII standard, while byte values 128–255
are taken as uninterpreted characters. No encoding conversion will be done when
the setting is SQL_ASCII. Thus, this setting is not so much a declaration that
a specific encoding is in use, as a declaration of ignorance about the
encoding. In most cases, if you are working with any non-ASCII data, it is
unwise to use the SQL_ASCII setting because PostgreSQL will be unable to help
you by converting or validating non-ASCII characters."

> Thanks, Graeme

--
Adrian Klaver
adrian.kla...@aklaver.com
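A minimal sequence for that, assuming the new cluster lives in pgsql/data and
contains nothing worth keeping; the port and locale below are placeholders and
should be adjusted to match the old cluster:

  # confirm the encodings used by the old cluster
  psql -p 5432 -c "SELECT datname, pg_encoding_to_char(encoding) FROM pg_database;"

  # discard the freshly created (still empty) new cluster and re-create it
  rm -rf pgsql/data
  initdb -D pgsql/data -E UTF8 --locale=en_US.UTF-8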
Re: Upgrade problem
On 9/12/23 03:51, Graeme wrote:
> On 11/09/2023 17:04, Graeme wrote:
>> Ta
>> Graeme Gemmill
>
> Tom, Adrian, Ray: thanks for your comments. I see the problem now: trying to
> get a Mga8 module to link against Mga9 libraries. I suspect a carefully placed
> sym-link or two will sort the problem.

I'm thinking you would be better off doing a pg_dumpall of the old cluster and
psql -f dump_file.sql on the new cluster per:

https://www.postgresql.org/docs/current/app-pg-dumpall.html

--
Adrian Klaver
adrian.kla...@aklaver.com
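A hedged sketch of that route; the port numbers stand in for wherever the old
and new clusters actually listen:

  # dump everything from the old (Mageia 8 / PG 9) cluster ...
  pg_dumpall -p 5432 -f dump_file.sql

  # ... and load it into the new (Mageia 9 / PG 15) cluster
  psql -p 5433 -d postgres -f dump_file.sql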
Unqualified relations in views
For a view, how does one show what schema was used to qualify a relation, when
the query used to create the view originally left the relation unqualified?

The qualification of the view query seems static in all uses of the view.
Using pg_get_viewdef() returns the unqualified relation, but Postgres always
executes a qualified version of the view query, seemingly determined upon
creation of the view.

That implies the final qualifier is stored by Postgres, but I don't know how
to show it.

Thanks,
Pete O'Such
Re: Unqualified relations in views
On Wed, 2023-09-13 at 01:58 -0400, Pete O'Such wrote:
> For a view, how does one show what schema was used to qualify a relation, when
> the query used to create the view originally left the relation unqualified?
>
> The qualification of the view query seems static in all uses of the view.
> Using pg_get_viewdef() returns the unqualified relation, but Postgres always
> executes a qualified version of the view query, seemingly determined upon
> creation of the view.
>
> That implies the final qualifier is stored by Postgres, but I don't know how
> to show it.

PostgreSQL resolves tables and other objects according to the setting of
"search_path" at CREATE VIEW time. The query itself is stored in parsed form.

pg_get_viewdef() deparses the query and only prepends a schema name if the
schema is not on the "search_path". So the solution is to set "search_path"
empty:

  SET search_path = '';
  SELECT pg_get_viewdef('myschema.myview');

Yours,
Laurenz Albe
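A small worked example of that behaviour; the schema, table and view names are
invented:

  CREATE SCHEMA app;
  CREATE TABLE app.t (id int);
  SET search_path = app;
  CREATE VIEW app.v AS SELECT id FROM t;   -- "t" is resolved to app.t right here

  SELECT pg_get_viewdef('app.v');          -- relation shown unqualified: ... FROM t
  SET search_path = '';
  SELECT pg_get_viewdef('app.v');          -- relation shown qualified: ... FROM app.t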