Re: Right version of jdbc

2023-09-30 Thread Raivo Rebane
I fixed the previous error, which was due to my lack of knowledge,
but a new error has occurred which is related to the Postgres/PostGIS jars.
Would you be kind enough to help me a bit more?

The Java code is:

public static boolean CheckIsNewInMushrooms(Connection connection, Point AddLocation,
        String AddDescription) {

    boolean IsNew = true;

    try {
        String sqlQuery = "SELECT location, description FROM mushrooms";

        try (PreparedStatement preparedStatement = connection.prepareStatement(sqlQuery)) {
            ResultSet resultSet = preparedStatement.executeQuery();

            while (resultSet.next()) {
                Point point = (Point) resultSet.getObject("location");
                String description = resultSet.getString("description");

                if (AddLocation.x == point.x && AddLocation.y == point.y
                        && AddDescription.equals(description))
                    IsNew = false;
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }

    return IsNew;
}

and at the line

Point point = (Point) resultSet.getObject("location");

I get:

java.lang.ClassCastException: org.postgresql.util.PGobject cannot be cast
to org.postgis.Point
at
MushroomAPIs.ProcAddMushrooms.CheckIsNewInMushrooms(ProcAddMushrooms.java:45)

How do I get a Point from the ResultSet?
Is it related to the PostGIS driver?

Regards
Raivo

On Sat, Sep 30, 2023 at 9:33 AM Raivo Rebane  wrote:

> Hi,
> sometimes I am lucky and don't get the old error, but sometimes not.
>
> I tried to use PreparedStatement, but I got an error -
> org.postgresql.util.PSQLException: Can't use query methods that take a
> query string on a PreparedStatement.
> at
> org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:145)
> at MushroomAPIs.Clean.deleteAllRecordsFromTable(Clean.java:34)
>  and the Java code is -
>
> String deleteQuery = "DELETE FROM " + tableNam
>
> System.out.println(deleteQuery);
>
> PreparedStatement statement = connection.prepareStatement(deleteQuery);
>
> Maybe it's easier for me to use a normal Statement?
>
>
> Raivo
>
>
> On Sat, Sep 30, 2023 at 8:27 AM Raivo Rebane  wrote:
>
>> Sorry, I was accidentally using Postgres 16, which was empty.
>> Many thanks for the advice!!!
>>
>> Raivo
>>
>> On Sat, Sep 30, 2023 at 8:18 AM Raivo Rebane  wrote:
>>
>>> Maybe I have to copy the PostGIS driver to tomcat/lib as well?
>>>
>>> Raivo
>>>
>>> On Sat, Sep 30, 2023 at 8:01 AM Raivo Rebane 
>>> wrote:
>>>
 Hi,
 I copied the postgres driver to Tomcat/lib.
 PS C:\Program Files\Apache Software Foundation\Tomcat 9.0\lib> dir post*
 Directory: C:\Program Files\Apache Software Foundation\Tomcat 9.0\lib

 Mode        LastWriteTime      Length   Name
 ----        -------------      ------   ----
 -a---       28.09.2023 14:40   1081604  postgresql-42.6.0.jar
 but Tomcat still gives the error -
 java.sql.SQLException: No suitable driver found for
 jdbc:postgresql://localhost:5432/mushroom_database
 I am using Postgres 15. Maybe I have to use an older one?


 Raivo

 On Sat, Sep 30, 2023 at 6:32 AM Craig McIlwee  wrote:

> > for some reason the postgresql jar is not in the classpath.
>
> This is due to the way that Tomcat loads drivers, which is documented
> at [1].  In short, the JDBC driver should be placed in the tomcat/lib
> directory and removed from the application's WEB-INF/lib directory.  After
> doing that, I was able to get past the "No suitable driver" exception.
>
> OP, you should read that entire page, and in particular the portion at
> [2].  By registering a "Resource" with Tomcat, you can let it manage a
> connection pool for you (if you aren't already using something like
> PgBouncer or pgpool) and, if registered at the container level, in the
> future you can share that pool across all of your web applications in the
> container.
>
> [1]
> https://tomcat.apache.org/tomcat-9.0-doc/jndi-datasource-examples-howto.html#DriverManager,_the_service_provider_mechanism_and_memory_leaks
> [2]
> https://tomcat.apache.org/tomcat-9.0-doc/jndi-datasource-examples-howto.html#PostgreSQL
>
> Craig
>
>>


Re: Right version of jdbc

2023-09-30 Thread postgresql439848

On 30.09.23 at 08:33, Raivo Rebane wrote:

Hi,
sometimes I am lucky and don't get the old error, but sometimes not.

I tried to use PreparedStatement, but I got an error -
org.postgresql.util.PSQLException: Can't use query methods that take a
query string on a PreparedStatement.
 at
org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:145)
 at MushroomAPIs.Clean.deleteAllRecordsFromTable(Clean.java:34)
  and the Java code is -

String deleteQuery = "DELETE FROM " + tableNam

System.out.println(deleteQuery);

PreparedStatement statement = connection.prepareStatement(deleteQuery);
If your statement has already been prepared with a query, use
statement.execute() or statement.executeQuery() without a query string;
if you have a plain Statement without a query, use execute() and friends
with the query string.
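For example, a minimal sketch of both variants (uses java.sql.PreparedStatement
and java.sql.Statement; it assumes the table name variable is tableName and has
already been validated, since identifiers cannot be bound as ? parameters):

String deleteQuery = "DELETE FROM " + tableName;

// PreparedStatement: the SQL is supplied at prepare time,
// so the execute call takes no arguments.
try (PreparedStatement ps = connection.prepareStatement(deleteQuery)) {
    int deleted = ps.executeUpdate();          // not ps.executeUpdate(deleteQuery)
    System.out.println(deleted + " rows deleted");
}

// Plain Statement: the SQL is supplied at execute time.
try (Statement st = connection.createStatement()) {
    int deleted = st.executeUpdate(deleteQuery);
    System.out.println(deleted + " rows deleted");
}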


Maybe it's easier for me to use a normal Statement?


Raivo


On Sat, Sep 30, 2023 at 8:27 AM Raivo Rebane  wrote:

[snip]


On 30.09.23 at 09:18, Raivo Rebane wrote:

I fixed the previous error, which was due to my lack of knowledge,
but a new error has occurred which is related to the Postgres/PostGIS jars.
Would you be kind enough to help me a bit more?

[snip]

and at the line

Point point = (Point) resultSet.getObject("location");

I get:

java.lang.ClassCastException: org.postgresql.util.PGobject cannot be cast
to org.postgis.Point
 at
MushroomAPIs.ProcAddMushrooms.CheckIsNewInMushrooms(ProcAddMushrooms.java:45)
How do I get a Point from the ResultSet?
Is it related to the PostGIS driver?

Try another way to cast to a Point: go via PGgeometry, as shown here:
https://postgis.net/docs/manual-3.3/ch07.html#idm3092
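A minimal sketch of that approach, assuming the postgis-jdbc jar (the
org.postgis classes) is on the application's classpath; depending on the
setup the addDataType() registration may already happen automatically:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.postgis.PGgeometry;
import org.postgis.Point;

public class MushroomQueries {

    static void printMushroomPoints(Connection connection) throws SQLException {
        // Map the PostGIS "geometry" type to PGgeometry instead of the
        // generic PGobject (once per connection).
        connection.unwrap(org.postgresql.PGConnection.class)
                  .addDataType("geometry", org.postgis.PGgeometry.class);

        try (PreparedStatement ps = connection.prepareStatement(
                     "SELECT location, description FROM mushrooms");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // getObject() now returns a PGgeometry wrapper, not PGobject.
                PGgeometry geom = (PGgeometry) rs.getObject("location");
                if (geom != null && geom.getGeometry() instanceof Point) {
                    Point point = (Point) geom.getGeometry();
                    System.out.println(point.x + " " + point.y);
                }
            }
        }
    }
}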


Regards
Raivo





Re: Right version of jdbc

2023-09-30 Thread Raivo Rebane
It seems that if my WEB-INF/lib contains the Postgres driver, then Tomcat
gives "No suitable driver found".

Raivo

On Sat, Sep 30, 2023 at 10:18 AM Raivo Rebane  wrote:

> [snip]


Re: [EXT] YNT: Need help tuning a query

2023-09-30 Thread Amn Ojee Uw

Wow!! This is what I call cryptic!!

On 9/29/23 2:46 a.m., Vladimir Sitnikov wrote:

Oh, I misplaced the added WHERE conditions.
It should have been as follows; however, the overall idea is the same:

--- orignial.sql
+++ tuned_v2.sql
@@ -83,6 +83,7 @@
                                AND (judg1.jrt_opt_out_flag <> 'Y' OR 
judg1.jrt_opt_out_flag IS NULL)

                     ) sub0
                LEFT OUTER JOIN
+                    LATERAL
                    (  SELECT sub4.case_year_number,
                                  sub4.judge_wld_id,
                                  sub4.judge_id,
@@ -99,6 +100,7 @@
                                  jrtf1.higher_judge_id,
                                  jrtf1.case_document_id
                             ) sub4
+                     WHERE sub4.judge_id = sub0.judge_id
                 GROUP BY sub4.case_year_number,
                                 sub4.judge_wld_id,
                                 sub4.judge_id,
@@ -106,6 +108,7 @@
                   ) sub1
                ON sub1.judge_id = sub0.judge_id
              LEFT OUTER JOIN
+                  LATERAL
                  (SELECT sub5.case_year_number,
                                sub5.judge_wld_id,
                                sub5.judge_id,
@@ -129,6 +132,7 @@
                                ),
                                jrtf2.case_document_id
                               ) sub5
+                  WHERE sub5.judge_id = sub0.judge_id
               GROUP BY sub5.case_year_number,
                               sub5.judge_wld_id,
                               sub5.judge_id,

SELECT agg_sub.judge_id,
       agg_sub.display_name,
       agg_sub.active_flag,
       agg_sub.judge_court_level,
       agg_sub.jrt_fact_first_year_trial,
       agg_sub.jrt_fact_last_year_trial,
       agg_sub.jrt_fact_totalcount_trial,
       agg_sub.filtered_first_year_trial,
       agg_sub.filtered_last_year_trial,
       agg_sub.jrt_fact_count_trial,
       agg_sub.jrt_fact_first_year_appeal,
       agg_sub.jrt_fact_last_year_appeal,
       agg_sub.jrt_fact_totalcount_appeal,
       agg_sub.filtered_first_year_appeal,
       agg_sub.filtered_last_year_appeal,
       agg_sub.jrt_fact_count_appeal,
       appellate_flag_sub.appellate_flag
  FROM (SELECT sub3.judge_id,
               sub3.display_name,
               sub3.active_flag,
               sub3.judge_court_level,
               (MIN(sub3.trial_unfilt_case_year_number)) AS jrt_fact_first_year_trial,
               (MAX(sub3.trial_unfilt_case_year_number)) AS jrt_fact_last_year_trial,
               (SUM(sub3.trial_unfilt_subcount)) AS jrt_fact_totalcount_trial,
               (MIN(sub3.trial_filt_case_year_number)) AS filtered_first_year_trial,
               (MAX(sub3.trial_filt_case_year_number)) AS filtered_last_year_trial,
               (SUM(sub3.trial_filt_subcount)) AS jrt_fact_count_trial,
               (MIN(sub3.appeal_unfilt_case_year_number)) AS jrt_fact_first_year_appeal,
               (MAX(sub3.appeal_unfilt_case_year_number)) AS jrt_fact_last_year_appeal,
               (SUM(sub3.appeal_unfilt_subcount)) AS jrt_fact_totalcount_appeal,
               (MIN(sub3.appeal_filt_case_year_number)) AS filtered_first_year_appeal,
               (MAX(sub3.appeal_filt_case_year_number)) AS filtered_last_year_appeal,
               (SUM(sub3.appeal_filt_subcount)) AS jrt_fact_count_appeal
          FROM (SELECT sub0.judge_id,
                       sub0.display_name,
                       sub0.active_flag,
                       sub0.judge_court_level,
                       (CASE WHEN sub2.grouping_flg = 'T' AND sub2.judge_wld_id = sub0.judge_wld_id
                             THEN sub2.case_year_number ELSE NULL END) AS trial_unfilt_case_year_number,
                       (CASE WHEN sub2.grouping_flg = 'T' AND sub2.judge_wld_id = sub0.judge_wld_id
                             THEN sub2.subcount ELSE NULL END) AS trial_unfilt_subcount,
                       (CASE WHEN sub2.grouping_flg = 'T' AND sub2.judge_wld_id = sub0.judge_wld_id
                             THEN (CASE WHEN sub2.case_year_number BETWEEN sub0.low_case_year_number AND sub0.high_case_year_number
                                        THEN sub2.case_year_number ELSE NULL END)
                             ELSE NULL END) AS trial_filt_case_year_number,
                       (CASE WHEN sub2.grouping_flg = 'T' AND sub2.judge_wld_id = sub0.judge_wld_id
                             THEN (CASE WHEN sub2.case_year_number BETWEEN sub0.low_case_year_number AND sub0.high_case_year_number
                                        THEN sub2.subcount ELSE NULL END)
                             ELSE NULL END) AS trial_filt_subcount,
                       (CASE WHEN sub1.grouping_flg = 'A' AND sub1.judge_wld_id = sub0.judge_wld_id
                             THEN sub1.case_year_number
                             WHEN sub2.grouping_flg = 'A' AND sub2.judge_wld_id = sub0.judge_wld_id
                             THEN sub2.case_year_number ELSE NULL END) AS appeal_unfilt_case_year_number,
                       (CASE WHEN sub1.grouping_flg = 'A'
                             AND sub1.judge_wld_id = sub0.judge_wld_id T

Re: Right version of jdbc

2023-09-30 Thread Raivo Rebane
Thanks very much for the help.
It seems that I can now do the test project for my new job.
In the future I will use the help of ChatGPT.

Regards
Raivo

On Sat, Sep 30, 2023 at 11:15 AM  wrote:

> [snip]


Re: Right version of jdbc

2023-09-30 Thread Dave Cramer
Again, can you please post the solution so others can learn as well?

Dave Cramer
www.postgres.rocks


On Sat, 30 Sept 2023 at 06:49, Raivo Rebane  wrote:

> Thanks very much for the help.
> It seems that I can now do the test project for my new job.
> In the future I will use the help of ChatGPT.
>
> Regards
> Raivo
>
> On Sat, Sep 30, 2023 at 11:15 AM  wrote:
>
>> [snip]


Re: Right version of jdbc

2023-09-30 Thread Raivo Rebane
I have put it on GitHub in the old place:
https://github.com/raaivore/APIexperiment.git

Raivo

On Sat, Sep 30, 2023 at 1:53 PM Dave Cramer 
wrote:

> Again, can you please post the solution so others can learn as well?
>
> Dave Cramer
> www.postgres.rocks
>
>
> On Sat, 30 Sept 2023 at 06:49, Raivo Rebane  wrote:
>
>> Thanks very much for the help.
>> It seems that I can now do the test project for my new job.
>> In the future I will use the help of ChatGPT.
>>
>> Regards
>> Raivo
>>
>> On Sat, Sep 30, 2023 at 11:15 AM  wrote:
>>
>>> [snip]


Re: cache lookup failed for function 0

2023-09-30 Thread pf
Hi Adrian,

On Fri, 29 Sep 2023 14:27:48 -0700 Adrian Klaver wrote:

>On 9/29/23 1:37 PM, p...@pfortin.com wrote:
>> Hi,
>>
>> select version();
>> PostgreSQL 15.4 on x86_64-mageia-linux-gnu,
>> compiled by gcc (Mageia 12.3.0-3.mga9) 12.3.0, 64-bit
>>
>> As a test, rather than use INSERT, I recently wrote a python test script
>> to import some 8M & 33M record files with COPY instead.  These worked with
>> last weekend's data dump.  Next, I wanted to look into importing a subset
>> of columns using the below logic; but I'm getting "ERROR: cache lookup
>> failed for function 0". Re-running the same full imports that worked
>> Saturday, I now get the same error.
>>
>> Could something in the DB cause this "function" error?
>>
>>
>> Simplified statements; just trying to import a subset of columns:
>>
>> DROP TABLE IF EXISTS t;
>>
>> CREATE TABLE IF NOT EXISTS t (
>> f1 text, f2 text, f3 text, f4 text, f5 text );
>>
>> COPY t ( -- import only a subset of columns  
>
>I'm going to say it is the
>
>( -- import only a subset of columns
>
>I suspect the -- comment is the issue.

I wish it was that easy. It's not in the code; I added that as a
clarification within the email only.

>You need to show the actual Python code for a more complete answer.

Attached...

Source file (44,530 records):
https://s3.amazonaws.com/dl.ncsbe.gov/data/ncvoter53.zip
Attached is a 10 row (incl. header) sample file.

$ dbcopy_voter2 /tmp/ncvoter53_10.txt test ncv53 


>> f1, f3, f5 ) FROM '/tmp/foo.txt'
>> WITH (FORMAT CSV, HEADER, DELIMITER ' ', ENCODING 'ISO-8859-1') ;
>>
>> ERROR: cache lookup failed for function 0
>>Where: COPY t, line 1
>> 1 statement failed.
>>
>> "function"?  Is this referring to an implied/internal function? Searching
>> has not provided any clue, yet...
>>
>> There are no user functions in the database:
>> postgres=# \df
>> List of functions
>>  Schema | Name | Result data type | Argument data types | Type
>> --------+------+------------------+---------------------+------
>> (0 rows)
>>
>> It feels like something changed since the previously working script no
>> longer works...
>>
>> Clues?
>>
>> Thanks,
>> Pierre
>>
>>
>>  
>
>




Re: cache lookup failed for function 0

2023-09-30 Thread Pierre Fortin
Ignore this message; I wondered where it went to -- Looks like I
accidentally hit Ctrl+Enter; just did that to another...  SIGH...

On Sat, 30 Sep 2023 09:30:08 -0400 p...@pfortin.com wrote:

>On Fri, 29 Sep 2023 14:27:48 -0700 Adrian Klaver wrote:




Re: cache lookup failed for function 0

2023-09-30 Thread Adrian Klaver

On 9/30/23 07:01, p...@pfortin.com wrote:

Hi Adrian, On Fri, 29 Sep 2023 14:27:48 -0700 Adrian Klaver wrote:


On 9/29/23 1:37 PM, p...@pfortin.com wrote:

Hi,



I'm going to say it is the

( -- import only a subset of columns

I suspect the -- comment is the issue.


I wish it was that easy. It's not in the code; I added that as a
clarification within the email only.


You need to show the actual Python code for a more complete answer.


Attached...

Source file (44,530 records):
https://s3.amazonaws.com/dl.ncsbe.gov/data/ncvoter53.zip
Attached is a 10 row (incl. header) sample file.


I am not seeing the sample file.



$ dbcopy_voter2 /tmp/ncvoter53_10.txt test ncv53 


Not sure what the above is supposed to be doing?

What I was looking for is the Python code snippet where you actually run 
the COPY.


Also, per Tom's post: is there any information on whether there are extensions
installed or an event trigger running?






f1, f3, f5 ) FROM '/tmp/foo.txt'



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Gradual migration from integer to bigint?

2023-09-30 Thread grimy . outshine830
On Sat, Sep 30, 2023 at 03:55:20PM +1000, James Healy wrote:

> It did make me curious though: would it be possible for postgres to
> support gradual migration from integer to bigint in a more
> transparent way, where new and updated tuples are written as bigint,
> but existing tuples can be read as integer?

Language police: this is the *opposite* of "transparent". "Transparent"
and "automated" are not synonyms.

-- 
Ian




Re: cache lookup failed for function 0

2023-09-30 Thread pf
On Fri, 29 Sep 2023 18:21:02 -0400 Tom Lane wrote:

>p...@pfortin.com writes:
>> As a test, rather than use INSERT, I recently wrote a python test script
>> to import some 8M & 33M record files with COPY instead.  These worked with
>> last weekend's data dump.  Next, I wanted to look into importing a subset
>> of columns using the below logic; but I'm getting "ERROR: cache lookup
>> failed for function 0". Re-running the same full imports that worked
>> Saturday, I now get the same error.   
>
>> Could something in the DB cause this "function" error?  
>
>"cache lookup failed" certainly smells like a server internal error,
>but we'd have heard about it if the trivial case you show could reach
>such a problem.  I'm thinking there's things you haven't told us.
>What extensions do you have installed?  Maybe an event trigger?

I have one production DB with fuzzystrmatch installed; but it's not in
any other DB. I'm trying to import into a test DB, and not yet at the
point of understanding or using triggers. This is a very simple setup,
other than the volume of data. The production DB has many tables, mostly
in the range of 8M-33M rows.  

>Also, the reference to ENCODING 'ISO-8859-1' makes me wonder what
>encoding conversion is being performed.

The source files are mostly UTF-8; some files have the 1/2 (0xbd)
character in street addresses, hence the ISO...

>   regards, tom lane

Thanks,
Pierre




Re: cache lookup failed for function 0

2023-09-30 Thread pf
On Sat, 30 Sep 2023 08:50:45 -0700 Adrian Klaver wrote:

>On 9/30/23 07:01, p...@pfortin.com wrote:
>> Hi Adrian,On Fri, 29 Sep 2023 14:27:48 -0700 Adrian Klaver wrote:
>>   
>>> On 9/29/23 1:37 PM, p...@pfortin.com wrote:  
 Hi,  
>
>>> I'm going to say it is the
>>>
>>> ( -- import only a subset of columns
>>>
>>> I suspect the -- comment is the issue.  
>> 
>> I wish it was that easy. It's not in the code; I added that as a
>> clarification within the email only.
>>   
>>> You need to show the actual Python code for a more complete answer.  
>> 
>> Attached...
>> 
>> Source file (44,530 records):
>> https://s3.amazonaws.com/dl.ncsbe.gov/data/ncvoter53.zip
>> Attached is a 10 row (incl. header) sample file.  
>
>I am not seeing the sample file.

Sorry; I think I hit CTRL+Enter which sent that message before I was done.
Then, I was outside with the HVAC guy repairing my heat pump... 

Python script and sample file attached...

Invoke it with:  "dbcopy_voter2"
e.g.:
>> $ dbcopy_voter2 /tmp/ncvoter53_10.txt test ncv53   
>
>Not sure what the above is supposed to be doing?

Importing the data via a COPY.

>What I was looking for is the Python code snippet where you actually run 
>the COPY.

Attached this time...  Again, sorry about the incomplete message earlier.

>Also per Tom's post information on whether there are extensions 
>installed or if there is an event trigger running?

Replied to Tom.

Thanks,
Pierre


dbcopy_voter2
Description: Binary data
"county_id" "county_desc"   "voter_reg_num" "ncid"  "last_name" 
"first_name""middle_name"   "name_suffix_lbl"   "status_cd" 
"voter_status_desc" "reason_cd" "voter_status_reason_desc"  
"res_street_address""res_city_desc" "state_cd"  "zip_code"  
"mail_addr1""mail_addr2""mail_addr3""mail_addr4""mail_city" 
"mail_state""mail_zipcode"  "full_phone_number" "confidential_ind"  
"registr_dt""race_code" "ethnic_code"   "party_cd"  "gender_code"   
"birth_year""age_at_year_end"   "birth_state"   "drivers_lic"   
"precinct_abbrv""precinct_desc" "municipality_abbrv"
"municipality_desc" "ward_abbrv""ward_desc" "cong_dist_abbrv"   
"super_court_abbrv" "judic_dist_abbrv"  "nc_senate_abbrv"   
"nc_house_abbrv""county_commiss_abbrv"  "county_commiss_desc"   
"township_abbrv""township_desc" "school_dist_abbrv" 
"school_dist_desc"  "fire_dist_abbrv"   "fire_dist_desc"
"water_dist_abbrv"  "water_dist_desc"   "sewer_dist_abbrv"  
"sewer_dist_desc"   "sanit_dist_abbrv"  "sanit_dist_desc"   
"rescue_dist_abbrv" "rescue_dist_desc"  "munic!
 _dist_abbrv"   "munic_dist_desc"   "dist_1_abbrv"  "dist_1_desc"   
"vtd_abbrv" "vtd_desc"
"53""LEE"   "30002652"  "CL31767"   "AARON" "BARBARA"   "M" 
" " "R" "REMOVED"   "RD""DECEASED"  "REMOVED"   ""  
""  ""  " " " " " " " " " " " " " " ""  
"N" "10/02/2000""B" "NL""DEM"   "F" "1953"  "70""MD"
"N" ""  ""  ""  ""  ""  ""  ""  ""  ""  
""  ""  ""  ""  ""  ""  ""  ""  ""  ""  
""  ""  ""  ""  ""  ""  ""  ""  ""  ""  
""  ""  ""  ""
"53""LEE"   "30036973"  "CL60219"   "AAZAMI""KIARASH"   
""  ""  "I" "INACTIVE"  "IU""CONFIRMATION RETURNED 
UNDELIVERABLE"   "2111  WIMBERLY WOODS DR   ""SANFORD"   "NC"
"27330" "2111 WIMBERLY WOODS DR"""  ""  ""  "SANFORD"   
"NC""27330" "8189153666""N" "09/14/2016""O" "UN""UNA"   
"M" "1972"  "51"""  "Y" "D1""PRECINCT D1"   "01"
"SANFORD"   "1" "WARD #1"   "09""11A"   "11""12""051"   
"4" "COMMISSION 4"  "1" "WEST SANFORD"  ""  ""  ""  ""  
""  ""  ""  ""  ""  ""  ""  ""  "01"
"SANFORD"   "12""12TH PROSECUTORIAL""D" "D"
"53""LEE"   "30036974"  "CL60220"   "AAZAMI""LINDSAY"   
"LYNETTE"   " " "R" "REMOVED"   "RM""REMOVED AFTER 2 FED 
GENERAL ELECTIONS IN INACTIVE STATUS"  "REMOVED"   ""  ""  ""   
   " " " " " " " " " " " " " " ""  "N" 
"09/14/2016""W" "UN""REP"   "F" "1984"  "39"""  "Y" 
""  ""  ""  ""  ""  ""  ""  ""  ""  ""  
""  ""  ""  ""  ""  ""  ""  ""  ""  ""  
""  ""  ""  ""  ""  ""  ""  ""  ""  ""  
""  ""  ""
"53""LEE"   "30002228"  "CL31343"   "ABARCA""HUGO"  "J" 
"JR""I" "INACTIVE"  "IN""CONFIRMATION NOT RETURNED" "809  
AMHERS

Re: cache lookup failed for function 0

2023-09-30 Thread Tom Lane
p...@pfortin.com writes:
> Python script and sample file attached...

This runs fine for me, both in HEAD and 15.4.

(Well, it fails at the last GRANT, seemingly because you wrote
"{table}" not "{TABLE}".  But the COPY goes through fine.)

Assuming that you verified that this specific test case fails
for you, we're down to just a couple of theories:

1. There's some relevant info you didn't supply yet (is it
really a completely plain-vanilla installation?  Maybe some
odd choice of locale?)

2. There's something wrong with your Postgres installation.

It's hard to get much further than that with the info that
we have.

regards, tom lane




Re: Gradual migration from integer to bigint?

2023-09-30 Thread Bruce Momjian
On Sat, Sep 30, 2023 at 03:55:20PM +1000, James Healy wrote:
> My organization has a number of very large tables (most 100s of GB, a
> couple over a Tb) that were created many years ago by a tool that
> defaulted to integer PKs rather than bigint. Those PKs have a number
> of integer FKs in related tables as well. We shouldn't have let them
> get so big, but that's a conversation for another day.
> 
> Some are approaching overflow and we're slowly doing the work to
> migrate to bigint. Mostly via the well understood "add a new id_bigint
> column, populate on new tuples, backfill the old, switch the PK"
> method. The backfill is slow on these large tables, but it works and
> there's plenty of blog posts and documentation to follow.
> 
> It did make me curious though: would it be possible for postgres to
> support gradual migration from integer to bigint in a more transparent
> way, where new and updated tuples are written as bigint, but existing
> tuples can be read as integer?
> 
> I assume maybe a complication is that the catalog says the column is
> either 32bit int or 64bit bigint and making that conditional is hard.
> There's presumably other considerations I'm unaware of too. My core
> question: are there significant technical blockers to supporting this
> kind of gradual in place migration, or has it just not been enough of
> a problem that it's received attention?

I think this talk will help you:

https://www.youtube.com/watch?v=XYRgTazYuZ4

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  Only you can decide what is important to you.




Re: cache lookup failed for function 0

2023-09-30 Thread pf
On Sat, 30 Sep 2023 13:20:14 -0400 Tom Lane wrote:

>p...@pfortin.com writes:
>> Python script and sample file attached...  
>
>This runs fine for me, both in HEAD and 15.4.
>
>(Well, it fails at the last GRANT, seemingly because you wrote
>"{table}" not "{TABLE}".  But the COPY goes through fine.)

Sorry, I ripped out some other test code which would have been
unnecessarily confusing; but couldn't check the remaining bits due to the
prior failure...

>Assuming that you verified that this specific test case fails
>for you, we're down to just a couple of theories:
>
>1. There's some relevant info you didn't supply yet (is it
>really a completely plain-vanilla installation?  Maybe some
>odd choice of locale?)

As vanilla as it gets... Standard locale (C). The only odd thing that
happened: a system update the other day installed and started something
called tracker-miners which I was not happy with:
https://bugs.mageia.org/show_bug.cgi?id=32340
I was quite upset that the distro guys would install/run what they thought
could be a "cool" tool (as you'll see in the above link).  

>2. There's something wrong with your Postgres installation.

Scary; but that's what I'm suspecting too now...

>It's hard to get much further than that with the info that
>we have.

Understood; I wasn't sure which rabbit hole to go down; but this is
starting to smell like a DB issue...  I'll see about re-installing the
server and pray the DB is sane...  If the above tracker-miners found my
postgres installation and added something to it; then I will be EXTREMELY
upset...  

Thanks for confirming my simple SQL runs...

>   regards, tom lane

Thanks Tom!




Re: cache lookup failed for function 0

2023-09-30 Thread Adrian Klaver

On 9/30/23 11:32, p...@pfortin.com wrote:

On Sat, 30 Sep 2023 13:20:14 -0400 Tom Lane wrote:




As vanilla as it gets... Standard locale (C). The only odd thing that
happened: a system update the other day installed and started something
called tracker-miners which I was not happy with:
https://bugs.mageia.org/show_bug.cgi?id=32340
I was quite upset that the distro guys would install/run what they thought
could be a "cool" tool (as you'll see in the above link).


2. There's something wrong with your Postgres installation.


Scary; but that's what I'm suspecting too now...


The script ran on my installation also. It would seem it is something on 
your end. Have there been any issues with your machine and/or database, 
e.g. crashes or other odd behavior (not counting the tracker-miners update)?


FYI, since you are using psycopg2 you might want to use its built-in 
tools for:


   a) Dynamic SQL
https://www.psycopg.org/docs/sql.html

   b) COPY
https://www.psycopg.org/docs/usage.html#using-copy-to-and-copy-from



Understood; I wasn't sure which rabbit hole to go down; but this is
starting to smell like a DB issue...  I'll see about re-installing the
server and pray the DB is sane...  If the above tracker-miners found my
postgres installation and added something to it; then I will be EXTREMELY
upset...


From here:

https://github.com/GNOME/tracker-miners

it looks like the software use SQLite for data storage.



Thanks for confirming my simple SQL runs...


regards, tom lane


Thanks Tom!




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: cache lookup failed for function 0

2023-09-30 Thread pf
On Sat, 30 Sep 2023 13:40:37 -0700 Adrian Klaver wrote:

>On 9/30/23 11:32, p...@pfortin.com wrote:
>> On Sat, 30 Sep 2023 13:20:14 -0400 Tom Lane wrote:
>>   
>
>> As vanilla as it gets... Standard locale (C). The only odd thing that
>> happened: a system update the other day installed and started something
>> called tracker-miners which I was not happy with:
>> https://bugs.mageia.org/show_bug.cgi?id=32340
>> I was quite upset that the distro guys would install/run what they thought
>> could be a "cool" tool (as you'll see in the above link).
>>   
>>> 2. There's something wrong with your Postgres installation.  
>> 
>> Scary; but that's what I'm suspecting too now...  
>
>The script ran on my installation also. It would seem it is something on 
>your end. Has there been any issues with your machine and/or database 
>e.g. crashes or other odd behavior(not counting the tracker-miners update)?

As I told Tom, the "test" DB has this issue; the production and test1 DBs
are fine; I should have thought to check those first... Sorry for the
noise.

>FYI, since you are using psycopg2 you might want to use the built in 
>tools for:
>
>a) Dynamic SQL
>   https://www.psycopg.org/docs/sql.html
>
>b) COPY
>   https://www.psycopg.org/docs/usage.html#using-copy-to-and-copy-from

Thanks for these links... They look very interesting.  Someone on this
list suggested Practical SQL by Anthony DeBarros which we're working
through...  He should see sales from all over the US now  :)

We were working with a python script (using INSERT) someone wrote a couple
years ago; with my version using COPY[1], imports have gone from 2+ hours
to under a minute each for 8M and 33M row files.  If the above links help
even more that will be amazing!!  THANK YOU!!

[1] the script I sent is just a test script I threw together to check out
COPY features -- only started; but well impressed.


>> Understood; I wasn't sure which rabbit hole to go down; but this is
>> starting to smell like a DB issue...  I'll see about re-installing the
>> server and pray the DB is sane...  If the above tracker-miners found my
>> postgres installation and added something to it; then I will be EXTREMELY
>> upset...  
>
> From here:
>
>https://github.com/GNOME/tracker-miners
>
>it looks like the software use SQLite for data storage.
>
>> 
>> Thanks for confirming my simple SQL runs...
>>   
>>> regards, tom lane  
>> 
>> Thanks Tom!
>> 
>>   
>




Re: cache lookup failed for function 0

2023-09-30 Thread Adrian Klaver

On 9/30/23 14:54, p...@pfortin.com wrote:

On Sat, 30 Sep 2023 13:40:37 -0700 Adrian Klaver wrote:


On 9/30/23 11:32, p...@pfortin.com wrote:




As I told Tom, the "test" DB has this issue; the production and test1 DBs
are fine; I should have thought to check those first... Sorry for the
noise.


Still there was an issue with a database. Did you track down what was 
wrong with "test"?




We were working with a python script (using INSERT) someone wrote a couple
years ago; with my version using COPY[1], imports have gone from 2+ hours
to under a minute each for 8M and 33M row files.  If the above links help
even more that will be amazing!!  THANK YOU!!


The move to COPY will have accounted for the speedup. Using the 
psycopg2 COPY methods probably won't make a material difference; I just 
find them easier to use. Also, in the new psycopg(3) there are more ways 
to use them, per:


https://www.psycopg.org/psycopg3/docs/basic/copy.html.




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: cache lookup failed for function 0

2023-09-30 Thread Tom Lane
Adrian Klaver  writes:
> On 9/30/23 14:54, p...@pfortin.com wrote:
>> As I told Tom, the "test" DB has this issue; the production and test1 DBs
>> are fine; I should have thought to check those first... Sorry for the
>> noise.

> Still there was an issue with a database. Did you track down what was 
> wrong with "test"?

Yeah, it'd be interesting to try to figure out what's wrong.
One idea is to pg_dump both test and test1 and compare the dump
scripts, assuming you expect identical contents in both.

(If you expect not-quite-identical contents, I think there's
a DB comparison tool that's been recommended around here.)

regards, tom lane




Re: Gradual migration from integer to bigint?

2023-09-30 Thread James Healy
On Sun, 1 Oct 2023 at 04:35, Bruce Momjian  wrote:
> I think this talk will help you:
>
> https://www.youtube.com/watch?v=XYRgTazYuZ4

Thanks, I hadn't seen that talk and it's a good summary of the issue
and available solutions.

However it doesn't really address the question of a gradual migration
process that can read 32bit ints but insert/update as 64bit bigints. I
remain curious about whether the postgres architecture just makes that
implausible, or if it could be done and just hasn't because the
options for a more manual migration are Good Enough.

James




Re: Gradual migration from integer to bigint?

2023-09-30 Thread Tom Lane
James Healy  writes:
> However it doesn't really address the question of a gradual migration
> process that can read 32bit ints but insert/update as 64bit bigints. I
> remain curious about whether the postgres architecture just makes that
> implausible, or if it could be done and just hasn't because the
> options for a more manual migration are Good Enough.

I think what you're asking for is a scheme whereby some rows in a
table have datatype X in a particular column while other rows in
the very same physical table have datatype Y in the same column.
That is not happening, because there'd be no way to tell which
case applies to any particular row.

You could fantasize about labeling individual rows somehow, but
it's mere fantasy because there's noplace to put such labels.
To the limited extent that we can find spare space in the
existing page layout, there are far better use-cases (see
nearby discussions about 64-bit XIDs, for example).  And nobody
is going to advocate breaking on-disk compatibility for this,
especially not a break that adds more per-row overhead.

So really the only way forward for this would be to provide more
automation for the existing conversion processes involving table
rewrites.  That's possible perhaps, but it doesn't really sound
compelling enough to justify a lot of work.

regards, tom lane




Re: Gradual migration from integer to bigint?

2023-09-30 Thread Ron

On 9/30/23 22:37, Tom Lane wrote:
[snip]

especially not a break that adds more per-row overhead.

So really the only way forward for this would be to provide more
automation for the existing conversion processes involving table
rewrites.


When altering an unindexed INT to BIGINT, do all of the indices get rewritten?

--
Born in Arizona, moved to Babylonia.




Re: cache lookup failed for function 0

2023-09-30 Thread pf


Hi Adrian & Tom,

On Sat, 30 Sep 2023 15:57:32 -0700 Adrian Klaver wrote:

>On 9/30/23 14:54, p...@pfortin.com wrote:
>> On Sat, 30 Sep 2023 13:40:37 -0700 Adrian Klaver wrote:
>>   
>>> On 9/30/23 11:32, p...@pfortin.com wrote:  
>
>> 
>> As I told Tom, the "test" DB has this issue; the production and test1 DBs
>> are fine; I should have thought to check those first... Sorry for the
>> noise.  
>
>Still there was an issue with a database. Did you track down what was 
>wrong with "test"?
Background:  it's one of 18 databases on:
/dev/nvme1n1p1  3.6T  1.3T  2.2T  38% /mnt/work

Tried using a never before used table name; same error.  I wondered if an
old SEQUENCE might be the cause; nope.

OK.. after:
* back up a table 
* try import -- fails
* drop backed-up table
rinse and repeat until the test database is empty..
Still fails.

Backed up the empty db (attached).

I see the string "SET client_encoding = 'WIN1252';" in the dump -- some
files come from a remote colleague; but this has never been an issue
before...

Next, I was going to connect to test1 with SQL-workbench/J when I saw
this: "Cannot invoke "workbench.storage.RowData.getValue(int)" because
"" is null" on the SQL Source tab.

Emptied test1. Imported the table just fine. Dropped it. Dumped the db
(attached) which is 2 bytes smaller.  "test1" appears 4 times, so this
dump should be 2 bytes larger, not smaller; it also says 'UTF8' instead
of 'WIN1252'.   Lots of other diffs...

Hope there's something in there you guys can make sense of...



Regards,
Pierre

>> We were working with a python script (using INSERT) someone wrote a couple
>> years ago; with my version using COPY[1], imports have gone from 2+ hours
>> to under a minute each for 8M and 33M row files.  If the above links help
>> even more that will be amazing!!  THANK YOU!!  
>
>The move to COPY will have accounted for the speed up. Using the 
>psycopg2 COPY methods probably won't make a material difference, I just 
>find them easier to use. Also in the new psycopg(3) there are more ways 
>to use them per:
>
>   https://www.psycopg.org/psycopg3/docs/basic/copy.html.
>
>
>
>


test_empty.backup
Description: Binary data


test1_empty.backup
Description: Binary data