Re: pg_upgrade Python version issue on openSUSE

2020-09-28 Thread Paul Förster
Hi Adrian,

> On 27. Sep, 2020, at 19:30, Adrian Klaver  wrote:
> 
> Does:
> 
> SELECT
>    lanname, proname, probin
> FROM
>    pg_proc
> JOIN
>    pg_language
> ON
>    pg_language.oid = pg_proc.prolang
> WHERE
>    pg_language.lanname = 'plpythonu'
> AND
>    probin IS NOT NULL;
> 
> show anything? This would need to be repeated for each database in the cluster.

nope, nothing on any database, not even on template1. template0 does not allow 
connections.

Cheers,
Paul



Question about using ICU

2020-09-28 Thread Paul Förster
Hi,

I have a general question about the use of ICU. Currently, we have PostgreSQL 
compiled from source (Linux) without ICU support. All database clusters and 
databases are UTF8 and, of course, rely on glibc.

With the glibc 2.28 upgrade coming sooner or later, a big reindex operation 
will probably be necessary. To avoid that, I'd like to use ICU.

Compiling --with-icu into the new software, is there a way to make a whole 
database cluster and all its databases use ICU collations without having to 
manually change all tables or indexes?

Cheers,
Paul



Re: Question about using ICU

2020-09-28 Thread Laurenz Albe
On Mon, 2020-09-28 at 10:02 +0200, Paul Förster wrote:
> I have a general question about the use of ICU. Currently, we have PostgreSQL
> compiled from source (Linux) without ICU support. All database clusters and
> databases are UTF8 and, of course, rely on glibc.
> 
> With the glibc 2.28 upgrade coming sooner or later, a big reindex operation
> will probably be necessary. To avoid that, I'd like to use ICU.
> 
> Compiling --with-icu into the new software, is there a way to make a whole
> database cluster and all its databases use ICU collations without having to
> manually change all tables or indexes?

There have been efforts to add this functionality:
https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
but it didn't get done.

Your best bet is to manually change the definition of all columns to use the 
new collation.
psql's \gexec may help.
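
For example, a \gexec-driven sketch (not from the thread itself: the ICU collation name "und-x-icu" and the column filter are illustrative assumptions) that generates one ALTER per collatable column in user tables:

```sql
-- Generate ALTER TABLE ... ALTER COLUMN ... COLLATE statements for every
-- collatable column outside the system schemas, then execute them via \gexec.
-- "und-x-icu" is an example ICU collation; substitute the one you need.
SELECT format(
         'ALTER TABLE %I.%I ALTER COLUMN %I TYPE %s COLLATE "und-x-icu";',
         n.nspname, c.relname, a.attname,
         format_type(a.atttypid, a.atttypmod)
       )
FROM pg_attribute a
JOIN pg_class c     ON c.oid = a.attrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND a.attcollation <> 0
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
\gexec
```

Each generated ALTER rewrites the column and rebuilds the affected indexes, so this changes the definitions but does not avoid the rebuild work.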

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Returning timestamp with timezone at specified timezone irrespective of client timezone

2020-09-28 Thread Laurenz Albe
On Sun, 2020-09-27 at 17:16 -0400, aNullValue (Drew Stemen) wrote:
> I've attempted to obtain help with this problem from several other places,
> but numerous individuals recommended I ask this mailing list.
> 
> What I need is the ability to return a timestamp with time zone, using the
> UTC offset that corresponds to a column-defined time zone, irrespective of
> the client/session-configured time zone.

Try a function like this:

CREATE FUNCTION format_timestamp(
   ts timestamp with time zone,
   time_zone text
) RETURNS text
   LANGUAGE plpgsql IMMUTABLE STRICT AS
$$DECLARE
   tz text;
   result text;
BEGIN
   tz := current_setting('timezone');
   PERFORM set_config('timezone', time_zone, TRUE);
   result := ts::text;  -- timestamptz rendered using the TimeZone set above
   PERFORM set_config('timezone', tz, TRUE);
   RETURN result;
END;$$;

SELECT format_timestamp(current_timestamp, '+08');

       format_timestamp
-------------------------------
 2020-09-28 17:15:25.083677+08
(1 row)

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Question about using ICU

2020-09-28 Thread Paul Förster
Hi Laurenz,

> On 28. Sep, 2020, at 11:04, Laurenz Albe  wrote:
> 
> There have been efforts to add this functionality:
> https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
> but it didn't get done.

seems to be a rather complex thing according to the thread.

> Your best bet is to manually change the definition of all columns to use the 
> new collation.
> psql's \gexec may help.

but then I'd have to do a reindex anyway, right? My goal was to avoid the 
reindex altogether, if possible.

Cheers,
Paul



how to switch off: WARNING: psql major version 11, server major version 13

2020-09-28 Thread Matthias Apitz


Hello,

Maybe it's a FAQ, but I haven't seen the answer. I want to switch off the
warning (because I'm aware of the version mismatch):

$ psql -Usisis -dsisis
SET
psql (11.4, server 13.0)
WARNING: psql major version 11, server major version 13.
 Some psql features might not work.
Type "help" for help.

sisis=#

I tried to do it with the ~/.psqlrc file:

$ cat ~/.psqlrc
SET client_min_messages = 'error'

The command gets executed (as the 'SET' shows), but does not silence the
warning.

Thanks

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
Без книги нет знания, без знания нет коммунизма (Влaдимир Ильич Ленин)
Without books no knowledge - without knowledge no communism (Vladimir Ilyich 
Lenin)
Sin libros no hay saber - sin saber no hay comunismo. (Vladimir Ilich Lenin)




Re: how to switch off: WARNING: psql major version 11, server major version 13

2020-09-28 Thread Paul Förster
Hi Matthias,

> On 28. Sep, 2020, at 12:06, Matthias Apitz  wrote:
> 
> 
> Hello,
> 
> Maybe it's a FAQ, but I haven't seen the answer. I want to switch off the
> warning (because I'm aware of the version mismatch):
> 
> $ psql -Usisis -dsisis
> SET
> psql (11.4, server 13.0)
> WARNING: psql major version 11, server major version 13.
> Some psql features might not work.
> Type "help" for help.
> 
> sisis=#
> 
> I tried to do it with the ~/.psqlrc file:
> 
> $ cat ~/.psqlrc
> SET client_min_messages = 'error'
> 
> The command gets executed (as the 'SET' shows), but does not silence the
> warning.

try the -q switch to psql:

$ psql -q

That should do.

Cheers,
Paul




Re: Question about using ICU

2020-09-28 Thread Laurenz Albe
On Mon, 2020-09-28 at 11:45 +0200, Paul Förster wrote:
> > Your best bet is to manually change the definition of all columns to use 
> > the new collation.
> > psql's \gexec may help.
> 
> but then I'd have to do a reindex anyway, right? My goal was to avoid the 
> reindex altogether, if possible.

That couldn't be avoided anyway if you change the collation no matter
if you do it on the database or on the column level.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: how to switch off: WARNING: psql major version 11, server major version 13

2020-09-28 Thread Matthias Apitz
On Monday, September 28, 2020 at 12:17:26 PM +0200, Paul Förster wrote:

> > $ psql -Usisis -dsisis
> > SET
> > psql (11.4, server 13.0)
> > WARNING: psql major version 11, server major version 13.
> > Some psql features might not work.
> > Type "help" for help.
> > 
> > sisis=#
> > 
> 
> try the -q switch to psql:
> 
> $ psql -q
> 
> That should do.

Yes, I know this flag. But it also removes the lines

psql (11.4, server 13.0)
Type "help" for help.

I only want to remove the WARNING lines.

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
Без книги нет знания, без знания нет коммунизма (Влaдимир Ильич Ленин)
Without books no knowledge - without knowledge no communism (Vladimir Ilyich 
Lenin)
Sin libros no hay saber - sin saber no hay comunismo. (Vladimir Ilich Lenin)




Re: how to switch off: WARNING: psql major version 11, server major version 13

2020-09-28 Thread Ian Barwick

On 2020/09/28 19:06, Matthias Apitz wrote:


> Hello,
> 
> Maybe it's a FAQ, but I haven't seen the answer. I want to switch off the
> warning (because I'm aware of the version mismatch):
> 
> $ psql -Usisis -dsisis
> SET
> psql (11.4, server 13.0)
> WARNING: psql major version 11, server major version 13.
>   Some psql features might not work.
> Type "help" for help.
> 
> sisis=#
> 
> I tried to do it with the ~/.psqlrc file:
> 
> $ cat ~/.psqlrc
> SET client_min_messages = 'error'
> 
> The command gets executed (as the 'SET' shows), but does not silence the
> warning.


This warning is generated as a printf() statement by psql itself and is
not affected by "client_min_messages" (which sets the level of error
messages to be returned from the server).

Looking at the source, there's no other way (apart from the -q/--quiet
option) to suppress this warning.
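
If filtering outside psql is acceptable, the warning (which psql prints to stderr) could be stripped in the shell. A sketch, with the grep pattern matched against the wording quoted above; the psql banner is simulated with printf here so the filter itself can be seen working:

```shell
# Strip only the version-mismatch warning lines from psql's banner.
# In practice, since psql writes the warning to stderr, it would be
# filtered with e.g.:  psql -Usisis -dsisis 2> >(grep -vE "$pattern" >&2)
pattern='^(WARNING: psql major version| *Some psql features)'
printf '%s\n' \
  'psql (11.4, server 13.0)' \
  'WARNING: psql major version 11, server major version 13.' \
  '         Some psql features might not work.' \
  'Type "help" for help.' \
| grep -vE "$pattern"
# prints:
# psql (11.4, server 13.0)
# Type "help" for help.
```

The exact warning text may differ between psql versions, so the pattern would need checking against the installed client.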


Regards

Ian Barwick


--
Ian Barwick   https://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: how to switch off: WARNING: psql major version 11, server major version 13

2020-09-28 Thread Ireneusz Pluta/wp.pl

On 2020-09-28 13:39, Matthias Apitz wrote:

> On Monday, September 28, 2020 at 12:17:26 PM +0200, Paul Förster wrote:
> 
> > > $ psql -Usisis -dsisis
> > > SET
> > > psql (11.4, server 13.0)
> > > WARNING: psql major version 11, server major version 13.
> > > Some psql features might not work.
> > > Type "help" for help.
> > > 
> > > sisis=#
> > 
> > try the -q switch to psql:
> > 
> > $ psql -q
> > 
> > That should do.
> 
> Yes, I know this flag. But it also removes the lines
> 
> psql (11.4, server 13.0)
> Type "help" for help.
> 
> I only want to remove the WARNING lines.
> 
> matthias


Matthias,

just install the psql binary of the 13.0 release in a separate path, and use it when connecting to 
the 13.0 server.






Re: Question about using ICU

2020-09-28 Thread Paul Förster
Hi Laurenz,

> On 28. Sep, 2020, at 13:13, Laurenz Albe  wrote:
>> 
>> but then I'd have to do a reindex anyway, right? My goal was to avoid the 
>> reindex altogether, if possible.
> 
> That couldn't be avoided anyway if you change the collation no matter
> if you do it on the database or on the column level.

ok, thanks.

Cheers,
Paul



DB upgrade version compatibility

2020-09-28 Thread Brajendra Pratap Singh
Can we upgrade an open source PostgreSQL 9.6 database directly to version
12.3?

Thanks and Regards,
Singh


Re: pg_upgrade Python version issue on openSUSE

2020-09-28 Thread Adrian Klaver

On 9/28/20 12:46 AM, Paul Förster wrote:

> Hi Adrian,
> 
> > On 27. Sep, 2020, at 19:30, Adrian Klaver  wrote:
> > 
> > Does:
> > 
> > SELECT
> >    lanname, proname, probin
> > FROM
> >    pg_proc
> > JOIN
> >    pg_language
> > ON
> >    pg_language.oid = pg_proc.prolang
> > WHERE
> >    pg_language.lanname = 'plpythonu'
> > AND
> >    probin IS NOT NULL;
> > 
> > show anything? This would need to be repeated for each database in the cluster.
> 
> nope, nothing on any database, not even on template1. template0 does not allow
> connections.


Well I'm out of ideas. That means circling back to having Python 2 
installed, should the powers that be agree.




Cheers,
Paul




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_upgrade Python version issue on openSUSE

2020-09-28 Thread Paul Förster
Hi Adrian,

> On 28. Sep, 2020, at 15:34, Adrian Klaver  wrote:
> 
> Well I'm out of ideas. That means circling back to having Python 2 installed, 
> should the powers that be agree.

they don't. But fortunately, the number of applications that use Python code 
inside the database seems rather small.

One of them is pgwatch2. I will check if there's an update which then uses 
Python 3.

Thanks for helping.

Cheers,
Paul



Re: pg_upgrade Python version issue on openSUSE

2020-09-28 Thread Adrian Klaver

On 9/28/20 6:37 AM, Paul Förster wrote:

> Hi Adrian,
> 
> > On 28. Sep, 2020, at 15:34, Adrian Klaver  wrote:
> > 
> > Well I'm out of ideas. That means circling back to having Python 2 installed,
> > should the powers that be agree.
> 
> they don't. But fortunately, the number of applications that use Python code
> inside the database seems rather small.
> 
> One of them is pgwatch2. I will check if there's an update which then uses
> Python 3.


So pgwatch2 installs functions that use plpythonu?

How does that work if there is no plpython language installed?



> Thanks for helping.
> 
> Cheers,
> Paul




--
Adrian Klaver
adrian.kla...@aklaver.com




re PG 9.6x and found xmin from before relfrozenxid and removal of pg_internal.init file(s)

2020-09-28 Thread Reid Thompson
We have a planned upgrade that would permanently remedy this.

regarding the below errors on our PG 9.6.x instance.

2020-09-28 09:08:15.741 EDT,,,26212,,5f71e03f.6664,1,,2020-09-28 09:08:15 
EDT,250/9136136,0,ERROR,XX001,"found xmin 2675436435 from before relfrozenxid 
321165377","automatic vacuum of table 
""postgres.pg_catalog.pg_authid"""""
2020-09-28 09:08:15.751 EDT,,,26212,,5f71e03f.6664,2,,2020-09-28 09:08:15 
EDT,250/9136138,0,ERROR,XX001,"found xmin 2675019557 from before relfrozenxid 
321165377","automatic vacuum of table 
""postgres.pg_catalog.pg_auth_members"""""

I know that a db restart will reset the 'counter' (reset the
pg_internal.init files??)  on this issue.  We would rather avoid a
restart if possible. Some research on the internet found a page
suggesting that removal of pg_internal.init file(s) on a running system
would provide the same temporary resolution as a restart???   Is this a
valid temporary work around for this issue?  If it is, is there a
particular pg_internal.init file that needs to be removed, or just
remove all pg_internal.init that can be found?


Thanks,
reid



Re: pg_upgrade Python version issue on openSUSE

2020-09-28 Thread Paul Förster
Hi Adrian,

> On 28. Sep, 2020, at 16:03, Adrian Klaver  wrote:
> 
> So pgwatch2 installs functions that use plpythonu?
> How does that work if there is no plpython language installed?

at work, the extension is installed everywhere. But it seems we have an old 
version. The current version seems to use plpython3u.

https://github.com/cybertec-postgresql/pgwatch2

Or more specifically:
https://github.com/cybertec-postgresql/pgwatch2#integration-of-os-level-metrics

At least, at a very quick first look, this one is updated and hence off the 
list to check.

Ok, but this is going to be off-topic.

Cheers,
Paul



Re: pg_upgrade Python version issue on openSUSE

2020-09-28 Thread Adrian Klaver

On 9/28/20 7:22 AM, Paul Förster wrote:

> Hi Adrian,
> 
> > On 28. Sep, 2020, at 16:03, Adrian Klaver  wrote:
> > 
> > So pgwatch2 installs functions that use plpythonu?
> > How does that work if there is no plpython language installed?
> 
> at work, the extension is installed everywhere. But it seems we have an old
> version. The current version seems to use plpython3u.
> 
> https://github.com/cybertec-postgresql/pgwatch2
> 
> Or more specifically:
> https://github.com/cybertec-postgresql/pgwatch2#integration-of-os-level-metrics
> 
> At least, at a very quick first look, this one is updated and hence off the
> list to check.
> 
> Ok, but this is going to be off-topic.


Not necessarily, if it is installing plpythonu functions.



> Cheers,
> Paul




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_upgrade Python version issue on openSUSE

2020-09-28 Thread Paul Förster
Hi Adrian,

> On 28. Sep, 2020, at 16:30, Adrian Klaver  wrote:
> Not necessarily, if it is installing plpythonu functions.

I'll have to check that anyway. I'm already logged out of work, so I won't do 
that now. 😇

Cheers,
Paul



Re: re PG 9.6x and found xmin from before relfrozenxid and removal of pg_internal.init file(s)

2020-09-28 Thread Tom Lane
Reid Thompson  writes:
> We have a planned upgrade that would permanently remedy this.
> regarding the below errors on our PG 9.6.x instance.

> 2020-09-28 09:08:15.741 EDT,,,26212,,5f71e03f.6664,1,,2020-09-28 09:08:15 
> EDT,250/9136136,0,ERROR,XX001,"found xmin 2675436435 from before relfrozenxid 
> 321165377","automatic vacuum of table 
> ""postgres.pg_catalog.pg_authid"""""
> 2020-09-28 09:08:15.751 EDT,,,26212,,5f71e03f.6664,2,,2020-09-28 09:08:15 
> EDT,250/9136138,0,ERROR,XX001,"found xmin 2675019557 from before relfrozenxid 
> 321165377","automatic vacuum of table 
> ""postgres.pg_catalog.pg_auth_members"""""

> I know that a db restart will reset the 'counter' (reset the
> pg_internal.init files??)  on this issue.  We would rather avoid a
> restart if possible. Some research on the internet found a page
> suggesting that removal of pg_internal.init file(s) on a running system
> would provide the same temporary resolution as a restart???   Is this a
> valid temporary work around for this issue?  If it is, is there a
> particular pg_internal.init file that needs to be removed, or just
> remove all pg_internal.init that can be found?

I'm a bit dubious that that'd actually help, but it's perfectly safe
if you want to try it.  pg_internal.init is just a cache file that
will be rebuilt if it's missing.
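
For reference, pg_internal.init exists once in global/ and once per database directory under base/. A shell sketch for locating the files (the PGDATA default is only an example path, an assumption to adjust):

```shell
# List relcache init files under the data directory; verify the list
# before deleting anything. The PGDATA default here is an example path.
PGDATA=${PGDATA:-/var/lib/pgsql/9.6/data}
if [ -d "$PGDATA" ]; then
    find "$PGDATA" -name pg_internal.init -print
    # once the list looks right, the same find with -delete removes them:
    # find "$PGDATA" -name pg_internal.init -delete
fi
```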

regards, tom lane




Modifying database schema without losing data

2020-09-28 Thread Rich Shepard

I've been developing a business tracking application for my own use and it's
worked well up to now. But, I need to modify it by adding a table with
attributes from two other tables. I've not drawn an E-R diagram so I show the
two existing tables here:

CREATE TABLE Organizations (
  org_id serial PRIMARY KEY,
  org_name varchar(64) DEFAULT '??' NOT NULL,
  org_addr1 varchar(64),
  org_addr2 varchar(64),
  org_city varchar(16),
  state_code char(2),
  org_postcode varchar(10),
  org_country char(2) DEFAULT 'US' NOT NULL,
  main_phone varchar(16),
  org_fax varchar(12),
  org_url varchar(64),
  industry varchar(24) DEFAULT 'Other' NOT NULL
   REFERENCES industries(ind_name)
   ON UPDATE CASCADE
   ON DELETE RESTRICT,
  status varchar(20) DEFAULT 'Opportunity' NOT NULL
 REFERENCES statusTypes(stat_name)
 ON UPDATE CASCADE
 ON DELETE RESTRICT,
  comment text 
);


CREATE TABLE People (
  person_id serial PRIMARY KEY,
  lname varchar(15) NOT NULL,
  fname varchar(15) NOT NULL,
  job_title varchar(32),
  org_id int DEFAULT '0' NOT NULL
  REFERENCES Organizations(org_id)
  ON UPDATE CASCADE
  ON DELETE RESTRICT,
  site_name varchar(64),
  site_addr varchar(32),
  site_city varchar(16),
  state_code char(2),
  site_postcode varchar(10),
  site_country char(2) DEFAULT 'US' NOT NULL,
  direct_phone varchar(15),
  direct_fax varchar(15),
  cell_phone varchar(15),
  site_phone varchar(15),
  ext varchar(6),
  email varchar(64),
  active boolean DEFAULT TRUE NOT NULL,
  comment text
);

What I should have noticed when I designed this tool is that addresses and
phone/e-mail addresses can be duplicated when there's only a single
location. Now I have some prospective clients with multiple locations but I
have no names of individuals. So, I want to add a Location table with
addresses and contact information. Each row in that table will have a serial PK
and will use a FK to reference the Organization table. People will now
reference the Locations table rather than the Organization table.

There are data in each of these tables and my research in my books and on
the web has not provided any insights on how to modify the existing schema
and get the data into their new appropriate table homes.

I think the long way is to dump the database and manually move rows (using
emacs) from their current table to the new one, as appropriate, but there're
probably much better ways to do this and I'm eager to learn.

Regards,

Rich




Re: Modifying database schema without losing data

2020-09-28 Thread Adrian Klaver

On 9/28/20 10:15 AM, Rich Shepard wrote:
> I've been developing a business tracking application for my own use and it's
> worked well up to now. But, I need to modify it by adding a table with
> attributes from two other tables. I've not drawn an E-R diagram so I show the
> two existing tables here:
> 
> CREATE TABLE Organizations (
>    org_id serial PRIMARY KEY,
>    org_name varchar(64) DEFAULT '??' NOT NULL,
>    org_addr1 varchar(64),
>    org_addr2 varchar(64),
>    org_city varchar(16),
>    state_code char(2),
>    org_postcode varchar(10),
>    org_country char(2) DEFAULT 'US' NOT NULL,
>    main_phone varchar(16),
>    org_fax varchar(12),
>    org_url varchar(64),
>    industry varchar(24) DEFAULT 'Other' NOT NULL
>     REFERENCES industries(ind_name)
>     ON UPDATE CASCADE
>     ON DELETE RESTRICT,
>    status varchar(20) DEFAULT 'Opportunity' NOT NULL
>   REFERENCES statusTypes(stat_name)
>   ON UPDATE CASCADE
>   ON DELETE RESTRICT,
>    comment text
> );
> 
> CREATE TABLE People (
>    person_id serial PRIMARY KEY,
>    lname varchar(15) NOT NULL,
>    fname varchar(15) NOT NULL,
>    job_title varchar(32),
>    org_id int DEFAULT '0' NOT NULL
>    REFERENCES Organizations(org_id)
>    ON UPDATE CASCADE
>    ON DELETE RESTRICT,
>    site_name varchar(64),
>    site_addr varchar(32),
>    site_city varchar(16),
>    state_code char(2),
>    site_postcode varchar(10),
>    site_country char(2) DEFAULT 'US' NOT NULL,
>    direct_phone varchar(15),
>    direct_fax varchar(15),
>    cell_phone varchar(15),
>    site_phone varchar(15),
>    ext varchar(6),
>    email varchar(64),
>    active boolean DEFAULT TRUE NOT NULL,
>    comment text
> );
> 
> What I should have noticed when I designed this tool is that addresses and
> phone/e-mail addresses can be duplicated when there's only a single
> location. Now I have some prospective clients with multiple locations but I
> have no names of individuals. So, I want to add a Location table with
> addresses and contact information. Each row in that table will have a
> serial PK and will use a FK to reference the Organization table. People will
> now reference the Locations table rather than the Organization table.
> 
> There are data in each of these tables and my research in my books and on
> the web has not provided any insights on how to modify the existing schema
> and get the data into their new appropriate table homes.
> 
> I think the long way is to dump the database and manually move rows (using
> emacs) from their current table to the new one, as appropriate, but there're
> probably much better ways to do this and I'm eager to learn.


You could use INSERT INTO location(new_fields,) SELECT the_fields FROM 
the_table(s).
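
Spelled out against the schema quoted above, a sketch only (the Locations column list is an assumption to trim to taste):

```sql
BEGIN;

CREATE TABLE Locations (
  loc_id serial PRIMARY KEY,
  org_id int NOT NULL
         REFERENCES Organizations(org_id)
         ON UPDATE CASCADE
         ON DELETE RESTRICT,
  site_name varchar(64),
  site_addr varchar(32),
  site_city varchar(16),
  state_code char(2),
  site_postcode varchar(10),
  site_country char(2) DEFAULT 'US' NOT NULL,
  site_phone varchar(15)
);

-- Seed Locations from the site data currently duplicated on People.
INSERT INTO Locations (org_id, site_name, site_addr, site_city, state_code,
                       site_postcode, site_country, site_phone)
SELECT DISTINCT org_id, site_name, site_addr, site_city, state_code,
                site_postcode, site_country, site_phone
FROM People;

COMMIT;
```

After verifying the copy, People could be pointed at Locations: add a loc_id FK, backfill it by matching the site columns, then drop the now-duplicated columns.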




> Regards,
> 
> Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Modifying database schema without losing data

2020-09-28 Thread Rich Shepard

On Mon, 28 Sep 2020, Adrian Klaver wrote:


> You could use INSERT INTO location(new_fields,) SELECT the_fields FROM
> the_table(s).


Well, duh! I could have thought of that. That's exactly what I'll do:

Create the new table, move data from the old table into it, then drop
columns in the old table ... after checking all data's there.

Thanks, Adrian.

Stay well,

Rich






Re: re PG 9.6x and found xmin from before relfrozenxid and removal of pg_internal.init file(s)

2020-09-28 Thread Reid Thompson
On Mon, 2020-09-28 at 12:15 -0400, Tom Lane wrote:
> Reid Thompson  writes:
> > We have a planned upgrade that would permanently remedy this.
> > regarding the below errors on our PG 9.6.x instance.
> 
> > 2020-09-28 09:08:15.741 EDT,,,26212,,5f71e03f.6664,1,,2020-09-28 09:08:15 
> > EDT,250/9136136,0,ERROR,XX001,"found xmin 2675436435 from before 
> > relfrozenxid 321165377","automatic vacuum of table
> > ""postgres.pg_catalog.pg_authid"""""
> > 2020-09-28 09:08:15.751 EDT,,,26212,,5f71e03f.6664,2,,2020-09-28 09:08:15 
> > EDT,250/9136138,0,ERROR,XX001,"found xmin 2675019557 from before 
> > relfrozenxid 321165377","automatic vacuum of table
> > ""postgres.pg_catalog.pg_auth_members"""""
> 
> > I know that a db restart will reset the 'counter' (reset the
> > pg_internal.init files??)  on this issue.  We would rather avoid a
> > restart if possible. Some research on the internet found a page
> > suggesting that removal of pg_internal.init file(s) on a running system
> > would provide the same temporary resolution as a restart???   Is this a
> > valid temporary work around for this issue?  If it is, is there a
> > particular pg_internal.init file that needs to be removed, or just
> > remove all pg_internal.init that can be found?
> 
> I'm a bit dubious that that'd actually help, but it's perfectly safe
> if you want to try it.  pg_internal.init is just a cache file that
> will be rebuilt if it's missing.
> 
>     regards, tom lane


appears to allow the vacuum to complete... and stops the error messages
in the log file.

[local]:db-server-101:5432 rthompso@=> set role super;
SET
Time: 0.900 ms
[local]:db-server-101:5432 rthompso@=# vacuum pg_catalog.pg_authid; vacuum 
pg_catalog.pg_auth_members;
ERROR:  found xmin 2675436435 from before relfrozenxid 321165377
Time: 14.698 ms
ERROR:  found xmin 2675019557 from before relfrozenxid 321165377
Time: 3.956 ms
[local]:db-server-101:5432 rthompso@=# vacuum pg_catalog.pg_authid; vacuum 
pg_catalog.pg_auth_members;
ERROR:  found xmin 2675436435 from before relfrozenxid 321165377
Time: 13.889 ms
ERROR:  found xmin 2675019557 from before relfrozenxid 321165377
Time: 2.479 ms
[local]:db-server-101:5432 rthompso@=# vacuum pg_catalog.pg_authid; vacuum 
pg_catalog.pg_auth_members;
VACUUM
Time: 22.778 ms
VACUUM
Time: 4.486 ms
[local]:db-server-101:5432 rthompso@=# vacuum verbose pg_catalog.pg_authid; 
vacuum verbose pg_catalog.pg_auth_members;
INFO:  vacuuming "pg_catalog.pg_authid"
INFO:  index "pg_authid_rolname_index" now contains 123 row versions in 2 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "pg_authid_oid_index" now contains 123 row versions in 2 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_authid": found 0 removable, 123 nonremovable row versions in 3 out 
of 3 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 55 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
Time: 25.909 ms
INFO:  vacuuming "pg_catalog.pg_auth_members"
INFO:  index "pg_auth_members_role_member_index" now contains 143 row versions 
in 2 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "pg_auth_members_member_role_index" now contains 143 row versions 
in 2 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_auth_members": found 0 removable, 143 nonremovable row versions in 1 
out of 1 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 13 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
Time: 6.494 ms


$ grep pg_auth *Mon*csv
...snip...
2020-09-28 14:00:23.564 EDT,,,9998,,5f7224b7.270e,1,,2020-09-28 14:00:23 
EDT,271/14248144,0,ERROR,XX001,"found xmin 2675436435 from before relfrozenxid 
321165377","automatic vacuum of table 
""postgres.pg_catalog.pg_authid"""""
2020-09-28 14:00:23.575 EDT,,,9998,,5f7224b7.270e,2,,2020-09-28 14:00:23 
EDT,271/14248146,0,ERROR,XX001,"found xmin 2675019557 from before relfrozenxid 
321165377","automatic vacuum of table 
""postgres.pg_catalog.pg_auth_members"""""
2020-09-28 14:00:28.571 EDT,,,10283,,5f7224bc.282b,1,,2020-09-28 14:00:28 
EDT,271/14248188,0,ERROR,XX001,"found xmin 2675436435 from before relfrozenxid 
321165377","automatic vacuum of table 
""postgres.pg_catalog.pg_authid"""""
2020-09-28 14:00:28.581 EDT,,,10283,,5f7224bc.282b,2,,2020-09-28 14:00:28 
EDT,271/14248190,0,ERROR,XX001,"found xmin 2675019557 from before relfrozenxid 
321165377",,

Re: Modifying database schema without losing data

2020-09-28 Thread Adam Scott
What if a person is a member of more than one Org?  Consider a person_org
table.

I see mention of a site in the person table.  It may also be the case that
you need a site table.

Often, you want a table for the Person and a Contact (or Address)  table
separately.  This allows for having more than one contact for a Person.

Org(id,  )
Person(id, person_org_id, person_site_id, ...  )
Person_Org(id, org_id, person_id, )
Contact(id, person_id, address, city, state, zip , email, )
Site(id, name, address,  )
Person_Site(id, person_id, site_id, ... )

This way a person can be a member of more than one org, at one or more
sites, and have one or more contacts.
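
As a DDL sketch of the shape described above (names and types are illustrative, not from the message):

```sql
CREATE TABLE org         (id serial PRIMARY KEY, name text NOT NULL);
CREATE TABLE person      (id serial PRIMARY KEY,
                          lname text NOT NULL, fname text NOT NULL);
CREATE TABLE person_org  (id serial PRIMARY KEY,
                          org_id    int NOT NULL REFERENCES org(id),
                          person_id int NOT NULL REFERENCES person(id),
                          UNIQUE (org_id, person_id));
CREATE TABLE contact     (id serial PRIMARY KEY,
                          person_id int NOT NULL REFERENCES person(id),
                          address text, city text, state char(2),
                          zip text, email text);
CREATE TABLE site        (id serial PRIMARY KEY, name text, address text);
CREATE TABLE person_site (id serial PRIMARY KEY,
                          person_id int NOT NULL REFERENCES person(id),
                          site_id   int NOT NULL REFERENCES site(id),
                          UNIQUE (person_id, site_id));
```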



On Mon, Sep 28, 2020 at 10:15 AM Rich Shepard 
wrote:

> I've been developing a business tracking application for my own use and
> it's
> worked well up to now. But, I need to modify it by adding a table with
> attributes from two other tables. I've not drawn a E-R diagram so I show
> the
> two existing tables here:
>
> CREATE TABLE Organizations (
>org_id serial PRIMARY KEY,
>org_name varchar(64) DEFAULT '??' NOT NULL,
>org_addr1 varchar(64),
>org_addr2 varchar(64),
>org_city varchar(16),
>state_code char(2),
>org_postcode varchar(10),
>org_country char(2) DEFAULT 'US' NOT NULL,
>main_phone varchar(16),
>org_fax varchar(12),
>org_url varchar(64),
>industry varchar(24) DEFAULT 'Other' NOT NULL
> REFERENCES industries(ind_name)
> ON UPDATE CASCADE
> ON DELETE RESTRICT,
>status varchar(20) DEFAULT 'Opportunity' NOT NULL
>   REFERENCES statusTypes(stat_name)
>   ON UPDATE CASCADE
>   ON DELETE RESTRICT,
>comment text
> );
>
> CREATE TABLE People (
>person_id serial PRIMARY KEY,
>lname varchar(15) NOT NULL,
>fname varchar(15) NOT NULL,
>job_title varchar(32),
>org_id int DEFAULT '0' NOT NULL
>REFERENCES Organizations(org_id)
>ON UPDATE CASCADE
>ON DELETE RESTRICT,
>site_name varchar(64),
>site_addr varchar(32),
>site_city varchar(16),
>state_code char(2),
>site_postcode varchar(10),
>site_country char(2) DEFAULT 'US' NOT NULL,
>direct_phone varchar(15),
>direct_fax varchar(15),
>cell_phone varchar(15),
>site_phone varchar(15),
>ext varchar(6),
>email varchar(64),
>active boolean DEFAULT TRUE NOT NULL,
>comment text
> );
>
> What I should have noticed when I designed this tool is that addresses and
> phone/e-mail addresses can be duplicated when there's only a single
> location. Now I have some prospective clients with multiple locations but I
> have no names of individuals. So, I want to add a Location table with
> addresses and contact information. Each row in that table will have a
> serial PK
> and will use a FK to reference the Organization table. People will now
> reference the Locations table rather than the Organization table.
>
> There are data in each of these tables and my research in my books and on
> the web have not provided any insights on how to modify the existing schema
> and get date into their new appropriate table homes.
>
> I think the long way is to dump the database and manually move rows (using
> emacs) from their current table to the new one, as appropriate, but
> there're
> probably much better ways to do this and I'm eager to learn.
>
> Regards,
>
> Rich
>
>
>


Re: re PG 9.6x and found xmin from before relfrozenxid and removal of pg_internal.init file(s)

2020-09-28 Thread Tom Lane
Reid Thompson  writes:
> On Mon, 2020-09-28 at 12:15 -0400, Tom Lane wrote:
>> I'm a bit dubious that that'd actually help, but it's perfectly safe
>> if you want to try it.  pg_internal.init is just a cache file that
>> will be rebuilt if it's missing.

> appears to allow to vacuum to complete... and stops the error messages
> to the log file.

Ah, after digging around in our git history, this symptom seems to match
this bug fix:

Author: Andres Freund 
Branch: master Release: REL_11_BR [a54e1f158] 2018-06-12 11:13:21 -0700
Branch: REL_10_STABLE Release: REL_10_5 [2ce64caaf] 2018-06-12 11:13:21 -0700
Branch: REL9_6_STABLE Release: REL9_6_10 [6a46aba1c] 2018-06-12 11:13:21 -0700
Branch: REL9_5_STABLE Release: REL9_5_14 [14b3ec6f3] 2018-06-12 11:13:21 -0700
Branch: REL9_4_STABLE Release: REL9_4_19 [817f9f9a8] 2018-06-12 11:13:22 -0700
Branch: REL9_3_STABLE Release: REL9_3_24 [9b9b622b2] 2018-06-12 11:13:22 -0700

Fix bugs in vacuum of shared rels, by keeping their relcache entries 
current.

When vacuum processes a relation it uses the corresponding relcache
entry's relfrozenxid / relminmxid as a cutoff for when to remove
tuples etc. Unfortunately for nailed relations (i.e. critical system
catalogs) bugs could frequently lead to the corresponding relcache
entry being stale.

This set of bugs could cause actual data corruption as vacuum would
potentially not remove the correct row versions, potentially reviving
them at a later point.  After 699bf7d05c some corruptions in this vein
were prevented, but the additional error checks could also trigger
spuriously. Examples of such errors are:
  ERROR: found xmin ... from before relfrozenxid ...
and
  ERROR: found multixact ... from before relminmxid ...
To be caused by this bug the errors have to occur on system catalog
tables.

The two bugs are:

1) Invalidations for nailed relations were ignored, based on the
   theory that the relcache entry for such tables doesn't
   change. Which is largely true, except for fields like relfrozenxid
   etc.  This means that changes to relations vacuumed in other
   sessions weren't picked up by already existing sessions.  Luckily
   autovacuum doesn't have particularly longrunning sessions.

2) For shared *and* nailed relations, the shared relcache init file
   was never invalidated while running.  That means that for such
   tables (e.g. pg_authid, pg_database) it's not just already existing
   sessions that are affected, but even new connections are as well.
   That explains why the reports usually were about pg_authid et. al.

To fix 1), revalidate the rd_rel portion of a relcache entry when
invalid. This implies a bit of extra complexity to deal with
bootstrapping, but it's not too bad.  The fix for 2) is simpler,
simply always remove both the shared and local init files.

Author: Andres Freund
Reviewed-By: Alvaro Herrera
Discussion:
https://postgr.es/m/20180525203736.crkbg36muzxrj...@alap3.anarazel.de

https://postgr.es/m/CAMa1XUhKSJd98JW4o9StWPrfS=11bpgg+_gdmxe25tvuy4s...@mail.gmail.com

https://postgr.es/m/cakmfjucqbuodrfxpdx39wha3vjyxwerg_zdvxzncr6+5wog...@mail.gmail.com

https://postgr.es/m/cagewt-ujgpmlq09gxcufmzazsgjc98vxhefbf-tppb0fb13...@mail.gmail.com
Backpatch: 9.3-


That bug is pretty narrow, but it explains failures on pg_authid
and related catalogs.  So updating to 9.6.10 or later should be
enough to prevent a recurrence.

regards, tom lane




Re: Modifying database schema without losing data

2020-09-28 Thread Rich Shepard

On Mon, 28 Sep 2020, Adam Scott wrote:


What if a person is a member of more than one Org?  Consider a person_org
table.


Adam,

Not applicable. An individual is employed by a single organization.


I see mention of a site in the person table.  It may also be the case that
you need a site table.


Yep. That's what I need to add.


Often, you want a table for the Person and a Contact (or Address)  table
separately. This allows for having more than one contact for a Person.


Possible, but I've not encountered more than a couple of phone numbers per
person (work and mobile). I don't need home addresses or phones.


This way a person can be a member of more than one org, at one or  more
sites, and have one or more contacts.


Doesn't apply to businesses in my market areas.

Thanks for the thoughts,

Rich




Re: Modifying database schema without losing data

2020-09-28 Thread Tom Lane
Rich Shepard  writes:
> On Mon, 28 Sep 2020, Adam Scott wrote:
>> What if a person is a member of more than one Org?  Consider a person_org
>> table.

> Not applicable. An individual is employed by a single organization.

No part-timers in your universe?  (My friends in the restaurant business
would surely find the above pretty laughable.)

regards, tom lane




Re: Modifying database schema without losing data

2020-09-28 Thread Rich Shepard

On Mon, 28 Sep 2020, Tom Lane wrote:


No part-timers in your universe? (My friends in the restaurant business
would surely find the above pretty laughable.)


Tom,

Not in the markets I serve; at least, not at the environmental manager
level. I don't work for retail businesses; primarily natural resource
industries and others that require storm water discharge permits (no process
waters are allowed off-site when untreated in any state in the western US).

The only part-timers are external consultants hired for a specific purpose.

Stay well,

Rich





DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Joe Abbate

Hello,

I'm considering creating a TYPE for what may be called a "possibly 
imprecise date" (pidate).  The most obvious use is for recording dates 
such as births or deaths of historical individuals, where we may know 
that someone died precisely on a given year-month-day, but the birth may 
only be known down to year-month or just the year (or perhaps we know 
precisely the baptism date [Adam Smith], but not the actual birth, so we 
want to record the former but qualified so it can be annotated on 
display).  Another use is for publications, like magazines that are 
issued on a monthly basis or journals that are issued on a quarterly or 
seasonal basis.


We currently have two instances of this kind, using a standard DATE 
column plus a CHAR(1) column that encodes (on a limited basis for now) 
the YMD, YM or Y level of precision, and a simple SQL function to return 
a textual representation of the pidate.  It would be nice to generalize 
this before going further.
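
For illustration, the two-column scheme described above might look something
like this (a sketch only; the table and column names are hypothetical, not
from the original posting):

```sql
-- Hypothetical illustration of the DATE + precision-code scheme.
CREATE TABLE person (
    id         serial PRIMARY KEY,
    birth      date,
    birth_prec char(1) NOT NULL DEFAULT 'D'
               CHECK (birth_prec IN ('D', 'M', 'Y'))  -- day, month, year
);

-- Render the date only down to its known precision.
CREATE FUNCTION pidate_text(d date, prec char) RETURNS text
LANGUAGE sql IMMUTABLE AS $$
    SELECT CASE prec
        WHEN 'D' THEN to_char(d, 'YYYY-MM-DD')
        WHEN 'M' THEN to_char(d, 'YYYY-MM')
        WHEN 'Y' THEN to_char(d, 'YYYY')
    END;
$$;
```

The generalization problem is that every table wanting a pidate must repeat
both columns, the CHECK, and the display call.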


The first option I explored was creating a composite type with the two 
attributes, but that doesn't allow specification of DEFAULTs, NOT NULL 
or CHECK expressions on the precision code attribute.  It seems I'd have 
to create a DOMAIN first, then use DATE and that domain to create a 
composite TYPE, to finally use the latter in actual tables.  That 
layering looks cumbersome.


Another option, which I haven't tried, is to subvert PG by creating an 
empty table, since that creates a "record type", but even if possible 
that would be a hack.


Finally there's the base TYPE.  This entails writing some seven 
functions "in C or another low-level language" (does PG support *any* 
other such language?), plus installing a library with those functions in 
a production environment.  Doable, yes, but not very friendly either.


Am I overlooking something or is the practice of creating abstractions 
in object-relational databases mostly unchanged?


Regards,

Joe




Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Rob Sargent



> On Sep 28, 2020, at 3:14 PM, Joe Abbate  wrote:
> 
> Hello,
> 
> I'm considering creating a TYPE for what may be called a "possibly imprecise 
> date" (pidate).  The most obvious use is for recording dates such as births 
> or deaths of historical individuals, where we may know that someone died 
> precisely on a given year-month-day, but the birth may only be known down to 
> year-month or just the year (or perhaps we know precisely the baptism date 
> [Adam Smith], but not the actual birth, so we want to record the former but 
> qualified so it can be annotated on display).  Another use is for 
> publications, like magazines that are issued on a monthly basis or journals 
> that are issued on a quarterly or seasonal basis.
> 
> We currently have two instances of this kind, using a standard DATE column 
> plus a CHAR(1) column that encodes (on a limited basis for now) the YMD, YM 
> or Y level of precision, and a simple SQL function to return a textual 
> representation of the pidate.  It would be nice to generalize this before 
> going further.
> 
> The first option I explored was creating a composite type with the two 
> attributes, but that doesn't allow specification of DEFAULTs, NOT NULL or 
> CHECK expressions on the precision code attribute.  It seems I'd have to 
> create a DOMAIN first, then use DATE and that domain to create a composite 
> TYPE, to finally use the latter in actual tables.  That layering looks 
> cumbersome.
> 
> Another option, which I haven't tried, is to subvert PG by creating an empty 
> table, since that creates a "record type", but even if possible that would be 
> a hack.
> 
> Finally there's the base TYPE.  This entails writing some seven functions "in 
> C or another low-level language" (does PG support *any* other such 
> language?), plus installing a library with those functions in a production 
> environment.  Doable, yes, but not very friendly either.
> 
> Am I overlooking something or is the practice of creating abstractions in 
> object-relational databases mostly unchanged?
> 
> Regards,
> 
> Joe
> 
> 
just record all three fields (day, month, year) with nulls and do the to-date 
as needed.
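
A sketch of that suggestion (names are illustrative; `make_date` is the
standard PostgreSQL date constructor):

```sql
-- Three nullable parts; NULL marks the unknown components.
CREATE TABLE pub_date (
    y int NOT NULL,  -- year, always known
    m int,           -- NULL when only the year is known
    d int            -- NULL when only year/month are known
);

-- Build a DATE on demand, defaulting unknown parts to 1:
SELECT make_date(y, coalesce(m, 1), coalesce(d, 1)) FROM pub_date;
```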





Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Tom Lane
Joe Abbate  writes:
> I'm considering creating a TYPE for what may be called a "possibly 
> imprecise date" (pidate).

> The first option I explored was creating a composite type with the two 
> attributes, but that doesn't allow specification of DEFAULTs, NOT NULL 
> or CHECK expressions on the precision code attribute.  It seems I'd have 
> to create a DOMAIN first, then use DATE and that domain to create a 
> composite TYPE, to finally use the latter in actual tables.  That 
> layering looks cumbersome.

Agreed.

> Another option, which I haven't tried, is to subvert PG by creating an 
> empty table, since that creates a "record type", but even if possible 
> that would be a hack.

Won't help.  Even if the table has constraints, when its rowtype is used
in a standalone context, it only has the features that a standalone
composite type would have (ie, no constraints).

> Am I overlooking something or is the practice of creating abstractions 
> in object-relational databases mostly unchanged?

Domain-over-composite might be a slightly simpler answer than your first
one.  It's only available in relatively late-model PG, and I'm not sure
about its performance relative to your other design, but it is an
alternative to think about.

Note that attaching NOT NULL constraints at the domain level is almost
never a good idea, because then you find yourself with a semantically
impossible situation when, say, a column of that type is on the nullable
side of an outer join.  We allow such constraints, but they will be
nominally violated in cases like that.

regards, tom lane




Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Joe Abbate

Hello Rob,

On 28/9/20 17:17, Rob Sargent wrote:

just record all three fields (day, month, year) with nulls and do the to-date 
as needed.


That is not sufficient.  An earlier implementation had something like a 
CHAR(8) to record YYYYMMDD, but how can you indicate, for example, an 
issue date of a bimonthly magazine, say July-Aug 2020?  We can store 
2020-07-01 in the DATE attribute, but we need another attribute to 
indicate it's really two months.  Also, by storing three separate 
columns, you lose the beauty of the PG DATE abstraction.


Joe






Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Ron

On 9/28/20 4:31 PM, Joe Abbate wrote:

Hello Rob,

On 28/9/20 17:17, Rob Sargent wrote:
just record all three fields (day, month, year) with nulls and do the 
to-date as needed.


That is not sufficient.  An earlier implementation had something like a 
CHAR(8) to record YYYYMMDD, but how can you indicate, for example, an 
issue date of a bimonthly magazine, say July-Aug 2020?  We can store 
2020-07-01 in the DATE attribute, but we need another attribute to 
indicate it's really two months.  Also, by storing three separate columns, 
you lose the beauty of the PG DATE abstraction.


The Gramps genealogy program has figured 
it out; maybe its source code can lend you some clues.


--
Angular momentum makes the world go 'round.


Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Gavan Schneider

On 29 Sep 2020, at 7:31, Joe Abbate wrote:


Hello Rob,

On 28/9/20 17:17, Rob Sargent wrote:
just record all three fields (day, month, year) with nulls and do the 
to-date as needed.


That is not sufficient.  An earlier implementation had something like 
a CHAR(8) to record YYYYMMDD, but how can you indicate, for example, 
an issue date of a bimonthly magazine, say July-Aug 2020?  We can 
store 2020-07-01 in the DATE attribute, but we need another attribute 
to indicate it's really two months.  Also, by storing three separate 
columns, you lose the beauty of the PG DATE abstraction.


This is only a partial “fix” and goes nowhere near solving the full 
wrapper/abstraction problem…


Consider expressing all the component fields as a range. This allows you 
the ability to be as precise as you need and still have the benefits of 
well-defined comparison functions.
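
A sketch of that idea using PostgreSQL's built-in daterange type (table and
column names are illustrative): an imprecise date becomes the range of days
it could denote, and range operators give the comparisons.

```sql
CREATE TABLE publication (
    id     serial PRIMARY KEY,
    issued daterange NOT NULL   -- the span of days the date could mean
);

-- A precise day, a month-only date, and a July-August 2020 issue:
INSERT INTO publication (issued) VALUES
    ('[2020-09-28,2020-09-29)'),   -- exactly 2020-09-28
    ('[2020-07-01,2020-08-01)'),   -- July 2020
    ('[2020-07-01,2020-09-01)');   -- July-August 2020

-- Well-defined comparisons fall out of the range operators, e.g. overlap:
SELECT id FROM publication
WHERE issued && daterange('2020-08-01', '2020-09-01');
```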


Regards
Gavan Schneider
——
Gavan Schneider, Sodwalls, NSW, Australia
Explanations exist; they have existed for all time; there is always a 
well-known solution to every human problem — neat, plausible, and 
wrong. The ancients, in the case at bar, laid the blame upon the gods: 
sometimes they were remote and surly, and sometimes they were kind. In 
the Middle Ages lesser powers took a hand in the matter, and so one 
reads of works of art inspired by Our Lady, by the Blessed Saints, by 
the souls of the departed, and even by the devil.  H. L. Mencken, 1920





Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Joe Abbate

Hello Tom,

On 28/9/20 17:25, Tom Lane wrote:

Domain-over-composite might be a slightly simpler answer than your first
one.  It's only available in relatively late-model PG, and I'm not sure
about its performance relative to your other design, but it is an
alternative to think about.


"Domain-over-composite" meaning create a TYPE first (DATE, CHAR(1)) and 
then a DOMAIN based on that type?  (1) How late model are we talking? 
The DOMAIN syntax doesn't seem changed from PG 11 to PG 13? (2) Can a 
CHECK constraint specify attributes of the composite?



Note that attaching NOT NULL constraints at the domain level is almost
never a good idea, because then you find yourself with a semantically
impossible situation when, say, a column of that type is on the nullable
side of an outer join.  We allow such constraints, but they will be
nominally violated in cases like that.


NULLs: Tony Hoare's "billion dollars of pain and damage" transported to SQL.

Joe




Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Joe Abbate

Hello Gavan,

On 28/9/20 17:52, Gavan Schneider wrote:
Consider expressing all the component fields as a range. This allows you 
the ability to be as precise as you need and still have the benefits of 
well-defined comparison functions.


I did consider that, but it's a tradeoff between 80% of the cases being 
a single precise date, 18% being a single date with some imprecision 
around a single month, and the rest with "ranges" of months or other 
intervals.


Joe




Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Adrian Klaver

On 9/28/20 2:58 PM, Joe Abbate wrote:

Hello Tom,

On 28/9/20 17:25, Tom Lane wrote:

Domain-over-composite might be a slightly simpler answer than your first
one.  It's only available in relatively late-model PG, and I'm not sure
about its performance relative to your other design, but it is an
alternative to think about.


"Domain-over-composite" meaning create a TYPE first (DATE, CHAR(1)) and 
then a DOMAIN based on that type?  (1) How late model are we talking? 
The DOMAIN syntax doesn't seem changed from PG 11 to PG 13? (2) Can a 
CHECK constraint specify attributes of the composite?



Note that attaching NOT NULL constraints at the domain level is almost
never a good idea, because then you find yourself with a semantically
impossible situation when, say, a column of that type is on the nullable
side of an outer join.  We allow such constraints, but they will be
nominally violated in cases like that.


NULLs: Tony Hoare's "billion dollars of pain and damage" transported to 
SQL.


Except that the case Tom is talking about would occur due to something like:

select * from table_a left join table_b on table_a.id = table_b.id 
where table_b.id is null;


That has been very useful to me, and I'm not sure how anything you 
replace NULL with to represent 'unknown' would change the situation.




Joe





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: DOMAIN/composite TYPE vs. base TYPE

2020-09-28 Thread Tom Lane
Joe Abbate  writes:
> On 28/9/20 17:25, Tom Lane wrote:
>> Domain-over-composite might be a slightly simpler answer than your first
>> one.  It's only available in relatively late-model PG, and I'm not sure
>> about its performance relative to your other design, but it is an
>> alternative to think about.

> "Domain-over-composite" meaning create a TYPE first (DATE, CHAR(1)) and 
> then a DOMAIN based on that type?

Right.

regression=# create type t1 as (d date, t char(1));
CREATE TYPE
regression=# create domain dt1 as t1 check((value).t in ('a', 'b'));
CREATE DOMAIN
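
(A quick usage sketch of that domain, with a hypothetical table; not part of
the original message:)

```sql
-- The domain can be used directly as a column type.
CREATE TABLE event (pid dt1);

INSERT INTO event VALUES (ROW('2020-09-28', 'a')::dt1);  -- accepted
INSERT INTO event VALUES (ROW('2020-09-28', 'x')::dt1);  -- rejected by the domain CHECK

SELECT (pid).d, (pid).t FROM event;  -- access the composite's parts
```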

> (1) How late model are we talking? 
> The DOMAIN syntax doesn't seem changed from PG 11 to PG 13?

Back to 11, looks like.  The syntax didn't change, but v10 complains

ERROR:  "t1" is not a valid base type for a domain

>> Note that attaching NOT NULL constraints at the domain level is almost
>> never a good idea, because then you find yourself with a semantically
>> impossible situation when, say, a column of that type is on the nullable
>> side of an outer join.  We allow such constraints, but they will be
>> nominally violated in cases like that.

> NULLs: Tony Hoare's "billion dollars of pain and damage" transported to SQL.

I dunno, outer joins are awfully useful.  It is true that the SQL
committee has stuck too many not-quite-consistent meanings on NULL,
but on the other hand, several different kinds of NULL might be
worse.

regards, tom lane




Gurjeet Singh Index Adviser User Interface

2020-09-28 Thread Yessica Brinkmann
 Goodnight,
I would like to ask the following question:
As some of you may recall, I was doing my university thesis as a
modification of Gurjeet Singh's Index Adviser.
Now I have finished the programming part.
But I have the following problem:
Gurjeet Singh's Index Adviser readme describes how to use an Index Adviser
interface called pg_advise_index tool.
The readme mentions the following:
i) pg_advise_index tool.
-

Create a file that contains all the queries (semicolon terminated; may
be multi-line) that are expected to be executed by the application; and
feed this file to the pg_advise_index tool with appropriate options.

pg_advise_index -d DB -h host -U user -s 10M -o advisory.sql workload.sql

pg_advise_index will open a connection with the PostgreSQL server by
setting appropriate session level options that will force the backend to
load
the pg_index_adviser plugin. It will then prepend the keywords EXPLAIN to
each
of the queries found in the workload file, and execute them against the
backend.
For each query EXPLAINed, the backend will generate advice for each index
that
might have been beneficial in executing these queries.

At the end, pg_advise_index will enumerate all the indexes suggested for
the current session, and output the CREATE INDEX statements for each of
them.
Optionally, if the -size option was specified, pg_advise_index will output
suggestions for only those indexes that fit into that size.
--

To test my thesis, I need to use this pg_advise_index tool interface, but
unfortunately I have not been able to use it yet.
I would like to know if any of you have any experience testing or using
this interface, or if you understand what the readme says anyway.
In that case, please, can you help me, so that I can now test my thesis.
I've actually already tried to run the pg_advise_index tool in
various ways but haven't been able to.
What I don't understand specifically is from which directory I should run
the command
pg_advise_index -d DB -h host -U user -s 10M -o advisory.sql workload.sql
I will greatly appreciate any help, please.

The Index Adviser readme link is as follows:
https://github.com/gurjeet/pg_adviser/blob/master/index_adviser/README.index_adviser
Best regards,

Yessica Brinkmann