Re: Clustered Index in PG

2020-05-04 Thread Laurenz Albe
On Sun, 2020-05-03 at 16:55 +0530, brajmohan saxena wrote:
> Is there a clustered index in PG? Do we have Oracle's index-organized
> table concept in PG?

No.

You can use CLUSTER to rewrite a table in index order, but the order won't be 
maintained.
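
For example, a minimal sketch (table and index names are made up):

CLUSTER mytable USING mytable_pkey;  -- one-time rewrite in index order
-- Later inserts/updates are not kept in order; re-run to re-cluster:
CLUSTER mytable;

Note that CLUSTER takes an ACCESS EXCLUSIVE lock on the table while it
rewrites it.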

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





12.2: Howto check memory-leak in worker?

2020-05-04 Thread Peter
Hi all,
  I have something that looks a bit insane:

# ps axl | grep 6145
  UID   PID  PPID CPU PRI NI     VSZ    RSS MWCHAN   STAT TT      TIME COMMAND
  770  6145     1   0  20  0  241756    868 select   SsJ   -   0:24.62 /usr/local/bin/postgres -D
  770  6147  6145   0  23  0  243804 109784 select   IsJ   -   3:18.52 postgres: checkpointer(
  770  6148  6145   0  20  0  241756  21348 select   SsJ   -   2:02.83 postgres: background writer
  770  6149  6145   0  20  0  241756   7240 select   SsJ   -  16:36.80 postgres: walwriter(pos
  770  6150  6145   0  20  0   21980    876 select   SsJ   -   0:13.92 postgres: archiver   last w
  770  6151  6145   0  20  0   21980    980 select   SsJ   -   0:58.45 postgres: stats collector
  770  6152  6145   0  20  0  241756   1268 select   IsJ   -   0:02.07 postgres: logical replicati
  770 43315  6145   0  21  0  251844   7520 select   IsJ   -   1:07.74 postgres: admin postgres 19
  770 43317  6145   0  25  0  251764   8684 select   IsJ   -   1:28.89 postgres: admin bareos 192.
  770 43596  6145   0  20  0  245620   4476 select   IsJ   -   0:00.12 postgres: admin bareos 192.
  770 43761  6145   0  20  0  245620   4476 select   IsJ   -   0:00.15 postgres: admin bareos 192.
  770 90206  6145   0  52  0 1331256 219720 racct    DsJ   - 563:45.41 postgres: bareos bareos 192

PID 90206 is continuously growing. It is the unspecific, all-purpose
worker for the www.bareos.com backup tool, so it is a bit difficult to
figure out what precisely it does - but it tends to run rather simple,
straightforward queries, so it is unlikely to have dozens of "internal
sort operations and hash tables".

What I can say is that at times this worker is completely idle in
ClientRead, yet does not shrink in memory. Is this normal behaviour?

Here is a more dynamic picture: it continues to add 2048kB chunks (and
does not do noticeable paging):

  UID   PID  PPID CPU PRI NI     VSZ    RSS MWCHAN   STAT TT      TIME COMMAND
Mon May  4 13:33:09 CEST 2020
  770 90206  6145   0  91  0 1335352 226900 -        RsJ   - 569:09.19 postgres: bareos bareos SELECT (postgres)
Mon May  4 13:33:39 CEST 2020
  770 90206  6145   0  93  0 1335352 227696 -        RsJ   - 569:28.48 postgres: bareos bareos idle (postgres)
Mon May  4 13:34:09 CEST 2020
  770 90206  6145   0  92  0 1337400 228116 -        RsJ   - 569:47.46 postgres: bareos bareos SELECT (postgres)
Mon May  4 13:34:39 CEST 2020
  770 90206  6145   0  92  0 1337400 228596 -        RsJ   - 570:06.56 postgres: bareos bareos UPDATE (postgres)
Mon May  4 13:35:09 CEST 2020
  770 90206  6145   0  92  0 1337400 228944 -        RsJ   - 570:25.62 postgres: bareos bareos SELECT (postgres)
Mon May  4 13:35:40 CEST 2020
  770 90206  6145   0  52  0 1337400 229288 racct    DsJ   - 570:44.33 postgres: bareos bareos UPDATE (postgres)
Mon May  4 13:36:10 CEST 2020
  770 90206  6145   0  91  0 1337400 229952 -        RsJ   - 571:03.20 postgres: bareos bareos SELECT (postgres)
Mon May  4 13:36:40 CEST 2020
  770 90206  6145   0  52  0 1337400 223772 racct    DsJ   - 571:21.50 postgres: bareos bareos SELECT (postgres)
Mon May  4 13:37:10 CEST 2020
  770 90206  6145   0  91  0 1337400 224448 -        RsJ   - 571:40.63 postgres: bareos bareos idle (postgres)
Mon May  4 13:37:40 CEST 2020
  770 90206  6145   0  91  0 1339448 225464 -        RsJ   - 571:58.36 postgres: bareos bareos SELECT (postgres)
Mon May  4 13:38:10 CEST 2020
  770 90206  6145   0  52  0 1339448 215620 select   SsJ   - 572:14.24 postgres: bareos bareos idle (postgres)
Mon May  4 13:38:40 CEST 2020
  770 90206  6145   0  81  0 1339448 215320 -        RsJ   - 572:21.09 postgres: bareos bareos idle (postgres)
Mon May  4 13:39:10 CEST 2020


OS is FreeBSD 11.3-RELEASE-p8 r360175M i386
PostgreSQL 12.2 on i386-portbld-freebsd11.3, compiled by gcc9 (FreeBSD Ports 
Collection) 9.3.0, 32-bit

autovacuum is Disabled.

The memory-specific config is:
> shared_buffers = 200MB
> temp_buffers = 40MB
> work_mem = 80MB
> maintenance_work_mem = 250MB
> dynamic_shared_memory_type = posix
> random_page_cost = 2.0
> effective_cache_size = 1GB
(others are left at default)

I vaguely remember that there are means to take a closer look at what
is using the memory, but I do not recall the specifics. Some pointers
or ideas on how to proceed would be greatly appreciated (DTrace should
work). Processes will usually fail with OOM at this size, due to the
machine configuration - I'm waiting for that now (it is a very, very
old Pentium 3 machine ;) ).

cheerio,
PMc




Re: walreceiver termination

2020-05-04 Thread Justin King
Is there anyone who might be able to help troubleshoot this issue --
or at least give me something that would be helpful to look for?

https://www.postgresql.org/message-id/flat/CAGH8ccdWLLGC7qag5pDUFbh96LbyzN_toORh2eY32-2P1%3Dtifg%40mail.gmail.com
https://www.postgresql.org/message-id/flat/CANQ55Tsoa6%3Dvk2YkeVUN7qO-2YdqJf_AMVQxqsVTYJm0qqQQuw%40mail.gmail.com
https://dba.stackexchange.com/questions/116569/postgresql-docker-incorrect-resource-manager-data-checksum-in-record-at-46f-6

I'm not the first one to report something similar, and all the
complaints have a non-default filesystem in common -- particularly ZFS
(or btrfs, in the bottom case).  Is there anything more we can do here
to help narrow down this issue?  I'm happy to help, but I honestly
wouldn't even know where to begin.

Thanks-

Justin King
flightaware.com

On Thu, Apr 23, 2020 at 4:40 PM Justin King  wrote:
>
> On Thu, Apr 23, 2020 at 3:06 PM Tom Lane  wrote:
> >
> > Justin King  writes:
> > > I assume it would be related to the following:
> > > LOG:  incorrect resource manager data checksum in record at 2D6/C259AB90
> > > since the walreceiver terminates just after this - but I'm unclear
> > > what precisely this means.
> >
> > What it indicates is corrupt data in the WAL stream.  When reading WAL
> > after crash recovery, we assume that that indicates end of WAL.  When
> > pulling live data from a source server, it suggests some actual problem
> > ... but killing the walreceiver and trying to re-fetch the data might
> > be a reasonable response to that.  I'm not sure offhand what the startup
> > code thinks it's doing in this context.  It might either be attempting
> > to retry, or concluding that it's come to the end of WAL and it ought
> > to promote to being a live server.  If you don't see the walreceiver
> > auto-restarting then I'd suspect that the latter is happening.
> >
> > regards, tom lane
>
> walreceiver is definitely not restarting -- replication stops cold
> right at that segment.  I'm a little unclear where to go from here --
> is there additional info that would be useful?




Re: 12.2: Howto check memory-leak in worker?

2020-05-04 Thread Dilip Kumar
On Mon, May 4, 2020 at 5:43 PM Peter  wrote:
>
> Hi all,
>   I have something that looks a bit insane:
>
> # ps axl | grep 6145
>   UID   PID  PPID CPU PRI NI     VSZ    RSS MWCHAN   STAT TT      TIME COMMAND
>   770  6145     1   0  20  0  241756    868 select   SsJ   -   0:24.62 /usr/local/bin/postgres -D
>   770  6147  6145   0  23  0  243804 109784 select   IsJ   -   3:18.52 postgres: checkpointer(
>   770  6148  6145   0  20  0  241756  21348 select   SsJ   -   2:02.83 postgres: background writer
>   770  6149  6145   0  20  0  241756   7240 select   SsJ   -  16:36.80 postgres: walwriter(pos
>   770  6150  6145   0  20  0   21980    876 select   SsJ   -   0:13.92 postgres: archiver   last w
>   770  6151  6145   0  20  0   21980    980 select   SsJ   -   0:58.45 postgres: stats collector
>   770  6152  6145   0  20  0  241756   1268 select   IsJ   -   0:02.07 postgres: logical replicati
>   770 43315  6145   0  21  0  251844   7520 select   IsJ   -   1:07.74 postgres: admin postgres 19
>   770 43317  6145   0  25  0  251764   8684 select   IsJ   -   1:28.89 postgres: admin bareos 192.
>   770 43596  6145   0  20  0  245620   4476 select   IsJ   -   0:00.12 postgres: admin bareos 192.
>   770 43761  6145   0  20  0  245620   4476 select   IsJ   -   0:00.15 postgres: admin bareos 192.
>   770 90206  6145   0  52  0 1331256 219720 racct    DsJ   - 563:45.41 postgres: bareos bareos 192
>
> PID 90206 is continuously growing. It is the unspecific, all-purpose
> worker for the www.bareos.com backup tool, so it is a bit difficult to
> figure out what precisely it does - but it tends to run rather simple,
> straightforward queries, so it is unlikely to have dozens of "internal
> sort operations and hash tables".
>
> What I can say is that at times this worker is completely idle in
> ClientRead, yet does not shrink in memory. Is this normal behaviour?
>
> Here is a more dynamic picture: it continues to add 2048kB chunks (and
> does not do noticeable paging):
>
>   UID   PID  PPID CPU PRI NI     VSZ    RSS MWCHAN   STAT TT      TIME COMMAND
> Mon May  4 13:33:09 CEST 2020
>   770 90206  6145   0  91  0 1335352 226900 -        RsJ   - 569:09.19 postgres: bareos bareos SELECT (postgres)
> Mon May  4 13:33:39 CEST 2020
>   770 90206  6145   0  93  0 1335352 227696 -        RsJ   - 569:28.48 postgres: bareos bareos idle (postgres)
> Mon May  4 13:34:09 CEST 2020
>   770 90206  6145   0  92  0 1337400 228116 -        RsJ   - 569:47.46 postgres: bareos bareos SELECT (postgres)
> Mon May  4 13:34:39 CEST 2020
>   770 90206  6145   0  92  0 1337400 228596 -        RsJ   - 570:06.56 postgres: bareos bareos UPDATE (postgres)
> Mon May  4 13:35:09 CEST 2020
>   770 90206  6145   0  92  0 1337400 228944 -        RsJ   - 570:25.62 postgres: bareos bareos SELECT (postgres)
> Mon May  4 13:35:40 CEST 2020
>   770 90206  6145   0  52  0 1337400 229288 racct    DsJ   - 570:44.33 postgres: bareos bareos UPDATE (postgres)
> Mon May  4 13:36:10 CEST 2020
>   770 90206  6145   0  91  0 1337400 229952 -        RsJ   - 571:03.20 postgres: bareos bareos SELECT (postgres)
> Mon May  4 13:36:40 CEST 2020
>   770 90206  6145   0  52  0 1337400 223772 racct    DsJ   - 571:21.50 postgres: bareos bareos SELECT (postgres)
> Mon May  4 13:37:10 CEST 2020
>   770 90206  6145   0  91  0 1337400 224448 -        RsJ   - 571:40.63 postgres: bareos bareos idle (postgres)
> Mon May  4 13:37:40 CEST 2020
>   770 90206  6145   0  91  0 1339448 225464 -        RsJ   - 571:58.36 postgres: bareos bareos SELECT (postgres)
> Mon May  4 13:38:10 CEST 2020
>   770 90206  6145   0  52  0 1339448 215620 select   SsJ   - 572:14.24 postgres: bareos bareos idle (postgres)
> Mon May  4 13:38:40 CEST 2020
>   770 90206  6145   0  81  0 1339448 215320 -        RsJ   - 572:21.09 postgres: bareos bareos idle (postgres)
> Mon May  4 13:39:10 CEST 2020
>
>
> OS is FreeBSD 11.3-RELEASE-p8 r360175M i386
> PostgreSQL 12.2 on i386-portbld-freebsd11.3, compiled by gcc9 (FreeBSD Ports 
> Collection) 9.3.0, 32-bit
>
> autovacuum is Disabled.
>
> The memory-specific config is:
> > shared_buffers = 200MB
> > temp_buffers = 40MB
> > work_mem = 80MB
> > maintenance_work_mem = 250MB
> > dynamic_shared_memory_type = posix
> > random_page_cost = 2.0
> > effective_cache_size = 1GB
> (others are left at default)
>
> I vaguely remember that there are means to take a closer look at what
> is using the memory, but I do not recall the specifics. Some pointers
> or ideas on how to proceed would be greatly appreciated (DTrace should
> work). Processes will usually fail with OOM at this size, due to the
> machine configuration - I'm waiting for that now (it is a very, very
> old Pentium 3 machine ;) ).

One idea is that you can attach to the process with gdb and call
MemoryContextStats(TopMemoryContext).  This will show which memory
context is using how much memory.
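
For example, a sketch (the PID is the growing worker from above; this
needs a build with symbols, and the stats are printed to that
backend's stderr, which normally ends up in the server log, not in
gdb):

$ gdb -p 90206
(gdb) call MemoryContextStats(TopMemoryContext)
(gdb) detach
(gdb) quit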

Re: 12.2: Howto check memory-leak in worker?

2020-05-04 Thread Adrian Klaver

On 5/4/20 4:56 AM, Peter wrote:

Hi all,
   I have something that looks a bit insane:

# ps axl | grep 6145
  UID   PID  PPID CPU PRI NI     VSZ    RSS MWCHAN   STAT TT      TIME COMMAND
  770  6145     1   0  20  0  241756    868 select   SsJ   -   0:24.62 /usr/local/bin/postgres -D
  770  6147  6145   0  23  0  243804 109784 select   IsJ   -   3:18.52 postgres: checkpointer(
  770  6148  6145   0  20  0  241756  21348 select   SsJ   -   2:02.83 postgres: background writer
  770  6149  6145   0  20  0  241756   7240 select   SsJ   -  16:36.80 postgres: walwriter(pos
  770  6150  6145   0  20  0   21980    876 select   SsJ   -   0:13.92 postgres: archiver   last w
  770  6151  6145   0  20  0   21980    980 select   SsJ   -   0:58.45 postgres: stats collector
  770  6152  6145   0  20  0  241756   1268 select   IsJ   -   0:02.07 postgres: logical replicati
  770 43315  6145   0  21  0  251844   7520 select   IsJ   -   1:07.74 postgres: admin postgres 19
  770 43317  6145   0  25  0  251764   8684 select   IsJ   -   1:28.89 postgres: admin bareos 192.
  770 43596  6145   0  20  0  245620   4476 select   IsJ   -   0:00.12 postgres: admin bareos 192.
  770 43761  6145   0  20  0  245620   4476 select   IsJ   -   0:00.15 postgres: admin bareos 192.
  770 90206  6145   0  52  0 1331256 219720 racct    DsJ   - 563:45.41 postgres: bareos bareos 192

PID 90206 is continuously growing. It is the unspecific, all-purpose
worker for the www.bareos.com backup tool, so it is a bit difficult to
figure out what precisely it does - but it tends to run rather simple,
straightforward queries, so it is unlikely to have dozens of "internal
sort operations and hash tables".



Is there any relevant information in the bareos or Postgres logs?




OS is FreeBSD 11.3-RELEASE-p8 r360175M i386
PostgreSQL 12.2 on i386-portbld-freebsd11.3, compiled by gcc9 (FreeBSD Ports 
Collection) 9.3.0, 32-bit

autovacuum is Disabled.


Any particular reason for the above?



cheerio,
PMc





--
Adrian Klaver
adrian.kla...@aklaver.com




Thoughts on how to avoid a massive integer update.

2020-05-04 Thread Fehrle, Brian
Hi all,

This is a shot in the dark in hopes of finding a magic bullet to fix an
issue I have; I can’t personally think of any solution myself.

I have a database with hundreds of terabytes of data, where every table
has an integer column referencing a small table. For reasons that are
out of my control and that I cannot change, I NEED to update every
single row in all these tables, changing the integer value to a
different integer.

Since I have to deal with dead space, I can only do a couple of tables
at a time, then do a vacuum full after each one.
Another option is to build a new table with the new values, then drop
the old one and swap in the new; either way is very time consuming.

Initial tests suggest this effort will take several months to complete,
not to mention cause blocking issues on the tables being worked on.

Does anyone have any hackery ideas on how to achieve this in less time?
I was looking at possibly converting the integer column to another type
that would present the integer differently, like a hex value, but
everything still ends up requiring all the data to be re-written to
disk. In a well designed database (I didn’t design it :) ), I would
simply change the data in the referenced table (200 total rows);
however, the key being referenced isn’t just an arbitrary ID, it’s
actual ‘data’, and must be changed.

Thanks for any thoughts or ideas,

  *   Brian F


Re: Thoughts on how to avoid a massive integer update.

2020-05-04 Thread Adrian Klaver

On 5/4/20 2:32 PM, Fehrle, Brian wrote:

Hi all,

This is a shot in the dark in hopes of finding a magic bullet to fix an
issue I have; I can’t personally think of any solution myself.


I have a database with hundreds of terabytes of data, where every table
has an integer column referencing a small table. For reasons that are
out of my control and that I cannot change, I NEED to update every
single row in all these tables, changing the integer value to a
different integer.


Since I have to deal with dead space, I can only do a couple of tables
at a time, then do a vacuum full after each one.


Why?
A regular vacuum would mark the space as available.

More below.

Another option is to build a new table with the new values, then drop
the old one and swap in the new; either way is very time consuming.


Initial tests suggest this effort will take several months to complete,
not to mention cause blocking issues on the tables being worked on.


Does anyone have any hackery ideas on how to achieve this in less time?
I was looking at possibly converting the integer column to another type
that would present the integer differently, like a hex value, but
everything still ends up requiring all the data to be re-written to
disk. In a well designed database (I didn’t design it :) ), I would
simply change the data in the referenced table (200 total rows);
however, the key being referenced isn’t just an arbitrary ID, it’s
actual ‘data’, and must be changed.


I'm not following the above.

Could you show an example table relationship?




Thanks for any thoughts or ideas,

  * Brian F




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Thoughts on how to avoid a massive integer update.

2020-05-04 Thread Rob Sargent



On 5/4/20 3:32 PM, Fehrle, Brian wrote:


Hi all,

This is a shot in the dark in hopes of finding a magic bullet to fix
an issue I have; I can’t personally think of any solution myself.


I have a database with hundreds of terabytes of data, where every
table has an integer column referencing a small table. For reasons
that are out of my control and that I cannot change, I NEED to update
every single row in all these tables, changing the integer value to a
different integer.


Since I have to deal with dead space, I can only do a couple of tables
at a time, then do a vacuum full after each one.
Another option is to build a new table with the new values, then drop
the old one and swap in the new; either way is very time consuming.


Initial tests suggest this effort will take several months to
complete, not to mention cause blocking issues on the tables being
worked on.


Does anyone have any hackery ideas on how to achieve this in less
time? I was looking at possibly converting the integer column to
another type that would present the integer differently, like a hex
value, but everything still ends up requiring all the data to be
re-written to disk. In a well designed database (I didn’t design
it :) ), I would simply change the data in the referenced table
(200 total rows); however, the key being referenced isn’t just an
arbitrary ID, it’s actual ‘data’, and must be changed.


Thanks for any thoughts or ideas,

  * Brian F

Is the new value computable from the original value with a single
function? Could you cover your tables with views transforming the
column with said function?


No?  Then you'll need to divorce each table from that lookup table and
any foreign key relationships, to avoid all those lookups with every
update.
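
If such a function does exist, a minimal sketch of the view idea (all
names here are hypothetical):

CREATE FUNCTION remap(x integer) RETURNS integer AS $$
  SELECT x + 1000;  -- placeholder for the real mapping
$$ LANGUAGE sql IMMUTABLE;

-- big_table(ref_id, payload) stands in for one of the huge tables.
CREATE VIEW big_table_v AS
  SELECT remap(ref_id) AS ref_id, payload
  FROM big_table;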


Re: 12.2: Howto check memory-leak in worker?

2020-05-04 Thread Peter
On Mon, May 04, 2020 at 12:55:38PM -0700, Adrian Klaver wrote:

! > PID 90206 is continuously growing. It is the unspecific, all-purpose
! > worker for the www.bareos.com backup tool, so it is a bit difficult to
! > figure out what precisely it does - but it tends to run rather simple,
! > straightforward queries, so it is unlikely to have dozens of "internal
! > sort operations and hash tables".
! > 
! 
! Is there any relevant information in the bareos or Postgres logs?

No. Only the final exitus-in-tabula.
 
! > autovacuum is Disabled.
! 
! Any particular reason for above?

Yes. Disk spindown doesn't work otherwise, and I see no reason to have
a backup zone that is used only a few times a day spinning 24 hours.

BTW, I would greatly appreciate it if we would reconsider the need for
the server to read the postmaster.pid file every few seconds (probably
needed for something, I don't know).
That makes it necessary to set atime=off to get a spindown, and I
usually prefer to have atime=on so I can see what my stuff is
currently doing.
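
On ZFS, for reference, that is a one-liner (the dataset name is made
up):

zfs set atime=off tank/pgdata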


cheerio,
PMc




Re: 12.2: Howto check memory-leak in worker?

2020-05-04 Thread Thomas Munro
On Tue, May 5, 2020 at 10:13 AM Peter  wrote:
> BTW, I would greatly appreciate it if we would reconsider the need for
> the server to read the postmaster.pid file every few seconds (probably
> needed for something, I don't know).
> That makes it necessary to set atime=off to get a spindown, and I
> usually prefer to have atime=on so I can see what my stuff is
> currently doing.

That'd be this:

/*
 * Once a minute, verify that postmaster.pid hasn't been removed or
 * overwritten.  If it has, we force a shutdown.  This avoids having
 * postmasters and child processes hanging around after their database
 * is gone, and maybe causing problems if a new database cluster is
 * created in the same place.  It also provides some protection
 * against a DBA foolishly removing postmaster.pid and manually
 * starting a new postmaster.  Data corruption is likely to ensue from
 * that anyway, but we can minimize the damage by aborting ASAP.
 */
if (now - last_lockfile_recheck_time >= 1 * SECS_PER_MINUTE)
{
if (!RecheckDataDirLockFile())




Temporary table has become problematically persistent

2020-05-04 Thread Ricky Ramirez
Hello,

We have run into a strange situation with our database. A temporary table
was created some time ago and that session has since died. However, the
temporary table stuck around. It's been around long enough that postgres
stopped accepting writes to prevent transaction ID wraparound. Problem is,
it's a temporary table and the session that it's associated with is gone,
so we cannot vacuum it nor even drop the table. We even restarted postgres
and the temporary table was still there!

The table shows up in pg_class as a temporary table with zero tuples.
The table replicated over to other hosts. We were able to promote a
replica, run it in single-user mode, and drop the offending table. I
still have the original broken database available for debugging. Can I
provide any additional debugging information?
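
A query along these lines shows the leftover table and how close it is
to wraparound (a sketch against the standard catalogs):

SELECT n.nspname, c.relname, age(c.relfrozenxid) AS xid_age
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relpersistence = 't'   -- temporary tables
ORDER BY xid_age DESC;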

Thanks,

Ricky


Installing Postgis25_11

2020-05-04 Thread Clifford Snow
When I try to install PostGIS version 2.5 into a PostgreSQL 11
installation on Fedora 32, I get warnings that nothing provides these
packages:
  - nothing provides libproj.so.19()(64bit) needed by postgis25_11-2.5.4-1.f32.x86_64
  - nothing provides libSFCGAL.so.1()(64bit) needed by postgis25_11-2.5.4-1.f32.x86_64
  - nothing provides SFCGAL needed by postgis25_11-2.5.4-1.f32.x86_64
  - nothing provides gdal30-libs >= 3.0.4 needed by postgis25_11-2.5.4-1.f32.x86_64
  - nothing provides geos38 >= 3.8.1 needed by postgis25_11-2.5.4-1.f32.x86_64
  - nothing provides proj70 >= 7.0.0 needed by postgis25_11-2.5.4-1.f32.x86_64

The yum repository [1] doesn't contain these packages, and neither does
Fedora.

Any recommendation on how to proceed?

Thanks in advance,
Clifford

-- 
@osm_washington
www.snowandsnow.us
OpenStreetMap: Maps with a human touch


Re: Thoughts on how to avoid a massive integer update.

2020-05-04 Thread Rob Sargent



On 5/4/20 3:32 PM, Fehrle, Brian wrote:


Hi all,

This is a shot in the dark in hopes of finding a magic bullet to fix
an issue I have; I can’t personally think of any solution myself.


I have a database with hundreds of terabytes of data, where every
table has an integer column referencing a small table. For reasons
that are out of my control and that I cannot change, I NEED to update
every single row in all these tables, changing the integer value to a
different integer.


Since I have to deal with dead space, I can only do a couple of tables
at a time, then do a vacuum full after each one.
Another option is to build a new table with the new values, then drop
the old one and swap in the new; either way is very time consuming.


Initial tests suggest this effort will take several months to
complete, not to mention cause blocking issues on the tables being
worked on.


Does anyone have any hackery ideas on how to achieve this in less
time? I was looking at possibly converting the integer column to
another type that would present the integer differently, like a hex
value, but everything still ends up requiring all the data to be
re-written to disk. In a well designed database (I didn’t design
it :) ), I would simply change the data in the referenced table
(200 total rows); however, the key being referenced isn’t just an
arbitrary ID, it’s actual ‘data’, and must be changed.


Thanks for any thoughts or ideas,

  * Brian F



Here's my wife's solution: add a column to the small table, filled
with a duplicate of the original column.  Change the foreign keys of
the huge tables to reference the new column in the small table.  Then
update the original values to what you now need them to be.  (Not sure
how much re-index cost you'll see here, but non-zero, I'm guessing.)
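
A rough sketch of that recipe (table, column, and constraint names are
all made up; "small" is the 200-row lookup table, "big" one of the
huge tables):

ALTER TABLE small ADD COLUMN code_old integer;
UPDATE small SET code_old = code;                 -- duplicate the values
ALTER TABLE small ADD CONSTRAINT small_code_old_key UNIQUE (code_old);

-- Repoint the FK at the duplicate column; big.code itself is untouched.
-- (NOT VALID could be added to skip the immediate full-table check.)
ALTER TABLE big DROP CONSTRAINT big_code_fkey;
ALTER TABLE big ADD CONSTRAINT big_code_fkey
  FOREIGN KEY (code) REFERENCES small (code_old);

-- Now the "real" values can change without touching the huge tables.
UPDATE small SET code = code + 1000;              -- the actual remapping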


Your reports must now of course return the new value via joins to the
dictionary(?) table.  Views may be your best bet here.


Re: Temporary table has become problematically persistent

2020-05-04 Thread Tom Lane
Ricky Ramirez  writes:
> We have run into a strange situation with our database. A temporary table
> was created some time ago and that session has since died. However, the
> temporary table stuck around. It's been around long enough that postgres
> stopped accepting writes to prevent transaction ID wraparound. Problem is,
> it's a temporary table and the session that it's associated with is gone,
> so we cannot vacuum it nor even drop the table. We even restarted postgres
> and the temporary table was still there!

Hm, what PG version is this?  I thought we'd fixed that hazard quite some
while back.

regards, tom lane




Re: Temporary table has become problematically persistent

2020-05-04 Thread Ricky Ramirez
Forgive me for forgetting the basics: PG 11.2 on Ubuntu Bionic, amd64.
No extensions enabled.

On Mon, May 4, 2020 at 4:54 PM Tom Lane  wrote:

> Ricky Ramirez  writes:
> > We have run into a strange situation with our database. A temporary table
> > was created some time ago and that session has since died. However, the
> > temporary table stuck around. It's been around long enough that postgres
> > stopped accepting writes to prevent transaction ID wraparound. Problem
> is,
> > it's a temporary table and the session that it's associated with is gone,
> > so we cannot vacuum it nor even drop the table. We even restarted
> postgres
> > and the temporary table was still there!
>
> Hm, what PG version is this?  I thought we'd fixed that hazard quite some
> while back.
>
> regards, tom lane
>


Re: Installing Postgis25_11

2020-05-04 Thread Virendra Kumar
I guess it is the long route, but once, when I was setting up PostGIS
on one of our clusters, I had to install all of these components by
downloading them from their sources.

Regards,
Virendra

On Monday, May 4, 2020, 4:40:53 PM PDT, Clifford Snow  wrote:

> When I try to install PostGIS version 2.5 into a PostgreSQL 11
> installation on Fedora 32, I get warnings that nothing provides these
> packages:
>   - nothing provides libproj.so.19()(64bit) needed by postgis25_11-2.5.4-1.f32.x86_64
>   - nothing provides libSFCGAL.so.1()(64bit) needed by postgis25_11-2.5.4-1.f32.x86_64
>   - nothing provides SFCGAL needed by postgis25_11-2.5.4-1.f32.x86_64
>   - nothing provides gdal30-libs >= 3.0.4 needed by postgis25_11-2.5.4-1.f32.x86_64
>   - nothing provides geos38 >= 3.8.1 needed by postgis25_11-2.5.4-1.f32.x86_64
>   - nothing provides proj70 >= 7.0.0 needed by postgis25_11-2.5.4-1.f32.x86_64
>
> The yum repository [1] doesn't contain these packages, and neither
> does Fedora.
>
> Any recommendation on how to proceed?
>
> Thanks in advance,
> Clifford
>
> --
> @osm_washington
> www.snowandsnow.us
> OpenStreetMap: Maps with a human touch

COPY blocking \dt+?

2020-05-04 Thread pinker
Hi,
I'm running standard pgbench, and what's kind of strange is that "copy
pgbench_accounts from stdin" is blocking my other query, which is \dt+.
Does COPY hold any exclusive lock, or is there something wrong with my
system?
I'm using:

pgbench=> SELECT version();
-[ RECORD 1 ]-----------------------------------------------------
version | PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit



-[ RECORD 1 ]----+---------------------------------
datid            | 27483
datname          | pgbench
pid              | 931408
usesysid         | 14419
usename          | gucio
application_name | pgbench
client_addr      | 212.180.214.106
client_hostname  |
client_port      | 23041
backend_start    | 2020-05-05 00:47:12.182801+00
xact_start       | 2020-05-05 00:47:12.542264+00
query_start      | 2020-05-05 00:53:26.402305+00
state_change     | 2020-05-05 00:53:26.402305+00
wait_event_type  |
wait_event       |
state            | active
backend_xid      | 3919
backend_xmin     | 3919
query            | copy pgbench_accounts from stdin
backend_type     | client backend
-[ RECORD 2 ]----+---------------------------------
datid            | 27483
datname          | pgbench
pid              | 932736
usesysid         | 14419
usename          | gucio
application_name | psql
client_addr      | 212.180.214.106
client_hostname  |
client_port      | 8718
backend_start    | 2020-05-05 00:48:10.031429+00
xact_start       | 2020-05-05 00:56:34.324414+00
query_start      | 2020-05-05 00:56:34.324414+00
state_change     | 2020-05-05 00:56:34.324414+00
wait_event_type  | Lock
wait_event       | relation
state            | active
backend_xid      |
backend_xmin     | 3919
query            | SELECT n.nspname as "Schema",                        +
                 |   c.relname as "Name",                               +
                 |   CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' WHEN 'p' THEN 'table' WHEN 'I' THEN 'index' END as "Type",+
                 |   pg_catalog.pg_get_userbyid(c.relowner) as "Owner", +
                 |   pg_catalog.pg_size_pretty(pg_catalog.pg_table_size(c.oid)) as "Size",+
                 |   pg_catalog.obj_description(c.oid, 'pg_class') as "Description"+
                 | FROM pg_catalog.pg_class c                           +
                 |      LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace+
                 | WHERE c.relkind IN ('r','p','')                      +
                 |   AND n.nspname <> 'pg_catalog'                      +
                 |   AND n.nspname <> 'information_schema'

Re: Temporary table has become problematically persistent

2020-05-04 Thread Jerry Sievers
Ricky Ramirez  writes:

> Hello,
>
> We have run into a strange situation with our database. A temporary
> table was created some time ago and that session has since died.
> However, the temporary table stuck around. It's been around long
> enough that postgres stopped accepting writes to prevent
> transaction ID wraparound. Problem is, it's a temporary table and the
> session that it's associated with is gone, so we cannot vacuum it nor
> even drop the table. We even restarted postgres and the temporary
> table was still there!

What happened when you tried to drop the temp table?

>
> The table shows up in pg_class as a temporary table with zero tuples.
> The table replicated over to other hosts. We were able to promote a
> replica, run it in single-user mode, and drop the offending table. I
> still have the original broken database available for debugging. Can I
> provide any additional debugging information?
>
> Thanks,
>
> Ricky
>
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net




Re: COPY blocking \dt+?

2020-05-04 Thread David Rowley
On Tue, 5 May 2020 at 13:05, pinker  wrote:
> I'm running standard pgbench, and what's kind of strange is that "copy
> pgbench_accounts from stdin" is blocking my other query, which is \dt+.
> Does COPY hold any exclusive lock, or is there something wrong with my
> system?

COPY does not. However, pgbench does TRUNCATE the tables before doing
the COPY, so that's most likely what's causing your query to get
blocked.
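
If you want to confirm the blocker, something along these lines works
(pg_blocking_pids() exists in 9.6 and later):

SELECT pid, pg_blocking_pids(pid) AS blocked_by,
       wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;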

David




Re: COPY blocking \dt+?

2020-05-04 Thread pinker
Thank you, David.
So it would need to run inside a single transaction to cause the lock,
right? Do you know if pgbench opens a transaction?







Re: COPY blocking \dt+?

2020-05-04 Thread David Rowley
On Tue, 5 May 2020 at 14:50, pinker  wrote:
> So it would need to run inside a single transaction to cause the lock,
> right? Do you know if pgbench opens a transaction?

The TRUNCATE and COPY are done in the same transaction.  You can see
the code in 
https://github.com/postgres/postgres/blob/REL_11_6/src/bin/pgbench/pgbench.c#L3599
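
The observable effect is like this sketch; pgbench's initialization
does the equivalent internally:

BEGIN;
TRUNCATE pgbench_accounts;         -- takes an ACCESS EXCLUSIVE lock
COPY pgbench_accounts FROM STDIN;  -- runs in the same transaction
-- Until COMMIT, any other session that needs a lock on
-- pgbench_accounts (including \dt+'s size lookup) has to wait.
COMMIT;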

David