Re: How to run a task continuously in the background

2019-07-15 Thread Luca Ferrari
On Fri, Jul 12, 2019 at 7:06 AM Dirk Mika  wrote:
>
>
>
> A cron job will only run once a minute, not wake up every second.
>
>
>
> I would like to avoid external programs if possible. In the current Oracle 
> environment, there are potentially multiple schemas on a server in which 
> processing can be active. And processing can be started, monitored and 
> stopped from a client application. And only for the schema with which the 
> application is connected.
>

Creating a background worker that invokes a stored procedure once per
second could work, but this is not so simple to put in place.

Otherwise, use pg_cron with a function that performs a pg_sleep of one
second in a loop, along the lines of the sketch below.
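
Something like this, for instance (a rough, untested sketch; process_batch()
is a hypothetical function standing in for the real processing):

CREATE OR REPLACE FUNCTION run_every_second() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  -- note: the whole loop runs inside a single transaction
  FOR i IN 1 .. 60 LOOP
    PERFORM process_batch();  -- hypothetical: replace with the real work
    PERFORM pg_sleep( 1 );    -- wait one second between iterations
  END LOOP;
END;
$$;

-- with the pg_cron extension installed, schedule it once per minute:
-- SELECT cron.schedule('* * * * *', 'SELECT run_every_second()');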

Anyway, it seems to me you would be better off refactoring your solution: it
seems you need to process data when _new data_ arrives, not once per
second, so it sounds to me like a trigger could solve the problem (see the
sketch below).
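
A minimal sketch, assuming a hypothetical incoming_data table and a
hypothetical process_row() function that does the actual work:

CREATE OR REPLACE FUNCTION process_new_row() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  PERFORM process_row( NEW.id );  -- hypothetical per-row processing
  RETURN NEW;
END;
$$;

CREATE TRIGGER process_on_insert
  AFTER INSERT ON incoming_data
  FOR EACH ROW
  EXECUTE FUNCTION process_new_row();  -- EXECUTE PROCEDURE before PostgreSQL 11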

Luca




Re: help understanding pgbench results

2019-07-15 Thread Fabio Pardi
Hi Luca


(I tried to reproduce your tests, but I got similar results across different
checkpoint_completion_target settings.)

The rest is inline below:

On 12/07/2019 12:04, Luca Ferrari wrote:

> 
> shared_buffers = 1 GB
> checkpoint_timeout = 5 min
> 
> I've created a pgbench database as follows (around 4.5 GB):
> % pgbench -i -s 300 -F 100 --foreign-keys --unlogged-tables -h
> 127.0.0.1 -U luca pgbench
> 
> and I've tested three times (each time after a restart) with the following:
> % pgbench  -T 600  -j 4 -c 4  -h 127.0.0.1 -U luca -P 60  pgbench
> 
> 
> Since tables are unlogged, I was expecting not much difference when
> setting checkpoint_completion_target, but I got (average results):
> - checkpoint_completion_target = 0.1 ==> 755 tps
> - checkpoint_completion_target = 0.5 ==> 767 tps
> - checkpoint_completion_target = 0.9 ==> 681 tps

Unlogged tables are not written to the WAL, therefore checkpoints do not fit
into the picture (unless something else is writing data...).

> 
> so while there is not a big difference in the first two cases, it seems
> throttling I/O reduces the tps, and I don't get why. Please note that
> there is some small activity while benchmarking, and that's why I ran
> at least three tests for each setting.

It is not a good idea to have anything running in the background. 

Also, it is always a good idea to run tests multiple times, and I think that 3
is the bare minimum.
You want to make sure your tests are as reliable as possible, meaning that the
runs give similar results; therefore you might post all the results, not only
the averages, so people can give their own interpretation of the data.


Back to your question: your tests run for 10 minutes, and checkpoints happen
every 5, so we should expect to see 2 checkpoints per test, which might
influence your results. How long a checkpoint is spread over time is given by
checkpoint_completion_target (with checkpoint_timeout = 5min, a target of 0.9
spreads the writes over roughly 4.5 minutes, while 0.1 spreads them over
roughly 30 seconds).

Assuming that the 'background activity' writes data, a value of
(checkpoint_completion_target) 0.9 means that when your test starts, the system
might still be busy writing data from the previous checkpoint (which started
before your pgbench test was launched). That is less likely to happen with a
value of 0.1.



Maybe looking at the graphs (CPU, disk) of your server might point to something.
Also the postgres logs should be able to tell you more, e.g. when a checkpoint
starts and finishes, and how much it wrote.
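
One way to get those log lines (just a suggestion, assuming you can change the
server configuration) is to turn on log_checkpoints and reload:

ALTER SYSTEM SET log_checkpoints = on;
SELECT pg_reload_conf();
-- the server log will then report each checkpoint start and completion,
-- including how many buffers were written and the write/sync times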


I hope I gave you enough input to better understand what is going on.

regards,

fabio pardi




CRecordset::Open postgresql procedure call don't work

2019-07-15 Thread jeanclaude marzin
Hi,
I am migrating from MSSQL to PostgreSQL 11. I translated an MSSQL stored
procedure into a PostgreSQL one:

CREATE PROCEDURE procacp ()
LANGUAGE SQL
AS $$
SELECT tabjdbexploit.jdbeid, tabjdbexploit.jdbeproc,
    tabjdbexploit.jdbedate, tabjdbexploit.jdbetypemsg,
    tabjdbexploit.jdbeurg, tabjdbexploit.jdbeconfid,
    tabjdbexploit.jdbeetat, tabmsgacp.acpid,  
    tabmsgacp.acpnumserie,
    tabmsgacp.acpsignserv, tabmsgacp.acpnumligne,
    tabmsgacp.acpiaorigine, tabmsgacp.acpgdh,
    tabmsgacp.acperv,  tabmsgacp.acpcdu,
    tabmsgacp.acpdir, tabmsgacp.acppere,
    tabmsgacp.acpcomplement, tabmsgacp.acpsection

FROM tabjdbexploit INNER JOIN
    tabmsgacp ON tabjdbexploit.jdbeid = tabmsgacp.acpid

ORDER BY tabjdbexploit.jdbedate ASC
$$;


All seems OK in PgAdmin 4; the procedure is created.

I use ODBC and CRecordset in C++.

When I use the call strSQL = "{CALL procacp()}", it doesn't work:

Error message: ERROR: procacp() is a procedure

I don't know what to do

Thanks in advance for any assistance

Jean-Claude

Re: CRecordset::Open postgresql procedure call don't work

2019-07-15 Thread John McKown
On Mon, Jul 15, 2019 at 6:40 AM jeanclaude marzin 
wrote:

> Hi
> I migrate from MSSQL to postgresql 11. I translate MSSQL stored procedure
> to Postgresql one :
>
> CREATE PROCEDURE procacp ()
> LANGUAGE SQL
> AS $$
> SELECT tabjdbexploit.jdbeid, tabjdbexploit.jdbeproc,
> tabjdbexploit.jdbedate, tabjdbexploit.jdbetypemsg,
> tabjdbexploit.jdbeurg, tabjdbexploit.jdbeconfid,
> tabjdbexploit.jdbeetat, tabmsgacp.acpid,
> tabmsgacp.acpnumserie,
> tabmsgacp.acpsignserv, tabmsgacp.acpnumligne,
> tabmsgacp.acpiaorigine, tabmsgacp.acpgdh,
> tabmsgacp.acperv,  tabmsgacp.acpcdu,
> tabmsgacp.acpdir, tabmsgacp.acppere,
> tabmsgacp.acpcomplement, tabmsgacp.acpsection
>
> FROM tabjdbexploit INNER JOIN
> tabmsgacp ON tabjdbexploit.jdbeid = tabmsgacp.acpid
>
> ORDER BY tabjdbexploit.jdbedate ASC
> $$;
>
>
> All seems OK in PgAdmin 4, procedure is created
>
> I use ODBC and Crecorset in C++
>
> When i use the call strSQL = "{CALL procacp()}". It don't work :
>
> Message error : ERROR: procacp() is a procedure
>
> I don't know what to do
>
> Thanks in advance for any assistance
>
> Jean-Claude


I am fairly sure that you need to create a FUNCTION and not a PROCEDURE.
Functions return values; procedures do not. Why not try replacing the word
PROCEDURE with FUNCTION and give it another try?

ref: https://www.postgresql.org/docs/11/xfunc.html
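
Something along these lines might work (a rough sketch only; the column list
and types below are placeholders, since they are not shown here, and must
match the real tables):

CREATE OR REPLACE FUNCTION procacp()
RETURNS TABLE (jdbeid integer, jdbeproc text /* ..., remaining columns ... */)
LANGUAGE SQL
AS $$
SELECT tabjdbexploit.jdbeid, tabjdbexploit.jdbeproc  -- ..., remaining columns ...
FROM tabjdbexploit
INNER JOIN tabmsgacp ON tabjdbexploit.jdbeid = tabmsgacp.acpid
ORDER BY tabjdbexploit.jdbedate ASC
$$;

-- then call it from the client as a query:
-- SELECT * FROM procacp();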


-- 
We all agree on the necessity of compromise. We just can't agree on when
it's necessary to compromise. -- Larry Wall

Maranatha! <><
John McKown


Re: CRecordset::Open postgresql procedure call don't work

2019-07-15 Thread Pavel Stehule
Hi

On Mon, Jul 15, 2019, 13:40 jeanclaude marzin <jeanclaude.mar...@sfr.fr> wrote:

> Hi
> I migrate from MSSQL to postgresql 11. I translate MSSQL stored procedure
> to Postgresql one :
>
> CREATE PROCEDURE procacp ()
> LANGUAGE SQL
> AS $$
> SELECT tabjdbexploit.jdbeid, tabjdbexploit.jdbeproc,
> tabjdbexploit.jdbedate, tabjdbexploit.jdbetypemsg,
> tabjdbexploit.jdbeurg, tabjdbexploit.jdbeconfid,
> tabjdbexploit.jdbeetat, tabmsgacp.acpid,
> tabmsgacp.acpnumserie,
> tabmsgacp.acpsignserv, tabmsgacp.acpnumligne,
> tabmsgacp.acpiaorigine, tabmsgacp.acpgdh,
> tabmsgacp.acperv,  tabmsgacp.acpcdu,
> tabmsgacp.acpdir, tabmsgacp.acppere,
> tabmsgacp.acpcomplement, tabmsgacp.acpsection
>
> FROM tabjdbexploit INNER JOIN
> tabmsgacp ON tabjdbexploit.jdbeid = tabmsgacp.acpid
>
> ORDER BY tabjdbexploit.jdbedate ASC
> $$;
>
>
>
It cannot work - PostgreSQL procedures are like Oracle's procedures: they
cannot return a result set. Only OUT variables can be changed.
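
A minimal sketch of that, with a hypothetical count_exploit_rows procedure
(note that in PostgreSQL 11 a procedure can only use INOUT parameters; plain
OUT parameters for procedures arrived in later releases):

CREATE PROCEDURE count_exploit_rows(INOUT n bigint DEFAULT 0)
LANGUAGE plpgsql AS $$
BEGIN
  SELECT count(*) INTO n FROM tabjdbexploit;  -- the INOUT value is returned to the caller
END;
$$;

CALL count_exploit_rows( NULL );  -- returns a single row holding the new value of n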

Regards

Pavel


>
>
>
> All seems OK in PgAdmin 4, procedure is created
>
> I use ODBC and Crecorset in C++
>
> When i use the call strSQL = "{CALL procacp()}". It don't work :
>
> Message error : ERROR: procacp() is a procedure
>
> I don't know what to do
>
> Thanks in advance for any assistance
>
> Jean-Claude


Re: help understanding pgbench results

2019-07-15 Thread Luca Ferrari
On Mon, Jul 15, 2019 at 1:35 PM Fabio Pardi  wrote:
> unlogged tables are not written to WAL, therefore checkpoints do not fit into 
> the picture (unless something else is writing data..).

That's my thought too: I was not expecting any big change in tps due
to checkpoint_completion_target on unlogged tables.

> It is not a good idea to have anything running in the background.

Yes, I know, but the activity in the database is a task importing data
on a schedule, always importing the same number of tuples
(and therefore the same data size). In other words, it is a very
constant and predictable workload.

>
> Also is always a good idea to run tests multiple times, and I think that 3 is 
> the bare minimum.
> You want to make sure your tests are as reliable as possible, means having 
> similar results between each other, therefore you might post all the results, 
> not only the average, so people can give their interpretation of the data.
>

I'm trying to prepare a virtual machine to run more tests in a
completely isolated environment.
But I was not trying to benchmark the database, rather to understand
what caused the different tps in such an environment.


> Assuming that the 'background activity' writes data, a value of 
> (checkpoint_completion_target) 0.9 means that when your test starts, the 
> system might be still busy in writing data from the previous checkpoint 
> (which started before your pgbench test was launched). That is less likely to 
> happen with a value of 0.1

Uhm... but in the logged-table tests a value of 0.9 increases the tps,
which, as far as I understand, is in contrast with what you are stating.

Anyway, I'll test more and report back some more results.

Thanks,
Luca




after restore the size of the database is increased

2019-07-15 Thread Luca Ferrari
Hi all,
this should be trivial, but if I dump and restore the very same
database, the restored one is bigger than the original one.
I vacuumed the database foo, then dumped and restored it into bar,
and the latter, even when vacuumed, remains bigger than the original
one.
No other activity was running on the cluster.

What am I missing here?

% vacuumdb --full foo
vacuumdb: vacuuming database "foo"

% pg_dump -Fd -f backup_foo.d -U postgres foo

% createdb bar
% pg_restore -Fd backup_foo.d -U postgres -d bar

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2686571167
pg_database_size | 2690212355

% vacuumdb --full bar
vacuumdb: vacuuming database "bar"

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2686571167
pg_database_size | 2688193183

% psql -c 'select version();' -U postgres template1

version
-
 PostgreSQL 11.3 on amd64-portbld-freebsd12.0, compiled by FreeBSD
clang version 6.0.1 (tags/RELEASE_601/final 335540) (based on LLVM
6.0.1), 64-bit
(1 row)




Re: help understanding pgbench results

2019-07-15 Thread Fabio Pardi



On 15/07/2019 15:14, Luca Ferrari wrote:

>> Assuming that the 'background activity' writes data, a value of 
>> (checkpoint_completion_target) 0.9 means that when your test starts, the 
>> system might be still busy in writing data from the previous checkpoint 
>> (which started before your pgbench test was launched). That is less likely 
>> to happen with a value of 0.1
> 
> Uhm...but in the logged table tests a value of 0.9 increases the tps,
> that as far as I understand is in contrast with what you are stating.

What I stated is valid for unlogged tables (a background checkpoint makes your
pgbench results 'dirty').

When you talk about logged tables, you actually want to spread the checkpoint
over time: the more it is spread, the better the performance. But here,
probably, the checkpoint has a lot to write compared to the data produced by
the background job (and, maybe, checkpoints are happening more frequently?).


> 
> Anyway, I'll test more and report back some more results.

good, let us know and do not forget to provide the log lines produced by the 
checkpoints too.

regards,

fabio pardi




migration of a logical replication configuration

2019-07-15 Thread ROS Didier
Hi
   I would like to know the impact of migrating the source PostgreSQL
cluster of a logical replication configuration from 10 to 11.
Should we also migrate the target PostgreSQL cluster?
   Or is it possible to set up logical replication from a PostgreSQL
11 instance to a PostgreSQL 10 instance?

   Thanks in advance

Best regards
Didier ROS







Re: after restore the size of the database is increased

2019-07-15 Thread Adrian Klaver

On 7/15/19 6:21 AM, Luca Ferrari wrote:

Hi all,
this should be trivial, but if I dump and restore the very same
database, the restored one is bigger than the original one.
I vacuumed the database foo, then dumped and restored it into bar,
and the latter, even when vacuumed, remains bigger than the original
one.
No other activity was running on the cluster.

What am I missing here?

% vacuumdb --full foo
vacuumdb: vacuuming database "foo"

% pg_dump -Fd -f backup_foo.d -U postgres foo

% createdb bar
% pg_restore -Fd backup_foo.d -U postgres -d bar

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2686571167
pg_database_size | 2690212355

% vacuumdb --full bar
vacuumdb: vacuuming database "bar"

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2686571167
pg_database_size | 2688193183


What does \l+ show?



% psql -c 'select version();' -U postgres template1

version
-
  PostgreSQL 11.3 on amd64-portbld-freebsd12.0, compiled by FreeBSD
clang version 6.0.1 (tags/RELEASE_601/final 335540) (based on LLVM
6.0.1), 64-bit
(1 row)








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: after restore the size of the database is increased

2019-07-15 Thread Peter Geoghegan
On Mon, Jul 15, 2019 at 6:22 AM Luca Ferrari  wrote:
> What am I missing here?

Sometimes B-Tree indexes can be *larger* after a REINDEX (or after
they're recreated with a CREATE INDEX). It's not that common, but it
does happen. There isn't actually a very large size difference here,
so it seems worth comparing index size in detail.
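
For instance (just a suggestion, not something from this thread), a query like
the following, run in both databases and diffed, would show which indexes grew:

SELECT c.relname AS index_name,
       pg_relation_size( c.oid ) AS size_bytes
FROM pg_class c
WHERE c.relkind = 'i'
ORDER BY c.relname;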

-- 
Peter Geoghegan




RE: Issue: Creating Symlink for data directory of postgresql in CentOS7

2019-07-15 Thread Chatterjee, Shibayan
I found a workaround for the problem:
After changing the access permissions and ownership of the symlinked data
directory, I logged in as the 'postgres' user:

>>> sudo -i -u postgres

Then I force-started postgres from there as shown below. The execution doesn't
return to the prompt, so I had it run in the background. Postgres went back to
running normally.

>>> /usr/bin/postgres -D /var/lib/pgsql/data &

Thanks !
-Shibayan

-Original Message-
From: Laurenz Albe 
Sent: Sunday, July 14, 2019 9:05 AM
To: Adrian Klaver ; Chatterjee, Shibayan 
; pgsql-gene...@postgresql.org
Subject: Re: Issue: Creating Symlink for data directory of postgresql in CentOS7

Adrian Klaver wrote:
> On 7/13/19 4:30 AM, Laurenz Albe wrote:
> > On Fri, 2019-07-12 at 18:08 +, Chatterjee, Shibayan wrote:
> > > > Jul 12 10:00:51 postgresql-check-db-dir: "/data/postgresql/data" is 
> > > > missing or empty.
> > > > Jul 12 10:00:51 postgresql-check-db-dir: Use "postgresql-setup initdb" 
> > > > to initialize the database cluster.
> > > > Jul 12 10:00:51 postgresql-check-db-dir: See 
> > > > /usr/share/doc/postgresql-9.2.24/README.rpm-dist for more information.
> > >
> > > For sure there's all the necessary files in '/data/postgresql/data'.
> > > The startup process cannot read it, because of sym link.
> >
> > Well, where is the source for this fabled "postgresql-check-db-dir"?
>
> https://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/9.
> 2/postgresql/EL-7/postgresql92-check-db-dir;h=550b31770cabacf32cbb1b8f
> 272e8ce305fc9908;hb=HEAD

Thanks.  I read this:

  30 # Check for the PGDATA structure
  31 if [ -f "$PGDATA/PG_VERSION" ] && [ -d "$PGDATA/base" ]
  32 then
 [...]
  49 else
  50 # No existing PGDATA! Warn the user to initdb it.
  51 echo $"\"$PGDATA\" is missing or empty."
  52 echo $"Use \"/usr/pgsql-9.2/bin/postgresql92-setup initdb\" to 
initialize the database cluster."
  53 echo $"See $PGDOCDIR/README.rpm-dist for more information."
  54 exit 1
  55 fi

That means that either there was no regular file 
/data/postgresql/data/PG_VERSION or no directory /data/postgresql/data/base, or 
that the user running the script lacked the permissions to access them.

Since you say that there was a regular data directory there, that would point 
to permission problems.

With that information, it should be simple to debug the problem.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com




Tablespace column value null on select * from pg_tables

2019-07-15 Thread Alex Williams
Hi,

Server Version 9.5

I found this old thread on something similar to the results I'm getting: 
https://www.postgresql.org/message-id/1308615192339-4508750.post%40n5.nabble.com

But in my case, I have a database that's in a user-defined tablespace (data2),
and all the tables/indexes there are also in data2, and I want to select into a
table the list of all the tables and the tablespaces they are in for that
database... when doing this:

SELECT distinct tablespace
FROM pg_tables;

I get 2 rows: null and pg_global (I would expect null for pg_default, but if a
table is in a user-defined tablespace, shouldn't it show that tablespace, in my
case data2?)

or

SELECT distinct tablespace
FROM pg_indexes

I get 3 rows: null, pg_global and pg_default

and this: SELECT * FROM pg_tablespace;

I get 3 rows: pg_default, pg_global and data2.

Using pgadmin, getting properties for the DB / tables, it shows data2.

What I want to do is move all the tables / indexes from data2 to pg_default (we 
added more space to the pg_default mount.)

Now, I did a pg_dump/restore for one database, which took a long time, and we
now know the process for that; so for the next database we want to do it with
the following commands instead:

ALTER DATABASE mydatabase SET TABLESPACE pg_default;
alter table all in tablespace data2 set tablespace pg_default;

But what I'm trying to accomplish here is this: aside from checking the
filesystem (e.g. df -h) to see that the data was moved, or checking the
properties of each table (too many), I just want to run a query that inserts
into a table all the tables and their tablespace names, and then, after the two
commands above (a third will move the indexes, see below), run the query again
and verify that everything has moved from data2 to pg_default.
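
For the indexes, the command I plan to use (to be double-checked against the
9.5 docs) is:

alter index all in tablespace data2 set tablespace pg_default;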

Thanks for your help in advance.

Alex

Sent with ProtonMail (https://protonmail.com) Secure Email.

Re: Tablespace column value null on select * from pg_tables

2019-07-15 Thread Adrian Klaver

On 7/15/19 11:35 AM, Alex Williams wrote:

Hi,

Server Version 9.5

I found this old thread on something similar to the results I'm getting: 
https://www.postgresql.org/message-id/1308615192339-4508750.post%40n5.nabble.com


But in my case, I have a database that's in a user-defined tablespace 
(data2) and all the tables/indexes there are also in data2 and I want to 
do a select into a table the results of all the tables / tablespaces 
they are in that database...when doing this:



SELECT distinct tablespace
FROM pg_tables;

I get 2 rows: null and pg_global (I think to expect null for pg_default, 
but if the table is in a user-defined tablespace, should we expect it to 
show it, in my case, data2?)


Not if you did:

CREATE DATABASE name ... [ TABLESPACE [=] tablespace_name ]

ALTER DATABASE name SET TABLESPACE new_tablespace

This makes the tablespace the default for the database and the default 
shows up as null in pg_tables:


https://www.postgresql.org/docs/9.5/view-pg-tables.html

tablespace | name | pg_tablespace.spcname | Name of tablespace containing
table (null if default for database)




or

SELECT distinct tablespace
FROM pg_indexes

I get 3 rows: null, pg_global and pg_default

and this: SELECT * FROM pg_tablespace;

I get 3 rows: pg_default, pg_global and data2.

Using pgadmin, getting properties for the DB / tables, it shows data2.

What I want to do is move all the tables / indexes from data2 to 
pg_default (we added more space to the pg_default mount.)


Now, I did a pg_dump/restore for one database which took a long time and 
we now know the process for that, so on the next database we have we 
want to do it where we use the following commands:


ALTER DATABASE mydatabase SET TABLESPACE pg_default;




alter table all in tablespace data2 set tablespace pg_default;

But, what I'm trying to accomplish here is, aside from checking the 
filesystem, like df- h, to see it was moved or properties on each table 
(too many) I just want to run a query that will insert into a table all 
the tables and their tablespace names and when the above two commands 
(3rd will be moving indexes) run the query again and verify everything 
has moved from data2 to pg_default.


Thanks for your help in advance.

Alex


Sent with ProtonMail  Secure Email.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Tablespace column value null on select * from pg_tables

2019-07-15 Thread Alex Williams
Hi Adrian,

"Not if you did: CREATE DATABASE name ... [ TABLESPACE [=] tablespace_name ] 
ALTER DATABASE name SET TABLESPACE new_tablespace This makes the tablespace the 
default for the database and the default shows up as null in pg_tables: 
https://www.postgresql.org/docs/9.5/view-pg-tables.html tablespace name 
pg_tablespace.spcname Name of tablespace containing table (null if default for 
database)"

Thanks, but I didn't do that. I have an existing database that's on data2, and I
haven't run any command yet to change the db tablespace. When the db was created
two years ago, it went directly to data2, along with all tables and indexes. The
second command is the one I want to run, but I haven't run it yet because I
first want to get the tablespaces of the tables in the db inserted into a table,
so I can make sure all the tables in data2 go into pg_default by running the
query again and seeing what tablespace they are in (at that point the tablespace
name should probably be null, signifying pg_default).

PgAdmin seems to have the proper query to get the db and table tablespace names
(right-click table / select Properties), but the queries I've used from various
sources like Stack Overflow don't return the correct tablespace name.

Thanks,

Alex


Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Monday, July 15, 2019 3:22 PM, Adrian Klaver  
wrote:

> On 7/15/19 11:35 AM, Alex Williams wrote:
>
> > Hi,
> > Server Version 9.5
> > I found this old thread on something similar to the results I'm getting:
> > https://www.postgresql.org/message-id/1308615192339-4508750.post%40n5.nabble.com
> > But in my case, I have a database that's in a user-defined tablespace
> > (data2) and all the tables/indexes there are also in data2 and I want to
> > do a select into a table the results of all the tables / tablespaces
> > they are in that database...when doing this:
> > SELECT distinct tablespace
> > FROM pg_tables;
> > I get 2 rows: null and pg_global (I think to expect null for pg_default,
> > but if the table is in a user-defined tablespace, should we expect it to
> > show it, in my case, data2?)
>
> Not if you did:
>
> CREATE DATABASE name ... [ TABLESPACE [=] tablespace_name ]
>
> ALTER DATABASE name SET TABLESPACE new_tablespace
>
> This makes the tablespace the default for the database and the default
> shows up as null in pg_tables:
>
> https://www.postgresql.org/docs/9.5/view-pg-tables.html
>
> tablespace name pg_tablespace.spcname Name of tablespace containing
> table (null if default for database)
>
> > or
> > SELECT distinct tablespace
> > FROM pg_indexes
> > I get 3 rows: null, pg_global and pg_default
> > and this: SELECT * FROM pg_tablespace;
> > I get 3 rows: pg_default, pg_global and data2.
> > Using pgadmin, getting properties for the DB / tables, it shows data2.
> > What I want to do is move all the tables / indexes from data2 to
> > pg_default (we added more space to the pg_default mount.)
> > Now, I did a pg_dump/restore for one database which took a long time and
> > we now know the process for that, so on the next database we have we
> > want to do it where we use the following commands:
> > ALTER DATABASE mydatabase SET TABLESPACE pg_default;
>
> > alter table all in tablespace data2 set tablespace pg_default;
> > But, what I'm trying to accomplish here is, aside from checking the
> > filesystem, like df- h, to see it was moved or properties on each table
> > (too many) I just want to run a query that will insert into a table all
> > the tables and their tablespace names and when the above two commands
> > (3rd will be moving indexes) run the query again and verify everything
> > has moved from data2 to pg_default.
> > Thanks for your help in advance.
> > Alex
> > Sent with ProtonMail https://protonmail.com Secure Email.
>
> --
>
> Adrian Klaver
> adrian.kla...@aklaver.com






Re: Tablespace column value null on select * from pg_tables

2019-07-15 Thread Thomas Kellerer

Alex Williams schrieb am 15.07.2019 um 20:35:

But in my case, I have a database that's in a user-defined tablespace
(data2) and all the tables/indexes there are also in data2 and I want
to do a select into a table the results of all the tables /
tablespaces they are in that database...when doing this:

SELECT distinct tablespace FROM pg_tables;


I get 2 rows: null and pg_global (I think to expect null for
pg_default, but if the table is in a user-defined tablespace, should
we expect it to show it, in my case, data2?)



If data2 is the default tablespace of the database, then this is expected.

The tablespace column is null in pg_tables if the table is located in the
default tablespace of the database.

See here:

https://www.postgresql.org/message-id/flat/15901-e5cfe2dd7298a3a4%40postgresql.org

And the answer on SO:

https://stackoverflow.com/a/56950950

Thomas




Re: Tablespace column value null on select * from pg_tables

2019-07-15 Thread Adrian Klaver

On 7/15/19 12:53 PM, Alex Williams wrote:

Hi Adrian,

"Not if you did: CREATE DATABASE name ... [ TABLESPACE [=] tablespace_name ] ALTER 
DATABASE name SET TABLESPACE new_tablespace This makes the tablespace the default for the 
database and the default shows up as null in pg_tables: 
https://www.postgresql.org/docs/9.5/view-pg-tables.html tablespace name 
pg_tablespace.spcname Name of tablespace containing table (null if default for 
database)"

Thanks, but I didn't do that. I have an existing database that's on data2 and haven't ran any command yet to change the db tablespace. When the db was created two years ago, it 


So someone else ran the command; the end result is the same: data2 is the
default tablespace for the db, so you get NULL in the tablespace column
in pg_tables.


went directly to data2 along with any table/indexes to data2. The second 
command is the command I want to run but haven't ran it yet since I want 
to get the tablespaces for the tables on the db inserted into a table 
prior, so I can make sure all the tables in data2 go into pg_default by 
running the query again and seeing what tablespace they are in (at this 
point, it should probably be null for the tablespace name signifying 
it's pg_default.)


That is where you are getting confused: there are two defaults in play,
pg_default and the db default.


pg_default:

https://www.postgresql.org/docs/9.5/manage-ag-tablespaces.html
"Two tablespaces are automatically created when the database cluster is 
initialized. The pg_global tablespace is used for shared system 
catalogs. The pg_default tablespace is the default tablespace of the 
template1 and template0 databases (and, therefore, will be the default 
tablespace for other databases as well, unless overridden by a 
TABLESPACE clause in CREATE DATABASE)."



db default:

From same link above.

"The tablespace associated with a database is used to store the system 
catalogs of that database. Furthermore, it is the default tablespace 
used for tables, indexes, and temporary files created within the 
database, if no TABLESPACE clause is given and no other selection is 
specified by default_tablespace or temp_tablespaces (as appropriate). If 
a database is created without specifying a tablespace for it, it uses 
the same tablespace as the template database it is copied from."


In either case that tablespace becomes the default for the db and shows 
up as NULL in pg_tables.





PgAdmin seems has the proper query to get the db and table tablespace names 
(right click table/select properties), but the queries I've used from various 
sources like stackoverflow don't provide the correct named tablespace.


It is probably doing something like:

SELECT datname, spcname
FROM pg_database AS pd
JOIN pg_tablespace AS pt ON pd.dattablespace = pt.oid;
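
To get the effective tablespace of every table and index, something like this
should work (an untested sketch, resolving the NULL / "default" case back to
the db's own default tablespace):

SELECT c.relname,
       COALESCE( ts.spcname, dts.spcname ) AS effective_tablespace
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_tablespace ts ON ts.oid = c.reltablespace
JOIN pg_database d ON d.datname = current_database()
JOIN pg_tablespace dts ON dts.oid = d.dattablespace
WHERE c.relkind IN ('r', 'i', 'm')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast');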



Thanks,

Alex


Sent with ProtonMail Secure Email.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: after restore the size of the database is increased

2019-07-15 Thread Luca Ferrari
On Mon, Jul 15, 2019 at 7:07 PM Adrian Klaver  wrote:
> What does \l+ show?

The same as pg_size_pretty:

foo=# \l+
                                        List of databases
 Name | Owner | Encoding  | Collate | Ctype | Access privileges |  Size   | Tablespace | Description
------+-------+-----------+---------+-------+-------------------+---------+------------+-------------
 bar  | luca  | SQL_ASCII | C       | C     |                   | 2566 MB | pg_default |
 foo  | luca  | SQL_ASCII | C       | C     |                   | 2562 MB | pg_default |


foo=# SELECT pg_size_pretty( pg_database_size( 'foo' ) ) AS foo,
pg_size_pretty( pg_database_size( 'bar' ) ) AS bar;
-[ RECORD 1 ]
foo | 2562 MB
bar | 2566 MB

Luca




Re: after restore the size of the database is increased

2019-07-15 Thread Luca Ferrari
On Mon, Jul 15, 2019 at 7:21 PM Peter Geoghegan  wrote:
> Sometimes B-Tree indexes can be *larger* after a REINDEX (or after
> they're recreated with a CREATE INDEX). It's not that common, but it
> does happen. There isn't actually a very large size difference here,
> so it seems worth comparing index size in detail.

A very good guess: after reindexing, the size of the _restored_ database
has shrunk and is now smaller than the original one (as I would have
expected in the first place):

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2685776543
pg_database_size | 2690269699

% vacuumdb --full foo
vacuumdb: vacuuming database "foo"
% psql -U postgres -c "REINDEX DATABASE foo;" foo
REINDEX
% vacuumdb --full bar
vacuumdb: vacuuming database "bar"
% psql -U postgres -c "REINDEX DATABASE bar;" bar
REINDEX

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2685817503
pg_database_size | 2685624835


However, I still don't get why the size is not the same after such a
vacuum + reindex. If my brain serves me well, in this case we have
less than 0.2 MB of difference, which compared to the database size
(~2.5 GB) is more than acceptable. Nevertheless, I would have thought
that a restored database would always be smaller than the original
one.

Luca