psql format result as markdown tables

2018-01-13 Thread Nicolas Paris
Hello

I wonder if someone knows how to configure psql to output results as
markdown tables. 
Then instead of :

SELECT * FROM (values(1,2),(3,4)) as t;
 column1 | column2 
---------+---------
       1 |       2
       3 |       4

Get the result as :
SELECT * FROM (values(1,2),(3,4)) as t;
| column1 | column2 |
|---------|---------|
|       1 |       2 |
|       3 |       4 |

Thanks in advance



Re: Missing WAL file after running pg_rewind

2018-01-13 Thread Michael Paquier
On Fri, Jan 12, 2018 at 09:44:25PM +, Dylan Luong wrote:
> The file exists in the archive directory of the old master, but it is
> for the previous timeline, i.e. 5 and not 6, i.e.
> 0005038300BE. Can I just rename the file to timeline 6? I.e.
> 0006038300BE

What are the contents of the history file for this new timeline? You are
looking at 0006.history which should be archived as well. You could
do that assuming that WAL has forked on this segment at promotion as
both segments would have the same contents up to the point where WAL has
forked.
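
As a rough sketch of what that would look like (placeholders only:
ARCHIVE_DIR is your archive path and SEGMENT stands for the segment name
minus its 8-digit timeline prefix):

cat "$ARCHIVE_DIR/00000006.history"   # confirm where timeline 6 forked off 5
# If the history file shows the fork inside this segment, both timelines
# share its contents up to that point, so the timeline-5 copy can be made
# available under the timeline-6 name:
cp "$ARCHIVE_DIR/00000005${SEGMENT}" "$ARCHIVE_DIR/00000006${SEGMENT}"
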
--
Michael


signature.asc
Description: PGP signature


Postgres 9.4 using primary key index in almost all queries leading to degraded performance

2018-01-13 Thread Rahul Sharma
Hi,

We're currently testing an upgrade of our Postgres database from 9.3.14 to
9.4.9. We are using Amazon RDS. We've encountered an issue in the testing
phase where, after the upgrade, CPU utilization hovers around 100%. We dug
deep to find that queries which executed in a few milliseconds on
Postgres 9.3.14 are taking a lot of time to complete on 9.4.9. We've
noticed a pattern under which the query planner (in almost all queries) is
using the primary key index for scanning the table while ignoring other
alternatives like explicit sorting or using another, more efficient index
present on the table.

I'm sharing a few queries along with the query plans of Postgres 9.3 and
9.4.

*1. First Query:*

EXPLAIN ANALYZE SELECT  user_device_identifiers.id as
user_device_identifier_id, device_identifiers.partner_id as partner_id FROM
user_device_identifiers INNER JOIN device_identifiers ON
device_identifiers.id = user_device_identifiers.device_identifier_id WHERE
(user_device_identifiers.status = 1 AND
user_device_identifiers.authentication_token
= '2y_mHHruh69pPpNo2GFn') ORDER BY user_device_identifiers.id ASC LIMIT 1;

Query plan of Postgres 9.3:

 Limit  (cost=17.06..17.06 rows=1 width=8) (actual time=5.654..5.655 rows=1
loops=1)
   ->  Sort  (cost=17.06..17.06 rows=1 width=8) (actual time=5.653..5.653
rows=1 loops=1)
 Sort Key: user_device_identifiers.id
 Sort Method: quicksort  Memory: 25kB
 ->  Nested Loop  (cost=1.00..17.05 rows=1 width=8) (actual
time=5.637..5.640 rows=1 loops=1)
   ->  Index Scan using
index_user_device_identifiers_on_authentication_token
on user_device_identifiers  (cost=0.56..8.58 rows=1 width=8) (actual
time=3.114..3.114 rows=1 loops=1)
 Index Cond: ((authentication_token)::text =
'2y_mHHruh69pPpNo2GFn'::text)
 Filter: (status = 1)
   ->  Index Scan using device_identifiers_pkey on
device_identifiers  (cost=0.43..8.46 rows=1 width=8) (actual
time=2.519..2.520 rows=1 loops=1)
 Index Cond: (id = user_device_identifiers.
device_identifier_id)
 Total runtime: 5.686 ms
(11 rows)

Query plan of Postgres 9.4:

Limit  (cost=0.87..4181.94 rows=1 width=8) (actual
time=93014.991..93014.992 rows=1 loops=1)
 ->  Nested Loop  (cost=0.87..1551177.78 rows=371 width=8) (actual
time=93014.990..93014.990 rows=1 loops=1)
   ->  Index Scan using a_pkey on a  (cost=0.43..1548042.20 rows=371
width=8) (actual time=93014.968..93014.968 rows=1 loops=1)
 Filter: ((col_int = 1) AND ((col_text)::text = 'pqrs'::text))
 Rows Removed by Filter: 16114217
   ->  Index Scan using b_pkey on b  (cost=0.43..8.44 rows=1 width=8)
(actual time=0.014..0.014 rows=1 loops=1)
 Index Cond: (id = a.b_id)
Planning time: 0.291 ms
Execution time: 93015.041 ms

*2. Second Query:*

EXPLAIN ANALYZE SELECT hotels.id FROM hotels INNER JOIN hotel_restrictions
ON hotel_restrictions.hotel_id = hotels.id INNER JOIN restrictions ON
restrictions.id = hotel_restrictions.restriction_id WHERE
(hotel_restrictions.status = 1) AND restrictions.id = 12 AND
(((hotel_restrictions.from_time < '2018-01-10' and
hotel_restrictions.to_time >='2018-01-09') OR
 (hotel_restrictions.from_time is null AND hotel_restrictions.to_time is
null)) and (not (excluded_days @> (select array((select distinct extract
(dow from generate_series(case when '2018-01-09'::date >
hotel_restrictions.from_time then '2018-01-09'::date else
hotel_restrictions.from_time end,case when '2018-01-10'::date <=
hotel_restrictions.to_time then '2018-01-09'::date else
hotel_restrictions.to_time end,'1 day'::interval))::integer or
excluded_days is null));

Query plan of Postgres 9.3:

 Nested Loop  (cost=26828.12..138106.67 rows=6597 width=4) (actual
time=52.956..157.053 rows=4972 loops=1)
   ->  Seq Scan on restrictions  (cost=0.00..1.56 rows=1 width=4) (actual
time=0.007..0.014 rows=1 loops=1)
 Filter: (id = 12)
 Rows Removed by Filter: 44
   ->  Nested Loop  (cost=26828.12..138039.14 rows=6597 width=8) (actual
time=52.947..154.531 rows=4972 loops=1)
 ->  Bitmap Heap Scan on hotel_restrictions
(cost=26827.70..130501.88 rows=6597 width=8) (actual time=52.932..120.653
rows=4972 loops=1)
   Recheck Cond: (((to_time >= '2018-01-09'::date) AND
(restriction_id = 12) AND (from_time < '2018-01-10'::date) AND (status =
1)) OR ((to_time IS NULL) AND (restriction_id = 12) AND (from_time IS NULL)
AND (status = 1)))
   Filter: ((NOT (excluded_days @> (SubPlan 2))) OR
(excluded_days IS NULL))
   Rows Removed by Filter: 359
   ->  BitmapOr  (cost=26827.70..26827.70 rows=6600 width=0)
(actual time=52.253..52.253 rows=0 loops=1)
 ->  Bitmap Index Scan on
hotel_restrictions_multi_column_index
(cost=0.00..26819.67 rows=6580 width=0) (actual time=52.242..52.242
rows=5568 loops=1)
   Index Cond: ((to_time >= '2018-01-09'::date) AND
(restriction_id = 12) AND (from_

Re: pg_basebackup is taking more time than expected

2018-01-13 Thread Raghavendra Rao J S V
Please let me know the below details.

Which parameters does the pg_basebackup utility depend on?

Is there any possibility to run pg_basebackup multi-threaded?

To improve the speed of the backup of the database using the pg_basebackup
utility, we shut down the database and started only the database services;
during that time, other services were not running. We observed some
improvement, but are there any other things we need to do to reduce the
time taken by the pg_basebackup utility?

We are using the below command to take the backup of the database. Are
there any improvements to reduce the time taken by the backup command?

$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
--compress=6 --pgdata=- -D /opt/backup_db



On Fri, Jan 12, 2018 at 6:37 PM, Stephen Frost  wrote:

> Greetings,
>
> * Raghavendra Rao J S V (raghavendra...@gmail.com) wrote:
> > We have a database with a size of *425GB*. Out of this 425 GB, around
> > 40 to 60% of the space is occupied by indexes. The RAM allocated to
> > this machine is 32GB, and the configuration parameters are below.
> [...]
> > In order to take the backup of the database we are using the
> > pg_basebackup utility. But it is taking several hours (around 18 hours).
> > Please guide me on which configuration parameters need to be modified
> > to reduce the time taken by the pg_basebackup utility.
>
> Unfortunately, there aren't a lot of great options with pg_basebackup,
> but part of the question is- why is it taking so long?  That looks to be
> a rate of less than 60Mb/s, assuming I did my calculations right, and
> that's pretty slow.  How are you running pg_basebackup?  If you're doing
> compression and the pg_basebackup process is consistently at 100% then
> that's just the rate which a single CPU can compress data for you.  If
> you're running the pg_basebackup across a WAN, then perhaps the
> throughput available is only 60Mb/s.
>
> > Is there any possibility to exclude the index data while taking the
> > pg_basebackup?
>
> This isn't currently possible, no, and it would seriously increase the
> time required to restore the system.  If you aren't concerned about
> restore time at all (though, really, you probably should be) then you
> could consider using pg_dump instead, which can be run in parallel and
> wouldn't include the indexes.  The format is a logical point-in-time
> dump though, so you aren't able to do point-in-time recovery (playing
> WAL forward) and reloading the data and rebuilding the indexes will take
> quite a while.
>
> Lastly, if the issue is that pg_basebackup is single-threaded, or that
> you need multiple TCP connections to get higher throughput, then you
> should consider one of the alternative physical (file-based) backup
> solutions available for PostgreSQL, e.g.: pgBackRest, barman, or WAL-G.
>
> Thanks!
>
> Stephen
>



-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: pg_basebackup is taking more time than expected

2018-01-13 Thread Stephen Frost
Greetings,

Please don't top-post on the PG mailing lists.

* Raghavendra Rao J S V (raghavendra...@gmail.com) wrote:
> Which parameters does the pg_basebackup utility depend on?

I'm not sure what you're asking here.

> Is there any possibility to run pg_basebackup multi-threaded?

No, not today.  There's been discussion about making it multi-threaded
but I seriously doubt that'll happen for v11 at this point.

> To improve the speed of the backup of the database using the pg_basebackup
> utility, we shut down the database and started only the database services;
> during that time, other services were not running. We observed some
> improvement, but are there any other things we need to do to reduce the
> time taken by the pg_basebackup utility?

Sure, reducing the load on the system might make pg_basebackup a little
faster, but it seems unlikely to help a lot in this case, and it means
you have downtime, which might not be ideal.

> We are using the below command to take the backup of the database. Are
> there any improvements to reduce the time taken by the backup command?
> 
> $PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
> --compress=6 --pgdata=- -D /opt/backup_db

Might be faster if you didn't compress it, but, of course, then you
wouldn't have a compressed backup.

This is exactly the kind of issue that led to the development of
pgBackRest.  Larger databases really do need multi-threaded backups and
there weren't any backup tools for PG which were multi-threaded when we
started.  There are a few other options now, which is good, but
pg_basebackup isn't one of them.
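
As a very rough sketch of what's worth testing (the database name mydb and
the target paths are just placeholders): dropping compression is the first
thing to try, and a parallel pg_dump, as mentioned earlier in the thread,
is the other option if you can live without point-in-time recovery.

# drop --gzip/--compress to avoid the single-CPU compression bottleneck
pg_basebackup -p 5433 -U postgres -P -v -x --format=tar -D /opt/backup_db

# or a parallel logical dump (directory format is required for --jobs):
pg_dump -p 5433 -U postgres --format=directory --jobs=4 \
    -f /opt/backup_db_dump mydb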

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: pg_basebackup is taking more time than expected

2018-01-13 Thread Raghavendra Rao J S V
Thank you very much for your prompt response.

I was asking in my previous mail: does pg_basebackup depend on any of the
Postgres configuration parameters like shared_buffers /
maintenance_work_mem etc.? If yes, which are those configuration
parameters whose values I need to take care of / increase?


Please let me know what this means:

*Please don't top-post on the PG mailing lists.*

*How do I get clarifications on my query?*

On Sat, Jan 13, 2018 at 9:52 PM, Stephen Frost  wrote:

> Greetings,
>
> Please don't top-post on the PG mailing lists.
>
> * Raghavendra Rao J S V (raghavendra...@gmail.com) wrote:
> > Which parameters does the pg_basebackup utility depend on?
>
> I'm not sure what you're asking here.
>
> > Is there any possibility to run pg_basebackup multi-threaded?
>
> No, not today.  There's been discussion about making it multi-threaded
> but I seriously doubt that'll happen for v11 at this point.
>
> > To improve the speed of the backup of the database using the
> > pg_basebackup utility, we shut down the database and started only the
> > database services; during that time, other services were not running.
> > We observed some improvement, but are there any other things we need to
> > do to reduce the time taken by the pg_basebackup utility?
>
> Sure, reducing the load on the system might make pg_basebackup a little
> faster, but it seems unlikely to help a lot in this case, and it means
> you have downtime, which might not be ideal.
>
> > We are using the below command to take the backup of the database. Are
> > there any improvements to reduce the time taken by the backup command?
> >
> > $PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar
> --gzip
> > --compress=6 --pgdata=- -D /opt/backup_db
>
> Might be faster if you didn't compress it, but, of course, then you
> wouldn't have a compressed backup.
>
> This is exactly the kind of issue that led to the development of
> pgBackRest.  Larger databases really do need multi-threaded backups and
> there weren't any backup tools for PG which were multi-threaded when we
> started.  There are a few other options now, which is good, but
> pg_basebackup isn't one of them.
>
> Thanks!
>
> Stephen
>



-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: pg_basebackup is taking more time than expected

2018-01-13 Thread Stephen Frost
Greetings,

* Raghavendra Rao J S V (raghavendra...@gmail.com) wrote:
> I was asking in my previous mail: does pg_basebackup depend on any of the
> Postgres configuration parameters like shared_buffers /
> maintenance_work_mem etc.? If yes, which are those configuration
> parameters whose values I need to take care of / increase?

No, pg_basebackup doesn't really depend on any of those values (except
for things like max_wal_senders and whatnot to allow it to run at all,
of course).
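
For completeness, a quick sketch of the settings that do matter for it to
run at all (these are standard GUCs; the required values depend on your
setup):

SHOW max_wal_senders;   -- must be high enough to allow the backup connection
SHOW wal_level;         -- must be at least 'archive'/'hot_standby' on 9.x
-- plus a 'replication' entry in pg_hba.conf for the backup user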

> Please let me know what this means:
> 
> *Please don't top-post on the PG mailing lists.*

It means that you should reply in-line, like I'm doing, and remove text
that isn't relevant to your response.

> *How do I get clarifications on my query?*

Either I've answered this above, or I don't know what you're asking
here.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: Postgres 9.4 using primary key index in almost all queries leading to degraded performance

2018-01-13 Thread Tom Lane
Rahul Sharma  writes:
> We're currently testing an upgrade of our Postgres database from 9.3.14 to
> 9.4.9. We are using Amazon RDS. We've encountered an issue in the testing
> phase where, after the upgrade, CPU utilization hovers around 100%. We dug
> deep to find that queries which executed in a few milliseconds on
> Postgres 9.3.14 are taking a lot of time to complete on 9.4.9. We've
> noticed a pattern under which the query planner (in almost all queries) is
> using the primary key index for scanning the table while ignoring other
> alternatives like explicit sorting or using another, more efficient index
> present on the table.

In all the examples you show, the rowcount estimates are substantially
different in 9.3 and 9.4 --- and at least in the non-LIMITed queries,
where it's possible to verify the estimates, 9.3 is reasonably on-target
but 9.4 is awful.  So your problem is not with the costs but with the
selectivity estimates.  Has the 9.4 database been ANALYZEd at all?  Maybe
you disabled auto-analyze?  Maybe the old DB has some special statistics
target settings that you neglected to port forward?
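
A quick sketch of the kind of checks that implies (table/column names are
taken from your first query; the statistics target value is only an
example):

ANALYZE user_device_identifiers;
SELECT relname, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
 WHERE relname = 'user_device_identifiers';
SHOW default_statistics_target;
-- per-column override, if a particular column's estimates stay bad:
ALTER TABLE user_device_identifiers
  ALTER COLUMN authentication_token SET STATISTICS 1000;
ANALYZE user_device_identifiers;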

regards, tom lane



Re: psql format result as markdown tables

2018-01-13 Thread Melvin Davidson
On Sat, Jan 13, 2018 at 4:50 AM, Nicolas Paris  wrote:

> Hello
>
> I wonder if someone knows how to configure psql to output results as
> markdown tables.
> Then instead of :
>
> SELECT * FROM (values(1,2),(3,4)) as t;
>  column1 | column2 
> ---------+---------
>        1 |       2
>        3 |       4
>
> Get the result as :
> SELECT * FROM (values(1,2),(3,4)) as t;
> | column1 | column2 |
> |---------|---------|
> |       1 |       2 |
> |       3 |       4 |
>
> Thanks in advance
>
>
*A. You have not graced us with PostgreSQL version and O/S*
*B. Your two examples appear exactly the same.*
*C. Have you looked at https://donatstudios.com/CsvToMarkdownTable ?*


*Found with a google search of  sql markdown table*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: psql format result as markdown tables

2018-01-13 Thread Stephen Frost
Hi Nicolas!

* Nicolas Paris (nipari...@gmail.com) wrote:
> I wonder if someone knows how to configure psql to output results as
> markdown tables. 

Unfortunately, I don't believe we support markdown as an output format
as yet.  To change the output format, you can use '\pset format html'.

You can see the list of formats supported by psql here:

https://www.postgresql.org/docs/current/static/app-psql.html

Look for 'format' or 'asciidoc' in that page to find the list quickly.
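
For instance, a minimal sketch with your example query (asciidoc output
needs psql 9.5 or later; it isn't markdown, but it's close):

\pset format asciidoc
SELECT * FROM (values(1,2),(3,4)) as t;
\pset format aligned    -- back to the default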

I suspect that if someone wanted to write the code to make markdown
available, it would certainly be accepted, as that's definitely a popular
format.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: psql format result as markdown tables

2018-01-13 Thread Stephen Frost
Greetings Melvin,

* Melvin Davidson (melvin6...@gmail.com) wrote:
> On Sat, Jan 13, 2018 at 4:50 AM, Nicolas Paris  wrote:
> > I wonder if someone knows how to configure psql to output results as
> > markdown tables.
> > Then instead of :
> >
> > SELECT * FROM (values(1,2),(3,4)) as t;
> >  column1 | column2 
> > ---------+---------
> >        1 |       2
> >        3 |       4
> >
> > Get the result as :
> > SELECT * FROM (values(1,2),(3,4)) as t;
> > | column1 | column2 |
> > |---------|---------|
> > |       1 |       2 |
> > |       3 |       4 |
> >
> > Thanks in advance
>
> *A. You have not graced us with PostgreSQL version and O/S*

While it can make a difference, it doesn't in this case, as far as I can
tell.  You can see the list of formats supported by each version of
PostgreSQL by going to:

https://www.postgresql.org/docs/current/static/app-psql.html

and then you can walk backwards through the various releases.

> *B. Your two examples appear exactly the same.*

Not true, actually: if you look carefully you'll see that there's a set
of pipes down the left-hand side, and pipes all down the middle (instead
of a '+' on the line break between the header and the data).  Those kinds
of differences are what would make an actual markdown output format for
psql particularly useful, as otherwise you have to hand-massage the
output or use another tool.

> *C. Have you looked at https://donatstudios.com/CsvToMarkdownTable ?*

That might be helpful in this case, though it would require sending data
to a 3rd party, which might not be ideal.  There are a number of
csv2markdown tools out there which could be installed locally.  Not quite
as nice as having the support in psql for it, though.
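
As a rough local sketch (it assumes a hypothetical database named mydb and
values without embedded commas or quoting):

psql -d mydb -c "COPY (SELECT * FROM (values(1,2),(3,4)) as t) TO STDOUT WITH CSV HEADER" |
awk -F',' '{
    # print every CSV line as a pipe-delimited markdown row
    row = "|"; for (i = 1; i <= NF; i++) row = row " " $i " |"; print row;
    # after the header line, emit the markdown separator row
    if (NR == 1) { sep = "|"; for (i = 1; i <= NF; i++) sep = sep "---|"; print sep; }
}'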

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: psql format result as markdown tables

2018-01-13 Thread Tom Lane
Stephen Frost  writes:
> I suspect that if someone wanted to write the code to make markdown
> available, it would certainly be accepted, as that's definitely a popular
> format.

Somebody was working on that a while ago:

https://www.postgresql.org/message-id/flat/CAAYBy8YU4pXYKDHeQhsA_%3DFC93yOBZp5j1h%3DBSSAo9-oLcwNww%40mail.gmail.com

Seems to have lost interest though ...

regards, tom lane