Issues on 9.5 to 9.6 Upgrade.

2018-06-13 Thread bend
Servers: RedHat 6.9, 64-bit.
Postgres old server version: 9.5.6
Postgres new server version: 9.6.6

Issue: BIG difference in speed on the following SQL statement from 9.5 to 9.6:

    EXPLAIN ANALYZE
    select de_caseid, de_seqno, de_document_num,
           doc_link.dm_id as dm_id, doc_link.dm_seq as dm_seq,
           doc_link.dm_dls_id as dm_dls_id,
           doc_link.dm_restrict_view as dm_restrict_view,
           document_v.dm_type as dm_type,
           document_v.dm_id as accessable
    from dktentry
    left outer join doc_link on de_doc_id = doc_link.dm_id
    left outer join document_v on doc_link.dm_id = document_v.dm_id
                              and doc_link.dm_seq = document_v.dm_seq
    where de_caseid in (800);

Row counts (select count(*)):

    dktentry:     376555
    document_v:   931306
    doc_link:    1329305

We are migrating our current 9.5 batch of Postgres servers to a new batch of
Postgres 9.6 servers. We have tried using both pg_dump and pg_dumpall to move
the data across. However, when executing the above statement on the 9.5
version, we get the following:

* 9.5 Postgres

    [postgres@sudb114 admintools]$ psql -x -f docittest.sql

    QUERY PLAN
    Nested Loop Left Join  (cost=1.28..355.98 rows=104 width=32) (actual time=4.017..4.684 rows=6 loops=1)
      ->  Nested Loop Left Join  (cost=0.85..310.53 rows=39 width=26) (actual time=0.134..0.283 rows=6 loops=1)
            ->  Index Scan using u_dktentry_pkey on u_dktentry x0  (cost=0.42..60.94 rows=20 width=16) (actual time=0.093..0.131 rows=5 loops=1)
                  Index Cond: (de_caseid = 800)
                  Filter: ((de_restrict_view = 0) OR (may_access_docket((de_restrict_view)::integer, de_caseid, de_seqno, 0, 0) <> 0))
            ->  Index Scan using u_document_pkey on u_document x0_1  (cost=0.43..12.46 rows=2 width=14) (actual time=0.028..0.029 rows=1 loops=5)
                  Index Cond: (x0.de_doc_id = dm_id)
      ->  Index Scan using u_document_pkey on u_document x0_2  (cost=0.43..1.16 rows=1 width=10) (actual time=0.732..0.732 rows=0 loops=6)
            Index Cond: ((x0_1.dm_id = dm_id) AND (x0_1.dm_seq = dm_seq))
            Filter: ((dm_restrict_view = 0) OR (may_access_doc((dm_restrict_view)::integer, dm_id, dm_seq, dm_type) <> 0))
            Rows Removed by Filter: 1
    Planning time: 1.579 ms
    ...
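For context, a minimal sketch of the dump-and-restore migration described above, assuming pg_dumpall is used and with placeholder hostnames (old95 and new96 are illustrative, not from the original post):

    # dump the whole 9.5 cluster, then restore it into the 9.6 cluster
    pg_dumpall -h old95 -U postgres -f cluster.sql
    psql -h new96 -U postgres -f cluster.sql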

RE: Issues on 9.5 to 9.6 Upgrade.

2018-06-13 Thread bend
Analyze the DB exactly how?

I did run a reindexdb after migration, to no effect.

Yeah, the PLANS are a big difference.

Ben Duncan - Business Network Solutions, Inc.
336 Elton Road, Jackson MS, 39212
"Never attribute to malice that which can be adequately explained by stupidity" - Hanlon's Razor


 Original Message 
Subject: Re: Issues on 9.5 to 9.6 Upgrade.
From: Don Seiler 
Date: Wed, June 13, 2018 11:24 am
To: b...@linux4ms.net
Cc: Postgres General 

On Wed, Jun 13, 2018 at 11:16 AM,  wrote:

> Can anyone explain WHY there is such a big difference? Is it the SQL
> statement or a bug in pg_dump?

Did you analyze the database after upgrading? Or at least the tables in
question? Those are very different plans.

-- 
Don Seiler
www.seiler.us
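As a concrete illustration of the suggestion above, analyzing just the tables in question could look like this from psql (table names taken from the query in the original post):

    ANALYZE dktentry;
    ANALYZE doc_link;
    ANALYZE document_v;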





RE: Issues on 9.5 to 9.6 Upgrade.

2018-06-13 Thread bend
Ok, this seemed to solve it:

> vacuumdb -zv ; reindexdb -v

And we are back to full speed. Thanks for pointing out the ANALYZE ...

Ben Duncan - Business Network Solutions, Inc.
336 Elton Road, Jackson MS, 39212
"Never attribute to malice that which can be adequately explained by stupidity" - Hanlon's Razor
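For anyone who prefers the SQL equivalents: vacuumdb's -z flag makes it issue VACUUM ANALYZE, and -v requests verbose output, so the fix above corresponds roughly to the following run in each database (mydb is a placeholder name):

    VACUUM (VERBOSE, ANALYZE);   -- what "vacuumdb -zv" runs
    REINDEX DATABASE mydb;       -- what "reindexdb -v" performs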


 Original Message 
Subject: Re: Issues on 9.5 to 9.6 Upgrade.
From: Adrian Klaver 
Date: Wed, June 13, 2018 11:45 am
To: b...@linux4ms.net, Postgres General 

On 06/13/2018 09:44 AM, b...@linux4ms.net wrote:
> Analyze the DB exactly how?

https://www.postgresql.org/docs/10/static/sql-analyze.html
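Per the linked page, the most direct answer to "exactly how": connect to the database in question and run a bare ANALYZE, which samples every table in the current database, e.g.:

    ANALYZE VERBOSE;  -- whole current database, with progress output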

> 
> I did run a reindexdb after migration, to no effect.
> 
> Yeah, the PLANS are a big difference.
> 
> Ben Duncan - Business Network Solutions, Inc. 336 Elton Road Jackson MS, 
> 39212
> "Never attribute to malice, that which can be adequately explained by 
> stupidity"
> - Hanlon's Razor
> 
> 
>  Original Message 
> Subject: Re: Issues on 9.5 to 9.6 Upgrade.
> From: Don Seiler
> Date: Wed, June 13, 2018 11:24 am
> To: b...@linux4ms.net
> Cc: Postgres General
> 
> On Wed, Jun 13, 2018 at 11:16 AM,  wrote:
> 
> 
> Can anyone explain WHY there is such a big difference? Is it the
> SQL statement or a bug in pg_dump?
> 
> 
> Did you analyze the database after upgrading? Or at least the tables
> in question? Those are very different plans.
> 
> -- 
> Don Seiler
> www.seiler.us
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com







RE: Trouble Upgrading Postgres

2018-11-06 Thread bend
As someone pointed out, there is a limit with bytea (BLOBs). To test whether it
is bytea, use a COPY with a select statement, leaving out the bytea column:

    COPY ( select A, B, C, D, ... FROM table ) TO 'outfile' ;

If this works, then one of the bytea columns is way too big.

Ben Duncan - Business Network Solutions, Inc.
336 Elton Road, Jackson MS, 39212
"Never attribute to malice that which can be adequately explained by stupidity" - Hanlon's Razor
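As a complement to the COPY test above, once a bytea column is suspected, octet_length() can pinpoint the oversized rows directly (my_table, blob_col, and id are placeholder names, not from the thread):

    -- list the five largest values in a suspect bytea column
    SELECT id, octet_length(blob_col) AS bytes
    FROM my_table
    ORDER BY octet_length(blob_col) DESC
    LIMIT 5;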


 Original Message 
Subject: Re: Trouble Upgrading Postgres
From: Tom Lane 
Date: Tue, November 06, 2018 11:53 am
To: Adrian Klaver 
Cc: Daniel Verite, Charles Martin, pgsql-general


Adrian Klaver  writes:
> On 11/6/18 8:27 AM, Daniel Verite wrote:
>> Adrian Klaver wrote:
>>> To me that looks like a bug, putting data into a record you cannot get out.

>> Strictly speaking, it could probably get out with COPY in binary format,
>> but pg_dump doesn't use that.

Another possibility, seeing that the problematic data is bytea, is that
it might depend on whether you use hex or escape bytea_output format.
Hex format is reliably twice the size of the stored data, but escape
format could be anywhere from the same size as the stored data to four
times the size, depending on the contents.  pg_dump is agnostic about this
and will just dump using the prevailing bytea_output setting, so you might
be able to get it to work by changing that setting.

			regards, tom lane
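
Following Tom's note that pg_dump dumps with the prevailing bytea_output setting, a sketch of forcing the predictably-sized hex format before dumping (mydb is a placeholder; the database-level setting affects new sessions, including pg_dump's):

    ALTER DATABASE mydb SET bytea_output = 'hex';
    -- or, for a single interactive session:
    SET bytea_output = 'hex';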