PG8.3->10 migration data differences

2018-09-10 Thread Csaba Ragasits
Hello,

We would like to migrate our old databases to a new Postgres server with the
simple backup/restore process. The process runs fine without errors, but when
we check the source and the migrated data at field level, we find the
following differences:

For example:

Field type: TIMESTAMP WITHOUT TIME ZONE NOT NULL
pg93: 2015-08-28 21:25:07.70
pg10: 2015-08-28 21:25:07.7

Field type: REAL
pg93: 2.2
pg10: 2.20000005

When I check the 8.3 pg_dump file, it contains the pg83 correct values.

Do you have any ideas why these values differ?

Thx,
Csaba


Re: PG8.3->10 migration data differences

2018-09-10 Thread Achilleas Mantzios

On 10/09/2018 11:22, Csaba Ragasits wrote:

Hello,

We would like to migrate our old databases to a new Postgres server with the simple backup/restore process. The process runs fine without errors, but when we check the source and the migrated data at
field level, we find the following differences:


For example:

Field type: TIMESTAMP WITHOUT TIME ZONE NOT NULL
pg93: 2015-08-28 21:25:07.70
pg10: 2015-08-28 21:25:07.7

Those two are exactly the same value: .70 = .7 (7/10 = 70/100).


Field type: REAL
pg93: 2.2
pg10: 2.20000005


Those have to do with rounding. Precision for real is 6 decimal digits; your
difference is in the 8th digit.

When I check the 8.3 pg_dump file, it contains the pg83 correct values.

Do you have any ideas why these values differ?

Thx,
Csaba




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: PG8.3->10 migration data differences

2018-09-10 Thread Peter J. Holzer
On 2018-09-10 11:57:42 +0300, Achilleas Mantzios wrote:
> On 10/09/2018 11:22, Csaba Ragasits wrote:
> Hello,
> 
> We would like to migrate our old databases to new postgres server,
> with the simple backup-restore process. The process run fine
> without errors but when we're  checking the source and the
> migrated datas as field level, we found the following differences:
[...]
> 
> Field type: REAL
> pg93: 2.2
> pg10: 2.20000005
> 
> 
> Those have to do with rounding. Precision for real is 6 decimal
> digits. Your difference is on the 8-th digit.

Nitpick: Precision is 24 binary digits, which is *about* 6 decimal
digits. The number actually stored (on both pg93 and pg10) is
2.2000000476837158203125 (1.00011001100110011001101 * 2^1 in
binary). Apparently Pg9.3 rounds this to "2.2" on output while Pg10
prints the more precise (but still not exact) "2.20000005".

(I would argue that the Pg9.3 output is better, since it represents the
same value in fewer digits, but always printing the minimum number of
digits necessary is surprisingly difficult.)

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Why my query not doing index only scan

2018-09-10 Thread Arup Rakshit
Hello All,

I am learning at this point how indexes work in a DBMS, so I am looking for a very
broad explanation to firm up my basics.

I have a unique index "inspector_tool_idx4_1" defined as btree (company_id,
item_code, deleted_at). I am using the *company_id* column in the WHERE
clause and selecting just the *item_code* field for all matching rows. I
expected the planner to do an index-only scan here, but it is doing a bitmap
index scan. Any idea why it is not doing what I expected?


inspection_development=# explain analyze select item_code from 
inspector_tool_components where company_id = 
'7881ff2e-0557-4734-9da8-2d33072ff1ef';
                                  QUERY PLAN
------------------------------------------------------------------------------
 Bitmap Heap Scan on inspector_tool_components  (cost=45.92..1360.48 rows=1226 
width=8) (actual time=0.382..1.202 rows=1232 loops=1)
   Recheck Cond: (company_id = '7881ff2e-0557-4734-9da8-2d33072ff1ef'::uuid)
   Heap Blocks: exact=81
   ->  Bitmap Index Scan on inspector_tool_idx4_1  (cost=0.00..45.61 rows=1226 
width=0) (actual time=0.347..0.347 rows=1232 loops=1)
 Index Cond: (company_id = '7881ff2e-0557-4734-9da8-2d33072ff1ef'::uuid)
 Planning time: 0.207 ms
 Execution time: 1.358 ms
(7 rows)



Thanks,

Arup Rakshit
a...@zeit.io





Re: Why my query not doing index only scan

2018-09-10 Thread Stephen Frost
Greetings,

* Arup Rakshit (a...@zeit.io) wrote:
> I have an index defined "inspector_tool_idx4_1" UNIQUE, btree (company_id, 
> item_code, deleted_at). Now I am using the *company_id* column in the where 
> clause, and the selecting just the *item_code* field for all matching rows. I 
> expected here the planner will do a index only scans. But it is doing bitmap 
> index scan. Any idea what it is not doing what I expected it to do.

One possibility is that the visibility map isn't current.

Indexes don't include visibility information.  The way an index-only
scan works is that we track pages which are 'all visible' (meaning that
every tuple on that page is visible to all running transactions) in a
separate file called the 'visibility map' (aka the VM).  The VM is
updated by the VACUUM process, but we only automatically run a VACUUM
(with the autovacuum process) when thresholds have been reached for the
number of UPDATE'd or DELETE'd tuples.

What this means is that if you are playing around in development and
just COPY a bunch of rows into a table, but don't ever UPDATE or DELETE
the rows in that table, then you'll almost never get an index-only scan
because the VM won't be current (and PG knows this).

Make sure to do a VACUUM after loading data (and letting any ongoing
transactions finish) and then re-test.  That should make sure that
the VM is current and make it more likely that PG will do an index-only
scan.  It's still not a guarantee, but that's the first thing I'd try,
based on what you've shared here.

Thanks!

Stephen




Re: Why my query not doing index only scan

2018-09-10 Thread Arup Rakshit
Thanks Stephen. After running the vacuum on the table it worked.

inspection_development=# explain analyze select item_code from 
inspector_tool_components where company_id = 
'7881ff2e-0557-4734-9da8-2d33072ff1ef';
                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Only Scan using inspector_tool_idx4_1 on inspector_tool_components  
(cost=0.41..57.87 rows=1226 width=8) (actual time=0.044..0.626 rows=1232 
loops=1)
   Index Cond: (company_id = '7881ff2e-0557-4734-9da8-2d33072ff1ef'::uuid)
   Heap Fetches: 0
 Planning time: 0.190 ms
 Execution time: 0.778 ms
(5 rows)


Thanks,

Arup Rakshit
a...@zeit.io



> On 10-Sep-2018, at 4:58 PM, Stephen Frost  wrote:
> 
> VACUUM



Re: Why my query not doing index only scan

2018-09-10 Thread Arup Rakshit
Hello Stephen,

I would like to ask one more question related to this topic. When I take a dump
from production and restore it to the development DB, what commands do I
generally need to run to make the dev DB behave close to production?

Thanks,

Arup Rakshit
a...@zeit.io



> On 10-Sep-2018, at 4:58 PM, Stephen Frost  wrote:
> 
> Greetings,
> 
> * Arup Rakshit (a...@zeit.io) wrote:
>> I have an index defined "inspector_tool_idx4_1" UNIQUE, btree (company_id, 
>> item_code, deleted_at). Now I am using the *company_id* column in the where 
>> clause, and the selecting just the *item_code* field for all matching rows. 
>> I expected here the planner will do a index only scans. But it is doing 
>> bitmap index scan. Any idea what it is not doing what I expected it to do.
> 
> One possibility is that the visibility map isn't current.
> 
> Indexes don't include visibility information.  The way an index-only
> scan works is that we track pages which are 'all visible' (meaning that
> every tuple on that page is visible to all running transactions) in a
> seperate file called the 'visibility map' (aka the VM).  The VM is
> updated by the VACUUM process- but we only automatically run a VACUUM
> (with the autovacuum process) when thresholds have been reached for the
> number of UPDATE'd or DELETE'd tuples.
> 
> What this means is that if you are playing around in development and
> just COPY a bunch of rows into a table, but don't ever UPDATE or DELETE
> the rows in that table, then you'll almost never get an index-only scan
> because the VM won't be current (and PG knows this).
> 
> Make sure to do a VACUUM after loading data (and letting any ongoing
> transactions finish) and then re-test.  That should make it sure that
> the VM is current and make it more likely that PG will do an index-only
> scan.  Not a guarantee still, but that's the first thing I'd try, based
> on what you've shared here.
> 
> Thanks!
> 
> Stephen



Re: Why my query not doing index only scan

2018-09-10 Thread Stephen Frost
Greetings,

* Arup Rakshit (a...@zeit.io) wrote:
> I would like to ask one more question related to this topic. When I take a 
> dump from production, and restore it to development DB, what are the commands 
> I generally need to run to dev deb quack close to production?

The best way to get a prod-like environment in development is to use a
file-level backup tool, like pgBackRest.  There are a few nice things that
approach does:

- Copies the database files at a physical level, making it much closer
  to what prod is like
- Checks your database checksums (if you have them enabled, which
  hopefully you do, see initdb -k)
- Verifies your backup/restore approach
- Provides the fastest way to perform a restore in the event of an issue
- Allows you to do point-in-time-recovery (PITR)

Using pg_dump/pg_restore takes a logical export of the database and then
imports it into a new PG cluster.  Even if you do VACUUM ANALYZE after
doing pg_dump/restore, you're going to end up with (somewhat, at least)
different stats, all the tables will be completely packed (no bloat, no
space for new tuples to go into existing pages), all of the indexes will
be completely rebuilt and pristine, etc.  In the end, it's pretty
different, and while it'll behave in a similar manner to prod in a lot
of ways, there'll be a lot of ways it doesn't, too.

Thanks!

Stephen




Re: Why my query not doing index only scan

2018-09-10 Thread Laurenz Albe
Arup Rakshit wrote:
> I am learning at this point how index works in DBMS. So I am looking for a 
> very broad explanation to clear my basics.
> 
> I have an index defined "inspector_tool_idx4_1" UNIQUE, btree (company_id, 
> item_code, deleted_at).
> Now I am using the *company_id* column in the where clause, and the selecting 
> just the *item_code* field for all matching rows.
> I expected here the planner will do a index only scans. But it is doing 
> bitmap index scan. Any idea what it is not doing what I expected it to do.
> 
> 
> inspection_development=# explain analyze select item_code from 
> inspector_tool_components where company_id = 
> '7881ff2e-0557-4734-9da8-2d33072ff1ef';
>                                   QUERY PLAN
> ------------------------------------------------------------------------------
>  Bitmap Heap Scan on inspector_tool_components  (cost=45.92..1360.48 
> rows=1226 width=8) (actual time=0.382..1.202 rows=1232 loops=1)
>Recheck Cond: (company_id = '7881ff2e-0557-4734-9da8-2d33072ff1ef'::uuid)
>Heap Blocks: exact=81
>->  Bitmap Index Scan on inspector_tool_idx4_1  (cost=0.00..45.61 
> rows=1226 width=0) (actual time=0.347..0.347 rows=1232 loops=1)
>  Index Cond: (company_id = 
> '7881ff2e-0557-4734-9da8-2d33072ff1ef'::uuid)
>  Planning time: 0.207 ms
>  Execution time: 1.358 ms
> (7 rows)

Try "VACUUM (ANALYZE) inspector_tool_components"; that will set the
visibility map and get the statistics right, and maybe then you will get an
index-only scan.

How many rows does the table contain?

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: PG8.3->10 migration data differences

2018-09-10 Thread Csaba Ragasits
I think I found the solution.

When I set this parameter on the pg10 client connection, the pg10 REAL
values are formatted the same as the pg83 values:

SET extra_float_digits = 0;

Interesting, because the default value is already 0 in postgresql.conf:
#extra_float_digits = 0 # min -15, max 3

Do you have any ideas how I can format the milliseconds too?

Thx,
Csaba

2018-09-10 12:00 GMT+02:00 Peter J. Holzer :

> On 2018-09-10 11:57:42 +0300, Achilleas Mantzios wrote:
> > On 10/09/2018 11:22, Csaba Ragasits wrote:
> > Hello,
> >
> > We would like to migrate our old databases to new postgres server,
> > with the simple backup-restore process. The process run fine
> > without errors but when we're  checking the source and the
> > migrated datas as field level, we found the following differences:
> [...]
> >
> > Field type: REAL
> > pg93: 2.2
> > pg10: 2.20000005
> >
> >
> > Those have to do with rounding. Precision for real is 6 decimal
> > digits. Your difference is on the 8-th digit.
>
> Nitpick: Precision is 24 binary digits which is *about* 6 decimal
> digits. The number which is actually stored (on both pg93 and pg10) is
> actually 2.2000000476837158203125 (1.00011001100110011001101 * 2^1 in
> binary). Apparently Pg9.3 rounds this to "2.2" on output while Pg10
> prints the more precise (but still not exact) "2.20000005".
>
> (I would argue that the Pg9.3 output is better, since it represents the
> same value in fewer digits, but always printing the minimum number of
> digits necessary is surprisingly difficult.)
>
> hp
>


Re: PG8.3->10 migration data differences

2018-09-10 Thread Tom Lane
"Peter J. Holzer"  writes:
> On 2018-09-10 11:57:42 +0300, Achilleas Mantzios wrote:
>> On 10/09/2018 11:22, Csaba Ragasits wrote:
>>> Field type: REAL
>>> pg93: 2.2
>>> pg10: 2.20000005

>> Those have to do with rounding. Precision for real is 6 decimal
>> digits. Your difference is on the 8-th digit.

> Nitpick: Precision is 24 binary digits which is *about* 6 decimal
> digits. The number which is actually stored (on both pg93 and pg10) is
> actually 2.2000000476837158203125 (1.00011001100110011001101 * 2^1 in
> binary). Apparently Pg9.3 rounds this to "2.2" on output while Pg10
> prints the more precise (but still not exact) "2.20000005".

Well, more specifically:

regression=# set extra_float_digits to 2;
SET
regression=# select 2.2::real;
 float4 
--------
    2.2
(1 row)

regression=# set extra_float_digits to 3;
SET
regression=# select 2.2::real;
   float4   
------------
 2.20000005
(1 row)

8.3 did not let you set extra_float_digits as high as 3, so you couldn't
get the "2.20000005" result there, but given that and a correct
implementation of sprintf you would have.

I surmise that the OP is comparing the output of pg_dump in the two
versions.  pg_dump always sets extra_float_digits to the maximum the
server will allow, and the reason is to ensure that the dumped value will
reload as the same binary bit pattern (again, assuming correct float I/O
functionality in libc).  We used to think that 2 extra digits beyond the
nominal precision was enough to guarantee that, but there are cases where
you need 3, so it got changed.

regards, tom lane



Re: PG8.3->10 migration data differences

2018-09-10 Thread Adrian Klaver

On 9/10/18 6:43 AM, Csaba Ragasits wrote:

I think I found the solution.

When I set this parameter on the pg10 client connection, the pg10 REAL
values are formatted the same as the pg83 values:


In your previous post you mentioned both 9.3 and 8.3.

Is 8.3 the actual version you are migrating from?



SET extra_float_digits = 0;

Interesting, because the default value is 0 in the postgresql.conf:
#extra_float_digits = 0 # min -15, max 3

Do you have any ideas how I can format the milliseconds too?


Assuming you are moving from 8.3 what does:

pg_controldata -D /your_83/data_dir

show for Date/time type storage: ?

vs

pg_controldata -D /your_10/data_dir



Thx,
Csaba

2018-09-10 12:00 GMT+02:00 Peter J. Holzer:


On 2018-09-10 11:57:42 +0300, Achilleas Mantzios wrote:
> On 10/09/2018 11:22, Csaba Ragasits wrote:
>     Hello,
> 
>     We would like to migrate our old databases to new postgres server,

>     with the simple backup-restore process. The process run fine
>     without errors but when we're  checking the source and the
>     migrated datas as field level, we found the following differences:
[...]
> 
>     Field type: REAL
>     pg93: 2.2
>     pg10: 2.20000005
> 
[...]





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PG8.3->10 migration data differences

2018-09-10 Thread Adrian Klaver

On 9/10/18 6:43 AM, Csaba Ragasits wrote:

I think I found the solution.

When I set this parameter on the pg10 client connection, the pg10 REAL
values are formatted the same as the pg83 values:


SET extra_float_digits = 0;

Interesting, because the default value is 0 in the postgresql.conf:
#extra_float_digits = 0 # min -15, max 3

Do you have any ideas how I can format the milliseconds too?


test=# select version();
                                version
------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux)
4.8.5, 64-bit


test=# select '09/10/18 07:10:25.100'::timestamp;
       timestamp       
-----------------------
 2018-09-10 07:10:25.1
(1 row)

test=# select '09/10/18 07:10:25.111'::timestamp;
        timestamp        
-------------------------
 2018-09-10 07:10:25.111
(1 row)

Milliseconds will display if they are significant.


If you want to format the output:

test=# select to_char('09/10/18 07:10:25.100'::timestamp, 'MM/DD/YY
HH:MI:SS.MS');
        to_char        
-----------------------
 09/10/18 07:10:25.100
(1 row)




Thx,
Csaba

2018-09-10 12:00 GMT+02:00 Peter J. Holzer:


On 2018-09-10 11:57:42 +0300, Achilleas Mantzios wrote:
> On 10/09/2018 11:22, Csaba Ragasits wrote:
>     Hello,
> 
>     We would like to migrate our old databases to new postgres server,

>     with the simple backup-restore process. The process run fine
>     without errors but when we're  checking the source and the
>     migrated datas as field level, we found the following differences:
[...]
> 
>     Field type: REAL
>     pg93: 2.2
>     pg10: 2.20000005
> 
[...]





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Volume partitioning (was Re: pgbackrest when data/base is symlinked to another volume)

2018-09-10 Thread David Steele

Hi Ron

On 9/8/18 4:26 PM, Ron wrote:


Googled "postgresql disk partitioning" and "postgresql volume 
partitioning" without much success.


Is the best practice volume partitioning:
/Database/9.6/data
/Database/9.6/data/pg_log
/Database/9.6/data/pg_xlog

where /var/lib/pgsql/9.6 (on RHEL6) is a symlink to /Database/9.6/data 
and PGDATA=/Database/9.6/data


*or *

/Database/9.6/data/base
/Database/9.6/data/pg_log
/Database/9.6/data/pg_xlog

where PGDATA=/var/lib/pgsql/9.6/data and base, pg_log and pg_xlog are 
symlinks to the partitions?


This is really a matter of preference, but simpler is better, so the 
first one looks better to me.


In general I prefer to keep everything in /var/lib/pgsql/9.6:

/var/lib/pgsql/9.6/data
/var/lib/pgsql/9.6/data/pg_xlog -> /var/lib/pgsql/9.6/wal

Then use the log_directory setting to put logs in:

/var/log/pgsql or /var/lib/pgsql/9.6/log

pgBackRest will expect to be pointed at a real directory (pg-path) and 
expect the data_directory in PostgreSQL to match.


Regards,
--
-David
da...@pgmasters.net



Re: scram-sha-256 authentication broken in FIPS mode

2018-09-10 Thread Alessandro Gherardi
I changed the implementation of the other SHA digests to use EVP also.
I verified that, with these changes, scram-sha-256 works when FIPS is enabled.


sha2.h.diff
Description: Binary data


sha2_openssl.c.diff
Description: Binary data


Re: scram-sha-256 authentication broken in FIPS mode

2018-09-10 Thread Michael Paquier
On Mon, Sep 10, 2018 at 02:52:00PM +, Alessandro Gherardi wrote:
> I changed the implementation of the other SHA digests to use EVP
> also.  I verified that, with these changes, scram-sha-256 works when
> FIPS is enabled.

Hm...  I have spent a couple of hours setting up a CentOS 7 VM with FIPS
enabled to track down whether there is actually a problem.  Here is what I
have done to enable it:
1) yum install dracut-fips
   dracut -v -f
2) Update the boot loader, where it is necessary to update
GRUB_CMDLINE_LINUX in /etc/default/grub by adding "fips=1
boot=UUID=$PARTITION_UUID" to it.  PARTITION_UUID can be found with
"blkid /boot" (depends on the partition layout, by the way).
3) Disable prelinking (requires installation of package prelink) by
adding PRELINKING=no to /etc/sysconfig/prelink, then remove existing
prelinks with "prelink -u -a".

After a reboot, it is possible to see /proc/sys/crypto/fips_enabled set
to 1.  Once I did that, unfortunately I have not been able to spot
deficiencies when calling the low-level SHA APIs from OpenSSL, where
both SCRAM and all the in-core SSL functions prove to work
correctly.  Calling FIPS_mode() directly within Postgres backends also
proves that FIPS is effectively enabled.  Anyway, on top of the remark
Alessandro has made above, this line from the OpenSSL docs has caught my
eye:
https://www.openssl.org/docs/man1.1.0/crypto/SHA512_Init.html
"Applications should use the higher level functions EVP_DigestInit
etc. instead of calling the hash functions directly."

This is present in OpenSSL docs for some time:
commit: 4facdbb5fa9d791fc72dc78b9c3512ea1384df33
author: Ulf Möller 
date: Sun, 6 Feb 2000 23:26:31 +

Hence, intrinsically, we are in contradiction with the upstream docs.  I
have worked on the problem with the patch, which works down to OpenSSL
0.9.8, and should fix your issue.  This is based on what you sent
previously, except that I was not able to apply what was sent, so I
reworked the whole.  Alessandro, does this fix your problems?  I would
like to apply that down to v10 where SCRAM has been introduced.
--
Michael




Re: scram-sha-256 authentication broken in FIPS mode

2018-09-10 Thread Michael Paquier
On Tue, Sep 11, 2018 at 12:02:50PM +0900, Michael Paquier wrote:
> Hence, intrinsically, we are in contradiction with the upstream docs.  I
> have worked on the problem with the patch, which works down to OpenSSL
> 0.9.8, and should fix your issue.  This is based on what you sent
> previously, except that I was not able to apply what was sent, so I
> reworked the whole.  Alessandro, does this fix your problems?  I would
> like to apply that down to v10 where SCRAM has been introduced.

With the actual patch attached things are better.  So here it is. 
--
Michael
From 1b450dee61855f4fd8b9e4a37d2f95c07f26db55 Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Tue, 11 Sep 2018 11:34:48 +0900
Subject: [PATCH] Change SHA algorithms to use EVP_CTX from OpenSSL

This seems to fix issues with FIPS mode on Windows.
---
 src/common/sha2_openssl.c | 45 +++
 src/include/common/sha2.h | 10 -
 2 files changed, 36 insertions(+), 19 deletions(-)

diff --git a/src/common/sha2_openssl.c b/src/common/sha2_openssl.c
index 362e1318db..e80dec7b4d 100644
--- a/src/common/sha2_openssl.c
+++ b/src/common/sha2_openssl.c
@@ -20,83 +20,100 @@
 #include "postgres_fe.h"
 #endif
 
-#include 
-
 #include "common/sha2.h"
 
+static void
+digest_init(EVP_MD_CTX **ctx, const EVP_MD *type)
+{
+	*ctx = EVP_MD_CTX_create();
+	EVP_DigestInit_ex(*ctx, type, NULL);
+}
+
+static void
+digest_update(EVP_MD_CTX **ctx, const uint8 *data, size_t len)
+{
+	EVP_DigestUpdate(*ctx, data, len);
+}
+
+static void
+digest_final(EVP_MD_CTX **ctx, uint8 *dest)
+{
+	EVP_DigestFinal_ex(*ctx, dest, 0);
+	EVP_MD_CTX_destroy(*ctx);
+}
 
 /* Interface routines for SHA-256 */
 void
 pg_sha256_init(pg_sha256_ctx *ctx)
 {
-	SHA256_Init((SHA256_CTX *) ctx);
+	digest_init(ctx, EVP_sha256());
 }
 
 void
 pg_sha256_update(pg_sha256_ctx *ctx, const uint8 *data, size_t len)
 {
-	SHA256_Update((SHA256_CTX *) ctx, data, len);
+	digest_update(ctx, data, len);
 }
 
 void
 pg_sha256_final(pg_sha256_ctx *ctx, uint8 *dest)
 {
-	SHA256_Final(dest, (SHA256_CTX *) ctx);
+	digest_final(ctx, dest);
 }
 
 /* Interface routines for SHA-512 */
 void
 pg_sha512_init(pg_sha512_ctx *ctx)
 {
-	SHA512_Init((SHA512_CTX *) ctx);
+	digest_init(ctx, EVP_sha512());
 }
 
 void
 pg_sha512_update(pg_sha512_ctx *ctx, const uint8 *data, size_t len)
 {
-	SHA512_Update((SHA512_CTX *) ctx, data, len);
+	digest_update(ctx, data, len);
 }
 
 void
 pg_sha512_final(pg_sha512_ctx *ctx, uint8 *dest)
 {
-	SHA512_Final(dest, (SHA512_CTX *) ctx);
+	digest_final(ctx, dest);
 }
 
 /* Interface routines for SHA-384 */
 void
 pg_sha384_init(pg_sha384_ctx *ctx)
 {
-	SHA384_Init((SHA512_CTX *) ctx);
+	digest_init(ctx, EVP_sha384());
 }
 
 void
 pg_sha384_update(pg_sha384_ctx *ctx, const uint8 *data, size_t len)
 {
-	SHA384_Update((SHA512_CTX *) ctx, data, len);
+	digest_update(ctx, data, len);
 }
 
 void
 pg_sha384_final(pg_sha384_ctx *ctx, uint8 *dest)
 {
-	SHA384_Final(dest, (SHA512_CTX *) ctx);
+	digest_final(ctx, dest);
 }
 
 /* Interface routines for SHA-224 */
 void
 pg_sha224_init(pg_sha224_ctx *ctx)
 {
-	SHA224_Init((SHA256_CTX *) ctx);
+	digest_init(ctx, EVP_sha224());
 }
 
 void
 pg_sha224_update(pg_sha224_ctx *ctx, const uint8 *data, size_t len)
 {
-	SHA224_Update((SHA256_CTX *) ctx, data, len);
+	digest_update(ctx, data, len);
 }
 
 void
 pg_sha224_final(pg_sha224_ctx *ctx, uint8 *dest)
 {
-	SHA224_Final(dest, (SHA256_CTX *) ctx);
+	digest_final(ctx, dest);
 }
diff --git a/src/include/common/sha2.h b/src/include/common/sha2.h
index f3fd0d0d28..701647713f 100644
--- a/src/include/common/sha2.h
+++ b/src/include/common/sha2.h
@@ -51,7 +51,7 @@
 #define _PG_SHA2_H_
 
 #ifdef USE_SSL
-#include 
+#include 
 #endif
 
 /*** SHA224/256/384/512 Various Length Definitions ***/
@@ -70,10 +70,10 @@
 
 /* Context Structures for SHA-1/224/256/384/512 */
 #ifdef USE_SSL
-typedef SHA256_CTX pg_sha256_ctx;
-typedef SHA512_CTX pg_sha512_ctx;
-typedef SHA256_CTX pg_sha224_ctx;
-typedef SHA512_CTX pg_sha384_ctx;
+typedef EVP_MD_CTX *pg_sha256_ctx;
+typedef EVP_MD_CTX *pg_sha512_ctx;
+typedef EVP_MD_CTX *pg_sha224_ctx;
+typedef EVP_MD_CTX *pg_sha384_ctx;
 #else
 typedef struct pg_sha256_ctx
 {
-- 
2.19.0.rc2


