Re: Problem with moderation of messages with patches attached.

2022-03-03 Thread Dave Page
Hi

On Thu, 3 Mar 2022 at 12:31, Pavel Borisov  wrote:

> Hi, hackers!
>
> Around 2 months ago I noticed a problem: messages containing patches in
> the thread [1] were always held for manual moderation. They appear in the
> hackers thread hours after posting. None of them are from new CF members
> and, personally, I don't see a reason for such inconvenience.
> The problem still exists as of today.
>
> Can someone make changes in a moderation engine to make it more liberal
> and convenient for authors?
>
> [1]
> https://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com
>

Here's the moderation reason for that message:

Message to list pgsql-hackers held for moderation due to 'Size 1MB (1061796
bytes) is larger than threshold 1000KB (1024000 bytes)', notice queued for
2 moderators

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Problem with moderation of messages with patches attached.

2022-03-03 Thread Dave Page
On Thu, 3 Mar 2022 at 13:22, Pavel Borisov  wrote:

> Message to list pgsql-hackers held for moderation due to 'Size 1MB
>> (1061796 bytes) is larger than threshold 1000KB (1024000 bytes)', notice
>> queued for 2 moderators
>>
> Could you make this limit 2MB at least for authorized commitfest members?
> Thanks!
>

The mail system doesn't have the capability to apply different moderation
rules for people in that way I'm afraid.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Problem with moderation of messages with patches attached.

2022-03-03 Thread Dave Page
On Thu, 3 Mar 2022 at 13:28, Pavel Borisov  wrote:

> The mail system doesn't have the capability to apply different moderation
>> rules for people in that way I'm afraid.
>>
> Maybe then 2MB for everyone? Otherwise it's not so convenient. It leads to
> answers appearing before the questions in the thread [1], which seems weird.
>

Then someone will complain if their patch is 2.1MB! How often are messages
legitimately over 1MB anyway, even with a patch? I don't usually moderate
-hackers, so I don't know if this is a common thing or not.

I'll ping a message across to the sysadmin team anyway; I can't just change
that setting without buy-in from the rest of the team.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Problem with moderation of messages with patches attached.

2022-03-21 Thread Dave Page
On Sun, 20 Mar 2022 at 13:52, Andrew Dunstan  wrote:

>
> On 3/19/22 14:48, Andres Freund wrote:
> > Hi,
> >
> > On 2022-03-03 13:37:35 +, Dave Page wrote:
> >> On Thu, 3 Mar 2022 at 13:28, Pavel Borisov 
> wrote:
> >>
> >>> The mail system doesn't have the capability to apply different
> moderation
> >>>> rules for people in that way I'm afraid.
> >>>>
> >>> Maybe then 2MB for everyone? Otherwise it's not so convenient. Lead to
> >>> answers before the questions in the thread [1], seems weird.
> >>>
> >> Then someone will complain if their patch is 2.1MB! How often are
> messages
> >> legitimately over 1MB anyway, even with a patch? I don't usually
> moderate
> >> -hackers, so I don't know if this is a common thing or not.
> > I don't think it's actually that rare. But most contributors writing that
> > large patchsets know about the limit and work around it - I gzip patches
> when
> > I see the email getting too large. But it's more annoying to work with
> for
> > reviewers.
> >
> > It's somewhat annoying. If you e.g. append a few graphs of performance
> changes
> > and a patch it's pretty easy to get into the range where compressing
> won't
> > help anymore.
> >
> > And sure, any limit may be hit by somebody. But 1MB across the whole
> email
> > seems pretty low these days.
> >
>
> Of course we could get complaints no matter what level we set the limit
> at. I think raising it to 2MB would be a reasonable experiment. If no
> observable evil ensues then leave it that way. If it does then roll it
> back. I agree that plain uncompressed patches are easier to deal with in
> general.
>

Thanks for the reminder :-)

I've bumped the limit to 2MB.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations

2020-11-16 Thread Dave Page
Hi,

This is more of a heads-up than anything else, as I suspect this may come
up in various forums.

The PostgreSQL installers for macOS (from EDB, possibly others too) create
the data directory in /Library/PostgreSQL//data. This has been
the case since the first release, 10+ years ago.

It looks like the Big Sur upgrade has taken it upon itself to "fix" any
filesystem permissions it doesn't like. On my system, this resulted in the
data directory having 0755 permissions, which meant that PostgreSQL refused
to start. Manually changing the permissions back to 0700 (0750 should also
work) fixes the issue.

I'm not sure there's much we can do about this - systems that are likely to
be affected are already out there, and we obviously don't want to relax the
permissions Postgres requires.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations

2020-11-16 Thread Dave Page
On Mon, Nov 16, 2020 at 3:55 PM Jonathan S. Katz 
wrote:

> On 11/16/20 4:27 AM, Dave Page wrote:
> > Hi,
> >
> > This is more of a head-ups than anything else, as I suspect this may
> > come up in various forums.
> >
> > The PostgreSQL installers for macOS (from EDB, possibly others too)
> > create the data directory in /Library/PostgreSQL//data. This
> > has been the case since the first release, 10+ years ago.
> >
> > It looks like the Big Sur upgrade has taken it upon itself to "fix" any
> > filesystem permissions it doesn't like. On my system, this resulted in
> > the data directory having 0755 permissions, which meant that PostgreSQL
> > refused to start. Manually changing the permissions back to 0700 (0750
> > should also work) fixes the issue.
> >
> > I'm not sure there's much we can do about this - systems that are likely
> > to be affected are already out there, and we obviously don't want to
> > relax the permissions Postgres requires.
>
> Thanks for raising this. We should provide some guidance on upgrading
> this when upgrading to Big Sur.
>
> Do we know where the other macOS installers place their data
> directories? We should reach out to the installer maintainers to see if
> they are seeing the same behavior so we know what guidance to issue.
>

I believe postgres.app only installs for the current user, and puts its
data under ~/Library/Application Support/Postgres.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations

2020-11-16 Thread Dave Page
On Mon, Nov 16, 2020 at 4:45 PM Pavel Borisov 
wrote:

> I suppose there are many ways to have PG on OSX i.e. package managers
> (Homebrew, Macports), App installers etc and so many places anyone can find
> his data directory reside in. Generally I prefer data directory to be
> somewhere inside the user home dir as OSX will take care of possible
> backups and will not generally modify its contents during migration between
> OSX versions and/or different machines. It is not only a question of
> permissions.
>
> Any options inside user homedir are equally suitable IMO.
>

It is in the user's homedir - it's just that that isn't under /Users:

hal:~ postgres$ echo $HOME
/Library/PostgreSQL/13

With the EDB installers (unlike postgres.app), PostgreSQL runs as a
service, much as it would on Linux or BSD.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations

2020-11-17 Thread Dave Page
FYI, both Jonathan and I have now tested this on additional machines and
have been unable to reproduce the issue, so it seems like something odd
happened on my original upgrade rather than a general issue.

Apologies for the noise.

On Mon, Nov 16, 2020 at 9:27 AM Dave Page  wrote:

> Hi,
>
> This is more of a heads-up than anything else, as I suspect this may come
> up in various forums.
>
> The PostgreSQL installers for macOS (from EDB, possibly others too) create
> the data directory in /Library/PostgreSQL//data. This has been
> the case since the first release, 10+ years ago.
>
> It looks like the Big Sur upgrade has taken it upon itself to "fix" any
> filesystem permissions it doesn't like. On my system, this resulted in the
> data directory having 0755 permissions, which meant that PostgreSQL refused
> to start. Manually changing the permissions back to 0700 (0750 should also
> work) fixes the issue.
>
> I'm not sure there's much we can do about this - systems that are likely
> to be affected are already out there, and we obviously don't want to relax
> the permissions Postgres requires.
>
> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EDB: http://www.enterprisedb.com
>
>

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: sepgsql logging

2022-01-11 Thread Dave Page
Hi

On Tue, Jan 11, 2022 at 12:04 AM Jacob Champion 
wrote:

> On Wed, Apr 14, 2021 at 8:42 AM Dave Page  wrote:
> > Attached is a patch to clean this up. It will log denials as such
> > regardless of whether or not either selinux or sepgsql is in
> > permissive mode. When either is in permissive mode, it'll add "
> > permissive=1" to the end of the log messages. e.g.
>
> Dave,
>
> Just to clarify -- it looks like this patch *only* adds the
> "permissive=1" part, right? I don't see any changes around denied-vs-
> allowed.
>

Right - the denied-vs-allowed distinction is shown at the beginning of the
log line. From my earlier output:

2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column salt of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: denied { select }
scontext=user_u:user_r:user_t:s0
tcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column
name="column phash of table tb_users" permissive=1


>
> I read the previous posts to mean that you were seeing "allowed" when
> you should have been seeing "denied".


That's what I *thought* was happening originally, because I was mistakenly
in permissive mode (if memory serves).


> I don't see that behavior --
> without this patch, I see the correct "denied" entries even when
> running in permissive mode. (It's been a while since the patch was
> posted, so I checked to make sure there hadn't been any relevant
> changes in the meantime, and none jumped out at me.)
>

Right. The point is that if permissive mode is enabled, access will not be
denied. Effectively if you see permissive=1, then "denied" really means
"would be denied if enforcing mode was enabled".

The idea is that you can run a production system in permissive mode to see
what would be denied without breaking things for users. You can use that
info to build your policy, and then when you no longer see any unexpected
denials in the logs, switch to enforcing mode.
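
As an aside, if it's ever unclear which mode the module itself is in, a quick
sanity check from psql is something along these lines (note that
sepgsql.permissive only reflects the module's own setting, not the OS-level
selinux mode):

-- sketch: shows whether the sepgsql module is running in permissive mode
SHOW sepgsql.permissive;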


>
> That said, the patch looks good as-is and seems to be working for me on
> a Rocky 8 VM. (You weren't kidding about the setup difficulty.) Having
> permissive mode show up in the logs seems very useful.
>
> As an aside, I don't see the "allowed" verbiage that sepgsql uses in
> any of the SELinux documentation. I do see third-party references to
> "granted", though, as in e.g.
>
> avc: granted { execute } for ...
>
> That's not something that I think this patch should touch, but it
> seemed tangentially relevant for future convergence work.
>

Interesting. I never spotted that one. I'm not sure it matters much, except
for consistency. It's not like the various tools for analyzing SELinux logs
would be likely to work on a PostgreSQL log.


>
> On Wed, 2021-04-14 at 09:49 -0400, Robert Haas wrote:
> > Looks superficially reasonable on first glance, but I think we should
> > try to get an opinion from someone who knows more about SELinux.
>
> I am not that someone, but this looks straightforward, it's been
> stalled for a while, and I think it should probably go in.
>

I'd like to see that. Thanks for the review.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: sepgsql logging

2022-01-12 Thread Dave Page
On Tue, Jan 11, 2022 at 5:55 PM Tom Lane  wrote:

> Andrew Dunstan  writes:
> > I am not that person either. I agree this looks reasonable, but I also
> > would like the opinion of an expert, if we have one.
>
> I'm not sure we do anymore.  Anyway, I tried this on Fedora 35 and
> confirmed that it compiles and the (very tedious) test process
> described in the sepgsql docs still passes.  Looking in the system's
> logs, it appears that Dave didn't precisely emulate how SELinux
> logs this setting, because I see messages like
>
> Jan  4 12:25:46 nuc1 audit[1754]: AVC avc:  denied  { setgid } for
> pid=1754 comm="sss_cache" capability=6
> scontext=unconfined_u:unconfined_r:useradd_t:s0-s0:c0.c1023
> tcontext=unconfined_u:unconfined_r:useradd_t:s0-s0:c0.c1023
> tclass=capability permissive=0
>
> So it looks like their plan is to unconditionally write "permissive=0"
> or "permissive=1", while Dave's patch just prints nothing in enforcing
> mode.  While I can see some virtue in brevity, I think that doing
> exactly what SELinux does is probably a better choice.  For one thing,
> it'd remove doubt about whether one is looking at a log from a sepgsql
> version that logs this or one that doesn't.
>
> Other than that nitpick, I think we should just push this.
>

Here's an update that adds the "permissive=0" case.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


sepgsql_permissive_logging_v2.diff
Description: Binary data


sepgsql logging

2021-04-01 Thread Dave Page
Hi

I've been trying to figure out selinux with sepgsql (which is proving quite
difficult as there is an almost total lack of documentation/blogs etc. on
the topic) and ran into an issue. Whilst my system had selinux in enforcing
mode, I mistakenly had sepgsql in permissive mode. I created a table and
restricted access to one column to regular users using the label
system_u:object_r:sepgsql_secret_table_t:s0. Because sepgsql was in
permissive mode, my test user could still access the restricted column.
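
For reference, such a column label is applied with a SECURITY LABEL command
along these lines (a sketch, not necessarily the exact statement I ran; the
table and column names match the log output below):

-- sketch: apply the sepgsql "secret" label to a single column
SECURITY LABEL FOR selinux
    ON COLUMN t1.private
    IS 'system_u:object_r:sepgsql_secret_table_t:s0';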

Postgres logged this:

2021-03-31 17:12:29.713 BST [3917] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column
name="column private of table t1"

That's very confusing, because the norm in selinux is to log denials as if
the system were in enforcing mode, but then allow the action to proceed
anyway, when in permissive mode. For example, log entries such as this are
created when my restricted user tries to run an executable from /tmp after
running "setsebool -P user_exec_content off":

type=AVC msg=audit(1617278924.917:484): avc:  denied  { execute } for
 pid=53036 comm="bash" name="ls" dev="dm-0" ino=319727
scontext=user_u:user_r:user_t:s0 tcontext=user_u:object_r:user_tmp_t:s0
tclass=file permissive=1

The point being to let the admin know what would fail if the system were
switched to enforcing mode. Whilst that wasn't the point of what I was
trying to do, such a message would have indicated to me that I was in
permissive mode without my realising it.

It seems to me that sepgsql should also log the denial, but flag that
permissive mode is on.

Any reason not to do that?

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: sepgsql logging

2021-04-01 Thread Dave Page
On Thu, Apr 1, 2021 at 3:23 PM Tom Lane  wrote:

> Andrew Dunstan  writes:
> > On 4/1/21 8:32 AM, Dave Page wrote:
> >> It seems to me that sepgsql should also log the denial, but flag that
> >> permissive mode is on.
>
> > +1 for doing what selinux does if possible.
>
> +1.  If selinux itself is doing that, it's hard to see a reason why
> we should not; and I concur that the info is useful.
>

Thanks both. I'll take a look at the code and see if I can whip up a patch
(it'll be a week or so as I'm taking some time off for Easter).

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


More sepgsql weirdness

2021-04-13 Thread Dave Page
On a system with selinux and sepgsql configured, search path resolution
appears to fail if sepgsql is in enforcing mode, but selinux is in
permissive mode (which, as I understand it, should cause sepgsql to behave
as if it's in permissive mode anyway - and does for other operations).
Regardless of whether my understanding of the interaction of the two
permissive modes is correct, I don't believe the following should happen:

mls=# SELECT current_user;
 current_user
--------------
 postgres
(1 row)

mls=# SHOW search_path;
   search_path
-----------------
 "$user", public
(1 row)

mls=# \dn+ public
                          List of schemas
  Name  |  Owner   |  Access privileges   |      Description
--------+----------+----------------------+-------------------------
 public | postgres | postgres=UC/postgres+| standard public schema
        |          | =UC/postgres         |
(1 row)

mls=# CREATE TABLE tb_users(uid int primary key, name text, mail text,
address text, salt text, phash text);
ERROR:  no schema has been selected to create in
LINE 1: CREATE TABLE tb_users(uid int primary key, name text, mail t...
                     ^
mls=# CREATE TABLE public.tb_users(uid int primary key, name text, mail
text, address text, salt text, phash text);
CREATE TABLE

mls=# drop table tb_users;
ERROR:  table "tb_users" does not exist

mls=# drop table public.tb_users;
DROP TABLE

This is on head, pulled yesterday.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Display individual query in pg_stat_activity

2020-07-27 Thread Dave Page
Hi

On Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand 
wrote:

> Hi hackers,
>
> I've attached a patch to display individual query in the pg_stat_activity
> query field when multiple SQL statements are currently displayed.
>
> *Motivation:*
>
> When multiple statements are displayed then we don’t know which one is
> currently running.
>

I'm not sure I'd want that to happen, as it could make it much harder to
track the activity back to a query in the application layer or server logs.

Perhaps a separate field could be added for the current statement, or a
value to indicate which statement within the query string is currently
executing?

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Display individual query in pg_stat_activity

2020-07-27 Thread Dave Page
On Mon, Jul 27, 2020 at 4:28 PM Jeremy Schneider 
wrote:

> On 7/27/20 07:57, Dave Page wrote:
>
> I'm not sure I'd want that to happen, as it could make it much harder to
> track the activity back to a query in the application layer or server logs.
>
> Perhaps a separate field could be added for the current statement, or a
> value to indicate what the current statement number in the query is?
>
>
> Might be helpful to give some specifics about circumstances where strings
> can appear in pg_stat_activity.query with multiple statements.
>
> 1) First of all, IIUC multiple statements are only supported in the first
> place by the simple protocol and PLs.  Anyone using parameterized
> statements (bind variables) should be unaffected by this.
>
> 2) My read of the official pg JDBC driver is that even for batch
> operations it currently iterates and sends each statement individually. I
> don't think the JDBC driver has the capability to send multiple statements,
> so java apps using this driver should be unaffected.
>

That is just one of a number of different popular drivers of course.


>
> 3) psql -c will always send the string as a single "simple protocol"
> request.  Scripts will be impacted.
>
> 4) PLs also seem to have a code path that can put multiple statements in
> pg_stat_activity when parallel slaves are launched.  PL code will be
> impacted.
>
> 5) pgAdmin uses the simple protocol and when a user executes a block of
> statements, pgAdmin seems to send the whole block as a single "simple
> protocol" request.  Tools like pgAdmin will be impacted.
>

It does. It also prepends some queries with comments, specifically to allow
users to filter them out when they're analysing logs (a feature requested
by users, not just something we thought was a good idea). I'm assuming that
this patch would also strip those?


>
> At the application layer, it doesn't seem problematic to me if PostgreSQL
> reports each query one at a time.  IMO most people will find this to be a
> more useful behavior and they will still find their queries in their app
> code or app logs.
>

I think there are arguments to be made for both approaches.


>
> However at the PostgreSQL logging layer this is a good call-out.  I just
> did a quick test on 14devel to double-check my assumption and it does seem
> that PostgreSQL logs the entire combined query for psql -c.  I think it
> would be better for PostgreSQL to report queries individually in the log
> too - for example pgBadger summaries will be even more useful if they
> report information for each individual query rather than a single big block
> of multiple queries.
>
> Given how small this patch is, it seems worthwhile to at least investigate
> whether the logging component could be addressed just as easily.
>
> -Jeremy
>
> --
> Jeremy Schneider
> Database Engineer
> Amazon Web Services
>
>
>

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-04 Thread Dave Page
On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian  wrote:

> On Mon, Aug  3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:
> >  Hi,
> >
> > As a follow-up to bug #16570 [1] and other previous discussions
> > on the mailing-lists, I'm checking out PG13 beta for Windows
> > from:
> >  https://www.enterprisedb.com/postgresql-early-experience
> > and it ships with the same obsolete ICU 53 that was used
> > for PG 10,11,12.
> > Besides not having the latest Unicode features and fixes, ICU 53
> > ignores the BCP 47 tags syntax in collations used as examples
> > in Postgres documentation, which leads to confusion and
> > false bug reports.
> > The current version is ICU 67.
> >
> > I don't see where the suggestion to upgrade it before the
> > next PG release should be addressed but maybe some people on
> > this list do know or have the leverage to make it happen?
>
> Well, you can ask EDB about this, but perhaps they have kept the same ICU
> version so indexes will not need to be reindexed.
>

Correct - updating ICU would mean a reindex is required following any
upgrade, major or minor.

I would really like to find an acceptable solution to this, however, as it
really would be good to be able to update ICU.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-04 Thread Dave Page
On Tue, Aug 4, 2020 at 10:29 AM Magnus Hagander  wrote:

> On Tue, Aug 4, 2020 at 10:07 AM Dave Page  wrote:
>
>>
>>
>> On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian  wrote:
>>
>>> On Mon, Aug  3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:
>>> >  Hi,
>>> >
>>> > As a follow-up to bug #16570 [1] and other previous discussions
>>> > on the mailing-lists, I'm checking out PG13 beta for Windows
>>> > from:
>>> >  https://www.enterprisedb.com/postgresql-early-experience
>>> > and it ships with the same obsolete ICU 53 that was used
>>> > for PG 10,11,12.
>>> > Besides not having the latest Unicode features and fixes, ICU 53
>>> > ignores the BCP 47 tags syntax in collations used as examples
>>> > in Postgres documentation, which leads to confusion and
>>> > false bug reports.
>>> > The current version is ICU 67.
>>> >
>>> > I don't see where the suggestion to upgrade it before the
>>> > next PG release should be addressed but maybe some people on
>>> > this list do know or have the leverage to make it happen?
>>>
>>> Well, you can ask EDB about this, but perhaps the have kept the same ICU
>>> version so indexes will not need to be reindexed.
>>>
>>
>> Correct - updating ICU would mean a reindex is required following any
>> upgrade, major or minor.
>>
>> I would really like to find an acceptable solution to this however as it
>> really would be good to be able to update ICU.
>>
>
> It certainly couldn't and shouldn't be done in a minor.
>
> But doing so in v13 doesn't seem entirely unreasonable, especially given
> that I believe we will detect the requirement to reindex thanks to the
> versioning, and not just start returning invalid results (like, say, with
> those glibc updates).
>
> Would it be possible to have the installer even check if there are any icu
> indexes in the database. If there aren't, just put in the new version of
> icu. If there are, give the user a choice of the old version or new version
> and reindex?
>

That would require fairly large changes to the installer to allow it to
log in to the database server (whether that would work would depend on how
pg_hba.conf is configured), and it also assumes that the ICU ABI hasn't
changed between releases. It would also require some hacky renaming of
DLLs, as they have the version number in them.

The chances of designing, building and testing that thoroughly before v13
is released are about zero, I'd say.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-17 Thread Dave Page
On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander 
wrote:

>
>
> On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian  wrote:
>
>> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:
>> > On Tue, Aug 4, 2020 at 11:42 AM Dave Page  wrote:
>> > That would require fairly large changes to the installer to allow
>> it to
>> > login to the database server (whether that would work would
>> be dependent on
>> > how pg_hba.conf is configured), and also assumes that the ICU ABI
>> hasn't
>> > changed between releases. It would also require some hacky renaming
>> of
>> > DLLs, as they have the version number in them.
>> >
>> > I assumed it had code for that stuff already. Mainly because I assumed
>> it
>> > supported doing pg_upgrade, which requires similar things no?
>>
>
No, the installers don't support pg_upgrade directly. They ship it of
course, and the user can manually run it, but the installers won't do that,
and they have no ability to log in to a cluster except during the
post-initdb phase.


>
>> While pg_upgrade requires having the old and new cluster software in
>> place, I don't think it helps allowing different ICU versions for each
>> cluster.
>
>
> Depends on where they are installed (and disclaimer, I don't know how the
> windows installers do that). But as long as the ICU libraries are installed
> in separate locations for the two versions, which I *think* they are or at
> least used to be, it shouldn't have an effect on this in either direction.
>

They are.


>
> That argument really only holds for different versions, not for different
> clusters of the same version. But I don't think the installers (natively)
> supports multiple clusters of the same version anyway.
>

They don't. You'd need to manually init a new cluster and register a new
server instance. The installer only has any knowledge of the cluster it
sets up.


>
> The tricky thing is if you want to allow the user to *choose* which ICU
> version should be used with postgres version .  Because then the user
> might also expect an upgrade-path wherein they only upgrade the icu library
> on an existing install...
>
>
>> I guess you can argue that if you know the user is _not_ going
>> to be using pg_upgrade, then a new ICU version should be used for the
>> new cluster.
>>
>
> Yes, that's exactly the argument I meant :) If the user got the option to
> "pick version of ICU: , ", with a comment saying "pick old only
> if you plan to do a pg_upgrade based upgrade of a different cluster, or if
> this instance should participate in replication with a node using ",
> that would probably help for the vast majority of cases. And of course, if
> the installer through other options can determine with certainty that it's
> going to be running pg_upgrade for the user, then it can reword the dialog
> based on that (that is, it should still allow the user to pick the new
> version, as long as they know that their indexes are going to need
> reindexing)
>

That seems like a very hacky and extremely user-unfriendly approach. How
many users are going to understand options in the installer to deal with
that, or want to go decode the ICU filenames on their existing
installations (which our installers may not actually know about) to figure
out what their current version is?

I would suggest that the better way to handle this would be for pg_upgrade
to (somehow) check the ICU version on the old and new clusters and, if
there's a mismatch, perform a reindex of any ICU-based indexes. I suspect
that may require the server to expose the ICU version, though. That way,
the installers could freely upgrade the ICU version with a new major
release.
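
As a rough illustration of the detection side, the catalogs already record
which collations come from ICU, e.g.:

-- sketch: list ICU-provided collations and the version recorded at creation
SELECT collname, collversion
FROM pg_collation
WHERE collprovider = 'i';

Working out which indexes actually depend on those collations, and comparing
against the ICU library the server was built with, is the harder part.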

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-17 Thread Dave Page
On Mon, Aug 17, 2020 at 4:14 PM Magnus Hagander  wrote:

>
>
> On Mon, Aug 17, 2020 at 1:44 PM Dave Page  wrote:
>
>>
>>
>> On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander 
>> wrote:
>>
>>>
>>>
>>> On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian  wrote:
>>>
>>>> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:
>>>> > On Tue, Aug 4, 2020 at 11:42 AM Dave Page  wrote:
>>>> > That would require fairly large changes to the installer to allow
>>>> it to
>>>> > login to the database server (whether that would work would
>>>> be dependent on
>>>> > how pg_hba.conf is configured), and also assumes that the ICU ABI
>>>> hasn't
>>>> > changed between releases. It would also require some hacky
>>>> renaming of
>>>> > DLLs, as they have the version number in them.
>>>> >
>>>> > I assumed it had code for that stuff already. Mainly because I
>>>> assumed it
>>>> > supported doing pg_upgrade, which requires similar things no?
>>>>
>>>
>> No, the installers don't support pg_upgrade directly. They ship it of
>> course, and the user can manually run it, but the installers won't do that,
>> and have no ability to login to a cluster except during the post-initdb
>> phase.
>>
>
> Oh, I just assumed it did :)
>
> If it doesn't, I think shipping with a modern ICU is a much smaller
> problem really...
>
>
> While pg_upgrade requires having the old and new cluster software in
>>>> place, I don't think it helps allowing different ICU versions for each
>>>> cluster.
>>>
>>>
>>> Depends on where they are installed (and disclaimer, I don't know how
>>> the windows installers do that). But as long as the ICU libraries are
>>> installed in separate locations for the two versions, which I *think* they
>>> are or at least used to be, it shouldn't have an effect on this in either
>>> direction.
>>>
>>
>> They are.
>>
>
> Good. So putting both in wouldn't break things.
>
>
>
> That argument really only holds for different versions, not for different
>>> clusters of the same version. But I don't think the installers (natively)
>>> supports multiple clusters of the same version anyway.
>>>
>>
>> They don't. You'd need to manually init a new cluster and register a new
>> server instance. The installer only has any knowledge of the cluster it
>> sets up.
>>
>
> I'd say that's "unsupported enough" to not be a scenario one has to
> consider.
>

Agreed. Plus it's not really any different from running multiple clusters
on other OSs, where we're likely to be using a vendor-supplied ICU that the
user also couldn't change easily.


>
>
>
>>> The tricky thing is if you want to allow the user to *choose* which ICU
>>> version should be used with postgres version .  Because then the user
>>> might also expect an upgrade-path wherein they only upgrade the icu library
>>> on an existing install...
>>>
>>>
>>>> I guess you can argue that if you know the user is _not_ going
>>>> to be using pg_upgrade, then a new ICU version should be used for the
>>>> new cluster.
>>>>
>>>
>>> Yes, that's exactly the argument I meant :) If the user got the option
>>> to "pick version of ICU: , ", with a comment saying "pick old
>>> only if you plan to do a pg_upgrade based upgrade of a different cluster,
>>> or if this instance should participate in replication with a node using
>>> ", that would probably help for the vast majority of cases. And of
>>> course, if the installer through other options can determine with certainty
>>> that it's going to be running pg_upgrade for the user, then it can reword
>>> the dialog based on that (that is, it should still allow the user to pick
>>> the new version, as long as they know that their indexes are going to need
>>> reindexing)
>>>
>>
>> That seems like a very hacky and extremely user-unfriendly approach. How
>> many users are going to understand options in the installer to deal with
>> that, or want to go decode the ICU filenames on their existing
>> installations (which our installers may not actually know about) to figure
>> out what their current version is?
>>

Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-18 Thread Dave Page
On Mon, Aug 17, 2020 at 7:23 PM Bruce Momjian  wrote:

> On Mon, Aug 17, 2020 at 04:55:13PM +0100, Dave Page wrote:
> > That was more if the installer actually handles the whole chain. It
> clearly
> > doesn't today (since it doesn't support upgrades), I agree this might
> > definitely be overkill. But then also I don't really see the problem
> with
> > just putting a new version of ICU in with the newer versions of
> PostgreSQL.
> > That's just puts the user in the same position as they are with any
> other
> > platform wrt manual pg_upgrade runs.
> >
> > Well we can certainly do that if everyone is happy in the knowledge that
> it'll
> > mean pg_upgrade users will need to reindex if they've used ICU
> collations.
> >
> > Sandeep; can you have someone do a test build with the latest ICU please
> (for
> > background, this would be with the Windows and Mac installers)? If noone
> > objects, we can push that into the v13 builds before GA. We'd also need
> to
> > update the README if we do so.
>
> Woh, we don't have any support in pg_upgrade to reindex just indexes
> that use ICU collations, and frankly, if they have to reindex, they
> might decide that they should just do pg_dump/reload of their cluster at
> that point because pg_upgrade is going to be very slow, and they will be
> surprised.


Not necessarily. It's likely that not all indexes use ICU collations, and
you still save time loading what may be large amounts of data.

I agree, though, that it *could* be slow.


> I can see a lot more people being disappointed by this than
> will be happy to have Postgres using a newer ICU library.
>

Quite possibly, hence my hesitation to push ahead with anything more than a
simple test build at this time.


>
> Also, is it the ICU library version we should be tracking for reindex,
> or each _collation_ version?  If the latter, do we store the collation
> version for each index?
>

I wasn't aware that ICU had the concept of collation versions internally
(which Michael seems to have confirmed downthread). That would potentially
make the number of users needing a reindex even smaller, but as you point
out won't help us for years as we don't store it anyway.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Kerberos support broken on MSVC builds for Windows x64?

2020-09-01 Thread Dave Page
I was experimenting with building with MIT Kerberos support on 64-bit
Windows using MSVC and ran into a number of linker errors along the lines
of:

"C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target) (1) ->
"C:\Users\dpage\Downloads\postgresql-12.4\zic.vcxproj" (default target) (2)
->
(Link target) ->
  LINK : fatal error LNK1181: cannot open input file
'C:\Progra~1\MIT\Kerberos\lib.obj'
[C:\Users\dpage\Downloads\postgresql-12.4\zic.vcxproj]

That was after I had to manually add the include and lib paths in
buildenv.pl. Diving in a bit further I found a couple of things:

1) The only buildfarm machine doing 64-bit Windows Kerberos-enabled builds
with MSVC is hamerkop. It enables it by setting the "krb5" option in
config.pl; however, as far as I can see (going back to 9.5), the option is
actually "gss". I can't see any sign in the log for the make step that it
is actually making any attempt to build with Kerberos, despite the UI
showing the icon for it.

2) I can't find anything in the MSVC build scripts in src/tools/msvc to
deal with 64-bit Kerberos builds - Solution.pm seems to unconditionally try
to link with the 32-bit libraries (e.g. lib/i386/krb5_32.lib instead of
lib/amd64/krb5_64.lib).

I'm assuming no one has tried a build with 64-bit Kerberos, or am I
missing something?

Side note: I'm not sure even a 32-bit Kerberos build will work, as
Solution.pm assumes the headers are in $self->{options}->{gss} .
'\inc\krb5'; however, in at least the latest installer from MIT they're
actually in $self->{options}->{gss} . '\include'.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-01 Thread Dave Page
On Tue, Sep 1, 2020 at 4:22 PM Dave Page  wrote:

> I was experimenting with building with MIT Kerberos support on 64 bit
> Windows using MSVC and ran into a number of linker errors along the lines
> of:
>
> "C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target) (1)
> ->
> "C:\Users\dpage\Downloads\postgresql-12.4\zic.vcxproj" (default target)
> (2) ->
> (Link target) ->
>   LINK : fatal error LNK1181: cannot open input file
> 'C:\Progra~1\MIT\Kerberos\lib.obj'
> [C:\Users\dpage\Downloads\postgresql-12.4\zic.vcxproj]
>
> That was after I had to manually add the include and lib paths in
> buildenv.pl. Diving in a bit further I found a couple of things:
>
> 1) The only buildfarm machine doing 64bit Windows Kerberos enabled builds
> with MSVC is hammerkop. It enables it by setting the "krb5" option in
> config.pl, however, as far as I can see (going back to 9.5), the option
> is actually "gss". I can't see any sign in the log for the make step that
> it actually is making any attempt to build with Kerberos, despite the UI
> showing the icon for it.
>
> 2) I can't find anything in the MSVC build scripts in src/tools/msvc to
> deal with 64bit Kerberos builds - Solution.pm seems to unconditionally try
> to link with the 32bit libraries (e.g. lib/i386/krb5_32.lib instead of
> lib/amd64/krb5_64.lib).
>
> I'm assuming noone has tried a build with 64bit Kerberos, or am I missing
> something?
>
> Sidenote: I'm not sure even a 32bit Kerberos build will work, as
> Solution.pm assumes the headers are in $self->{options}->{gss} .
> '\inc\krb5', however in at least the latest installer from MIT they're
> actually in $self->{options}->{gss} . '\include'.
>

Attached is a patch against 12.4 for the build system in case anyone wants
to play (I'll do it properly against the head branch later). I'm guessing
this will work for < 12, as with 12 I'm now getting the following which
looks like it's related to GSS encryption:

"C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target) (1) ->
"C:\Users\dpage\Downloads\postgresql-12.4\pgcrypto.vcxproj" (default
target) (2) ->
"C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj" (default
target) (3) ->
(Link target) ->
  be-secure-gssapi.obj : error LNK2019: unresolved external symbol setenv
referenced in function secure_open_gssapi
[C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
  .\Release\postgres\postgres.exe : fatal error LNK1120: 1 unresolved
externals [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]

I'll dig into that some more.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


msvc64-kerberos.diff
Description: Binary data


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-02 Thread Dave Page
Hi

On Tue, Sep 1, 2020 at 5:29 PM Stephen Frost  wrote:

> Greetings,
>
> * Dave Page (dp...@pgadmin.org) wrote:
> > Attached is a patch against 12.4 for the build system in case anyone
> wants
> > to play (I'll do it properly against the head branch later). I'm guessing
> > this will work for < 12, as with 12 I'm now getting the following which
> > looks like it's related to GSS encryption:
> >
> > "C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target)
> (1) ->
> > "C:\Users\dpage\Downloads\postgresql-12.4\pgcrypto.vcxproj" (default
> > target) (2) ->
> > "C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj" (default
> > target) (3) ->
> > (Link target) ->
> >   be-secure-gssapi.obj : error LNK2019: unresolved external symbol setenv
> > referenced in function secure_open_gssapi
> > [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
> >   .\Release\postgres\postgres.exe : fatal error LNK1120: 1 unresolved
> > externals [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
> >
> > I'll dig into that some more.
>
> Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
> used under Windows.  If you're successful, I don't have any issue
> helping to make that work, though I'm curious if you're trying to build
> with MIT KfW (which is rather ancient these days, being based on krb5
> 1.13 and not updated since..) or with a more current release...?
>

I'm currently using the KFW 4.1 build from MIT. I've tried building it
myself but it requires a very old toolchain (which defeated the point of
what I was trying to do at the time).

I haven't yet looked to see if the source for krb5-1.8.2 will build or even
has the right bits in it for Windows - as I'm sure you know MIT seem to
maintain an entirely different version for Windows for which I assume
there's a reason.


>
> Of course, it'd be good to get a buildfarm animal in place that's
> actually testing this if we're going to make it work.
>

Fixing the config on hamerkop should deal with that I think. Though I am
confused as to why the Buildfarm UI thinks it has Kerberos support enabled
- did we change the config parameter from krb5 to gss some time prior to
9.5? If so, that could explain it.


>
> Regarding the setenv() call, should be able to use pgwin32_putenv() in
> place on Windows, I'd think..?
>

Right, I imagine so. It's on my todo...


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-02 Thread Dave Page
Hi

On Wed, Sep 2, 2020 at 9:05 AM Dave Page  wrote:

>
>> Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
>> used under Windows.
>
>
Here's a patch to make it build successfully (against head). I believe the
changes to Solution.pm should be back patched all the way, and the rest to
12.

Testing, however, has been more problematic - I suspect at least partly
because of my Kerberos newbie-ness. I have a test server in an Ubuntu VM,
which I've used quite successfully to authenticate against another VM
running PG 12 on Ubuntu, from both Ubuntu and Windows clients. Using that,
but with a Windows client running MIT Kerberos, I find that getting a
ticket takes a good 30 seconds or so. Postgres also seems to get its ticket
successfully via the keytab file:

C:\pg>"c:\Program Files\MIT\Kerberos\bin\klist.exe"
Ticket cache: API:Initial default ccache
Default principal: dp...@pgadmin.org

Valid starting       Expires              Service principal
09/02/20 15:06:49  09/03/20 01:06:49  krbtgt/pgadmin@pgadmin.org
renew until 09/03/20 15:06:31
09/02/20 15:07:06  09/03/20 01:06:49  postgres/win-ilt1arj8a9c@
renew until 09/03/20 15:06:31
09/02/20 15:07:06  09/03/20 01:06:49  postgres/win-ilt1arj8...@pgadmin.org
renew until 09/03/20 15:06:31

However, if I try to log in using host + gss in the pg_hba.conf file, I
then get:

C:\pg>bin\psql postgres
psql: error: could not connect to server: SSPI continuation error: No
credentials are available in the security package
 (8009030e)

If I try to use hostgssenc + gss, it looks like it's not even trying to
encrypt:

C:\pg>bin\psql postgres
psql: error: could not connect to server: FATAL:  no pg_hba.conf entry for
host "::1", user "dpage", database "postgres", SSL off

Any ideas?


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


msvc64-kerberos-v2.diff
Description: Binary data


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-02 Thread Dave Page
On Wed, Sep 2, 2020 at 2:47 PM Stephen Frost  wrote:

> Greetings,
>
> * Dave Page (dp...@pgadmin.org) wrote:
> > On Tue, Sep 1, 2020 at 5:29 PM Stephen Frost  wrote:
> > > * Dave Page (dp...@pgadmin.org) wrote:
> > > > Attached is a patch against 12.4 for the build system in case anyone
> > > wants
> > > > to play (I'll do it properly against the head branch later). I'm
> guessing
> > > > this will work for < 12, as with 12 I'm now getting the following
> which
> > > > looks like it's related to GSS encryption:
> > > >
> > > > "C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target)
> > > (1) ->
> > > > "C:\Users\dpage\Downloads\postgresql-12.4\pgcrypto.vcxproj" (default
> > > > target) (2) ->
> > > > "C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj" (default
> > > > target) (3) ->
> > > > (Link target) ->
> > > >   be-secure-gssapi.obj : error LNK2019: unresolved external symbol
> setenv
> > > > referenced in function secure_open_gssapi
> > > > [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
> > > >   .\Release\postgres\postgres.exe : fatal error LNK1120: 1 unresolved
> > > > externals [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
> > > >
> > > > I'll dig into that some more.
> > >
> > > Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
> > > used under Windows.  If you're successful, I don't have any issue
> > > helping to make that work, though I'm curious if you're trying to build
> > > with MIT KfW (which is rather ancient these days, being based on krb5
> > > 1.13 and not updated since..) or with a more current release...?
> >
> > I'm currently using the KFW 4.1 build from MIT. I've tried building it
> > myself but it requires a very old toolchain (which defeated the point of
> > what I was trying to do at the time).
>
> > I haven't yet looked to see if the source for krb5-1.8.2 will build or
> even
> > has the right bits in it for Windows - as I'm sure you know MIT seem to
> > maintain an entirely different version for Windows for which I assume
> > there's a reason.
>
> I'm a bit confused as to why you'd consider trying 1.8.2- did you mean
> 1.18.2 there, perhaps..?


Yes, typo.


> That's what I would think to try, since, as I
> understand it from following the Kerberos Dev list (which is pretty
> responsive...) has been updated to work with newer Windows build
> toolchains.
>

OK, will try to do that tomorrow.

Thanks!


>
> > > Of course, it'd be good to get a buildfarm animal in place that's
> > > actually testing this if we're going to make it work.
> >
> > Fixing the config on hamerkop should deal with that I think. Though I am
> > confused as to why the Buildfarm UI thinks it has Kerberos support
> enabled
> > - did we change the config parameter from krb5 to gss some time prior to
> > 9.5? If so, that could explain it.
>
> Looks to be run by SRA OSS..  Perhaps reaching out to them to ask about
> it would help?
>
> > > Regarding the setenv() call, should be able to use pgwin32_putenv() in
> > > place on Windows, I'd think..?
> >
> > Right, I imagine so. It's on my todo...
>
> Alright.
>
> Thanks,
>
> Stephen
>


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-02 Thread Dave Page
Hi

On Wed, Sep 2, 2020 at 7:08 PM Stephen Frost  wrote:

> Greetings,
>
> * Dave Page (dp...@pgadmin.org) wrote:
> > On Wed, Sep 2, 2020 at 9:05 AM Dave Page  wrote:
> > >> Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
> > >> used under Windows.
> >
> > Here's a patch to make it build successfully (against head). I believe
> the
> > changes to Solution.pm should be back patched all the way, and the rest
> to
> > 12.
>
> Looks about right..  I might pull out the code from both places that are
> setting that variable into a dedicated function to be used from both
> though.
>
> > Testing however, has been more problematic - I suspect at least partly
> > because of my Kerberos newbie-ness. I have a test server in an Ubuntu VM,
> > which I've used quite successfully to authenticate against another VM
> > running PG 12  on Ubuntu, from both Ubuntu and Windows clients. Using
> that,
> > but with a Windows client running MIT Kerberos I find that getting a
> ticket
> > takes a good 30 seconds or so. Postgres also seems to get it's ticket
> > successfully via the keytab file:
>
> So, from Windows clients that don't have MIT KfW installed, you're able
> to authenticate against PG 12 on Ubuntu using Kerberos, right..?  With
> PG built using SSPI on the client side, I'm guessing?
>

Yes, with the workstation configured to authenticate Windows login with
Kerberos (e.g.
https://www.garyhawkins.me.uk/non-domain-mit-kerberos-logins-on-windows-10/)


>
> Kerberos uses reverse DNS to try to check what hostname to use when
> requesting a ticket, I wonder if what you're seeing here is a delay due
> to there not being reverse DNS functional in the environment, perhaps..?
>

Ahh, probably. I'm just using hosts files on these VMs, but I'll bet I
forgot to add the client to the KDC's file. Will try that tomorrow.


>
> > C:\pg>"c:\Program Files\MIT\Kerberos\bin\klist.exe"
> > Ticket cache: API:Initial default ccache
> > Default principal: dp...@pgadmin.org
> >
> > Valid starting ExpiresService principal
> > 09/02/20 15:06:49  09/03/20 01:06:49  krbtgt/pgadmin@pgadmin.org
> > renew until 09/03/20 15:06:31
> > 09/02/20 15:07:06  09/03/20 01:06:49  postgres/win-ilt1arj8a9c@
> > renew until 09/03/20 15:06:31
> > 09/02/20 15:07:06  09/03/20 01:06:49  postgres/
> win-ilt1arj8...@pgadmin.org
> > renew until 09/03/20 15:06:31
> >
> > However, If I try to login using host + gss in the pg_hba.conf file, I
> then
> > get:
> >
> > C:\pg>bin\psql postgres
> > psql: error: could not connect to server: SSPI continuation error: No
> > credentials are available in the security package
> >  (8009030e)
>
> This is with PG compiled with GSS on the client side and using MIT KfW?
>

Yes.


>
> This particular error from SSPI seems to possibly be coming from the
> constrained delegation system.  While not directly about this issue,
> Microsoft has some documentation about configuring constrained
> delegation (and how to turn it off) here:
>
>
> https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/Set-up-hosts-for-live-migration-without-Failover-Clustering
>
> Now, we aren't actually delegating credentials here, so it seems a bit
> odd for it to be complaining about that, but perhaps it's throwing this
> error because the MIT KfW library has no clue about constrained
> delegation and therefore wouldn't be trying to enforce it.
>

OK, I'll look into that.


>
> > If I try to use hostgssenc + gss, it looks like it's not even trying to
> > encrypt:
> >
> > C:\pg>bin\psql postgres
> > psql: error: could not connect to server: FATAL:  no pg_hba.conf entry
> for
> > host "::1", user "dpage", database "postgres", SSL off
> >
> > Any ideas?
>
> If it's not trying then I would be suspicious that the
> gss_acquire_creds() call is saying that there isn't a credential cache,
> though that would be a bit odd given that klist seems to be working.
>
> Would certainly be interesting to see if 1.18.2 changes anything in this
> regard.
>

I'll let you know how that goes. Thanks for the tips!

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-03 Thread Dave Page
On Wed, Sep 2, 2020 at 5:21 PM Dave Page  wrote:

>
>
> On Wed, Sep 2, 2020 at 2:47 PM Stephen Frost  wrote:
>
>> Greetings,
>>
>> * Dave Page (dp...@pgadmin.org) wrote:
>> > On Tue, Sep 1, 2020 at 5:29 PM Stephen Frost 
>> wrote:
>> > > * Dave Page (dp...@pgadmin.org) wrote:
>> > > > Attached is a patch against 12.4 for the build system in case anyone
>> > > wants
>> > > > to play (I'll do it properly against the head branch later). I'm
>> guessing
>> > > > this will work for < 12, as with 12 I'm now getting the following
>> which
>> > > > looks like it's related to GSS encryption:
>> > > >
>> > > > "C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default
>> target)
>> > > (1) ->
>> > > > "C:\Users\dpage\Downloads\postgresql-12.4\pgcrypto.vcxproj" (default
>> > > > target) (2) ->
>> > > > "C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj" (default
>> > > > target) (3) ->
>> > > > (Link target) ->
>> > > >   be-secure-gssapi.obj : error LNK2019: unresolved external symbol
>> setenv
>> > > > referenced in function secure_open_gssapi
>> > > > [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
>> > > >   .\Release\postgres\postgres.exe : fatal error LNK1120: 1
>> unresolved
>> > > > externals
>> [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
>> > > >
>> > > > I'll dig into that some more.
>> > >
>> > > Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
>> > > used under Windows.  If you're successful, I don't have any issue
>> > > helping to make that work, though I'm curious if you're trying to
>> build
>> > > with MIT KfW (which is rather ancient these days, being based on krb5
>> > > 1.13 and not updated since..) or with a more current release...?
>> >
>> > I'm currently using the KFW 4.1 build from MIT. I've tried building it
>> > myself but it requires a very old toolchain (which defeated the point of
>> > what I was trying to do at the time).
>>
>> > I haven't yet looked to see if the source for krb5-1.8.2 will build or
>> even
>> > has the right bits in it for Windows - as I'm sure you know MIT seem to
>> > maintain an entirely different version for Windows for which I assume
>> > there's a reason.
>>
>> I'm a bit confused as to why you'd consider trying 1.8.2- did you mean
>> 1.18.2 there, perhaps..?
>
>
> Yes, typo.
>
>
>> That's what I would think to try, since, as I
>> understand it from following the Kerberos Dev list (which is pretty
>> responsive...) has been updated to work with newer Windows build
>> toolchains.
>>
>
> OK, will try to do that tomorrow.
>
> Thanks!
>

OK, so 1.18.2 builds OK. It's a bit of a faff, but nothing major. It seems
to work fine as a standalone set of tools.

Of course, they've changed the installation paths again - they've dropped
the i386 and amd64 parts from the library path :-/

So having rebuilt PostgreSQL against that, I'm now in the situation where
the server never even attempts to get a ticket as far as I can see, and
psql just crashes with nothing more than a useless error in the event log:

Faulting application name: psql.exe, version: 14.0.0.20246, time stamp:
0x5f50e477
Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x
Exception code: 0xc005
Fault offset: 0x
Faulting process id: 0xd10
Faulting application start time: 0x01d681f189a17360
Faulting application path: C:\pg\bin\psql.exe
Faulting module path: unknown
Report Id: eb68d787-1c82-420d-8878-bc0648932a5d
Faulting package full name:
Faulting package-relative application ID:

So I'm going to have to break out the debugger, though I suspect this may
require more effort than I have time for right now.


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-03 Thread Dave Page
On Thu, Sep 3, 2020 at 4:15 PM Dave Page  wrote:

>
> So having rebuilt PostgreSQL against that, I'm now in the situation where
> the server never even attempts to get a ticket as far as I can see, and
> psql just crashes with nothing more than a useless error in the event log:
>
> Faulting application name: psql.exe, version: 14.0.0.20246, time stamp:
> 0x5f50e477
> Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x
> Exception code: 0xc0000005
> Fault offset: 0x
> Faulting process id: 0xd10
> Faulting application start time: 0x01d681f189a17360
> Faulting application path: C:\pg\bin\psql.exe
> Faulting module path: unknown
> Report Id: eb68d787-1c82-420d-8878-bc0648932a5d
> Faulting package full name:
> Faulting package-relative application ID:
>
> So I'm going to have to break out the debugger, though I suspect this may
> require more effort than I have time for right now.
>

Yeah, this is almost certainly well beyond what I have the time to figure
out. Happy to do any testing etc. that may be needed, but I think this
needs someone familiar with the GSS API to take the lead.

Here's what I got from psql in the debugger:

Exception thrown at 0x in psql.exe: 0xC0000005: Access
violation executing location 0x. occurred

()
krb5_64.dll!51942807()
krb5_64.dll!5194214b()
krb5_64.dll!51980611()
krb5_64.dll!519766cb()
krb5_64.dll!519670ff()
gssapi64.dll!51bb1839()
gssapi64.dll!51bb48e4()
gssapi64.dll!51bb4575()
gssapi64.dll!51b993df()
libpq.dll!pqsecure_open_gss(pg_conn * conn) Line 632
at
c:\users\dpage\downloads\postgresql\src\interfaces\libpq\fe-secure-gssapi.c(632)
libpq.dll!PQconnectPoll(pg_conn * conn) Line 3173
at
c:\users\dpage\downloads\postgresql\src\interfaces\libpq\fe-connect.c(3173)
libpq.dll!connectDBComplete(pg_conn * conn) Line 2187
at
c:\users\dpage\downloads\postgresql\src\interfaces\libpq\fe-connect.c(2187)
libpq.dll!PQconnectdbParams(const char * const * keywords, const char *
const * values, int expand_dbname) Line 655
at
c:\users\dpage\downloads\postgresql\src\interfaces\libpq\fe-connect.c(655)
psql.exe!main(int argc, char * * argv) Line 266
at c:\users\dpage\downloads\postgresql\src\bin\psql\startup.c(266)
[External Code]

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: [PATCH] Proposal for HIDDEN/INVISIBLE column

2021-10-14 Thread Dave Page
On Thu, Oct 14, 2021 at 2:32 PM Gilles Darold  wrote:

> On 14/10/2021 at 14:28, Pavel Stehule wrote:
>
>
>
> On Thu, 14 Oct 2021 at 14:13, Vik Fearing 
> wrote:
>
>> On 10/14/21 1:47 PM, Aleksander Alekseev wrote:
>> > Hi Gilles,
>> >
>> >> Any though and interest in this feature?
>> >
>> > Personally, I wouldn't call this feature particularly useful. `SELECT
>> > *` is intended for people who are working with DBMS directly e.g. via
>> > psql and want to see ALL columns.
>>
>> I disagree strongly with this.  It is really annoying when working
>> interactively with psql on a table that has a PostGIS geometry column,
>> or any other large blobby type column.
>>
>> I have not looked at the patch, but +1 for the feature.
>>
>
> Wouldn't it be better to redefine some output strategies for certain types?
>
> I can agree that in some environments the proposed feature can be nice,
> but it can be a strong footgun too.
>
> Maybe such unwieldy data could be filtered in psql instead, which might be
> a better solution. I agree that printing long geometry values in psql is
> usually useless.
>
>
> Pavel, this doesn't concern only output but input too - think about
> INSERT or COPY without a column list. We can add such a filter in psql, but
> what about other clients? They would all have to implement their own
> filtering method. I think the HIDDEN attribute provides a common, basic way
> to implement that in all client applications.
>

I like the idea - being able to hide computed columns such as tsvectors
from CRUD queries by default, for example, seems like it would be very nice.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: VS2022: Support Visual Studio 2022 on Windows

2021-11-24 Thread Dave Page
Hi

On Wed, Nov 24, 2021 at 9:12 AM Hans Buschmann  wrote:

> Hello Michael,
>
> thanks for your hard work and quick response!
> It is very convenient to only use VS2022 for Windows from now on...
>
> >Diff unrelated to your patch.
>
> Sorry for the copysoft problem from the first version.
>
> >Glad to see that we should have nothing to do about locales this
> >time.  I have not tested, but I think that you covering all the areas
> >that need a refresh here.  Nice work.
>
> I think it is almost impossible to overestimate the value of such support
> from experienced hackers to others starting their journey right now...
>
> I hope I can motivate you (and other experienced hackers) to give me some
> more support on my real project arriving anytime soon. It addresses
> hex_encoding (and more), targeting mostly pg_dump, but it also requires some
> deeper knowledge of general infrastructure and building (also on Windows).
> Stay tuned!
>
> PS: Does anybody have good relations with EDB who could suggest that they
> target VS2022 as the build environment for the upcoming PG15 release?
>

That would be me...


>
> postgres=# select version ();
>   version
> ------------------------------------------------------------
>  PostgreSQL 14.1, compiled by Visual C++ build 1931, 64-bit
> (1 row)
>

It's extremely unlikely that we'd shift to such a new version for PG15. We
build many components aside from PostgreSQL, and need to use the same
toolchain for all of them (we've had very painful experiences with
mix-and-match CRT versions in the past), so it's not just PG that needs to support
VS2022 as far as we're concerned - Perl, Python, TCL, MIT Kerberos,
OpenSSL, libxml2, libxslt etc. are all built with the same toolchain for
consistency.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: VS2022: Support Visual Studio 2022 on Windows

2021-11-24 Thread Dave Page
Hi

On Wed, Nov 24, 2021 at 11:36 AM Michael Paquier 
wrote:

> On Wed, Nov 24, 2021 at 10:00:19AM +0000, Dave Page wrote:
> > It's extremely unlikely that we'd shift to such a new version for PG15.
> We
> > build many components aside from PostgreSQL, and need to use the same
> > toolchain for all of them (we've had very painful experiences with mix n
> > match CRT versions in the past) so it's not just PG that needs to support
> > VS2022 as far as we're concerned
>
> Yes, I can understand that upgrading the base version of VS used is a
> very difficult exercise.  I have been through that, on Windows for
> Postgres..  As well as for the compilation of all its dependencies.
>
> > - Perl, Python, TCL, MIT Kerberos,
> > OpenSSL, libxml2, libxslt etc. are all built with the same toolchain for
> > consistency.
>
> Dave, do you include LZ4 in 14?  Just asking, as a matter of
> curiosity.
>

Yes we do :-)

C:\Program Files\PostgreSQL\14\bin>pg_config
BINDIR = C:/PROGRA~1/POSTGR~1/14/bin
DOCDIR = C:/PROGRA~1/POSTGR~1/14/doc
HTMLDIR = C:/PROGRA~1/POSTGR~1/14/doc
INCLUDEDIR = C:/PROGRA~1/POSTGR~1/14/include
PKGINCLUDEDIR = C:/PROGRA~1/POSTGR~1/14/include
INCLUDEDIR-SERVER = C:/PROGRA~1/POSTGR~1/14/include/server
LIBDIR = C:/Program Files/PostgreSQL/14/lib
PKGLIBDIR = C:/Program Files/PostgreSQL/14/lib
LOCALEDIR = C:/PROGRA~1/POSTGR~1/14/share/locale
MANDIR = C:/Program Files/PostgreSQL/14/man
SHAREDIR = C:/PROGRA~1/POSTGR~1/14/share
SYSCONFDIR = C:/Program Files/PostgreSQL/14/etc
PGXS = C:/Program Files/PostgreSQL/14/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = --enable-thread-safety --enable-nls --with-ldap
--with-ssl=openssl --with-uuid --with-libxml --with-libxslt --with-lz4
--with-icu --with-tcl --with-perl --with-python
CC = not recorded
CPPFLAGS = not recorded
CFLAGS = not recorded
CFLAGS_SL = not recorded
LDFLAGS = not recorded
LDFLAGS_EX = not recorded
LDFLAGS_SL = not recorded
LIBS = not recorded
VERSION = PostgreSQL 14.1

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: GSoC proposal for pgAdmin 4 bytea support

2019-04-01 Thread Dave Page
Hi

On Mon, Apr 1, 2019 at 3:12 AM Haoran Yu  wrote:

> Dear PostgreSQL community,
>
> I have submitted a proposal for the project pgAdmin 4 bytea support. The
> project discusses storing media content (images, audio, video) as bytea.
> However, I have a quick question. What does bytea data look like typically
> when storing media content? What I had in mind is, media contents that uses
> MIME type, which are rendered as part of HTML. For example, the following
> is rendered as a red dot:
>
> 'data:image/png;base64,iVBORw0KGgoNSUhEUgUA
> AAAFCAYAAACNbyblHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO
> 9TXL0Y4OHwBJRU5ErkJggg==’
>
> This string is decoded to bytea, and I stored it in a bytea column.
>
> What are some other examples of using bytea to store media content, not
> necessarily using the MIME type? Is there a way to detect the type of these
> media (audio, image) stored in bytea?
>

When I have stored small media items in bytea columns in the past, I just
stored the data. I vaguely recall I did store the mime type in another
column, but that may not be the case in all scenarios (e.g. when a system
is designed to store only PNGs). I think you should assume it's raw data
only, and try to determine the file type by examining the data;

e.g.:

PNG files have an 8 byte signature:
http://www.libpng.org/pub/png/spec/1.2/PNG-Structure.html
MPEG files have identifying information in the frame header that you may be
able to use: http://mpgedit.org/mpgedit/mpeg_format/MP3Format.html
JPEG images have identifying markers:
https://en.wikipedia.org/wiki/JPEG_File_Interchange_Format

etc.
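
For illustration, here is a minimal sketch in C of that kind of signature
sniffing (the function name and the handful of formats checked are just
examples to show the idea, not pgAdmin code):

#include <stddef.h>
#include <string.h>

/*
 * Guess a media type from the first few bytes of data already fetched
 * from a bytea column. Only a few well-known signatures are checked;
 * anything else is reported as generic binary data.
 */
static const char *
guess_media_type(const unsigned char *data, size_t len)
{
    static const unsigned char png_sig[8] =
        {0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};

    if (len >= 8 && memcmp(data, png_sig, 8) == 0)
        return "image/png";             /* PNG 8-byte signature */
    if (len >= 3 && data[0] == 0xFF && data[1] == 0xD8 && data[2] == 0xFF)
        return "image/jpeg";            /* JPEG SOI marker */
    if (len >= 3 && memcmp(data, "ID3", 3) == 0)
        return "audio/mpeg";            /* MP3 with an ID3v2 tag */
    if (len >= 2 && data[0] == 0xFF && (data[1] & 0xE0) == 0xE0)
        return "audio/mpeg";            /* raw MPEG audio frame sync */
    return "application/octet-stream";
}

The same approach extends to other formats by adding more signatures.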


> Another question I had is, I read that there are performance-related
> issues for storing media in bytea. Are there practical ways to store bytea
> data that does not face performance-related issues? For example, storing
> large media content using multiple bytea parts, and reassembling them
> together once retrieved from the database?
>

Not that I'm aware of. For larger objects, most people store them
externally (which of course loses ACID properties). There are certainly
applications for storing smaller objects directly in the database though -
and some folks have done work in the past with index types and
operators/functions for finding and comparing images for example, so there
are also benefits other than ACID to storing data in this way.

BTW; for pgAdmin related GSoC questions, you'd do better to ask on
pgadmin-hack...@postgresql.org.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Animals baiji, mastodon and narwhal

2018-01-11 Thread Dave Page
Due to circumstances beyond my control, I've had to shut down the ageing
Windows buildfarm animals in $SUBJECT.

I hope to restart them at some point, but it might be some time.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] FOSDEM PGDay_2018_Developer_Meeting notes

2018-02-09 Thread Dave Page
On Fri, Feb 9, 2018 at 8:55 AM, Pantelis Theodosiou 
wrote:

> On Tue, Feb 6, 2018 at 10:07 AM, Stephen Frost  wrote:
>
>>
>> That was also what seemed to be the consensus coming out of the FOSDEM
>> Developer meeting (notes here:
>> https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2018_Developer_Meeting).
>>
>>
> In the notes there is this, which confused me:
>
>
> > SIDE TOPIC:
> >
> > Release date for PostgreSQL 13 agreed: Friday 13th September 2019!!
>
>
> Isn't Postgres 12 to be released in 2019? And 13 in 2020?
>
>
It was a joke in the meeting.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [GSOC 18] Performance Farm Project——Initialization Project

2018-03-11 Thread Dave Page
Hi

Maybe I’m missing something (I’ve been offline a lot recently for
unavoidable reasons), but the perf farm project already has a Django
backend initialised and configured to work with community auth, on
community infrastructure.

https://git.postgresql.org/gitweb/?p=pgperffarm.git;a=summary

On Sunday, March 11, 2018, Hongyuan Ma  wrote:

> Hello, mark
> I initialized a Django project and imported the Django REST Framework. Its
> github address is: https://github.com/PGPerfFarm/server-code
> I created some model classes and also wrote scripts in the dbtools folder
> to import simulation data for development. I am hesitant to use admin or
> xadmin as the administrator's backend for the site (I prefer to use xadmin).
>
> I also initialized the website's front-end application. Here is its
> address: https://github.com/PGPerfFarm/front-end-code.git
> I wrote the header component as shown:
>
> I hope this effect can enhance the future user experience:).
> This application uses vue.js and element-ui, but if you insist on using
> other js libraries or not using js, please let me know. I will empty this
> project and then rewrite it as you wish.
>
> My next step is to determine the necessary api interface and the
> corresponding components of the front-end application. Then implement them
> one by one.
> Finally, as you can see, I created an organization named PGPerfFarm on
> github without permission. I sent you an invitation letter, and after you
> join, I am willing to hand over the administrator of the organization to
> you.
>
>
> Regards,
> Hongyuan Ma (cs_maleica...@163.com)
>
>
>
>


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: [GSOC 18] Performance Farm Project——Initialization Project

2018-03-12 Thread Dave Page
Hi

On Mon, Mar 12, 2018 at 9:57 AM, Hongyuan Ma  wrote:

> Hi Dave,
> Thank you for your reminder. This is indeed my negligence.
> In fact, I have browsed the code in the pgperffarm.git
> <https://git.postgresql.org/gitweb/?p=pgperffarm.git;a=summary> repository,
> including the client folder and the web folder. However, I found that the
> web folder has some unfinished html files and does not contain model class
> files. And the project used Django 1.8 without importing the Django REST
> Framework (When I talked to Mark about the PerfFarm project, he told me he
> insisted on using Django 1.11 and considered using the REST framework). So
> I mistakenly thought that the code in the web folder had been shelved.
>

Nope, not at all. It just wasn't updated to meet the latest PG
infrastructure requirements yet (basically just the update to Django 1.11).
The rest should be fine as-is.


>
> In the newly initialized Django application, I upgraded its version and
> assigned the directory to make the project structure clearer. I want to use
> a separate front-end development approach (The front-end applications will
> use vue.js for programming.). I hope you can allow me to use it instead of
> the old one. I am willing to integrate the auth module into this new
> application as soon as possible.
>

I would much prefer to see jQuery + React, purely because there are likely
more PostgreSQL developers (particularly from the pgAdmin team) that know
those technologies. It is important to consider long term maintenance as
well as ease of initial development with any project.

Thanks.


>
> Regards,
> Hongyuan Ma (cs_maleica...@163.com)
>
> 在 2018-03-12 02:26:43,"Dave Page"  写道:
>
> Hi
>
> Maybe I’m missing something (I’ve been offline a lot recently for
> unavoidable reasons), but the perf farm project already has a Django
> backend initialised and configured to work with community auth, on
> community infrastructure.
>
> https://git.postgresql.org/gitweb/?p=pgperffarm.git;a=summary
>
> On Sunday, March 11, 2018, Hongyuan Ma  wrote:
>
>> Hello, mark
>> I initialized a Django project and imported the Django REST Framework.
>> Its github address is: https://github.com/PGPerfFarm/server-code
>> I created some model classes and also wrote scripts in the dbtools folder
>> to import simulation data for development. I am hesitant to use admin or
>> xadmin as the administrator's backend for the site (I prefer to use xadmin).
>>
>> I also initialized the website's front-end application. Here is its
>> address: https://github.com/PGPerfFarm/front-end-code.git
>> I wrote the header component as shown:
>>
>> I hope this effect can enhance the future user experience:).
>> This application uses vue.js and element-ui, but if you insist on using
>> other js libraries or not using js, please let me know. I will empty this
>> project and then rewrite it as you wish.
>>
>> My next step is to determine the necessary api interface and the
>> corresponding components of the front-end application. Then implement them
>> one by one.
>> Finally, as you can see, I created an organization named PGPerfFarm on
>> github without permission. I sent you an invitation letter, and after you
>> join, I am willing to hand over the administrator of the organization to
>> you.
>>
>>
>> Regards,
>> Hongyuan Ma (cs_maleica...@163.com)
>>
>>
>>
>>
>
>
> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EnterpriseDB UK: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
>
>
>



-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: Re: [GSOC 18] Performance Farm Project——Initialization Project

2018-03-14 Thread Dave Page
Hi

On Tue, Mar 13, 2018 at 11:31 PM, Hongyuan Ma  wrote:

> Hi Dave,
> I am willing to use React to re-create the front-end application. Since I
> plan to use separate front-end and back-end development methods, this means
> that there will be no html files in Django applications. Front-end and
> back-end applications will interact via restful api. I will use react to
> rewrite some existing html files if needed. Before initializing the react
> application, I want to learn more about your preferences for front-end
> technology.
>
>  - About React: Which version of React (15.x or 16) and react-router (2.x
> or 4) do you tend to use?
>

pgAdmin is using React 16.2.


>  - About Package Manager: Do you tend to use yarn or npm?
>

Yarn for pgAdmin, though npm would probably be a little easier for a
project running on pginfra.


>  - About the UI library: I guess you might prefer Bootstrap or
> Material-UI.
>

Bootstrap.


>
> At the same time I hope you can help me understand the functional
> requirements of this project more clearly. Here are some of my thoughts
> on PerfFarm:
>
>  - I see this comment in the client folder (In line 15 of the "pgperffarm
> \ client \ benchmarks \ pgbench.py" file):
> '''
> # TODO allow running custom scripts, not just the default
> '''
> Will PerfFarm have many test items so that the test item search function
> or even the test item sort function is expected to be provided?
>

I don't know - Tomas or Mark are probably the best folks to ask about that.
I spent some time on the initial web app work, but then ran out of spare
cycles.


>
>  - What value will be used to determine if a machine's performance is
> improving or declining?
>

Good question - I think that needs discussion, as it's not necessarily
clear-cut when you think that performance may vary at different numbers of
clients.


>  - After the user logs in to the site, I think it may be necessary to
> provide a page for the user to browse or manage his own machine. Is there
> any function that requires users to log in before they can use it?
>

Yes, that! We want the system to be self-service for registered users. The
only interaction required by admins should be to approve new machines in my
opinion.


>  - When I registered a machine on BuildFarm, I did not receive the
> confirmation email immediately but several days later. In PerfFarm, when
> a user registers a machine, will the administrator review it before sending
> a confirmation email?
>

See above. It should be automated though - i.e. if an admin approves the
registration of a new machine, the system sends a welcome email with any
required info.


>  - I see BuildFarm assigning an animal name to each registered machine. Will
> PerfFarm also have this interesting feature?
>

It was going to, but I forget what we decided on for a naming scheme!
Another discussion item I think - in the code, we can just use "Name" or
similar.


>
> My postgresql.org community account is:
>  - Username: maleicacid
>  - Email: cs_maleica...@163.com
>
> I hope to get the commit permission of the pgperffarm.git
> <https://git.postgresql.org/gitweb/?p=pgperffarm.git;a=summary>
> repository. I am willing to continue coding and complete the project on the
> existing code.Looking forward to your reply.
>

Typically PostgreSQL projects only give commit rights to developers who
have submitted a number of good quality patches (in the case of PostgreSQL
and pgAdmin, that can be over the course of years!). For the perffarm, I
expect the bar to be a little lower than that, but for now you should
assume you'll be sending patches for review.

Thanks.


>
> Best Regards,
> Hongyuan Ma (cs_maleica...@163.com)
>
>
> At 2018-03-13 03:03:08, "Dave Page"  wrote:
>
> Hi
>
> On Mon, Mar 12, 2018 at 9:57 AM, Hongyuan Ma 
> wrote:
>
>> Hi Dave,
>> Thank you for your reminder. This is indeed my negligence.
>> In fact, I have browsed the code in the pgperffarm.git
>> <https://git.postgresql.org/gitweb/?p=pgperffarm.git;a=summary> repository,
>> including the client folder and the web folder. However, I found that the
>> web folder has some unfinished html files and does not contain model class
>> files. And the project used Django 1.8 without importing the Django REST
>> Framework (When I talked to Mark about the PerfFarm project, he told me he
>> insisted on using Django 1.11 and considered using the REST framework). So
>> I mistakenly thought that the code in the web folder had been shelved.
>>
>
> Nope, not at all. It just wasn't updated to meet the latest PG
> infrastructure requirements yet (basically just the update to Django

Re: [GSOC 18] Performance Farm Project——Initialization Project

2018-03-14 Thread Dave Page


> On 14 Mar 2018, at 19:14, Mark Wong  wrote:
> 
> Hi,
> 
> I have some additional comments to a couple areas...
> 
>> On Wed, Mar 14, 2018 at 05:33:00PM -0400, Dave Page wrote:
>>> On Tue, Mar 13, 2018 at 11:31 PM, Hongyuan Ma  wrote:
>>> At the same time I hope you can help me understand the functional
>>> requirements of this project more clearly. Here are some of my thoughts
>>> on PerfFarm:
>>> 
>>> - I see this comment in the client folder (In line 15 of the "pgperffarm
>>> \ client \ benchmarks \ pgbench.py" file):
>>> '''
>>> # TODO allow running custom scripts, not just the default
>>> '''
>>> Will PerfFarm have many test items so that the test item search function
>>> or even the test item sort function is expected to be provided?
>>> 
>> 
>> I don't know - Tomas or Mark are probably the best folks to ask about that.
>> I spent some time on the initial web app work, but then ran out of spare
>> cycles.
> 
> Yes, there is potential for the number of tests to change over time.
> (Maybe a test is less relevant, maybe there is a more useful test down
> the road.)
> 
> I haven't thought too much about sorting and searching functions.  But
> right now there is just pgbench with 6 basic configurations: read/write,
> read-only, and a low, medium, and high data set size for each of those.
> 
>>> - What value will be used to determine if a machine's performance is
>>> improving or declining?
>>> 
>> 
>> Good question - I think that needs discussion, as it's not necessarily
>> clear-cut when you think that performance may vary at different numbers of
>> clients.
> 
> My thought here is that each test should provide a single metric,
> defined by the test itself, so expect this to be defined in the test
> results sent to the web site.
> 
> I do expect that all of the test results should be archived somehow and
> retrievable.  But the idea I have is to have a generic high level view
> that all of the different tests can share.
> 
>>> - I see BuildFarm assigning an animal name to each registered machine. Will
>>> PerfFarm also have this interesting feature?
>>> 
>> 
>> It was going to, but I forget what we decided on for a naming scheme!
>> Another discussion item I think - in the code, we can just use "Name" or
>> similar.
> 
> Since the buildfarm is composed of animals, I thought plants would be a
> complementary scheme?  I'm also not sure if this was discussed
> previously...

For some reason I was thinking fish!


Re: Fix a typo in pg_rotate_logfile

2024-02-14 Thread Dave Page
Hi

On Mon, 12 Feb 2024 at 21:31, Bharath Rupireddy <
bharath.rupireddyforpostg...@gmail.com> wrote:

> On Tue, Feb 13, 2024 at 2:29 AM Daniel Gustafsson  wrote:
> >
> > On that note though, we might want to consider just dropping it
> altogether in
> > v17 (while fixing the incorrect hint in backbranches)?  I can't imagine
> > adminpack 1.0 being in heavy use today, and skimming pgAdmin code it
> seems it's
> > only used in pgAdmin3 and not 4. Maybe it's time to simply drop old code?
>
> https://codesearch.debian.net/search?q=pg_logfile_rotate&literal=1
> shows no users for it though. There's pgadmin3 using it
>
> https://github.com/search?q=repo%3Apgadmin-org%2Fpgadmin3%20pg_logfile_rotate&type=code
> ,
> however the repo is archived. Surprisingly, core has to maintain the
> old code needed for adminpack 1.0 - pg_rotate_logfile_old SQL function
> and pg_rotate_logfile function in signalfuncs.c. These things could
> have been moved to adminpack.c back then and pointed CREATE FUNCTION
> pg_catalog.pg_logfile_rotate() to use it from adminpack.c. If we
> decide to remove adminpack 1.0 version completely, the 1.0 functions
> pg_file_read, pg_file_length and pg_logfile_rotate will also go away
> making adminpack code simpler.
>
> Having said that, it's good to hear from others, preferably from
> pgadmin developers - added Dave Page (dp...@pgadmin.org) in here for
> inputs.
>

As it happens we're currently implementing a redesigned version of that
functionality from pgAdmin III in pgAdmin 4. However, we are not using
adminpack for it.

FWIW, the reason for the weird naming is that originally all the
functionality for reading/managing files was added entirely as the
adminpack extension. It was only later that some of the functionality was
moved into core, and renamed along the way (everyone likes blue for their
bikeshed right?). The old functions (albeit, rewritten to use the new core
functions) were kept in adminpack for backwards compatibility.

That said, pgAdmin III has been out of support for many years, and as far
as I know, it (and similarly old versions of EDB's PEM which was based on
it) were the only consumers of adminpack. I would not be sad to see it
removed entirely - except for the fact that I fondly remember being invited
to join -core immediately after a heated discussion with Tom about it!

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Remove 'htmlhelp' documentat format (was meson documentation build open issues)

2023-03-28 Thread Dave Page
On Tue, 28 Mar 2023 at 10:46, Peter Eisentraut <
peter.eisentr...@enterprisedb.com> wrote:

> On 24.03.23 17:58, Andres Freund wrote:
> > On 2023-03-24 11:59:23 +0100, Peter Eisentraut wrote:
> >> Another option here is to remove support for htmlhelp.
> >
> > That might actually be the best path - it certainly doesn't look like
> anybody
> > has been actively using it. Or otherwise somebody would have complained
> about
> > there not being any instructions on how to actually compile a .chm file.
> And
> > perhaps complained that it takes next to forever to build.
> >
> > I also have the impression that people don't use the .chm stuff much
> anymore,
> > but that might just be me not using windows.
>
> I think in ancient times, pgadmin used it for its internal help.
>

Yes, very ancient :-). We use Sphinx now.


>
> But I have heard less about htmlhelp over the years than about the info
> format.
>
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: When to drop src/tools/msvc support

2023-04-10 Thread Dave Page
On Mon, 10 Apr 2023 at 18:34, Robert Haas  wrote:

> On Mon, Apr 10, 2023 at 12:56 PM Tom Lane  wrote:
> > Robert Haas  writes:
> > > However, if this is the direction we're going, we probably need to
> > > give pgsql-packagers a heads up ASAP, because anybody who is still
> > > relying on the MSVC system to build Windows binaries is presumably
> > > going to need some time to adjust. If we rip out the build system
> > > somebody is using a couple of weeks before beta, that might make it
> > > difficult for that person to get the beta out promptly. And I think
> > > there's probably more than just EDB who would be in that situation.
> >
> > Oh ... that's a good point.  Is there anyone besides EDB shipping
> > MSVC-built executables?  Would it even be practical to switch to
> > meson with a month-or-so notice?  Seems kind of tight, and it's
> > not like the packagers volunteered to make this switch.
>
> I can't really speak to those questions with confidence.
>
> Perhaps instead of telling pgsql-packagers what we're doing, we could
> instead ask them if it would work for them if we did XYZ. Then we
> could use that information to inform our decision-making.


Projects other than the EDB installers use the MSVC build system - e.g.
pgAdmin uses its own builds of libpq and other tools (psql, pg_dump, etc.)
that are pretty heavily baked into a fully automated build system (even the
build servers and all their requirements are baked into Ansible).

Changing that lot would be non-trivial, though certainly possible, and I
suspect we’re not the only ones doing that sort of thing.

-- 
-- 
Dave Page
https://pgsnake.blogspot.com

EDB Postgres
https://www.enterprisedb.com


Re: When to drop src/tools/msvc support

2023-04-11 Thread Dave Page
On Tue, 11 Apr 2023 at 08:09, Magnus Hagander  wrote:

> On Tue, Apr 11, 2023 at 12:27 AM Andres Freund  wrote:
> >
> > Hi,
> >
> > On 2023-04-10 19:55:35 +0100, Dave Page wrote:
> > > Projects other than the EDB installers use the MSVC build system - e.g.
> > > pgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump
> etc)
> > > that are pretty heavily baked into a fully automated build system
> (even the
> > > build servers and all their requirements are baked into Ansible).
> > >
> > > Changing that lot would be non-trivial, though certainly possible, and
> I
> > > suspect we’re not the only ones doing that sort of thing.
> >
> > Do you have a link to the code for that, if it's open? Just to get an
> > impression for how hard it'd be to switch over?
>
>
> The pgadmin docs/readme refers to
> https://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32
>
> It clearly doesn't have the full automation stuff, but appears to have
> the parts about building the postgres dependency.
>

Yeah, that's essentially the manual process, though I haven't tested it in
a while. The Ansible stuff is not currently public. I suspect (or rather,
hope) that we can pull in all the additional packages required using
Chocolatey, which shouldn't be too onerous.

Probably my main concern is whether the Meson build can use the same version
of the VC++ compiler that we use (v14), which is carefully matched for
compatibility with all the various components, just in case anything passes
CRT pointers around. Python is the one thing we don't build ourselves on
Windows and the process will build modules like gssapi and psycopg (which
links with libpq of course), so we're basically following what they use.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: When to drop src/tools/msvc support

2023-04-11 Thread Dave Page
On Tue, 11 Apr 2023 at 11:58, Andrew Dunstan  wrote:

>
> On 2023-04-11 Tu 04:05, Dave Page wrote:
>
>
>
> On Tue, 11 Apr 2023 at 08:09, Magnus Hagander  wrote:
>
>> On Tue, Apr 11, 2023 at 12:27 AM Andres Freund 
>> wrote:
>> >
>> > Hi,
>> >
>> > On 2023-04-10 19:55:35 +0100, Dave Page wrote:
>> > > Projects other than the EDB installers use the MSVC build system -
>> e.g.
>> > > pgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump
>> etc)
>> > > that are pretty heavily baked into a fully automated build system
>> (even the
>> > > build servers and all their requirements are baked into Ansible).
>> > >
>> > > Changing that lot would be non-trivial, though certainly possible,
>> and I
>> > > suspect we’re not the only ones doing that sort of thing.
>> >
>> > Do you have a link to the code for that, if it's open? Just to get an
>> > impression for how hard it'd be to switch over?
>>
>>
>> The pgadmin docs/readme refers to
>> https://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32
>>
>> It clearly doesn't have the full automation stuff, but appears to have
>> the parts about building the postgres dependency.
>>
>
> Yeah, that's essentially the manual process, though I haven't tested it in
> a while. The Ansible stuff is not currently public. I suspect (or rather,
> hope) that we can pull in all the additional packages required using
> Chocolatey which shouldn't be too onerous.
>
> Probably my main concern is that the Meson build can use the same version
> of the VC++ compiler that we use (v14), which is carefully matched for
> compatibility with all the various components, just in case anything passes
> CRT pointers around. Python is the one thing we don't build ourselves on
> Windows and the process will build modules like gssapi and psycopg (which
> links with libpq of course), so we're basically following what they use.
>
>
>
> For meson you just need to do "pip install meson ninja" in your python
> distro and you should be good to go (they will be installed in python's
> Scripts directory). Don't use chocolatey to install meson/ninja - I ran
> into issues doing that.
>
> AFAICT meson will use whatever version of VC you have installed, although
> I have only been testing with VC2019.
>
OK, that sounds easy enough then (famous last words!)

Thanks!

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: When to drop src/tools/msvc support

2023-04-11 Thread Dave Page
On Tue, 11 Apr 2023 at 13:52, Jonathan S. Katz  wrote:

> On 4/11/23 7:54 AM, Dave Page wrote:
> >
> >
> > On Tue, 11 Apr 2023 at 11:58, Andrew Dunstan  wrote:
> >
> > For meson you just need to to "pip install meson ninja" in your
> > python distro and you should be good to go (they will be installed
> > in python's Scripts directory). Don't use chocolatey to install
> > meson/ninja - I ran into issues doing that.
> >
> > AFAICT meson will use whatever version of VC you have installed,
> > although I have only been testing with VC2019.
> >
> > OK, that sounds easy enough then (famous last words!)
>
> [RMT hat]
>
> Dave -- does this mean you see a way forward on moving the Windows
> builds over to use Meson instead of MSVC?
>

I can see a way forward, yes.


>
> Do you think we'll have enough info by end of this week to make a
> decision on whether we can drop MSVC in v16?
>

There's no way I can test anything this week - I'm on leave for most of it
and AFK.

But, my point was more that there are almost certainly more projects using
the MSVC build system than the EDB installers; pgAdmin being just one
example.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Windows now has fdatasync()

2022-04-08 Thread Dave Page
On Fri, 8 Apr 2022 at 05:41, Tom Lane  wrote:

> Michael Paquier  writes:
> > On Fri, Apr 08, 2022 at 02:56:15PM +1200, Thomas Munro wrote:
> >> I propose that we drop support for Windows versions older than
> >> 10/Server 2016 in the PostgreSQL 16 cycle,
>
> Do we have any data on what people are actually using?
>

None that I know of. Anecdotally, we dropped support for pgAdmin on Windows
< 8 (2012 for the server edition), and had a single complaint - and the
user happily acknowledged they were on an old release and expected support
to be dropped sooner or later. Windows 8 was a pretty unpopular release, so
I would expect shifting to 10/2016+ for PG 16 would be unlikely to be a
major problem.

FWIW, Python dropped support for < 8/2012 with v3.9.


>
> > Do you think that we could raise the minimum C standard on WIN32 to
> > C11, at least for MSVC?
>
> As long as the C11-isms are in MSVC-only code, it seems like this is
> exactly equivalent to setting a minimum MSVC version.  I don't see
> an objection-in-principle there, it's just a practical question of
> how far back is reasonable to support MSVC versions.  (That's very
> distinct from how far back we need the built code to run.)
>
> regards, tom lane
>
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: faulty link

2022-02-10 Thread Dave Page
On Thu, 10 Feb 2022 at 15:53, Dagfinn Ilmari Mannsåker 
wrote:

> Tom Lane  writes:
>
> > Josef Šimánek  writes:
> >> On Thu, 10 Feb 2022 at 15:35, Erik Rijkers  wrote:
> >>> The provided link
> >>> https://www.postgresql.org/docs/release/
> >>> leads to
> >>> https://www.postgresql.org/docs/release/14.2/
> >>> which gives 'Not Found' for me (Netherlands)
> >
> >> Thinking about that again, the 14.2 release just happened. Could it be
> >> just a matter of propagating new release info to mirrors?
> >
> > The link works for me, too (USA).  Stale cache seems like a reasonable
> > explanation for the OP's problem --- maybe clearing browser cache
> > would help?
>
> I'm getting a 404 as well from London. After trying multiple times with
> curl I did get one 200 response, but it's mostly 404s.
>
> It looks like some of the mirrors have it, but not all:
>
> $ for h in $(dig +short -tA www.mirrors.postgresql.org); do echo -n "$h:
> "; curl -i -k -s -HHost:www.postgresql.org "https://$h/docs/release/14.2/";
> | grep ^HTTP; done
>  72.32.157.230: HTTP/2 200
>  87.238.57.232: HTTP/2 404
>  217.196.149.50: HTTP/2 200
>
> $ for h in $(dig +short -t www.mirrors.postgresql.org); do echo -n
> "$h: "; curl -i -k -s -HHost:www.postgresql.org 
> "https://[$h]/docs/release/14.2/";
> | grep ^HTTP; done
>  2001:4800:3e1:1::230: HTTP/2 200
>  2a02:c0:301:0:::32: HTTP/2 404
>  2a02:16a8:dc51::50: HTTP/2 200
>

Despite the name, they're not actually mirrors. They're varnish caches. By
the looks of it one of them cached a 404 (probably someone tried to access
the new page before it really did exist). I've purged /docs/release now,
and everything is returning 200.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Observability in Postgres

2022-02-15 Thread Dave Page
 rules; you could then confine
the monitoring traffic to a management VLAN for example.

- I strongly dislike the idea of building this around the
prometheus exporter format. Whilst that is certainly a useful format if
you're using prom (as many do), it does have limitations and quirks that
would make it painful for other systems to use; for example, the need to
encode non-numeric data into labels rather than the metrics themselves
(e.g. server version strings or LSNs). I would much prefer to see a common
format such as JSON used by default, and perhaps offer a hook to allow
alternate formatters to replace that. The prometheus format is also pretty
inefficient, as you have to repeat all the key data (labels) for each
individual metric.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Postgres auto vacuum - Disable

2022-11-07 Thread Dave Page
Hi

On Mon, 7 Nov 2022 at 11:42, Karthik Jagadish (kjagadis) 
wrote:

> Hi,
>
>
>
> We have an NMS application at Cisco and are using Postgres as the database.
>
>
>
> We have a query related to disabling autovacuum. We have the below
> configuration in postgres.conf, where the autovacuum = on line is commented out.
>
>
>
> [image: postgres.conf excerpt with the autovacuum setting commented out]
>
>
>
> But when we check in the database, we notice that it's showing as on:
>
>
>
> [image: query output showing autovacuum reported as 'on']
>
>
>
> What would this mean? Does it mean that autovacuum is not disabled?
> Appreciate a response.
>

Right. The default is for it to be enabled, so commenting out the option
does nothing. You would need to set it explicitly to off.

BUT... you almost certainly don't want to do that. Cases where it should be
disabled are *extremely* rare. Make sure you *really* know what you're
letting yourself in for by disabling autovacuum, and don't rely on 10+ year
old performance tuning advice from random places on the internet, if that's
what you're doing.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-11-08 Thread Dave Page
On Tue, 8 Nov 2022 at 04:10, Michael Paquier  wrote:

> On Mon, Nov 07, 2022 at 04:54:07PM +0900, Michael Paquier wrote:
> > FWIW, all the other areas of pgstatfuncs.c manipulate timestamptz
> > fields with a style like the attached.  That's a nit, still per the
> > role of consistency with the surroundings..
> >
> > Anyway, it seems to me that a regression test is in order before a
> > scan happens just after the relation creation, and the same problem
> > shows up with last_idx_scan.
>
> Hearing nothing, done this way as of d7744d5.  Thanks for the report,
> Robert.  And thanks for the patch, Dave.
>

Thank you!

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: More sepgsql weirdness

2021-04-14 Thread Dave Page
Hi

On Tue, Apr 13, 2021 at 6:22 PM Robert Haas  wrote:

> On Tue, Apr 13, 2021 at 10:33 AM Dave Page  wrote:
> > On a system with selinux and sepgsql configured, search path resolution
> appears to fail if sepgsql is in enforcing mode, but selinux is in
> permissive mode (which, as I understand it, should cause sepgsql to behave
> as if it's in permissive mode anyway - and does for other operations).
> Regardless of whether my understanding of the interaction of the two
> permissive modes is correct, I don't believe the following should happen:
>
> I agree that this sounds like something which shouldn't happen if the
> system is in permissive mode,


I realised that my test database hadn't had the sepgsql SQL script run in
it (I must have created it before running it on template1). I guess the
error was caused by lack of proper labelling.

So, clearly my fault, but I think there are a couple of things we need to
do here:

1) Improve the docs for sepgsql. The *only* vaguely useful source of info
I've found on using this is "SELinux System Administration", a Packt book
by Sven Vermeulen. Our own docs don't even list the supported object
classes (e.g. db_table) or types (e.g. sepgsql_ro_table_t) for example.

2) Improve the way we handle cases like the one I ran into. I only realised
what was going on when I tried to run sepgsql_getcon() to confirm I was
running in undefined_t. Clearly very weird things can happen if labelling
hasn't been run; perhaps we could raise a notice if the sepgsql module is
loaded but sepgsql_getcon() isn't present (though that seems flakey at
best)? I'd hesitate to try to check for the presence of one or more labels
as the admin could have intentionally removed them or changed them of
course.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: sepgsql logging

2021-04-14 Thread Dave Page
Hi

On Thu, Apr 1, 2021 at 3:30 PM Dave Page  wrote:

>
>
> On Thu, Apr 1, 2021 at 3:23 PM Tom Lane  wrote:
>
>> Andrew Dunstan  writes:
>> > On 4/1/21 8:32 AM, Dave Page wrote:
>> >> It seems to me that sepgsql should also log the denial, but flag that
>> >> permissive mode is on.
>>
>> > +1 for doing what selinux does if possible.
>>
>> +1.  If selinux itself is doing that, it's hard to see a reason why
>> we should not; and I concur that the info is useful.
>>
>
> Thanks both. I'll take a look at the code and see if I can whip up a patch
> (it'll be a week or so as I'm taking some time off for Easter).
>

Attached is a patch to clean this up. It will log denials as such
regardless of whether or not either selinux or sepgsql is in permissive
mode. When either is in permissive mode, it'll add " permissive=1" to the
end of the log messages. e.g.

Regular user in permissive mode, with a restricted table column:

2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_table
name="public.tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column uid of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column name of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column mail of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column address of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column salt of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: denied { select }
scontext=user_u:user_r:user_t:s0
tcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column
name="column phash of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;

The same user/table, but in enforcing mode:

2021-04-14 13:17:21.645 BST [22974] LOG:  SELinux: allowed { search }
scontext=user_u:user_r:user_t:s0
tcontext=system_u:object_r:sepgsql_schema_t:s0 tclass=db_schema
name="public" at character 15
2021-04-14 13:17:21.645 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_table
name="public.tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column uid of table tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column name of table tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column mail of table tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column address of table tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }

Re: Windows default locale vs initdb

2021-04-19 Thread Dave Page
On Mon, Apr 19, 2021 at 11:52 AM Andrew Dunstan  wrote:

>
> My understanding from Microsoft staff at conferences is that Azure's
> PostgreSQL SAS runs on  linux, not WIndows.
>

This is from a regular Azure Database for PostgreSQL single server:

postgres=> select version();
  version
-------------------------------------------------------------
 PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit
(1 row)

And this is from the new Flexible Server preview:

postgres=> select version();
                                                      version
--------------------------------------------------------------------------------------------------------
 PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 64-bit
(1 row)

So I guess it's a case of "it depends".

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: PATCH: Add GSSAPI ccache_name option to libpq

2021-04-20 Thread Dave Page
Hi

On Tue, Apr 20, 2021 at 10:37 AM Daniel Carter <
danielchriscarter+postg...@gmail.com> wrote:

> Hi,
>
> This is a small patch (against master) to allow an application using
> libpq with GSSAPI authentication to specify where to fetch the
> credential cache from -- it effectively consists of a new field in
> PQconninfoOptions to store this data and (where the user has specified a
> ccache location) a call into the gss_krb5_ccache_name function in the
> GSSAPI library.
>

The pgAdmin team would love to have this feature. It would greatly simplify
management of multiple connections from different users.
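
As a rough sketch of how an application might use it (the ccache_name
keyword here is the option proposed in this patch, not an existing libpq
parameter, and the host and cache path are made up):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Point libpq at a per-user credential cache for this connection,
     * rather than relying on a process-wide environment variable. */
    PGconn *conn = PQconnectdb(
        "host=db.example.com dbname=appdb user=alice "
        "gssencmode=prefer "
        "ccache_name=FILE:/var/run/myapp/krb5cc_alice");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

    PQfinish(conn);
    return 0;
}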


>
> It's my first go at submitting a patch -- it works as far as I can tell,
> but I suspect there will probably still be stuff to fix before it's
> ready to use!
>
> As far as I'm concerned this is working (the code compiles successfully
> following "./configure --with-gssapi --enable-cassert", and seems to
> work for specifying the ccache location without any noticeable errors).
>
> I hope there shouldn't be anything platform-specific here (I've been
> working on Ubuntu Linux but the only interactions with external
> applications are via the GSSAPI library, which was already in use).
>
> The dispsize value for ccache_name is 64 in this code (which seems to be
> what's used with other file-path-like parameters in the existing code)
> but I'm happy to have this corrected if it needs a different value -- as
> far as I can tell this is just for display purposes rather than anything
> critical in terms of actually storing the value?
>
> If no ccache_name is specified in the connection string then it defaults
> to NULL, which means the gss_krb5_ccache_name call is not made and the
> current behaviour (of letting the GSSAPI library work out the location
> of the ccache) is not changed.
>
> Many thanks,
> Daniel
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: PATCH: Add GSSAPI ccache_name option to libpq

2021-04-21 Thread Dave Page
Hi

On Tue, Apr 20, 2021 at 8:44 PM Daniel Carter <
danielchriscarter+postg...@gmail.com> wrote:

> Hi Stephen,
>
> On 20/04/2021 20:01, Stephen Frost wrote:
> > I'm not necessarily against this, but typically the GSSAPI library
> > provides a way for you to control this using, eg, the KRB5_CCACHE
> > environment variable.  Is there some reason why that couldn't be used..?
>
> The original motivation for investigating this was setting up a web app
> which could authenticate to a database server using a Kerberos ticket.
> Since the web framework already needs to create a connection string
> (with database name etc.) to set up the database connection, having an
> option here for the ccache location makes it much more straightforward
> to specify than having to save data out to environment variables (and
> makes things cleaner if there are potentially multiple database
> connections going on at once in different processes).
>

Yes, that's why we'd like it for pgAdmin. When dealing with a
multi-threaded application it becomes a pain keeping credentials for
different users separated; a lot more mucking about with mutexes etc. If we
could specify the credential cache location in the connection string, it
would be much easier (and likely more performant) to securely keep
individual caches for each user.


>
> There may well be a better way of going about this -- it's just that I
> can't currently see an obvious way to get this kind of setup working
> using only the environment variable.
>
> Many thanks,
> Daniel
>
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: PATCH: Add GSSAPI ccache_name option to libpq

2021-04-22 Thread Dave Page
On Thu, Apr 22, 2021 at 1:55 AM Stephen Frost  wrote:

> Greetings,
>
> * Daniel Carter (danielchriscarter+postg...@gmail.com) wrote:
> > On 21/04/2021 18:40, Stephen Frost wrote:
> > >I surely hope that the intent here is to use Negotiate / SPNEGO to
> > >authenticate the user who is connecting to the webserver and then have
> > >credentials delegated (ideally through constrained credential
> > >delegation..) to the web server by the user for the web application to
> > >use to connect to the PG server.
> > >
> > >I certainly don't think we should be targetting a solution where the
> > >application is acquiring credentials from the KDC directly using a
> > >user's username/password, that's very strongly discouraged for the very
> > >good reason that it means the user's password is being passed around.
> >
> > Indeed -- that's certainly not the intended aim of this patch!
>
> Glad to hear that. :)
>
> > >>There may well be a better way of going about this -- it's just that I
> can't
> > >>currently see an obvious way to get this kind of setup working using
> only
> > >>the environment variable.
> > >
> > >Perhaps you could provide a bit more information about what you're
> > >specifically doing here?  Again, with something like apache's
> > >mod_auth_gssapi, it's a matter of just installing that module and then
> > >the user will be authenticated by the web server itself, including
> > >managing of delegated credentials, setting of the environment variables,
> > >and the web application shouldn't have to do anything but use libpq to
> > >request a connection and if PG's configured with gssapi auth, it'll all
> > >'just work'.  Only thing I can think of offhand is that you might have
> > >to take AUTH_USER and pass that to libpq as the user's username to
> > >connect with and maybe get from the user what database to request the
> > >connection to..
> >
> > Hmm, yes -- something like that is definitely a neater way of doing
> things
> > in the web app scenario (I'd been working on the principle that the
> username
> > and credential cache were "provided" from the same place, i.e. the web
> app,
> > but as you point out that's not actually necessary).
>
> Yeah, that's really how web apps should be doing this.
>
> > However, it seems like there might be some interest in this for other
> > scenarios (e.g. with relation to multi-threaded applications where more
> > precise control of which thread uses which credential cache is useful),
> so
> > possibly this may still be worth continuing with even if it has a
> slightly
> > different intended purpose to what was originally planned?
>
> I'd want to hear the actual use-case rather than just hand-waving that
> "oh, this might be useful for this threaded app that might exist some
> day"...
>

I thought I gave that precise use case upthread. As you know, we've been
adding Kerberos support to pgAdmin. When running in server mode, we have
multiple users logging into a single instance of the application, and we
need to cache credentials for them to be used to log in to the PostgreSQL
servers via the libpq that is on the pgAdmin server. For obvious reasons, we
want to use separate credential caches for each pgAdmin user, and currently
that means having a mutex around every use of the caches, so we can be sure
we're safely manipulating the environment, using the correct cache, and
then continuing as normal once we're done.
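
For illustration, a minimal sketch of that workaround (in Python, with a
purely hypothetical cache layout and driver - not pgAdmin's actual code)
looks something like this:

import os
import threading

import psycopg2  # any libpq-based driver that authenticates via GSSAPI

# KRB5CCNAME is global to the process, but must differ per pgAdmin user,
# so every use of it has to be serialised.
_ccache_lock = threading.Lock()

def connect_for_user(user_id, dsn):
    # Hypothetical per-user cache layout; the real paths are an assumption.
    ccache = "FILE:/var/lib/pgadmin/krb5cc/%s" % user_id
    with _ccache_lock:
        saved = os.environ.get("KRB5CCNAME")
        os.environ["KRB5CCNAME"] = ccache
        try:
            # The Kerberos library underneath libpq reads KRB5CCNAME when
            # GSSAPI authentication happens at connection start-up.
            return psycopg2.connect(dsn)
        finally:
            if saved is None:
                os.environ.pop("KRB5CCNAME", None)
            else:
                os.environ["KRB5CCNAME"] = saved

A connection option like the proposed ccache_name would let us point each
connection at the right cache directly, and drop the lock and the
environment juggling entirely.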

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX.

2021-05-20 Thread Dave Page
On Tue, Mar 30, 2021 at 6:58 AM Tom Lane  wrote:

> Thomas Munro  writes:
> > I'll move it when committing.  I'll let this patch sit for another day
> > to see if any other objections show up.
>
> FWIW, I remain fairly strongly against this, precisely because of the
> point that it requires us to start using a randomly different
> feature-probing technology anytime Apple decides that they're going to
> implement some standard API that they didn't before.  Even if it works
> everywhere for preadv/pwritev (which we won't know in advance of
> buildfarm testing, and maybe not then, since detection failures will
> probably be silent), it seems likely that we'll hit some case in the
> future where this interacts badly with some other platform's weirdness.
> We haven't claimed in the past to support MACOSX_DEPLOYMENT_TARGET,
> and I'm not sure we should start now.  How many people actually care
> about that?
>

I missed this earlier - it's come to my attention through a thread on the
-packagers list. Adding my response on that thread here for this audience:

The ability to target older releases with a newer SDK is essential for
packages such as the EDB PostgreSQL installers and the pgAdmin community
installers. It's very difficult (sometimes impossible) to get older OS
versions on new machines now - Apple make it very hard to download old
versions of macOS (some can be found, others not), and they won't always
work on newer hardware anyway so it's really not feasible to have all the
build machines running the oldest version that needs to be supported.

FYI, the pgAdmin and PG installer buildfarms have
-mmacosx-version-min=10.12 in CFLAGS etc. to handle this, which is
synonymous with MACOSX_DEPLOYMENT_TARGET. We've been successfully building
packages that way for a decade or more.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-10-03 Thread Dave Page
Hi

On Fri, 30 Sept 2022 at 18:58, Andres Freund  wrote:

> Hi,
>
> On 2022-09-30 17:58:31 +0200, Vik Fearing wrote:
> > On 9/7/22 12:03, Dave Page wrote:
> > > Here's a v4 patch. This reverts to using
> > > GetCurrentTransactionStopTimestamp() for the last_scan times, and will
> > > set xactStopTimestamp the first time
> GetCurrentTransactionStopTimestamp()
> > > is called, thus avoiding multiple gettimeofday() calls.
> > > SetCurrentTransactionStopTimestamp() is removed, as is use
> > > of xactStopTimestamp (except when resetting it to 0).
> >
> > This patch looks good to me and has much saner behavior than what it
> > replaces.
>
> I agree. However, it seems like a significant enough behavioural change
> that
> I'd rather commit it as a separate patch.  I agree with Vik's judgement
> that
> the patch otherwise is otherwise ready. Happy to do that split myself, or
> you
> can do it...
>

Thanks. It's just the changes in xact.c, so it doesn't seem like it would
cause you any more work either way, in which case, I'll leave it to you :-)

FYI, the OID I chose was simply the closest single value to those used for
the other related functions (e.g. pg_stat_get_numscans). Seemed like a good
way to use up one more random unused value, but I don't care if it gets
changed to the 8000+ range.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-10-12 Thread Dave Page
On Wed, 12 Oct 2022 at 07:40, Michael Paquier  wrote:

> On Mon, Oct 03, 2022 at 12:55:40PM +0100, Dave Page wrote:
> > Thanks. It's just the changes in xact.c, so it doesn't seem like it would
> > cause you any more work either way, in which case, I'll leave it to you
> :-)
>
> Okay, I have just moved the patch to the next CF then, still marked as
> ready for committer.  Are you planning to look at that?
>

Thanks. Was the question directed at me or Andres?

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-10-13 Thread Dave Page
Hi

On Wed, 12 Oct 2022 at 23:52, Andres Freund  wrote:

> Hi,
>
> On 2022-10-12 12:50:31 -0700, Andres Freund wrote:
> > I think this should have at a basic test in
> src/test/regress/sql/stats.sql. If
> > I can write one in a few minutes I'll go for that, otherwise will reply
> > detailing difficulties.
>
> Took a bit longer (+lunch). Attached.
>
>
> In the attached 0001, the patch to make
> GetCurrentTransactionStopTimestamp()
> set xactStopTimestamp, I added a few comment updates and an Assert() to
> ensure
> that CurrentTransactionState->state is
> TRANS_(DEFAULT|COMMIT|ABORT|PREPARE). I
> am worried that otherwise we might end up with someone ending up using it
> in a
> place before the end of the transaction, which'd then end up recording the
> wrong timestamp in the commit/abort record.
>
>
> For 0002, the commit adding lastscan, I added catversion/stats version
> bumps
> (because I was planning to commit it already...), a commit message, and
> that
> minor docs change mentioned earlier.
>
>
> 0003 adds the tests mentioned above. I plan to merge them with 0002, but
> left
> them separate for easier review for now.
>
> To be able to compare timestamps for > not just >= we need to make sure
> that
> two subsequent timestamps differ. The attached achieves this by sleeping
> for
> 100ms between those points - we do that in other places already. I'd
> started
> out with 10ms, which I am fairly sure would suffice, but then deciced to
> copy
> the existing 100ms sleeps.
>
> I verified tests pass under valgrind, debug_discard_caches and after I make
> pgstat_report_stat() only flush when force is passed in.
>

Thanks for that. It looks good to me, bar one comment (repeated 3 times in
the sql and expected files):

fetch timestamps from before the next test

"from " should be removed.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-10-14 Thread Dave Page
On Fri, 14 Oct 2022 at 19:16, Andres Freund  wrote:

> Hi,
>
> On 2022-10-13 14:38:06 +0100, Dave Page wrote:
> > Thanks for that. It looks good to me, bar one comment (repeated 3 times
> in
> > the sql and expected files):
> >
> > fetch timestamps from before the next test
> >
> > "from " should be removed.
>
> I was trying to say something with that from, but clearly it wasn't
> understandable :). Removed.
>
> With that I pushed the changes and marked the CF entry as committed.


Thanks!


> --
-- 
Dave Page
https://pgsnake.blogspot.com

EDB Postgres
https://www.enterprisedb.com


Re: Tracking last scan time

2022-10-31 Thread Dave Page
FYI, this is not intentional, and I do plan to look into it, however I've
been somewhat busy with pgconfeu, and am travelling for the rest of this
week as well.

On Sun, 23 Oct 2022 at 21:09, Robert Treat  wrote:

> On Fri, Oct 14, 2022 at 2:55 PM Dave Page  wrote:
> > On Fri, 14 Oct 2022 at 19:16, Andres Freund  wrote:
> >> On 2022-10-13 14:38:06 +0100, Dave Page wrote:
> >> > Thanks for that. It looks good to me, bar one comment (repeated 3
> times in
> >> > the sql and expected files):
> >> >
> >> > fetch timestamps from before the next test
> >> >
> >> > "from " should be removed.
> >>
> >> I was trying to say something with that from, but clearly it wasn't
> >> understandable :). Removed.
> >>
> >> With that I pushed the changes and marked the CF entry as committed.
> >
> >
> > Thanks!
> >
>
> Hey folks,
>
> I was looking at this a bit further (great addition btw) and noticed
> the following behavior (this is a mre of the original testing that
> uncovered this):
>
> pagila=# select * from pg_stat_user_tables ;
>  relid | schemaname | relname | seq_scan | last_seq_scan |
> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
> autovacuum_count | analyze_count | autoanalyze_count
>
> ---++-+--+---+--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
> (0 rows)
>
> pagila=# create table x (xx int);
> CREATE TABLE
> Time: 2.145 ms
> pagila=# select * from pg_stat_user_tables ;
>  relid | schemaname | relname | seq_scan | last_seq_scan |
> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
> autovacuum_count | analyze_count | autoanalyze_count
>
> ---++-+--+---+--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>  16392 | public | x   |0 | [null]|
> 0 |   [null] | [null]|[null] | 0 | 0 |
> 0 | 0 |  0 |  0 |
>  0 |  0 | [null]  | [null]  | [null]
> | [null]   |0 |0 | 0 |
> 0
> (1 row)
>
> pagila=# insert into x select 1;
> INSERT 0 1
> pagila=# select * from pg_stat_user_tables ;
>  relid | schemaname | relname | seq_scan | last_seq_scan  |
> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
> autovacuum_count | analyze_count | autoanalyze_count
>
> ---++-+--++--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>  16392 | public | x   |0 | 1999-12-31 19:00:00-05 |
> 0 |   [null] | [null]|[null] | 1 |
> 0 | 0 | 0 |  1 |  0 |
>  1 |  1 | [null]  | [null]  |
> [null]   | [null]   |0 |0 |
>  0 | 0
> (1 row)
>
> Normally we populate "last" columns with a NULL value when the
> corresponding marker is zero, which seems correct in the first query,
> but no longer matches in the second. I can see an argument that this
> is a necessary exception to that rule (I'm not sure I agree with it,
> but I see it) but even in

Re: Tracking last scan time

2022-11-03 Thread Dave Page
On Mon, 31 Oct 2022 at 07:36, Dave Page  wrote:

> FYI, this is not intentional, and I do plan to look into it, however I've
> been somewhat busy with pgconfeu, and am travelling for the rest of this
> week as well.
>

Here's a patch to fix this issue. Many thanks to Peter Eisentraut who
figured it out in a few minutes after I spent far too long looking down
rabbit holes in entirely the wrong place.

Thanks for the bug report.


>
> On Sun, 23 Oct 2022 at 21:09, Robert Treat  wrote:
>
>> On Fri, Oct 14, 2022 at 2:55 PM Dave Page  wrote:
>> > On Fri, 14 Oct 2022 at 19:16, Andres Freund  wrote:
>> >> On 2022-10-13 14:38:06 +0100, Dave Page wrote:
>> >> > Thanks for that. It looks good to me, bar one comment (repeated 3
>> times in
>> >> > the sql and expected files):
>> >> >
>> >> > fetch timestamps from before the next test
>> >> >
>> >> > "from " should be removed.
>> >>
>> >> I was trying to say something with that from, but clearly it wasn't
>> >> understandable :). Removed.
>> >>
>> >> With that I pushed the changes and marked the CF entry as committed.
>> >
>> >
>> > Thanks!
>> >
>>
>> Hey folks,
>>
>> I was looking at this a bit further (great addition btw) and noticed
>> the following behavior (this is a mre of the original testing that
>> uncovered this):
>>
>> pagila=# select * from pg_stat_user_tables ;
>>  relid | schemaname | relname | seq_scan | last_seq_scan |
>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
>> autovacuum_count | analyze_count | autoanalyze_count
>>
>> ---++-+--+---+--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>> (0 rows)
>>
>> pagila=# create table x (xx int);
>> CREATE TABLE
>> Time: 2.145 ms
>> pagila=# select * from pg_stat_user_tables ;
>>  relid | schemaname | relname | seq_scan | last_seq_scan |
>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
>> autovacuum_count | analyze_count | autoanalyze_count
>>
>> ---++-+--+---+--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>>  16392 | public | x   |0 | [null]|
>> 0 |   [null] | [null]|[null] | 0 | 0 |
>> 0 | 0 |  0 |  0 |
>>  0 |  0 | [null]  | [null]  | [null]
>> | [null]   |0 |0 | 0 |
>> 0
>> (1 row)
>>
>> pagila=# insert into x select 1;
>> INSERT 0 1
>> pagila=# select * from pg_stat_user_tables ;
>>  relid | schemaname | relname | seq_scan | last_seq_scan  |
>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
>> autovacuum_count | analyze_count | autoanalyze_count
>>
>> ---++-+--++--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>>  16392 | public | x   |0 | 1999-12-31 19:00:00-05 |
>> 0 |   [null] | [null]|[null] | 1 |
>> 0 |   

Re: "an SQL" vs. "a SQL"

2021-06-10 Thread Dave Page
On Thu, Jun 10, 2021 at 9:31 AM Peter Eisentraut <
peter.eisentr...@enterprisedb.com> wrote:

> On 10.06.21 09:26, David Rowley wrote:
> > It seems we have no standard as to if we say "a SQL" or "an SQL".
>
> The SQL standard uses "an SQL-something".
>

I use both commonly, but the argument for "an S-Q-L ..." is strong I think
- and I definitely think consistency is good.


>
> > However, we mostly use "an SQL"  in the docs.
> >
> > ~/pg_src$ cd doc/
> > ~/pg_src/doc$ git grep -E "\s(a|A)\sSQL\s" | wc -l
> > 55
> > ~/pg_src/doc$ git grep -E "\s(an|An)\sSQL\s" | wc -l
> > 94
> >
> > I think we should change all 55 instances of "a SQL" in the docs to
> > use "an SQL" and leave the 800 other instances of "a SQL" alone.
> > Changing those does not seem worthwhile as it could cause
> > back-patching pain.
>
> agreed
>

+1 in general, though I would perhaps suggest extending to any user-visible
messages in the code. I don't think there's any point in messing with
comments etc. I'm not sure what that would do to the numbers though.


>
> > Further, there might be a few more in the docs that we might want to
> > consider changing:
> >
> > git grep -E "\sa\s(A|E|F|H|I|L|M|N|O|S|X)[A-Z]{2,5}\s"
> >
> > I see "a FSM", "a FIFO", "a SSPI", "a SASL", "a MCV", "a SHA", "a SQLDA"
> >
> > My regex foo is not strong enough to think how I might find multiline
> instances.
>
> Um, of those, I pronounce FIFO, SASL, and SHA as words, with an "a"
> article.
>

Same here. I've never heard anyone try to pronounce SSPI, so I would expect
that to be "an SSPI ...". The other remaining ones (FSM, MCV & SQLDA) I
would also argue aren't pronounceable, so should use the "an" article.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
Hi

On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi 
wrote:

> (Sorry in advance if this is off-topic of -hackers, and please head me
> to the right place if so.)
>
> I'm stuck by connection failure to gitmaster.
>
> I told that I already have the commit-bit on pgtranslation repository
> for the community account "horiguti".
>
> I did the following steps.
>
> 1. Add the public key for git-access to "SSH Key" field of "Edit User
>Profile" page.(https://www.postgresql.org/account/profile/) I did
>this more than few months ago.
>
> 2. Clone ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git.
>

The correct repo is ssh://g...@git.postgresql.org/pgtranslation/messages.git.


>
> The problem for me here is I get "Permission denied" by the second
> step.
>
> The following is an extract of verbose log when I did:
>
> > GIT_SSH_COMMAND="ssh -" git clone ssh://
> g...@gitmaster.postgresql.org/pgtranslation/messages.git
>
> debug1: Authenticating to gitmaster.postgresql.org:22 as 'git'
> debug1: Offering public key: /home/horiguti/.ssh/postgresql ECDSA
> SHA256:zMOonb8...
> debug3: send packet: type 50
> debug2: we sent a publickey packet, wait for reply
> debug3: receive packet: type 51
>
> The account and host looks correct. The server returns 51
> (SSH_MSG_USERAUTH_FAILURE), which means the server didn't find my
> public key, but the fingerprint shown above coincides with that of the
> registered public key. I don't have a clue of the reason from my side.
>
> Please someone tell me what to do to get over the situation.
>
> regards.
>
> --
> Kyotaro Horiguchi
> NTT Open Source Software Center
>
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
Hi

On Wed, 11 May 2022 at 08:55, Kyotaro Horiguchi 
wrote:

> At Wed, 11 May 2022 08:46:40 +0100, Dave Page  wrote
> in
> > Hi
> >
> > On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi  >
> > wrote:
> > > 2. Clone ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git
> .
> > >
> >
> > The correct repo is ssh://
> g...@git.postgresql.org/pgtranslation/messages.git.
>
> Thanks for the reply.  I didn't wrote, but I have tried that and had
> the same result.
>

What is your community user ID?

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
Hi

On Wed, 11 May 2022 at 09:34, Tatsuo Ishii  wrote:

> > Hi
> >
> > On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi  >
> > wrote:
> >
> >> (Sorry in advance if this is off-topic of -hackers, and please head me
> >> to the right place if so.)
> >>
> >> I'm stuck by connection failure to gitmaster.
> >>
> >> I told that I already have the commit-bit on pgtranslation repository
> >> for the community account "horiguti".
> >>
> >> I did the following steps.
> >>
> >> 1. Add the public key for git-access to "SSH Key" field of "Edit User
> >>Profile" page.(https://www.postgresql.org/account/profile/) I did
> >>this more than few months ago.
> >>
> >> 2. Clone ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git.
> >>
> >
> > The correct repo is ssh://
> g...@git.postgresql.org/pgtranslation/messages.git.
>
> This does not work for me neither. However, in my case following works:
>
> ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git


If that works, then colour me confused because:

gemulon:~# host gitmaster.postgresql.org
gitmaster.postgresql.org is an alias for gemulon.postgresql.org.
gemulon.postgresql.org has address 72.32.157.198
gemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198
gemulon:~# find / -name pgtranslation
gemulon:~# find / -name messages.git
gemulon:~# ls -al /home/git/repositories/
total 16
drwxr-xr-x 4 git git 4096 Jan  4  2020 .
drwxr-xr-x 8 git git 4096 May 11 09:03 ..
drwxr-xr-x 7 git git 4096 Jan  4  2020 mhatest.git
drwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.git
gemulon:~#


>
>
> Also Tom Lane said:
> On Sun, May 1, 2022 at 4:52 PM Tom Lane  wrote:
> > Tatsuo Ishii  writes:
> > > This is ok:
> > > git clone ssh://g...@gitmaster.postgresql.org/postgresql.git
> >
> > That's the thing to use if you're a committer.
>
> Best reagards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>


-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
On Wed, 11 May 2022 at 09:25, Kyotaro Horiguchi 
wrote:

> At Wed, 11 May 2022 09:08:26 +0100, Dave Page  wrote
> in
> > What is your community user ID?
>
> My community user name is "horiguti".
>

OK, so you have write access on the repo on git.postgresql.org, but I can't
find an SSH key for your account on the system. Can you check
https://www.postgresql.org/account/profile/ and make sure you've got the
correct SSH key in your profile? If you add one, it might take 10 minutes
or so to make its way to the git server.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
Hi

On Wed, 11 May 2022 at 13:56, Tatsuo Ishii  wrote:

> >> This does not work for me neither. However, in my case following works:
> >>
> >> ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git
> >
> >
> > If that works, then colour me confused because:
> >
> > gemulon:~# host gitmaster.postgresql.org
> > gitmaster.postgresql.org is an alias for gemulon.postgresql.org.
> > gemulon.postgresql.org has address 72.32.157.198
> > gemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198
> > gemulon:~# find / -name pgtranslation
> > gemulon:~# find / -name messages.git
> > gemulon:~# ls -al /home/git/repositories/
> > total 16
> > drwxr-xr-x 4 git git 4096 Jan  4  2020 .
> > drwxr-xr-x 8 git git 4096 May 11 09:03 ..
> > drwxr-xr-x 7 git git 4096 Jan  4  2020 mhatest.git
> > drwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.git
> > gemulon:~#
>
> Sorry, I meant ssh://g...@gitmaster.postgresql.org/postgresql.git
> works, but ssh://g...@git.postgresql.org/postgresql.git does not work
> for me.
>

That is expected; no one has write access to that repo (and we only include
SSH keys for users with write access).

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
On Thu, 12 May 2022 at 07:11, Kyotaro Horiguchi 
wrote:

> At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii 
> wrote in
> > >> Thank you for the info, but unfortunately it hasn't worked.
> > >> I'm going to try a slightly different steps..
> > >
> > > And finally I succeeded to clone from git.postgresql.org and to push a
> > > commit.


\o/


> >
> > Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...
>
> git.postgresql.org.  I still receive "Permission denied" from
> gitmaster.



Yes, gitmaster is completely irrelevant here. It is *only* used for
PostgreSQL itself, and only by PostgreSQL Committers.

The postgresql.git repo on git.postgresql.org is unique in that it is a
mirror of the real repository on gitmaster, and doesn’t have any committers
except for the account used to push commits from gitmaster. The third party
browser software doesn’t know anything about that, which is why it still
shows the ssh:// URL despite it not being usable by anyone.

Is there some reason you thought gitmaster was relevant here (some webpage
for example)? This is the third(?) time someone has been confused by gitmaster
recently, something both Magnus and I have been surprised by.
-- 
-- 
Dave Page
https://pgsnake.blogspot.com

EDB Postgres
https://www.enterprisedb.com


Re: gitmaster access

2022-05-12 Thread Dave Page
On Thu, 12 May 2022 at 08:03, Tatsuo Ishii  wrote:

> > At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <
> is...@sraoss.co.jp> wrote in
> >> >> Thank you for the info, but unfortunately it hasn't worked.
> >> >> I'm going to try a slightly different steps..
> >> >
> >> > And finally I succeeded to clone from git.postgresql.org and to push
> a
> >> > commit.
> >>
> >> Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...
> >
> > git.postgresql.org.  I still receive "Permission denied" from
> > gitmaster.
>
> Ok. I learned that only postgresql.git should be accessed from
> gitmaster.postgresql.org. All other repos should be accessed from
> git.postgresql.org.


That is correct for PostgreSQL Committers such as yourself. Anyone else can
*only* use git.postgresql.org


>
> BTW,
> > I'm going to try a slightly different steps..
>
> Can you please tell me What you actually did? I am afraid of facing
> similar problem if I want to add another committer to pgpool2 repo.
>
> Best reagards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>
-- 
-- 
Dave Page
https://pgsnake.blogspot.com

EDB Postgres
https://www.enterprisedb.com


Re: removing datlastsysoid

2022-05-16 Thread Dave Page
On Thu, 20 Jan 2022 at 14:03, Robert Haas  wrote:

> On Mon, Jan 17, 2022 at 3:43 PM Tom Lane  wrote:
> > +1.  Another reason to get rid of it is that it has nothing to do
> > with the system OID ranges defined in access/transam.h.
>
> Agreed. Thanks for looking. Committed.
>

So we just ran into this whilst updating pgAdmin to support PG15. How is
one supposed to figure out what the last system OID is now from an
arbitrary database? pgAdmin uses that value in well over 300 places in its
source.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: removing datlastsysoid

2022-05-16 Thread Dave Page
On Mon, 16 May 2022 at 15:06, David Steele  wrote:

> On 5/16/22 9:43 AM, Dave Page wrote:
> >
> >
> > On Thu, 20 Jan 2022 at 14:03, Robert Haas  > <mailto:robertmh...@gmail.com>> wrote:
> >
> > On Mon, Jan 17, 2022 at 3:43 PM Tom Lane  > <mailto:t...@sss.pgh.pa.us>> wrote:
> >  > +1.  Another reason to get rid of it is that it has nothing to do
> >  > with the system OID ranges defined in access/transam.h.
> >
> > Agreed. Thanks for looking. Committed.
> >
> >
> > So we just ran into this whilst updating pgAdmin to support PG15. How is
> > one supposed to figure out what the last system OID is now from an
> > arbitrary database? pgAdmin uses that value in well over 300 places in
> > its source.
>
> We ran into the same issue in pgBackRest. The old query that initdb used
> to generate these values is no good for PG15 since the template
> databases now have fixed low oids.
>
> Our solution was to use the constant:
>
> #define FirstNormalObjectId 16384
>
> And treat anything below that as a system oid. This constant has not
> changed in a very long time (if ever) but we added it to our list of
> constants to recheck with each release.
>

Yes, that seems reasonable. Changing that value would very likely break
pg_upgrade, I can imagine, so I suspect it'll stay as it is for a while
longer.
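
In pgAdmin terms, that boils down to something like the following - a rough
sketch, where the query and helper are illustrative rather than our actual
code:

import psycopg2

FIRST_NORMAL_OBJECT_ID = 16384  # FirstNormalObjectId from access/transam.h

def user_relations(conn):
    # Treat anything with an OID below the threshold as a system object,
    # instead of comparing against the removed datlastsysoid value.
    with conn.cursor() as cur:
        cur.execute(
            "SELECT c.oid, c.relname "
            "FROM pg_catalog.pg_class c "
            "WHERE c.oid >= %s "
            "ORDER BY c.relname",
            (FIRST_NORMAL_OBJECT_ID,),
        )
        return cur.fetchall()

conn = psycopg2.connect("dbname=postgres")
for oid, relname in user_relations(conn):
    print(oid, relname)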


>
> We used the initdb query to provide backward compatibility for older
> versions of pgbackrest using PG <= 14, but are using FirstNormalObjectId
> going forward.
>
> See
>
> https://github.com/pgbackrest/pgbackrest/commit/692fe496bdb5fa6dcffeb9f85b6188ceb1df707a
> for details.
>

 Thanks David!

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: pgsql: Default to hidden visibility for extension libraries where possi

2022-07-20 Thread Dave Page
On Wed, 20 Jul 2022 at 16:12, Tom Lane  wrote:

> Alvaro Herrera  writes:
> > On 2022-Jul-20, Tom Lane wrote:
> >> I'll try to do some research later today to identify anything else
> >> we need to mark in plpgsql.  I recall doing some work specifically
> >> creating functions for pldebugger's use, but I'll need to dig.
>
> > I suppose you're probably thinking of commit 53ef6c40f1e7; that didn't
> > expose functions directly, but through plpgsql_plugin_ptr.  Maybe that
> > one does need to be made PGDLLEXPORT, since currently it isn't.
>
> After some experimentation, it does not need to be marked: pldebugger
> gets at that via find_rendezvous_variable(), so there is no need for
> any explicit linkage at all between plpgsql.so and plugin_debugger.so.
>
> Along the way, I made a quick hack to get pldebugger to load into
> v15/HEAD.  It lacks #ifdef's which'd be needed so that it'd still
> compile against older branches, but perhaps this'll save someone
> some time.
>

Thanks Tom - I've pushed that patch with the relevant #ifdefs added.

-- 
Dave Page
PostgreSQL Core Team
http://www.postgresql.org/


Re: Proposal: Job Scheduler

2024-06-06 Thread Dave Page
On Thu, 6 Jun 2024 at 09:47, Laurenz Albe  wrote:

> On Thu, 2024-06-06 at 16:27 +0800, Wang Cheng wrote:
> > We are the PostgreSQL team in Tencent. We have recently developed a job
> scheduler
> > that runs inside the database to schedules and manages jobs similar to
> Oracle
> > DBMS_JOB package, and we would like to contribute this feature to the
> community.
> >
> > As far as we know, there are currently two open-sourced job scheduling
> extensions
> > for PostgreSQL: pg_cron (https://github.com/citusdata/pg_cron/) and
> pg_dbms_job
> > (https://github.com/MigOpsRepos/pg_dbms_job/tree/main). However, the
> cron-based
> > syntax is not easy to use and suffers some limitations like one-off
> commands.
> > The pg_dbms_job extension is difficult to manage and operate because it
> runs as
> > a standalone process .
>
> There is also pg_timetable:
> https://github.com/cybertec-postgresql/pg_timetable


And probably the oldest of them all, pgAgent:
https://www.pgadmin.org/docs/pgadmin4/8.7/pgagent.html


>
>
> > That's why we have developed the job scheduler that runs as a process
> inside the
> > database just like autovacuum.
> >
> > We can start to send the patch if this idea makes sense to the you.
>
> Perhaps your job scheduler is much better than all the existing ones.
> But what would be a compelling reason to keep it in the PostgreSQL source
> tree?
> With PostgreSQL's extensibility features, it should be possible to write
> your
> job scheduler as an extension and maintain it outside the PostgreSQL
> source.
>
> I am sure that the PostgreSQL community will be happy to use the extension
> if it is any good.
>

I agree. This is an area in which there are lots of options at the moment,
with compelling reasons to choose from various of them depending on your
needs.

It's this kind of choice that means it's unlikely we'd include any one
option in PostgreSQL, much like various other tools such as failover
managers or poolers.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Windows: openssl & gssapi dislike each other

2024-06-11 Thread Dave Page
Hi

On Sun, 9 Jun 2024 at 08:29, Imran Zaheer  wrote:

> Hi
>
> I am submitting two new patches. We can undefine the macro at two locations
>
> 1). As be-secure-openssl.c [1] was the actual
> file where the conflict happened so I undefined the macro here before
> the ssl includes. I changed the comment a little to make it understandable.
> I am also attaching the error generated with ninja build.
>
> OR
>
> 2). Right after the gssapi includes in libpq-be.h
>

Thank you for working on this. I can confirm the undef version compiles and
passes tests with 16.3.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Windows: openssl & gssapi dislike each other

2024-06-11 Thread Dave Page
On Tue, 11 Jun 2024 at 12:22, Andrew Dunstan  wrote:

>
> On 2024-06-11 Tu 05:19, Dave Page wrote:
>
> Hi
>
> On Sun, 9 Jun 2024 at 08:29, Imran Zaheer  wrote:
>
>> Hi
>>
>> I am submitting two new patches. We can undefine the macro at two
>> locations
>>
>> 1). As be-secure-openssl.c [1] was the actual
>> file where the conflict happened so I undefined the macro here before
>> the ssl includes. I changed the comment a little to make it
>> understandable.
>> I am also attaching the error generated with ninja build.
>>
>> OR
>>
>> 2). Right after the gssapi includes in libpq-be.h
>>
>
> Thank you for working on this. I can confirm the undef version compiles
> and passes tests with 16.3.
>
>
>
> Thanks for testing.
>
> I think I prefer approach 2, which should also allow us to remove the
> #undef in sslinfo.c so we only need to do this in one place.
>
OK, well that version compiles and passes tests as well :-)

Thanks!

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Meson far from ready on Windows

2024-06-18 Thread Dave Page
Hi

Further to my previous report [1] about zlib detection not working with
Meson on Windows, I found that detection is similarly, or entirely, broken for the
majority of other dependencies, none of which are tested on the buildfarm
as far as I can see.

For convenience, I've put together a number of Github actions [2] that show
how to build the various dependencies on Windows, in the most
standard/recommended way I can find for each. Another action combines these
into a single downloadable archive that people can test with, and another
one uses that archive to build PostgreSQL 12 through 16, all successfully.

You can see build logs, and download the various builds/artefacts from the
Github Workflow pages.

My next task was to extend that to support PostgreSQL 17 and beyond, which
is where I started to run into problems. I've attempted builds using Meson
with each of the dependencies defined in the old-style config.pl, both with
and without modifying the INCLUDE/LIBS envvars to include the directories
for the dependencies (as was found to work in the previous discussion re
zlib):

Will not successfully configure at all:

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dgssapi=enabled
build-gssapi

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dicu=enabled
build-icu

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dlibxml=enabled
build-libxml

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dlz4=enabled
build-lz4

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dnls=enabled
build-nls

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Duuid=ossp
build-uuid

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dzstd=enabled
build-zstd

Configured with modified LIBS/INCLUDE:

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dlibxslt=enabled
build-libxslt

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dssl=openssl
build-openssl

meson setup --auto-features=disabled
-Dextra_include_dirs=C:\build64\include
-Dextra_lib_dirs=C:\build64\lib;C:\build64\lib64 --wipe -Dzlib=enabled
build-zlib

I think it's important to note that Meson largely seems to want to use
pkgconfig and cmake to find dependencies. pkgconfig isn't really a thing on
Windows (it is available, but isn't commonly used), and even cmake would
typically rely on finding things in either known installation directories
or through lib/include vars. There really aren't standard directories like
/usr/lib or /usr/include as we find on unixes, or pkgconfig files for
everything.

For the EDB installers, the team has hand-crafted pkgconfig files for
everything, which is clearly not a proper solution.

I can provide logs and run tests if anyone wants me to do so. Testing so
far has been with the Ninja backend, in a VS2022 x86_64 native environment.

[1]
https://www.postgresql.org/message-id/CA+OCxozrPZx57ue8rmhq6CD1Jic5uqKh80=vtpzurskesn-...@mail.gmail.com
[2] https://github.com/dpage/winpgbuild/actions

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-18 Thread Dave Page
Hi

On Tue, 18 Jun 2024 at 15:38, Andres Freund  wrote:

> Hi,
>
> On 2024-06-18 14:53:53 +0100, Dave Page wrote:
> > My next task was to extend that to support PostgreSQL 17 and beyond,
> which
> > is where I started to run into problems. I've attempted builds using
> Meson
> > with each of the dependencies defined in the old-style config.pl, both
> with
> > and without modifying the INCLUDE/LIBS envvars to include the directories
> > for the dependencies (as was found to work in the previous discussion re
> > zlib):
> >
> > Will not successfully configure at all:
>
> Do you have logs for those failures?
>

Sure - https://developer.pgadmin.org/~dpage/build-logs.zip. Those are all
without any modifications to %LIB% or %INCLUDE%.


> I think it's important to note that Meson largely seems to want to use
> > pkgconfig and cmake to find dependencies. pkgconfig isn't really a thing
> on
> > Windows (it is available, but isn't commonly used), and even cmake would
> > typically rely on finding things in either known installation directories
> > or through lib/include vars.
>
> I am not really following what you mean with the cmake bit here?
>
> You can configure additional places to search for cmake files with
> meson setup --cmake-prefix-path=...
>

None of the dependencies include cmake files for distribution on Windows,
so there are no additional files to tell meson to search for. The same
applies to pkgconfig files, which is why the EDB team had to manually craft
them.


>
>
> > There really aren't standard directories like
> > /usr/lib or /usr/include as we find on unixes, or pkgconfig files for
> > everything.
>
> Yes, and?
>

And that's why we really need to be able to locate headers and libraries
easily by passing paths to meson, as we can't rely on pkgconfig, cmake, or
things being in some standard directory on Windows.

Thanks.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-19 Thread Dave Page
Kerberos libraries, as you'll get both 32 and 64 bit libs if
you follow their standard process for building on Windows so you still need
to have code to pick the right ones.


>
> FWIW, at least libzstd, libxml [3], lz4, zlib can generate cmake dependency
> files on windows in their upstream code.
>

In the case of zstd, it does not if you build with VC++, the Makefile, or
Meson, at least in my testing. It looks like it would if you built it
with cmake, but I couldn't get that to work in 10 minutes or so of messing
around. And that's a perfect example of what I'm bleating about - there are
often many ways of building things on Windows and there are definitely many
ways of getting things on Windows, and they're not all equal. We've either
got to be extremely prescriptive in our docs, telling people precisely what
they need to download for each dependency, or how to build it themselves in
the way that will work with PostgreSQL, or the build system needs to be
flexible enough to handle different dependency variations, as the old VC++
build system was.


>
> I'm *not* against adding "hardcoded" dependency lookup stuff for libraries
> where other approaches aren't feasible, I just don't think it's a good
> idea to
> add fragile stuff that will barely be tested, when not necessary.
>
> Greetings,
>
> Andres Freund
>
>
> [1] Here's a build of PG with the dependencies installed, builds
> https://cirrus-ci.com/task/4953968097361920
>
> [2] E.g.
>
> https://github.com/postgres/postgres/blob/REL_16_STABLE/src/tools/msvc/Mkvcbuild.pm#L600
>
> https://github.com/postgres/postgres/blob/REL_16_STABLE/src/tools/msvc/Solution.pm#L1039
>
> [3] Actually, at least your libxml build actually *did* include both .pc
> and
> cmake files. So just pointing to the relevant path would do the trick.
>


-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-21 Thread Dave Page
process for building on Windows so you still
> need
> > to have code to pick the right ones.
>
> vcpkg for one does provide .pc files for kerberos.
>

Yes - that's in the vcpkg repo. I suspect they're adding pc and cmake files
for a lot of things.


> > We've either got to be extremely prescriptive in our docs, telling people
> > precisely what they need to download for each dependency, or how to
> build it
> > themselves in the way that will work with PostgreSQL, or the build system
> > needs to be flexible enough to handle different dependency variations, as
> > the old VC++ build system was.
>
> I'm confused - the old build system wasn't flexible around this stuff *at
> all*. Everyone had to patch it to get dependencies to work, unless you
> chose
> exactly the right source to download from - which was often not documented
> or
> outdated.
>

As I noted above - as the "owner" of the official packages, I never did
despite using a variety of upstream sources.


>
> For example:
> -
> https://github.com/microsoft/vcpkg/blob/master/ports/libpq/windows/msbuild.patch


That one looks almost entirely related to making PostgreSQL itself fit into
vcpkg's view of the world. It's changing the installation footprint, and
pulling some paths from their own variables. If they're changing our
installation footprint, it's likely they're doing the same for other
packages.


>
> -
> https://github.com/conan-io/conan-center-index/blob/1b24f7c74994ec6573e322b7ae4111c10f620ffa/recipes/libpq/all/conanfile.py#L116-L160


Same for that one. It's making many of those changes for non-Windows
platforms as well.


>
> -
> https://github.com/conda-forge/postgresql-feedstock/tree/main/recipe/patches


That one is interesting. It fixes the same zlib and OpenSSL issues I
mentioned being the one fix I did myself, albeit by renaming libraries
originally, and later by actually following the upstream build instructions
correctly.

It also makes essentially the same fix for krb5 that I hacked into my
Github Action, but similarly that isn't actually needed at all if you
follow the documented krb5 build process, which produces 32 and 64 bit
binaries.

Additionally, it fixes a GSSAPI-related bug which I reported a week or
two back here and for which there is a patch waiting to be committed, and
replaces some setenv calls with _putenv_.

There are a couple more patches in there, but they're Linux related from a
quick glance.


In short, I don't really see anything in those examples that is a general
issue (aside from the bugs, of course).


-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-21 Thread Dave Page
Hi

On Fri, 21 Jun 2024 at 12:20, Dave Page  wrote:

>
> We/I *could* add cmake/pc file generation to that tool, which would make
> things work nicely with PostgreSQL 17 of course.
>

For giggles, I took a crack at doing that, manually creating .pc files for
everything I've been working with so far. It seems to work as expected,
except that unlike everything else libintl is detected entirely based on
whether the header and library can be found. I had to pass extra lib and
include dirs:

meson setup --wipe --pkg-config-path=C:\build64\lib\pkgconfig
-Dextra_include_dirs=C:\build64\include -Dextra_lib_dirs=C:\build64\lib
-Duuid=ossp build-auto

I'm assuming that's an oversight, given your previous comments?
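
In case it's useful to anyone else experimenting, the .pc files I
hand-crafted amount to little more than what a script like this would
produce - a rough sketch, where the prefix, version and library name are
assumptions that need to match whatever you actually built:

import os

PC_TEMPLATE = """prefix={prefix}
libdir=${{prefix}}/lib
includedir=${{prefix}}/include

Name: {name}
Description: {name} (hand-generated .pc file for a Windows build)
Version: {version}
Libs: -L${{libdir}} -l{libname}
Cflags: -I${{includedir}}
"""

def write_pc(prefix, name, version, libname):
    # Drop the file where --pkg-config-path will look for it.
    pc_dir = os.path.join(prefix, "lib", "pkgconfig")
    os.makedirs(pc_dir, exist_ok=True)
    pc_path = os.path.join(pc_dir, name + ".pc")
    with open(pc_path, "w") as f:
        f.write(PC_TEMPLATE.format(prefix=prefix.replace("\\", "/"),
                                   name=name, version=version,
                                   libname=libname))
    return pc_path

# e.g. write_pc(r"C:\build64", "zlib", "1.3.1", "zlib"), then point meson
# at it with --pkg-config-path=C:\build64\lib\pkgconfig as above.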

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-21 Thread Dave Page
On Fri, 21 Jun 2024 at 16:15, Robert Haas  wrote:

> On Fri, Jun 21, 2024 at 7:21 AM Dave Page  wrote:
> > My assumption all along was that Meson would replace autoconf etc.
> before anything happened with MSVC, precisely because that's the type of
> environment all the Postgres devs work in primarily. Instead we seem to
> have taken what I think is a flawed approach of entirely replacing the
> build system on the platform none of the devs use, whilst leaving the new
> system as an experimental option on the platforms it will have had orders
> of magnitude more testing.
> >
> > What makes it worse, is that I don't believe anyone was warned about
> such a drastic change. Packagers were told about the git archive changes to
> the tarball generation, but that's it (and we've said before, we can't
> expect packagers to follow all the activity on -hackers).
>
> I agree that we should have given a heads up to pgsql-packagers. The
> fact that that wasn't done is a mistake, and inconsiderate. At the
> same time, I don't quite know who should have done that exactly when.
> Note that, while I believe Andres is on pgsql-packagers, many
> committers are not, and we have no written guidelines anywhere for
> what kinds of changes require notifying pgsql-packagers.
>
> Previous threads on this issue:
>
> https://postgr.es/m/zqzp_vmjcerm1...@paquier.xyz
> http://postgr.es/m/20230408191007.7lysd42euafwl...@awork3.anarazel.de
>
> Note that in the second of these threads, which contemplated removing
> MSVC for v16, I actually pointed out that if we went that way, we
> needed to notify pgsql-packagers ASAP. But, since we didn't do that,
> no email was ever sent to pgsql-packagers about this, or at least not
> that I can find.


That's what I was saying should have been done. I don't think there was a
requirement on Andres to tell them that they could use Meson instead.


> Still, MSVC support was removed more than six months
> ago, so even if somebody didn't see any of the pgsql-hackers
> discussion about this, there's been a fair amount of time (and a beta)
> for someone to notice that their build process isn't working any more.
> It seems a bit weird to me to start complaining about this now.
>

People noticed when they started prepping for beta1. Then there was a mad
rush to get things working under Meson in any way possible.


> As a practical matter, I don't think MSVC is coming back. The
> buildfarm was already changed over to use meson, and it would be
> pretty disruptive to try to re-add buildfarm coverage for a
> resurrected MSVC on the eve of beta2. I think we should focus on
> improving whatever isn't quite right in meson -- plenty of other
> people have also complained about various things there, me included --
> rather than trying to undo over a year's worth of work by lots of
> people to get things on Windows switched over to MSVC.
>

The buildfarm hasn't switched over - it had support added for Meson. If it
had been switched, then the older back branches would have gone red.

Anyway, that's immaterial - I know the old code isn't coming back now. My
motivation for this thread is to get Meson to a usable state on Windows - one
that doesn't require hacking stuff around for the casual builder moving
forwards - and at present, it requires *significantly* more hacking around
than it has in many years.

The design goals Andres spoke about would clearly be a technical
improvement to PostgreSQL; however, as we're finding, they rely on the
upstream dependencies shipping pkgconfig or cmake files, which either
doesn't happen at present, or only happens if you build in a certain way or
download from some source that has added them. I'm not
sure how to fix that without re-introducing the old hacks in the build
system, or extending my side project to add .pc files to all the
dependencies it builds. I will almost certainly do that, as it'll give
folks a single place where they can download everything they need, and
provide a reference on how everything can be built if they want to do it
themselves, but on the other hand, it's far from an ideal solution and I'd
much prefer if I didn't need to do that at all.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-24 Thread Dave Page
Building all of PostgreSQL's dependencies on
Windows requires multiple different toolchains and environments and is not
trivial to set up. That's largely why I started working on
https://github.com/dpage/winpgbuild.

Thanks!


>
> Happy to hear your thoughts. I think if our goal is to enable more
> people to work on Postgres, we should probably add subproject wraps to
> the source tree, but we also need to be forgiving like in the Meson DSL
> snippet above.
>
> Let me know your thoughts!
>
> [0]: https://mesonbuild.com/Wrap-dependency-system-manual.html
> [1]:
> https://github.com/hse-project/hse/blob/6d5207f88044a3bd9b3539260074395317e276d5/meson.build#L239-L275
> [2]:
> https://github.com/hse-project/hse/blob/6d5207f88044a3bd9b3539260074395317e276d5/subprojects/packagefiles/libbsd/meson.build
>
> --
> Tristan Partin
> https://tristan.partin.io
>


-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-24 Thread Dave Page
Hi

On Sat, 22 Jun 2024 at 17:32, Andres Freund  wrote:

> > I don't think it's unreasonable to not support static linking, but I take
> > your point.
>
> Separately from this thread: ISTM that on windows it'd be quite beneficial
> to
> link a few things statically, given how annoying dealing with dlls can be?
> There's also the perf issue addressed further down.
>

Dealing with DLLs largely just boils down to copying them into the right
place when packaging. The perf issue is a much more compelling reason to
look at static linking imho.
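
For what it's worth, that copying step really is as dull as it sounds -
something along these lines, with the directory layout below being purely
illustrative rather than the installers' real structure:

import glob
import os
import shutil

DEP_BIN_DIRS = [r"C:\build64\bin"]         # where the dependency DLLs end up
PACKAGE_BIN_DIR = r"C:\package\pgsql\bin"  # alongside psql.exe, libpq.dll etc.

os.makedirs(PACKAGE_BIN_DIR, exist_ok=True)
for dep_dir in DEP_BIN_DIRS:
    for dll in glob.glob(os.path.join(dep_dir, "*.dll")):
        shutil.copy2(dll, PACKAGE_BIN_DIR)
        print("bundled", os.path.basename(dll))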


>
>
> > My assumption all along was that Meson would replace autoconf etc. before
> > anything happened with MSVC, precisely because that's the type of
> > environment all the Postgres devs work in primarily. Instead we seem to
> > have taken what I think is a flawed approach of entirely replacing the
> > build system on the platform none of the devs use, whilst leaving the new
> > system as an experimental option on the platforms it will have had orders
> > of magnitude more testing.
>
> The old system was a major bottleneck. For one, there was no way to run all
> tests. And even the tests that one could run, would run serially, leading
> to
> exceedingly long tests times. While that could partially be addressed by
> having both buildsystems in parallel, the old one would frequently break
> in a
> way that one couldn't reproduce on other systems. And resource wise it
> wasn't
> feasible to test both old and new system for cfbot/CI.
>

Hmm, I've found that running the tests under Meson takes notably longer
than the old system - maybe 5 - 10x longer ("meson test" vs. "vcregress
check"). I haven't yet put any effort into figuring out a cause for that
yet.


> FWIW, dynamic linking has a noticeable overhead on other platforms too. A
> non-dependencies-enabled postgres can do about 2x the
> connections-per-second
> than a fully kitted out postgres can (basically due to more memory mapping
> metadata being copied).  But on windows the overhead is larger because so
> much
> more happens for every new connections, including loading all dlls from
> scratch.
>
> I suspect linking a few libraries statically would be quite worth it on
> windows. On other platforms it'd be quite inadvisable to statically link
> libraries, due to security updates, but for stuff like the EDB windows
> installer dynamic linking doesn't really help with that afaict?
>

Correct - we're shipping the dependencies ourselves, so we have to
rewrap/retest anyway.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-24 Thread Dave Page
On Sat, 22 Jun 2024 at 17:35, Andres Freund  wrote:

> Hi,
>
> On 2024-06-21 15:36:56 +0100, Dave Page wrote:
> > For giggles, I took a crack at doing that, manually creating .pc files
> for
> > everything I've been working with so far.
>
> Cool!
>
>
> > It seems to work as expected, except that unlike everything else libintl
> is
> > detected entirely based on whether the header and library can be found. I
> > had to pass extra lib and include dirs:
>
> Yea, right now libintl isn't using dependency detection because I didn't
> see
> any platform where it's distributed with a .pc for or such. It'd be just a
> line or two to make it use one...
>

I think it should, for consistency if nothing else - especially if we're
adding our own pc/cmake files to prebuilt dependencies.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-25 Thread Dave Page
On Tue, 25 Jun 2024 at 11:41, Andres Freund  wrote:

> Hi,
>
> On 2024-06-21 12:20:49 +0100, Dave Page wrote:
> > > I'm confused - the old build system wasn't flexible around this stuff
> *at
> > > all*. Everyone had to patch it to get dependencies to work, unless you
> > > chose
> > > exactly the right source to download from - which was often not
> documented
> > > or
> > > outdated.
> > >
> >
> > As I noted above - as the "owner" of the official packages, I never did
> > despite using a variety of upstream sources.
>
> For reference, with 16 and src/tools/msvc:
> - upstream zstd build doesn't work, wrong filename (libzstd.dll.a instead
> of libzstd.lib)
> - upstream lz4 build doesn't work, wrong filename (liblz4.dll.a instead of
> liblz4.lib)
> - openssl, from https://slproweb.com/products/Win32OpenSSL.htm , as our
>   docs suggest: doesn't work, wrong filenames (openssl.lib instead of
>   lib*64.lib, works if you delete lib/VC/sslcrypto64MD.lib)
> - iconv/intl: mismatching library names (lib*.dll.a lib*.lib)
>
> - zlib at least at least from some of the sources (it's hard to tell,
> because
>   everything available is so outdated), wrong filenames
>

https://github.com/dpage/winpgbuild proves that the hacks above are not
required *if* you build the dependencies in the recommended way for use
with MSVC++ (where documented), otherwise just native Windows.

If you, for example, build a dependency using Mingw/Msys, then you may get
different filenames than if you build the same thing using its VC++
solution or makefile. That's where most, if not all, of these issues come
from.

It's probably worth noting that "back in the day" when most of this stuff
was built, there was no UCRT32 compiler option, and it really was a
potential problem to mix VC++ and Mingw compiled binaries so there was a
heavy focus on making sure everything was designed around the MSVC++ builds
wherever they existed.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Meson far from ready on Windows

2024-06-25 Thread Dave Page
Hi

On Tue, 25 Jun 2024 at 12:39, Andres Freund  wrote:

> Hi,
>
> On 2024-06-25 11:54:56 +0100, Dave Page wrote:
> > https://github.com/dpage/winpgbuild proves that the hacks above are not
> > required *if* you build the dependencies in the recommended way for use
> > with MSVC++ (where documented), otherwise just native Windows.
>
> Partially it just means that some of the hacks are now located in the
> "build
> dependencies" script.  E.g. you're renaming libintl.dll.a, libiconv.dll.a,
> libuuid.a to something that's expected by the buildmethod. And the scripts
> change the directory structure for several other dependencies (e.g. zstd,
> krb).
>
>
> > If you, for example, build a dependency using Mingw/Msys, then you may
> get
> > different filenames than if you build the same thing using its VC++
> > solution or makefile. That's where most, if not all, of these issues come
> > from.
>
> Yes, that's precisely my point. The set of correct names / flags depends on
> things outside of postgres control. Hence they should be handled outside of
> postgres, not as part of postgres. Particularly because several of the
> dependencies can be built in multiple ways, resulting in multiple library
> names. And it doesn't even just differ by compiler, there's ways to get
> different library names for some of the deps even with the same compiler!
>

Replying to this and your previous email.

I think we're in violent agreement here as to how things *should* be in an
ideal world. The issue for me is that this isn't an ideal world and the
current solution potentially makes it much harder to get a working
PostgreSQL build on Windows - not only that, but everyone doing those
builds potentially has to figure out how to get things to work for
themselves because we're pushing the knowledge outside of our build system.

I've been building Postgres on Windows for years (and like to think I'm
reasonably competent), and despite reading the docs I still ended up
starting multiple threads on the hackers list to try to understand how to
get v17 to build. Would we accept the current Meson setup on Linux if
people had to hand-craft .pc files, or install dependencies using multiple
third-party package managers to get it to work?

As I previously noted, I think we should default to pkgconfig/cmake
detection, but then fall back to what we did previously (with suitably
obnoxious warnings). Then at least a build environment that worked in the
past should work in the future.
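
To make that concrete, the idea is that a setup along these lines - only a
sketch, with paths that are purely examples - would still succeed when a
dependency ships no usable .pc or cmake config, just with a loud warning
that it fell back to a plain header/library search:

meson setup build -Dzlib=enabled ^
  -Dextra_include_dirs=C:/build64/zlib/include ^
  -Dextra_lib_dirs=C:/build64/zlib/lib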

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: tests fail on windows with default git settings

2024-07-08 Thread Dave Page
Hi

On Sun, 7 Jul 2024 at 07:07, Andres Freund  wrote:

> Hi,
>
> On 2024-07-07 01:26:13 -0400, Tom Lane wrote:
> > Andres Freund  writes:
> > > Do we want to support checking out with core.autocrlf?
> >
> > -1.  It would be a constant source of breakage, and you could never
> > expect that (for example) making a tarball from such a checkout
> > would match anyone else's results.
>
> WFM.
>
>
> > > If we do not want to support that, ISTM we ought to raise an error
> somewhere?
> >
> > +1, if we can figure out how.
>
> I can see two paths:
>
> 1) we prevent eol conversion, by using the right magic incantation in
>.gitattributes
>
> 2) we check that some canary file is correctly encoded, e.g. during meson
>configure (should suffice, this is realistically only a windows issue)
>
>
> It seems that the only realistic way to achieve 1) is to remove the "text"
> attribute from all files. That had me worried for a bit, thinking that
> might
> have a larger blast radius. However, it looks like this is solely used for
> line-ending conversion. The man page says:
>   "This attribute marks the path as a text file, which enables end-of-line
> conversion:"
>
>
> Which sounds like it'd work well - except that it appears to behave oddly
> when
> updating to such a change in an existing repo -
>
> cd /tmp/;
> rm -rf pg-eol;
> git -c core.eol=crlf -c core.autocrlf=true clone ~/src/postgresql pg-eol;
> cd pg-eol;
> git config core.eol crlf; git config core.autocrlf true;
> stat src/test/modules/test_json_parser/tiny.json -> 6748 bytes
>
> cd ~/src/postgresql
> stat src/test/modules/test_json_parser/tiny.json -> 6604 bytes
> echo '* -text' >> .gitattributes
> git commit -a -m tmp
>
> cd /tmp/pg-eol
> git pull
> git st
>   ...
>   nothing to commit, working tree clean
> stat src/test/modules/test_json_parser/tiny.json -> 6748 bytes
>
> I.e. the repo still is in CRLF state.
>
> But if I reclone at that point, the line endings are in a sane state.
>
>
> IIUC this is because line-ending conversion is done only during
> checkout/checkin.
>
>
> There are ways to get git to redo the normalization, but it's somewhat
> awkward [1].
>

Yeah, I vaguely remember playing with core.autocrlf many years ago and
running into similar issues.


>
> OTOH, given that the tests already fail, I assume our windows contributors
> already have disabled autocrlf?
>

I can't speak for others of course, but at least as far as building of
installers is concerned, we use tarballs not git checkouts.
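
For those who do build from a git checkout, the settings that avoid the
conversion are something along these lines (per repository, shown here just
for reference):

git config core.autocrlf false
git config core.eol lf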

As for my own work: I've started playing with PG17 on Windows just in
the last month or so, and have noticed a number of test failures as well as
a weird meson issue that only shows up on a GitHub Actions runner. I was
hoping to look into those issues this week, having been somewhat
sidetracked with other work of late.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: Should we work around msvc failing to compile tab-complete.c?

2024-07-09 Thread Dave Page
On Mon, 8 Jul 2024 at 21:08, Andres Freund  wrote:

> Hi,
>
> On 2024-07-08 14:18:03 -0400, Tom Lane wrote:
> > Andres Freund  writes:
> > > Compiling postgres on windows with tab-completion support fails either
> with
> > > "fatal error C1026: parser stack overflow, program too complex”.
> > > or (in recent versions) with
> > > "…/src/bin/psql/tab-complete.c(4023): fatal error C1001: Internal
> compiler error."
> >
> > > It's pretty easy to work around the error [2]. I wonder if we should
> just do
> > > that, then we don't have to insist on a very new msvc version being
> used and
> > > it'll work even if they just decide to fix the internal compiler error.
> >
> > I'm on board with doing something here, but wouldn't we want to
> > back-patch at least a minimal fix to all supported branches?
>
> I think we'd need to backpatch more for older branches. At least
>
> commit 3f28bd7337d
> Author: Thomas Munro 
> Date:   2022-12-22 17:14:23 +1300
>
> Add work-around for VA_ARGS_NARGS() on MSVC.
>
>
> Given that - afaict - tab completion never used to work with msvc, I think
> it'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is
> currently
> building with readline support for windows - not sure if any packager is
> going
> to go back and add support for it in older branches.


Packagers aren't likely to be using readline, as it's GPL and it would have
to be shipped with packages on Windows. They are more likely to be using
libedit if anything. Not sure if that has any bearing on the compilation
failure.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: tests fail on windows with default git settings

2024-07-09 Thread Dave Page
Hi

On Mon, 8 Jul 2024 at 22:44, Andres Freund  wrote:

> Hi,
>
> On 2024-07-08 16:56:10 -0400, Andrew Dunstan wrote:
> > On 2024-07-08 Mo 4:16 PM, Andres Freund wrote:
> > > I'm actually mildly surprised that the tests don't fail when *not*
> using
> > > autocrlf, because afaict test_json_parser_incremental.c doesn't set
> stdout to
> > > binary and thus we presumably end up with \r\n in the output? Except
> that that
> > > can't be true, because the test does pass on repos without autocrlf...
> > >
> > >
> > > That approach does seem to mildly conflict with Tom and your
> preference for
> > > fixing this by disallowing core.autocrlf? If we do so, the test never
> ought to
> > > see a crlf?
> > >
> >
> > IDK. I normally use core.autocrlf=false core.eol=lf on Windows. The
> editors
> > I use are reasonably well behaved ;-)
>
> :)
>
>
> > What I suggest (see attached) is we run the diff command with
> > --strip-trailing-cr on Windows. Then we just won't care if the expected
> file
> > and/or the output file has CRs.
>
> I was wondering about that too, but I wasn't sure we can rely on that flag
> being supported...
>

I have 4 different diff.exe's on my ~6 week old build VM (not counting
shims), all of which seem to support --strip-trailing-cr. Those builds came
with:

- git
- VC++
- diffutils (installed by chocolatey)
- vcpkg

I think it's reasonable to assume it'll be supported.
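
For anyone wanting to check a particular copy, something like this will show
whether the flag is listed:

diff --help | findstr strip-trailing-cr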


>
>
> > Not sure what the issue is with pg_bsd_indent, though.
>

Yeah - that's odd, as that test always passes for me, with or without
autocrlf.

The other failures I see are the following, which I'm just starting to dig
into:

 26/298 postgresql:recovery / recovery/019_replslot_limit                ERROR    43.05s   exit status 2
 44/298 postgresql:recovery / recovery/027_stream_regress                ERROR   383.08s   exit status 1
 50/298 postgresql:recovery / recovery/035_standby_logical_decoding      ERROR   138.06s   exit status 25
 68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync   ERROR   132.87s   exit status 25
170/298 postgresql:pg_dump / pg_dump/002_pg_dump                         ERROR    93.45s   exit status 2
233/298 postgresql:bloom / bloom/001_wal                                 ERROR    54.47s   exit status 2
236/298 postgresql:subscription / subscription/001_rep_changes           ERROR    46.46s   exit status 2
246/298 postgresql:subscription / subscription/010_truncate              ERROR    47.69s   exit status 2
253/298 postgresql:subscription / subscription/013_partition             ERROR   125.63s   exit status 25
255/298 postgresql:subscription / subscription/022_twophase_cascade      ERROR    58.13s   exit status 2
257/298 postgresql:subscription / subscription/015_stream                ERROR   128.32s   exit status 2
262/298 postgresql:subscription / subscription/028_row_filter            ERROR    43.14s   exit status 2
263/298 postgresql:subscription / subscription/027_nosuperuser           ERROR   102.02s   exit status 2
269/298 postgresql:subscription / subscription/031_column_list           ERROR   123.16s   exit status 2
271/298 postgresql:subscription / subscription/032_subscribe_use_index   ERROR   139.33s   exit status 2

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: tests fail on windows with default git settings

2024-07-10 Thread Dave Page
On Tue, 9 Jul 2024 at 17:32, Andres Freund  wrote:

> Hi,
>
> On 2024-07-09 14:52:39 +0100, Dave Page wrote:
> > I have 4 different diff.exe's on my ~6 week old build VM (not counting
> > shims), all of which seem to support --strip-trailing-cr. Those builds
> came
> > with:
> >
> > - git
> > - VC++
> > - diffutils (installed by chocolatey)
> > - vcpkg
> >
> > I think it's reasonable to assume it'll be supported.
>
> I think the more likely issue would be an older setup with an older diff,
> people on windows seem to not want to touch a working setup ever :). But we
> can deal with that if reports about it come in.
>

They've got to move to meson/ninja anyway, so...


>
>
> > > > Not sure what the issue is with pg_bsd_indent, though.
> > >
> >
> > Yeah - that's odd, as that test always passes for me, with or without
> > autocrlf.
>
> Huh.
>
>
> > The other failures I see are the following, which I'm just starting to
> dig
> > into:
> >
> >  26/298 postgresql:recovery / recovery/019_replslot_limit
> > ERROR43.05s   exit status 2
> >  44/298 postgresql:recovery / recovery/027_stream_regress
> > ERROR   383.08s   exit status 1
> >  50/298 postgresql:recovery / recovery/035_standby_logical_decoding
> > ERROR   138.06s   exit status 25
> >  68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync
> >  ERROR   132.87s   exit status 25
> > 170/298 postgresql:pg_dump / pg_dump/002_pg_dump
> >  ERROR93.45s   exit status 2
> > 233/298 postgresql:bloom / bloom/001_wal
> >  ERROR54.47s   exit status 2
> > 236/298 postgresql:subscription / subscription/001_rep_changes
> >  ERROR46.46s   exit status 2
> > 246/298 postgresql:subscription / subscription/010_truncate
> > ERROR47.69s   exit status 2
> > 253/298 postgresql:subscription / subscription/013_partition
> >  ERROR   125.63s   exit status 25
> > 255/298 postgresql:subscription / subscription/022_twophase_cascade
> > ERROR58.13s   exit status 2
> > 257/298 postgresql:subscription / subscription/015_stream
> > ERROR   128.32s   exit status 2
> > 262/298 postgresql:subscription / subscription/028_row_filter
> > ERROR43.14s   exit status 2
> > 263/298 postgresql:subscription / subscription/027_nosuperuser
> >  ERROR   102.02s   exit status 2
> > 269/298 postgresql:subscription / subscription/031_column_list
> >  ERROR   123.16s   exit status 2
> > 271/298 postgresql:subscription / subscription/032_subscribe_use_index
> >  ERROR   139.33s   exit status 2
>
> Hm, it'd be good to see some of errors behind that ([1]).
>
> I suspect it might be related to conflicting ports. I had to use
> PG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:
>
>   # use unix socket to prevent port conflicts
>   $env:PG_TEST_USE_UNIX_SOCKETS = 1;
>   # otherwise pg_regress insists on creating the directory and
> does it
>   # in a non-existing place, this needs to be fixed :(
>   mkdir d:/sockets
>   $env:PG_REGRESS_SOCK_DIR = "d:/sockets/"
>

No, it all seems to be fallout from GSSAPI being included in the build. If
I build without that, everything passes. Most of the tests are failing with
a "too many clients already" error, but a handful do seem to include auth
related errors as well. For example, this is from




>
>
> FWIW, building a tree with the patches I sent to the list last night and
> changes to make postgresql-dev.yml use a git checkout, I get:
>
>
> https://github.com/anarazel/winpgbuild/actions/runs/9852370209/job/27200784987#step:12:469
>
> Ok: 281
> Expected Fail:  0
> Fail:   0
> Unexpected Pass:0
> Skipped:17
> Timeout:0
>
> This is without readline and pltcl, as neither is currently built as part
> of
> winpgbuild. Otherwise it has all applicable dependencies enabled (no
> bonjour,
> bsd_auth, dtrace, llvm, pam, selinux, systemd, but that's afaict expected).
>
> Greetings,
>
> Andres Freund
>
>
> [1] I plan to submit a PR that'll collect the necessary information
>


-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: tests fail on windows with default git settings

2024-07-10 Thread Dave Page
Sorry - somehow managed to send whilst pasting in logs...

On Wed, 10 Jul 2024 at 10:30, Dave Page  wrote:

>
>
> On Tue, 9 Jul 2024 at 17:32, Andres Freund  wrote:
>
>> Hi,
>>
>> On 2024-07-09 14:52:39 +0100, Dave Page wrote:
>> > I have 4 different diff.exe's on my ~6 week old build VM (not counting
>> > shims), all of which seem to support --strip-trailing-cr. Those builds
>> came
>> > with:
>> >
>> > - git
>> > - VC++
>> > - diffutils (installed by chocolatey)
>> > - vcpkg
>> >
>> > I think it's reasonable to assume it'll be supported.
>>
>> I think the more likely issue would be an older setup with an older diff,
>> people on windows seem to not want to touch a working setup ever :). But
>> we
>> can deal with that if reports about it come in.
>>
>
> They've got to move to meson/ninja anyway, so...
>
>
>>
>>
>> > > > Not sure what the issue is with pg_bsd_indent, though.
>> > >
>> >
>> > Yeah - that's odd, as that test always passes for me, with or without
>> > autocrlf.
>>
>> Huh.
>>
>>
>> > The other failures I see are the following, which I'm just starting to
>> dig
>> > into:
>> >
>> >  26/298 postgresql:recovery / recovery/019_replslot_limit
>> > ERROR43.05s   exit status 2
>> >  44/298 postgresql:recovery / recovery/027_stream_regress
>> > ERROR   383.08s   exit status 1
>> >  50/298 postgresql:recovery / recovery/035_standby_logical_decoding
>> > ERROR   138.06s   exit status 25
>> >  68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync
>> >  ERROR   132.87s   exit status 25
>> > 170/298 postgresql:pg_dump / pg_dump/002_pg_dump
>> >  ERROR93.45s   exit status 2
>> > 233/298 postgresql:bloom / bloom/001_wal
>> >  ERROR54.47s   exit status 2
>> > 236/298 postgresql:subscription / subscription/001_rep_changes
>> >  ERROR46.46s   exit status 2
>> > 246/298 postgresql:subscription / subscription/010_truncate
>> > ERROR47.69s   exit status 2
>> > 253/298 postgresql:subscription / subscription/013_partition
>> >  ERROR   125.63s   exit status 25
>> > 255/298 postgresql:subscription / subscription/022_twophase_cascade
>> > ERROR58.13s   exit status 2
>> > 257/298 postgresql:subscription / subscription/015_stream
>> > ERROR   128.32s   exit status 2
>> > 262/298 postgresql:subscription / subscription/028_row_filter
>> > ERROR43.14s   exit status 2
>> > 263/298 postgresql:subscription / subscription/027_nosuperuser
>> >  ERROR   102.02s   exit status 2
>> > 269/298 postgresql:subscription / subscription/031_column_list
>> >  ERROR   123.16s   exit status 2
>> > 271/298 postgresql:subscription / subscription/032_subscribe_use_index
>> >  ERROR   139.33s   exit status 2
>>
>> Hm, it'd be good to see some of errors behind that ([1]).
>>
>> I suspect it might be related to conflicting ports. I had to use
>> PG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:
>>
>>   # use unix socket to prevent port conflicts
>>   $env:PG_TEST_USE_UNIX_SOCKETS = 1;
>>   # otherwise pg_regress insists on creating the directory and
>> does it
>>   # in a non-existing place, this needs to be fixed :(
>>   mkdir d:/sockets
>>   $env:PG_REGRESS_SOCK_DIR = "d:/sockets/"
>>
>
> No, it all seems to be fallout from GSSAPI being included in the build. If
> I build without that, everything passes. Most of the tests are failing with
> a "too many clients already" error, but a handful do seem to include GSSAPI
> auth related errors as well. For example, this is from
>


... this is from subscription/001_rep_changes:

[14:46:57.723](2.318s) ok 11 - check rows on subscriber after table drop
from publication
connection error: 'psql: error: connection to server at "127.0.0.1", port
58059 failed: could not initiate GSSAPI security context: No credentials
were supplied, or the credentials were unavailable or inaccessible:
Credential cache is empty
connection to server at "127.0.0.1", port 58059 failed: FATA

Re: Should we work around msvc failing to compile tab-complete.c?

2024-07-10 Thread Dave Page
On Tue, 9 Jul 2024 at 17:23, Andres Freund  wrote:

> Hi,
>
> On 2024-07-09 09:14:33 +0100, Dave Page wrote:
> > On Mon, 8 Jul 2024 at 21:08, Andres Freund  wrote:
> > > I think we'd need to backpatch more for older branches. At least
> > >
> > > commit 3f28bd7337d
> > > Author: Thomas Munro 
> > > Date:   2022-12-22 17:14:23 +1300
> > >
> > > Add work-around for VA_ARGS_NARGS() on MSVC.
> > >
> > >
> > > Given that - afaict - tab completion never used to work with msvc, I
> think
> > > it'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is
> > > currently
> > > building with readline support for windows - not sure if any packager
> is
> > > going
> > > to go back and add support for it in older branches.
> >
> >
> > Packagers aren't likely to be using readline, as it's GPL and it would
> have
> > to be shipped with packages on Windows.
>
> I'm not sure (I mean that literally, not as a way to state that I think
> it's
> not as you say) it'd actually be *that* big a problem to use readline -
> sure
> it'd make psql GPL, but that's the only thing using readline. Since psql
> doesn't link to extensible / user provided code, it might actually be ok.
>
> With openssl < 3.0 the mix between openssl and readline would be
> problematic,
> IIRC the licenses aren't really compatible. But luckily openssl relicensed
> to
> apache v2.
>

Certainly in the case of the packages produced at EDB, we didn't want any
potential surprises for users/redistributors/embedders, so *any* GPL was a
problem.


> But that is pretty orthogonal to the issue at hand:
>

Right.

What is more relevant is that, as far as I can see, we've never actually
supported either libedit or readline in MSVC++ builds - which kinda makes
sense, because Windows terminals are very different from traditional *nix
ones. Neither libedit nor readline supports Windows, except through a
couple of forks.

I imagine that with mingw/cygwin builds, readline would only actually work
properly in their respective terminals.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: tests fail on windows with default git settings

2024-07-10 Thread Dave Page
On Wed, 10 Jul 2024 at 12:12, Andrew Dunstan  wrote:

>
> On 2024-07-09 Tu 11:34 AM, Andrew Dunstan wrote:
>
>
> On 2024-07-09 Tu 9:52 AM, Dave Page wrote:
>
>
>
>> > What I suggest (see attached) is we run the diff command with
>> > --strip-trailing-cr on Windows. Then we just won't care if the expected
>> file
>> > and/or the output file has CRs.
>>
>> I was wondering about that too, but I wasn't sure we can rely on that flag
>> being supported...
>>
>
> I have 4 different diff.exe's on my ~6 week old build VM (not counting
> shims), all of which seem to support --strip-trailing-cr. Those builds came
> with:
>
> - git
> - VC++
> - diffutils (installed by chocolatey)
> - vcpkg
>
> I think it's reasonable to assume it'll be supported.
>
>
>
> Ok, cool. So I propose to patch the test_json_parser and pg_bsd_indent
> tests to use it on Windows, later today unless there's some objection.
>
>
>
>
> As I was looking at this I wondered if there might be anywhere else that
> needed adjustment. One thing that occurred to me was that maybe we
> should replace the use of "-w" in pg_regress.c with this rather less
> dangerous flag, so instead of ignoring any white space difference we would
> only ignore line end differences. The use of "-w" apparently dates back to
> 2009.
>
That seems like a good improvement to me.
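
To spell out the difference with plain diff invocations (file names here are
purely illustrative): "-w" ignores any whitespace change anywhere in a line,
whereas the narrower flag only tolerates CRLF vs LF line endings:

rem what we effectively do today: ignore all whitespace differences
diff -w expected\foo.out results\foo.out

rem proposed: only ignore a trailing CR on each line
diff --strip-trailing-cr expected\foo.out results\foo.out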

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: tests fail on windows with default git settings

2024-07-11 Thread Dave Page
On Wed, 10 Jul 2024 at 10:35, Dave Page  wrote:

> > The other failures I see are the following, which I'm just starting to
>>> dig
>>> > into:
>>> >
>>> >  26/298 postgresql:recovery / recovery/019_replslot_limit
>>> > ERROR43.05s   exit status 2
>>> >  44/298 postgresql:recovery / recovery/027_stream_regress
>>> > ERROR   383.08s   exit status 1
>>> >  50/298 postgresql:recovery / recovery/035_standby_logical_decoding
>>> > ERROR   138.06s   exit status 25
>>> >  68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync
>>> >  ERROR   132.87s   exit status 25
>>> > 170/298 postgresql:pg_dump / pg_dump/002_pg_dump
>>> >  ERROR93.45s   exit status 2
>>> > 233/298 postgresql:bloom / bloom/001_wal
>>> >  ERROR54.47s   exit status 2
>>> > 236/298 postgresql:subscription / subscription/001_rep_changes
>>> >  ERROR46.46s   exit status 2
>>> > 246/298 postgresql:subscription / subscription/010_truncate
>>> > ERROR47.69s   exit status 2
>>> > 253/298 postgresql:subscription / subscription/013_partition
>>> >  ERROR   125.63s   exit status 25
>>> > 255/298 postgresql:subscription / subscription/022_twophase_cascade
>>> > ERROR58.13s   exit status 2
>>> > 257/298 postgresql:subscription / subscription/015_stream
>>> > ERROR   128.32s   exit status 2
>>> > 262/298 postgresql:subscription / subscription/028_row_filter
>>> > ERROR43.14s   exit status 2
>>> > 263/298 postgresql:subscription / subscription/027_nosuperuser
>>> >  ERROR   102.02s   exit status 2
>>> > 269/298 postgresql:subscription / subscription/031_column_list
>>> >  ERROR   123.16s   exit status 2
>>> > 271/298 postgresql:subscription / subscription/032_subscribe_use_index
>>> >  ERROR   139.33s   exit status 2
>>>
>>> Hm, it'd be good to see some of errors behind that ([1]).
>>>
>>> I suspect it might be related to conflicting ports. I had to use
>>> PG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:
>>>
>>>   # use unix socket to prevent port conflicts
>>>   $env:PG_TEST_USE_UNIX_SOCKETS = 1;
>>>   # otherwise pg_regress insists on creating the directory and
>>> does it
>>>   # in a non-existing place, this needs to be fixed :(
>>>   mkdir d:/sockets
>>>   $env:PG_REGRESS_SOCK_DIR = "d:/sockets/"
>>>
>>
>> No, it all seems to be fallout from GSSAPI being included in the build.
>> If I build without that, everything passes. Most of the tests are failing
>> with a "too many clients already" error, but a handful do seem to include
>> GSSAPI auth related errors as well. For example, this is from
>>
>
>
> ... this is from subscription/001_rep_changes:
>
> [14:46:57.723](2.318s) ok 11 - check rows on subscriber after table drop
> from publication
> connection error: 'psql: error: connection to server at "127.0.0.1", port
> 58059 failed: could not initiate GSSAPI security context: No credentials
> were supplied, or the credentials were unavailable or inaccessible:
> Credential cache is empty
> connection to server at "127.0.0.1", port 58059 failed: FATAL:  sorry, too
> many clients already'
> while running 'psql -XAtq -d port=58059 host=127.0.0.1 dbname='postgres'
> -f - -v ON_ERROR_STOP=1' at
> C:/Users/dpage/git/postgresql/src/test/perl/PostgreSQL/Test/Cluster.pm line
> 2129.
> # Postmaster PID for node "publisher" is 14488
> ### Stopping node "publisher" using mode immediate
> # Running: pg_ctl -D
> C:\Users\dpage\git\postgresql\build/testrun/subscription/001_rep_changes\data/t_001_rep_changes_publisher_data/pgdata
> -m immediate stop
> waiting for server to shut down done
> server stopped
> # No postmaster PID for node "publisher"
> # Postmaster PID for node "subscriber" is 15012
> ### Stopping node "subscriber" using mode immediate
> # Running: pg_ctl -D
> C:\Users\dpage\git\postgresql\build/testrun/subscription/001_rep_changes\data/t_001_rep_changes_subscriber_data/pgdata
> -m immediate st

Re: zlib detection in Meson on Windows broken?

2024-05-21 Thread Dave Page
Hi Sandeep, Nazir,

On Tue, 21 May 2024 at 10:14, Nazir Bilal Yavuz  wrote:

> Hi,
>
> On Tue, 21 May 2024 at 10:20, Sandeep Thakkar
>  wrote:
> >
> > Hi Dave,
> >
> > Is the .pc file generated after the successful build of zlib? If yes,
> then meson should be able to detect the installation ideally
>
> If meson is not able to find the .pc file automatically, using 'meson
> setup ... --pkg-config-path $ZLIB_PC_PATH' might help.
>

The problem is that on Windows there are no standard locations for a
Unix-style development library installation such as this, so the chances
are that the .pc file will point to entirely the wrong location.

For example, please see
https://github.com/dpage/winpgbuild/actions/runs/9172187335 which is a
Github action that builds a completely vanilla zlib using VC++. If you look
at the uploaded artefact containing the build output and examine the .pc
file, you'll see it references /zlib as the location, which is simply where
I built it in that action. On a developer's machine that's almost certainly
not going to be where it actually ends up. For example, on the pgAdmin
build farm, the dependencies all end up in C:\build64\[whatever]. On the
similar Github action I'm building for PostgreSQL, that artefact will be
unpacked into /build/zlib.
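
To illustrate, the zlib.pc generated by that action looks roughly like this
(reconstructed from memory, so treat the details as approximate) - note the
baked-in build location:

prefix=/zlib
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
sharedlibdir=${libdir}
includedir=${prefix}/include

Libs: -L${libdir} -L${sharedlibdir} -lz
Cflags: -I${includedir}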

Of course, for my own builds I can easily make everything use consistent
directories, however most people who are likely to want to build PostgreSQL
may not want to also build all the dependencies themselves as well, as some
are a lot more difficult than zlib. So what tends to happen is people find
third party builds or upstream official builds.

I would therefore argue that if the .pc file that's found doesn't provide
correct paths for us, then Meson should fall back to searching in the paths
specified on its command line for the appropriate libraries/headers (which
is what it does for OpenSSL for example, as that doesn't include a .pc
file). This is also what happens with PG16 and earlier.

One other thing I will note is that PG16 and earlier try to use the wrong
filename for the import library. For years, it's been a requirement to do
something like this: "copy \zlib\lib\zlib.lib \zlib\lib\zdll.lib" to make a
build succeed against a "vanilla" zlib build. I haven't got as far as
figuring out if the same is true with Meson yet.

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: zlib detection in Meson on Windows broken?

2024-05-21 Thread Dave Page
Hi

On Tue, 21 May 2024 at 15:12, Sandeep Thakkar <
sandeep.thak...@enterprisedb.com> wrote:

> Hi Dave,
>
>
> On Tue, May 21, 2024 at 3:12 PM Dave Page  wrote:
>
>> Hi Sandeep, Nazir,
>>
>> On Tue, 21 May 2024 at 10:14, Nazir Bilal Yavuz 
>> wrote:
>>
>>> Hi,
>>>
>>> On Tue, 21 May 2024 at 10:20, Sandeep Thakkar
>>>  wrote:
>>> >
>>> > Hi Dave,
>>> >
>>> > Is the .pc file generated after the successful build of zlib? If yes,
>>> then meson should be able to detect the installation ideally
>>>
>>> If meson is not able to find the .pc file automatically, using 'meson
>>> setup ... --pkg-config-path $ZLIB_PC_PATH' might help.
>>>
>>
>> The problem is that on Windows there are no standard locations for a
>> Unix-style development library installation such as this, so the chances
>> are that the .pc file will point to entirely the wrong location.
>>
>> For example, please see
>> https://github.com/dpage/winpgbuild/actions/runs/9172187335 which is a
>> Github action that builds a completely vanilla zlib using VC++. If you look
>> at the uploaded artefact containing the build output and example the .pc
>> file, you'll see it references /zlib as the location, which is simply where
>> I built it in that action. On a developer's machine that's almost certainly
>> not going to be where it actually ends up. For example, on the pgAdmin
>> build farm, the dependencies all end up in C:\build64\[whatever]. On the
>> similar Github action I'm building for PostgreSQL, that artefact will be
>> unpacked into /build/zlib.
>>
>>
> The above link returned 404. But I found a successful build at
> https://github.com/dpage/winpgbuild/actions/runs/9175426807. I downloaded
> the artifact but didn't find .pc file as I wanted to look into the content
> of that file.
>

Yeah, sorry - that was an old one.

Following some offline discussion with Andrew I realised I was building
zlib incorrectly - using cmake/msbuild instead of nmake and some manual
copying. Whilst the former does create a working library and a sane-looking
installation, it's not the recommended method, which is documented in a
very non-obvious way - far less obvious than "oh look, there's a cmake
config - let's use that".

The new build is done using the recommended method, which works with PG16
and below out of the box with no need to rename any files.
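
For reference, the recommended method boils down to roughly the following
from a VS developer prompt; zlib's Makefile has no install target, so the
copying at the end is manual and the exact file list here is approximate:

nmake -f win32\Makefile.msc
rem then copy the outputs by hand, e.g.:
rem   zlib.h, zconf.h     -> C:\build64\zlib\include
rem   zlib.lib, zdll.lib  -> C:\build64\zlib\lib
rem   zlib1.dll           -> C:\build64\zlib\bin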


>
> I had a word with Murali who mentioned he encountered a similar issue
> while building PG17 on windows. He worked-around is by using a template .pc
> file that includes these lines:
> --
> prefix=${pcfiledir}/../..
> exec_prefix=${prefix}
> libdir=${prefix}/lib
> sharedlibdir=${prefix}/lib
> includedir=${prefix}/include
> --
>

The issue here is that there is no .pc file created with the correct way of
building zlib, and even if there were (or I created a dummy one),
pkg-config isn't really a thing on Windows.

I'd also note that from what Andrew has shown me of the zlib installation
on the buildfarm member drongo, there is no .pc file there either, and yet
seems to work fine (and my zlib installation now has the exact same set of
files as his does).


>
> But in general I agree with you on the issue of Meson's dependency on
> pkgconfig files to detect the third party libraries.
>
> Of course, for my own builds I can easily make everything use consistent
>> directories, however most people who are likely to want to build PostgreSQL
>> may not want to also build all the dependencies themselves as well, as some
>> are a lot more difficult than zlib. So what tends to happen is people find
>> third party builds or upstream official builds.
>>
>> I would therefore argue that if the .pc file that's found doesn't provide
>> correct paths for us, then Meson should fall back to searching in the paths
>> specified on its command line for the appropriate libraries/headers (which
>> is what it does for OpenSSL for example, as that doesn't include a .pc
>> file). This is also what happens with PG16 and earlier.
>>
>> One other thing I will note is that PG16 and earlier try to use the wrong
>> filename for the import library. For years, it's been a requirement to do
>> something like this: "copy \zlib\lib\zlib.lib \zlib\lib\zdll.lib" to make a
>> build succeed against a "vanilla" zlib build. I haven't got as far as
>> figuring out if the same is true with Meson yet.
>>
>> --
>> Dave Page
>> pgAdmin: https://www.pgadmin.org
>> PostgreSQL: https://www.postgresql.org
>> EDB: https://www.enterprisedb.com
>>
>>
>
> --
> Sandeep Thakkar
>
>
>

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: zlib detection in Meson on Windows broken?

2024-05-21 Thread Dave Page
On Tue, 21 May 2024 at 16:04, Andres Freund  wrote:

> Hi,
>
> On 2024-05-20 11:58:05 +0100, Dave Page wrote:
> > I have very little experience with Meson, and even less interpreting it's
> > logs, but it seems to me that it's not including the extra lib and
> include
> > directories when it runs the test compile, given the command line it's
> > reporting:
> >
> > cl
> C:\Users\dpage\git\postgresql\build\meson-private\tmpg_h4xcue\testfile.c
> > /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od
> /Oi-
> >
> > Bug, or am I doing something silly?
>
> It's a buglet. We rely on meson's internal fallback detection of zlib, if
> it's
> not provided via pkg-config or cmake. But it doesn't know about our
> extra_include_dirs parameter. We should probably fix that...
>

Oh good, then I'm not going bonkers. I'm still curious about why it works
for Andrew but not for me; however, fixing that buglet should solve my issue,
and would be sensible behaviour.

Thanks!

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: zlib detection in Meson on Windows broken?

2024-05-22 Thread Dave Page
On Tue, 21 May 2024 at 18:00, Andres Freund  wrote:

> Hi,
>
> On 2024-05-20 11:58:05 +0100, Dave Page wrote:
> > I then attempt to build PostgreSQL:
> >
> >  meson setup build
> > -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include
> > -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib -Dssl=openssl
> > -Dzlib=enabled --prefix=c:/build64/pgsql
> >
> > Which results in the output in output.txt, indicating that OpenSSL was
> > correctly found, but zlib was not. I've also attached the meson log.
>
> I forgot to mention that earlier: Assuming you're building something to be
> distributed, I'd recommend --auto-features=enabled/disabled and specifying
> specifically which dependencies you want to be used.
>

Good idea - thanks.
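
For the record, something along these lines is what I'll likely switch to -
this is only a sketch, and the exact set of features will depend on the
package being built:

meson setup build --auto-features=disabled ^
  -Dssl=openssl -Dzlib=enabled ^
  -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include ^
  -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib ^
  --prefix=c:/build64/pgsql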

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: zlib detection in Meson on Windows broken?

2024-05-22 Thread Dave Page
On Tue, 21 May 2024 at 20:54, Andrew Dunstan  wrote:

>
> On 2024-05-21 Tu 11:04, Andres Freund wrote:
> > Hi,
> >
> > On 2024-05-20 11:58:05 +0100, Dave Page wrote:
> >> I have very little experience with Meson, and even less interpreting
> it's
> >> logs, but it seems to me that it's not including the extra lib and
> include
> >> directories when it runs the test compile, given the command line it's
> >> reporting:
> >>
> >> cl
> C:\Users\dpage\git\postgresql\build\meson-private\tmpg_h4xcue\testfile.c
> >> /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od
> /Oi-
> >>
> >> Bug, or am I doing something silly?
> > It's a buglet. We rely on meson's internal fallback detection of zlib,
> if it's
> > not provided via pkg-config or cmake. But it doesn't know about our
> > extra_include_dirs parameter. We should probably fix that...
> >
>
> Yeah. Meanwhile, what I got working on a totally fresh Windows + VS
> install was instead of using extra_include_dirs etc to add the relevant
> directories to the environment LIB and INCLUDE settings before calling
> `"meson setup".
>

Yes, that works for me too.
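
For anyone else trying this, the workaround looks something like the
following (paths are just my local examples):

set INCLUDE=C:\build64\zlib\include;C:\build64\openssl\include;%INCLUDE%
set LIB=C:\build64\zlib\lib;C:\build64\openssl\lib;%LIB%
meson setup build -Dssl=openssl -Dzlib=enabled --prefix=c:/build64/pgsql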

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com

