Re: postgresql 13.1: precision of spatial operations

2022-12-01 Thread Вадим Самохин
Wed, Nov 30, 2022, 20:51 Brad White :

> On 11/30/2022 9:48 AM, Вадим Самохин wrote:
>
> > Thank you so much Ivan, it worked!
> Can you give any more detail on which approach you took, for the sake of
> future followers?
>

Sure, I multiplied all points' coordinates by a factor of 10^6.

Here are the steps to demonstrate that the solution Ivan gave worked:

1. create table zones (
zone_id int,
zone_polygon polygon,
description text
);
create index zones__zone_polygon on zones using gist(zone_polygon poly_ops);

2. insert into zones (zone_polygon) values
('(37622012.9,55751936.7),(37621534.4,55753661.6),(37617206.4,55755950.9),(37622012.9,55751936.7)');

3. select count(1) from zones where zone_polygon @>
'(37617635,55755814)'::polygon;
 count
-------
     0
(1 row)
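To query with real-world degree coordinates against the scaled polygons, the same factor can be applied at query time. A minimal sketch (the statement name is assumed); PostgreSQL's built-in geometric types compare with a fixed fuzz factor (EPSILON, 1e-6), which is why scaling by 10^6 lifts boundary differences above the fuzz threshold:

```sql
-- A sketch (statement name assumed): degree coordinates are scaled by
-- the same 10^6 factor at query time, so differences near the polygon
-- boundary stay above the ~1e-6 epsilon used by the geometric operators.
prepare zone_lookup (float8, float8) as
select zone_id, description
from zones
where zone_polygon @> point($1 * 1e6, $2 * 1e6);

execute zone_lookup(37.617635, 55.755814);  -- the point from step 3, in degrees
```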



postgres question / answer

2022-12-01 Thread Rizwan Shaukat


Hi Team,

I'm new to Postgres and trying to learn it. I have many questions in my mind
that I need to discuss with the community; please add access for questions.

Rizwan



Re: postgres question / answer

2022-12-01 Thread Ray O'Donnell

On 30/11/2022 22:39, Rizwan Shaukat wrote:


Hi Team,

I'm new to Postgres and trying to learn it. I have many questions in my
mind that I need to discuss with the community; please add access for questions.


Hi there,

This is a mailing list - emails to it go to everyone on the list. Ask away!

Ray.


--
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie





Re: postgres large database backup

2022-12-01 Thread Mladen Gogala

On 11/30/22 20:41, Michael Loftis wrote:


ZFS snapshots don’t typically have much if  any performance impact 
versus not having a snapshot (and already being on ZFS) because it’s 
already doing COW style semantics.



Hi Michael,

I am not sure that such a statement holds water. When a snapshot is taken, 
the amount of necessary I/O requests goes up dramatically. For every 
block that the snapshot points to, it is necessary to read the block, write 
it to a spare location and then overwrite it, if you want to write to 
a block pointed to by the snapshot. That gives 3 I/O requests for every block 
written. NetApp tries to optimize this by using 64 MB blocks, but ZFS 
on Linux cannot do that; it has to use standard CoW because it doesn't 
have the benefit of NetApp's own hardware and OS. And standard 
CoW triples the number of I/O requests for every write to the blocks 
pointed to by the snapshot, for every snapshot. CoW is a very expensive 
animal, with horns.


Regards

--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com


Re: postgres large database backup

2022-12-01 Thread Mladen Gogala

On 11/30/22 20:51, Ron wrote:
ZFS will, yes, be slower than a raw disk (but that's not an option for 
Pg anyway), and may or may not be faster than a different filesystem 
on a HW RAID volume or storage array volume. It absolutely takes more 
care/clue/tuning to get Pg write performance on ZFS, and ZFS does 
duplicate some of Pg's resiliency, so there is duplicate work going on.


Ron, if this raw device reference is about ASM: Oracle has a file system 
on top of ASM, called ACFS, and I have been able to run PostgreSQL on 
top of ACFS. The reason to do that is redundancy. ASM/ACFS with 
PostgreSQL behaves similarly to Veritas Cluster: when one Postgres 
cluster goes down, the other one is started. And you don't have to pay 
for it, unless you start using storage snapshots; that ACFS feature 
requires a commercial license.


--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com


Re: postgres large database backup

2022-12-01 Thread Vijaykumar Jain
On Thu, Dec 1, 2022, 7:11 PM Mladen Gogala  wrote:

> On 11/30/22 20:41, Michael Loftis wrote:
>
>
> ZFS snapshots don’t typically have much if  any performance impact versus
> not having a snapshot (and already being on ZFS) because it’s already doing
> COW style semantics.
>
> Hi Michael,
>
> I am not sure that such statement holds water. When a snapshot is taken,
> the amount of necessary I/O requests goes up dramatically. For every block
> that snapshot points to, it is necessary to read the block, write it to the
> spare location and then overwrite it, if you want to write to a block
> pointed by snapshot. That gives 3 I/O requests for every block written.
> NetApp is trying to optimize it by using 64MB blocks, but ZFS on Linux
> cannot do that, they have to use standard CoW because they don't have the
> benefit of their own hardware and OS. And the standard CoW is tripling the
> number of I/O requests for every write to the blocks pointed to by the
> snapshot, for every snapshot. CoW is a very expensive animal, with horns.
>

I am not an expert in this area, but we have ZFS for specific instances
which hold timeseries/event-log data, and we also need compression.
One day there was a need to snapshot a 35 TB ZFS pool and send it across
the network to a replica, because both disks in the mirror degraded around
the same time. I do not recall the ZFS snapshot taking anything
resource-intensive, and it was quick. I'll ask around for the actual time, though.

We have more than 500 nodes of this type with ZFS (each having 20 disks
in mirrors, each 8 TB) for event logs with compression, and ZFS works just
fine. This is a special setup where the data is assumed to be cold storage,
hence the compression, so it was designed for heavy writes and occasional
read queries, only for debugging.



Re: Finding free time period on non-continous tstzrange field values

2022-12-01 Thread Tom Lane
Amitabh Kant  writes:
> I tried the following query :

> SELECT tstzmultirange(tstzrange('2022-11-25 00:00:00', '2022-11-30
> 00:00:00', '[]')) -
>   range_agg(time_range) AS availability
> FROM test_time_range
> WHERE time_range && tstzrange('2022-11-25 00:00:00', '2022-11-30 00:00:00',
> '[]');

> but then I receive the following error. My guess is I need to cast the
> tstzrange output, but can't seem to find the correct cast.

> ERROR: function tstzmultirange(tstzrange) does not exist
> LINE 1: SELECT tstzmultirange(tstzrange('2022-11-25 00:00:00', '2022... ^
> HINT: No function matches the given name and argument types. You might need
> to add explicit type casts. SQL state: 42883 Character: 8

That function certainly should exist:

psql (14.6)
Type "help" for help.

postgres=# \df tstzmultirange
                                List of functions
   Schema   |      Name      | Result data type | Argument data types  | Type
------------+----------------+------------------+----------------------+------
 pg_catalog | tstzmultirange | tstzmultirange   |                      | func
 pg_catalog | tstzmultirange | tstzmultirange   | VARIADIC tstzrange[] | func
 pg_catalog | tstzmultirange | tstzmultirange   | tstzrange            | func
(3 rows)

My guess is that your server is not in fact PG14, but some
older version.

regards, tom lane
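A quick way to confirm Tom's diagnosis is to check the version of the server itself from the same session, since the psql client and the server can be at different versions. A sketch, assuming the table from the original post:

```sql
-- The banner psql prints shows the client version; the server may be older.
show server_version;

-- On a 14-or-later server, the original query runs as written:
select tstzmultirange(tstzrange('2022-11-25 00:00:00', '2022-11-30 00:00:00', '[]'))
       - range_agg(time_range) as availability
from test_time_range
where time_range && tstzrange('2022-11-25 00:00:00', '2022-11-30 00:00:00', '[]');
```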




Re: postgres large database backup

2022-12-01 Thread Vijaykumar Jain
>  I do not recall the ZFS snapshot taking anything resource-intensive, and
> it was quick. I'll ask around for the actual time.
>

Ok, just a small note: our ingestion pattern is write anywhere, read
globally. So we did stop ingestion while the snapshot was taken, as we could
afford to do it that way. Maybe the story is different when a snapshot is
taken on live systems which generate a lot of delta.



Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Martijn Tonies (Upscene Productions)

Hi all,

In PostgreSQL version 13, the source code for a stored procedure or function 
in SQL/plpgsql/etc. was stored in pg_proc.prosrc. This column would hold the 
original procedure or function body, verbatim.


Since version 14, the source code for a stored procedure or function written 
in plain (compound) SQL, a new feature, is no longer stored in 
pg_proc.prosrc; instead, there's an additional column, prosqlbody, which 
holds some kind of pre-parsed SQL that is of no use to the user.


I know you can use pg_get_functiondef to get a CREATE PROCEDURE/FUNCTION 
statement, but for compound-SQL-based routines in v14 and v15 this removes 
all formatting and comments from the original CREATE statement. For database 
development, including browsing an existing database and knowing what each 
routine does, this is quite terrible.



What is the reason for this?


Can you modify the server code to store the original body in proc.prosrc 
again? It would be very helpful.



With regards,

Martijn Tonies
Upscene Productions
https://www.upscene.com

Database Workbench - developer tool for Oracle, SQL Server, PostgreSQL, 
MySQL, InterBase, NexusDB and Firebird. 






Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread David G. Johnston
On Thu, Dec 1, 2022 at 7:59 AM Martijn Tonies (Upscene Productions) <
m.ton...@upscene.com> wrote:

>
> Since version 14, the source code for a stored procedure or function
> written
> in plain (compound) SQL, a new feature, is no longer stored in
> pg_proc.prosrc, instead, there’s an additional column prosqlbody which
> returns some kind of pre-parsed SQL which has no use for the user.
>


> For database
> development, including browsing an existing database and know what each
> routine can do, this is quite terrible.
>
>
Frankly, this is not all that compelling.  The version-controlled source
code should be readily referenced to find out the extended details of this
nature.  The function name, and a COMMENT ON comment, provide arguably
sufficient in-database knowledge for cursory browsing, redirecting the
reader to the source code for implementation details and history.

Can you modify the server code to store the original body in proc.prosrc
> again? It would be very helpful.
>
>
I seem to recall that this option had been discussed and rejected when this
feature went in.  The parsed body is a feature because its contents can be
updated due to, e.g., renaming of objects.  The text source of the original
command would be unable to be updated in the same way and so it is possible
the text and the parsed contents would diverge over time, which is a
situation we do not want to have.

David J.


Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Tom Lane
"David G. Johnston"  writes:
> On Thu, Dec 1, 2022 at 7:59 AM Martijn Tonies (Upscene Productions) <
> m.ton...@upscene.com> wrote:
>> Can you modify the server code to store the original body in proc.prosrc
>> again? It would be very helpful.

> I seem to recall that this option had been discussed and rejected when this
> feature went in.  The parsed body is a feature because its contents can be
> updated due to, e.g., renaming of objects.  The text source of the original
> command would be unable to be updated in the same way and so it is possible
> the text and the parsed contents would diverge over time, which is a
> situation we do not want to have.

Indeed.  We used to have a similar situation with respect to column
default expressions and CHECK constraint expressions.  Eventually we got
rid of the textual storage of both, because it couldn't be maintained
in a reasonable way.

I think the answer here is "don't use the new syntax if you want the
function body stored textually".  You can have one set of benefits,
or the other set, but not both at once.

regards, tom lane
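The two syntaxes Tom contrasts can be sketched side by side (the function names here are illustrative). Only the dollar-quoted form keeps the verbatim body in pg_proc.prosrc; the SQL-standard BEGIN ATOMIC form is parsed at creation time and kept in prosqlbody:

```sql
-- Classic dollar-quoted body: stored verbatim in pg_proc.prosrc.
create function add_quoted(a int, b int) returns int
language sql
as $$ select a + b  -- comments and formatting survive here $$;

-- SQL-standard body (new in v14): parsed immediately; comments and
-- formatting are not retained, but object references track renames.
create function add_atomic(a int, b int) returns int
language sql
begin atomic
  select a + b;
end;

select proname, prosrc, prosqlbody is not null as has_parsed_body
from pg_proc
where proname in ('add_quoted', 'add_atomic');
```

This is the trade-off described in the thread: the first form preserves text, the second preserves dependency-tracked structure.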




Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Dominique Devienne
On Thu, Dec 1, 2022 at 4:23 PM Tom Lane  wrote:
> "David G. Johnston"  writes:
> > On Thu, Dec 1, 2022 at 7:59 AM Martijn Tonies (Upscene Productions) <
> > m.ton...@upscene.com> wrote:
> >> Can you modify the server code to store the original body in proc.prosrc
> >> again? It would be very helpful.
>
> > I seem to recall that this option had been discussed and rejected when this
> > feature went in.  The parsed body is a feature because its contents can be
> > updated due to, e.g., renaming of objects.  The text source of the original
> > command would be unable to be updated in the same way and so it is possible
> > the text and the parsed contents would diverge over time, which is a
> > situation we do not want to have.
>
> Indeed.  We used to have a similar situation with respect to column
> default expressions and CHECK constraint expressions.  Eventually we got
> rid of the textual storage of both, because it couldn't be maintained
> in a reasonable way.
>
> I think the answer here is "don't use the new syntax if you want the
> function body stored textually".  You can have one set of benefits,
> or the other set, but not both at once.

FWIW, we have a custom ad-hoc schema introspection and diff'ing framework,
and the fact that the original SQL is not conserved as-is has also created
issues for us.

On Oracle, our SQL was preserved as-is, so it could be compared reliably. On
PostgreSQL, some names-in-SQL are rewritten, the text is reformatted,
etc.  So byte-perfect comparison is no longer possible, and we must rely on
heuristics (a.k.a. hacks...) to do a fuzzy compare (since we lack a real
PG-compatible SQL parser to do an infoset comparison instead, at the AST
level for example).

So it's not just a matter of browsing the schema. For us, it's a
*functional* issue. --DD




Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Martijn Tonies (Upscene Productions)
On Thu, Dec 1, 2022 at 7:59 AM Martijn Tonies (Upscene Productions) 
 wrote:

  Since version 14, the source code for a stored procedure or function written 
  in plain (compound) SQL, a new feature, is no longer stored in 
  pg_proc.prosrc, instead, there’s an additional column prosqlbody which 
  returns some kind of pre-parsed SQL which has no use for the user.


  For database 
  development, including browsing an existing database and know what each 
  routine can do, this is quite terrible.



Frankly, this is not all that compelling.  The version controlled source code 
should be readily referenced to find out the extended details of this nature.  
The function name, and a COMMENT ON comment, provide arguably sufficient 
in-database knowledge for cursory browsing redirecting the reader to the source 
code for implementation details and history.


Frankly, David, you're wrong. Although I agree that version-controlled source 
code is a very good way to maintain a proper version of your development 
database, while developing and changing source code it's simply not the 
easiest way to run scripts, compared to having a tool that allows more 
interactive development, code commenting (instead of "COMMENT ON"), SQL 
insight, browsing the current structure, and so on.




  Can you modify the server code to store the original body in proc.prosrc 
  again? It would be very helpful.



I seem to recall that this option had been discussed and rejected when this 
feature went in.  The parsed body is a feature because its contents can be 
updated due to, e.g., renaming of objects.  The text source of the original 
command would be unable to be updated in the same way and so it is possible the 
text and the parsed contents would diverge over time, which is a situation we 
do not want to have.


Ah yes, automatic renaming of database objects with their dependencies can be 
useful. Oracle invalidates routines that use the objects, marks those routines 
'invalid', and they need to be recompiled; that would fail when the source code 
references an object that no longer exists. This also means that you actually 
know which files you need to touch in your version-controlled source

With regards,

Martijn Tonies
Upscene Productions
https://www.upscene.com

Database Workbench - developer tool for Oracle, SQL Server, PostgreSQL, MySQL, 
InterBase, NexusDB and Firebird.

Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Laurenz Albe
On Thu, 2022-12-01 at 16:38 +0100, Dominique Devienne wrote:
> FWIW, we have a custom schema introspection and diff'ing ad-hoc framework,
> and the fact the original SQL is not conserved as-is has also created
> issues for us.
> 
> On Oracle, our SQL was preserved as-is, so could be compared reliably. While 
> on
> PostgreSQL, some names-in-SQL are rewritten, the text reformatted,
> etc...
>
> So it's not just a matter of browsing the schema. For us, it's a
> *functional* issue. --DD

This is arguable, but my opinion is that this is not a robust way to
do development.  You should use a schema versioning tool like Liquibase,
develop schema migration scripts and maintain the SQL code in a source
repository like other software.

At any rate, you won't be able to do it in your accustomed way in
PostgreSQL.

Yours,
Laurenz Albe




Re: postgres large database backup

2022-12-01 Thread Michael Loftis
On Thu, Dec 1, 2022 at 06:40 Mladen Gogala  wrote:

> On 11/30/22 20:41, Michael Loftis wrote:
>
>
> ZFS snapshots don’t typically have much if  any performance impact versus
> not having a snapshot (and already being on ZFS) because it’s already doing
> COW style semantics.
>
> Hi Michael,
>
> I am not sure that such statement holds water. When a snapshot is taken,
> the amount of necessary I/O requests goes up dramatically. For every block
> that snapshot points to, it is necessary to read the block, write it to the
> spare location and then overwrite it, if you want to write to a block
> pointed by snapshot. That gives 3 I/O requests for every block written.
> NetApp is trying to optimize it by using 64MB blocks, but ZFS on Linux
> cannot do that, they have to use standard CoW because they don't have the
> benefit of their own hardware and OS. And the standard CoW is tripling the
> number of I/O requests for every write to the blocks pointed to by the
> snapshot, for every snapshot. CoW is a very expensive animal, with horns.
>

Nope, ZFS does not behave that way.  Yup AFAIK all other snapshotting
filesystems or volume managers do.  One major architectural decision of ZFS
is the atomicity of writes.  Data at rest stays at rest.  Thus it does NOT
overwrite live data.  Snapshots do not change the write path/behavior in
ZFS. In ZFS writes are atomic: you’re always writing new data to free
space, while accounting for where the current record/volume block within a
file or volume actually lives on disk.  If a filesystem, volume manager, or
RAID system, is overwriting data and in the middle of that process and has
an issue that breaks that write, and that data is also live data, you can't
be atomic, you've now destroyed data (RAID write hole is one concept of
this).  That’s why adding a snapshot isn’t an additional cost for ZFS.  For
better or worse you're paying that snapshot cost already because it already
does not overwrite live data.  If there's no snapshot once the write is
committed and the refcount is zero for the old blocks, and it's safe (TXG
committed), those old blocks go back to the free pool to be potentially
used again.  There's a bunch of optimization to that and how it actually
happens, but at the end of the day, your writes do not overwrite your data
in ZFS, writes of data get directed at free space, and eventually the
on-disk structures get an atomic update that happens to say it now lives
here.  In the time between that all happening the ZIL (which may live on
its own special devices called SLOG -- this is why you often see the terms
ZIL/journal/SLOG/log vdev used interchangeably) is the durable bit, but
that's never normally read, it's only read back during recovery.   This is
also where the ZFS filesystem property of recordsize or volblocksize
(independently configurable on every filesystem/volume within a pool) is
important for performance.  If you clobber a whole record ZFS isn't going
to read anything extra when it gets around to committing, it knows the
whole record changed and can safely write a whole new record (every 5s it
goes about this TXG commit, so two 64k writes are still slower with a 128k
recordsize, but still shouldn't pull in that 128k record).  There are other
optimizations there, but at the end of the day, as long as the chosen
recordsize/volblocksize matches up to your writes, and your writes are
aligned to that within your file or volume, you'll not see an extra read of
the data as part of its normal flow of committing data.  Snapshots don't
change that.

Because of those architectural decisions, CoW behavior is part of ZFS'
existing performance penalty, so when you look at that older Oracle ASM vs
ZFS article, remember that that extra...what was it 0.5ms?... is accounting
for most, probably all of the penalties for a snapshot too if you want (or
need) it.  It's fundamental to how ZFS works and provides data
durability+atomicity.  This is what ZFS calls it's snapshots essentially
free, because you're already paying the performance for it.   What would
ASM do if it had a snapshot to manage?  Or a few dozen on the same data?
Obviously during the first writes to those snapshotted areas you'd see it.
Ongoing performance penalties with those snapshots? Maybe ASM has an
optimization that saves that benchmark a bunch of time if there is no
snapshot.  But once one exists it takes a different write path and adds a
performance penalty?  If a snapshot was taken in the middle of the
benchmark?  Yeah there's going to be some extra IOPS when you take the
snapshot to say "a snapshot now exists" for ZFS, but that doesn't
dramatically change its underlying write path after that point.

That atomicity and data durability also means that even if you lose the
SLOG devices (which hold the ZIL/journal, if you don't have SLOG/log vdev
then it's in-pool) you do not lose all the data.  Only stuff that somehow
remained uncommitted after the ZIL failed. Say you had some sort of hard
fault/cras

Re: postgres large database backup

2022-12-01 Thread Michael Loftis
On Thu, Dec 1, 2022 at 9:21 AM Michael Loftis  wrote:
>
>
>
> On Thu, Dec 1, 2022 at 06:40 Mladen Gogala  wrote:
>>
>> On 11/30/22 20:41, Michael Loftis wrote:
>>
>>
>> ZFS snapshots don’t typically have much if  any performance impact versus 
>> not having a snapshot (and already being on ZFS) because it’s already doing 
>> COW style semantics.
>>
>> Hi Michael,
>>
>> I am not sure that such statement holds water. When a snapshot is taken, the 
>> amount of necessary I/O requests goes up dramatically. For every block that 
>> snapshot points to, it is necessary to read the block, write it to the spare 
>> location and then overwrite it, if you want to write to a block pointed by 
>> snapshot. That gives 3 I/O requests for every block written. NetApp is 
>> trying to optimize it by using 64MB blocks, but ZFS on Linux cannot do that, 
>> they have to use standard CoW because they don't have the benefit of their 
>> own hardware and OS. And the standard CoW is tripling the number of I/O 
>> requests for every write to the blocks pointed to by the snapshot, for every 
>> snapshot. CoW is a very expensive animal, with horns.

And if you want to know more, ARS wrote a good ZFS 101 article -- the
write semantics I described in overview are on page three,
https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/3/


-- 

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler




Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Dominique Devienne
On Thu, Dec 1, 2022 at 5:10 PM Laurenz Albe  wrote:
> On Thu, 2022-12-01 at 16:38 +0100, Dominique Devienne wrote:
> > FWIW, we have a custom schema introspection and diff'ing ad-hoc framework,
>
> This is arguable, but my opinion is that this is not a robust way to
> do development.  You should use a schema versioning tool like Liquibase,
> develop schema migration scripts and maintain the SQL code in a source
> repository like other software.

We don't maintain SQL. We maintain a *Logical* model, and generate the
physical model from it.
FKs, NKs, enums, CHECK constraints, indexes, etc.: all those
"implementation details" are programmatically generated, and always
consistent, from a much higher-level and simpler model.
And you also get auto-upgrade most of the time, greatly increasing
development velocity too.

I would argue that NOT doing it this way is the non-robust way myself :)
We've been refactoring a large data model maintained manually, as you
advocate, and I can't tell you how many anomalies we've discovered and had
to fix using the more robust formalism of a high-level logical model
and (DDL) code generation.

I guess it's a DBA-versus-Developer point-of-view difference. --DD




Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Adrian Klaver

On 12/1/22 09:24, Dominique Devienne wrote:

On Thu, Dec 1, 2022 at 5:10 PM Laurenz Albe  wrote:

On Thu, 2022-12-01 at 16:38 +0100, Dominique Devienne wrote:

FWIW, we have a custom schema introspection and diff'ing ad-hoc framework,


This is arguable, but my opinion is that this is not a robust way to
do development.  You should use a schema versioning tool like Liquibase,
develop schema migration scripts and maintain the SQL code in a source
repository like other software.


We don't maintain SQL. We maintain a *Logical* model, and generate the
physical model from it.
FKs, NKs, Enums, CHECK constraints, indexes, etc.. all that
"implementation details"
is programmatically generated, and always consistent, from a much
higher-level and simpler model.
And you also get auto-upgrade most of the time, greatly increasing
development velocity too.

I would argue that NOT doing it this way, is the non-robust way myself :)
We've been refactoring a large data-model maintained manually like you advocate,
and I can't tell you how many anomalies we've discovered and had to fix,
using the more robust formalism of using a high-level logical model
and (DDL) code gen.

I guess is a DBA-versus-Developer point-of-view difference. --DD


What this points to is that there are multiple ways to handle this, many 
external to the server itself. My take is that the system catalogs are 
there for the proper operation of the server, and that is their task, 
first and foremost. If you can piggyback off that, then great, but with 
the knowledge that the information may change to meet the needs of the 
server, not external users.


--
Adrian Klaver
adrian.kla...@aklaver.com





how to secure pg_hba.conf

2022-12-01 Thread Rizwan Shaukat
Hi,

We have a requirement from security to secure the pg_hba.conf file, whether by 
encryption or password protection on the server, to protect IP visibility, 
because these servers are accessed by applications and they could amend the 
file as well. How can we achieve this, please?

Rizwan



Re: how to secure pg_hba.conf

2022-12-01 Thread Christophe Pettus



> On Dec 1, 2022, at 05:45, Rizwan Shaukat  wrote:
> We have a requirement from security to secure the pg_hba.conf file, whether 
> by encryption or password protection on the server, to protect IP visibility, 
> because these servers are accessed by applications and they could amend the 
> file as well. How can we achieve this, please?

The only practical way of securing pg_hba.conf is to secure access to the Linux 
user that PostgreSQL is running as (assuming you are on Linux, of course).  
That user will need to open and read the file, so there's no way of securing it 
against that user.



Re: how to secure pg_hba.conf

2022-12-01 Thread Ron

On 12/1/22 07:45, Rizwan Shaukat wrote:

Hi,

We have a requirement from security to secure the pg_hba.conf file, whether by 
encryption or password protection on the server, to protect IP visibility, 
because these servers are accessed by applications and they could amend the 
file as well. How can we achieve this, please?


pg_hba.conf should only be readable by user postgres, while your application 
should not be running as user postgres.  Thus, there should not be a problem.


--
Angular momentum makes the world go 'round.
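Alongside file permissions, PostgreSQL also exposes the file's contents through a catalog view that is itself access-restricted (to superusers by default), which reinforces the point that access control, not encryption, is the available mechanism. A sketch using the pg_hba_file_rules view (available since v10):

```sql
-- Inspecting the effective pg_hba.conf rules from SQL.  The view is
-- permission-restricted, so ordinary application roles cannot see the
-- addresses even though they can connect to the database.
select line_number, type, database, user_name, address, auth_method
from pg_hba_file_rules
order by line_number;
```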




Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Dominique Devienne
On Thu, Dec 1, 2022 at 6:41 PM Adrian Klaver  wrote:
> On 12/1/22 09:24, Dominique Devienne wrote:
> > I guess is a DBA-versus-Developer point-of-view difference. --DD
>
> What this points to is that there are multiple ways to handle this, many
> external to the server itself. My take is that the system catalogs are
> there for the proper operation of the server and that is their task,
> first and foremost. If you can piggyback of that then great, but with
> the knowledge that the information may change to meet the needs of the
> server not external users.

I see. Still, Oracle preserves SQL as-is. SQLite preserves SQL as-is.
Would be nice if PostgreSQL did too. That's all I'm saying.

Having functionality in libpq that allowed doing the same kind of SQL
normalization / rewriting done server-side would help. Then I could use
that to generate the DDL "just right" the first time.

For now, our current work-around is a two-step process.
We first generate the DDL at build time using "our syntax" and
instantiate the schema.
Then we introspect that and re-generate code with the "rewritten syntax".
Subsequent generation (it's dynamic, at runtime) uses the re-generated code
that matches the syntax rewrite. Thus the introspection and
diff'ing now match the in-memory DDL.
Still, that's a PITA. I of course don't contest that PostgreSQL
maintains what it needs.
But ALSO maintaining the original, at least until a rewrite becomes
necessary on renames,
would go A LONG WAY toward satisfying the OP and myself in our use cases. FWIW. --DD




Re: how to secure pg_hba.conf

2022-12-01 Thread David G. Johnston
On Thu, Dec 1, 2022 at 11:36 AM Rizwan Shaukat 
wrote:

> We have a requirement from security to secure the pg_hba.conf file, whether
> by encryption or password protection on the server, to protect IP visibility,
> because these servers are accessed by applications and they could amend the
> file as well. How can we achieve this, please?
>
>
You cannot with the present implementation of the system: pg_hba.conf is
read by the PostgreSQL process as a file.  I do not think the server is
prepared for that file to be some kind of program whose stdout supplies the
contents, which would let you arrange for that program to do whatever you'd
like.

That said, it isn't clear to me what you mean by "these server access by
application n thy can amend as well".  You are welcome to make the file
read-only except by root if amending it is a concern.  I don't understand
what exposure knowing IP addresses gives: I suppose the knowledge is
helpful, but security by obscurity isn't real security.

David J.


Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Christophe Pettus



> On Dec 1, 2022, at 11:05, Dominique Devienne  wrote:
> 
> I see. Still, Oracle preserves SQL as-is. SQLite preserve SQL as-is.
> Would be nice if PostgreSQL did too. That's all I'm saying.

Since this is a custom-built system, there is nothing keeping you from creating 
your own table in the database that stores the original text of the function.



Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Dominique Devienne
On Thu, Dec 1, 2022 at 8:09 PM Christophe Pettus  wrote:
> > On Dec 1, 2022, at 11:05, Dominique Devienne  wrote:
> > I see. Still, Oracle preserves SQL as-is. SQLite preserve SQL as-is.
> > Would be nice if PostgreSQL did too. That's all I'm saying.
>
> Since this is a custom-built system, there is nothing keeping you from 
> creating your own table in the database that stores the original text of the 
> function.

That's not the point. If a DBA updates one of our triggers or procs or
whatever else, the recorded info in a custom table won't be affected. We are
diff'ing the server-side schema against the expected in-memory model of the
physical schema.

Thus the dictionaries are the only source of truth we can trust for the
current state of the schema. And besides minor syntactic differences, and
some more troublesome object-name rewrites, this is exactly what we want.
The system itself needs to preserve the original DDL IMHO. --DD
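(For context, the catalogs can only hand back the system's deparsed rendering,
which is what any such diff has to work from; e.g., for a hypothetical
function name:)

```sql
-- pg_get_functiondef() deparses the stored form: names may be rewritten and
-- the text re-laid-out, so it will not match the submitted DDL byte-for-byte.
SELECT pg_get_functiondef(p.oid)
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
  AND p.proname = 'my_trigger_fn';  -- hypothetical name
```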




Re: how to secure pg_hba.conf

2022-12-01 Thread Tom Lane
"David G. Johnston"  writes:
> On Thu, Dec 1, 2022 at 11:36 AM Rizwan Shaukat 
> wrote:
>> we have a requirement from security to secure the pg_hba.conf file with
>> encryption or password protection on the server, to protect IP visibility,
>> because these servers are accessed by applications which can amend the
>> file as well. How can we achieve this?

> You cannot with the present implementation of the system - pg_hba.conf is
> read by the PostgreSQL process as a file.  I do not think the server is
> prepared for that file to be some kind of program whose stdout is the
> contents and you could arrange for that program to do whatever it is you'd
> like.

Even more to the point: if you are afraid of hostile actors being able
to access files inside your data directory, it seems to me that
pg_hba.conf is very far down the list of things to worry about.  What's
to stop the same actors from examining/modifying other configuration
files, or even the actual database contents?  If you don't think your
data directory is secure, you have problems that Postgres can't fix.

regards, tom lane




Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Tom Lane
Dominique Devienne  writes:
> On Thu, Dec 1, 2022 at 8:09 PM Christophe Pettus  wrote:
>> Since this is a custom-built system, there is nothing keeping you from 
>> creating your own table in the database that stores the original text of the 
>> function.

> That's not the point. If a DBA updates one of our triggers or proc or
> whatever else,
> the recorded info in a custom table won't be affected. We are diff'ing
> the server-side
> schema, against the expected in-memory model of the physical model.

> Thus the dictionaries are the only source of truth we can trust for
> the current state of the schema.
> And beside minor syntactic differences, and some more troublesome
> object-name rewrites, this is
> exactly what we want. The system itself needs to preserve the original
> DDL IMHO. --DD

Do you really fail to see the contradictions in this?  You want the
database to preserve the original DDL, but you also want it to update
in response to subsequent alterations.  You can't have both those
things.

regards, tom lane




Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Gavan Schneider
On 2 Dec 2022, at 6:51, Tom Lane wrote:

> Dominique Devienne  writes:
>> On Thu, Dec 1, 2022 at 8:09 PM Christophe Pettus  wrote:
>>> Since this is a custom-built system, there is nothing keeping you from 
>>> creating your own table in the database that stores the original text of 
>>> the function.
>
>> That's not the point. If a DBA updates one of our triggers or proc or
>> whatever else,
>> the recorded info in a custom table won't be affected. We are diff'ing
>> the server-side
>> schema, against the expected in-memory model of the physical model.
>
>> Thus the dictionaries are the only source of truth we can trust for
>> the current state of the schema.
>> And beside minor syntactic differences, and some more troublesome
>> object-name rewrites, this is
>> exactly what we want. The system itself needs to preserve the original
>> DDL IMHO. --DD
>
> Do you really fail to see the contradictions in this?  You want the
> database to preserve the original DDL, but you also want it to update
> in response to subsequent alterations.  You can't have both those
> things.
>
At the risk of stating the (over) obvious…

NEW: the Pg system has a parsed/tokenised version of functions and such like,
with the excellent feature that these are kept synchronised with any name
change of objects referenced.
OLD: an easily available text version of functions, etc., with comments and
layout (but not easily updatable in the event referenced objects get renamed)
— sysadmin usage.

The contradiction is obvious, but both needs can be justified…

NEXT(?): have a human-readable version of function (etc.) definitions
generated from the parsed version, with the addition of tokens within the
parsed version that allow programmers' comments to be reinstated. Leave the
layout style conventions to the system, and promise to keep this style
consistent and maintained as part of the feature. Putting this generated
(and updated as needed) text into pg_proc.prosrc would seem to be the least
change needed to allow existing usage to move with the new Pg versions (and
maybe help pgAdmin as well).
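
(The two storage behaviours under discussion can be seen side by side on
PostgreSQL 14+; pg_proc.prosqlbody holds a parsed tree only for the
SQL-standard syntax:)

```sql
-- Classic string-literal body: the text is kept verbatim in pg_proc.prosrc.
CREATE FUNCTION add_one_classic(i int) RETURNS int
LANGUAGE sql AS $$ SELECT i + 1 $$;

-- SQL-standard body (PG 14+): parsed at creation time into pg_proc.prosqlbody,
-- so referenced objects can later be renamed safely, but the original text
-- as typed is not preserved.
CREATE FUNCTION add_one_standard(i int) RETURNS int
LANGUAGE sql RETURN i + 1;

SELECT proname, prosqlbody IS NOT NULL AS body_is_parsed
FROM pg_proc
WHERE proname IN ('add_one_classic', 'add_one_standard');
```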

Regards

Gavan Schneider
——
Gavan Schneider, Sodwalls, NSW, Australia
Explanations exist; they have existed for all time; there is always a 
well-known solution to every human problem — neat, plausible, and wrong.
— H. L. Mencken, 1920




Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread raf
On Thu, Dec 01, 2022 at 04:38:57PM +0100, Dominique Devienne wrote:

> On Thu, Dec 1, 2022 at 4:23 PM Tom Lane  wrote:
> > "David G. Johnston"  writes:
> > > On Thu, Dec 1, 2022 at 7:59 AM Martijn Tonies (Upscene Productions) <
> > > m.ton...@upscene.com> wrote:
> > >> Can you modify the server code to store the original body in proc.prosrc
> > >> again? It would be very helpful.
> >
> > > I seem to recall that this option had been discussed and rejected when 
> > > this
> > > feature went in.  The parsed body is a feature because its contents can be
> > > updated due to, e.g., renaming of objects.  The text source of the 
> > > original
> > > command would be unable to be updated in the same way and so it is 
> > > possible
> > > the text and the parsed contents would diverge over time, which is a
> > > situation we do not want to have.
> >
> > Indeed.  We used to have a similar situation with respect to column
> > default expressions and CHECK constraint expressions.  Eventually we got
> > rid of the textual storage of both, because it couldn't be maintained
> > in a reasonable way.
> >
> > I think the answer here is "don't use the new syntax if you want the
> > function body stored textually".  You can have one set of benefits,
> > or the other set, but not both at once.
> 
> FWIW, we have a custom schema introspection and diff'ing ad-hoc framework,
> and the fact the original SQL is not conserved as-is has also created
> issues for us.
> 
> On Oracle, our SQL was preserved as-is, so it could be compared reliably.
> On PostgreSQL, some names in the SQL are rewritten, the text is reformatted,
> etc., so this byte-perfect comparison is no longer possible, and we must
> rely on heuristics (a.k.a. hacks...) to do a fuzzy compare (since we lack a
> real PG-compatible SQL parser to do an infoset comparison instead, at the
> AST level for example).
> 
> So it's not just a matter of browsing the schema. For us, it's a
> *functional* issue. --DD

Same here. Accessing the loaded stored procedure source
is how I audit the state of stored procedures in the
database against the code in the code repository.
Without the ability to make that comparison, there is
no way to audit the stored procedures, and the only way
to make sure that the database is up to date with the
stored procedures would be to reload every single
stored procedure. I might have to alter the procedure
loading program to store its own copy of the source code
in the database somewhere, and just hope that nobody
loads stored procedures using any other tool. Even so,
it will slow down loading old database backups and
bringing their schema and code up to date. But that's
probably OK compared with the time to load the backup
itself.
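
A sketch of the kind of workaround described above (all names hypothetical):
record a hash of each definition at load time, then audit by comparing
against what the catalogs currently deparse. The comparison is only as good
as the normalisation; spellings like 'i+1' versus 'i + 1' are exactly where
such heuristics fall down:

```sql
-- Crude normaliser: collapse whitespace and case before hashing. A heuristic,
-- not a parser -- semantically equal but differently spelled code still differs.
CREATE FUNCTION audit.norm_src(t text) RETURNS text
LANGUAGE sql IMMUTABLE AS $$
  SELECT lower(regexp_replace(t, '\s+', ' ', 'g'))
$$;

-- Audit query: flag functions whose current deparsed definition no longer
-- matches the hash recorded by the loader in a hypothetical audit table.
SELECT a.nspname, a.proname
FROM audit.loaded_source a
JOIN pg_namespace n ON n.nspname = a.nspname
JOIN pg_proc p ON p.pronamespace = n.oid AND p.proname = a.proname
WHERE md5(audit.norm_src(pg_get_functiondef(p.oid))) <> a.src_md5;
```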

cheers,
raf





Re: Stored procedure code no longer stored in v14 and v15, changed behaviour

2022-12-01 Thread Martijn Tonies (Upscene Productions)

>> On 12/1/22 09:24, Dominique Devienne wrote:
>>> I guess is a DBA-versus-Developer point-of-view difference. --DD
>>
>> What this points to is that there are multiple ways to handle this, many
>> external to the server itself. My take is that the system catalogs are
>> there for the proper operation of the server and that is their task,
>> first and foremost. If you can piggyback off that then great, but with
>> the knowledge that the information may change to meet the needs of the
>> server, not external users.
>
> I see. Still, Oracle preserves SQL as-is. SQLite preserves SQL as-is.
> Would be nice if PostgreSQL did too. That's all I'm saying.

So do InterBase, Firebird, SQL Server, MySQL (except for Views, strangely
enough), MariaDB, NexusDB, SQL Anywhere, and, frankly, all others I know of.

And this is used all the time by database developers.

And at least InterBase and Firebird -also- stored a 'parsed version' (in
binary).


With regards,

Martijn Tonies
Upscene Productions
https://www.upscene.com

Database Workbench - developer tool for Oracle, SQL Server, PostgreSQL, 
MySQL, InterBase, NexusDB and Firebird.