Re: archive command doesnt work

2024-02-08 Thread Alpaslan AKDAĞ
Hello Laurenz,

Thank you for your answer.

I have checked pg_is_in_recovery() and here are the results from the
primary and hot standby servers.

Primary (old standby ):
select pg_is_in_recovery();
 pg_is_in_recovery
---
 f
(1 row)

hot standby(old primary):
select pg_is_in_recovery();
 pg_is_in_recovery
---
 t
(1 row)
and there is also a standby.signal file on the standby server.
So it seems that there is nothing wrong.

recovery_min_apply_delay = '2d' is set on the warm standby servers.   Before
the switchover we had the same settings, but we did not have this problem.
How can I fix it?

Best regards,
Alpaslan

On Wed, Feb 7, 2024 at 9:06 PM Laurenz Albe 
wrote:

> On Wed, 2024-02-07 at 12:07 +0100, Alpaslan AKDAĞ wrote:
> > We have a primary, a hot standby and 2 warm standby servers with 2 days
> delay.
> >
> > After switchover since 01.02.2024 hot standby server does not archive
> wal files. I couldn't find the problem.
> >
> > related postgresql.conf lines:
> > archive_mode = on
> > archive_command = 'cp %p /archive/archived_wal/%f && cp %p
> /opt/postgres/backup/archived_wal/%f'
> >
> > and in log files there are lines like below but no error or failure
> lines:
> >
> > postgresql-2024-02-05_00.log:2024-02-05 06:25:50.272 CET [979081] LOG:  restartpoint complete: wrote 477 buffers (0.0%); 0 WAL file(s) added, 0 removed, 4 recycled; write=47.946 s, sync=0.005 s, total=47.964 s; sync files=85, longest=0.001 s, average=0.001 s; distance=65484 kB, estimate=413488 kB
> > postgresql-2024-02-05_00.log-2024-02-05 06:25:50.272 CET [979081] LOG:  recovery restart point at 25C/74083E58
> > postgresql-2024-02-05_00.log-2024-02-05 06:25:50.272 CET [979081] DETAIL:  Last completed transaction was at log time 2024-02-05 06:25:50.223799+01.
>
> That looks like the standby wasn't promoted and is still in recovery,
> so it won't generate WAL.
>
> Restartpoints are only written on servers in recovery mode.  Moreover,
> a promote request would generate a log entry.
>
> Another option would be that the promote request is further up in the log
> and the server is taking a long time to replay all the changes
> (if the "2 days delay" you mention is set in "recovery_min_apply_delay").
>
> What do you get for
>
>   SELECT pg_is_in_recovery();
>
> Yours,
> Laurenz Albe
>


Re: Error on create extension pgcrypto. Undefined symbol "EVP_bf_ecb"

2024-02-08 Thread Daniel Gustafsson
> On 7 Feb 2024, at 18:53, Marco Aurelio  wrote:
> 
> I changed the path and now the correct version of openssl appears in the 
> config.log, even so, after recompiling and reinstalling both postgresql and 
> the pgcrypto contrib, when trying to create the extension in a database I 
> continue to receive the same error, any further suggestions ?

The _bf functions relate to the Blowfish cipher, which was obsoleted in OpenSSL
3, so you need to load the legacy provider in your OpenSSL configuration in
order to use it.
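
For reference, activating the legacy provider usually amounts to something
like the following in openssl.cnf. This is a sketch using the conventional
section names; adjust it to your installation:

  # make the legacy provider (Blowfish et al.) available alongside the default
  openssl_conf = openssl_init

  [openssl_init]
  providers = provider_sect

  [provider_sect]
  default = default_sect
  legacy = legacy_sect

  [default_sect]
  activate = 1

  [legacy_sect]
  activate = 1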

--
Daniel Gustafsson





cookiecutter template for Postgres extensions

2024-02-08 Thread Florents Tselai

Hi all,

I built a cookiecutter template for Postgres extensions. 

This cookiecutter prompts for some basic extension metadata (e.g., name, 
Postgres version) and then automatically generates the necessary boilerplate 
files. 

These include:

* Standard extension files (Makefile, .control, .c, .h, .sql) pre-populated with
stubs.
* Regression Tests
* GitHub Actions for building and testing on multiple Postgres versions
* Docker support
* pgxn integration


To use it: 

pip install cookiecutter
cookiecutter gh:Florents-Tselai/cookiecutter-postgres-extension  


Demo video: https://youtu.be/zVxY3ZmE5bU

All the best,
Flo

Re: How to do faster DML

2024-02-08 Thread Greg Sabino Mullane
On Thu, Feb 8, 2024 at 12:12 AM Lok P  wrote:

> Rearranging the table columns by typlen desc, didn't give much storage
> space reduction.
>

It's not so much "biggest to smallest" as making sure you don't have any
gaps when you move from one data type to another. You may not have had any
"holes" originally, so it's hard to state anything without the data. The other
thing you can do is use smaller data types if you can get away with it:
smallint is better than int, int is better than bigint, etc.
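
If it helps, here is a rough catalog query for eyeballing per-column sizes,
assuming a hypothetical table named "mytable" (typlen = -1 means a
variable-length type):

  -- list column types and their fixed sizes, widest first
  SELECT a.attname, t.typname, t.typlen, t.typalign
  FROM pg_attribute a
  JOIN pg_type t ON t.oid = a.atttypid
  WHERE a.attrelid = 'mytable'::regclass
    AND a.attnum > 0
    AND NOT a.attisdropped
  ORDER BY t.typlen DESC;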

So it seems the compression does not apply for the rows inserted using
> "CTAS" or "INSERT AS SELECT". Does that mean it is only applicable for the
> row by row inserts but not batch inserts(which a bigger system normally
> will have)? I was not expecting this though, so it was disappointing.
>

TOAST compression doesn't care how the data arrived. It does have criteria
though as to how large something is before it gets toasted and/or
compressed. See:

https://www.crunchydata.com/blog/postgres-toast-the-greatest-thing-since-sliced-bread
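
If you are on Postgres 14 or later, you can spot-check the outcome yourself.
A sketch, assuming a hypothetical table "t" with a large text column
"payload":

  -- shows the compression method (pglz or lz4) per stored value;
  -- NULL means the value was stored uncompressed (e.g., it was too small)
  SELECT pg_column_compression(payload) AS method, count(*)
  FROM t
  GROUP BY 1;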

Cheers,
Greg


Re: Partitioning options

2024-02-08 Thread Greg Sabino Mullane
On Thu, Feb 8, 2024 at 12:42 AM sud  wrote:
...

> The key transaction table is going to have ~450 Million transactions per
> day and the data querying/filtering will always happen based on the
> "transaction date" column.
>
...

> Should we go for simple daily range partitioning on the transaction_date
> column?
>

This one gets my vote. That and some good indexes.
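
For the archives, a minimal sketch of what that could look like; the table
and column names are assumptions based on this thread, not the OP's actual
schema:

  -- parent table partitioned by the column every query filters on
  CREATE TABLE transactions (
      transaction_id   bigint,
      customer_id      bigint,
      transaction_date date NOT NULL,
      amount           numeric
  ) PARTITION BY RANGE (transaction_date);

  -- one partition per day
  CREATE TABLE transactions_2024_02_08 PARTITION OF transactions
      FOR VALUES FROM ('2024-02-08') TO ('2024-02-09');

  -- an example of a "good index", created on the parent so that
  -- every partition gets its own copy
  CREATE INDEX ON transactions (transaction_date, customer_id);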

Cheers,
Greg


Fwd: pgadmin not opening in concurrent sessions

2024-02-08 Thread Siraj G
Hello!

I installed PgAdmin4 8.2 on a Windows platform, for all users. The
issue we are having is that if any user launches it, nobody else can launch
it. E.g., userB logs in and launches PgAdmin, and it launches fine. Now if
userA or userB tries to launch it again, they get nothing: no event/response
from Windows or the tool.

Has anyone experienced this issue? If so, is there any workaround or fix
for this?

I tried to uninstall it and install version 7.8, but to no avail.
Regards
Siraj


Re: Fwd: pgadmin not opening in concurrent sessions

2024-02-08 Thread Adrian Klaver

On 2/8/24 07:27, Siraj G wrote:

Hello!

I installed PgAdmin4 8.2 on a Windows platform, for all users. The
issue we are having is that if any user launches it, nobody else can launch it.
E.g., userB logs in and launches PgAdmin, and it launches fine. Now if userA
or userB tries to launch it again, they get nothing: no event/response from
Windows or the tool.


Has anyone experienced this issue? If so, is there any workaround or fix 
for this?


The better place to ask this is:

https://www.postgresql.org/list/pgadmin-support/



I tried to uninstall it and install version 7.8, but to no avail.
Regards
Siraj


--
Adrian Klaver
adrian.kla...@aklaver.com







Re: Clarification regarding managing advisory locks in postgresql

2024-02-08 Thread Merlin Moncure
On Thu, Jan 25, 2024 at 4:44 AM Sasmit Utkarsh 
wrote:

> Okay Thanks. Also please help me understand the below scenarios
>
> From the above statement, I understand is (please correct if I'm wrong
> here), When we fork a client process, each process gets its own database
> connection or transaction context.
>
So far so good


> Therefore, locks acquired in one process (or transaction) do not directly
> affect locks in another process (or transaction).
>
Not following you here. By definition, a lock impacts other processes;
that's the entire purpose. They affect other processes in that two
processes cannot take a lock on the same thing at the same time.
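
To illustrate with a minimal sketch (the lock id 42 is an arbitrary
stand-in for your "id = X"):

  -- session A: acquires the session-level advisory lock on id 42
  SELECT pg_advisory_lock(42);

  -- session B: the same call now blocks until session A calls
  -- pg_advisory_unlock(42) or disconnects
  SELECT pg_advisory_lock(42);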


> Now, I'm faced with another situation where I'm using libpq in C as client
> programs and while calling some function it acquires pg_advisory_lock for
> the request  with some identifier in transaction A. This can be thought
> of as “lock the operation with id = X”  and then make some SQL
> requests(retrieve) from the database. During that if it forks into another
> process B,
>

Client-side code should not fork and preserve connections across the fork.
That amounts to multi-threaded access to a connection, and generally speaking
you should not have 2+ threads hitting the same connection returned from
libpq. This is undefined behavior, so I suspect the questions below this
are moot.

merlin

>


Re: Partitioning options

2024-02-08 Thread veem v
On Thu, 8 Feb 2024 at 20:08, Greg Sabino Mullane  wrote:

> On Thu, Feb 8, 2024 at 12:42 AM sud  wrote:
> ...
>
>> The key transaction table is going to have ~450 Million transactions per
>> day and the data querying/filtering will always happen based on the
>> "transaction date" column.
>>
> ...
>
>> Should we go for simple daily range partitioning on the transaction_date
>> column?
>>
>
> This one gets my vote. That and some good indexes.
>
> Cheers,
> Greg
>
>
Hello Greg,

Out of curiosity: as the OP mentioned that there will be joins and also
filters on the customer_id column, why don't you think that subpartitioning
by customer_id would be a good option? I understand list subpartitioning may
not be an option, considering that new customer_ids get added slowly over
time (and a default list partition may not be allowed), and the OP also
mentioned a skewed distribution of data in the customer_id column. However,
what is the problem if the OP opts for hash subpartitioning on customer_id
in this situation?

Is it because the number of partitions will be higher? I.e.:

If you go with simple range partitioning, for 5 months you will have ~150
daily range partitions, and each index adds another ~150 partitions on top
of that: e.g., with 10 indexes, the total will be ~150 table partitions +
(10*150) index partitions = 1,650 partitions.

If the OP goes for range-hash, and hash mostly has to be 2^N (say 8 hash
sub-partitions), then the total will be (8*150) table partitions +
(8*150*10) index partitions = ~13,200 partitions.

Though there are no theoretical limits to the number of partitions in
Postgres, serious issues have been reported in the past with high partition
counts. One such report is below. Is this the reason?

https://www.kylehailey.com/post/postgres-partition-pains-lockmanager-waits

Regards
Veem


Re: archive command doesnt work

2024-02-08 Thread Laurenz Albe
On Thu, 2024-02-08 at 09:23 +0100, Alpaslan AKDAĞ wrote:
> Thank you for your answer.
> 
> I have checked the pg_is_in_recovery() and here are the results
> from primary and hot stand by server.
> 
> Primary (old standby ):
> select pg_is_in_recovery();
>  pg_is_in_recovery 
> ---
>  f
> (1 row)
> 
> hot standby(old primary):
> select pg_is_in_recovery();
>  pg_is_in_recovery 
> ---
>  t
> (1 row)
> and there is also standby.signal file in standby server.
> So it seems that there is nothing wrong.
> 
> recovery_min_apply_delay = '2d' are set in warm standby servers.
> Before the switchover we had the same settings but we did not have this 
> problem.

It is becoming increasingly obvious that you never actually performed
a switchover: it seems that you didn't promote the standby.

Either use

  pg_ctl promote -D /path/to/datadir

on the command line or

  SELECT pg_promote();

in SQL.

Yours,
Laurenz Albe




Re: archive command doesnt work

2024-02-08 Thread Mateusz Henicz
czw., 8 lut 2024 o 21:10 Laurenz Albe  napisał(a):

> On Thu, 2024-02-08 at 09:23 +0100, Alpaslan AKDAĞ wrote:
> > Thank you for your answer.
> >
> > I have checked the pg_is_in_recovery() and here are the results
> > from primary and hot stand by server.
> >
> > Primary (old standby ):
> > select pg_is_in_recovery();
> >  pg_is_in_recovery
> > ---
> >  f
> > (1 row)
> >
> > hot standby(old primary):
> > select pg_is_in_recovery();
> >  pg_is_in_recovery
> > ---
> >  t
> > (1 row)
> > and there is also standby.signal file in standby server.
> > So it seems that there is nothing wrong.
> >
> > recovery_min_apply_delay = '2d' are set in warm standby servers.
> > Before the switchover we had the same settings but we did not have this
> problem.
>
> It is becoming increasingly obvious that you never actually performed
> a switchover: it seems that you didn't promote the standby.
>
> Either use
>
>   pg_ctl promote -D /path/to/datadir
>
> on the command line or
>
>   SELECT pg_promote();
>
> in SQL.
>
> Yours,
> Laurenz Albe
>
>
>
From what you sent:


*"After switchover since 01.02.2024 hot standby server does not archive wal
files. I couldn't find the problem."*
A hot standby normally does not archive WAL files. Only the primary server
does that, unless you have archive_mode set to "always"; your configuration
shows that you have it set to "on" only.


*"related postgresql.conf lines:*
*archive_mode = on "*
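
If you do want the standby to archive as well, the change would be the
following sketch (note that changing archive_mode requires a server
restart):

  # postgresql.conf on the standby
  archive_mode = always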

Cheers,
Mateusz


Re: archive command doesnt work

2024-02-08 Thread Alpaslan AKDAĞ
Now I am confused. On the primary, pg_is_in_recovery() should be false, and
on the standby node it should be true. Am I wrong?

Thanks and regards
Alpaslan

On Thu, Feb 8, 2024 at 9:10 PM Laurenz Albe 
wrote:

> On Thu, 2024-02-08 at 09:23 +0100, Alpaslan AKDAĞ wrote:
> > Thank you for your answer.
> >
> > I have checked the pg_is_in_recovery() and here are the results
> > from primary and hot stand by server.
> >
> > Primary (old standby ):
> > select pg_is_in_recovery();
> >  pg_is_in_recovery
> > ---
> >  f
> > (1 row)
> >
> > hot standby(old primary):
> > select pg_is_in_recovery();
> >  pg_is_in_recovery
> > ---
> >  t
> > (1 row)
> > and there is also standby.signal file in standby server.
> > So it seems that there is nothing wrong.
> >
> > recovery_min_apply_delay = '2d' are set in warm standby servers.
> > Before the switchover we had the same settings but we did not have this
> problem.
>
> It is becoming increasingly obvious that you never actually performed
> a switchover: it seems that you didn't promote the standby.
>
> Either use
>
>   pg_ctl promote -D /path/to/datadir
>
> on the command line or
>
>   SELECT pg_promote();
>
> in SQL.
>
> Yours,
> Laurenz Albe
>


Re: Partitioning options

2024-02-08 Thread Jim Nasby

On 2/8/24 1:43 PM, veem v wrote:


On Thu, 8 Feb 2024 at 20:08, Greg Sabino Mullane wrote:




Should we go for simple daily range partitioning on the
transaction_date column?


This one gets my vote. That and some good indexes.





Hello Greg,

Out of curiosity: as the OP mentioned that there will be joins and also
filters on the customer_id column, why don't you think that subpartitioning
by customer_id would be a good option? I understand list subpartitioning may
not be an option, considering that new customer_ids get added slowly over
time (and a default list partition may not be allowed), and the OP also
mentioned a skewed distribution of data in the customer_id column. However,
what is the problem if the OP opts for hash subpartitioning on customer_id
in this situation?

Is it because the number of partitions will be higher? I.e.:

If you go with simple range partitioning, for 5 months you will have ~150
daily range partitions, and each index adds another ~150 partitions on top
of that: e.g., with 10 indexes, the total will be ~150 table partitions +
(10*150) index partitions = 1,650 partitions.

If the OP goes for range-hash, and hash mostly has to be 2^N (say 8 hash
sub-partitions), then the total will be (8*150) table partitions +
(8*150*10) index partitions = ~13,200 partitions.

Though there are no theoretical limits to the number of partitions in
Postgres, serious issues have been reported in the past with high partition
counts. One such report is below. Is this the reason?

https://www.kylehailey.com/post/postgres-partition-pains-lockmanager-waits



The issue with partitioning by customer_id is that it won't do much (if
anything) to improve data locality. When partitioning by date, you can
at least benefit from partition elimination *IF* your most frequent
queries limit the number of days that the query will look at. Per the
OP, all queries will include the transaction date. Note that this does NOT
actually mean the number of days/partitions will be limited (i.e., WHERE
date > today - 150 will hit all the partitions), but if we assume that
the majority of queries limit themselves to the past few days, then
partitioning by date should greatly increase data locality.
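
A hypothetical illustration, reusing the daily-partitioned "transactions"
table sketched upthread (the dates are placeholders):

  -- bounded filter: the planner can prune down to a few partitions
  EXPLAIN SELECT count(*) FROM transactions
  WHERE transaction_date >= DATE '2024-02-05';

  -- open-ended filter over the whole window: still filters on the date,
  -- but touches roughly all ~150 partitions
  EXPLAIN SELECT count(*) FROM transactions
  WHERE transaction_date > DATE '2023-09-11';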


Also, when it comes to customer partitioning... really what you probably 
want there isn't partitioning but sharding.

--
Jim Nasby, Data Architect, Austin TX





Re: archive command doesnt work

2024-02-08 Thread Alpaslan AKDAĞ
Hello Mateusz

Thank you for your answer.

If that is the case, then there is no problem. I tried to find an
explanation of whether a hot standby archives WAL files or not, but I
couldn't find one. Is this information from the documentation?

Best Regards,
Alpaslan

On Thu, Feb 8, 2024 at 9:24 PM Mateusz Henicz 
wrote:

>
> czw., 8 lut 2024 o 21:10 Laurenz Albe 
> napisał(a):
>
>> On Thu, 2024-02-08 at 09:23 +0100, Alpaslan AKDAĞ wrote:
>> > Thank you for your answer.
>> >
>> > I have checked the pg_is_in_recovery() and here are the results
>> > from primary and hot stand by server.
>> >
>> > Primary (old standby ):
>> > select pg_is_in_recovery();
>> >  pg_is_in_recovery
>> > ---
>> >  f
>> > (1 row)
>> >
>> > hot standby(old primary):
>> > select pg_is_in_recovery();
>> >  pg_is_in_recovery
>> > ---
>> >  t
>> > (1 row)
>> > and there is also standby.signal file in standby server.
>> > So it seems that there is nothing wrong.
>> >
>> > recovery_min_apply_delay = '2d' are set in warm standby servers.
>> > Before the switchover we had the same settings but we did not have this
>> problem.
>>
>> It is becoming increasingly obvious that you never actually performed
>> a switchover: it seems that you didn't promote the standby.
>>
>> Either use
>>
>>   pg_ctl promote -D /path/to/datadir
>>
>> on the command line or
>>
>>   SELECT pg_promote();
>>
>> in SQL.
>>
>> Yours,
>> Laurenz Albe
>>
>>
>>
> From what you sent:
>
>
> *"After switchover since 01.02.2024 hot standby server does not archive
> wal files. I couldn't find the problem."*
> Hot standby normally does not archive WAL files. Only the primary server
> does that unless you have archive_mode set to "always" but your
> configuration shows that you have it "on" only.
>
>
> *"related postgresql.conf lines:*
> *archive_mode = on "*
>
> Cheers,
> Mateusz
>


Re: archive command doesnt work

2024-02-08 Thread Mateusz Henicz
Hey,
Yes, it is.
https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-MODE

czw., 8 lut 2024 o 21:35 Alpaslan AKDAĞ 
napisał(a):

> Hello Mateusz
>
> Thank you for your answer.
>
> If it is like that, there is not any problem. I tried to find an
> explanation that hot standby archive wal files or not but I couldn't.
> Is this information  from the Documentation?
>
> Best Regards,
> Alpaslan
>
> On Thu, Feb 8, 2024 at 9:24 PM Mateusz Henicz 
> wrote:
>
>>
>> czw., 8 lut 2024 o 21:10 Laurenz Albe 
>> napisał(a):
>>
>>> On Thu, 2024-02-08 at 09:23 +0100, Alpaslan AKDAĞ wrote:
>>> > Thank you for your answer.
>>> >
>>> > I have checked the pg_is_in_recovery() and here are the results
>>> > from primary and hot stand by server.
>>> >
>>> > Primary (old standby ):
>>> > select pg_is_in_recovery();
>>> >  pg_is_in_recovery
>>> > ---
>>> >  f
>>> > (1 row)
>>> >
>>> > hot standby(old primary):
>>> > select pg_is_in_recovery();
>>> >  pg_is_in_recovery
>>> > ---
>>> >  t
>>> > (1 row)
>>> > and there is also standby.signal file in standby server.
>>> > So it seems that there is nothing wrong.
>>> >
>>> > recovery_min_apply_delay = '2d' are set in warm standby servers.
>>> > Before the switchover we had the same settings but we did not have
>>> this problem.
>>>
>>> It is becoming increasingly obvious that you never actually performed
>>> a switchover: it seems that you didn't promote the standby.
>>>
>>> Either use
>>>
>>>   pg_ctl promote -D /path/to/datadir
>>>
>>> on the command line or
>>>
>>>   SELECT pg_promote();
>>>
>>> in SQL.
>>>
>>> Yours,
>>> Laurenz Albe
>>>
>>>
>>>
>> From what you sent:
>>
>>
>> *"After switchover since 01.02.2024 hot standby server does not archive
>> wal files. I couldn't find the problem."*
>> Hot standby normally does not archive WAL files. Only the primary server
>> does that unless you have archive_mode set to "always" but your
>> configuration shows that you have it "on" only.
>>
>>
>> *"related postgresql.conf lines:*
>> *archive_mode = on "*
>>
>> Cheers,
>> Mateusz
>>
>


Re: archive command doesnt work

2024-02-08 Thread Alpaslan AKDAĞ
Thank you very much.

I got my answer and also read it in the documentation. The primary and
standby both seem fine.

Best Regards
Alpaslan


On Thu, Feb 8, 2024 at 9:39 PM Mateusz Henicz 
wrote:

> Hey,
> Yes, it is.
>
> https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-MODE
>
> czw., 8 lut 2024 o 21:35 Alpaslan AKDAĞ 
> napisał(a):
>
>> Hello Mateusz
>>
>> Thank you for your answer.
>>
>> If it is like that, there is not any problem. I tried to find an
>> explanation that hot standby archive wal files or not but I couldn't.
>> Is this information  from the Documentation?
>>
>> Best Regards,
>> Alpaslan
>>
>> On Thu, Feb 8, 2024 at 9:24 PM Mateusz Henicz 
>> wrote:
>>
>>>
>>> czw., 8 lut 2024 o 21:10 Laurenz Albe 
>>> napisał(a):
>>>
 On Thu, 2024-02-08 at 09:23 +0100, Alpaslan AKDAĞ wrote:
 > Thank you for your answer.
 >
 > I have checked the pg_is_in_recovery() and here are the results
 > from primary and hot stand by server.
 >
 > Primary (old standby ):
 > select pg_is_in_recovery();
 >  pg_is_in_recovery
 > ---
 >  f
 > (1 row)
 >
 > hot standby(old primary):
 > select pg_is_in_recovery();
 >  pg_is_in_recovery
 > ---
 >  t
 > (1 row)
 > and there is also standby.signal file in standby server.
 > So it seems that there is nothing wrong.
 >
 > recovery_min_apply_delay = '2d' are set in warm standby servers.
 > Before the switchover we had the same settings but we did not have
 this problem.

 It is becoming increasingly obvious that you never actually performed
 a switchover: it seems that you didn't promote the standby.

 Either use

   pg_ctl promote -D /path/to/datadir

 on the command line or

   SELECT pg_promote();

 in SQL.

 Yours,
 Laurenz Albe



>>> From what you sent:
>>>
>>>
>>> *"After switchover since 01.02.2024 hot standby server does not archive
>>> wal files. I couldn't find the problem."*
>>> Hot standby normally does not archive WAL files. Only the primary server
>>> does that unless you have archive_mode set to "always" but your
>>> configuration shows that you have it "on" only.
>>>
>>>
>>> *"related postgresql.conf lines:*
>>> *archive_mode = on "*
>>>
>>> Cheers,
>>> Mateusz
>>>
>>


Re: Partitioning options

2024-02-08 Thread Greg Sabino Mullane
>
> Out of curiosity, As OP mentioned that there will be Joins and also
> filters on column Customer_id column , so why don't you think that
> subpartition by customer_id will be a good option? I understand List
> subpartition may not be an option considering the new customer_ids gets
> added slowly in the future(and default list may not be allowed) and also OP
> mentioned, there is skewed distribution of data for customer_id column.
> However what is the problem if OP will opt for HASH subpartition on
> customer_id in this situation?
>

It doesn't really gain you much: you would be hashing it, the customers are
unevenly distributed, and the OP talked about filtering on the customer_id
column. A hash partition would just be a lot more work and complexity for us
humans and for Postgres. Partitioning for the sake of partitioning is not a
good thing. Yes, smaller tables are better, but they have to be smaller,
targeted tables.

sud wrote:

130GB of storage space as we verified using the "pg_relation_size"
> function, for a sample data set.


You might also want to closely examine your schema. At that scale, every
byte saved per row can add up.

Cheers,
Greg


Re: Clarification regarding managing advisory locks in postgresql

2024-02-08 Thread Greg Sabino Mullane
On Thu, Jan 25, 2024 at 5:44 AM Sasmit Utkarsh 
wrote:

> Therefore, locks acquired in one process (or transaction) do not directly
> affect locks in another process (or transaction).
>

Um... maybe if you describe the exact problem you are trying to solve with
advisory locks, we can find the best solution. It may not even involve
advisory locks.

Cheers,
Greg


Re: Partitioning options

2024-02-08 Thread Justin
Hi Sud,

I would not look at HASH partitioning, as it is very expensive to add or
remove partitions.

I would probably look at nested partitioning by customer ID, using a range
or list of IDs, then by transaction date. It is easy to add partitions and
balance the partition segments that way; see the sketch below.
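
A rough sketch of that layout; all names and boundaries here are made up
for illustration:

  -- level 1: range on customer ID
  CREATE TABLE transactions (
      customer_id      bigint NOT NULL,
      transaction_date date   NOT NULL,
      amount           numeric
  ) PARTITION BY RANGE (customer_id);

  -- level 2: each customer segment is itself partitioned by date
  CREATE TABLE transactions_cust_0 PARTITION OF transactions
      FOR VALUES FROM (0) TO (10000)
      PARTITION BY RANGE (transaction_date);

  CREATE TABLE transactions_cust_0_2024_02 PARTITION OF transactions_cust_0
      FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');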

Keep in mind that SELECT queries run against the partitioned table must use
the partitioning KEY in the WHERE clause, or performance will suffer.

I suggest doing a query analysis before deploying partitioning, to confirm
that the queries' WHERE clauses match the planned partition rule. At least
80% of the executed queries should match the partition rule; if not, don't
deploy partitioning, or change all the queries in the application to match
the partition rule.


On Thu, Feb 8, 2024 at 3:51 PM Greg Sabino Mullane 
wrote:

> Out of curiosity, As OP mentioned that there will be Joins and also
>> filters on column Customer_id column , so why don't you think that
>> subpartition by customer_id will be a good option? I understand List
>> subpartition may not be an option considering the new customer_ids gets
>> added slowly in the future(and default list may not be allowed) and also OP
>> mentioned, there is skewed distribution of data for customer_id column.
>> However what is the problem if OP will opt for HASH subpartition on
>> customer_id in this situation?
>>
>
> It doesn't really gain you much, given you would be hashing it, the
> customers are unevenly distributed, and OP talked about filtering on the
> customer_id column. A hash partition would just be a lot more work and
> complexity for us humans and for Postgres. Partitioning for the sake of
> partitioning is not a good thing. Yes, smaller tables are better, but they
> have to be smaller targeted tables.
>
> sud wrote:
>
> 130GB of storage space as we verified using the "pg_relation_size"
>> function, for a sample data set.
>
>
> You might also want to closely examine your schema. At that scale, every
> byte saved per row can add up.
>
> Cheers,
> Greg
>
>


Re: archive command doesnt work

2024-02-08 Thread Laurenz Albe
On Thu, 2024-02-08 at 21:28 +0100, Alpaslan AKDAĞ wrote:
> Now I am confused. In Primary pg_is_in_recovery() should be false and in
> standby node should be true. Am I wrong?

Right.  I guess I got confused about which is your primary and which is your
standby.  That's normal in the context of a switchover :^/

Yours,
Laurenz Albe




Migrate schemas

2024-02-08 Thread Lorusso Domenico
Hello guys,
I have 4 schemas with cross references (e.g., a function refers to a
rowtype of a table in another schema, or a table invokes a function).

When I back up the schemas with pgAdmin, the resulting script doesn't follow
the correct order needed to create the objects.

Is there a way to tell PostgreSQL to check the references at the end of the
script? Or a way to arrange the DDL in the right order?

Thanks


-- 
Domenico L.


Re: Migrate schemas

2024-02-08 Thread Adrian Klaver

On 2/8/24 16:18, Lorusso Domenico wrote:

Hello guys,
I have 4 schemas with cross references (e.g., a function refers to a
rowtype of a table in another schema, or a table invokes a function).


When I back up the schemas with pgAdmin, the resulting script doesn't
follow the correct order needed to create the objects.


Is there a way to tell PostgreSQL to check the references at the end of
the script? Or a way to arrange the DDL in the right order?


Order is not guaranteed:

https://www.postgresql.org/docs/current/app-pgdump.html

-n pattern
--schema=pattern

Dump only schemas matching pattern; this selects both the schema 
itself, and all its contained objects. When this option is not 
specified, all non-system schemas in the target database will be dumped. 
Multiple schemas can be selected by writing multiple -n switches. The 
pattern parameter is interpreted as a pattern according to the same 
rules used by psql's \d commands (see Patterns), so multiple schemas can 
also be selected by writing wildcard characters in the pattern. When 
using wildcards, be careful to quote the pattern if needed to prevent 
the shell from expanding the wildcards; see Examples below.

Note

When -n is specified, pg_dump makes no attempt to dump any other 
database objects that the selected schema(s) might depend upon. 
Therefore, there is no guarantee that the results of a specific-schema 
dump can be successfully restored by themselves into a clean database.

Note

Non-schema objects such as large objects are not dumped when -n is 
specified. You can add large objects back to the dump with the 
--large-objects switch.



If you think you know the order then see below. Though if the dumps
above were done by schema into separate files, things get more complicated.


https://www.postgresql.org/docs/current/app-pgrestore.html

-l
--list

List the table of contents of the archive. The output of this 
operation can be used as input to the -L option. Note that if filtering 
switches such as -n or -t are used with -l, they will restrict the items 
listed.

-L list-file
--use-list=list-file

Restore only those archive elements that are listed in list-file, 
and restore them in the order they appear in the file. Note that if 
filtering switches such as -n or -t are used with -L, they will further 
restrict the items restored.


list-file is normally created by editing the output of a previous 
-l operation. Lines can be moved or removed, and can also be commented 
out by placing a semicolon (;) at the start of the line. See below for 
examples.



Best bet is to dump the entire database.
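
For example, a sketch of the -l/-L workflow described above, assuming a
custom-format dump and placeholder database names:

  pg_dump -Fc -f mydb.dump mydb
  pg_restore -l mydb.dump > toc.list
  # edit toc.list: reorder entries, or comment them out with a leading ';'
  pg_restore -L toc.list -d targetdb mydb.dump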



Thanks


--
Domenico L.



--
Adrian Klaver
adrian.kla...@aklaver.com





Multiple connections over VPN password fail error

2024-02-08 Thread Sanjay Minni
Hi All

While trying to make multiple connections with different role names to a
single database over VPN, I faced a password error when trying to connect a
second user.
It seems I had to change this line in pg_hba.conf, and then it worked:

   `# IPv4 external connections thru VPN
#TYPE   DATABASE  USER   ADDRESS  METHOD
host   all   all  trust `<=(from the earlier
scram-sha-256)

Is this the way to do it, and is it correct from a security point of view?

regards
Sanjay Minni


Re: Multiple connections over VPN password fail error

2024-02-08 Thread Daniel Gustafsson
> On 9 Feb 2024, at 08:41, Sanjay Minni  wrote:

> while trying to make multiple connects with different role names to a single 
> database over VPN i faced a password error issue when trying to connect a 
> send user
> It seems I had to change this line in pg_hba.conf and it worked:
> 
>`# IPv4 external connections thru VPN
> #TYPE   DATABASE  USER   ADDRESS  METHOD 
> host   all   all  trust `<=(from the earlier 
> scram-sha-256)
> 
> is this the way and is this correct from a security point of view ?

While correctness and security always need to be evaluated from the specific
needs of an installation, the odds are pretty good that "No" is the correct
answer here.  To quote the documentation on the "trust" setting:

"Allow the connection unconditionally.  This method allows anyone that
can connect to the PostgreSQL database server to login as any
PostgreSQL user they wish, without the need for a password or any other
authentication."

I would recommend immediately reverting back to the scram-sha-256 setting and
figuring out why you were unable to login.
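
Something along these lines keeps password authentication in place; this is
a sketch, with 10.8.0.0/24 as a placeholder for your actual VPN subnet:

  # TYPE  DATABASE  USER  ADDRESS       METHOD
  host    all       all   10.8.0.0/24   scram-sha-256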

--
Daniel Gustafsson