Re: Valid until

2024-05-14 Thread Muhammad Imtiaz
Hi,
What is the outcome of this query for the user in question? Does the
rolvaliduntil column contain a value for that role?
 SELECT rolname, rolvaliduntil FROM pg_authid WHERE rolname = '<role_name>';
Regards,
M.Imtiaz

On Tue, 14 May 2024 at 20:45, Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 5/14/24 00:57, Rama Krishnan wrote:
> >> I recently set a user's password validity to "2024-05-13", but despite
> >> this, the user is still able to connect to the database. Is this a bug,
> >> or is the "valid until" parameter just for identification purposes,
> >> indicating when the password will expire?
>
> >  The VALID UNTIL clause sets a date and time after which the role's
> > password is no longer valid. If this clause is omitted the password will
> > be valid for all time.
>
> The important point there is that the *password* is not usable after
> the specified date.  If the user logs in via some non-password-based
> authentication method, that's fine (and it's on the infrastructure
> of that auth method to enforce whatever restrictions it thinks are
> appropriate).
>
> regards, tom lane
>
>
>
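
A minimal illustration of the distinction above, assuming a role named
app_user (the role name is only an example):

  -- expire the password for app_user
  ALTER ROLE app_user VALID UNTIL '2024-05-13';

  -- After that date, password-based logins (md5 or scram-sha-256 entries in
  -- pg_hba.conf) are refused for app_user, while pg_hba.conf entries using
  -- trust, peer or cert authentication still allow the role to connect.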


Re: pg_stat_io clarifications: background worker, writes and reads

2024-05-14 Thread Muhammad Imtiaz
Hi,

In PostgreSQL, the pg_stat_io view provides detailed statistics on I/O
operations. Background processes perform maintenance tasks and other
operations essential to the functioning of the PostgreSQL database.
They include processes such as:

1. Autovacuum Workers
2. WAL Writer
3. Background Writer
4. Logical Replication Workers
5. Custom Background Workers

The pg_stat_io view records the statistics for I/O operations performed by
these background workers.
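
For example, this is a quick way to look at just those rows (a sketch; the
column list is trimmed for readability):

  SELECT backend_type, object, context,
         reads, writes, extends, hits, evictions
  FROM pg_stat_io
  WHERE backend_type = 'background worker';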

Regards,
Imtiaz


On Wed, 15 May 2024, 01:26 Dimitrios Apostolou,  wrote:

> Hello list,
>
> what is the "background worker" in the pg_stat_io statistics view? I'm
> reading the documentation but can't figure this one out knowing that it is
> not autovacuum or bgwriter. And I'm not aware of any extension I might
> have with registered background worker.
>
> Additionally, how can it be evictions > writes? I would expect every
> eviction to cause a write.
>
> Finally about "hits", I understand they are reads found in shared_buffers,
> so they never registered into the "reads" counter. So is "reads" in
> pg_stat_io the equivalent to misses, i.e. the opposite of "hits", the read
> attempts not found in the shared_buffers, that needed to be fetched from
> the disk (or OS buffercache)?
>
> backend_type    | background worker
> object          | relation
> context         | normal
> reads           | 5139575
> read_time       | 2196288.011
> writes          | 63277
> write_time      | 1766.94
> writebacks      | 0
> writeback_time  | 0
> extends         | 0
> extend_time     | 0
> op_bytes        | 8192
> hits            | 876913705
> evictions       | 5139653
> reuses          |
> fsyncs          | 0
> fsync_time      | 0
> stats_reset     | 2024-04-08 08:50:02.971192+00
>
>
> Thank you in advance,
> Dimitris
>
>
>
>


Re: tds_fdw >> Install Foreign data Wrapper on EDB Postgres to connect to SQL server database

2024-05-28 Thread Muhammad Imtiaz
Hi,

You need to install the LLVM toolset.

1) Check whether it is already installed:

yum list installed | grep llvm-toolset

2) If it is not installed, install it with the following command:
sudo yum install llvm-toolset-7
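
Once the toolset is present, the compiler should appear at the path the PGXS
build expects, and the install can be retried, for example:

  # confirm clang exists at the path referenced by the build
  ls /opt/rh/llvm-toolset-7/root/usr/bin/clang
  # then rerun the build
  make USE_PGXS=1 install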

Regards,
Muhammad Imtiaz



On Wed, 29 May 2024 at 05:27, Ian Lawrence Barwick 
wrote:

> Wed, May 29, 2024 at 6:10 Sumit Kochar :
> >
> > Installation of Foreign data Wrapper on EDB Postgres to connect to SQL
> server database is not working.
> >
> >
> >
> > https://github.com/tds-fdw/tds_fdw/issues/357
> >
> >
> >
> > Please advise if this has been encountered or a workaround is available.
> (...)
>
> > [root@hostip tds_fdw-2.0.3]# make USE_PGXS=1 install
> >
> > /opt/rh/llvm-toolset-7/root/usr/bin/clang -Wno-ignored-attributes
> -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2
> -I./include/ -fvisibility=hidden  -I. -I./ -I/usr/edb/as13/include/server
> -I/usr/edb/as13/include/internal -I/usr/libexec/edb-icu66/include/
> -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -flto=thin -emit-llvm
> -c -o src/tds_fdw.bc src/tds_fdw.c
> >
> > make: /opt/rh/llvm-toolset-7/root/usr/bin/clang: Command not found
>
> IIRC you'll need to find and install the SCLO package for Oracle Linux
> 7, assuming it's
> available. For CentOS it's this repository:
> http://mirror.centos.org/centos/7/sclo/ .
>
> Regards
>
> Ian Barwick
>
>
>


Re: Pgpool with high availability

2024-05-28 Thread Muhammad Imtiaz
Hi,

Can you please provide the Pgpool logs? Additionally, in a distributed
environment, make sure Pgpool is configured consistently on every node; you
can follow the documented setup steps for pgpool.conf, pcp.conf and
pool_hba.conf.
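
For a multi-node setup in particular, the watchdog section of pgpool.conf must
list all nodes, identically, on every host. A minimal sketch, assuming
Pgpool-II 4.2 or later and placeholder hostnames (adjust names, ports and each
node's pgpool_node_id file to your environment):

  use_watchdog = on
  # the same node list must appear, in the same order, on every host
  hostname0 = 'node1'
  wd_port0 = 9000
  pgpool_port0 = 9999
  hostname1 = 'node2'
  wd_port1 = 9000
  pgpool_port1 = 9999
  hostname2 = 'node3'
  wd_port2 = 9000
  pgpool_port2 = 9999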

Regards,
Muhammad Imtiaz

On Tue, 28 May 2024 at 23:01, Adrian Klaver 
wrote:

>
>
> On 5/28/24 1:31 AM, vijay patil wrote:
> >
> > HI Team,
> >
> > "I'm encountering challenges while configuring Pgpool with high
> > availability. The initial setup is completed, and Pgpool is operational
> > on a single node, functioning without issues. However, upon attempting
> > to start Pgpool on any additional nodes, particularly node 2, it becomes
> > immediately unreachable.
>
> And how we are supposed to arrive at an answer with essentially no
> information provided?
>
> Need:
>
> 1) Configuration for initial setup.
>
> 2) A more detailed explanation of what "... upon attempting
> to start Pgpool on any additional nodes" means? Include configuration
> changes.
>
> 3) The error messages.
>
> 4) Where the nodes are located?
>
>
> >
> > I'm seeking assistance to address this issue. My setup consists of three
> > nodes, each hosting both PostgreSQL and Pgpool services."
> >
> >
> > Thanks
> >
> > Vijay
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>


Re: Replication lag in Postgres

2024-07-13 Thread Muhammad Imtiaz
Hi,

I recommend looking at the following configuration options in this case:

• wal_sender_timeout: Terminates replication connections that are inactive for
longer than this amount of time, so the primary detects a dead or stalled
standby promptly.

• wal_keep_size: Ensures that enough WAL files are retained on the primary for
the standby to catch up if it falls behind.

• checkpoint_timeout: Adjusts the checkpoint frequency so that WAL is flushed
on a predictable schedule.

• pg_receivewal: Use this tool to continuously archive WAL to a safe location.
It is helpful because, if streaming replication falls behind, you still have a
copy of the WAL files.
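
To see what RPO you are actually achieving, the lag can be measured directly.
A sketch (the first query runs on the primary; the second, on the standby,
shows the time since the last replayed transaction):

  SELECT application_name,
         pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes,
         write_lag, flush_lag, replay_lag
  FROM pg_stat_replication;

  SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;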

Regards,
Muhammad Imtiaz

On Fri, 12 Jul 2024, 20:11 Mukesh Tanuku,  wrote:

> Hello everyone.
> Firstly thanks to the community members who are addressing all the queries
> that are posted. Those give us more insights about the issues/doubts in the
> postgres.
>
> I have a question with postgres HA setup.
> We are setting up a 2 node postgres cluster with async streaming
> replication, we want to define a RPO (Recovery point objective) in case of
> primary failure.
>
> How can we best define the RPO in this setup? since it's an async
> streaming replication setup there might be a chance of data loss which is
> proportional to the replication delay.
>
> Is there any way we can configure the delay duration, like for example to
> make sure every 10 mins the standby sync has to happen with primary?
>
> Thank you
> Regards
> Mukesh T
>
>


Re: Monitoring DB size

2024-07-15 Thread Muhammad Imtiaz
Hi,

You can choose from the following options; hopefully one of them will meet
your requirements:

1) pg_stat_user_tables view
2) pgstattuple extension
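
For a per-table breakdown close to what you describe, pgstattuple reports live,
dead and free bytes directly. A sketch (note that it scans the whole table, and
'some_table' is only a placeholder):

  CREATE EXTENSION IF NOT EXISTS pgstattuple;

  SELECT table_len,
         tuple_len      AS live_bytes,
         dead_tuple_len AS dead_bytes,
         free_space     AS free_bytes
  FROM pgstattuple('some_table');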

Regards,

Muhammad imtiaz

On Mon, 15 Jul 2024, 19:42 Shenavai, Manuel, 
wrote:

> Hi everyone,
>
>
>
> we currently capture the db size (pg_database_size) which gives the “Disk
> space used by the database with the specified name”. Is it possible to
> further split this data how much space is occupied by live tuples, dead
> tuples and free space?
>
>
>
> We would like to have something like:
>
> DB Size: 200 MB, live tuple 100MB, Dead Tuple: 20 MB, free space 80MB
>
>
>
> Is this possible?
>
>
>
> Best regards,
>
> Manuel
>


Re: Fwd: Regarding tables detach concurrently with run_maintenance_proc()

2024-07-21 Thread Muhammad Imtiaz
Hi,

You can consider the pg_pathman extension.
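
Alternatively, if you stay on declarative partitioning, the non-blocking detach
can be run by hand on PostgreSQL 14 or later (table names below are only
placeholders):

  -- must be run outside an explicit transaction block
  ALTER TABLE parent_table DETACH PARTITION child_2023_01 CONCURRENTLY;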



*Muhammad Imtiaz*

*PostgreSQL Technical Support Lead *
*/ Pakistan R&D*
*Mobile: +923345072521*
*Email: imtia...@bitnine.net *

On Fri, Jul 19, 2024 at 7:55 PM Durgamahesh Manne 
wrote:

>
>
> On Fri, Jul 19, 2024 at 7:55 PM Christoph Berg  wrote:
>
>> Re: Durgamahesh Manne
>> > with pg_partman By default proc() does not detach tables concurrently.
>> How
>> > do we implement tables detach concurrently without blocking other
>> sessions
>> > Here queries not using date column to detach tables with
>> > run_maintenance_proc() which is not using concurrently  based on the
>> > retention policy which leads to scan all available child tables hence
>> need
>> > to trigger this proc with concurrently option to avoid blocking other
>> child
>> > tables beyond rentention policy while running statements on them
>>
>> You might have more success by filing pg_partman issues at
>> https://github.com/pgpartman/pg_partman/issues
>>
>> > Do we have any other alternative rather than using pg_partman()?
>>
>> Well you can just run the same commands manually that pg_partman would
>> run.
>>
>> Christoph
>>
>
> Hi
> You might have more success by filing pg_partman issues at
> https://github.com/pgpartman/pg_partman/issues >>> okay
>  My intention is to have any other extension other than pg_partman to
> manage table partitions manually
>
> Regards
> Durga Mahesh
>


Re: Scheduling pg_repack job with pg_cron

2024-07-30 Thread Muhammad Imtiaz
Hi,

Yes, you can use it. pg_cron can only run SQL, so put the pg_repack command
logic in a bash script, call that script from a PL/pgSQL function on the server
(for example via COPY ... FROM PROGRAM, which requires superuser or membership
in pg_execute_server_program), and finally schedule that function with pg_cron:
SELECT cron.schedule('pg_repack_job', '00 3 * * 1-5', $$SELECT repackfunc();$$);
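
A minimal sketch of such a function, assuming a wrapper script at
/usr/local/bin/run_pg_repack.sh (the path and function name are only examples;
the script must be executable by the postgres OS user and should emit plain
text lines, or redirect its own output):

  CREATE OR REPLACE FUNCTION repackfunc() RETURNS void
  LANGUAGE plpgsql AS $body$
  BEGIN
      -- capture the script's stdout so COPY has somewhere to put it
      CREATE TEMP TABLE IF NOT EXISTS repack_log(line text);
      COPY repack_log FROM PROGRAM '/usr/local/bin/run_pg_repack.sh';
  END;
  $body$;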


*Muhammad Imtiaz*

*PostgreSQL Technical Support Lead *
*/ Pakistan R&D*
*Mobile: +923345072521*
*Email: imtia...@bitnine.net *



On Tue, Jul 30, 2024 at 1:58 PM Durgamahesh Manne 
wrote:

> Hi
> Respected Team
>
> Is there any way to schedule a pg_repack job with pg_cron within the
> instance ?
>
> If yes then please please let me know the best approach to schedule it
> with pg_cron within the instance.( not in bastion host)
>
> your response is highly valuable
>
> Regards.
> Durga Mahesh
>


Re: Can't change tcp_keepalives_idle

2024-08-06 Thread Muhammad Imtiaz
Hi,

This parameter is supported only on systems where the TCP_KEEPIDLE (or an
equivalent) socket option is available. Also, please ensure that you are not
accidentally connecting through a Unix-domain socket, where TCP keepalive
settings are ignored. If the following query returns NULL, the connection is
over a Unix socket:

SELECT client_addr FROM pg_stat_activity WHERE pid = pg_backend_pid();
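
It is also worth checking where the active value comes from; the source column
shows whether it was set per session, per role or database, or from the
configuration file:

  SELECT name, setting, source, sourcefile, sourceline
  FROM pg_settings
  WHERE name = 'tcp_keepalives_idle';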


Regards,
Muhammad Imtiaz

On Sun, Aug 4, 2024 at 4:16 PM Abraham, Danny  wrote:

> Hi,
>
> Change tcp_keepalives_idle in postgresql.conf from 0 to 300 , then pg_ctl
> reload but it stays with the old 0 value.
> Tried restart and it did not work. PG Log says it was changed.
> Reset_cal marked OK, but no real change ( ins how)
>
> -[ RECORD 1 ]---+-
> name| tcp_keepalives_idle
> setting | 0
> unit| s
> category| Connections and Authentication / Connection Settings
> short_desc  | Time between issuing TCP keepalives.
> extra_desc  | A value of 0 uses the system default.
> context | user
> vartype | integer
> source  | session
> min_val | 0
> max_val | 2147483647
> enumvals|
> boot_val| 0
> reset_val   | 1800
> sourcefile  |
> sourceline  |
> pending_restart | f
>
> Thanks
>
> Danny
>
>
>