Re: The Curious Case of the Table-Locking UPDATE Query

2021-07-06 Thread hubert depesz lubaczewski
On Mon, Jul 05, 2021 at 08:22:39PM -0300, Emiliano Saenz wrote:
> We have a huge POSTGRES 9.4 database in the production environment (several
> tables have more than 100.000.00 records). For the last two months we have had
> problems with CPU utilization. Debugging the locks (in pg_locks) we noticed
> that sometimes a simple UPDATE (by primary key) operation takes out an
> ACCESS EXCLUSIVE lock on these huge tables, so the POSTGRES DB collapses
> and generates excessive CPU consumption. My question is: how is it
> possible that an UPDATE operation takes out an ACCESS EXCLUSIVE lock?
> More information: this system never manifested this behavior before, and we
> haven't made software changes in the last 2 years.


To be able to help, we will need pg_stat_activity data for the backend
that holds this lock, and pg_locks information for it too.

And, please, send text, not a screenshot.
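For example, something along these lines (a minimal sketch; the column names
match 9.4's catalogs) captures both views for the backends holding that lock:

  SELECT a.pid, a.usename, a.state, a.waiting, a.query,
         l.locktype, l.relation::regclass, l.mode, l.granted
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
  WHERE l.mode = 'AccessExclusiveLock'
  ORDER BY a.pid;

Plain psql output pasted into the mail is perfect.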

Best regards,

depesz





When to REINDEX a serial key?

2021-07-06 Thread Ron

Server: RDS Postgresql 12.5
Client: Vanilla Postgresql 12.5

Like most systems, we have lots of tables indexed on sequences. Thus, all 
new keys are inserted into the "lower right hand corner" of the b-tree.


The legacy RDBMS which I used to manage has a tool for analyzing (*not* in 
the Postgresql meaning of the word) an index, and displaying a histogram of 
how many layers deep various parts of an index are. Using that histogram, 
you can tell whether or not an index needs to be rebuilt.


How does one get the same effect in Postgresql?

--
Angular momentum makes the world go 'round.


Re: When to REINDEX a serial key?

2021-07-06 Thread David Rowley
On Tue, 6 Jul 2021 at 21:35, Ron  wrote:
> The legacy RDBMS which I used to manage has a tool for analyzing (not in the 
> Postgresql meaning of the word) an index, and displaying a histogram of how 
> many layers deep various parts of an index are.  Using that histogram, you 
> can tell whether or not an index needs to be rebuilt.
>
> How does one get the same effect in Postgresql?

There are a few suggestions in
https://wiki.postgresql.org/wiki/Show_database_bloat
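If it's the tree depth itself you're after rather than bloat, the pgstattuple
extension's pgstatindex() function reports it directly. A minimal sketch (the
index name is made up; the extension has to be installed first, and it should
be available on RDS):

  CREATE EXTENSION IF NOT EXISTS pgstattuple;

  SELECT tree_level, leaf_pages, avg_leaf_density, leaf_fragmentation
  FROM pgstatindex('my_table_pkey');

tree_level is the height of the B-tree (0 for an index that fits on a single
page), which is the closest analogue to the per-level histogram you describe;
avg_leaf_density tells you how full the leaf pages are.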

David




Question about how to handle numeric (decimal) data types while using libpq

2021-07-06 Thread Sudheer H R
Hello,

I am trying to use libpq for interfacing with PostgreSQL from a C/C++ based 
application.

I have tried to use binary format of data for both sending and receiving data 
to and from server (resultFormat = 1).

As I understand it, most binary types (int, float, etc.) are encoded in big-endian
byte order and can be converted to the appropriate host-specific memory
representations.

However, the NUMERIC datatype is a more complex structure with multiple ints and
pointers.

I would like to know if there are published library functions that convert a
NUMERIC field to and from, say, a long double datatype.


Any pointers in this regard are appreciated.

Regards,
Sudheer






Re: Question about how to handle numeric (decimal) data types while using libpq

2021-07-06 Thread Dmitry Igrishin
On Tue, Jul 6, 2021, 14:04 Sudheer H R  wrote:

> Hello,
>
> I am trying to use libpq for interfacing with PostgreSQL from a C/C++
> based application.
>
> I have tried to use binary format of data for both sending and receiving
> data to and from server (resultFormat = 1).
>
> As I understand it, most binary types (int, float, etc.) are encoded in
> big-endian byte order and can be converted to the appropriate host-specific
> memory representations.
>
> However, the NUMERIC datatype is a more complex structure with multiple ints
> and pointers.
>
> I would like to know if there are published library functions that convert a
> NUMERIC field to and from, say, a long double datatype.
>
>
> Any pointers in this regard are appreciated.
>
Why not just use text format for NUMERIC?
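One way to keep binary results for the other columns is to cast the NUMERIC
column in the query itself: the binary wire format of a text value is simply its
bytes, so the cast column arrives as a plain decimal string that can be parsed
with strtold() or kept as-is. A minimal sketch, with made-up table and column
names:

  -- numeric_col::text arrives as e.g. '123.456' even with resultFormat = 1
  SELECT id, numeric_col::text AS numeric_col_txt
  FROM my_table;

Casting to float8 instead yields an 8-byte IEEE double in binary format, at the
cost of the exactness NUMERIC was presumably chosen for.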



Re: Question about how to handle numeric (decimal) data types while using libpq

2021-07-06 Thread Sudheer H R
With libpq's single resultFormat it is not possible to use binary for some
result columns and text for others; if the text format is used, it applies to
all fields.

And that method involves converting to a string (sprintf-style) on the server
side and then parsing the string back into a binary value (sscanf-style) on the
client side, which is costly.



> On 06-Jul-2021, at 5:18 PM, Dmitry Igrishin  wrote:
> 
> 
> 
> On Tue, Jul 6, 2021, 14:04 Sudheer H R wrote:
> Hello,
> 
> I am trying to use libpq for interfacing with PostgreSQL from a C/C++ based 
> application.
> 
> I have tried to use binary format of data for both sending and receiving data 
> to and from server (resultFormat = 1).
> 
> As I understand it, most binary types (int, float, etc.) are encoded in big-endian
> byte order and can be converted to the appropriate host-specific memory
> representations.
>
> However, the NUMERIC datatype is a more complex structure with multiple ints and
> pointers.
>
> I would like to know if there are published library functions that convert a
> NUMERIC field to and from, say, a long double datatype.
>
>
> Any pointers in this regard are appreciated.
> Why not just use text format for NUMERIC?



Re: When to REINDEX a serial key?

2021-07-06 Thread Ron

On 7/6/21 4:52 AM, David Rowley wrote:

On Tue, 6 Jul 2021 at 21:35, Ron  wrote:

The legacy RDBMS which I used to manage has a tool for analyzing (not in the 
Postgresql meaning of the word) an index, and displaying a histogram of how 
many layers deep various parts of an index are.  Using that histogram, you can 
tell whether or not an index needs to be rebuilt.

How does one get the same effect in Postgresql?

There are a few suggestions in
https://wiki.postgresql.org/wiki/Show_database_bloat


How does bloat relate to a lopsided b-tree?

--
Angular momentum makes the world go 'round.




Re: When to REINDEX a serial key?

2021-07-06 Thread Peter Eisentraut



On 06.07.21 14:19, Ron wrote:

On 7/6/21 4:52 AM, David Rowley wrote:

On Tue, 6 Jul 2021 at 21:35, Ron  wrote:
The legacy RDBMS which I used to manage has a tool for analyzing (not 
in the Postgresql meaning of the word) an index, and displaying a 
histogram of how many layers deep various parts of an index are.  
Using that histogram, you can tell whether or not an index needs to 
be rebuilt.


How does one get the same effect in Postgresql?

There are a few suggestions in
https://wiki.postgresql.org/wiki/Show_database_bloat


How does bloat relate to a lopsided b-tree?


There is no such thing as a lopsided B-tree, because a B-tree is by 
definition self-balancing.  Perhaps that answers your original question.


Bloat is generally something people are concerned about when they think 
about reindexing their indexes.  But append-only workloads, such as what 
you describe, normally don't generate bloat.





Re: Question about how to handle numeric (decimal) data types while using libpq

2021-07-06 Thread Peter Eisentraut

On 06.07.21 13:04, Sudheer H R wrote:

I am trying to use libpq for interfacing with PostgreSQL from a C/C++ based 
application.

I have tried to use binary format of data for both sending and receiving data 
to and from server (resultFormat = 1).

As I understand it, most binary types (int, float, etc.) are encoded in big-endian
byte order and can be converted to the appropriate host-specific memory
representations.

However, the NUMERIC datatype is a more complex structure with multiple ints and
pointers.

I would like to know if there are published library functions that convert a
NUMERIC field to and from, say, a long double datatype.


There is a third-party library called libpqtypes out there that adds 
support for handling specific types on the libpq side.  But I'm not sure 
how up to date it is.





Re: When to REINDEX a serial key?

2021-07-06 Thread Rob Sargent
> 
> There is no such thing as a lopsided B-tree, because a B-tree is by 
> definition self-balancing.  Perhaps that answers your original question.
> 
You do incur the cost of rebalancing fairly often, and the cost/frequency/extent
is related to the fill factor.
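For what it's worth, fillfactor can be tuned per index; a minimal sketch with a
hypothetical index name (the new setting only affects pages written later, so a
REINDEX is needed to repack existing ones):

  -- 90 is the btree default; the index name here is made up
  ALTER INDEX my_table_pkey SET (fillfactor = 90);
  REINDEX INDEX my_table_pkey;

With a strictly ascending serial key a high fillfactor is usually fine, since
new entries never land in the middle of existing leaf pages.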





Need suggestions about live migration from PG 9.2 to PG 13

2021-07-06 Thread Lucas
Hello all,

I'm currently working on a migration from PG 9.2 to PG 13 (RDS) and would like 
some suggestions, please.

Our current database stack is:

> master (pg 9.2) --> slave (pg 9.2) --> slave (pg 9.2 - cascading replication)
>                          --> bucardo (ec2 instance) --> RDS (pg 13)

The original plan was:

1.  Get bucardo replication working
2.  Deploy a test environment using the new PG 13 RDS database
3.  Use the test environment and test as much as possible, to make sure our 
application works with PG 13
4.  Test more
5.  Decide on a date to move all customers to the new database
6.  Stop the replication

However, the business decided that's not the way they want to move forward: if
we had issues, they would impact our customers, etc., even though we do have the
test environment up and running and the application works with it.

Now, the business wants to move a few customers to RDS and leave the rest on PG
9.2, then gradually migrate them to RDS. But they want the customers' data to be
available in both databases, in case we need to move customers back to 9.2 if we
face any issues. So that means bidirectional replication.

I am not comfortable doing that with Bucardo, and was hoping you could suggest
an alternative solution. If anybody has ever done something like this, could you
share your experiences?

BTW - The DB is ~ 1.5TB so pg_upgrade is out of the question, as it takes ages.

Thanks in advance!
Lucas



pg_upgrade as a way of cloning an instance?

2021-07-06 Thread Luca Ferrari
Hi all,
someone pointed out to me that pg_upgrade can be used to clone a cluster by
specifying the same binaries for both old and new, such as

pg_upgrade -B /usr/pgsql-13/bin -b /usr/pgsql-13/bin -D /data/clone -d /data/src

I tested it and it seems to work, even though I don't see any point in
running it (and most notably it requires downtime on the original
cluster).
Any opinion about that?

Thanks,
Luca