Regarding publish_via_partition_root with pglogical

2024-07-20 Thread Durgamahesh Manne
Hi

Respected Team

I know the use case of implementing the partitions with publication and
subscription of built-in logical replication
CREATE PUBLICATION dbz_publication FOR TABLE betplacement.bet WITH
(publish_via_partition_root = true); This uses the parent table to replicate
data changes from source to target.
But
Could you please provide the logic to implement the same with pglogical?
Do I need to add this publish_via_partition_root to the
pglogical.replication_set_add_table functionality?
Your response is valuable in this aspect.
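For reference, the two setups being compared look roughly like this (a sketch; the replication set name 'default' and the synchronize_data option are assumptions, and pglogical's replication_set_add_table exposes no publish_via_partition_root option):

```sql
-- Built-in logical replication, publishing changes via the parent table:
CREATE PUBLICATION dbz_publication FOR TABLE betplacement.bet
    WITH (publish_via_partition_root = true);

-- pglogical sketch: add the table to a replication set on the provider.
SELECT pglogical.replication_set_add_table(
    set_name         := 'default',
    relation         := 'betplacement.bet'::regclass,
    synchronize_data := true
);
```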

Regards,
Durga Mahesh


Re: Re. Select with where condition times out

2024-07-20 Thread sivapostg...@yahoo.com
 Executed VACUUM FULL VERBOSE followed by REINDEX DATABASE dbname;
It didn't increase the performance; the timeout still happened.  VACUUM didn't
find any dead rows in that particular table.

Yes, the actual query and conditions were not given in my first comment.  
Actually where condition is not on the date field alone and the query with 
current date is only a sample.   
What I did,
1.  Took backup (pg_dump) of the database from the server it's running on.
    [ Server config. Xeon Silver 4208, Windows Server 2019 Standard ].
2.  Restored it in another desktop system, installing PG 11 afresh.
3.  Performance was excellent.  Within milliseconds I got the result.
    Application was run from the desktop.
4.  Restored the database in the same server, as another database.  Improved
    performance, but it doesn't match the performance of the desktop.
    Application run from the server itself.

Now the server has two databases with exactly the same data.  The old one takes
more than 15 minutes; the newer one takes a few seconds.  The application was run
from the server and also from clients.  In both cases, the result is the same.
What else do I need to do to correct this issue?
I can easily replace the old database with the backup.  Is that the only option?
Happiness Always
BKR Sivaprakash
On Thursday, 18 July, 2024 at 05:23:39 pm IST, Francisco Olarte 
 wrote:  
 
 On Thu, 18 Jul 2024 at 11:38, sivapostg...@yahoo.com
 wrote:
> Hello,
> PG V11
>
> Select count(*) from table1
> Returns 10456432
>
> Select field1, field2 from table1 where field3> '2024-07-18 12:00:00'
> Times out

How do you send the query / how does it time out? Is that the real
query? Is table1 a table or a view? What does EXPLAIN say?

> Any possible way(s) to do this?

If your client is timing out, increase the timeout; if that is impossible,
you can try fetching in batches, but more detail would be needed.
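The batched-fetch suggestion can be sketched with a cursor, using the names from the sample query above (batch_cur and the batch size are illustrative):

```sql
BEGIN;

DECLARE batch_cur CURSOR FOR
    SELECT field1, field2
    FROM table1
    WHERE field3 > '2024-07-18 12:00:00';

-- Repeat this until it returns no rows; each FETCH is its own statement,
-- so a per-statement timeout applies to each batch rather than the whole
-- result set:
FETCH FORWARD 10000 FROM batch_cur;

CLOSE batch_cur;
COMMIT;
```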

Suggestions to improve total time have already been given: try to
decrease bloat if you have it. But AFAIK timeouts are configurable, so
it may just be that your timeout is too low.
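If the limit turns out to be PostgreSQL's statement_timeout (one possible culprit; the client library may impose its own timeout on top), checking and raising it for a session looks like this:

```sql
SHOW statement_timeout;           -- '0' means no server-side limit

SET statement_timeout = '30min';  -- current session only; pick a value
                                  -- that fits the real query
```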

If it had been working, is field3 indexed? How is the table modified?
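A quick way to check whether field3 is indexed, and to add an index if it is missing (the index name is illustrative):

```sql
-- Does an index covering field3 already exist?
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'table1';

-- If not, build one without blocking concurrent writes:
CREATE INDEX CONCURRENTLY table1_field3_idx ON table1 (field3);

-- Then confirm the planner actually uses it:
EXPLAIN (ANALYZE, BUFFERS)
SELECT field1, field2 FROM table1 WHERE field3 > '2024-07-18 12:00:00';
```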

Because with a configured timeout, with an unindexed table ( forcing a
table scan ) the query may be working for years before you hit the bad
spot. Also, the query includes today's date, so I doubt it has been
used for years; probably "a similar one has been used for years", and
probably that is not your real table ( or you have a naming problem ).
Without giving real info, people cannot give you real solutions.

Francisco Olarte.
  

Re: Re. Select with where condition times out

2024-07-20 Thread Ron Johnson
(Because VACUUM FULL rewrites the table, an implicit REINDEX occurs.)

I don't see mention of analyzing the database.

Also, VACUUM FULL probably doesn't do what you think it does.
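VACUUM FULL rewrites the table but does not refresh planner statistics, so an explicit ANALYZE is still needed (table1 is the table name from the sample query earlier in the thread):

```sql
ANALYZE VERBOSE;                  -- refresh statistics for the whole database

-- Or, for just the one table, a plain (non-FULL) vacuum plus fresh stats:
VACUUM (ANALYZE, VERBOSE) table1;
```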

On Sat, Jul 20, 2024 at 7:44 AM sivapostg...@yahoo.com <
sivapostg...@yahoo.com> wrote:

> Executed
> VACUUM FULL VERBOSE
> followed by
> REINDEX DATABASE dbname;
>
> It didn't increase the performance, still time out happened.  VACUUM
> didn't find any dead rows in that particular table.
>
> Yes, the actual query and conditions were not given in my first comment.
> Actually where condition is not on the date field alone and the query with
> current date is only a sample.
>
> What I did,
> 1.  Took backup (pg_dump) of the database from the server it's running.
>  [ Server config. Xeon Silver 4208, Windows Server 2019 Standard ].
> 2.  Restored in another desktop system, installing PG 11 afresh.
> 3.  Performance was excellent.  Within milliseconds I got the result.
> Application was run from the desktop.
> 4.  Restored the database in the same server, as another database.
> Improved performance but doesn't match the performance of the desktop.
> Application run from the server itself.
>
> Now server got two databases with exactly the same data.   Old one takes
> more than 15 minutes; newer one takes few seconds.  Application run from
> the server and also from clients.  In both conditions, the result is same.
>
> What else I need to do to correct this issue?
>
> I can easily replace the old database with the backup.  Is that only
> option?
>
> Happiness Always
> BKR Sivaprakash
>


Re: Re. Select with where condition times out

2024-07-20 Thread Francisco Olarte
Hi:

Please avoid top posting, especially when replying to a long mail with
various points; it makes it nearly impossible to track what you are
replying to.

On Sat, 20 Jul 2024 at 13:44, sivapostg...@yahoo.com
 wrote:
> Executed
> VACUUM FULL VERBOSE
> followed by
> REINDEX DATABASE dbname;

As has already been said, VACUUM FULL implies a reindex ( it
basically copies the old table to a new one, including indexes, swaps
them, and deletes the old one ).

> It didn't increase the performance, still time out happened.  VACUUM didn't 
> find any dead rows in that particular table.

The absence of dead rows is the interesting part.

> Yes, the actual query and conditions were not given in my first comment.  
> Actually where condition is not on the date field alone and the query with 
> current date is only a sample.

Then they are worthless and harmful. Query time problems are normally
data- and statistics-dependent, and always query-dependent.

The query you posted has only two ways to be done, and few ways to be
improved. Suggestions for it will probably be harmful for other
queries.

> What I did,
> 1.  Took backup (pg_dump) of the database from the server it's running.   [ 
> Server config. Xeon Silver 4208, Windows Server 2019 Standard ].
> 2.  Restored in another desktop system, installing PG 11 afresh.
> 3.  Performance was excellent.  Within milliseconds I got the result.  
> Application was run from the desktop.
> 4.  Restored the database in the same server, as another database.  Improved 
> performance but doesn't match the performance of the desktop.  Application 
> run from the server itself.

What you did not do:
- Show your tables and indexes.
- Show your real queries.
- Tell us what "the application" is ( i.e., "psql", "a Java app using
JDBC", ... )
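For the first point, psql can show most of what is needed (table1 is the name from the earlier sample query):

```sql
\d+ table1     -- columns, indexes, and storage options of the table
\di+ table1*   -- matching indexes, with their on-disk sizes
```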

> Now server got two databases with exactly the same data.   Old one takes more 
> than 15 minutes; newer one takes few seconds.  Application run from the 
> server and also from clients.  In both conditions, the result is same.

After what has been happening, I have to ask: do you mean ONE server
with two databases, or TWO servers with one database each? Also, what
are the specs of the server and the desktops, and the postgres
configuration on each? A misconfigured server can easily send query
time through the roof ( i.e., DB servers want real RAM; if you
configure postgres with too much memory and it swaps, you can make a
query really slow ).
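A quick way to compare the configuration of the two machines is to dump the memory- and planner-related settings on each (the list of parameters here is just a starting point):

```sql
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size',
               'random_page_cost', 'statement_timeout')
ORDER BY name;
```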

> What else I need to do to correct this issue?

No clue.

> I can easily replace the old database with the backup.  Is that only option?

Ah, one clue. From the info I have in this and previous mails, that is
the only option for me. With more info someone may have ideas, but so
far the only thing I have concluded is that there are three databases:
fast on the server, slow on the server, and the desktop one, which is a
test only. So my only options are the fast server and the slow server,
and my solution would be "use the fast server". As I said, maybe with
more data we could suggest "analyze that table with these parameters",
"make this index", or "rewrite this condition in this way", but this is
impossible to do with the data you provided.

Regards.
Francisco Olarte.




Re: Re. Select with where condition times out

2024-07-20 Thread Michael Nolan
On Thu, Jul 18, 2024 at 4:38 AM sivapostg...@yahoo.com
 wrote:
>
> Hello,
> PG V11
>
> Select count(*) from table1
> Returns 10456432
>
> Select field1, field2 from table1 where field3> '2024-07-18 12:00:00'
> Times out
>
> The above query was working fine for the past 2 years.
>
> Backup was taken a day back.  Need to recover complete data as far as 
> possible.
>
> Any possible way(s) to do this?
>
> BKR Sivaprakash
>

If you do a full backup, does it complete in a normal manner and the usual time?

Have you tried doing a shutdown and restart of the database, or
possibly rebooting the server?

You may need to alter the database server settings to increase the
maximum query time.
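If the limit is server-side, e.g. statement_timeout, it can be raised persistently per database or per role rather than per session (dbname and appuser are placeholders):

```sql
ALTER DATABASE dbname SET statement_timeout = '1h';
ALTER ROLE appuser SET statement_timeout = '1h';  -- applies to new sessions
```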

Mike Nolan
htf...@gmail.com




Bloated pg_catalog.pg_largeobject

2024-07-20 Thread postgresql
Hello All,

I've got a cluster that's having issues with pg_catalog.pg_largeobject getting 
massively bloated. Vacuum is running OK, and there's 700GB of free space in the 
table and only 100GB of data, but subsequent inserts seem not to be using space 
from the FSM and instead always allocate new pages. The table just keeps 
growing.

Is this a known thing, maybe something special about LOs?

Also, is the only way to recover space here a VACUUM FULL on the table, since 
it's a catalog table?
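For measuring the bloat and, if the downtime is acceptable, compacting it (note VACUUM FULL takes an ACCESS EXCLUSIVE lock, so large-object reads and writes block while it runs):

```sql
-- How big is the table really, including its index?
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));

-- Rewrite the table and return the free space to the OS (exclusive lock):
VACUUM FULL VERBOSE pg_catalog.pg_largeobject;
```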

Thanks,

-- 
Jon Erdman (aka StuckMojo on IRC)
    PostgreSQL Zealot