Transactions

2019-04-09 Thread Karl Martin Skoldebrand
Hi,

Is there a way to track "transactions" by default (i.e. without anyone having 
set up anything specific)? The problem I am facing is that users are claiming 
that settings are disappearing without them doing anything to affect them. It 
would be good to be able to see what PostgreSQL thinks is going on.
* Subscriber adds Severity/BU/Service by ticking the corresponding box in 
subscriber configuration in WEBAPP. This works for some time.
* Subscriber stops receiving selected [tickets].
* Upon checking settings, the selected Severity/BU/Service has been unselected.

/M.



Disclaimer: This message and the information contained herein is proprietary 
and confidential and subject to the Tech Mahindra policy statement; you may 
review the policy at http://www.techmahindra.com/Disclaimer.html externally 
and http://tim.techmahindra.com/tim/disclaimer.html internally within 
TechMahindra.




Sv: Transactions

2019-04-09 Thread Andreas Joseph Krogh
On Tuesday 9 April 2019 at 11:26:29, Karl Martin Skoldebrand 
<ks0c77...@techmahindra.com> wrote:
Hi,



Is there a way to track "transactions" by default (i.e. without anyone having 
set up anything specific)? The problem I am facing is that users are claiming 
that settings are disappearing without them doing anything to affect them. It 
would be good to be able to see what PostgreSQL thinks is going on.

* Subscriber adds Severity/BU/Service by ticking the corresponding box in 
subscriber configuration in WEBAPP. This works for some time.

* Subscriber stops receiving selected [tickets].

* Upon checking settings, the selected Severity/BU/Service has been unselected.
Not "without anyone having set up anything specific", but you can change the 
setting in postgresql.conf to:

log_statement = 'all'

and reload the settings. You can now see all SQL executed in the log and can 
debug what's going on.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com
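As a concrete sketch of Andreas's suggestion (assuming superuser access; ALTER SYSTEM writes postgresql.auto.conf, so you can also edit postgresql.conf directly instead):

```sql
-- Log every statement (superuser required); no restart needed.
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

-- When you are done debugging, turn it back off and reload again.
ALTER SYSTEM SET log_statement = 'none';
SELECT pg_reload_conf();
```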

Re: Transactions

2019-04-09 Thread Fabio Pardi
Hi Karl Martin,

you could set log_min_duration_statement=0 at the global level (in the config 
file) or at session level too. 
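For example (a sketch; the session-level form affects only the current connection, which is handy for testing before changing it globally):

```sql
-- Current session only: log the duration and text of every statement.
SET log_min_duration_statement = 0;

-- Globally, without editing the config file by hand (superuser required):
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();
```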

regards,

fabio pardi




On 09/04/2019 11:26, Karl Martin Skoldebrand wrote:
> Hi,
> 
>  
> 
> Is there a way to track "transactions" by default (i.e. without anyone having 
> set up anything specific)? The problem I am facing is that users are claiming 
> that settings are disappearing without them doing anything to affect them. It 
> would be good to be able to see what PostgreSQL thinks is going on.
> 
> * Subscriber adds Severity/BU/Service by ticking the corresponding box in 
> subscriber configuration in WEBAPP. This works for some time.
> 
> * Subscriber stops receiving selected [tickets].
> 
> * Upon checking settings, the selected Severity/BU/Service has been 
> unselected.
> 
>  
> 
> /M.
> 
>  
> 




RE: Transactions

2019-04-09 Thread Karl Martin Skoldebrand
How much impact on performance and disk space would this, or
set log_min_duration_statement = 0,
have?

I have no idea how common this is, or when it happens, so it would need 
to run until it is reported again (or for some reasonable time if it doesn't happen).

/M.

From: Andreas Joseph Krogh 
Sent: 09 April 2019 11:41
To: pgsql-general@lists.postgresql.org
Subject: Sv: Transactions

On Tuesday 9 April 2019 at 11:26:29, Karl Martin Skoldebrand 
<ks0c77...@techmahindra.com> wrote:
Hi,

Is there a way to track "transactions" by default (i.e. without anyone having 
set up anything specific)? The problem I am facing is that users are claiming 
that settings are disappearing without them doing anything to affect them. It 
would be good to be able to see what PostgreSQL thinks is going on.
* Subscriber adds Severity/BU/Service by ticking the corresponding box in 
subscriber configuration in WEBAPP. This works for some time.
* Subscriber stops receiving selected [tickets].
* Upon checking settings, the selected Severity/BU/Service has been unselected.


Not "without anyone having set up anything specific", but you can change the 
setting in postgresql.conf to:

log_statement = 'all'

and reload the settings.

You can now see all SQL executed in the log and can debug what's going on.






Sv: RE: Transactions

2019-04-09 Thread Andreas Joseph Krogh
On Tuesday 9 April 2019 at 11:56:28, Karl Martin Skoldebrand 
<ks0c77...@techmahindra.com> wrote:
How much impact on performance and disk space would this or 

set log_min_duration_statement=0

have?



I have no idea how common this is, or when it happens, so it would need 
to run until it is reported again (or for some reasonable time if it doesn't happen).
Well, the answer here is of course "it depends"... If you have lots of 
activity the logs will fill up quite quickly, but you can easily test this in 
production and just turn off logging again by setting it to 'none' and 
reloading the settings (no need to restart).

You can also log only modifications by setting:

log_statement = 'mod'

Also watch out for triggers modifying stuff.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com
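The modification-only variant can be enabled the same way (a sketch, assuming superuser access):

```sql
-- Log only statements that modify data (INSERT/UPDATE/DELETE, plus DDL).
ALTER SYSTEM SET log_statement = 'mod';
SELECT pg_reload_conf();
```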


Re: Transactions

2019-04-09 Thread Achilleas Mantzios

On 9/4/19 12:26 p.m., Karl Martin Skoldebrand wrote:


Hi,

Is there a way to track "transactions" by default (i.e. without anyone having set up anything specific)? The problem I am facing is that users are claiming that settings are disappearing without 
them doing anything to affect them. It would be good to be able to see what PostgreSQL thinks is going on.


* Subscriber adds Severity/BU/Service by ticking the corresponding box in 
subscriber configuration in WEBAPP. This works for some time.

* Subscriber stops receiving selected [tickets].

* Upon checking settings, the selected Severity/BU/Service has been unselected.



If what you need is a business-level audit trail on specific relations (tables) (most probably what you need), then you may find pgaudit very useful. Its author, David Steele (dwsteele), also 
wrote pgbackrest; as a tool it works wonders and the support is superb.
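A minimal pgaudit setup might look like this (a sketch; the log class is illustrative, check the pgaudit documentation for your PostgreSQL version):

```sql
-- postgresql.conf (a server restart is required for this line):
--   shared_preload_libraries = 'pgaudit'

CREATE EXTENSION pgaudit;

-- Audit all write statements (INSERT, UPDATE, DELETE, TRUNCATE) cluster-wide.
ALTER SYSTEM SET pgaudit.log = 'write';
SELECT pg_reload_conf();
```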


/M.










--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: 9.6.11- could not truncate directory "pg_serial": apparent wraparound

2019-04-09 Thread Pavel Suderevsky
> On Sun, Apr 7, 2019 at 2:31 AM Pavel Suderevsky wrote:
> > Probably if you advise me what could cause "pg_serial": apparent
> > wraparound messages I would have more chances to handle all the
> > performance issues.
>
> Did you see that warning at some point before the later error?
>
Thomas,

Thank you for your reply!

No, there have never been such warnings.

> I wonder if this condition required you to have a serializable
> transaction running (or prepared) while you consume 2^30 AKA ~1
> billion xids.  I think it is unreachable in v11+ because commit
> e5eb4fa8 allowed for more SLRU pages to avoid this artificially early
> wrap.

Do I understand right that this is about virtual txids? I have no idea how
anything close to a billion transaction IDs could be consumed on this
system.

--
Pavel Suderevsky
E: psuderev...@gmail.com


Re: Logical replication failed recovery

2019-04-09 Thread Adrian Klaver

On 4/9/19 6:08 AM, Lou Tseng wrote:

Please reply to list also.
Ccing list.


Hi adrian,

You are right.  Increasing wal_keep_segments didn't solve the 
problem. However, I didn't find any scenario in the documentation 
that applied to my case.

Is there a way to get the databases resynced?


It is still not clear to me how you got into this state. You might want 
to spend some time looking at the logs to see if you can figure that 
out, to prevent a recurrence.


In the meantime I would say the way to get re-synced is to start over. 
In other words empty the table(s) on the subscriber side and start the 
subscriptions over again.
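A sketch of that start-over approach on the subscriber (the subscription, publication, table, and connection names are placeholders):

```sql
-- Drop the broken subscription (this also tries to drop its remote slot).
DROP SUBSCRIPTION mysub;

-- Empty the affected table(s) so the initial copy can run cleanly.
TRUNCATE my_table;

-- Recreate the subscription; a new slot is created and data is re-copied.
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=primary dbname=mydb user=repuser'
    PUBLICATION mypub;
```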




Thanks!

Lou Tseng

lts...@advancedpricing.com 

Advanced Medical Pricing Solutions 
35 Technology Parkway South, Suite. 100
Peachtree Corners, GA 30092



*From:* Adrian Klaver 
*Sent:* Sunday, April 7, 2019 10:05 AM
*To:* Lou Tseng; pgsql-general@lists.postgresql.org
*Subject:* Re: Logical replication failed recovery
On 4/7/19 6:22 AM, Lou Tseng wrote:

Hi folks,

Is there a good tutorial to recover from logical replication out of 
sync?  We ran into this error stating WAL has been removed and now 
replica is out of sync.  Also, is there a way to increase the number of 
WAL kept on the master?


I have to believe there is more to the story. When a subscription is
enabled it sets up a replication slot
(https://www.postgresql.org/docs/11/warm-standby.html#STREAMING-REPLICATION-SLOTS)
that should ensure that the WAL is not recycled until it is used. Did
you stop a subscription then restart it? Or one of the other scenarios
shown here?:

https://www.postgresql.org/docs/11/logical-replication-subscription.html#LOGICAL-REPLICATION-SUBSCRIPTION-SLOT



Thanks!

2019-04-07 12:28:37.180 UTC [22292] ERROR:  could not receive data from 
WAL stream: ERROR:  requested WAL segment 0001520800EB has 
already been removed

2019-04-07 12:28:37.182 UTC [114186] LOG:  worker process: logical 
replication worker for subscription 8907992 (PID 22292) exited with exit 
code 1




Lou Tseng

lts...@advancedpricing.com 

Advanced Medical Pricing Solutions 
35 Technology Parkway South, Suite. 100
Peachtree Corners, GA 30092





--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Fwd: Postgresql with nextcloud in Windows Server

2019-04-09 Thread Adrian Klaver

On 4/8/19 6:51 PM, 김준형 wrote:

Thanks for your answers.

1) The first time, I set that value to 200. I thought it was a connection 
issue, so I increased max_connections to 1200.


Connections consume resources, that can come back to hurt you.

After I saw this email, I found that max_connections is related to 
shared_buffers.
My configuration sets shared_buffers = 192GB because the PostgreSQL 
documentation says that setting shared_buffers to a quarter of memory is good.


With the caveats that this is for a dedicated database server and 
max_wal_size may need to be increased also. Is this machine dedicated to 
Postgres?




2) How to see that? I searched postgresql connection pooler but I didn't 
get good answer.


http://www.craigkerstiens.com/2014/05/22/on-connection-pooling/

http://www.pgpool.net/mediawiki/index.php/Main_Page

http://pgbouncer.github.io/


Before you go any further in any direction I would spend the time 
monitoring your database/OS to see what is actually going on 
hour-to-hour/day-to-day. Flipping settings based on hunches may work by 
chance but most likely will cause more issues. Not sure how Postgres was 
installed on the server and whether you have access to make changes. 
Assuming changes can be made, some suggestions:


1) From the contrib modules:

pg_stat_statements
https://www.postgresql.org/docs/11/pgstatstatements.html

2) Third-party extension:

PGAudit
https://www.pgaudit.org/
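For instance, pg_stat_statements takes a restart to load but is then queryable like any view (a sketch; column names match PostgreSQL 11):

```sql
-- postgresql.conf (restart required):
--   shared_preload_libraries = 'pg_stat_statements'

CREATE EXTENSION pg_stat_statements;

-- The most expensive statements by cumulative execution time:
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```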



3) I looked at pg_stat_activity but saw nothing special.
Sometimes a Nextcloud session appears briefly, like this:

[pg_stat_activity row, reformatted from wrapped output: an idle client 
backend in database abminext, user oc_abmiadmin, backend started 
2019-04-09 10:35:38 (+09), wait event ClientRead, last query:]

SELECT "remote", "share_token", "password", "mountpoint", "owner"
FROM "oc_share_external"
WHERE "user" = $1 AND "accepted" = $2

Re: Transactions

2019-04-09 Thread Melvin Davidson
In addition to what Andreas has suggested, you should also verify the user
claims by:
A. Having them show you the query they executed.
B. Verifying they have the appropriate SELECT, INSERT, UPDATE and/or DELETE
permissions on the tables involved.
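Point B can be checked from SQL (a sketch; the role and table names are placeholders):

```sql
-- For each privilege, does the role hold it on the table?
SELECT priv,
       has_table_privilege('some_user', 'public.subscriber_settings', priv) AS granted
FROM unnest(ARRAY['SELECT', 'INSERT', 'UPDATE', 'DELETE']) AS priv;
```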


On Tue, Apr 9, 2019 at 6:10 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 9/4/19 12:26 p.m., Karl Martin Skoldebrand wrote:
>
> Hi,
>
>
>
> Is there a way to track "transactions" by default (i.e. without anyone
> having set up anything specific)? The problem I am facing is that users are
> claiming that settings are disappearing without them doing anything to affect
> them. It would be good to be able to see what PostgreSQL thinks is going on.
>
> * Subscriber adds Severity/BU/Service by ticking the corresponding box in
> subscriber configuration in WEBAPP. This works for some time.
>
> * Subscriber stops receiving selected [tickets].
>
> * Upon checking settings, the selected Severity/BU/Service has been
> unselected.
>
>
> If what you need is a business-level audit trail on specific relations
> (tables) (most probably what you need), then you may find pgaudit very
> useful. Its author, David Steele (dwsteele), also wrote pgbackrest; as a
> tool it works wonders and the support is superb.
>
>
>
> /M.
>
>
>
>
>
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Does pg_stat_get_live_tuples() matter?

2019-04-09 Thread Sherrylyn Branchaw
If a table has 1.8M rows in reality, yet pg_stat_get_live_tuples() returns
1.8K, does it matter to the performance of the database, as long as
pg_class.reltuples is the right order of magnitude?

The query planner seems to use the accurate estimate, but what about the
autovacuum daemon? Or anything else?

In short, is pg_stat_get_live_tuples() (and the views that invoke it) used
by any built-in tooling, or is it just there for the user to do monitoring?

Thanks,
Sherrylyn


Re: Does pg_stat_get_live_tuples() matter?

2019-04-09 Thread Adrian Klaver

On 4/9/19 9:45 AM, Sherrylyn Branchaw wrote:
If a table has 1.8M rows in reality, yet pg_stat_get_live_tuples() 
returns 1.8K, does it matter to the performance of the database, as long 
as pg_class.reltuples is the right order of magnitude?


What version of Postgres?

How are you getting at the 1.8M number?



The query planner seems to use the accurate estimate, but what about the 
autovacuum daemon? Or anything else?


In short, is pg_stat_get_live_tuples() (and the views that invoke it) 
used by any built-in tooling, or is it just there for the user to do 
monitoring?


Thanks,
Sherrylyn



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Does pg_stat_get_live_tuples() matter?

2019-04-09 Thread Sherrylyn Branchaw
What version of Postgres?

Sorry, can't believe I forgot to include that! Postgres 9.6.9.

How are you getting at the 1.8M number?

SELECT COUNT(*)


Re: Unable to Vacuum Large Defragmented Table

2019-04-09 Thread Igal Sapir
Pavel,

On Mon, Apr 8, 2019 at 8:29 AM Pavel Stehule 
wrote:

>
>
> On Mon 8 Apr 2019 at 17:22, Igal Sapir wrote:
>
>> Pavel,
>>
>> On Sun, Apr 7, 2019 at 11:22 PM Pavel Stehule 
>> wrote:
>>
>>>
>>> On Mon 8 Apr 2019 at 7:57, Igal Sapir wrote:
>>>
 David,

 On Sun, Apr 7, 2019 at 8:11 PM David Rowley <
 david.row...@2ndquadrant.com> wrote:

> On Mon, 8 Apr 2019 at 14:57, Igal Sapir  wrote:
> > However, I have now deleted about 50,000 rows more and the table has
> only 119,688 rows.  The pg_relation_size() still shows 31MB and
> pg_total_relation_size() still shows 84GB.
> >
> > It doesn't make sense that after deleting about 30% of the rows the
> values here do not change.
>
> deleting rows does not make the table any smaller, it just creates
> dead rows in the table.  VACUUM tries to release the space used by
> those dead rows and turns it back into free space.  Normal vacuum (not
> FULL) can only shrink the table if completely empty pages are found at
> the end of the table.
>

 ACK


>
> > Attempting to copy the data to a different table results in the out
> of disk error as well, so that is in line with your assessment.  But it
> actually just shows the problem.  The new table to which the data was
> copied (though failed due to out of disk) shows 0 rows, but
> pg_total_relation_size() for that table shows 27GB.  So now I have an
> "empty" table that takes 27GB of disk space.
>
> I think the best thing to do is perform a normal VACUUM on the table
>

 Running VACUUM on the newly created table cleared the 27GB so that's
 good (I was planning to run normal VACUUM but ran FULL).

>>>
>>> you can drop some indexes, then you can run vacuum full, and create
>>> dropped indexes again.
>>>
>>
>> The table does not have any indexes.  It is mostly an append-only table.
>>
>>
>>>
>>>
>>>

> then CREATE EXTENSION IF NOT EXISTS pgstattuple; then do; SELECT *
> FROM pgstattuple('); and the same again on the toast table.
> If your table still contains many dead rows then perhaps an open
> transaction is stopping rows from being turned into free space.


 I am not sure how to read the below.  I see a lot of "free_space" but
 not "dead":

 -[ RECORD 1 ]--+
 ?column?   | primary
 table_len  | 32055296
 tuple_count| 120764
 tuple_len  | 9470882
 tuple_percent  | 29.55
 dead_tuple_count   | 0
 dead_tuple_len | 0
 dead_tuple_percent | 0
 free_space | 20713580
 free_percent   | 64.62
 -[ RECORD 2 ]--+
 ?column?   | toast
 table_len  | 88802156544
 tuple_count| 15126830
 tuple_len  | 30658625743
 tuple_percent  | 34.52
 dead_tuple_count   | 0
 dead_tuple_len | 0
 dead_tuple_percent | 0
 free_space | 57653329312
 free_percent   | 64.92

>>>
>>>
>>> it say, so your table can be reduced about 60%
>>>
>>
>> That's what I thought, and releasing 65% of 84GB would be major here, but
>> unfortunately I am unable to release it because VACUUM FULL requires more
>> space than I currently have available.
>>
>> Perhaps disabling the WAL, if possible, could help VACUUM FULL complete.
>> Or some way to do an in-place VACUUM so that it doesn't write all the data
>> to a new table.
>>
>
> maybe this article can be interesting for you
>
> https://www.depesz.com/2013/06/21/bloat-removal-by-tuples-moving/
>
>
It's very interesting, thank you.

I'm trying now to figure out the pointer on the primary table and the TOAST
table.  The TOAST table has fields like chunk_id and chunk_seq, so I'm
looking for the data in each row of the primary table that points to those
in each toast-ed row.

Any ideas?

Thanks,

Igal

p.s. Unfortunately, pg_repack and pgcompact did not work in my case.  They
show as if there is nothing to do even though I know that there is 2/3
bloat in the TOAST file.


>
>
>
>
>> Thank you,
>>
>> Igal
>>
>>
>>
>>>
>>>


> Once pgstattuples reports that "tuple_len" from the table, its toast
> table
> and all its indexes has been reduced to an acceptable value then you
> should try a VACUUM FULL.  Remember that VACUUM FULL must also write
> WAL, so if WAL is on the same volume, then you'll need to consider
> space required for that when deciding how much data to remove from the
> table.
>

 WAL is on the same volume.  The PGDATA directory is mounted in a Docker
 container.

 Isn't there any way to do an in-place VACUUM or pause the WAL at the
 risk of losing some data if recovery is required?

 There is a catch-22 here.  I can't reclaim the disk space because that
 requires disk space.  Surely I'm not the first one to have encountered tha

Re: Does pg_stat_get_live_tuples() matter?

2019-04-09 Thread Tom Lane
Sherrylyn Branchaw  writes:
> If a table has 1.8M rows in reality, yet pg_stat_get_live_tuples() returns
> 1.8K, does it matter to the performance of the database, as long as
> pg_class.reltuples is the right order of magnitude?

Hmmm ... what was in reltuples again?  Looking at the current code,
it looks like vacuum or autovacuum should set reltuples and n_live_tuples
to the same thing.  n_live_tuples gets adjusted incrementally by
subsequent transactions, and it wouldn't be too surprising if it drifted
away from reality, but it's a bit hard to believe that it could get to be
off by 1000X.  Have you suppressed autovacuum on this table?

We have fooled around with the logic that maintains these numbers, so
maybe it was different in 9.6.9.

Anyway, to answer your question, I don't see anything in the current core
code that pays attention to n_live_tuples.  reltuples definitely does
matter to the planner, and some of the sibling counters like n_dead_tuples
drive autovacuum, but nothing is examining n_live_tuples AFAICS.

regards, tom lane
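For reference, the two counters Tom contrasts can be compared side by side (a sketch; schema-qualify the table name as needed):

```sql
SELECT c.relname,
       c.reltuples::bigint AS planner_estimate,   -- used by the planner
       s.n_live_tup        AS stats_live_tuples,  -- pg_stat_get_live_tuples()
       s.n_dead_tup        AS stats_dead_tuples   -- drives autovacuum
FROM pg_class c
JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.relname = 'my_table';
```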