Hi Andreas,
Yes, I'm facing a problem because of this huge WAL (archive log) generation. It
is seriously consuming a lot of disk space, close to 50 GB per day, even
though the DMLs don't have that much impact on WAL generation.
Previously the archive log size was nearly 2 to 3 GB a day. Now with t
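(For reference, one way to quantify the WAL volume directly; a hedged sketch
for 9.x, since the functions were renamed pg_current_wal_lsn() and
pg_wal_lsn_diff() in version 10, and the LSN literal below is only a
placeholder:)

    SELECT pg_current_xlog_location();  -- note the returned LSN, e.g. '2E/891A0000'
    -- ...wait a known interval, then compute the bytes of WAL written since:
    SELECT pg_xlog_location_diff(pg_current_xlog_location(), '2E/891A0000');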
Hello all,
So I have a view for which I can select all rows in about 3 s (returns ~80k
rows), but if I add a WHERE clause on a column, it takes 300+ s to return
the ~8k rows.
From the plan, I see that it expects to return only 1 row and so chooses to
perform some nested loops. Of course, I did ru
Hi Laurenz,
Any update? This is continuously hitting our production database.
Regards,
Rambabu Vakada,
PostgreSQL DBA.
On Tue, Jan 23, 2018 at 6:12 PM, Rambabu V wrote:
> Hi Laurenz,
>
> The OOM error is not recorded at the server level; it is only recorded in
> our database logs.
>
> below is the err
On 23.01.2018 at 14:59, Rambabu V wrote:
             total       used       free     shared    buffers     cached
Mem:           58G        58G       358M        16G       3.6M        41G
-/+ buffers/cache:        16G        42G
Swap:         9.5G       687M       8.9G
postgresql.conf parameters:
On 23.01.2018 at 12:51, pavan95 wrote:
Hi Andreas,
Yes, I'm facing a problem because of this huge WAL (archive log) generation. It
is seriously consuming a lot of disk space, close to 50 GB per day, even
though the DMLs don't have that much impact on WAL generation.
Previously the archive_lo
Yes, so many WALs are continuing to be produced.
We are deleting the WALs after a backup of the database.
Yes, archiving mode is on. And the warning message in the log file is
"checkpoints are frequently occurring (1 second apart). Consider increasing
the checkpoint_segments parameter".
My doubt is previously
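(One hedged way to gauge how hard checkpoints are being driven is
pg_stat_bgwriter, which counts timed versus requested checkpoints:)

    SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;
    -- A high checkpoints_req relative to checkpoints_timed suggests WAL
    -- volume, not checkpoint_timeout, is forcing the checkpoints.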
On Tue, Jan 23, 2018 at 7:39 AM, Pavan Teja
wrote:
> " checkpoints are frequently occurring (1second apart). Consider
> increasing checkpoint_segements parameter".
>
The custom on these lists is to bottom- or inline-post.
This tends to appear when someone decides to write a load script of the
f
Please don't top-post.
On 23.01.2018 at 15:39, Pavan Teja wrote:
Yes, so many WALs are continuing to be produced.
You have to identify why. Please check pg_stat_activity for the following
(a sample query follows the list):
* autovacuum
* large inserts
* large updates
* large deletes
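A minimal sketch of such a check (column names as of 9.2+; adjust the filter
to your version and workload):

    SELECT pid, usename, state, xact_start, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY xact_start;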
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Suppo
On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:
> Hello all,
>
> So I have a view for which I can select all rows in about 3 s (returns ~80k
> rows), but if I add a WHERE clause on a column, it takes 300+ s to return
> the ~8k rows.
>
> From the plan, I see that it expects to re
Hi David,
If so, what needs to be done in order to stabilize this issue?
Thanks in advance.
Regards,
Pavan
On Jan 23, 2018 8:15 PM, "David G. Johnston"
wrote:
> On Tue, Jan 23, 2018 at 7:39 AM, Pavan Teja
> wrote:
>
>> " checkpoints are frequently occurring (1second apart). Consider
>>
On 23.01.2018 at 16:20, Pavan Teja wrote:
Hi David,
If so, what needs to be done in order to stabilize this issue?
Don't top-post ;-)
You can't prevent the generation of WAL (apart from using unlogged
tables, but I'm sure that won't be your solution).
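For completeness, this is what that would look like; illustrative only, since
unlogged tables skip WAL but are truncated after a crash and are not
replicated:

    CREATE UNLOGGED TABLE staging_data (
        id      integer,
        payload text
    );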
Regards, Andreas
--
2018-01-23 16:18 GMT+01:00 Justin Pryzby :
> On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:
>
>> Here is the default plan :
>
> Can you resend without line breaks or paste a link to explain.depesz?
I hope it's better like that. I've attached it too, just in case.
>
> The proble
I've had a look at the plan with pgAdmin, and I think the problem is
rather here:
-> Sort (cost=4997.11..4997.11 rows=1 width=69) (actual
time=27.427..28.896 rows=7359 loops=1)
Sort Key: amendment.id
Sort Method: quicksort Memory: 1227kB
-> Nested Loop (cost=183.44..4997.1
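One hedged way to test whether the nested loop choice (driven by that rows=1
estimate) is really the problem is to disable nested loops for the session
and re-run the query; a diagnostic, not a fix:

    SET enable_nestloop = off;
    EXPLAIN ANALYZE SELECT ...;  -- re-run the slow query here
    RESET enable_nestloop;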
Hello,
Is there any way to check how many transactions have happened to date, from
the point the database was created and started accepting transactions?
The reason for this doubt is to find out whether my database has crossed 2
million transactions or not.
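Two hedged ways to approximate this: txid_current() returns the current
transaction id (note that calling it consumes one), and age(datfrozenxid)
shows how many transactions each database is past its frozen cutoff:

    SELECT txid_current();
    SELECT datname, age(datfrozenxid) AS xids_since_freeze FROM pg_database;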
Strangely, I had an interesting observation when
Hi,
I have to provide a summary of how much space is used in the large objects
table, based on a GROUP BY condition.
I would expect an index-only scan on the large object table, but a full seq
scan that lasts for hours is performed.
BigSql distribution
PostgreSQL 9.6.5 on x86_64-pc-mingw64, comp
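A sketch of the kind of query meant here, against the system catalog
(pg_largeobject stores each object as loid/pageno/data rows of roughly 2 kB
per page; reading it may require superuser privileges):

    SELECT loid,
           count(*)                AS pages,
           sum(octet_length(data)) AS bytes
    FROM pg_largeobject
    GROUP BY loid
    ORDER BY bytes DESC;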
On Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja
wrote:
> Yes, so many WALs are continuing to be produced.
>
> We are deleting the WALs after a backup of the database.
>
> Yes, archiving mode is on. And the warning message in the log file is
>
> "checkpoints are frequently occurring (1 second apart). Consider
> i
On Jan 23, 2018 9:37 PM, "Claudio Freire" wrote:
On Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja
wrote:
> Yes, so many WALs are continuing to be produced.
>
> We are deleting the WALs after a backup of the database.
>
> Yes, archiving mode is on. And the warning message in the log file is
>
> "checkpoints a
In my opinion this is the Achilles heel of the Postgres optimizer. Row
estimates should never return 1 unless the estimate is provably <= 1. This
is particularly a problem with join estimates. A dumb fix for this is to
change clamp_join_row_est() to never return a value < 2. This fixes most of
m
On Tue, 2018-01-23 at 19:29 +0530, Rambabu V wrote:
> Any update? This is continuously hitting our production database.
>
> > The OOM error is not recorded at the server level; it is only recorded
> > in our database logs.
> >
> > below is the error message:
> >
> > cat PostgreSQL-2018-01-23_06.csv|
On Tue, Jan 23, 2018 at 5:59 AM, Rambabu V wrote:
> > cat PostgreSQL-2018-01-23_06.csv|grep FATAL
What about ERROR, not just FATAL? Or grep for "out of memory"?
>> $ free -mh
>>              total   used   free  shared  buffers  cached
>> Mem:           58G    58G
On Tue, Jan 23, 2018 at 1:16 PM, Pavan Teja
wrote:
> On Jan 23, 2018 9:37 PM, "Claudio Freire" wrote:
>
>
>
> On Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja
> wrote:
>
>> Yes, so many WALs are continuing to be produced.
>>
>> We are deleting the WALs after a backup of the database.
>>
>> Yes, archiving mo
Hi Rambabu,
If you are finding some sessions, then of course your database is
perfectly alright, as sessions won't consume any memory.
Kindly specify the issue briefly.
Regards,
Pavan
Hi Claudio,
We didn't configure any replication on our production server. Which strace
are you talking about?
We observed closely that the so-called archive logs are generated only at
the 9th and 39th minute of each hour, even when nobody is connecting from
the applicati
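(If the server is 9.4 or newer, one hedged place to confirm that timing
pattern is pg_stat_archiver:)

    SELECT archived_count, last_archived_wal, last_archived_time, failed_count
    FROM pg_stat_archiver;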
Thomas Kellerer wrote on 19.01.2018 at 17:48:
>
> I wonder what performance implications that has on a server with
> around 50-100 active connections (through pgBouncer).
>
> My understanding of the documentation is that Postgres will work
> just fine if we lower the limit; it simply releases t
On Tue, Jan 23, 2018 at 10:54:01PM -0700, pavan95 wrote:
> If you are finding some sessions, then of course your database is
> perfectly alright, as sessions won't consume any memory.
Those have a cost as well when building transaction snapshots. Too many
of them is no good either; let's not forg
Then we should check whether there are any idle sessions with uncommitted
transactions. Those might be the culprits.
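A hedged sketch of that check (state names per pg_stat_activity from 9.2
onward):

    SELECT pid, usename, xact_start, state_change, left(query, 60) AS last_query
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
    ORDER BY xact_start;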
Regards,
Pavan
On Wed, Jan 24, 2018 at 3:54 AM, pavan95
wrote:
> Hi Claudio,
>
> We didn't configure any replication on our production server. Which strace
> are you talking about?
>
This one: https://linux.die.net/man/1/strace
You can attach it to a process (assuming you have the necessary
permissions) and i