On Tue, May 17, 2016 at 10:40 AM Alvaro Herrera wrote:
> In 9.4, not really. In 9.5 there's a function mxid_age() that gives you
> the age of a multixact, so you'd grab the oldest from
> pg_database.datminmxid and compute the age of that one. Going from the
> oldest multi to the oldest offset ca
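On 9.5+ that check is a short query (a sketch taken straight from Alvaro's description; mxid_age() does not exist in 9.4):

SELECT datname, mxid_age(datminmxid) AS multixact_age
FROM pg_database
ORDER BY multixact_age DESC;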
On Mon, May 16, 2016 at 6:18 PM Alvaro Herrera wrote:
> Not really. Your best bet is to reduce the
> autovacuum_multixact_freeze_min_age limit, so that vacuums are able to
> get rid of multixacts sooner (and/or reduce
> autovacuum_multixact_freeze_table_age, so that whole-table vacuuming
> takes
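In config terms that is something like the following (note the actual GUC names are vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age; the values below are illustrative, not recommendations):

ALTER SYSTEM SET vacuum_multixact_freeze_min_age = 1000000;     -- default 5000000
ALTER SYSTEM SET vacuum_multixact_freeze_table_age = 50000000;  -- default 150000000
SELECT pg_reload_conf();  -- both take effect on reload, no restart needed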
This is Postgres 9.4.4. Custom settings are [in this gist](https://gist.github.com/skehlet/47c7f92daa0bd3d1a3aee2bb001da140).
This is a new one for me: one of our bigger (~2.2TB) databases started throwing the following error:
> Caused by: org.postgresql.util.PSQLException: ERROR: multixact "membe
Maybe my custom settings are relevant. Here they are in a gist:
https://gist.github.com/skehlet/08aeed3d06f1c35bc780
On Thu, Mar 17, 2016 at 11:47 AM Steve Kehlet wrote:
> Sorry, seems like such a noob problem, but I'm stumped. This is postgres
> 9.4.5. I'll post my custom set
Sorry, seems like such a noob problem, but I'm stumped. This is postgres
9.4.5. I'll post my custom settings if desired but I don't think they're
needed.
We recently had an issue where the autovacuumer wasn't starting because
postgres couldn't resolve the hostname 'localhost' (we had bad perms on
adjusting our autovacuum settings and looking into
improving that blocked/blocking query.
On Fri, Aug 28, 2015 at 1:44 PM Steve Kehlet wrote:
> On Fri, Aug 28, 2015 at 1:23 PM Jeff Janes wrote:
>
>> You should RESET the autovacuum_vacuum_scale_factor for the table. You
>> d
On Fri, Aug 28, 2015 at 1:23 PM Jeff Janes wrote:
> You should RESET the autovacuum_vacuum_scale_factor for the table. You
> don't want it to be vacuumed aggressively, just autoanalyzed aggressively.
> Sorry if my copy-paste error led you astray on that.
>
No problem, done, thank you.
There is
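For reference, the RESET Jeff describes is a one-liner ("mytable" stands in for the actual table):

ALTER TABLE mytable RESET (autovacuum_vacuum_scale_factor);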
On Fri, Aug 28, 2015 at 11:18 AM Tom Lane wrote:
> It looks like a VACUUM will do the cleanup during the first ginbulkdelete
> call, so you could probably handle this by running a manual "VACUUM
> VERBOSE" with the smallest possible maintenance_work_mem, and canceling it
> as soon as you see some
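In other words, a sketch like this ("mytable" is a placeholder; 1MB is the smallest value maintenance_work_mem accepts):

SET maintenance_work_mem = '1MB';
VACUUM VERBOSE mytable;
-- cancel it once the verbose output shows the GIN pending-list cleanup has run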
On Fri, Aug 28, 2015 at 10:27 AM Tom Lane wrote:
> Steve Kehlet writes:
> > Yeah head scratch. That is really weird. Still gathering data, any way I
> > can see for sure when these cleanup cycles are occurring?
>
> contrib/pgstattuple has a function that can report th
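Presumably the function meant here is pgstatginindex() from contrib/pgstattuple, which reports the size of a GIN index's pending list (the index name is a placeholder):

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT pending_pages, pending_tuples FROM pgstatginindex('my_gin_index');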
On Fri, Aug 28, 2015 at 12:10 PM Jeff Janes wrote:
> Did you change the system-wide autovacuum_analyze_scale_factor? If so,
> don't do that. You can use a table's storage parameters to set a custom
> autovacuum_analyze_scale_factor just for individual tables. So just the
> table with the troub
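Concretely, something like this instead of a global change (table name and value are illustrative):

ALTER TABLE mytable SET (autovacuum_analyze_scale_factor = 0.001);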
On Fri, Aug 28, 2015 at 10:42 AM Jeff Janes wrote:
> Or what I usually do in a case like this is clone the database to a
>>> test/QA server then run pg_upgrade to get that running on 9.5, then hope
>>> what I learn transfers back to production.
>>
>>
I'll save this great idea.
> But the symptom
On Fri, Aug 28, 2015 at 10:11 AM Tom Lane wrote:
> Steve Kehlet writes:
> > This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4
> > hours) slowness with UPDATEs on a table that has a GIN index on a JSONB
> > column. During these episodes, UPDATEs that
This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4
hours) slowness with UPDATEs on a table that has a GIN index on a JSONB
column. During these episodes, UPDATEs that normally take < 1sec take
upwards of 2-4 minutes, and all finish simultaneously, like they were all
blocked on
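While an episode is underway, a query like this can show what the stalled UPDATEs are waiting on (a sketch using the 9.4-era pg_stat_activity columns; the boolean "waiting" column was later replaced by wait_event in 9.6):

SELECT pid, waiting, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;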
Just wanted to report that I rolled back my VM to where it was with 9.4.2
installed and it wouldn't start. I installed 9.4.4 and now it starts up
just fine:
> 2015-06-12 16:05:58 PDT [6453]: [1-1] LOG: database system was shut down at 2015-05-27 13:12:55 PDT
> 2015-06-12 16:05:58 PDT [6453]: [2-1
On Fri, Jun 5, 2015 at 11:47 AM Andres Freund wrote:
> But I'd definitely like some
> independent testing for it, and I'm not sure if that's doable in time
> for the wrap.
>
I'd be happy to test on my database that was broken, for however much that
helps. It's a VM so I can easily revert back as
On Fri, May 29, 2015 at 12:08 PM Robert Haas wrote:
> OK, here's a patch.
>
I grabbed branch REL9_4_STABLE from git, and Robert got me a 9.4-specific
patch. I rebuilt, installed, and postgres started up successfully! I did a
bunch of checks, had our app run several thousand SQL queries against
On Wed, May 27, 2015 at 3:21 PM Alvaro Herrera wrote:
> I think a patch like this should be able to fix it ... not tested yet.
>
Thanks Alvaro. I got a compile error, so I looked for other uses of SimpleLruDoesPhysicalPageExist and added MultiXactOffsetCtl; does this look right?
+ (!InRecovery |
I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
just dropped new binaries in place) but it wouldn't start up. I found this
in the logs:
waiting for server to start
2015-05-27 13:13:00 PDT [27341]: [1-1] LOG: database system was shut down at 2015-05-27 13:12:55 PDT
20
On Tue, May 26, 2015 at 7:27 PM Alvaro Herrera wrote:
> See the docs about the freeze max age storage parameter -- the per-table
> setting can decrease the global setting but not increase it.
Thanks Alvaro, that explains it. I found it in the docs: "Note that
autovacuum will ignore attempts to
Hello, I'd like to postpone an "autovacuum: VACUUM public.mytable (to
prevent wraparound)" and handle it manually at another time. I thought I
could set these storage parameters on the large table in question
("mytable") like this:
ALTER TABLE mytable SET (
autovacuum_freeze_min_age=1000,
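The statement above is cut off; a hedged reconstruction of its shape, where everything after the first parameter is an illustrative guess (and, per Alvaro's reply further up, per-table values cannot raise the global limits):

ALTER TABLE mytable SET (
    autovacuum_freeze_min_age = 1000,           -- from the snippet above
    autovacuum_freeze_table_age = 800000000,    -- illustrative guess
    autovacuum_freeze_max_age = 800000000       -- illustrative guess; values above the global setting are ignored
);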
On Wed, May 6, 2015 at 7:24 PM Jeff Janes wrote:
> I've booked-marked these but haven't really looked into them to any
> extent. It would be awesome if you put the SQL one somewhere on
> http://wiki.postgresql.org. That way it is easier to find, and anyone
> who finds it can contribute explanat
On Wed, May 6, 2015 at 9:46 AM Jeff Janes wrote:
> vacuum_freeze_table_age controls when it promotes a vacuum *which is
> already going to occur* so that it scans the whole table. It doesn't
> specially schedule a vacuum to occur. When those tables see enough
> activity to pass autovacuum_vacuu
Hello, recently one of my tables needed a vacuum (to prevent wraparound)
and of course it happened at a really bad time, so since then I've been
learning about how Transaction ID Wraparound works and its associated
parameters.
I'm trying this query to see how close my tables are to hitting the
vac
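A typical query of that sort (a guess at the general shape; the original is cut off):

SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::int - age(c.relfrozenxid) AS xids_remaining
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;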
Thank you for the guidance! So far so good with
max_standby_archive/streaming_delay, no cancellations.
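For anyone searching later, a sketch of the relevant lines in postgresql.conf on the standby (values are illustrative; reload the config after changing them):

max_standby_streaming_delay = 600s    # how long replay may wait behind conflicting queries, streaming WAL
max_standby_archive_delay = 600s      # same, for WAL restored from the archive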
Our queries on our Standby are getting cancelled and so we're investigating
how to prevent this. The standby is for running periodic reporting queries only; we don't care if it falls behind a little bit, since we just set this guy up to reduce the load on the Primary.
While researching there appear to b
On Thu, Jul 3, 2014 at 10:04 AM, David Wall wrote:
> I'm running PG 9.3.4 on CentOS 6.4 and noted that backing up my database
> takes much longer than restoring it.
>
Are you dumping to a slower disk/storage than the database is using?
What does top -c look like during the dump vs. the restore?
>
> The cpu utilization increases to nearly 100%, in user space, and stays
> there, until the database is restarted.
>
> postgres 1323 47.1 2.3 6667212 6087388 ? Rs 00:00 276:00 \_ postgres: bitbucket bitbucket 172.17.10.1(5114) SELECT
I see you have long-query logging enabled, what was th
On Tue, Apr 8, 2014 at 6:32 PM, Tom Lane wrote:
> Well, here's the problem:
> > ExprContext: 812638208 total in 108 blocks; 183520 free (171 chunks); 812454688 used
>
> So something involved in expression evaluation is eating memory.
> Looking at the query itself, I'd have to bet on t
I should add I had to move testing to a different VM, with less RAM and
different tunings; they are here: https://gist.github.com/skehlet/10207086
On Tue, Apr 8, 2014 at 4:08 PM, Steve Kehlet wrote:
> On Tue, Apr 8, 2014 at 12:23 PM, Tom Lane wrote:
>
> Once you've got a ulim
On Tue, Apr 8, 2014 at 12:23 PM, Tom Lane wrote:
> Once you've got a ulimit in place so that malloc eventually fails with
> ENOMEM, the backend process should print a memory context dump on stderr
> when it hits that. Make sure your logging setup captures the process
> stderr someplace (logging_
>
> Did you check if your OS allows you to limit memory usage per user /
> process basis?
>
I'll definitely dig more into this. I found some stuff saying it isn't as
easy as one would expect, i.e. not just a ulimit and done. Or maybe it is,
in this case. I will play around.
> Did you either prof
I'm running into some scenarios where errant postgres queries are consuming
all memory and bringing down production boxes. I'm running Postgres 9.3.4
on CentOS 5.x VMs under XenServer. My custom config is here:
https://gist.github.com/skehlet/9984666
The incidents may be related to reports of an a