ches debating whether we
could change that, but they never really got anywhere afair.
Greetings,
Andres Freund
On 2018-02-28 13:15:45 +1300, David Rowley wrote:
> On 28 February 2018 at 11:11, Andres Freund wrote:
> > I'm fairly certain that only vacuum and table rewrites like cluster sets
> > all-visible,
>
> I don't think the pages are set all visible again after a rewrite.
3 8230 23 14:17 ?00:00:33 postgres: 11/main: parallel
> worker for PID 8262
> ...
Looks like we're not doing a pgstat_report_activity() in the workers?
Any argument for not doing so?
Greetings,
Andres Freund
t around, so perhaps
that's happening somewhere outside of the file?
> As you can see, recv has received a non-zero value in flags
Well, no, not really. recv() is called with flags = 0; it's just libc's
internal implementation that appears to be showing up weirdly, afaict.
Greetings,
Andres Freund
Hi,
On 2018-03-05 17:57:51 -0300, Andre Oliveira Freitas wrote:
> Ok, on one hand glad to hear that is an artifact, on the other hand
> that means the issue is still out there...
>
> I'm not a maintainer of Freeswitch, I am a user of it. However I am
> trying to supply the maintainers with any r
be someone has got other experience with RAM of this size and those
> settings?
With a halfway modern PG I'd suggest instead tuning the postgres settings
that control flushing. That leaves files like temp sorting in memory for
longer, while flushing things in a controlled manner for other sources of
writes. See the *_flush_after settings.
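For illustration only (the values below are placeholders, not recommendations),
those settings can be adjusted e.g. via ALTER SYSTEM:
-- sketch: values are illustrative, tune for your hardware/workload
ALTER SYSTEM SET checkpoint_flush_after = '256kB';
ALTER SYSTEM SET bgwriter_flush_after = '512kB';
ALTER SYSTEM SET backend_flush_after = '256kB';
SELECT pg_reload_conf();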
Greetings,
Andres Freund
On 2018-03-06 18:59:01 -0700, pinker wrote:
> Andres Freund wrote
> > With a halfway modern PG I'd suggest instead tuning the postgres settings
> > that control flushing. That leaves files like temp sorting in memory for
> > longer, while flushing things in a controlled manner for o
work in this way, that's
> why i'm asking.
The subject said logical decoding, not replication. There's a lot of
change data capture type workloads where decoding from the standby is
quite useful. And the design definitely would work for that; we
explicitly took that into consideration.
Greetings,
Andres Freund
nxid FROM pg_class WHERE oid =
$1)) -- xid cutoff filter
$$;
SELECT * FROM check_rel('pg_authid') LIMIT 100;
and then display all items for one of the affected pages like
SELECT * FROM heap_page_items(get_raw_page('pg_authid', 34343));
Alvaro:
- Hm, we talked about code adding context for these kinds of errors,
right? Is that just skipped for csvlog?
- Alvaro, does the above check_rel() function make sense?
Greetings,
Andres Freund
_page_items(get_raw_page('pg_authid', 34343));
> >
> >
> > Alvaro:
> > - Hm, we talked about code adding context for these kinds of errors,
> > right? Is that just skipped for csvlog?
> > - Alvaro, does the above check_rel() function make sense?
> >
> > Greetings,
> >
> > Andres Freund
> >
>
> The function does NOT show any issue with either of those tables.
Uh, huh? Alvaro, do you see a bug in my query?
Greetings,
Andres Freund
318868
> (25 rows)
Could you show the contents of those two pages with a query like I had
in an earlier email?
Greetings,
Andres Freund
temporarily not visible as corrupted.
Greetings,
Andres Freund
On April 9, 2018 7:51:19 PM PDT, Peter Geoghegan wrote:
>On Mon, Apr 9, 2018 at 6:56 PM, Alexandre Arruda
>wrote:
>> (... and all other indexes returns null too)
>>
>> I tried with bt_index_check too. Same results.
>
>That's interesting, because it tells me that you have a table that
>appears t
lyze verbose fn06t;
> INFO: vacuuming "public.fn06t"
> ERROR: found multixact 76440919 from before relminmxid 122128619
What does the function in
https://www.postgresql.org/message-id/20180319181723.ugaf7hfkluqyo...@alap3.anarazel.de
say about your table?
Could you post pg_controldata output and
SELECT * FROM pg_class WHERE oid = 'public.fn06t'::regclass;
?
Greetings,
Andres Freund
On April 10, 2018 11:51:40 PM PDT, Pavan Deolasee
wrote:
>On Wed, Apr 11, 2018 at 8:31 AM, Alexandre Arruda
>wrote:
>
>> 2018-04-10 19:09 GMT-03:00 Peter Geoghegan :
>> > On Tue, Apr 10, 2018 at 4:31 AM, Alexandre Arruda
>
>> wrote:
>> >> Actualy, I first notice the problem in logs by autovacu
> open_datasync etc, is switching to one of these options something which
> should be considered or is this issue low level enough that all sync
> methods are impacted?
No, the issue is largely about datafiles whereas the setting you refer
to is about the WAL.
Greetings,
Andres Freund
it was renamed? No one has the old WAL file open, as far as I know.
Because it's a COW filesystem that doesn't overwrite in place. That's
how snapshots etc are implemented.
Greetings,
Andres Freund
CONTEXT: SQL statement "SELECT 1
> FROM ONLY "public"."clients" x WHERE "c_id" OPERATOR(pg_catalog.=) $1 FOR
> KEY SHARE OF x"
> 2018-05-04 04:05:20 PDT [62028]: [84-1] STATEMENT: INSERT /*
>
> Is there anything I'm not thinking of?
Probably some part of your application, or a function you're calling
from there, is setting a different timeout.
Greetings,
Andres Freund
cluster that has this error. We
> are monitoring transaction wraparound with the check_postgres.pl
> script, the check is still running fine and no alert was given at all
> since the cluster has been running.
>
> Thank you in advance for any answer.
Could you report the result of
select ctid, xmin, xmax from pg_authid ;
and
CREATE EXTENSION pageinspect;
SELECT * FROM heap_page_items(get_raw_page('pg_authid', 0));
Thanks.
Greetings,
Andres Freund
ages patch + freeze +
> shared catalog combination, but cannot prove it yet.
>
> Looking for possible course of action.
> Probably the simplest fix - drop and recreate these 6 affected users, but so
> far I'm willing to spend some time researching this issue.
Could you use pageinspect to get the infomasks for the affected tuples?
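Something like this would show them (page 0 here is just a placeholder for an
affected page):
CREATE EXTENSION IF NOT EXISTS pageinspect;
SELECT lp, t_xmin, t_xmax, t_infomask, t_infomask2
FROM heap_page_items(get_raw_page('pg_authid', 0));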
Greetings,
Andres Freund
3429136
>
> So it's interesting where the value of " from before relfrozenxid 248712603"
> comes from.
Hm. That's indeed odd. Could you get a backtrace of the error with "bt
full"?
Greetings,
Andres Freund
Hi,
On 2018-05-22 22:18:15 +0300, Maxim Boguk wrote:
> On Tue, May 22, 2018 at 9:47 PM, Andres Freund wrote:
> > > select relfrozenxid from pg_class where relname='pg_authid';
> > > relfrozenxid
> > > --
> > >2863
"history" about that postgres instance? What
version of postgres was it run on earliest? Was it ever pg_upgraded?
Were there any OS crashes? Other hardware issues? Was the cluster ever
used with fsync=off or full_page_writes=off?
Greetings,
Andres Freund
are you
creating / updating database users / roles?
Greetings,
Andres Freund
On 2018-05-22 16:39:58 -0700, Andres Freund wrote:
> Hi,
>
> On 2018-05-23 00:04:26 +0200, Paolo Crosato wrote:
> > I managed to recover the log of the first time we run into the issue, the
> > error was the same but on template1:
> >
> > May 8 11:26:46 xxx p
ctionId if the tuple is frozen (note the
HeapTupleHeaderXminFrozen() within).
Greetings,
Andres Freund
On 2018-05-24 13:30:54 -0700, Andres Freund wrote:
> On 2018-05-24 13:08:53 -0400, Alvaro Herrera wrote:
> > Hmm .. surely
> >
> > diff --git a/src/backend/access/heap/heapam.c
> > b/src/backend/access/heap/heapam.c
> > index 5016181fd7..5d7fa1fb45 100644
esql.org/message-id/1527193504642.36340%40amazon.com
Greetings,
Andres Freund
On 2018-05-24 17:13:11 -0400, Alvaro Herrera wrote:
> On 2018-May-24, Andres Freund wrote:
>
> > On 2018-05-24 16:49:40 -0400, Alvaro Herrera wrote:
> > > BTW is it just a coincidence or are all the affected tables pg_authid?
> > > Maybe the problem is shared rela
On 2018-05-24 16:46:24 -0400, Alvaro Herrera wrote:
> On 2018-May-24, Andres Freund wrote:
>
> > On 2018-05-24 13:08:53 -0400, Alvaro Herrera wrote:
> > > Hmm .. surely
>
> > > xid = HeapTupleHeaderGetXmin(tuple);
> > >
Hi,
I think I found the bug, and am about to post a fix for it below:
https://postgr.es/m/20180525203736.crkbg36muzxrj...@alap3.anarazel.de.
Greetings,
Andres Freund
Hi,
On 2018-05-29 07:54:52 +0200, Thomas Kellerer wrote:
> But neither the Linux binaries nor the Windows binaries were compiled with
> the --with-llvm option
> (will JITting be possible with Windows at all?)
Not in 11.
Greetings,
Andres Freund
; "show jit" gives me: "on"
You can check whether support actually could be loaded with SELECT
pg_jit_available();
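For example (a sketch of what to run and what it tells you):
SHOW jit;                   -- can be 'on' even without the provider installed
SELECT pg_jit_available(); -- false if jit is off or the provider can't be loaded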
Greetings,
Andres Freund
st me this
> will make Postgres go faster" would on a code change.
You do trust us to run code on your systems without having read every
line.
Greetings,
Andres Freund
On 2018-06-08 12:38:03 -0500, Jeremy Finzel wrote:
> On Tue, Jun 5, 2018 at 8:42 PM, Alexandre Arruda wrote:
>
> > Em seg, 28 de mai de 2018 às 16:44, Andres Freund
> > escreveu:
> > >
> > > Hi,
> > >
> > > I think I found the bug, and am a
gest
replacing the warning with a PANIC and then using the debugger to look
at the corefile generated (when using an appropriate ulimit). It's not
immediately obvious why there'd not be correct knowledge about the
oldest database around. If you do so, please include 'p *ShmemVariableCache'
output.
Greetings,
Andres Freund
;
if (ControlFile.checkPointCopy.oldestXid <
FirstNormalTransactionId)
ControlFile.checkPointCopy.oldestXid +=
FirstNormalTransactionId;
ControlFile.checkPointCopy.oldestXidDB = InvalidOid;
}
but we have a codepath that doesn't check for oldestXidDB being
InvalidOid. Not great.
Greetings,
Andres Freund
On 2018-06-11 13:14:12 -0400, Tom Lane wrote:
> Andres Freund writes:
> > I suspect the issue is that pg_resetwal does:
> > if (set_xid != 0)
> > {
> > ControlFile.checkPointCopy.nextXid = set_xid;
>
> > /*
> >
on instead of the peculiar hex-slash-hex representation?
What do you want to do with the LSN?
Greetings,
Andres Freund
one... It does sound like a
likely explanation of your issue. Hard to tell without investigating a
*lot* more closely than I realistically can do remotely. It seems quite
possible to be the cause - I'd strongly suggest upgrading to prevent
further occurrences, or at least excluding it as a cause.
Greetings,
Andres Freund
t; (unlikely-to-happen) future changes.
The reason to not use int64 is that it's signed. LSNs are
unsigned. Therefore you couldn't represent all LSNs without wrapping
into negative values.
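To illustrate (a minimal sketch): the pg_lsn type covers the full unsigned
64-bit range, which a signed int64 could not:
SELECT 'FFFFFFFF/FFFFFF00'::pg_lsn;                  -- valid LSN above 2^63
SELECT 'FFFFFFFF/FFFFFF00'::pg_lsn - '0/0'::pg_lsn;  -- difference as numeric, no overflow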
Greetings,
Andres Freund
Hi,
On 2018-06-13 15:55:05 +0530, Pavan Teja wrote:
> Too much of shared buffers allocation also causes problem. Read the
> documentation.
That's not in the docs.
- Andres
Hi,
On 2018-06-13 12:55:27 +0300, Vadim Nevorotin wrote:
> I have a very strange problem. I'm using PostgreSQL 9.6 with PostGIS 2.3
> (both from Debian Stretch repos) to store DB for OSM server (but actually
> it doesn't matter). And I've noticed, that on each new connection to DB
> first query is
Hi,
On 2018-06-13 10:49:39 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2018-06-13 12:55:27 +0300, Vadim Nevorotin wrote:
> >> I have a very strange problem. I'm using PostgreSQL 9.6 with PostGIS 2.3
> >> (both from Debian Stretch repos) to store DB for
	 * won't kill our children...
	 */
	fflush(stdout);
	fflush(stderr);
	abort();
}
Greetings,
Andres Freund
I think
> it could be related to Postgres and the number of schema/tables in the
> database, that's why I post this issue here.
There've been improvements made since 9.3. Upgrade.
Greetings,
Andres Freund
ng what type of performance regression this
> would show on something like ext4 (if any).
It's a *massive* regression on ext4 & xfs. You can very trivially
compare the performance of a new cluster (which doesn't have files to
recycle) against one that's running for a while.
Greetings,
Andres Freund
etter
solutions for what you're doing.
> Can I increase this number somehow?
It's not impossible, but it's not entirely trivial either. The relevant
variables currently are 16bit wide, and the limit is close to the max
for that.
Greetings,
Andres Freund
On 2018-06-25 13:46:06 +0530, Akshaya Acharya wrote:
> On Mon, 25 Jun 2018 at 13:40, Andres Freund wrote:
>
> > Hi,
> >
> > On 2018-06-25 13:02:37 +0530, Akshaya Acharya wrote:
> > > Hello.
> > >
> > >
> > > Please
an't reproduce it either on 10.4, 10-current, or master. Did
you build from source? Packages? Any extensions? Is there anything
missing from the above instruction to reproduce this?
Greetings,
Andres Freund
ations to an
indexes' "owning" relation in CacheInvalidateHeapTuple()?
	else if (tupleRelId == IndexRelationId)
	{
		Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);

		/*
		 * When a pg_index row is updated, we should send out a relcache inval
		 * for the index relation. As above, we don't know the shared status
		 * of the index, but in practice it doesn't matter since indexes of
		 * shared catalogs can't have such updates.
		 */
		relationId = indextup->indexrelid;
		databaseId = MyDatabaseId;
	}
Greetings,
Andres Freund
Hi,
On 2018-07-09 09:59:58 -0700, Peter Geoghegan wrote:
> On Thu, Jun 28, 2018 at 8:02 AM, Andres Freund wrote:
> > I believe this happens because there's currently no relcache
> > invalidation registered for the main relation, until *after* the index
> > is b
Hi,
On 2018-07-23 02:23:45 -0500, Ron wrote:
> We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that
> needs to be migrated to a new data center and then restored to v9.6.9.
Have you considered using pg_upgrade instead?
Greetings,
Andres Freund
On 2018-07-23 09:17:41 -0500, Ron wrote:
> On 07/23/2018 09:11 AM, Andres Freund wrote:
> > Hi,
> >
> > On 2018-07-23 02:23:45 -0500, Ron wrote:
> > > We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that
> > > needs to be migrat
snapshots -printf . | wc -c
> 12727975
Ugh, something absolutely clearly isn't right here.
> It looks like ZFS simply can't handle this many files: `ls` etc. hang
> forever.
ls probably works in general, it's just the sorting that makes it
hang. Should work with -f or such.
Greetings,
Andres Freund
On 2018-07-24 12:22:24 -0500, Dimitri Maziuk wrote:
> On 07/24/2018 12:21 PM, Alvaro Herrera wrote:
> >
> > Are you serious about us trying to diagnose a bug with this description?
>
> What do you want to know, exactly?
A recipe that we can follow to reproduce the issue
the period it cannot stream properly, it will
> eventually recover?
You might want to look into replication slots to ensure the primary
keeps the necessary segments, but not more, around. You might also want
to look at wal_compression, to reduce the bandwidth usage.
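A rough sketch of both (the slot name is a placeholder; primary_slot_name on
the standby then has to point at it):
-- on the primary
SELECT pg_create_physical_replication_slot('standby1_slot');
ALTER SYSTEM SET wal_compression = on;
SELECT pg_reload_conf();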
Greetings,
Andres Freund
On 2018-07-24 14:02:26 -0500, Dimitri Maziuk wrote:
> On 07/24/2018 01:43 PM, Andres Freund wrote:
> > On 2018-07-24 12:22:24 -0500, Dimitri Maziuk wrote:
> >> On 07/24/2018 12:21 PM, Alvaro Herrera wrote:
> >>>
> >>> Are you serious about us tryi
ISTM
> > there's some risks that it'd cause issues. Will you tackle this?
>
> Okay.
Any progress on that?
Peter, given that your patch made this more likely, and that you're a
committer these days, I'm opening an open items entry and assigning it to
you. Does that sound ok?
Greetings,
Andres Freund
Hi,
On 2018-06-28 08:02:10 -0700, Andres Freund wrote:
> I believe this happens because there's currently no relcache
> invalidation registered for the main relation, until *after* the index
> is built. Normally it'd be the CacheInvalidateRelcacheByTuple(tuple) in
> index_up
On 2018-07-25 19:27:47 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2018-06-28 08:02:10 -0700, Andres Freund wrote:
> >> I wonder why we don't just generally trigger invalidations to an
> >> indexes' "owning" relation in CacheInvalidateHe
... ad infinitum ...
> >
> > Would "permission denied" be relevant?
>
> Logical decoding is something I am still learning. The "permission denied"
> would to me be relevant only to the extent that it seems to be provoking:
Yes, it looks related. Looks like logical rep was trying to perform the
initial sync of those tables, and couldn't due to permission errors.
> "LOG: logical decoding found consistent point at 19/E6942440"
> DETAIL: There are no running transactions."
>
> Others with more experience in this area would need to fill whether that
> might account for the 13 million files in the ~snapshot/ directory.
That indicates there's some error handling issues to be resolved. Petr?
Greetings,
Andres Freund
iggered ones), should do so for all that are
older. Is there a chance that you have some old leftover replication
slots around? Because that'd prevent cleanup?
What does SELECT * FROM pg_replication_slots ; return?
Greetings,
Andres Freund
d slow us down, which will be a
price everybody is paying.
Greetings,
Andres Freund
last few
years. That alone costs time. Additionally it's code we need to
maintain.
Greetings,
Andres Freund
On 2018-07-31 18:48:23 -0700, Peter Geoghegan wrote:
> On Mon, Jul 9, 2018 at 11:32 AM, Andres Freund wrote:
> > I assume we'll have to backpatch this issue, so I think it'd probably be a
> > good idea to put a specific CacheInvalidateHeapTuple() in there
> > explicit
Hi,
On 2018-07-31 19:29:37 -0700, Peter Geoghegan wrote:
> On Tue, Jul 31, 2018 at 7:02 PM, Andres Freund wrote:
> > Not a fan of this comment. It doesn't really explain that well why it's
> > needed here, but then goes on to a relatively general explanation of why
Manual vacuuming tends
> to miss stuff, and it cannot react adequately to activity spikes.
But it shouldn't matter here, autovacuum will start regardless, no?
Greetings,
Andres Freund
On 2018-08-01 12:07:16 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2018-08-01 10:24:24 -0400, Tom Lane wrote:
> >> IMO, the action you need to take is enabling autovacuum. We've
> >> seen many many people go down the path you are taking, and it's
On 2018-08-01 12:20:15 -0400, Alvaro Herrera wrote:
> On 2018-Aug-01, Andres Freund wrote:
>
> > On 2018-08-01 12:07:16 -0400, Tom Lane wrote:
> > > Andres Freund writes:
> > > > On 2018-08-01 10:24:24 -0400, Tom Lane wrote:
> > > >> IMO, th
Hi,
On 2018-08-02 13:00:16 -0700, Peter Geoghegan wrote:
> On Tue, Jul 31, 2018 at 9:00 PM, Andres Freund wrote:
> > I don't think that's particularly relevant. We should always register an
> > invalidation before the relevant CommandCounterIncrement(), because that
nge
> that.
Especially because most people will only understand this after they've
been hit, as test scenarios will often just be quick enough.
Greetings,
Andres Freund
			(),
			 errmsg("could not fsync file \"%s\": %m",
					tmppath)));
}
pgstat_report_wait_end();
CloseTransientFile(fd);
Greetings,
Andres Freund
On 2018-08-16 13:35:06 +0200, Andreas Joseph Krogh wrote:
> Any plans to support $subject?
There's LOADS of discussion on this on the lists.
Hi,
On 2018-08-16 14:32:34 +0200, Andreas Joseph Krogh wrote:
> På torsdag 16. august 2018 kl. 14:18:12, skrev Andres Freund
> <mailto:and...@anarazel.de>>:
> On 2018-08-16 13:35:06 +0200, Andreas Joseph Krogh wrote:
> > Any plans to support $subject?
>
> There
of failure, but
that doesn't really matter much).
Greetings,
Andres Freund
On 2018-08-22 11:14:48 -0300, Avi Vallarapu wrote:
> Seams like you are with the default or fairly less checkpoint_* settings.
>
> You need to look at several patterns to understand the best values for
> these settings.
>
> For now, i can say 300 seconds for checkpoint_timeout could be fairly ver
go unwritten before the autovac worker process
> itself is forced to write some.
I've not re-checked, but I'm not sure that's true if the buffer is
already in s_b, which it'll be for many workloads.
Greetings,
Andres Freund
DB.
> Given the nature of the bug do you think that would fix that table?
Yes, it's fairly likely that it does. You might even get around it by
just doing a no-op update of the whole table. Nevertheless you need to
upgrade, or more of that corruption can occur.
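A minimal sketch of such a no-op rewrite (table/column names are placeholders;
this only replaces the already-affected row versions, it does not fix the
underlying bug):
-- rewrites every row, giving the new versions fresh xmin values
UPDATE the_affected_table SET some_column = some_column;
VACUUM FREEZE the_affected_table;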
Greetings,
Andres Freund
at postgres works correctly via
the emulation stuff MS is doing. There is a native version of postgres
for windows however, and that is tested (and exercised by a lot of
installations).
Greetings,
Andres Freund
#x27;re talking
> about Windows.
The OP said "Ubuntu 18.04 as Windows bash" - so I assume this is
postgres compiled as a linux binary running on MS's new-ish linux
emulation.
Greetings,
Andres Freund
have perfectly coherent
timestamp - clock drift between CPU sockets does still sometimes occur,
and was extremely common.
Greetings,
Andres Freund
r-x 1 u0138544 camplab 10002460672 Sep 21 15:49 test.blob
I suspect the layout of such wide columns probably creates a lot of
pain. I'd try \copy, and doing the query after \a.
Greetings,
Andres Freund
8
I don't think the calculation you're doing here is correct.
backend_xmin is an xid (max 2^32-1), whereas txid_snapshot_xmin returns
an xid *with* epoch (max 2^64-1). What you're measuring here is simply
the fact that the xid counter has wrapped around.
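To compare apples to apples you'd strip the epoch first, e.g. (a sketch):
-- txid_snapshot_xmin() includes the epoch; reduce it modulo 2^32 before
-- comparing against the 32-bit backend_xmin
SELECT backend_xmin,
       txid_snapshot_xmin(txid_current_snapshot()) % 4294967296 AS xmin_without_epoch
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL;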
Greetings,
Andres Freund
data on the clientside.
Greetings,
Andres Freund
table takes an exclusive lock.
Greetings,
Andres Freund
On October 10, 2018 2:15:19 PM PDT, Ravi Krishna wrote:
>>
>> pg_restore doesn't take locks on the table for the COPY, it does so
>> because creating the table takes an exclusive lock.
>
>
>Interesting. I seem to recollect reading here that I can't have
>concurrent COPY on the same table beca
em is that you copied over replication slots to
your standby server, but *on the standby* you haven't set
max_replication_slots high enough.
Greetings,
Andres Freund
Hi,
Please try to quote properly.
On 2018-10-15 01:57:53 +0530, bhargav kamineni wrote:
> I got his on standby, could you please explain in detail about
> --*but that *on the standby* haven't set max_replication_slots high enough*
> .
What is max_replication_slots set to on the new standby?
I
Hi,
As I just wrote:
> On Mon, 15 Oct 2018 at 02:40, Andres Freund wrote:
> > Please try to quote properly.
On 2018-10-15 02:45:51 +0530, bhargav kamineni wrote:
> Yeah i have used rsync , Got it now will increase the max_replication_slots
> to high enough , Thank you Andres
already synced the pg_repslot to standby. Is it okay if I remove
> the pg_repslot directory before starting the postgresql service?
You can do that, but the nicer way probably is to just remove them via
SQL once started. Something like
SELECT pg_drop_replication_slot(slot_name) FROM pg_replication_slots;
Greetings,
Andres Freund
o allows getting rid
of the archive in some setups.
Greetings,
Andres Freund
t;
> select "fld1","fld2" from "schema_name"."table_name".
>
> So the foreign table in PG has to be created in upper case within quotes. It is
> a bit of an annoyance.
Please note that odbc_fdw is not maintained by the postgresql
developers, but a separate project.
Greetings,
Andres Freund
On October 17, 2018 10:57:37 AM PDT, Ravi Krishna wrote:
>
>>
>> Please note that odbc_fdw is not maintained by the postgresql
>developers, but a separate project.
>
>
>Translation: You are on your own. We are hoping this will make our
>migration out of DB2 quicker. Oh well.
Come on. We can'
ue to odbcfdw's authors/github tracker.
> > Or pay a company for support.
> >
>
> On a related note is fdw for Oracle and SQLServer supported by the community ?
They're not postgresql.org projects if that's what you're asking. IOW,
they probably have their own communities, it's just not this one.
Greetings,
Andres Freund
On 2018-10-17 11:02:40 -0700, Adrian Klaver wrote:
> On 10/17/18 10:57 AM, Ravi Krishna wrote:
> >
> > >
> > > Please note that odbc_fdw is not maintained by the postgresql developers,
> > > but a separate project.
> >
> >
> > Translation: You are on your own. We are hoping this will make our
better approach wouldn't be to add an errcontext for
vacuum, where we continually update the block number etc. There are plenty of
different sources of corruption that'd potentially cause debug messages
or errors, and that should get most of them.
Greetings,
Andres Freund
Hi,
On 2019-07-17 13:27:23 -0400, Tom Lane wrote:
> My concern here is that if we have blinders on to the extent of only
> processing that one table or DB, we're unnecessarily allowing bloat to
> occur in other tables, and causing that missed vacuuming work to pile
> up so that there's more of it
ime
configurable, that'd add overhead to some pretty central code.
Increasing the default would need a lot of benchmarks, to prove it
doesn't cause regressions in other common scenarios.
Greetings,
Andres Freund
ed, the kernel will coalesce packets regardless of the
send() size.
> Does it make sense to make this parameter configurable?
I'd much rather not. It's going to be too hard to tune, and I don't see
any tradeoffs actually requiring that.
Greetings,
Andres Freund