On Sat, Aug 13, 2011 at 12:18 PM, Tom Lane wrote:
> I wrote:
>> Yeah. Also, to my mind this is only a fix that will be used in 9.0 and
>> 9.1 --- now that it's occurred to me that we could use tuple xmin/xmax
>> to invalidate catcaches instead of recording individual TIDs, I'm
>> excited about do
I wrote:
> Yeah. Also, to my mind this is only a fix that will be used in 9.0 and
> 9.1 --- now that it's occurred to me that we could use tuple xmin/xmax
> to invalidate catcaches instead of recording individual TIDs, I'm
> excited about doing that instead for 9.2 and beyond. I believe that
> th
Simon Riggs writes:
> With HOT, there is very little need to perform a VACUUM FULL on any
> shared catalog table. Look at the indexes...
This is not really a useful argument. People do do VAC FULL on
catalogs, whether we think they should or not. Also, it's not only
"shared" catalogs that are a
On Fri, Aug 12, 2011 at 09:26:02PM +0100, Simon Riggs wrote:
> With HOT, there is very little need to perform a VACUUM FULL on any
> shared catalog table. Look at the indexes...
>
> I would suggest that VACUUM FULL perform only a normal VACUUM on
> shared catalog tables, then perform an actual V
On Fri, Aug 12, 2011 at 7:09 PM, Tom Lane wrote:
> Right at the moment I'm leaning to approach #2. I wonder if anyone
> sees it differently, or has an idea for a third approach?
You are trying to solve the problem directly, which seems overkill.
With HOT, there is very little need to perform a
On Fri, Aug 12, 2011 at 3:14 PM, Tom Lane wrote:
> In any case, it is now clear to me that this bug is capable of eating
> peoples' databases, as in "what just happened to our most critical
> table? Uh, it's not there anymore, boss, but we seem to have duplicate
> pg_class entries for this other
Heikki Linnakangas writes:
> On 12.08.2011 21:49, Robert Haas wrote:
>> I don't think it really matters whether we occasionally blow away an
>> entry unnecessarily due to a hash-value collision. IIUC, we'd only
>> need to worry about hash-value collisions between rows in the same
>> catalog; and
On 12.08.2011 21:49, Robert Haas wrote:
On Fri, Aug 12, 2011 at 2:09 PM, Tom Lane wrote:
2. Forget about targeting catcache invals by TID, and instead just use the
key hash value to determine which cache entries to drop.
Approach #2 seems a lot less invasive and more trustworthy, but it has th
Tom Lane wrote:
> In any case, it is now clear to me that this bug is capable of
> eating peoples' databases, as in "what just happened to our most
> critical table? Uh, it's not there anymore, boss, but we seem to
> have duplicate pg_class entries for this other table".
Based on this, I don'
Robert Haas wrote:
> Perhaps we should just fix this one in master and consider
> back-patching it if and when we get some plausibly related bug
> reports.
I'm not completely clear on what one would do to be vulnerable to
hitting the bug, or what the impact of hitting it would be. Tom
said:
Robert Haas writes:
> On Fri, Aug 12, 2011 at 2:09 PM, Tom Lane wrote:
>> 2. Forget about targeting catcache invals by TID, and instead just use the
>> key hash value to determine which cache entries to drop.
>> Right at the moment I'm leaning to approach #2. I wonder if anyone
>> sees it diffe
On Fri, Aug 12, 2011 at 2:09 PM, Tom Lane wrote:
> But in 9.0 and up, we have a problem. So far I've thought of two possible
> avenues to fix it:
>
> 1. When a SHAREDINVALCATALOG_ID inval message comes in (telling us a VAC
> FULL or CLUSTER just finished on a system catalog), enter that message
>
I've been testing the problem reported by Dave Gould by running "make
installcheck-parallel" together with a tight loop of "vacuum full
pg_class":
while true; do psql -c "vacuum full pg_class" regression; usleep 10; done
Even after fixing the cache-reset-recovery-order problem I described
yes