Daniel Shahaf wrote on Sat, 28 Mar 2020 21:54 +0000:
> Stefan Sperling wrote on Sat, 28 Mar 2020 11:32 +0100:
> > On Fri, Mar 27, 2020 at 10:58:48PM +0000, Daniel Shahaf wrote:  
> > > Well, I guess the documentation, once written, should point out that if
> > > any single revision that build-repcache operates on adds a large number
> > > of reps, commits made in parallel to the build-repcache run will be
> > > starved out of adding their own rep-cache entries, which will manifest
> > > as "post-commit FS processing failed".  (We have a bug filed for this,
> > > but I can't find its number right now, sorry.)
> > 
> > Good point.
> > 
> > Maybe the command's help output should mention that the repository will
> > be blocked for commits while this command is running?  Even if that isn't
> > entirely true at the implementation level, it would be good user-facing
> > advice.  
> 
> I don't think inaccuracies in the documentation are a good idea.  The
> proposal might discourage people from running this when they could;
> might lead people to run this in order to trigger the side effect of
> blocking commits for a short while, only to find commits did happen
> successfully; and might get people to distrust our documentation in
> general.
> 
> Therefore, I'd document the semantics accurately.  If need be, we can
> refer to fuller documentation in the release notes and/or the book.

By the way, couldn't we just fix this, rather than document it?
Looking at write_reps_to_cache(), it won't be that hard to rewrite it
to INSERT into rep-cache in batches of N reps per SQLite transaction,
rather than one SQLite transaction per commit.  This'll fix the bug
I mentioned.
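
To make the idea concrete, something along these lines (an untested
sketch against the plain SQLite C API rather than our svn_sqlite__
wrappers; the rep_cache column names and the BATCH_SIZE value are just
illustrative, not necessarily the real schema, and error handling is
mostly elided):

[[[
#include <sqlite3.h>
#include <stddef.h>

#define BATCH_SIZE 1000  /* N reps per transaction; tune as needed. */

struct rep_row {
  const char *hash;        /* representation checksum */
  long long revision;
  long long offset;
  long long size;
  long long expanded_size;
};

static int
insert_reps_batched(sqlite3 *db, const struct rep_row *reps, size_t count)
{
  sqlite3_stmt *stmt;
  int err = sqlite3_prepare_v2(db,
      "INSERT OR IGNORE INTO rep_cache "
      "(hash, revision, offset, size, expanded_size) "
      "VALUES (?1, ?2, ?3, ?4, ?5)",
      -1, &stmt, NULL);
  if (err != SQLITE_OK)
    return err;

  for (size_t i = 0; i < count; i++)
    {
      /* Open a transaction at the start of each batch. */
      if (i % BATCH_SIZE == 0)
        sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);

      sqlite3_bind_text(stmt, 1, reps[i].hash, -1, SQLITE_STATIC);
      sqlite3_bind_int64(stmt, 2, reps[i].revision);
      sqlite3_bind_int64(stmt, 3, reps[i].offset);
      sqlite3_bind_int64(stmt, 4, reps[i].size);
      sqlite3_bind_int64(stmt, 5, reps[i].expanded_size);
      sqlite3_step(stmt);
      sqlite3_reset(stmt);

      /* Commit after every BATCH_SIZE rows, and after the last row, so
         concurrent committers only ever wait on one short transaction. */
      if ((i + 1) % BATCH_SIZE == 0 || i + 1 == count)
        sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    }

  sqlite3_finalize(stmt);
  return SQLITE_OK;
}
]]]

The point is simply that the transaction boundary moves from "per
commit" to "per batch of N reps", so the rep-cache is never locked for
longer than one batch at a time.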

Makes sense?

Denis, would you like to look into this?  It looks like an easy,
localized fix, and your build-repcache patch will benefit from it, too.

Cheers,

Daniel
