Thomas Bushnell BSG, on Mon 17 Mar 2008 15:09:12 -0400, wrote:
> 
> On Sun, 2008-03-16 at 08:52 +0100, [EMAIL PROTECTED] wrote:
> > Hi,
> > 
> > On Tue, Mar 11, 2008 at 11:19:32AM +0000, Samuel Thibault wrote:
> > > [EMAIL PROTECTED], on Tue 11 Mar 2008 04:53:45 +0100, wrote:
> > > > 
> > > > [I] suggested a more adaptive approach: Keep track of the existing
> > > > threads, and if none of them makes progress in a certain amount of
> > > > time (say 100 ms), allow creating some more threads. But that was
> > > > never implemented. Also, it still might cause considerable delays in
> > > > some situations; and I'm not even sure it would fix all problems. (I
> > > > didn't fully understand the problem discussed in this thread, so I
> > > > don't know whether it would be fixed by that?)
> > > 
> > > The problem I was observing is when you have a sync_all which triggers
> > > the write of a lot of files, but unfortunately the superblock was
> > > paged out, so you aren't able to start another thread to reload it.
> > > Whatever thresholds you choose, with a big enough load you will still
> > > have the problem of refusing to create enough threads for all these
> > > requests, plus one for the superblock reload request.
> > 
> > So the problem is that a lot of requests get queued before the first one
> > gets very far, so that when the superblock read is finally requested, it
> > ends up at the end of a long queue?
> 
> No, the problem we are talking about is an outright deadlock.
Well, with a threshold on the number of created threads, that leads to a
deadlock: since the superblock read request comes far after all the other
ones, it doesn't get a thread.

Samuel
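To make the two scenarios concrete, below is a minimal C sketch of a bounded
worker pool with the adaptive escape hatch proposed above. It is not the
actual libpager/ext2fs code; the names (may_create_thread, pool_watchdog,
note_progress, MAX_THREADS) and the 100 ms sampling interval are illustrative
assumptions taken from the proposal in the quoted mail.

/* Hypothetical sketch, not the real Hurd pager code: a bounded worker
   pool plus the adaptive "no progress for ~100 ms" escape hatch.  */
#include <pthread.h>
#include <unistd.h>

#define MAX_THREADS 4          /* hard cap: the source of the deadlock */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int threads_alive;      /* workers currently created */
static int thread_limit = MAX_THREADS;
static unsigned long progress; /* bumped each time a worker finishes a request */

/* Called by the dispatcher before spawning a worker for the next queued
   request.  With a fixed limit the answer can stay "no" forever even
   though every existing worker is blocked waiting for the superblock
   read that is still sitting at the back of the queue: deadlock.  */
static int
may_create_thread (void)
{
  int ok;
  pthread_mutex_lock (&lock);
  ok = threads_alive < thread_limit;
  if (ok)
    threads_alive++;
  pthread_mutex_unlock (&lock);
  return ok;
}

/* Workers call this whenever they complete a request.  */
static void
note_progress (void)
{
  pthread_mutex_lock (&lock);
  progress++;
  pthread_mutex_unlock (&lock);
}

/* Adaptive variant: a watchdog samples the progress counter; if nothing
   completed for ~100 ms while the pool is saturated, it raises the limit
   so one more thread can be created to serve the blocking request.  */
static void *
pool_watchdog (void *arg)
{
  unsigned long last = 0;
  (void) arg;
  for (;;)
    {
      usleep (100 * 1000);     /* the "say 100 ms" from the proposal */
      pthread_mutex_lock (&lock);
      if (progress == last && threads_alive >= thread_limit)
        thread_limit++;        /* allow one extra thread to break the stall */
      last = progress;
      pthread_mutex_unlock (&lock);
    }
  return NULL;
}

As noted in the quoted discussion, even this heuristic only bounds the stall
to roughly the sampling interval, and it is not claimed to cover every case
where a large burst of requests queues the superblock read behind everything
else.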