> Hi Thomas,
> 
> >
> > Hi Lipeng,
> >
> > > May I ask whether there are any comments or concerns on this patch?
> > > Thanks for your time 😄
> >
> > Thanks for your patience in getting this reviewed.
> >
> > A few remarks / questions.
> >
> > Which strategy is used in this implementation, read-preferring or
> > write-preferring?  And if read-preferring is used, is there a danger
> > of deadlock if people do unreasonable things?
> > Maybe you could explain that, also in a comment in the code.
> >
> 
> Yes, the implementation uses the read-preferring strategy, and I added a
> comment about it in the code.
> While adding the test cases, I did not run into any situation that could
> cause a deadlock.
> Maybe you could give more guidance on that.
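> To make the strategy concrete, here is a minimal sketch of the lookup path.
> It is only an illustration: it uses a plain POSIX rwlock and hypothetical
> names (unit_t, lookup_unit, find_or_create_unit), not the real libgfortran
> code.
> 
> #include <pthread.h>
> #include <stdlib.h>
> 
> /* Hypothetical stand-in for gfortran's unit structure and unit list.  */
> typedef struct unit_s { int number; struct unit_s *next; } unit_t;
> static unit_t *unit_list;
> static pthread_rwlock_t unit_rwlock = PTHREAD_RWLOCK_INITIALIZER;
> 
> /* Walk the list; the caller must hold the rwlock (read or write).  */
> static unit_t *
> lookup_unit (int n)
> {
>   for (unit_t *u = unit_list; u; u = u->next)
>     if (u->number == n)
>       return u;
>   return NULL;
> }
> 
> unit_t *
> find_or_create_unit (int n)
> {
>   /* Read phase: the common lookup only takes the shared read lock,
>      so concurrent readers never block each other.  */
>   pthread_rwlock_rdlock (&unit_rwlock);
>   unit_t *u = lookup_unit (n);
>   pthread_rwlock_unlock (&unit_rwlock);
>   if (u != NULL)
>     return u;
> 
>   /* Write phase: only entered when the unit has to be created.  */
>   pthread_rwlock_wrlock (&unit_rwlock);
>   /* Re-check under the write lock: another thread may have inserted
>      the same unit meanwhile (this is the v6 fix discussed below).  */
>   u = lookup_unit (n);
>   if (u == NULL)
>     {
>       u = calloc (1, sizeof (*u));
>       u->number = n;
>       u->next = unit_list;
>       unit_list = u;
>     }
>   pthread_rwlock_unlock (&unit_rwlock);
>   return u;
> }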
> 
> > Can you add some sort of torture test case(s) which does a lot of
> > opening/closing/reading/writing, possibly with asynchronous I/O and/or
> > pthreads, to catch possible problems?  If there is a system dependency
> > or some race condition, chances are that regression testers will catch this.
> >
> 
> Sure, following your comments, in patch v6 I added 3 test cases with OpenMP
> to exercise different concurrency scenarios:
> 1. Find and create units very frequently, to stress both the read lock and the write lock.
> 2. Only access units that already exist in the cache, to stress the read lock.
> 3. Access the same unit concurrently.
> The third test case also helped uncover a bug: when a unit is found neither
> in the cache nor in the unit list during the read phase, multiple threads
> will try to acquire the write lock to insert the same unit, which causes a
> duplicate-key error.
> To fix this, I look up the unit in the unit list once more under the write
> lock before inserting it.
> For more details, please refer to patch v6.
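> As a rough illustration of the stress pattern that exposes this kind of race
> (the actual tests in v6 are Fortran programs using OpenMP; this is only a C
> analogue reusing the hypothetical find_or_create_unit from the sketch above):
> 
> #include <assert.h>
> 
> int
> main (void)
> {
>   /* Many threads race to look up / create a small set of unit numbers,
>      so both the read path and the write (insert) path are exercised.
>      Compile with -fopenmp.  */
> #pragma omp parallel for
>   for (int i = 0; i < 100000; i++)
>     {
>       unit_t *u = find_or_create_unit (i % 16);
>       assert (u != NULL && u->number == i % 16);
>     }
>   return 0;
> }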
> 

Could you help to review this update? I really appreciate your assistance.

> > With this, the libgfortran parts are OK, unless somebody else has more
> > comments, so give this a couple of days.  I cannot approve the libgcc
> > parts, that would be somebody else (Jakub?)
> >
> > Best regards
> >
> >     Thomas
> >
> 
> Best Regards,
> Lipeng Zhu

Best Regards,
Lipeng Zhu
