Hi Pavel,

> One thing I'm afraid of is that writers could finish too
> early. Could we artificially slow them down?
In test_rwlock the test does this:

  /* Wait for the threads to terminate.  */
  for (i = 0; i < THREAD_COUNT; i++)
    gl_thread_join (threads[i], NULL);
  set_atomic_int_value (&rwlock_checker_done, 1);
  for (i = 0; i < THREAD_COUNT; i++)
    gl_thread_join (checkerthreads[i], NULL);

It waits until all 10 mutator threads have terminated, then sets a
lock-protected variable rwlock_checker_done to 1, which signals to the
10 checker threads that they can terminate at the next occasion, and
then waits for them to terminate.

Are you saying that the kernel will schedule the 10 checker threads
with higher priority than the 10 mutator threads, although I have
*not* specified anything about priorities? That would be a kernel bug,
IMO. Especially since the problem occurs only on one architecture.

> Could we set PTHREAD_RWLOCK_PREFER_WRITER_NP (in test-lock.c) to avoid
> those issues?

I disagree. The test is a minimal test of the kernel's multithreading
support. If, among 10 mutator threads and 10 checker threads all
started with the same priority, it has such a severe bias that the
mutator threads never get to run, you have a kernel bug. I should not
need a non-portable threading function in order to get 20 threads to
run reasonably. Imagine what scenarios you would then get with an
application server and 400 threads.

Bruno