I'm a complete novice when it comes to these things -

but is it possible that somewhere, something is doing an optimization?
Compiler? CPU?

Perhaps some subsystem along the way sees that fair scheduling would take
much longer.

-mike


On 6/10/07, Muli Ben-Yehuda <[EMAIL PROTECTED]> wrote:

On Sun, Jun 10, 2007 at 10:06:40AM +0300, Gilad Ben-Yossef wrote:

> AFAIK sched_yield()'s precise meaning is and has always been: "put me
> at the end of the run queue for my priority. If there is no one else
> with my priority, I'll run again".

Quoting akpm: "Changed sched_yield() semantics.  sched_yield() has
changed dramatically, and it can seriously impact existing
applications. A testcase (this is on 2.5.46, UP, no
preempt)". http://lkml.org/lkml/2002/12/2/155.

> By calling sched_yield(), the scheduler will re-arrange the run queue
> so that (assuming all worker threads are of the same priority) the
> thread releasing the lock is put at the back of the run queue and
> some other thread, which has been marked ready to run after the lock
> was released, gets the chance to take the lock; that's why I expect
> sched_yield() to do the job. Note that for an SMP machine things are
> rather more complicated, and also if there are numerous tasks at
> different priorities... :-(

Yes, you are correct. sched_yield() will give an appearance of
fairness. But consider the cost (see below) - IMO it changes the
semantics of the test program in such a way as to make it
meaningless.
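
To make the cost concrete, each pass through the loop under discussion
boils down to roughly this (a minimal sketch; more_work and do_one_unit()
stand in for the real loop condition and body, and the actual test
program is further down):

   pthread_mutex_lock(&mutex);
   while (more_work) {
      do_one_unit();                  /* e.g. claim the next results[] slot */
      pthread_mutex_unlock(&mutex);
      sched_yield();                  /* go to the back of the run queue */
      pthread_mutex_lock(&mutex);     /* contend for the lock all over again */
   }
   pthread_mutex_unlock(&mutex);

Without the yield, the unlocking thread typically re-acquires the mutex
immediately and keeps running; with it, every iteration pays for a
system call and, whenever another worker is runnable, a context switch
as well, which is where the extra sys time below comes from.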

> Shahar, let us know what happens :-)

sched_yield() makes it run 20 times longer for TEST_SIZE=10000000, and
the difference grows with the test size.

moren:/home/muli # unset DO_SCHED_YIELD; time taskset 01 ./mf 5
Thread 40800940 started as thread #0
Thread 41001940 started as thread #1
Thread 41802940 started as thread #2
Thread 42003940 started as thread #3
Thread 42804940 started as thread #4
Results: 234626 1177504 1155320 2187538 5245012
Joined thread #0
Joined thread #1
Joined thread #2
Joined thread #3
Joined thread #4

real    0m1.187s
user    0m1.000s
sys     0m0.116s
moren:/home/muli # export DO_SCHED_YIELD=1; time taskset 01 ./mf 5
using sched_yield()
Thread 40800940 started as thread #0
Thread 41001940 started as thread #1
Thread 41802940 started as thread #2
Thread 42003940 started as thread #3
Thread 42804940 started as thread #4
Results: 106710 106711 106712 106711 106709
Results: 213147 213147 213148 213147 213147
Results: 319311 319312 319313 319312 319311
Results: 425650 425652 425653 425652 425651
Results: 531269 531270 531269 531269 531269
Results: 637252 637254 637253 637252 637253
Results: 743678 743681 743680 743679 743679
Results: 849960 849963 849962 849961 849961
Results: 957552 957554 957554 957552 957553
Results: 1065060 1065063 1065064 1065061 1065061
Results: 1172691 1172694 1172695 1172692 1172691
Results: 1280342 1280345 1280345 1280342 1280342
Results: 1388211 1388214 1388214 1388211 1388211
Results: 1495994 1495998 1495999 1495995 1495994
Results: 1603876 1603880 1603881 1603878 1603877
Results: 1711619 1711623 1711624 1711622 1711619
Results: 1819511 1819513 1819515 1819513 1819510
Results: 1927503 1927506 1927507 1927505 1927503
Results: 1999999 2000001 2000003 2000000 1999997
Joined thread #0
Joined thread #1
Joined thread #2
Joined thread #3
Joined thread #4

real    0m19.249s
user    0m3.008s
sys     0m15.909s

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sched.h>     /* sched_yield() */
#include <unistd.h>

#define TEST_SIZE 10000000

volatile int results[TEST_SIZE], position=0;

static int do_sched_yield = 0;

/* Each worker grabs the mutex, claims the next slot in results[],
   drops the mutex (optionally yielding), and repeats. */
void *worker( void *arg )
{
   int threadnum = (int)(unsigned long)arg;
   static pthread_mutex_t mutex=PTHREAD_MUTEX_INITIALIZER;

   printf("Thread %lx started as thread #%d\n",
          (unsigned long)pthread_self(), threadnum);

   pthread_mutex_lock(&mutex);
   while( position<TEST_SIZE ) {
      // printf("results[%d]=%d\n", position, threadnum);
      results[position++]=threadnum;
      pthread_mutex_unlock(&mutex);
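      /* with the mutex released, optionally yield so another runnable
         thread gets a chance to grab it */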
      if (do_sched_yield)
        sched_yield();
      pthread_mutex_lock(&mutex);
   }
   pthread_mutex_unlock(&mutex);

   return NULL;
}

int *histo;

void printres(int numthreads)
{
   int i;

   printf("Results: ");
   for( i=0; i<numthreads; ++i )
      printf("%d ", histo[i]);

   printf("\n");
}

int main(int argc, char * argv[] )
{
   if( argc<2 )
      exit(1);

   int numthreads=atoi(argv[1]);
   if( numthreads<=0 )
      exit(1);

   pthread_t *threads=calloc(numthreads, sizeof(pthread_t));
   histo=calloc(numthreads, sizeof(int));

   int i;

   int oldpos=position;

   if (getenv("DO_SCHED_YIELD")) {
     do_sched_yield = 1;
     printf("using sched_yield()\n");
   }

   for( i=0; i<numthreads; ++i ) {
      pthread_create(threads+i, NULL, worker, (void *)(unsigned long)i);
   }

   /* Reap the results every second */
   while( oldpos<TEST_SIZE ) {
      sleep(1);

      for( ; oldpos<position; ++oldpos ) {
         histo[results[oldpos]]++;
      }

      printres(numthreads);
   }

   for( i=0; i<numthreads; ++i ) {
      pthread_join( threads[i], NULL );
      printf("Joined thread #%d\n", i);
   }

   return 0;
}
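
(For reference, the program above should build with something like
"gcc -pthread -o mf mf.c"; the source file name mf.c is a guess based on
the ./mf binary in the runs above. The runs were pinned to a single CPU
with taskset, with the DO_SCHED_YIELD environment variable selecting the
yielding variant.)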

