The gnulib usleep replacement says:

  /* This file is _intentionally_ light-weight.  Rather than using
     select or nanosleep, both of which drag in external libraries on
     some platforms, this merely rounds up to the nearest second if
     usleep() does not exist.  If sub-second resolution is important,
     then use a more powerful interface to begin with.  */
And the code confirms it:

  int
  usleep (useconds_t micro)
  {
    unsigned int seconds = micro / 1000000;
    if (sizeof seconds < sizeof micro && micro / 1000000 != seconds)
      {
        errno = EINVAL;
        return -1;
      }
    if (!HAVE_USLEEP && micro % 1000000)
      seconds++;
    while ((seconds = sleep (seconds)) != 0)
      ;
  #undef usleep
  #if !HAVE_USLEEP
  # define usleep(x) 0
  #endif
    return usleep (micro % 1000000);
  }

The 'sleep' replacement on Win32 calls into the Win32-specific Sleep()
function, which allows millisecond granularity. Why doesn't usleep()
call into Sleep() directly, so it gets millisecond granularity rather
than rounding up to the nearest second?

In libvirt at least, we intentionally use usleep() over sleep() because
we really do want sub-second granularity, which makes gnulib's usleep
replacement rather unhelpful :-(

I'd venture to suggest that the majority of apps using usleep only need
millisecond granularity, so an implementation that used Sleep() on
Win32 would be pretty much spot-on.

Regards,
Daniel
--
|: http://berrange.com      -o-   http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
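[Editor's note: for illustration, here is a rough sketch of the kind of replacement the mail is suggesting. This is not gnulib's actual code; `my_usleep` is a hypothetical name, the Win32 branch uses the Sleep() call mentioned above (which takes milliseconds, so the sketch rounds up rather than sleeping short), and the non-Windows branch simply delegates to nanosleep() so the file stays compilable everywhere.]

```c
#include <errno.h>

#ifdef _WIN32
# include <windows.h>
#else
# include <time.h>
#endif

/* Sketch of a usleep replacement with millisecond granularity on
   Win32, along the lines suggested in the mail.  Hypothetical code,
   not gnulib's implementation.  */
int
my_usleep (unsigned int micro)
{
#ifdef _WIN32
  /* Sleep() takes milliseconds; round up so we never sleep short.  */
  Sleep ((micro + 999) / 1000);
  return 0;
#else
  /* POSIX fallback: nanosleep already has sub-second resolution.
     Restart if interrupted by a signal, using the remaining time.  */
  struct timespec ts;
  ts.tv_sec = micro / 1000000;
  ts.tv_nsec = (long) (micro % 1000000) * 1000L;
  while (nanosleep (&ts, &ts) == -1 && errno == EINTR)
    continue;
  return 0;
#endif
}
```

A caller wanting a 50ms pause would then write `my_usleep (50000)` and get genuine sub-second behaviour on both platforms, instead of a one-second sleep.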