On Tue, Sep 27, 2016 at 1:48 AM, <jf...@ms4.hinet.net> wrote:
> This function is in a DLL. It's small but may run for days before it
> completes. I want it to take 100% of a core. Threading seems not a good
> idea because it would share the core with others. Will the
> multiprocessing module do it?
The threads of a process do not share a single core. The OS schedules threads to distribute the load across all cores. However, CPython's global interpreter lock (GIL) does serialize access to the interpreter: if N threads want to use the interpreter, N-1 of them are blocked waiting to acquire the GIL. A thread that makes a potentially blocking call to a non-Python API should first release the GIL, which allows another thread to use the interpreter. Calling a ctypes function pointer releases the GIL if the function pointer comes from CDLL, WinDLL, or OleDLL (i.e. anything but PyDLL).

If your task can be partitioned and executed in parallel, you could use a ThreadPoolExecutor from the concurrent.futures module. Since the task is CPU bound, pass os.cpu_count() as max_workers instead of relying on the default number of threads.

https://docs.python.org/3/library/concurrent.futures
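For example, a minimal sketch of that idea, assuming a hypothetical DLL named worker.dll that exports a function run_chunk(start, stop) able to process independent ranges (the library name, export, argument types, and partitioning scheme are all placeholders for whatever your DLL actually provides):

    import os
    import ctypes
    from concurrent.futures import ThreadPoolExecutor

    # CDLL releases the GIL around each foreign call, so threads that are
    # busy inside the DLL can run on separate cores at the same time.
    lib = ctypes.CDLL("./worker.dll")
    lib.run_chunk.argtypes = (ctypes.c_long, ctypes.c_long)
    lib.run_chunk.restype = ctypes.c_long

    def run_chunk(start, stop):
        # Almost all of the time is spent in the DLL, outside the GIL.
        return lib.run_chunk(start, stop)

    if __name__ == "__main__":
        total = 10_000_000
        nworkers = os.cpu_count()      # one thread per core for a CPU-bound task
        step = total // nworkers
        ranges = [(i, min(i + step, total)) for i in range(0, total, step)]

        with ThreadPoolExecutor(max_workers=nworkers) as pool:
            results = list(pool.map(lambda r: run_chunk(*r), ranges))
        print(sum(results))

With one worker thread per core and the GIL released for the duration of each DLL call, the pool can keep every core busy. If the work inside the DLL can't be split into independent chunks like this, neither threads nor multiprocessing will speed it up; a single call is a single stream of execution, though it will still get a full core to itself.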