>> On Mar 31, 2017, at 5:32 PM, Michael Schnell <mschn...@lumino.de> wrote:
>> 
>> From the point of view of the application (disregarding execution speed) or of 
>> the application programmer, there is no difference between real ("hardware") 
>> and virtual (e.g. threads) parallelism. These dirty basics need to be 
>> handled by the software and hardware infrastructure.
>> 
>> Using real (e.g. multi-CPU) parallelism requires that the application allows 
>> for being divided into multiple parallel "threads". Given this, hardware 
>> parallelism can speed up the execution, while even virtual parallelism 
>> allows for improving the latency of definable parts of the application.

> I’m not understanding how parallelism could apply to anything besides 
> breaking down a task so that it can run on multiple hardware compute units.

> Why would you ever break a task into 100 threads when you could just run it 
> on one thread?

"Events".

One gets into the grey area of threads and "processors". As an example, you 
could divide a program into two threads, one reading and one writing. 
Immediately after issuing a write request, you could start reading the next 
item in a separate thread before the write is complete. This works because the 
I/O subsystem is largely independent of the CPU, so the OS can schedule another 
thread (or process) that is not waiting for the I/O subsystem to reply. With 
only a single thread, the whole program has to wait for the I/O write to finish 
before starting the next read.

In this way a single process on a single "processor" (or at least a single CPU) 
can interleave tasks to improve the overall performance of the application. This 
could be extended to a myriad of cases, of course.
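
To make the reader/writer overlap concrete, here is a minimal Free Pascal sketch 
using TThread. Everything specific in it (the TWriteThread class, the Sleep(100) 
standing in for a slow write, the strings pretending to be "read" items) is 
invented for illustration, not taken from any real program:

program overlapio;
{$mode objfpc}{$H+}

uses
  {$IFDEF UNIX}cthreads,{$ENDIF}
  Classes, SysUtils;

type
  { background thread that "writes" one item; Sleep stands in for slow I/O }
  TWriteThread = class(TThread)
  private
    FItem: string;
  protected
    procedure Execute; override;
  public
    constructor Create(const AItem: string);
  end;

constructor TWriteThread.Create(const AItem: string);
begin
  FItem := AItem;
  inherited Create(False);            // start running immediately
end;

procedure TWriteThread.Execute;
begin
  Sleep(100);                         // pretend the write takes a while
  WriteLn('wrote: ', FItem);
end;

var
  Writer: TWriteThread;
  Item: string;
  i: Integer;
begin
  Writer := nil;
  for i := 1 to 3 do
  begin
    Item := 'item ' + IntToStr(i);    // "read" the next item (also a stand-in)
    WriteLn('read:  ', Item);
    if Assigned(Writer) then
    begin
      Writer.WaitFor;                 // wait only for the *previous* write,
      Writer.Free;                    // so the read above overlapped with it
    end;
    Writer := TWriteThread.Create(Item);  // issue this item's write, keep going
  end;
  if Assigned(Writer) then
  begin
    Writer.WaitFor;                   // drain the last outstanding write
    Writer.Free;
  end;
end.

In a purely sequential version the WaitFor would sit directly after creating each 
writer, so every read would be stalled behind the previous write; here the next 
read proceeds while the previous write is still in flight.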

Hence the recent upsurge in "async" routines, which of course only help if used 
properly.

Cheers,
Gary.


