The problem I have with this thread (no pun intended) is that it is not
comparing like with like. As demonstrated by many of the replies,
Parallelism and Threads are not the same thing.
I would offer the following definitions:
- Parallelism is a (design) concept for expressing collateral actions in
which the processing order of the actions is unspecified. They may take
place serially or contemporaneously in real time, or a mixture of the two.
- Threads are an implementation mechanism for realising collateral
actions within a single processing environment.
Neither of the above implies multiple CPUs or processing units.
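To make the distinction concrete, here is a minimal sketch of two collateral
actions realised with plain TThread from the Classes unit (my own
illustration, not anyone's production code; TWorker and the delays are
invented, and cthreads is the usual first unit on Unix targets). Nothing in
the source says how many cores the machine has; the two actions may run
serially, contemporaneously, or a mixture of the two:

program CollateralDemo;

{$mode objfpc}{$H+}

uses
  {$IFDEF UNIX}cthreads,{$ENDIF}   { needed for threading on Unix targets }
  Classes, SysUtils;

type
  { one worker = one independent, collateral action }
  TWorker = class(TThread)
  private
    FDelayMs: Integer;
  protected
    procedure Execute; override;
  public
    DoneAt: TDateTime;
    constructor Create(ADelayMs: Integer);
  end;

constructor TWorker.Create(ADelayMs: Integer);
begin
  FDelayMs := ADelayMs;        { set fields before the thread starts running }
  inherited Create(False);     { False = start immediately, not suspended }
end;

procedure TWorker.Execute;
begin
  Sleep(FDelayMs);             { stands in for some real unit of work }
  DoneAt := Now;
end;

var
  A, B: TWorker;
begin
  A := TWorker.Create(300);
  B := TWorker.Create(100);
  { both actions are now in flight; the scheduler decides whether they
    overlap in real time or are time-sliced on a single core }
  A.WaitFor;
  B.WaitFor;
  WriteLn('A done at ', FormatDateTime('hh:nn:ss.zzz', A.DoneAt));
  WriteLn('B done at ', FormatDateTime('hh:nn:ss.zzz', B.DoneAt));
  A.Free;
  B.Free;
end.

The program is written identically whether it runs on one core or many,
which is exactly the point: the threads realise the collateral actions, and
the hardware decides how much of that happens in parallel.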
On 31/03/17 07:43, Ryan Joseph wrote:
On Mar 30, 2017, at 3:06 PM, Michael Schnell <mschn...@lumino.de> wrote:
Huh, OK, but why is parallelism better, and how do you do it with fpc?
Parallelism within a process is always based on threads.
AFAIK, fpc does not (yet) provide a more convenient abstraction for parallelism
(such as parallel loops) than TThread.
-Michael
It’s my understanding that for parallelism to make sense you need more than
one compute unit, be that a CPU core or a GPU.
If you had a GPU with 250 compute units or a CPU with 250 cores, you would
need to design your task so that it could be broken down into as many
discrete portions as possible, in order to take advantage of the multiple
cores running in parallel. Even if you didn’t have a single thread and
execution simply blocked until finished, you wouldn’t see any performance
increase unless you designed your program to scale for parallelism. Running
250 threads on a single core isn’t going to be 250x faster, but running 250
threads on 250 cores may be.
Regards,
Ryan Joseph
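
Combining Ryan's point about breaking the task into discrete portions with
Michael's point that TThread is currently the abstraction FPC offers, a
rough sketch of a hand-rolled "parallel loop" might look like the following.
This is my own illustration, not code from the thread: ParallelSum,
TSumWorker, N and WorkerCount are invented names, and the fixed WorkerCount
of 4 stands in for the actual core count (recent FPC versions expose
TThread.ProcessorCount for that, if I recall correctly). Each worker sums a
disjoint slice of the range, so there is no shared state and no locking:

program ParallelSum;

{$mode objfpc}{$H+}

uses
  {$IFDEF UNIX}cthreads,{$ENDIF}   { needed for threading on Unix targets }
  Classes;

const
  N = 1000000;
  WorkerCount = 4;   { ideally the number of cores actually available }

type
  { sums one disjoint slice of the range 1..N }
  TSumWorker = class(TThread)
  private
    FLo, FHi: Integer;
  protected
    procedure Execute; override;
  public
    Sum: Int64;
    constructor Create(ALo, AHi: Integer);
  end;

constructor TSumWorker.Create(ALo, AHi: Integer);
begin
  FLo := ALo;
  FHi := AHi;
  Sum := 0;
  inherited Create(False);         { start the worker immediately }
end;

procedure TSumWorker.Execute;
var
  i: Integer;
begin
  for i := FLo to FHi do
    Inc(Sum, i);
end;

var
  Workers: array[0..WorkerCount - 1] of TSumWorker;
  ChunkSize, Lo, Hi, w: Integer;
  Total: Int64;
begin
  ChunkSize := N div WorkerCount;
  for w := 0 to WorkerCount - 1 do
  begin
    Lo := w * ChunkSize + 1;
    if w = WorkerCount - 1 then
      Hi := N                      { last worker takes any remainder }
    else
      Hi := (w + 1) * ChunkSize;
    Workers[w] := TSumWorker.Create(Lo, Hi);
  end;

  Total := 0;
  for w := 0 to WorkerCount - 1 do
  begin
    Workers[w].WaitFor;            { block until this slice is done }
    Inc(Total, Workers[w].Sum);
    Workers[w].Free;
  end;

  WriteLn('Sum of 1..', N, ' = ', Total);
end.

On a single core the four workers merely take turns and you gain nothing,
which is Ryan's 250-threads-on-one-core case; with four or more cores they
can genuinely run at the same time, and that is where the speed-up comes
from.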
_______________________________________________
fpc-pascal maillist - fpc-pascal@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal