I'm not positive, but I'm fairly sure the sleep call on Linux does
not behave the same way it does on Windows.
As you know, on Windows a Sleep call (even if delivered with a
parameter of 0) gives up its time slice to other programs on the system.
This does not appear to be the case on Linux.
On Linux, sleep simply suspends the process for the specified amount
of time but, as far as I can tell, does nothing to hand unused CPU
cycles over to other processes.
I've done a little digging, but I can't find any way on Linux to give
away unused CPU cycles.
Perhaps the Linux task switcher doesn't allow for this capability?
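For reference, this is the kind of wait loop I mean. A minimal sketch,
assuming a plain polling scheduler; TaskDue and RunTask are hypothetical
placeholders just so it compiles, and only Sleep() from SysUtils is real:

    program SchedulerSketch;

    {$mode objfpc}{$H+}

    uses
      SysUtils;  // Sleep() is declared here in FPC

    // Hypothetical placeholders for illustration only
    function TaskDue: Boolean;
    begin
      Result := False;
    end;

    procedure RunTask;
    begin
    end;

    begin
      // Poll once per second; the Sleep() call is what is supposed to
      // keep the loop from burning CPU between checks.
      while True do
      begin
        if TaskDue then
          RunTask;
        Sleep(1000);
      end;
    end.

Whether that Sleep(1000) really leaves the CPU idle on Linux the way it
does on Windows is exactly the part I am unsure about.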
On 5/18/2021 3:59 PM, Bo Berglund via fpc-pascal wrote:
I have a pretty sizable console app written with Delphi 15 years ago but ported
to Linux using FreePascal (3.2.0) with Lazarus (2.0.12) as IDE. It runs as a
systemd service on a Raspberry Pi3.
Basically it is a scheduler that checks every minute whether there is a task to
run; otherwise it waits for the next minute to pass.
Meanwhile in another thread there is a TCP/IP socket server active for
communicating with the app over the network. So it is listening for incoming
connections.
This is seemingly working OK, but today when I checked the RPi I found, using top,
that it was running at 11% CPU, which is strange because it has nothing to do at
the moment.
I have tried to be as conservative as possible regarding wait loops etc., so in
such loops I always have a Sleep() call, which in my Windows experience used to
stop excessive CPU usage.
So I was surprised to find the high CPU usage and now I am at a loss on how to
find *where* this is happening...
Any ideas on how to proceed?
Is there some Lazarus way to find this?
(But I cannot really run the application in service mode from within Lazarus...)