On 12/23/24 22:23, Geoff Steckel wrote:
> On 12/23/24 1:43 PM, Christian Schulte wrote:
>> Not criticizing OpenBSD in any way. Let me try to explain a common use
>> case. There is a data source capable of providing at most X bytes per
>> second. The application needs to be set up so that it can receive those
>> X bytes per second without spin locking or waiting for data. If it
>> polls too fast, it slows down the whole system waiting for data. If it
>> polls too slowly, it cannot process those bytes fast enough, and those
>> bytes need to be processed. So there is a receiving process which needs
>> to be able to consume exactly those X bytes per second. That consumer
>> also needs to be set up so that it can process those bytes in parallel
>> as fast as possible. Size the consumer too small and the producer
>> starts spin locking or the like and cannot keep up with the data rate
>> it needs to handle, because the consumer does not process the data fast
>> enough. Size the consumer too big and the consumer starts spin locking
>> or the like, waiting for the producer to provide more data. I am
>> searching for an API that lets the application adapt to those
>> situations automatically. When the data rate on the receiving side
>> decreases, the consumer side does not need Y processes in parallel all
>> spin locking while waiting for more data. When the data rate on the
>> receiving side increases, the consumer needs more compute so it does
>> not slow down the receiver. Does this make things clearer?
>>
> Thank you for your explanation.
> 
> I'm assuming that there are multiple logical streams which are
> extracted from the incoming packet streams and distributed
> to multiple consumers.

Exactly. Read samples from a stream and group them so that they can be
processed in parallel; that number is very dynamic. For example: get
1000 samples and all of them can be processed in parallel, vs. get 1000
samples and only 10 of them can be processed in parallel. The number of
processors has to be managed dynamically.

> Is it true that you are attempting to perfectly assign and utilize
> all CPUs for packet & application processing?
> If packet ordering is to be preserved it's not clear that
> perfect allocation is possible.
> 
>>>> I am searching for an API to make the application adhere to those
>>>> situations automatically
> 
> The only way I can see to achieve 100% perfect usage is:
>   buffer the incoming packet stream deeply
>   inspect each packet to measure application resources needed
>   summarize those results over some time period

Six years of manually calibrating parameters, and the next catastrophe
will prove you wrong. Either someone needs to monitor the system 24/7
and be able to tweak parameters immediately, or the application must do
this automatically and be done with it.

>   systemwide, measure resource capability and current utilization
>   determine systemwide resource allocation using some algorithm
>   adjust systemwide application resources
>   forward packets to each application

The only API I found is clock_gettime(CLOCK_MONOTONIC), doing the rest
manually myself. I am not sure whether I would be better off using
CLOCK_THREAD_CPUTIME_ID, because I am not sure whether pthread mutex
lock/unlock and condition wait/broadcast will be accounted for
correctly. The control flow becomes something like this:

t0 = start time
do the work - no syscalls here just number crunching on memory
t1 = end time
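
In C, with placeholder work (the buffer and the crunch() loop below are
just stand-ins for the real number crunching), that pattern might look
like the following sketch. CLOCK_MONOTONIC measures elapsed wall time,
including any time the thread spends blocked on a mutex or condition
variable; CLOCK_THREAD_CPUTIME_ID only charges time the thread actually
spends executing, so time parked in pthread_cond_wait() should not show
up there.

#include <stdio.h>
#include <time.h>

static double
ts_diff(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

static void
crunch(double *buf, size_t n)
{
        /* placeholder for the real number crunching on memory */
        for (size_t i = 0; i < n; i++)
                buf[i] = buf[i] * 1.000001 + 1.0;
}

int
main(void)
{
        static double samples[1 << 20];
        struct timespec w0, w1, c0, c1;

        clock_gettime(CLOCK_MONOTONIC, &w0);            /* t0 = start time */
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &c0);

        crunch(samples, sizeof(samples) / sizeof(samples[0]));

        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &c1);
        clock_gettime(CLOCK_MONOTONIC, &w1);            /* t1 = end time */

        printf("wall %.6f s, cpu %.6f s\n",
            ts_diff(&w0, &w1), ts_diff(&c0, &c1));
        return 0;
}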

So I add two syscalls to fetch the time values into code paths that
intentionally make no other syscalls, calculate the corresponding rate
values manually, and manage the number of worker threads, the queue
size, the network buffer size and so on based on that. This works for
me on OpenBSD. It does not work for me on Linux, because there it is
not easy to keep things within well-known bounds; there seem to be no
well-known bounds. Make it the sole application on a dedicated machine
where nothing can get in between.
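
For the management step, the measured rates can drive a simple
controller. A rough sketch, with made-up names and thresholds
(adjust_workers(), the 0.9 and 1.5 factors), could look like this:

#include <stddef.h>
#include <stdio.h>

struct tuning {
        size_t nworkers;        /* current number of worker threads */
        size_t min_workers;
        size_t max_workers;     /* e.g. number of CPUs */
};

/*
 * consumed_rate: samples per second actually processed in the last
 * interval (work done / measured time); incoming_rate: samples per
 * second arriving from the receiver over the same interval.
 */
static void
adjust_workers(struct tuning *t, double consumed_rate, double incoming_rate)
{
        if (consumed_rate < incoming_rate * 0.9 &&
            t->nworkers < t->max_workers)
                t->nworkers++;          /* falling behind: add a consumer */
        else if (consumed_rate > incoming_rate * 1.5 &&
            t->nworkers > t->min_workers)
                t->nworkers--;          /* mostly idle: drop a consumer */
        /* queue and socket buffer sizes could be scaled the same way */
}

int
main(void)
{
        struct tuning t = { 4, 1, 8 };

        adjust_workers(&t, 80000.0, 100000.0);   /* falling behind */
        printf("workers now %zu\n", t.nworkers); /* prints 5 */
        return 0;
}

Actually starting or parking worker threads on each change is left out
here; the point is only that the decision is driven by the measured
rates instead of hand-tuned constants.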

> 
> I worked on a network appliance which did complex resource
> allocation while forwarding packets. It wasn't simple.

Not simple indeed. Thanks for your thoughts on this.

-- 
Christian
