On 12/06/2016 10:20 AM, Stefan Hajnoczi wrote:
> On Mon, Dec 05, 2016 at 09:06:17PM +0100, Christian Borntraeger wrote:
>> On 12/01/2016 08:26 PM, Stefan Hajnoczi wrote:
>>> This patch is based on the algorithm for the kvm.ko halt_poll_ns
>>> parameter in Linux.  The initial polling time is zero.
>>>
>>> If the event loop is woken up within the maximum polling time it means
>>> polling could be effective, so grow polling time.
>>>
>>> If the event loop is woken up beyond the maximum polling time it means
>>> polling is not effective, so shrink polling time.
>>>
>>> If the event loop makes progress within the current polling time then
>>> the sweet spot has been reached.
>>>
>>> This algorithm adjusts the polling time so it can adapt to variations in
>>> workloads.  The goal is to reach the sweet spot while also recognizing
>>> when polling would hurt more than help.
>>>
>>> Two new trace events, poll_grow and poll_shrink, are added for observing
>>> polling time adjustment.
>>>
>>> Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
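
For readers skimming the thread, here is a minimal sketch of the
grow/shrink rule described in the commit message above. The helper name,
the 4 us starting value and the factor-of-2 growth are illustrative
assumptions for this sketch, not lifted from the patch:

    #include <stdint.h>

    /* Illustrative sketch: adjust the polling time based on how long
     * this event loop iteration actually blocked (block_ns).
     */
    static int64_t adjust_poll_ns(int64_t poll_ns, int64_t poll_max_ns,
                                  int64_t block_ns)
    {
        if (block_ns <= poll_ns) {
            /* Progress within the current polling time: the sweet
             * spot has been reached, leave poll_ns alone.
             */
        } else if (block_ns > poll_max_ns) {
            /* Woken up beyond the maximum polling time: polling is
             * not effective, shrink back to zero.
             */
            poll_ns = 0;
        } else {
            /* Woken up within the maximum polling time: polling could
             * be effective, so grow (assumed 4 us start, doubling).
             */
            poll_ns = poll_ns ? poll_ns * 2 : 4000;
            if (poll_ns > poll_max_ns) {
                poll_ns = poll_max_ns;
            }
        }
        return poll_ns;
    }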
>>
>> Not sure why, but I have 4 host ramdisks attached to the same iothread
>> as the guest's virtio-blk devices. Running fio in the guest on one of
>> these disks will poll; as soon as I have 2 disks in fio I almost always
>> see shrinks (so polling stays at 0) and almost no grows.
> 
> Shrinking occurs when polling + ppoll(2) time exceeds poll-max-ns.
> 
> What is the value of poll-max-ns?

I used 50000ns as the poll-max-ns value. When using 500000ns it polls again.
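
For context, in my runs that value is set as a property of the iothread
object, along these lines (property name as proposed in this series, the
id is made up):

    -object iothread,id=iothread0,poll-max-ns=50000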

> and how long is run_poll_handlers_end - run_poll_handlers_begin?

Too long. I looked again and realized that I had used cache=none without
io=native. After adding io=native things are better: even with 4 disks
polling still happens. So it seems that the mileage will vary depending
on the settings.
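
For anyone wanting to reproduce this, a hypothetical invocation along
the lines of my setup (paths and ids are made up; io='native' in libvirt
XML corresponds to aio=native on the QEMU command line, and the -trace
syntax assumes QEMU's events-file tracing):

    qemu-system-x86_64 ... \
        -object iothread,id=iothread0,poll-max-ns=50000 \
        -drive file=/dev/ram0,if=none,id=drive0,cache=none,aio=native \
        -device virtio-blk-pci,drive=drive0,iothread=iothread0 \
        -trace events=events.txt

with events.txt listing poll_grow, poll_shrink, run_poll_handlers_begin
and run_poll_handlers_end.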

Christian
