Matt, could you share your benchmark? I'm seeing barely any difference.

With lock:
$ hyperfine -- "ansible-playbook --forks 25 -i 100_hosts.yml 100_debugs.yml"
Benchmark #1: ansible-playbook --forks 25 -i 100_hosts.yml 100_debugs.yml
  Time (mean ± σ):      6.744 s ±  0.107 s    [User: 27.194 s, System: 1.963 s]
  Range (min … max):    6.537 s …  6.829 s    10 runs

Without lock:

$ hyperfine -- "ansible-playbook --forks 25 -i 100_hosts.yml 100_debugs.yml"
Benchmark #1: ansible-playbook --forks 25 -i 100_hosts.yml 100_debugs.yml
  Time (mean ± σ):      6.675 s ±  0.090 s    [User: 27.160 s, System: 1.936 s]
  Range (min … max):    6.499 s …  6.748 s    10 runs

$ git rev-parse HEAD
c4e211a4291162c17054f44ad5381856ffff802f

$ git diff | cat
diff --git a/lib/ansible/plugins/strategy/__init__.py b/lib/ansible/plugins/strategy/__init__.py
index 70dea1ea6f..31e0f58eb5 100644
--- a/lib/ansible/plugins/strategy/__init__.py
+++ b/lib/ansible/plugins/strategy/__init__.py
@@ -101,14 +101,13 @@ def results_thread_main(strategy):
                 strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
             elif isinstance(result, TaskResult):
                 strategy.normalize_task_result(result)
-                with strategy._results_lock:
-                    # only handlers have the listen attr, so this must be a handler
-                    # we split up the results into two queues here to make sure
-                    # handler and regular result processing don't cross wires
-                    if 'listen' in result._task_fields:
-                        strategy._handler_results.append(result)
-                    else:
-                        strategy._results.append(result)
+                # only handlers have the listen attr, so this must be a handler
+                # we split up the results into two queues here to make sure
+                # handler and regular result processing don't cross wires
+                if 'listen' in result._task_fields:
+                    strategy._handler_results.append(result)
+                else:
+                    strategy._results.append(result)
             else:
                 display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
     except (IOError, EOFError):
@@ -223,7 +222,6 @@ class StrategyBase:

         self._results = deque()
         self._handler_results = deque()
-        self._results_lock = threading.Condition(threading.Lock())

         # create the result processing thread for reading results in the background
         self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
@@ -524,15 +522,12 @@ class StrategyBase:
         cur_pass = 0
         while True:
             try:
-                self._results_lock.acquire()
                 if do_handlers:
                     task_result = self._handler_results.popleft()
                 else:
                     task_result = self._results.popleft()
             except IndexError:
                 break
-            finally:
-                self._results_lock.release()

             original_host = task_result._host
             original_task = task_result._task

$ cat 100_hosts.yml
all:
  hosts:
    local[001:100]:
  vars:
    ansible_connection: local

$ cat 100_debugs.yml
- hosts: all
  gather_facts: false
  tasks:
    - debug:
      loop: "{{ range(100) }}"

I've also appended two small illustrative sketches at the end of this
mail: the lock-free deque pattern the diff leaves behind, and the
wait()/notify() variant I floated earlier.

On Mon, 22 Mar 2021 at 19:43, Matt Martz <m...@sivel.net> wrote:

> I'll honestly say I was surprised too. I expected the opposite. These are
> the questions that chew away at my days, running endless performance
> profiling.
>
> On Mon, Mar 22, 2021 at 2:41 PM Alex Willmer <a...@moreati.org.uk> wrote:
>
>> Huh, I would not have guessed that. Thanks, I'll try that next time I
>> can't explain the presence of some code.
>>
>> On Mon, 22 Mar 2021 at 15:39, Matt Martz <m...@sivel.net> wrote:
>>
>>> I did some basic performance testing here, and the lock as currently
>>> implemented provides a decent performance improvement.
>>>
>>> On my machine with 100 hosts, 100 debug tasks, and 25 forks, I found
>>> that removing the lock increased execution time by close to 60 seconds.
>>>
>>> Based on that, I'd be hesitant to remove it.
>>>
>>> On Fri, Mar 19, 2021 at 5:41 PM al...@moreati.org.uk <a...@moreati.org.uk> wrote:
>>>
>>>> ansible.plugins.strategy.StrategyBase._results_lock is acquired to
>>>> control access to StrategyBase._results and
>>>> StrategyBase._handler_results. Specifically:
>>>>
>>>> - in ansible.plugins.strategy.results_thread_main(), before append()ing
>>>>   a result
>>>> - in StrategyBase._process_pending_results(), before popleft()ing
>>>>
>>>> However, StrategyBase._results and _handler_results are both
>>>> collections.deque objects. From the Python docs:
>>>>
>>>> > Deques support thread-safe, memory efficient appends and pops from
>>>> > either side of the deque.
>>>>
>>>> Thus, AFAICT the lock isn't needed for the deque operations and could
>>>> possibly be removed. Have I missed something? Is the lock providing
>>>> another protection I've not noticed? Is it there as a belt-and-braces
>>>> measure?
>>>>
>>>> If there's no reason to have it, and interest in removing it, I'll
>>>> submit a PR. Alternatively, maybe there's benefit in keeping the lock
>>>> and using wait()/notify() instead of some of the polling.
>>>>
>>>> Regards, Alex
>>>
>>> --
>>> Matt Martz
>>> @sivel
>>> sivel.net
>>
>> --
>> Alex Willmer <a...@moreati.org.uk>
>
> --
> Matt Martz
> @sivel
> sivel.net

--
Alex Willmer <a...@moreati.org.uk>
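P.S. For anyone skimming, here is a minimal standalone sketch of the
pattern the diff above leaves behind: one thread append()s onto a
collections.deque while another popleft()s, with no lock. The names
(results, producer, N) are made up for illustration; this is not the
Ansible code itself. CPython documents deque append/pop as thread-safe,
so no items should be lost or duplicated:

import threading
from collections import deque

results = deque()
N = 100_000

def producer():
    # appends are atomic on a deque; no lock needed
    for i in range(N):
        results.append(i)

t = threading.Thread(target=producer)
t.start()

consumed = 0
while consumed < N:
    try:
        results.popleft()
    except IndexError:
        continue  # deque momentarily empty; poll again, as the strategy loop does
    consumed += 1

t.join()
assert not results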
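And a sketch of the wait()/notify() alternative I floated: keep a
Condition, but sleep until a result arrives instead of polling. Again,
the names (results_cond, producer) are illustrative only, not the actual
Ansible internals:

import threading
from collections import deque

results = deque()
results_cond = threading.Condition()
N = 100

def producer():
    for i in range(N):
        with results_cond:
            results.append(i)
            results_cond.notify()  # wake the consumer if it is wait()ing

t = threading.Thread(target=producer)
t.start()

for _ in range(N):
    with results_cond:
        while not results:
            results_cond.wait()  # releases the lock while sleeping
        results.popleft()

t.join()

The trade-off is that every append() takes the lock again, which is
exactly the overhead the diff removes, so this only pays off if the
polling itself shows up in profiles.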