One more thought: is there interest in running cross-host Flight
benchmarks, and perhaps validating them against iperf or a similar
tool? It would be great to get latency/throughput numbers and make
sure upgrades to gRPC don't tank performance by accident, and it would
help make the case for why people should use Flight.
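To make that concrete, here is a rough sketch of what a cross-host
throughput measurement could look like with pyarrow.flight. The host
name "perf-server", the port, and the data size are made up, and the
exact client/server API may differ between pyarrow versions. The
reported number could then be sanity-checked against an iperf3 run
between the same two machines (iperf3 -s on one, iperf3 -c perf-server
on the other):

    # Server side, run on one host: block serving the same table for
    # any ticket.
    import pyarrow as pa
    import pyarrow.flight as flight

    class PerfServer(flight.FlightServerBase):
        def __init__(self, location, table):
            super().__init__(location)
            self._table = table

        def do_get(self, context, ticket):
            return flight.RecordBatchStream(self._table)

    table = pa.table({"x": list(range(10_000_000))})
    PerfServer("grpc://0.0.0.0:5005", table).serve()

    # Client side, run on the other host: time one do_get and report
    # effective throughput.
    import time
    import pyarrow.flight as flight

    client = flight.FlightClient("grpc://perf-server:5005")
    start = time.perf_counter()
    result = client.do_get(flight.Ticket(b"")).read_all()
    elapsed = time.perf_counter() - start
    print("%.1f MB/s" % (result.nbytes / elapsed / 1e6))

Latency could be measured the same way by timing many do_get round
trips on a small payload instead of one large one.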

As a starting point, I assume localhost benchmarks with Flight would
just work with the existing benchmark infrastructure.
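For example (assuming the Python benchmarks, which use ASV-style
classes; the class and method names here are invented, and again the
Flight API details may not match exactly), a localhost round-trip
benchmark might look something like:

    import pyarrow as pa
    import pyarrow.flight as flight

    class FlightLocalhostRoundTrip:
        """Hypothetical ASV-style benchmark: time a localhost do_get."""

        def setup(self):
            table = pa.table({"x": list(range(1_000_000))})

            class Server(flight.FlightServerBase):
                def do_get(self, context, ticket):
                    return flight.RecordBatchStream(table)

            # Port 0 lets the OS pick a free port; server.port reports it.
            self.server = Server("grpc://127.0.0.1:0")
            self.client = flight.FlightClient(
                "grpc://127.0.0.1:%d" % self.server.port)

        def time_do_get(self):
            self.client.do_get(flight.Ticket(b"")).read_all()

        def teardown(self):
            self.server.shutdown()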

It might also be interesting to benchmark Flight implementations
against each other. This all probably falls under a general need for
more Flight tests/benchmarks.

Best,
David

On 3/30/19, Antoine Pitrou <anto...@python.org> wrote:
>
> On 29/03/2019 at 16:06, Wes McKinney wrote:
>>
>>> * How to make it available to all developers? Do we want to integrate
>>> into CI or not?
>>
>> I'd like to eventually have a bot that we can ask to run a benchmark
>> comparison versus master. Reporting on all PRs automatically might be
>> quite a bit of work (and load on the machines).
>
> We should also have a daily (or weekly, but preferably daily IMO) run of
> the benchmarks on latest git master.  This would make it easy to narrow
> down the potential culprit for a regression.
>
> Regards
>
> Antoine.
>
