Hello

With the following changes, it seems we might reach the point where
we're able to run the Python-based benchmark suite across multiple
commits (at least those not predating these changes):
https://github.com/apache/arrow/pull/1775
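
(For illustration only: assuming the suite is driven by ASV / airspeed
velocity, a run over a range of commits might look roughly like the
sketch below; the commit range is just a placeholder, not a real SHA.)

    # Hypothetical sketch: drive an ASV-based benchmark run over a git range.
    import subprocess

    def run_benchmarks(commit_range="abc1234..master", steps=20):
        # "asv run" benchmarks each selected commit in the given range;
        # --steps limits how many commits are sampled from that range.
        subprocess.run(
            ["asv", "run", "--steps", str(steps), commit_range],
            check=True,
        )

    if __name__ == "__main__":
        run_benchmarks()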

To make this truly useful, we would need a dedicated host.  Ideally a
(Linux) OS running on bare metal, with SMT/HyperThreading disabled.
If running virtualized, the VM should have dedicated physical CPU cores.

That machine would run the benchmarks on a regular basis (perhaps once
per night) and publish the results in static HTML form somewhere.
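
(Again assuming ASV, a minimal nightly job could be a cron-driven
script along these lines; the output directory is a placeholder.)

    # Hypothetical nightly job (e.g. invoked from cron): benchmark commits
    # not yet measured and regenerate the static HTML report to be served.
    import subprocess

    def nightly_run(html_dir="/var/www/arrow-benchmarks"):
        # "NEW" asks asv to benchmark commits it has not measured yet.
        subprocess.run(["asv", "run", "NEW"], check=True)
        # "asv publish" writes a static HTML report into the given directory.
        subprocess.run(["asv", "publish", "--html-dir", html_dir], check=True)

    if __name__ == "__main__":
        nightly_run()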

(Note: access to NVIDIA hardware might be nice to have in the future,
but right now there are no CUDA benchmarks in the Python benchmark suite.)

What should be the procedure here?

Regards

Antoine.
