Hello,

note that we have (had?) the Python benchmarks continuously running, with results 
reported at https://pandas.pydata.org/speed/arrow/. It seems this stopped in July 2018.

Uwe

On Fri, Jan 18, 2019, at 9:23 AM, Antoine Pitrou wrote:
> 
> Hi Areg,
> 
> That sounds like a good idea to me.  Note our benchmarks are currently
> scattered across the various implementations.  The two that I know of:
> 
> - the C++ benchmarks are standalone executables created using the Google
> Benchmark library, aptly named "*-benchmark" (or "*-benchmark.exe" on
> Windows)
> - the Python benchmarks use the ASV utility:
> https://github.com/apache/arrow/blob/master/docs/source/python/benchmarks.rst
> 
> There may be more in the other implementations.
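
For anyone not familiar with ASV: those Python benchmarks are plain classes whose
time_* methods get timed on each run. A minimal sketch (the module, class and sizes
below are illustrative, not taken from the actual Arrow suite):

    # benchmarks/example.py -- illustrative only, not part of the Arrow suite
    import numpy as np
    import pyarrow as pa

    class ConvertFromNumpy:
        # asv times every method whose name starts with "time_";
        # setup() runs before timing, once per parameter value.
        params = [10**5, 10**6]
        param_names = ['length']

        def setup(self, length):
            self.data = np.random.randn(length)

        def time_array_from_numpy(self, length):
            pa.array(self.data)

Running "asv run" (or "asv dev" for a quick local check) collects these results
per commit, which is the kind of data a dashboard would consume.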
> 
> Regards
> 
> Antoine.
> 
> 
> On 18/01/2019 at 07:13, Melik-Adamyan, Areg wrote:
> > Hello,
> > 
> > I want to restart/rejoin the discussion about creating an Arrow benchmarking 
> > dashboard. I propose running the performance benchmarks per commit to track 
> > the changes.
> > The proposal is to build infrastructure for per-commit tracking, comprising 
> > the following parts:
> > - The JetBrains-hosted TeamCity for OSS (https://teamcity.jetbrains.com/) as 
> > the build system 
> > - Agents running both in the cloud as VMs/containers (DigitalOcean or others) 
> > and on bare metal (Packet.net/AWS), plus on-premise machines (Nvidia boxes?) 
> > - JFrog Artifactory storage and management for OSS projects 
> > (https://jfrog.com/open-source/#artifactory2) 
> > - Codespeed as the frontend (https://github.com/tobami/codespeed) 
> > 
> > I am volunteering to build such a system (if needed, more Intel folks will be 
> > involved) so we can start tracking performance on various platforms and 
> > understand how changes affect it.
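
For the Codespeed part, results get into the dashboard via plain HTTP POSTs, so the
per-commit runner only needs a small upload step. A rough sketch (endpoint and field
names as in the Codespeed README; host, environment and benchmark names below are
placeholders):

    # Illustrative sketch: submit one benchmark result to a Codespeed instance.
    import urllib.parse
    import urllib.request

    result = {
        'commitid': 'abc1234',             # git commit being measured
        'branch': 'master',
        'project': 'Arrow',
        'executable': 'arrow-cpp',
        'benchmark': 'builder-benchmark',  # placeholder benchmark name
        'environment': 'bare-metal-x86_64',
        'result_value': 123.4,             # e.g. wall-clock time in ms
    }
    urllib.request.urlopen(
        'http://codespeed.example.org/result/add/',
        urllib.parse.urlencode(result).encode())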
> > 
> > Please let me know your thoughts!
> > 
> > Thanks,
> > -Areg.
