On Mon, Jun 29, 2020 at 12:25 PM Ahmed Karaman <ahmedkhaledkara...@gmail.com> wrote:
>
> Hi,
>
> The second report of the TCG Continuous Benchmarking series builds
> upon the QEMU performance metrics calculated in the previous report.
> This report presents a method to dissect the number of instructions
> executed by a QEMU invocation into three main phases:
> - Code Generation
> - JIT Execution
> - Helpers Execution
> It devises a Python script that automates this process.
>
> After that, the report presents an experiment comparing the
> output of running the script on 17 different targets. Many
> conclusions can be drawn from the results; two of them are
> discussed in the analysis section.
>
> Report link:
> https://ahmedkrmn.github.io/TCG-Continuous-Benchmarking/Dissecting-QEMU-Into-Three-Main-Parts/
>
> Previous reports:
> Report 1 - Measuring Basic Performance Metrics of QEMU:
> https://lists.gnu.org/archive/html/qemu-devel/2020-06/msg06692.html
>
> Best regards,
> Ahmed Karaman
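The quoted report attributes instruction counts to three phases. As a rough illustration of how such a dissection can work, here is a minimal Python sketch (not the report's actual script) that classifies per-function instruction counts, such as those a profiler like callgrind_annotate would report, by function-name prefix. The prefixes and the catch-all "JIT execution" bucket are simplifying assumptions for this example only:

```python
# Hypothetical sketch: bucket per-function instruction counts into the
# three phases named in the report. Prefixes below are illustrative
# assumptions, not the report's exact classification rules.
from collections import defaultdict

CODE_GEN_PREFIXES = ("tcg_", "tb_gen_code", "gen_intermediate_code")
HELPER_PREFIXES = ("helper_",)

def classify(func_name):
    """Map a profiled function name to one of the three phases."""
    if func_name.startswith(CODE_GEN_PREFIXES):
        return "code_generation"
    if func_name.startswith(HELPER_PREFIXES):
        return "helpers_execution"
    # Everything else (including anonymous JITed code blocks) is
    # counted as JIT execution in this simplified model.
    return "jit_execution"

def dissect(samples):
    """Aggregate (function_name, instruction_count) pairs per phase."""
    totals = defaultdict(int)
    for func, count in samples:
        totals[classify(func)] += count
    return dict(totals)
```

For example, `dissect([("tcg_gen_code", 1200), ("helper_divu", 300), ("jitted_block", 5000)])` would attribute 1200 instructions to code generation, 300 to helpers, and 5000 to JIT execution. The real analysis in the linked report is more involved; see the report for the actual methodology.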
Hi Mr. Lukáš and Yonggang,

I've created a separate "setup" page on the reports website:
https://ahmedkrmn.github.io/TCG-Continuous-Benchmarking/setup/

It contains the hardware and OS information of the system used in the
reports, along with all dependencies and setup instructions required
to set up an identical machine.

If you have any further questions, or if you're using a different
Linux distribution, please let me know.

Best regards,
Ahmed Karaman