Hi,

This series seems to have some coding style problems. See output below for
more information:
Type: series
Message-id: 20180111082452.27295.85707.stgit@pasha-VirtualBox
Subject: [Qemu-devel] [RFC PATCH v3 00/30] replay additions

=== TEST SCRIPT BEGIN ===
#!/bin/bash
BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0
git config --local diff.renamelimit 0
git config --local diff.renames True
commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
    echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
    if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
        failed=1
        echo
    fi
    n=$((n+1))
done
exit $failed
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
fbcc1a65e0 replay: don't process async events when warping the clock
1d748307de replay: improve replay performance
923f1e2ac4 scripts/qemu-gdb/timers.py: new helper to dump timer state
7f1db2f30e scripts/replay-dump.py: replay log dumper
9d89387197 scripts/analyse-locks-simpletrace.py: script to analyse lock times
fe9d0ec29d util/qemu-thread-*: add qemu_lock, locked and unlock trace events
58cdd08f6e scripts/qemu-gdb: add simple tcg lock status helper
213a6c59bf replay: avoid recursive call of checkpoints
6774ab6c4d replay: check return values of fwrite
d9a32c2033 replay: don't destroy mutex at exit
d2edad4792 replay: push replay_mutex_lock up the call tree
034eb79587 replay: make locking visible outside replay code
69caf6e5a6 replay/replay-internal.c: track holding of replay_lock
4d1487b6a5 replay/replay.c: bump REPLAY_VERSION again
9b32e15532 cpus: only take BQL for sleeping threads
d7f943773b cpus: push BQL lock to qemu_*_wait_io_event
1a17dfd948 target/arm/arm-powertctl: drop BQL assertions
e56b70a8b2 icount: fixed saving/restoring of icount warp timers
16af898754 replay: save prior value of the host clock
876573aac0 replay: make safe vmstop at record/replay
021d114781 replay: added replay log format description
dc9a4afa2a replay: fix save/load vm for non-empty queue
800d95810e replay: fixed replay_enable_events
ad5f651b8a replay: fix processing async events
9aae87950a replay: disable default snapshot for record/replay
92651c0d68 blkreplay: create temporary overlay for underlaying devices
e6202f9fc2 block: implement bdrv_snapshot_goto for blkreplay
4a2a5bab32 This patch adds a condition before overwriting exception_index fields.
3a3fe58934 cpu: flush TB cache when loading VMState
866b1800de hpet: recover timer offset correctly

=== OUTPUT BEGIN ===
Checking PATCH 1/30: hpet: recover timer offset correctly...
Checking PATCH 2/30: cpu: flush TB cache when loading VMState...
Checking PATCH 3/30: This patch adds a condition before overwriting exception_index fields....
Checking PATCH 4/30: block: implement bdrv_snapshot_goto for blkreplay...
Checking PATCH 5/30: blkreplay: create temporary overlay for underlaying devices...
Checking PATCH 6/30: replay: disable default snapshot for record/replay...
Checking PATCH 7/30: replay: fix processing async events...
Checking PATCH 8/30: replay: fixed replay_enable_events...
Checking PATCH 9/30: replay: fix save/load vm for non-empty queue...
Checking PATCH 10/30: replay: added replay log format description...
Checking PATCH 11/30: replay: make safe vmstop at record/replay...
Checking PATCH 12/30: replay: save prior value of the host clock...
Checking PATCH 13/30: icount: fixed saving/restoring of icount warp timers...
ERROR: spaces required around that '*' (ctx:VxV)
#170: FILE: cpus.c:689:
+        .subsections = (const VMStateDescription*[]) {
                                                 ^

total: 1 errors, 0 warnings, 173 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 14/30: target/arm/arm-powertctl: drop BQL assertions...
Checking PATCH 15/30: cpus: push BQL lock to qemu_*_wait_io_event...
Checking PATCH 16/30: cpus: only take BQL for sleeping threads...
Checking PATCH 17/30: replay/replay.c: bump REPLAY_VERSION again...
Checking PATCH 18/30: replay/replay-internal.c: track holding of replay_lock...
Checking PATCH 19/30: replay: make locking visible outside replay code...
Checking PATCH 20/30: replay: push replay_mutex_lock up the call tree...
Checking PATCH 21/30: replay: don't destroy mutex at exit...
Checking PATCH 22/30: replay: check return values of fwrite...
Checking PATCH 23/30: replay: avoid recursive call of checkpoints...
ERROR: do not initialise statics to 0 or NULL
#22: FILE: replay/replay.c:179:
+    static bool in_checkpoint = false;

total: 1 errors, 0 warnings, 32 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 24/30: scripts/qemu-gdb: add simple tcg lock status helper...
Checking PATCH 25/30: util/qemu-thread-*: add qemu_lock, locked and unlock trace events...
ERROR: line over 90 characters
#112: FILE: util/qemu-thread-posix.c:158:
+void qemu_cond_wait_impl(QemuCond *cond, QemuMutex *mutex, const char *file, const int line)

total: 1 errors, 0 warnings, 108 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 26/30: scripts/analyse-locks-simpletrace.py: script to analyse lock times...
Checking PATCH 27/30: scripts/replay-dump.py: replay log dumper...
Checking PATCH 28/30: scripts/qemu-gdb/timers.py: new helper to dump timer state...
Checking PATCH 29/30: replay: improve replay performance...
WARNING: line over 80 characters
#32: FILE: cpus.c:1459:
+                                    (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);

total: 0 errors, 1 warnings, 94 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 30/30: replay: don't process async events when warping the clock...
=== OUTPUT END ===

Test command exited with code: 1

---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-de...@freelists.org