On 02/27/2015 01:32 PM, Eric Blake wrote:
> On 02/27/2015 10:24 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Reviewed-by: John Snow <js...@redhat.com>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@parallels.com>
>> ---
>>  block.c                | 1 +
>>  include/qemu/hbitmap.h | 8 ++++++++
>>  qapi/block-core.json   | 4 +++-
>>  util/hbitmap.c         | 8 ++++++++
>>  4 files changed, 20 insertions(+), 1 deletion(-)
>>
>> +++ b/qapi/block-core.json
>> @@ -336,11 +336,13 @@
>>  #
>>  # @frozen: whether the dirty bitmap is frozen (Since 2.3)
>>  #
>> +# @md5: md5 checksum of the last bitmap level (since 2.3)
>> +#
>>  # Since: 1.3
>>  ##
>>  { 'type': 'BlockDirtyInfo',
>>    'data': {'*name': 'str', 'count': 'int', 'granularity': 'uint32',
>> -           'disabled': 'bool', 'frozen': 'bool'} }
>> +           'disabled': 'bool', 'frozen': 'bool', 'md5': 'str'} }
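
For reference, since the hbitmap.c hunk is not quoted here: the new
field is presumably a hex MD5 digest of the raw words backing the
bitmap's last (finest) level, computed with something like GLib's
checksum helper. A rough stand-in sketch (the name and arguments below
are invented for illustration; the real helper lives in util/hbitmap.c):

    #include <glib.h>

    /* Illustrative only: hash the raw words backing the finest bitmap
     * level and return a hex MD5 string, to be freed with g_free(). */
    static char *bitmap_level_md5(const unsigned long *level, gsize nbits)
    {
        gsize bits_per_word = sizeof(unsigned long) * 8;
        gsize nwords = (nbits + bits_per_word - 1) / bits_per_word;

        return g_compute_checksum_for_data(G_CHECKSUM_MD5,
                                           (const guchar *)level,
                                           nwords * sizeof(unsigned long));
    }
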
> How long does it take to compute the md5 sum? Is enabling this
> information unconditionally going to significantly slow down the call,
> when the information is useful primarily for debugging?
>
> That said, it looks okay code-wise, so as long as I am not uncovering a
> design flaw:
>
> Reviewed-by: Eric Blake <ebl...@redhat.com>

A dirty bitmap containing 1MiB of raw data at the default granularity
of 64KiB describes 512GiB of disk space. Allocating 1MiB, filling each
byte with a pattern (255 % offset), computing the MD5 checksum, and
printing the hash takes about 0.006 seconds on my computer. The same
procedure with 8MiB (describing 4TiB of disk), timing only the hash
computation, comes to 0.014033 seconds. The time only starts to become
appreciable at around 64MiB of bitmap data, which would imply 32TiB of
disk; that takes about a tenth of a second.
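
For the curious, the measurement looks roughly like this (a minimal
standalone sketch using GLib's MD5 helper, not my exact test; build
with: gcc md5bench.c $(pkg-config --cflags --libs glib-2.0)):

    #include <glib.h>
    #include <stdio.h>

    /* Time MD5 over buffers the size of a dirty bitmap's last level. */
    static void time_md5(gsize len)
    {
        guchar *buf = g_malloc(len);
        gchar *hash;
        gint64 start, elapsed;
        gsize i;

        /* Fill with a repeating byte pattern so we don't hash all zeroes. */
        for (i = 0; i < len; i++) {
            buf[i] = i % 255;
        }

        start = g_get_monotonic_time();
        hash = g_compute_checksum_for_data(G_CHECKSUM_MD5, buf, len);
        elapsed = g_get_monotonic_time() - start;

        printf("%6" G_GSIZE_FORMAT " KiB: %s  %.6f s\n",
               len / 1024, hash, elapsed / 1e6);

        g_free(hash);
        g_free(buf);
    }

    int main(void)
    {
        /* 1MiB, 8MiB and 64MiB of bitmap data: roughly 512GiB, 4TiB and
         * 32TiB of disk at the default 64KiB granularity. */
        gsize sizes[] = { 1 << 20, 8 << 20, 64 << 20 };
        gsize i;

        for (i = 0; i < G_N_ELEMENTS(sizes); i++) {
            time_md5(sizes[i]);
        }
        return 0;
    }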