>>> On 3/11/2014 at 10:24 PM, in message
<20140311142451.gm7...@stefanha-thinkpad.redhat.com>, Stefan Hajnoczi
<stefa...@gmail.com> wrote:
> On Mon, Mar 10, 2014 at 03:31:49PM +0800, Chunyan Liu wrote:
> > diff --git a/block/qed.h b/block/qed.h
> > index 5d65bea..b024751 100644
> > --- a/block/qed.h
> > +++ b/block/qed.h
> > @@ -43,7 +43,7 @@
> >   *
> >   * All fields are little-endian on disk.
> >   */
> > -
> > +#define QED_DEFAULT_CLUSTER_SIZE 65536
> >  enum {
> >      QED_MAGIC = 'Q' | 'E' << 8 | 'D' << 16 | '\0' << 24,
> >
> > @@ -69,7 +69,6 @@ enum {
> >      */
> >      QED_MIN_CLUSTER_SIZE = 4 * 1024, /* in bytes */
> >      QED_MAX_CLUSTER_SIZE = 64 * 1024 * 1024,
> > -    QED_DEFAULT_CLUSTER_SIZE = 64 * 1024,
>
> Why is this change made for cluster size but not table size?
According to the existing create_options, "cluster size" has a default value of
QED_DEFAULT_CLUSTER_SIZE. After switching to create_opts, that default has to be
stringized and assigned to .def_value_str, i.e.
.def_value_str = stringify(QED_DEFAULT_CLUSTER_SIZE), so QED_DEFAULT_CLUSTER_SIZE
cannot be an expression; hence this change. "table size" has no default value in
create_options, so it does not need such a change.

>
> Stefan
>
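For illustration, here is a minimal sketch (not part of the patch) of why the
default needs to be a plain numeric literal, assuming QEMU-style stringify
helpers that reduce to the preprocessor # operator:

    #include <stdio.h>

    /* QEMU-style stringify helpers (the real ones live in QEMU's headers;
     * copied here as an assumption for illustration). */
    #define tostring(s)  #s
    #define stringify(s) tostring(s)

    /* Old situation: an enum constant is invisible to the preprocessor,
     * so stringify() just produces the identifier itself. */
    enum { QED_DEFAULT_CLUSTER_SIZE_OLD = 64 * 1024 };

    /* A macro holding an expression is no better: stringify() keeps the
     * expression text, which the option code cannot parse as a size. */
    #define QED_DEFAULT_CLUSTER_SIZE_EXPR (64 * 1024)

    /* New definition from the patch: a plain literal stringizes cleanly. */
    #define QED_DEFAULT_CLUSTER_SIZE 65536

    int main(void)
    {
        puts(stringify(QED_DEFAULT_CLUSTER_SIZE_OLD));  /* "QED_DEFAULT_CLUSTER_SIZE_OLD" */
        puts(stringify(QED_DEFAULT_CLUSTER_SIZE_EXPR)); /* "(64 * 1024)" */
        puts(stringify(QED_DEFAULT_CLUSTER_SIZE));      /* "65536" -> usable as .def_value_str */
        return 0;
    }

So only the plain literal form gives a string that can be used directly as
.def_value_str.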