On 09/08/2010 03:48 PM, Anthony Liguori wrote:
On 09/08/2010 03:23 AM, Avi Kivity wrote:
On 09/08/2010 01:27 AM, Anthony Liguori wrote:
FWIW, L2s are 256K at the moment, and with a two-level table it can
support 5PB of data.
I clearly suck at basic math today. The image supports 64TB today.
Dropping to 128K tables would reduce that to 16TB, and 64K tables would
give 4TB.
Maybe we should do three levels then. Some users are bound to
complain about 64TB.
That's just the default size. The table size and cluster sizes are
configurable. Without changing the cluster size, the image can
support up to 1PB.
Loading very large L2 tables on demand will result in very long
latencies. Increasing the cluster size will result in very long
first-write latencies. Adding an extra level results in an extra random
write every 4TB.
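To make the geometry concrete, here is the arithmetic behind all of
these figures as a small standalone program (the constants mirror the
qed defaults discussed above: 64K clusters, 8-byte table entries, L1
and L2 tables the same size; it's an illustration, not qemu code):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Illustration only: 64K clusters, 8-byte table entries. */
    uint64_t cluster = 64 * 1024;
    uint64_t table_bytes[] = { 256*1024, 128*1024, 64*1024, 1024*1024 };

    for (int i = 0; i < 4; i++) {
        uint64_t entries = table_bytes[i] / 8;
        uint64_t l2_span = entries * cluster;   /* data mapped by one L2 */
        uint64_t max_img = entries * l2_span;   /* two-level maximum */
        printf("%4" PRIu64 "K tables: L2 spans %4" PRIu64 " MB, "
               "max image %7" PRIu64 " GB\n",
               table_bytes[i] / 1024, l2_span >> 20, max_img >> 30);
    }
    /* Prints 64TB / 16TB / 4TB for 256K/128K/64K tables, and 1PB for
     * 1M tables.  With a third level and 64K tables, a mid-level table
     * spans 8192 * 512MB = 4TB -- hence one extra random write every
     * 4TB of allocation. */
    return 0;
}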
Today, we only need to sync() when we first allocate an L2 table
(because table locations never change). From a performance
perspective, it's the difference between an fsync() every 64k vs.
every 2GB.
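A minimal model of that path (hypothetical helper names and in-memory
stubs, not the actual qed code) to show where the one ordering point
sits:

#include <stdint.h>
#include <stdio.h>

/* In-memory stand-ins for image I/O; names are hypothetical. */
typedef struct {
    uint64_t l1_table[32768];   /* 0 = no L2 table allocated yet */
    uint64_t next_free;
    int      flushes;
} State;

static uint64_t alloc_table(State *s)   { return s->next_free += 256 * 1024; }
static uint64_t alloc_cluster(State *s) { return s->next_free += 64 * 1024; }
static void flush(State *s)             { s->flushes++; }  /* models fsync() */
static void write_l1(State *s, uint64_t i)             { (void)s; (void)i; }
static void write_data(State *s, uint64_t off)         { (void)s; (void)off; }
static void write_l2(State *s, uint64_t t, uint64_t c) { (void)s; (void)t; (void)c; }

/* A sync is needed only when a new L2 table is first published in the
 * L1 table -- once per 2GB region with the default geometry -- because
 * table locations never change afterwards. */
static void write_cluster(State *s, uint64_t l1_idx)
{
    if (s->l1_table[l1_idx] == 0) {
        uint64_t l2_off = alloc_table(s);
        flush(s);                         /* the only ordering point */
        s->l1_table[l1_idx] = l2_off;
        write_l1(s, l1_idx);
    }
    uint64_t c = alloc_cluster(s);        /* common case: no sync */
    write_data(s, c);
    write_l2(s, s->l1_table[l1_idx], c);
}

int main(void)
{
    static State s;
    for (int i = 0; i < 100000; i++)      /* ~6GB of 64K writes */
        write_cluster(&s, (uint64_t)i / 32768);
    printf("flushes: %d\n", s.flushes);   /* prints 4, not 100000 */
    return 0;
}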
Yup. From a correctness perspective, it's the difference between a
corrupted filesystem on almost every crash and a corrupted filesystem
in some very rare cases.
I'm not sure I understand your corruption comment. Are you claiming
that without checksumming you'll often get corruption, or that without
checksums you'll get corruption if you don't sync metadata updates?
No, I'm claiming that with checksums but without allocate-on-write you
will have frequent (detected) data loss after power failures. Checksums
need to go hand-in-hand with allocate-on-write (which happens to be the
principle underlying zfs and btrfs).
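Schematically (toy code, no relation to any real filesystem), the
difference looks like this:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy checksum standing in for CRC32; the point is the write ordering,
 * not the checksum algorithm. */
static uint32_t csum32(const unsigned char *p, size_t n)
{
    uint32_t c = 0;
    while (n--) c = c * 131 + *p++;
    return c;
}

struct block {
    unsigned char data[4096];
    uint32_t csum;
};

/* In-place update: between the two writes the stored checksum still
 * describes the old data.  A power failure in that window becomes a
 * checksum mismatch on the next read: detected, but the data is gone. */
void update_in_place(struct block *b, const unsigned char *new_data)
{
    memcpy(b->data, new_data, sizeof b->data);
    b->csum = csum32(b->data, sizeof b->data);
}

/* Allocate-on-write: build a fully self-consistent copy elsewhere,
 * flush it, then publish it with a single atomic pointer update.  A
 * crash leaves either the old block or the new one; both verify. */
struct block *update_cow(const unsigned char *new_data)
{
    struct block *b = malloc(sizeof *b);
    if (!b) return NULL;
    memcpy(b->data, new_data, sizeof b->data);
    b->csum = csum32(b->data, sizeof b->data);
    /* caller: fsync, then atomically repoint parent metadata at b */
    return b;
}

int main(void)
{
    unsigned char buf[4096] = { 1 };
    struct block b = { {0}, 0 };
    update_in_place(&b, buf);
    free(update_cow(buf));
    return 0;
}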
qed is very careful to ensure that we don't need to do syncs and that
we don't get corruption from data loss. I don't necessarily buy your
checksumming argument.
The requirement for checksumming comes from a different place. For
decades we've enjoyed very low undetected bit error rates. However,
the actual amount of data is increasing to the point that an
undetected bit error becomes likely, simply because of the huge number
of bits being thrown at storage. Write ordering doesn't address this
issue.
Virtualization is one of the uses where you have a huge number of bits.
btrfs addresses this, but if you have (working) btrfs you don't need
qed. Another problem is NFS: TCP and UDP checksums are incredibly weak,
and it is easy for a failure to bypass them. Ethernet CRCs are better,
but they only cover errors introduced after the CRC is computed and
before it is verified.
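Some rough numbers on why the bit count matters (1e14 bits per
unrecoverable read error is a typical commodity-drive datasheet figure,
used here purely as an illustration; real rates vary by device):

#include <stdio.h>

int main(void)
{
    /* Illustrative spec: one unrecoverable bit error per 1e14 bits
     * read.  1 TB = 2^43 bits. */
    double bits_per_error = 1e14;
    double bits_per_tb    = 8.0 * 1024 * 1024 * 1024 * 1024;

    for (int tb = 1; tb <= 1024; tb *= 4)
        printf("read %4d TB -> expect %6.2f errors\n",
               tb, tb * bits_per_tb / bits_per_error);
    return 0;
}

At a terabyte the expectation is still under a tenth of an error; by
the petabyte range you should expect dozens, and without end-to-end
checksums none of them are detected.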
Well, if we introduce a minimal format, we need to make sure it isn't
too minimal.
I'm still not sold on the idea. What we're doing now is pushing the
qcow2 complexity onto users. We don't have to worry about refcounts
now, but users have to worry about whether the machine they're copying
the image to supports qed or not.
The performance problems with qcow2 are solvable. If we preallocate
clusters, the performance characteristics become essentially the same
as qed.
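qcow2 already exposes preallocation through qemu-img; something like
(option spelling as in current qemu-img, worth double-checking against
your tree):

  qemu-img create -f qcow2 -o preallocation=metadata image.qcow2 100G

With the tables and cluster mappings written at create time, first
writes no longer have to allocate and sync metadata.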
By creating two code paths within qcow2.
You're creating two code paths for users.
It's not just the reference counts, it's the lack of guaranteed
alignment, compression, and some of the other poor decisions in the
format.
If you have two code paths in qcow2, you have non-deterministic
performance, because users who do reasonable things with their images
will end up getting catastrophically bad performance.
We can address that in the tools. "By enabling compression, you may
reduce performance for multithreaded workloads. Abort/Retry/Ignore?"
A new format doesn't introduce much additional complexity. We provide
an image conversion tool, and we can almost certainly provide an
in-place conversion tool that makes the process very fast.
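Offline conversion is the standard qemu-img path already; once qed
support is merged it would presumably look like:

  qemu-img convert -O qed original.qcow2 converted.qed

The in-place tool would avoid rewriting the data clusters entirely,
which is what would make it fast.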
It requires users to make a decision. By the time qed is ready for mass
deployment, 1-2 years will have passed. How many qcow2 images will be
in the wild then? How much scheduled downtime will be needed? How much
user confusion will be caused?
Virtualization is about compatibility. In-guest compatibility first,
but keeping the external environment stable is also important. We
really need to exhaust the possibilities with qcow2 before giving up on it.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.