On Sun, Mar 10, 2013 at 10:32 AM, Dietmar Maurer wrote:
>> What I am looking for is a stripped down patch series with just a
>> backup block job (no backup archive writer or migration code). That
>> would be easily merged and saves you from rebasing this series as QEMU
>> changes.
>
> That is Patch 2/6?

Yes. I sent an RFC series that ...
On Fri, Mar 8, 2013 at 6:44 PM, Dietmar Maurer wrote:
> >> This is a strong indicator that the backup archive code should live
> >> outside QEMU. It doesn't make sense for proxmox, oVirt, OpenStack,
> >> and others to each maintain their backup archive code inside
> >> qemu.git, tied ...
> >>
> >> For these reasons, I'm against putting backup archive code into QEMU.
> >
> > That is OK for me - I already maintain the code outside of qemu.
>
> Does this mean you will keep this patch series out-of-tree?

You are 'against putting backup archive code into QEMU', so I need to
maintain it out-of-tree.

> >> 1. QEMU can neither backup nor restore without help from the
> >> management tool.
> >
> > Backup works perfectly with the current patches. You can easily
> > trigger a backup using an HMP command. This is not really important,
> > but it works.
>
> If you send me a VMA file I can't restore it ...
On Fri, Mar 8, 2013 at 12:01 PM, Dietmar Maurer wrote:
> > Anyway, an additional RPC layer always adds overhead, and it can be
> > completely avoided. Maybe not much, but I want to make backup as
> > efficient as possible.
>
> The drawbacks outweigh the performance advantage:

not for me.

> 1. QEMU can neither backup nor restore without help from the
> management tool.
On Thu, Mar 07, 2013 at 09:28:40AM +0000, Dietmar Maurer wrote:
> > When we run backup, we need to read such a block on every write from
> > the guest. So if we increase the block size we get additional delays.
>
> Don't increase the bitmap block size.
>
> Just let the block job do larger reads. This is the bulk of the I/O
> workload. You can use large reads ...

> > > > You can make that as complex as you want. I simply do not need
> > > > that.
> > >
> > > Then you also don't need the performance that you lose by using NBD.
> >
> > Why exactly?
>
> Because your format kills more performance than any NBD connection
> could.

That is not true.
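
To make the "larger reads" suggestion concrete: the dirty-tracking
granularity can stay at 64 KB while the bulk copy issues bigger reads.
A minimal sketch, with hypothetical helper names (cluster_is_dirty,
blk_read_bytes and archive_write_cluster are stand-ins, not the real
QEMU interfaces):

#include <stddef.h>
#include <stdbool.h>
#include <stdint.h>

#define CLUSTER_SIZE   (64 * 1024)   /* bitmap granularity stays 64 KB */
#define BATCH_CLUSTERS 4             /* bulk copy reads 256 KB at a time */

/* stand-ins for the real block layer / archive writer plumbing */
bool cluster_is_dirty(const unsigned long *bitmap, int64_t cluster);
int  blk_read_bytes(void *bs, int64_t offset, void *buf, size_t len);
int  archive_write_cluster(void *out, int64_t cluster, const void *buf);

/* buf must hold BATCH_CLUSTERS * CLUSTER_SIZE bytes */
static int copy_bulk(void *bs, void *out, const unsigned long *bitmap,
                     int64_t nb_clusters, uint8_t *buf)
{
    for (int64_t c = 0; c < nb_clusters; c += BATCH_CLUSTERS) {
        int64_t n = nb_clusters - c;
        if (n > BATCH_CLUSTERS) {
            n = BATCH_CLUSTERS;
        }
        /* one large read covers several 64 KB clusters ... */
        int ret = blk_read_bytes(bs, c * CLUSTER_SIZE, buf,
                                 n * CLUSTER_SIZE);
        if (ret < 0) {
            return ret;
        }
        /* ... but the archive and the bitmap still work per 64 KB
         * cluster, so the copy-before-write path for guest writes
         * keeps its small granularity */
        for (int64_t i = 0; i < n; i++) {
            if (cluster_is_dirty(bitmap, c + i)) {
                continue;   /* already copied out by a write intercept */
            }
            ret = archive_write_cluster(out, c + i,
                                        buf + i * CLUSTER_SIZE);
            if (ret < 0) {
                return ret;
            }
        }
    }
    return 0;
}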
On Wed, Mar 06, 2013 at 02:42:57PM +0000, Dietmar Maurer wrote:
> > Maybe you'd better use a different output format that doesn't
> > restrict you to 64k writes.
>
> The output format is not really the restriction. The problem is that
> an additional IPC layer adds overhead, and ...
On 06.03.2013 at 18:39, Dietmar Maurer wrote:
> > How about variable block sizes? I mean this is a stream format that
> > has a header for each block anyway. Include a size there and be done.
>
> You can make that as complex as you want. I simply do not need that.

Then you also don't need the performance that you lose by using NBD.
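
For illustration only (this is not the actual VMA layout): the kind of
per-block header being described here, where an explicit size field
removes the fixed 64 KB restriction:

#include <stdint.h>

/* sketch of a stream-format block header with an explicit size field;
 * a writer could then emit small blocks for intercepted guest writes
 * and much larger ones for the sequential bulk copy */
typedef struct BlockHeader {
    uint64_t offset;   /* guest offset of this block, in bytes */
    uint32_t size;     /* payload size - no longer fixed at 64 KB */
    uint32_t flags;    /* e.g. zero block, compressed, ... */
} BlockHeader;         /* followed by 'size' bytes of payload */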
On 06.03.2013 at 16:33, Dietmar Maurer wrote:
> > When we run backup, we need to read such a block on every write from
> > the guest. So if we increase the block size we get additional delays.
>
> How about variable block sizes? I mean this is a stream format that
> has a header for each block anyway. Include a size there and be done.

You can make that as complex as you want. I simply do not need that.
On 06.03.2013 at 15:42, Dietmar Maurer wrote:
> > Maybe you'd better use a different output format that doesn't
> > restrict you to 64k writes.
>
> The output format is not really the restriction. The problem is that
> an additional IPC layer adds overhead, and I do not want that (because
> it is totally unnecessary).

I missed ...
On Mon, Mar 04, 2013 at 02:33:16PM +0000, Dietmar Maurer wrote:
> > > Is it using 64 KB writes and have you tried 256 KB writes?
> >
> > I use a modified 'qemu-img convert' at 64KB block size (I need 64KB
> > for backup).
>
> Maybe you'd better use a different output format that doesn't restrict
> you to 64k writes.

The output format is not really the restriction. ...
On 04.03.2013 at 14:16, Dietmar Maurer wrote:
> > What are the details of the test?
> >
> > Is it using 64 KB writes and have you tried 256 KB writes?
>
> I use a modified 'qemu-img convert' at 64KB block size (I need 64KB
> for backup).

Maybe you'd better use a different output format that doesn't restrict
you to 64k writes.
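
The effect of the write size is easy to measure outside QEMU. A
standalone sketch (not the modified 'qemu-img convert' mentioned above)
that times sequential writes at a given block size:

/* write-bench.c - e.g. compare
 *   ./write-bench out.img 65536     (64 KB writes)
 *   ./write-bench out.img 262144    (256 KB writes)
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s FILE BLOCKSIZE\n", argv[0]);
        return 1;
    }
    size_t bs = strtoul(argv[2], NULL, 0);
    size_t total = (size_t)1 << 30;          /* write 1 GiB in total */
    char *buf = malloc(bs);
    if (!buf) {
        return 1;
    }
    memset(buf, 0xab, bs);

    int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t done = 0; done < total; done += bs) {
        if (write(fd, buf, bs) != (ssize_t)bs) {
            perror("write");
            return 1;
        }
    }
    fsync(fd);                               /* include the flush */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%zu MB in %.2f s = %.1f MB/s\n",
           total >> 20, sec, total / sec / 1e6);
    close(fd);
    free(buf);
    return 0;
}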
On Thu, Feb 28, 2013 at 03:24:27PM +0000, Dietmar Maurer wrote:
> > Unfortunately, NBD adds considerable overhead. I guess the socket
> > communication copies data. This is really unnecessary if I can write
> > directly to the output stream.
>
> The disk is the bottleneck, not memory bandwidth. Hard disks only do
> 10-100 MB/sec and SSDs only do a couple 100 MB/sec. Memory copy is
> insignificant compared to the I/O activity required to copy out the
> entire disk image, not to mention delaying guest writes until we read
> the original ...
On Wed, Feb 27, 2013 at 03:50:53PM +, Dietmar Maurer wrote:
> > NBD enables interprocess communication - any form of IPC requires a protocol
> > and NBD is quite a trivial one. What is a simpler way of talking to a
> > backup
> > server?
>
> Unfortunately, NBD add considerable overheads. I g
> NBD enables interprocess communication - any form of IPC requires a protocol
> and NBD is quite a trivial one. What is a simpler way of talking to a backup
> server?
Unfortunately, NBD add considerable overheads. I guess the socket
communications copies data.
This is really unnecessary if I ca
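
To make "write directly to the output stream" concrete: a rough sketch
of the in-process approach being argued for here - the job writes a
small per-cluster header plus the data straight to the output file
descriptor (pipe or file), with no second process and no socket hop in
between. The header layout and names are invented for illustration;
this is not the VMA format:

#include <stdint.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#define CLUSTER_SIZE (64 * 1024)

/* emit one cluster directly to the output stream; writev() sends the
 * header and the payload in a single syscall without gluing them
 * together in an extra buffer (a real implementation would also loop
 * on short writes) */
static int emit_cluster(int out_fd, uint64_t cluster_num,
                        const uint8_t *data)
{
    uint8_t header[16];
    memcpy(header, "BLK0", 4);          /* made-up magic */
    memset(header + 4, 0, 4);           /* reserved */
    memcpy(header + 8, &cluster_num, 8);

    struct iovec iov[2] = {
        { .iov_base = header,       .iov_len = sizeof(header) },
        { .iov_base = (void *)data, .iov_len = CLUSTER_SIZE },
    };
    ssize_t n = writev(out_fd, iov, 2);
    return n == (ssize_t)(sizeof(header) + CLUSTER_SIZE) ? 0 : -1;
}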
On Fri, Feb 22, 2013 at 10:14:03AM +0000, Dietmar Maurer wrote:
> > > QEMU must provide the mechanism for point-in-time backups of block
> > > devices - your backup block job implements this.
> > >
> > > Where I disagree with this patch series is that you put the
> > > management tool-specific ...
> >
> > The management tool just needs to convert the config - looks quite
> > easy to me.
>
> It's not an easy problem. This is why there is no central vm-images.com
> where everyone can share/sell virtual appliances. You cannot trivially
> convert between VMware, oVirt, proxmox, Xen, EC2, etc.
On Thu, Feb 21, 2013 at 03:48:57PM +0000, Dietmar Maurer wrote:
> > In future, we can allow passing multiple config files - the vma
> > archive format can already handle that.
>
> My point is that QEMU has no business dealing with the management
> tool's VM configuration file. And I think the management tool-specific
> archive format also shouldn't be in QEMU.
On Thu, 21 Feb 2013 14:47:08 +0100
Stefan Hajnoczi wrote:
> On Thu, Feb 21, 2013 at 06:21:32AM +0000, Dietmar Maurer wrote:
> > > > +##
> > > > +# @backup:
> > > > +#
> > > > +# Starts a VM backup.
> > > > +#
> > > > +# @backup-file: the backup file name
> > > > +#
> > > > +# @format: format of the backup file ...

> > In future, we can allow passing multiple config files - the vma
> > archive format can already handle that.
>
> My point is that QEMU has no business dealing with the management
> tool's VM configuration file. And I think the management tool-specific
> archive format also shouldn't be in QEMU.
On Thu, Feb 21, 2013 at 06:21:32AM +0000, Dietmar Maurer wrote:
> > > +##
> > > +# @backup:
> > > +#
> > > +# Starts a VM backup.
> > > +#
> > > +# @backup-file: the backup file name
> > > +#
> > > +# @format: format of the backup file
> > > +#
> > > +# @config-filename: #optional name of a configuration file ...
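
Based on the schema fragment above, starting a backup over the QMP wire
would presumably look like this (file names invented):

{ "execute": "backup",
  "arguments": { "backup-file": "/backup/vm101.vma",
                 "format": "vma",
                 "config-filename": "qemu-server.conf" } }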
On Thu, Feb 21, 2013 at 06:21:32AM +0000, Dietmar Maurer wrote:
> > > +    /* add configuration file to archive */
> > > +    if (has_config_file) {
> > > +        char *cdata = NULL;
> > > +        gsize clen = 0;
> > > +        GError *err = NULL;
> > > +        if (!g_file_get_contents(config_file, &cdata, &clen, &err)) {
> > > +            /* ... error handling truncated in the archive snippet */
> > Another option would be to simply dump to the output fh (pipe), and
> > an external binary saves the data. That way we could move the whole
> > archive format related code out of qemu.
>
> That sounds like the NBD option - write the backup to an NBD disk
> image. The NBD server process can ...
On Wed, Feb 20, 2013 at 10:32:00AM +0100, Dietmar Maurer wrote:
> We use a generic BackupDriver struct to encapsulate all archive format
> related functions.
>
> Another option would be to simply dump to the output fh (pipe), and an
> external binary saves the data. That way we could move the whole
> archive format related code out of qemu.
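
The struct itself isn't shown in this excerpt, but the encapsulation
described amounts to an ordinary table of function pointers. A rough
sketch of what such a driver interface could look like (the field names
are guesses, not the actual patch):

#include <stddef.h>
#include <stdint.h>

typedef struct Error Error;

/* a driver vtable, so the backup job never needs to know which archive
 * format (vma, tar, raw dump to a pipe, ...) sits behind it */
typedef struct BackupDriver {
    const char *format;
    void *(*open)(const char *filename, Error **errp);
    int   (*add_config)(void *opaque, const char *name,
                        const void *data, size_t len);
    int   (*write_cluster)(void *opaque, uint64_t cluster_num,
                           const void *data);
    int   (*close)(void *opaque);
} BackupDriver;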