> Why not simply make a new sub where the socket is not closed instantly but
> instead is closed on demand or add an option to the existing sub which, if
> true,
> requires manual closing of the socket?
Because qmp only allows one connection, so that would block any other command.
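For reference, the stateless connect-run-close pattern being defended here looks roughly like this in Perl (a minimal sketch, not the actual qmpclient.pm code; the socket path and the one-JSON-object-per-line framing are assumptions):

  # Open the single-client qmp socket, run one command, close immediately
  # so that other callers are not blocked.
  use strict;
  use warnings;
  use IO::Socket::UNIX;
  use JSON;

  sub qmp_single_command {
      my ($socket_path, $cmd) = @_;
      my $sock = IO::Socket::UNIX->new(Peer => $socket_path)
          or die "qmp connect failed: $!";
      scalar <$sock>;                     # consume the {"QMP": ...} greeting
      print $sock encode_json({ execute => 'qmp_capabilities' }) . "\n";
      scalar <$sock>;                     # consume the handshake reply
      print $sock encode_json($cmd) . "\n";
      # NB: an async event line can arrive before the reply; a real client
      # must skip event objects here - which is exactly what this thread is about.
      my $reply = decode_json(scalar <$sock>);
      close $sock;                        # free the socket for the next caller
      return $reply;
  }

  my $jobs = qmp_single_command('/var/run/qemu-server/100.qmp',
                                { execute => 'query-block-jobs' });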
convert on LVM fails with:
# /usr/bin/qemu-img convert -t writeback -p -C -f host_device -O host_device
/dev/vmdisks/vm-100-disk-1 /dev/vmdisks/vm-101-disk-1
qemu-img: error while writing sector 0: Bad file descriptor
Seems -O host_device is the problem.
Why do we use host_device instead of 'raw'?
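For comparison, the same copy with the raw driver, which also accepts block device paths (untested sketch; whether this actually avoids the EBADF depends on where the host_device driver mishandles the writeback cache mode):

# /usr/bin/qemu-img convert -t writeback -p -C -f raw -O raw
/dev/vmdisks/vm-100-disk-1 /dev/vmdisks/vm-101-disk-1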
On Tue, 07 May 2013 08:07:04 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> OH, ok, so indeed if it's possible to retrieve them, it could be great. I'll
> try to read the qemu code.
>
Why not simply make a new sub where the socket is not closed instantly
but instead is closed on demand, or add an option to the existing sub which,
if true, requires manual closing of the socket?
>>IMHO, A block job should store the error somewhere, so that we can query it
>>later.
>>(like I do it for the backup task).
OH, ok, so indeed if it's possible to retrieve them, it could be great. I'll
try to read the qemu code.
>>Everything is possible ;-) But we now have a simple, stateless
> >>What for? Just to log event messages?
> no ;), of course, that was just an example. Maybe something like a shared
> memory with the last qmp events, to be able to use them from proxmox code.
>
> it would be great to be able to know if a block job has died, and the block
> job error detail, if a bad sector exists.
>>Yes, but we measure incredibly bad performance with cfq. We get fsync
>>rates of 250 instead of 3000 on ext4 (factor 12!). So io priorities make no
>>sense if you are already 10 times slower?
Sure!
3000 fsyncs with which hardware? Hardware RAID with write cache?
For me, I always use deadline.
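Numbers like these usually come from a loop that appends a block and fsyncs it, e.g. the FSYNCS/SECOND figure that pveperf prints. A back-of-the-envelope version in Perl (sketch only; file path and the 5-second window are arbitrary):

  use strict;
  use warnings;
  use IO::Handle;
  use Time::HiRes qw(time);

  open(my $fh, '>', '/tmp/fsync-bench') or die "open: $!";
  my ($t0, $n) = (time(), 0);
  while (time() - $t0 < 5) {                 # hammer fsync for ~5 seconds
      syswrite($fh, 'x' x 4096) or die "write: $!";
      $fh->sync or die "fsync: $!";          # IO::Handle::sync == fsync(2)
      $n++;
  }
  printf "%.0f fsyncs/second\n", $n / (time() - $t0);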
>>What for? Just to log event messages?
no ;), of course, that was just an example. Maybe something like a shared memory
with the last qmp events, to be able to use them from proxmox code.
it would be great to be able to know if a block job has died, and the block job
error detail, if a bad sector exists.
> What do you think about this?
What for? Just to log event messages?
> If I remember correctly, some openvz features work only with cfq (ionice,
> openvz IO priorities)?
> see:
> http://pve.proxmox.com/pipermail/pve-devel/2012-March/002488.html
Yes, but we measure incredibly bad performance with cfq. We get fsync
rates of 250 instead of 3000 on ext4 (factor 12!). So io priorities make no
sense if you are already 10 times slower?
On Tue, 07 May 2013 02:21:26 +0200 (CEST)
Alexandre DERUMIER wrote:
> Something like
>
> ---
> process --(http/json?)--> +------------------+
> workers -----------------> | QMP PROXY        | --always open--> /
>                            | queuing requests |
>                            +------------------+
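A minimal sketch of that proxy idea in Perl (hypothetical listener path; single-threaded, one worker request at a time, which is enough since qmp serializes commands anyway):

  use strict;
  use warnings;
  use IO::Socket::UNIX;
  use JSON;

  # The proxy owns the qmp connection permanently.
  my $qmp = IO::Socket::UNIX->new(Peer => '/var/run/qemu-server/100.qmp')
      or die "qmp connect failed: $!";
  scalar <$qmp>;                                  # greeting
  print $qmp encode_json({ execute => 'qmp_capabilities' }) . "\n";
  scalar <$qmp>;                                  # handshake reply

  my $proxy_path = '/var/run/qemu-server/100.proxy';  # hypothetical path
  unlink $proxy_path;                                 # stale socket from last run
  my $listener = IO::Socket::UNIX->new(Local => $proxy_path, Listen => 10)
      or die "listen failed: $!";

  my @events;   # the "last qmp events" idea would hook in here

  while (my $worker = $listener->accept) {
      my $request = <$worker>;                    # one JSON command per line
      next if !defined $request;
      print $qmp $request;
      while (my $line = <$qmp>) {
          my $msg = decode_json($line);
          if (exists $msg->{event}) {             # async event: stash, don't lose
              push @events, $msg;
              next;
          }
          print $worker $line;                    # the actual reply
          last;
      }
      close $worker;
  }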
>>we close connection immediately, so I guess events arrive later (so we miss
>>them).
So currently, each process/worker/daemon tries to access the same qmp socket
concurrently, and needs to close it fast so as not to block the other
processes/workers/daemons, right?
Maybe we can improve that (not for pro
About qmp events: I have tested with the qmp-shell python script (available in
QMP/ in qemu.git), and I can receive the BLOCK_JOB_ERROR && BLOCK_JOB_COMPLETED
events.
I had tried to syslog the current qmp responses in qmpclient.pm, but I got
nothing, except events for vm stop, vm cont and cd eject.
An
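The qmp-shell behaviour is easy to reproduce: once the capabilities handshake is done, events are just extra JSON objects pushed on the same connection, so any client that keeps the socket open sees them. A sketch in Perl (socket path assumed, as above):

  use strict;
  use warnings;
  use IO::Socket::UNIX;
  use JSON;

  my $sock = IO::Socket::UNIX->new(Peer => '/var/run/qemu-server/100.qmp')
      or die "connect failed: $!";
  scalar <$sock>;                                 # greeting {"QMP": {...}}
  print $sock encode_json({ execute => 'qmp_capabilities' }) . "\n";
  scalar <$sock>;                                 # {"return": {}}

  while (my $line = <$sock>) {                    # keep the socket open forever
      my $msg = decode_json($line);
      next if !exists $msg->{event};
      print "$msg->{event}\n";                    # e.g. BLOCK_JOB_ERROR,
  }                                               # BLOCK_JOB_COMPLETED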
It's working now with the latest git pull.
I'll do tests this afternoon.
- Original Message -
From: "Dietmar Maurer"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 6 May 2013 11:23:36
Subject: RE: clone GUI cleanup
Maybe you need to update the qemu-server package?
If I remember correctly, some openvz features work only with cfq (ionice,
openvz IO priorities)?
see:
http://pve.proxmox.com/pipermail/pve-devel/2012-March/002488.html
Myself, I'm using deadline (because it works better for my workload with my
SAN), but I use kvm guests.
- Original Message -
> I had tried to syslog the current qmp responses in qmpclient.pm, but I got
> nothing, except events for vm stop, vm cont and cd eject.
We close the connection immediately, so I guess events arrive later (so we miss
them).
On Mon, 6 May 2013 12:21:54 +
Martin Maurer wrote:
>
> What do you think about this? Please report your thoughts!
>
I have hosts with SSD using deadline and hosts without SSD using CFQ.
AFAIK there seems to be no difference running backup jobs with working
ionice or not. For KVM's I have n
I have been using deadline for years, KVM machines seem to perform best
with it.
Eric
On 05/06/2013 08:21 AM, Martin Maurer wrote:
> Hi all,
>
> We want to discuss changing the pve default I/O scheduler from CFQ to
> Deadline. I want to collect feedback, pros and cons here.
>
> CFQ:
> - only CFQ supports priorities (ionice)
Hi all,
We want to discuss changing the pve default I/O scheduler from CFQ to
Deadline. I want to collect feedback, pros and cons here.
CFQ:
- only CFQ supports priorities (ionice)
Deadline:
- No support for priorities but generally faster for KVM in some (or a lot of?)
systems.
The sch
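For reference, the scheduler can be inspected and switched per block device at runtime, without a reboot (sda is just an example device; output shown is typical for kernels of this era):

# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
# echo deadline > /sys/block/sda/queue/scheduler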
On 06.05.2013 13:30, Dietmar Maurer wrote:
>> Hi, yes in this case. But imagine someone is using qemu-img in their own
>> scripts on the shell.
>
> qemu-img convert always creates the disk.
>
> The -C option is not documented and only used by pve commands. So what do
> you think is the problem?
> Hi, yes in this case. But imagine someone is using qemu-img in their own
> scripts on the shell.
qemu-img convert always creates the disk.
The -C option is not documented and only used by pve commands. So what do you
think is the problem?
On 03.05.2013 08:39, Dietmar Maurer wrote:
>> great idea - but who knows if the target is really zero-initialized or not?
>> So if somebody generally uses qemu-img and copies on top of an existing
>> disk, this is not correct...
>
> Either we or qemu-img creates the file, so we know it's zero.
>
Hi, yes in this case. But imagine someone is using qemu-img in their own
scripts on the shell.
applied, thanks!
> -Original Message-
> From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
> boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier
> Sent: Monday, 06 May 2013 11:21
> To: pve-devel@pve.proxmox.com
> Subject: [pve-devel] qemu-server : drive-mirror : die if stats are empty
Maybe you need to update the qemu-server package?
> -Original Message-
> From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
> Sent: Monday, 06 May 2013 11:10
> To: Dietmar Maurer
> Cc: pve-devel@pve.proxmox.com
> Subject: Re: clone GUI cleanup
>
> >>I just committed some patches for the clone Dialog. Does that still work
> >>for you?
If the drive has bad sectors, the block job dies.
We need to die if the stats are empty to avoid this:
transferred: 21440086016 bytes remaining: 34668544 bytes total: 21474754560
bytes progression: 99.84 %
Use of uninitialized value $transferred in subtraction (-) at
/usr/share/perl5/PVE/QemuServer.pm l
see commit
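The guard reads roughly like this (a sketch in the style of QemuServer.pm, not the actual commit; vm_mon_cmd, $vmid and $device are assumed from the surrounding drive-mirror loop, and offset/len are the field names qemu returns for query-block-jobs):

  my $stats = vm_mon_cmd($vmid, 'query-block-jobs');
  my ($job) = grep { $_->{device} eq $device } @$stats;
  die "drive mirror failed: block job for '$device' is gone\n"
      if !$job || !defined($job->{offset}) || !defined($job->{len});
  my $transferred = $job->{offset};              # no more undef arithmetic
  my $remaining   = $job->{len} - $job->{offset};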
>>I just committed some patches for the clone Dialog. Does that still work for
>>you?
I don't know why, but the submit button is always disabled for me?
- Original Message -
From: "Dietmar Maurer"
To: "Alexandre DERUMIER (aderum...@odiso.com)"
Cc: pve-devel@pve.proxmox.com
Sent:
Ok,
I'll redo tests with my faulty volume to see if it helps.
- Original Message -
From: "Dietmar Maurer"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 6 May 2013 09:27:24
Subject: RE: qemu_drive_mirror
> Seems that the block-job dies.
No, I simply get a timeout for query-block-jobs
Just uploaded a fix for that.
>>That works out of the box.
Ok, perfect!
- Original Message -
From: "Dietmar Maurer"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 6 May 2013 09:05:05
Subject: RE: [pve-devel] PVE::Storage::volume_has_feature
Note: A reference to the base image is stored inside the qcow2 file
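That reference is visible with qemu-img info; for the naming used in this thread the relevant line would look like this (abbreviated example output):

# qemu-img info vm-200-disk-1.qcow2
image: vm-200-disk-1.qcow2
file format: qcow2
backing file: base-100-disk-1.raw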
Seems that the block-job dies.
Last time I saw that, it was because of a sector error on the volume.
Have you tried with another volume?
(does the backup work?)
We should be able to see what exactly happens with qmp events, but last time
I tried, I couldn't get them.
Maybe does it
I guess we need to increase the timeout for query-block-jobs - will apply a fix
for that.
> -Original Message-
> From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
> boun...@pve.proxmox.com] On Behalf Of Dietmar Maurer
> Sent: Monday, 06 May 2013 09:11
> To: Alexandre DERUMIER (aderu
For me drive-mirror is totally unstable, any idea what's wrong?
create full clone of drive virtio0 (local:200/vm-200-disk-1.qcow2)
Formatting
'/var/lib/vz/images/102/vm-102-disk-1.qcow2', fmt=qcow2
size=34359738368 encryption=off cluster_size=65536
preallocation='metadata' lazy_refcounts=off
transf
Note: A reference to the base image is stored inside the qcow2 file
> -Original Message-
> From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
> Sent: Monday, 06 May 2013 09:03
> To: Dietmar Maurer
> Cc: pve-devel@pve.proxmox.com
> Subject: Re: [pve-devel] PVE::Storage::volume_has_feature
> I don't know if
>
> "qemu-img convert vm-200-disk-1.qcow2" will work out of the box or if we
> need to specify the base-xxx somewhere.
That works out of the box.
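That is because qemu-img convert reads the logical content of the image, backing chain included, so a plain convert flattens the linked clone (target name below is just an example):

# qemu-img convert -O qcow2 vm-200-disk-1.qcow2 vm-201-disk-1.qcow2

The result is a standalone image with no reference to base-100-disk-1.raw.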
Also some info here:
http://www.linux-kvm.com/content/be-more-productive-base-images-part-3
"qemu-img convert -O qcow2 -B master-windows2003-base.qcow2
master-windows2003.qcow2 final.qcow2"
- Original Message -
From: "Alexandre DERUMIER"
To: "Dietmar Maurer"
Cc: pve-devel@pve.proxmox.com
>>Sorry, I did not get that. Please can you explain more elaborate?
We have a vmid 200, a linked clone of vmid 100, with:
base-100-disk-1.raw ---> vm-200-disk-1.qcow2
then we want to full-clone the vm 200.
I don't know if
"qemu-img convert vm-200-disk-1.qcow2" will work out of the box or if we
need to specify the base-xxx somewhere.