Previously 'free_image' would be executed right away, which is not
the intended behaviour.
Signed-off-by: Fabian Ebner
---
This is a followup to [0] but it has nothing to do with the original patch
so I didn't put a v2.
[0]: https://pve.proxmox.com/pipermail/pve-devel/2019-October/039281.html
Signed-off-by: Fabian Ebner
---
PVE/Storage/ZFSPoolPlugin.pm | 26 +-
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index f66b277..16fb0d6 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Stor
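The fix revolves around returning the cleanup as a closure instead of running it immediately. A generic sketch of that deferral pattern (names are illustrative, this is not the actual ZFSPoolPlugin code):

```perl
use strict;
use warnings;

# Illustrative sketch (not the actual ZFSPoolPlugin code): instead of
# freeing the image right away, return a closure so that the caller
# decides when the cleanup actually runs.
sub free_image_deferred {
    my ($volname) = @_;
    return sub {
        # ... the real code would destroy the dataset here ...
        return "freed $volname";
    };
}

my $cleanup = free_image_deferred('vm-100-disk-0');
# nothing has been freed yet; the caller invokes the closure later
print $cleanup->(), "\n";
```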
On 10/7/19 11:16 AM, Fabian Ebner wrote:
> This introduces a new locked() mechanism allowing to enclose locked
> sections in a cleaner way. There's only two types of locks namely one
> for state and cron (they are always read together and almost always
> written together) and one for sync.
>
> Sig
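A minimal sketch of what such a locked() helper could look like, assuming a flock-based lock file (an illustration of the mechanism, not the actual patched code):

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Minimal illustration of a locked() helper: run a code reference while
# holding an exclusive lock on a file, releasing the lock even if the
# enclosed code dies.
sub locked {
    my ($lockfile, $code) = @_;
    open(my $fh, '>>', $lockfile) or die "cannot open $lockfile: $!\n";
    flock($fh, LOCK_EX) or die "cannot lock $lockfile: $!\n";
    my $res = eval { $code->() };
    my $err = $@;
    close($fh);    # closing the handle releases the lock
    die $err if $err;
    return $res;
}

# enclose the critical section in a closure
my $val = locked('/tmp/demo.lck', sub { return 42 });
print "$val\n";
```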
On 10/8/19 11:00 AM, Thomas Lamprecht wrote:
> On 10/7/19 11:16 AM, Fabian Ebner wrote:
>> This introduces a new locked() mechanism allowing to enclose locked
>> sections in a cleaner way. There's only two types of locks namely one
>> for state and cron (they are always read together and almost alw
On 10/8/19 10:48 AM, Fabian Ebner wrote:
> Previously 'free_image' would be executed right away, which is not
> the intended behaviour.
>
> Signed-off-by: Fabian Ebner
> ---
>
> This is a followup to [0] but it has nothing to do with the original patch
> so I didn't put a v2.
>
> [0]: https://p
On 10/8/19 10:48 AM, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
> PVE/Storage/ZFSPoolPlugin.pm | 26 +-
> 1 file changed, 13 insertions(+), 13 deletions(-)
>
applied, thanks!
On Tue, Oct 08, 2019 at 08:36:57AM +0200, Fabian Grünbichler wrote:
> On October 7, 2019 2:41 pm, Alwin Antreich wrote:
> > Machine states that were created on snapshots with memory could not be
> > restored on rollback. The state volume was not activated so KVM couldn't
> > load the state.
> >
>
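The ordering issue described above can be sketched as follows (a simplified illustration with hypothetical closures, not the actual qemu-server code):

```perl
use strict;
use warnings;

# Simplified illustration of the ordering problem: the state volume must
# be activated (its device node made available) before KVM is asked to
# load the state. The closures below stand in for the real storage and
# KVM calls and are assumptions for illustration only.
my %active;
my $activate   = sub { $active{$_[0]} = 1 };
my $load_state = sub { $active{$_[0]} ? 'loaded' : 'failed' };

sub rollback_with_state {
    my ($activate, $load_state, $statevol) = @_;
    $activate->($statevol);    # this step was missing before the fix
    return $load_state->($statevol);
}

print rollback_with_state($activate, $load_state,
    'local-zfs:vm-100-state-snap1'), "\n";
```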
On 10/7/19 5:45 PM, Daniel Berteaud wrote:
>
>
> - On Oct 2, 2019, at 18:41, Daniel Berteaud dan...@firewall-services.com
> wrote:
>
>> - On Sep 30, 2019, at 11:52, Thomas Lamprecht t.lampre...@proxmox.com
>> wrote:
>>
>>>
>>> Depends on the outcome of above, but effectively we would lik
On October 8, 2019 11:25 am, Alwin Antreich wrote:
> On Tue, Oct 08, 2019 at 08:36:57AM +0200, Fabian Grünbichler wrote:
>> On October 7, 2019 2:41 pm, Alwin Antreich wrote:
>> > Machine states that were created on snapshots with memory could not be
>> > restored on rollback. The state volume was n
- On Oct 8, 2019, at 12:28, Thomas Lamprecht t.lampre...@proxmox.com wrote:
>
> Thanks for the nice write up and clear reproducer!
>
> It seems that if we cannot use the same backend for all disks we need to
> die when a disk move to a storage backend is requested, and that move would
> need to
On 10/8/19 11:09 AM, Thomas Lamprecht wrote:
> On 10/8/19 11:00 AM, Thomas Lamprecht wrote:
>> On 10/7/19 11:16 AM, Fabian Ebner wrote:
>>> This introduces a new locked() mechanism allowing to enclose locked
>>> sections in a cleaner way. There's only two types of locks namely one
>>> for state and cron (they a
Hi All
I want to develop a custom backup option for internal use here; it's
probably not very useful to others (yet..) and, being a Proxmox newb, I just
wanted to ask:
- is it possible to have custom "plugins" that extend the API?
- if custom plugins are not an option, what could I look at?
Thanks in a
Hi
Thanks for your reply, Lamprecht.
I have downloaded the spice webdav from here:
https://www.spice-space.org/download/windows/spice-webdavd/
Where can I get a newer version of it?
Thanks again
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
I have tried this too:
https://elmarco.fedorapeople.org/spice-webdavd-x86-0.4.16-9457.msi
But it seems it is no longer available!
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Tue, Oct 8, 2019 at 09:16, Gilberto Nunes
On Tue, Oct 08, 2019 at 12:31:06PM +0200, Fabian Grünbichler wrote:
> On October 8, 2019 11:25 am, Alwin Antreich wrote:
> > On Tue, Oct 08, 2019 at 08:36:57AM +0200, Fabian Grünbichler wrote:
> >> On October 7, 2019 2:41 pm, Alwin Antreich wrote:
> >> > Machine states that were created on snapshot
Hi
Is there any way to add, in the next releases, the option to add more than one
SCSI controller?
I don't know if this is possible, but I'd be happy if it were!
Thanks
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
Thanks to Gilberto Nunes for finding a bug where the VM would not start
with foldersharing enabled and the qemu agent option disabled [0].
The cause was that the device org.spice-space.webdav.0 would not find a
virtio-serial-bus in this situation.
Since we always create a virtio-serial-bus for th
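The dependency can be illustrated with the qemu options involved (a sketch of the relevant arguments, not the exact command line qemu-server generates):

```perl
use strict;
use warnings;

# Sketch of the relevant qemu options (not the exact command line that
# qemu-server generates): the virtserialport for the webdav channel only
# works if a virtio-serial bus was added first.
my @args = (
    '-device', 'virtio-serial-pci',    # the bus that was missing without the agent option
    '-chardev', 'spiceport,id=webdav,name=org.spice-space.webdav.0',
    '-device', 'virtserialport,chardev=webdav,name=org.spice-space.webdav.0',
);
print join(' ', @args), "\n";
```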
Hi,
on the qemu side, there are already 2 controllers (1 for scsi disks 1-7, and 1
for scsi disks 8-14)
The exception is virtio-scsi-single, where you have 1 controller per disk (for
iothread)
What do you want to do exactly?
----- Original Mail -----
From: "Gilberto Nunes"
To: "pve-devel"
Sent: Tuesday
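The grouping described above can be illustrated like this (a sketch of the mapping; the exact limits live in qemu-server's drive handling):

```perl
use strict;
use warnings;

# Illustration of the controller grouping described above (a sketch, not
# the actual qemu-server code): regular SCSI controller types group up to
# 7 disks per controller, while virtio-scsi-single uses one controller
# per disk so that each disk can get its own iothread.
sub scsi_controller_id {
    my ($drive_index, $scsihw) = @_;
    return $drive_index if $scsihw eq 'virtio-scsi-single';
    return int($drive_index / 7);
}

print scsi_controller_id(0, 'virtio-scsi-pci'), "\n";    # first controller
print scsi_controller_id(7, 'virtio-scsi-pci'), "\n";    # second controller
print scsi_controller_id(3, 'virtio-scsi-single'), "\n"; # its own controller
```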
I meant the ability to click the Add button in the web GUI, in order to add
more than one SCSI controller, beyond the one already shown in the hardware
configuration.
https://pasteboard.co/IB3piA6.png
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On 10/8/19 8:02 PM, Gilberto Nunes wrote:
> I meant the ability to click the Add button in the web GUI, in order to add
> more than one SCSI controller, beyond the one already shown in the hardware
> configuration.
>
what's the use case or gain?
> https://pasteboard.co/IB3piA6.png
returns 404
On 10/8/19 5:56 PM, Aaron Lauterer wrote:
> Thanks to Gilberto Nunes for finding a bug where the VM would not start
> with foldersharing enabled and the qemu agent option disabled [0].
>
> The cause was that the device org.spice-space.webdav.0 would not find a
> virtio-serial-bus in this situation
Hi,
I'm still trying to improve load balancing.
Currently we don't stream the KSM sharing counter.
I think it could be great to stream it or push it to rrd (with an extra rrd?
change the current memory format?)
What is the best way to do it?
As we could have 2 servers with 80% memory usage, but re
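For what it's worth, the kernel already exposes the raw counter in sysfs; a sketch of reading it and converting it to bytes (the 4 KiB page size is an assumption about the target systems):

```perl
use strict;
use warnings;

# Sketch: the kernel exposes the number of pages currently shared via KSM
# in /sys/kernel/mm/ksm/pages_sharing; multiplying by the page size gives
# the saved memory in bytes. The 4 KiB default page size is an assumption.
sub ksm_shared_bytes {
    my ($pages_sharing, $page_size) = @_;
    $page_size //= 4096;
    return $pages_sharing * $page_size;
}

sub read_ksm_pages_sharing {
    open(my $fh, '<', '/sys/kernel/mm/ksm/pages_sharing') or return 0;
    my $pages = <$fh>;
    close($fh);
    chomp $pages;
    return $pages;
}

print ksm_shared_bytes(1024), " bytes\n";    # 1024 pages at 4 KiB each
```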
On 10/8/19 11:21 AM, Thomas Lamprecht wrote:
> On 10/8/19 10:48 AM, Fabian Ebner wrote:
>> Previously 'free_image' would be executed right away, which is not
>> the intended behaviour.
>>
>> Signed-off-by: Fabian Ebner
>> ---
>> This is a followup to [0] but it has nothing to do with the original patch
>> so I didn