On October 10, 2019 8:55 am, Fabian Ebner wrote:
> On 10/1/19 12:28 PM, Fabian Grünbichler wrote:
>> On October 1, 2019 12:17 pm, Fabian Ebner wrote:
>>> Seems like 'zfs destroy' can take longer than 5 seconds, see [0].
>>> I changed the timeout to 15 seconds and also changed the default
>>> timeou
A 'waiting' state is introduced, and other 'waiting' or 'syncing'
instances of the same job are now detected by moving the check out
of the sync lock.
Signed-off-by: Fabian Ebner
---
pve-zsync | 21 -
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/pve-zsync
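The detection described above could be sketched roughly as follows. This is a minimal Python sketch (pve-zsync itself is Perl); the state names come from the patch description, but the storage and function names are illustrative assumptions:

```python
# Sketch of the 'waiting' state logic described above.
# pve-zsync is Perl; this dict stands in for its on-disk state file.
STATES = {}  # job name -> 'waiting' | 'syncing'

def try_start(job):
    """Return False if another instance of this job is already queued."""
    # The check happens *before* taking the sync lock, so a second
    # invocation of the same job can bail out instead of piling up
    # behind the lock.
    if STATES.get(job) in ('waiting', 'syncing'):
        return False
    STATES[job] = 'waiting'
    return True

def run_sync(job):
    if not try_start(job):
        return 'skipped'
    STATES[job] = 'syncing'
    # ... acquire the sync lock and replicate here ...
    del STATES[job]
    return 'synced'
```

A second invocation while the first is still 'waiting' or 'syncing' simply returns 'skipped' instead of blocking.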
There are two new checks that allow disabling a job while
it is 'syncing' or 'waiting'. Previously, when a sync finished,
it would involuntarily re-enable such a job.
Disabling a 'waiting' job causes it to not sync anymore.
Signed-off-by: Fabian Ebner
---
pve-zsync | 10 +-
1 file changed, 9
Previously, inside sync we just called update_job directly; now
we make sure to read the latest version of the job first.
Signed-off-by: Fabian Ebner
---
$job is still used outside of such enclosures in sync_path
but it is only passed along as a variable and we don't
want to hold the cron and sta
This introduces a new locked() mechanism that allows enclosing locked
sections in a cleaner way. There are only two types of locks: one
for state and cron (they are always read together and almost always
written together) and one for sync.
Signed-off-by: Fabian Ebner
---
Changes from v3:
*
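A locked() helper of the kind described above could look roughly like this. This is a hedged Python sketch (pve-zsync implements it in Perl); the lock-file naming and use of flock are assumptions:

```python
# Sketch of a locked() helper: run a code block while holding a named
# exclusive lock. Lock-file naming is an assumption; pve-zsync's real
# implementation is Perl.
import fcntl
import os
import tempfile

LOCKDIR = tempfile.gettempdir()

def locked(name, func):
    """Run func() while holding the exclusive lock named `name`."""
    path = os.path.join(LOCKDIR, f'pve-zsync-{name}.lock')
    with open(path, 'w') as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            return func()
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)

# Two lock types as in the description: one for state+cron, one for sync.
result = locked('state-cron', lambda: 'read state and cron together')
```

The point of the wrapper is that the locked section is a single callable, so acquire and release can never get out of balance.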
To make it clearer that PVE does not somehow magically inject a
QGA into the VM, but that this can be set if one has installed the
QGA in the VM themselves.
Signed-off-by: Thomas Lamprecht
---
www/manager6/form/AgentFeatureSelector.js | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
As reported in bug #2402, a system started with "default_hugepagesz=1G
hugepagesz=1G" does not have a /sys/kernel/mm/hugepages/hugepages-2048kB
directory.
To still allow 1GB hugepages, ignore the missing directory in hugepages_mount
(since it's not needed anyway), and correctly check if the reques
As reported in bug #2402, a system started with "default_hugepagesz=1G
hugepagesz=1G" does not have a /sys/kernel/mm/hugepages/hugepages-2048kB
directory.
To fix, ignore the missing directory in hugepages_mount (since it might
not be needed anyway), and correctly check if the requested hugepage
si
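The corrected check described in the two messages above can be sketched as follows. The sysfs paths follow the kernel's documented layout; the helper names are made up for illustration:

```python
# Sketch of the fixed check: verify that the directory for the
# *requested* hugepage size exists instead of assuming the 2048 kB
# one is always present. Helper names are illustrative.
import os

def hugepage_dir(size_kb, base='/sys/kernel/mm/hugepages'):
    # Kernel sysfs layout: one hugepages-<size>kB dir per supported size.
    return os.path.join(base, f'hugepages-{size_kb}kB')

def hugepage_size_supported(size_kb, available_kb):
    """available_kb stands in for listing the sysfs base directory."""
    # With "default_hugepagesz=1G hugepagesz=1G" only 1048576 kB shows
    # up, so the 2048 kB directory must not be assumed to exist.
    return size_kb in available_kb
```

On a system booted with only 1G pages, a request for 2 MB pages would then fail cleanly instead of tripping over the missing directory.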
On 10/10/19 11:54 AM, Thomas Lamprecht wrote:
To make it clearer that PVE does not somehow magically inject a
QGA into the VM, but that this can be set if one has installed the
QGA in the VM themselves.
Good idea to make it clearer, but I think the new text is also not
ideal, since it does n
This should reduce confusion between the old 'set --state stopped' and
the new 'stop' command by making explicit that it is sent as a crm command.
Signed-off-by: Fabian Ebner
---
src/PVE/CLI/ha_manager.pm | 46 +--
1 file changed, 44 insertions(+), 2 deletion
Signed-off-by: Fabian Ebner
---
src/test/test-stop-command1/README | 2 +
src/test/test-stop-command1/cmdlist | 8 +++
src/test/test-stop-command1/hardware_status | 5 ++
src/test/test-stop-command1/log.expect | 69 +
src/test/test-stop-command1/manage
This patch series introduces a new 'stop' command for ha-manager.
The command takes a timeout parameter; if it is 0, it performs a hard
stop.
The series also includes a test for the new command.
A few changes to how parameters were handled in the CRM/LRM were necessary,
as well as allowing the
Signed-off-by: Fabian Ebner
---
src/PVE/HA/Env.pm | 6 ++
src/PVE/HA/Env/PVE2.pm | 6 ++
src/PVE/HA/Sim/Env.pm | 6 ++
src/PVE/HA/Sim/Hardware.pm | 14 ++
4 files changed, 32 insertions(+)
diff --git a/src/PVE/HA/Env.pm b/src/PVE/HA/Env.pm
index bb374
Introduces a timeout parameter for shutting a resource down.
If the parameter is 0, we perform a hard stop instead of a shutdown.
Signed-off-by: Fabian Ebner
---
I did not find a way to pass along the parameters from change_service_state
without special handling for either target+timeo
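The stop semantics described above (timeout 0 means hard stop, anything else a bounded graceful shutdown) can be summarized in a tiny sketch. The function and return values are illustrative, not the actual HA API:

```python
# Sketch of the described stop semantics; names are illustrative,
# the real code lives in the Perl HA manager.
def stop_resource(timeout):
    if timeout == 0:
        # Hard stop: no guest shutdown attempt, stop immediately.
        return 'hard-stop'
    # Graceful shutdown, bounded by the given timeout in seconds.
    return f'shutdown(timeout={timeout})'
```

Treating 0 as "hard stop" keeps a single parameter for both behaviors instead of a separate flag.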
Not every command parameter is 'target' anymore, so
it was necessary to modify the parsing of $sd->{cmd}.
Just changing the state to request_stop is not enough;
we need to actually update the service configuration as well.
Signed-off-by: Fabian Ebner
---
src/PVE/HA/Manager.pm | 27
On 10/10/19 12:21 PM, Dominik Csapak wrote:
On 10/10/19 11:54 AM, Thomas Lamprecht wrote:
To make it clearer that PVE does not somehow magically inject a
QGA into the VM, but that this can be set if one has installed the
QGA in the VM themselves.
Good idea to make it clearer, but I think t
On 10/10/19 12:31 PM, Aaron Lauterer wrote:
>
>
> On 10/10/19 12:21 PM, Dominik Csapak wrote:
>> On 10/10/19 11:54 AM, Thomas Lamprecht wrote:
>>> To make it clearer that PVE does not somehow magically inject a
>>> QGA into the VM, but that this can be set if one has installed the
>>> QGA in
On 10/10/19 10:29 AM, Fabian Grünbichler wrote:
On October 10, 2019 8:55 am, Fabian Ebner wrote:
On 10/1/19 12:28 PM, Fabian Grünbichler wrote:
On October 1, 2019 12:17 pm, Fabian Ebner wrote:
Seems like 'zfs destroy' can take longer than 5 seconds, see [0].
I changed the timeout to 15 seconds
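The timeout issue under discussion (a slow 'zfs destroy' outliving a 5-second command timeout, raised to 15 seconds) could be exercised with a stand-in like this. pve-zsync's own run_cmd is Perl; the 15-second default mirrors the value from the patch description:

```python
# Stand-in for running a command such as 'zfs destroy' with a timeout.
# pve-zsync's real run_cmd is Perl; 15 s is the value from the patch.
import subprocess

def run_cmd(cmd, timeout=15):
    try:
        res = subprocess.run(cmd, capture_output=True, text=True,
                             timeout=timeout)
        return res.returncode
    except subprocess.TimeoutExpired:
        # Caller decides how to handle a command that outlived its budget.
        return None

# e.g. run_cmd(['zfs', 'destroy', 'rpool/data/vm-100-disk-0@snap'])
```

Returning None on expiry lets the caller distinguish "command failed" from "command was still running".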
On 10/10/19 1:05 PM, Thomas Lamprecht wrote:
On 10/10/19 12:31 PM, Aaron Lauterer wrote:
On 10/10/19 12:21 PM, Dominik Csapak wrote:
On 10/10/19 11:54 AM, Thomas Lamprecht wrote:
To make it clearer that PVE does not somehow magically inject a
QGA into the VM, but that this can be set if
Consider this... (And this happened to me recently.)
I have to use a VM with Windows XP in order to have some sort of system
running properly...
First, I install the VM using LSI or whatever...
But now I need to add a second HDD, but using VirtIO and using VirtIO
Block...
There's no way to do that
- On 10 Oct 19, at 14:09, Gilberto Nunes gilberto.nune...@gmail.com wrote:
> Consider this... (And this happened to me recently.)
> I have to use a VM with Windows XP in order to have some sort of system
> running properly...
> First, I install the VM using LSI or whatever...
> But now I
Perhaps an image can say more than I can
https://pasteboard.co/IBkiuYL.png
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu, Oct 10, 2019 at 09:30, Daniel Berteaud <
dan...@firewall-services.com> wrote:
>
>
> -
- On 10 Oct 19, at 15:02, Gilberto Nunes gilberto.nune...@gmail.com wrote:
> Perhaps an image can say more than I can
>
>
> https://pasteboard.co/IBkiuYL.png
Understood, but in the rare cases like this where you would need one VirtIO
SCSI and one LSI SCSI controller, can't you just use Vi
On 10/10/19 12:18 PM, Stefan Reiter wrote:
> As reported in bug #2402, a system started with "default_hugepagesz=1G
> hugepagesz=1G" does not have a /sys/kernel/mm/hugepages/hugepages-2048kB
> directory.
>
> To fix, ignore the missing directory in hugepages_mount (since it might
> not be needed an
Machine states that were created on snapshots with memory could not be
restored on rollback. The state volume was not activated so KVM couldn't
load the state.
This patch removes the path generation on rollback. It uses the vmstate
and de-/activates the state volume in vm_start. This in turn disal
On 10/10/19 1:22 PM, Dominik Csapak wrote:
> On 10/10/19 1:05 PM, Thomas Lamprecht wrote:
>> On 10/10/19 12:31 PM, Aaron Lauterer wrote:
>>> Additionally we could add a hint if enabled saying something like this:
>>> 'Make sure the Qemu Agent is installed in the VM'
>>>
>>> This would make it quite
On 10/10/19 3:58 PM, Alwin Antreich wrote:
> Machine states that were created on snapshots with memory could not be
> restored on rollback. The state volume was not activated so KVM couldn't
> load the state.
>
> This patch removes the path generation on rollback. It uses the vmstate
> and de-/act