Hi Thomas,
I have begun working on the new subnet panel (like the ipset one);
it's working fine.
I have also changed the API endpoint to
/cluster/sdn/vnets//subnets/
I'll try to send a patch next week.
About your problem with IPAM and the gateway:
I have found a bug, if you first create the subnet + gatew
On 10.09.20 13:34, Alexandre DERUMIER wrote:
>>> as said, if the other nodes were not using HA, the watchdog-mux had no
>>> client which could expire.
>
> sorry, maybe I explained it badly,
> but all my nodes had HA enabled.
>
> I have double-checked the lrm_status JSON files from my morning back
Adds a new button to the hardware panel labeled 'Reassign disk' and
enables a user to reassign a disk to another VM.
Signed-off-by: Aaron Lauterer
---
v2 -> v3:
* fixed check to omit the current vmid in the target dropdown
* renamed parameter disk to drive_key
* added missing comma
v1 -> v2: fi
This patch series adds the GUI to the recent patch series [0] which
enables the reassignment of disks between VMs.
For this to work, the previous patch series [0] needs to be applied and
installed.
v2 -> v3:
* fixed check if same VMID for dropdown
* renamed disk parameter to drive_key
v1 -> v2:
Sometimes the reset button does not make sense, and the isCreate option
does not fit either because, with it, the submit button is enabled
right away instead of waiting for the form to become valid.
Signed-off-by: Aaron Lauterer
---
v1 -> v2 -> v3: nothing changed
This helps to reuse the PVE.win
Signed-off-by: Aaron Lauterer
---
v2 -> v3: nothing changed
v1 -> v2: fixed linter errors
www/manager6/Utils.js | 15 ++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index bf9ceda9..19227384 100644
--- a/www/manager6/Ut
Signed-off-by: Aaron Lauterer
---
v1 -> v2 -> v3: nothing changed
www/manager6/qemu/HardwareView.js | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/www/manager6/qemu/HardwareView.js
b/www/manager6/qemu/HardwareView.js
index 40b3fe86..b641317d 100644
--- a/www/mana
Functionality has been added for the following storage types:
* dir based ones
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph
A new feature flag `reassign` has been introduced to mark which storage
plugins support the feature.
A new intermediate class for directory based
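Conceptually, the feature flag works like a per-storage-type capability lookup. A minimal sketch of the idea (the map and function names here are illustrative, not the real plugin API):

```javascript
// hypothetical sketch: per-storage-type feature lists, as the series
// marks which plugins support 'reassign'
const storageFeatures = {
    dir: ['reassign'], nfs: ['reassign'], cifs: ['reassign'],
    glusterfs: ['reassign'], zfspool: ['reassign'],
    lvmthin: ['reassign'], rbd: ['reassign'],
    iscsi: [],  // e.g. a type without the feature
};

function supportsReassign(type) {
    return (storageFeatures[type] || []).includes('reassign');
}
```

Callers can then refuse the operation up front for storages that do not advertise the feature.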
Signed-off-by: Aaron Lauterer
---
rfc -> v1 -> v2 -> v3: nothing changed
src/Utils.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/Utils.js b/src/Utils.js
index 8595cce..af41f33 100644
--- a/src/Utils.js
+++ b/src/Utils.js
@@ -587,6 +587,7 @@ utilities: {
qmigrate: ['VM', gett
This series implements a new feature which allows users to easily
reassign disks between VMs. Currently this is only possible with one of
the following manual steps:
* rename the disk image/file and do a `qm rescan`
* configure the disk manually and use the old image name, having an
image for
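The renaming step above boils down to rewriting the VMID embedded in the volume ID. A small sketch of that rewrite (the helper name and the directory-storage naming scheme are assumptions for illustration):

```javascript
// hypothetical helper: compute the new volid when reassigning a disk
// image to another VMID (directory-storage 'vm-<vmid>-disk-N' naming assumed)
function reassignedVolid(volid, newVmid) {
    // e.g. 'local:100/vm-100-disk-0.qcow2' -> 'local:101/vm-101-disk-0.qcow2'
    return volid.replace(/(\d+)\/vm-\1(-disk-\d+)/,
        `${newVmid}/vm-${newVmid}$2`);
}
```

The actual series does this server-side, plus the `qm rescan` that makes the renamed image show up as an unused disk.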
Signed-off-by: Aaron Lauterer
---
v2 -> v3: renamed parameter `disk` to `drive_key`
rfc -> v1 -> v2: nothing changed
PVE/CLI/qm.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 282fa86..7fe25c6 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -913,6 +
The goal of this new API endpoint is to provide an easy way to move a
disk between VMs, as until now this was only possible with manual
intervention: either by renaming the VM disk or by manually adding the
disk's volid to the config of the other VM.
The latter can easily cause unexpected behavior s
lxc does not have a debian/changelog (I also checked the 1.0.0 tag);
because the build checked for this file's existence, it always reset the
submodule to the currently committed state.
This makes testing a new tag a bit less comfortable (you need to commit it
before building).
Signed-off-
pbs cannot handle minute-only calendar events
(because we support seconds there, and hh:mm and mm:ss would be ambiguous),
so we want to eventually remove that in pve (in a future major version)
to have a consistent calendar event syntax.
For this we do 4 things here:
* transform such events when wri
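The ambiguity can be made concrete: a two-field spec like '05:30' has two plausible readings depending on whether the hour is optional. A sketch (function name and return shape are illustrative only):

```javascript
// hypothetical illustration: why PBS cannot make the hour optional.
// The same two-field spec means different things under the two readings.
function interpretTwoFieldSpec(spec, mode) {
    const [a, b] = spec.split(':').map(Number);
    if (mode === 'hh:mm') {
        // once per day at hour a, minute b
        return { hour: a, minute: b, second: 0 };
    }
    if (mode === 'mm:ss') {
        // every hour at minute a, second b
        return { hour: null, minute: a, second: b };
    }
    throw new Error('unknown mode');
}
```

Since PBS supports a seconds field, always requiring the hour is the only way to keep the grammar unambiguous.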
we want to eventually eliminate the 'minute-only' syntax, so change
the defaults from '*/15' to the semantically identical '*:00/15'.
Also make the replication edit window a bit wider to show the complete
empty text.
Signed-off-by: Dominik Csapak
---
www/manager6/grid/Replication.js | 7 ---
1 file
'22' as a timespec triggers every hour at minute 22, i.e.
0:22, 1:22, etc., not at 22:00 as the text suggests.
Fix the examples to match the text.
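The minute-only semantics can be demonstrated by expanding a spec like '22' into the times it actually fires at (a minimal sketch; the function name is made up for illustration):

```javascript
// hypothetical sketch: a minute-only timespec like '22' fires hourly
// at that minute, not once per day at that hour
function minuteOnlyTriggers(minute, hours = 3) {
    const out = [];
    for (let h = 0; h < hours; h++) {
        out.push(`${h}:${String(minute).padStart(2, '0')}`);
    }
    return out;
}
// minuteOnlyTriggers(22) yields '0:22', '1:22', '2:22', ... never '22:00'
```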
Signed-off-by: Dominik Csapak
---
pvesr.adoc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/pvesr.adoc b/pvesr.adoc
index a1a
systemd and pbs require the hour for schedules, so to be compatible
in the future, transform all schedules that omit the hour to '*:MINSPEC'
(where MINSPEC is the given spec for the minutes).
We do this now so we can drop the 'minute-only' syntax at some point
in the future.
also adapt the default va
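The transformation described above can be sketched in a few lines (this is a simplified illustration, not the real PVE code, which also has to handle weekday prefixes and other spec parts):

```javascript
// hypothetical sketch: a schedule that omits the hour becomes '*:MINSPEC',
// which is semantically identical but also valid in systemd and PBS
function addHourPart(spec) {
    // a spec containing ':' already has an hour part; leave it alone
    return spec.includes(':') ? spec : `*:${spec}`;
}
// addHourPart('*/15') -> '*:*/15'; addHourPart('*:00/15') is unchanged
```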
pbs cannot handle minute-only calendar events, and we want to become
consistent eventually, so replace all those examples with
the hour added.
Also replace */x examples with 0/x, as they are functionally the same
and the latter is also valid in systemd (the former is not).
Signed-off-by: Dominik Csapak
pbs cannot parse minutes alone
(if the hours were optional, it would be ambiguous between hh:mm and mm:ss),
so we adapt the examples here so that schedules newly created from the
examples always include the hour.
Also replace */x with 0/x; they are semantically the same, but the
latter is also
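That '*/x' and '0/x' select the same minutes is easy to check by expanding both (a minimal sketch; the parser below only handles this one spec shape and is not the real implementation):

```javascript
// hypothetical sketch: expand a 'start/step' minute spec into the
// set of minutes it selects; '*' as start means 0 for the minute field
function expandMinuteSpec(spec) {
    const m = spec.match(/^(\*|\d+)\/(\d+)$/);
    if (!m) throw new Error('unsupported spec');
    const start = m[1] === '*' ? 0 : parseInt(m[1], 10);
    const step = parseInt(m[2], 10);
    const out = [];
    for (let v = start; v < 60; v += step) out.push(v);
    return out;
}
// '*/15' and '0/15' both expand to minutes 0, 15, 30, 45
```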
this patchset updates lxc to 4.0.4.
Tested on my system and a virtual ceph cluster (for HA and migration tests);
created some backups of containers. Everything seems to work fine.
Stoiko Ivanov (2):
update upstream to 4.0.4 and rebase patches
bump version to 4.0.4-1
debian/changelog
Signed-off-by: Stoiko Ivanov
---
...ning-lxc-monitord-as-a-system-daemon.patch | 4 +--
...roup.dir.-monitor-container-containe.patch | 8 +++---
container.namespace-lxc.cgroup.cont.patch | 2 +-
...dd-and-document-cgroup_advanced_isol.patch | 14 +-
...up.dir.-monitor-container-co
Signed-off-by: Stoiko Ivanov
---
debian/changelog | 6 ++
1 file changed, 6 insertions(+)
diff --git a/debian/changelog b/debian/changelog
index e1ab53d..8bca333 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+lxc-pve (4.0.4-1) pve; urgency=medium
+
+ * update to lxc-4
>> as said, if the other nodes were not using HA, the watchdog-mux had no
>> client which could expire.
sorry, maybe I explained it badly,
but all my nodes had HA enabled.
I have double-checked the lrm_status JSON files from my morning backup, 2h
before the problem;
they were all in "active" state.
On 10.09.20 10:23, Fabian Grünbichler wrote:
> On September 10, 2020 10:19 am, Thomas Lamprecht wrote:
>> On 10.09.20 10:00, Fabian Grünbichler wrote:
>>> TL;DR: iff we really need this, then I'd put it in a separate API call.
>> We could also just do the "cap heuristic calculation" in the frontend
On September 10, 2020 10:19 am, Thomas Lamprecht wrote:
> On 10.09.20 10:00, Fabian Grünbichler wrote:
>> also, permissions has a return schema already, while it does 'match'
>> from a structural point of view (a two-level deep hash), it is something
>> altogether different semantically.
>
> as
On 10.09.20 06:58, Alexandre DERUMIER wrote:
> Thanks Thomas for the investigations.
>
> I'm still trying to reproduce...
> I think I have some special case here, because the user of the forum with 30
> nodes had corosync cluster split. (Note that I had this bug 6 months ago,when
> shutting down
On 10.09.20 10:00, Fabian Grünbichler wrote:
> On September 9, 2020 9:00 pm, Thomas Lamprecht wrote:
>> On 06.07.20 14:45, Tim Marx wrote:
>>> Signed-off-by: Tim Marx
>>> ---
>>> * no changes
>>
>> Maybe we could merge this into the "/access/permissions" endpoint, maybe
>> with a
>> "heuristic"
On September 9, 2020 9:00 pm, Thomas Lamprecht wrote:
> On 06.07.20 14:45, Tim Marx wrote:
>> Signed-off-by: Tim Marx
>> ---
>> * no changes
>
> Maybe we could merge this into the "/access/permissions" endpoint, maybe with
> a "heuristic" parameter?
IIRC Dominik wanted to slowly replace the