The same storage needs to be configured on the target node for the
replication to work.
Signed-off-by: Aaron Lauterer
---
pvesr.adoc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pvesr.adoc b/pvesr.adoc
index 9eda008..4fc2de1 100644
--- a/pvesr.adoc
+++ b/pvesr.adoc
@@ -101
Signed-off-by: Aaron Lauterer
---
pvesr.adoc | 6 ++
1 file changed, 6 insertions(+)
diff --git a/pvesr.adoc b/pvesr.adoc
index 4fc2de1..209f306 100644
--- a/pvesr.adoc
+++ b/pvesr.adoc
@@ -181,6 +181,12 @@ A replication job is identified by a cluster-wide unique
ID. This ID is composed of
besides some smaller things that could be cleaned up in a follow-up,
consider this:
Tested-By: Aaron Lauterer
Reviewed-By: Aaron Lauterer
On 2024-11-05 03:00, Severen Redwood wrote:
At the moment, the `/cluster/nextid` API endpoint will return the lowest
available VM/CT ID, which means tha
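To make the behaviour concrete, here is a purely illustrative Perl sketch of the difference: picking the lowest ID that is not currently assigned versus also skipping IDs that were used before. The helper names (`$id_in_use`, `$id_was_used`) are hypothetical and not the actual PVE implementation.

```
# Illustrative sketch only: a "next free ID" lookup that also skips
# previously used IDs (the behaviour this series aims for).
sub next_free_vmid {
    my ($id_in_use, $id_was_used) = @_;    # code refs: current vs. historical usage

    for (my $id = 100; $id <= 999_999_999; $id++) {
        next if $id_in_use->($id);     # assigned to an existing VM/CT
        next if $id_was_used->($id);   # recorded as used at some point (the new part)
        return $id;
    }
    die "unable to find a free VM/CT ID\n";
}
```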
besides one small nit that could be cleaned up in a follow-up,
consider this:
Tested-By: Aaron Lauterer
Reviewed-By: Aaron Lauterer
On 2024-11-05 03:00, Severen Redwood wrote:
After a container is destroyed, record that its ID has been used via the
`PVE::UsedVmidList` module so that the `/
besides one small nit that could be cleaned up in a follow-up,
consider this:
Tested-By: Aaron Lauterer
Reviewed-By: Aaron Lauterer
On 2024-11-05 03:00, Severen Redwood wrote:
After a virtual machine is destroyed, record that its ID has been used
via the `PVE::UsedVmidList` module so that t
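The same pattern applies to both the CT and the VM destroy path. A hedged sketch of the idea, assuming `PVE::UsedVmidList` ends up providing something like an `add_vmid` helper (the actual function name in the series may differ):

```
# Sketch: after the guest is destroyed, remember its ID as "used".
# add_vmid() is an assumed helper name; a failure to record the ID
# should not block the destruction itself.
eval { PVE::UsedVmidList::add_vmid($vmid) };
warn "could not record VMID $vmid as used: $@" if $@;
```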
gave this series another spin in my test cluster and so far it seems to
work as described.
I did a very quick test to see whether there is some noticeable "lag" when
destroying a guest by filling the `used_vmids.list` with IDs from 100..3 and a
step size of 2, so close to 15k lines. But I didn't notice tha
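For anyone wanting to reproduce that test, a throwaway Perl snippet along these lines could generate a similarly sized list (the file path and the one-ID-per-line format are assumptions, not necessarily how the series stores it):

```
# Assumption: used_vmids.list is plain text with one VMID per line.
use PVE::Tools;

my @ids = grep { $_ % 2 == 0 } (100 .. 30_000);    # roughly 15k entries
PVE::Tools::file_set_contents('/tmp/used_vmids.list', join("\n", @ids) . "\n");
```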
noticed this while testing
https://lore.proxmox.com/pve-devel/20241031134629.144893-1-d.k...@proxmox.com
The first patch fixes the already allowed "permission self-service" for
users as the web UI implements it (it always passes the $userid
parameter).
The second patch extends that self-service
even when specifying an explicit userid matching their own.
Signed-off-by: Fabian Grünbichler
---
src/PVE/API2/AccessControl.pm | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/src/PVE/API2/AccessControl.pm b/src/PVE/API2/AccessControl.pm
index c55a7b3..157a5ee 100644
even if they lack Sys.Audit on /access - since tokens are self-service,
checking whether the ACLs work as expected should also be doable for every
user.
Signed-off-by: Fabian Grünbichler
---
src/PVE/API2/AccessControl.pm | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff -
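Both patches revolve around the same self-service pattern: a user may always query things scoped to their own userid, while anything else keeps requiring Sys.Audit. A simplified sketch of such a check in a PVE API handler (not the actual diff):

```
# Simplified sketch of a "permission self-service" check.
# $param is the API parameter hash available inside the handler.
use PVE::RPCEnvironment;

my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();

my $userid = $param->{userid} // $authuser;    # default to the caller

if ($userid ne $authuser) {
    # only privileged users may look at other users' permissions
    $rpcenv->check($authuser, '/access', ['Sys.Audit']);
}
```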
to avoid the need to mark every package shipping PVE-related perl code as
activating the explicit trigger. The explicit trigger can still be used for
packages that need to reload the API without shipping a perl module
the explicit trigger activation can be dropped in PVE 9.0 in packages that ship
As reported in the community forum [0], containers requiring cgroup v1
would not start anymore, even when systemd.unified_cgroup_hierarchy=0
was set on the kernel command line. The error message would be:
> cgfsng_setup_limits_legacy: 3442 No such file or directory - Failed to set
> "memory.limit_
Assume a cluster that already has an iSCSI storage A configured. After
adding a new iSCSI storage B with a different target on node 1, B will
only become active on node 1, not on the other nodes. On other nodes,
pvestatd logs 'storage B is not online'. The storage does not become
available even aft
Consider the whole series
Tested-by: Fabian Grünbichler
On October 31, 2024 2:46 pm, Daniel Kral wrote:
> Removes the record ("rec") variable from the TokenView, as it is always
> undefined, because the "Add" button is an ExtJS Button and therefore the
> button handler doesn't pass a third parame
the actual error and path are useful to know when trying to debug or
figure out what did not work, so warn here if there was an error.
Signed-off-by: Dominik Csapak
---
src/PVE/SysFSTools.pm | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/src/PVE/SysFSTools.pm b/src/PVE/S
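A hedged sketch of what such a change can look like (the real `PVE::SysFSTools::file_write` may differ in detail; the warning format here is illustrative):

```
# Sketch: include the path and the actual error in a warning instead of
# silently returning a false value when the sysfs write fails.
use IO::File;

sub file_write {
    my ($filename, $buf) = @_;

    my $fh = IO::File->new($filename, "w");
    return undef if !$fh;

    my $res = print {$fh} $buf;
    warn "error writing '$buf' to '$filename': $!\n" if !$res;
    $fh->close();

    return $res;
}
```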
As I feared previously in [0], making it a hard error when encountering
errors during sysfs writes uncovered some situations where our code was
too strict to keep some setups working.
One such case is resetting devices, which is seemingly not necessary
at all times, so this series
* downgrades th
Since pve-common commit:
eff5957 (sysfstools: file_write: properly catch errors)
this check here fails now when the reset does not work. It turns out
that resetting the device is not always necessary, and we previously
ignored most errors when trying to do so.
To restore that functionality, dow
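In other words, the failed reset gets downgraded from a fatal error to a warning. A rough sketch of that idea (not the actual patch; the call site and arguments in qemu-server differ):

```
# Sketch: a failed PCI device reset is logged but no longer aborts the
# VM start; $device/$pciid stand in for values present at the call site.
eval { PVE::SysFSTools::pci_dev_reset($device) };
warn "failed to reset PCI device '$pciid', continuing without reset: $@" if $@;
```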
On Tue, 5 Nov 2024 10:24:21 +0100
Dominik Csapak wrote:
> Since pve-common commit:
>
> eff5957 (sysfstools: file_write: properly catch errors)
>
> this check here fails now when the reset does not work. It turns out
> that resetting the device is not always necessary, and we previously
> igno
On 11/5/24 11:16, Stoiko Ivanov wrote:
On Tue, 5 Nov 2024 10:24:21 +0100
Dominik Csapak wrote:
Since pve-common commit:
eff5957 (sysfstools: file_write: properly catch errors)
this check here fails now when the reset does not work. It turns out
that resetting the device is not always nece
Thanks big-time for the quick fix!
I encountered this at a machine at home with an older GPU (NVIDIA GT1030)
passed through to a VM, which seemingly does not handle resets too well.
with both patches applied the guest starts again w/o error - the tasklog
contains:
```
error writing '1' to '/sys/b
Maximiliano Sandoval writes:
> Maximiliano Sandoval writes:
>
>> Maximiliano Sandoval writes:
>>
>> Ping.
>>
>>> Maximiliano Sandoval writes:
>>>
>>> Ping.
>>>
>>> Maximiliano Sandoval writes:
>
> Ping.
Ping.