On 10/21/19 10:11 PM, Stoiko Ivanov wrote:
> by:
> * running /usr/bin/newaliases (generating /etc/aliases.db)
> * setting the compatibility_level to 2
> ** otherwise a warning was issued with broken aliases.db that the system
> is using the backward compatible setting of $mydestination for
> $r
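For reference, a minimal sketch of those two steps as they could be driven from Perl (an assumed approach for illustration, not necessarily how the actual pve-manager patch does it):

    use strict;
    use warnings;
    use PVE::Tools;

    # regenerate /etc/aliases.db from /etc/aliases
    PVE::Tools::run_command(['/usr/bin/newaliases']);

    # persist the non-backward-compatible default behaviour
    PVE::Tools::run_command(['postconf', '-e', 'compatibility_level = 2']);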
On 10/21/19 5:31 PM, Thomas Lamprecht wrote:
> Thanks to Dietmar's patch[0] those VMs can now be backed up
> successfully, so remove this aborting check.
> [0]:
> https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=69cb18950a705b54f438f4659b603b3f52901c2f
> Signed-off-by: Thomas Lamprecht
> ---
While testing I found a small typo. lmk if you want a v2, probably better
after being reviewed
> diff --git a/www/manager6/lxc/Resources.js b/www/manager6/lxc/Resources.js
> index 8b924a49..b31e101a 100644
> --- a/www/manager6/lxc/Resources.js
> +++ b/www/manager6/lxc/Resources.js
> @@ -215,6 +215,31 @@ Ext.defin
for nf_conntrack_max the kernel by default uses the value
(nf_conntrack_buckets * 4), and nf_conntrack_buckets
is set to 2^16 for machines with more than 4GB of memory, so the
resulting default would be 2^18 == 262144.
As PVE hosts are expected to have more than such a, nowadays rather
small,
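The kernel-default arithmetic from above spelled out as a tiny sketch (just the numbers, nothing PVE-specific):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $nf_conntrack_buckets = 2**16;                    # default for hosts with > 4GB memory
    my $nf_conntrack_max     = $nf_conntrack_buckets * 4;
    print "$nf_conntrack_max\n";                         # 262144 == 2^18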
On October 21, 2019 12:12 pm, Wolfgang Link wrote:
> comment inline
>
> On 10/18/19 11:22 AM, Fabian Grünbichler wrote:
>> note: the comment here is not just for this patch, but also references
>> stuff that comes in later patches..
>>
>> On October 14, 2019 1:08 pm, Wolfgang Link wrote:
>>> The d
On October 21, 2019 12:11 pm, Wolfgang Link wrote:
> comment inline
>
> On 10/18/19 11:23 AM, Fabian Grünbichler wrote:
>> On October 14, 2019 1:08 pm, Wolfgang Link wrote:
>>> ---
>>> src/PVE/ACME.pm | 12
>>> src/PVE/ACME/Challenge.pm | 6 ++
>>> src/PVE/ACME/St
On October 21, 2019 12:12 pm, Wolfgang Link wrote:
> comment inline
>
> On 10/18/19 11:25 AM, Fabian Grünbichler wrote:
>> On October 14, 2019 1:08 pm, Wolfgang Link wrote:
>>> ---
>>> src/PVE/ACME.pm | 16
>>> src/PVE/ACME/StandAlone.pm | 9 +
>>> 2 files
On October 21, 2019 12:11 pm, Wolfgang Link wrote:
>
> On 10/18/19 11:26 AM, Fabian Grünbichler wrote:
>> On October 14, 2019 1:08 pm, Wolfgang Link wrote:
>>> This parameter allows using an alternative domain
>>> for setting up the DNS record.
>>>
>>> This can be useful for security reasons or if the
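Assuming this refers to the usual CNAME alias pattern (an assumption; the description above is truncated): _acme-challenge on the real domain points via CNAME into a separate, less sensitive zone, and the ACME validation TXT record is only ever created in that alias zone, so the credentials used for the challenge never need write access to the main zone.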
On October 21, 2019 12:11 pm, Wolfgang Link wrote:
>
> On 10/18/19 11:27 AM, Fabian Grünbichler wrote:
>> I don't understand how this relates to #5 ? it's also seemingly not used
>> anywhere?
>
> This is the alternative to dynamically loading the plugin from the key file
> name
>
> and should be t
On October 21, 2019 12:11 pm, Wolfgang Link wrote:
> comment inline
>
> On 10/18/19 11:27 AM, Fabian Grünbichler wrote:
>> On October 14, 2019 1:08 pm, Wolfgang Link wrote:
>>> This composer supports two different operations.
>>> pve-setup: this operation adds the DNS TXT record.
>>> pve-teard
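To illustrate the two operations, a minimal stand-alone dispatch sketch in Perl (the helper subs are hypothetical placeholders, and the exact name of the second operation is truncated above):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # hypothetical placeholders for whatever actually talks to the DNS plugin
    sub add_txt_record { my ($fqdn, $txt) = @_; print "add TXT $fqdn -> $txt\n"; }
    sub del_txt_record { my ($fqdn, $txt) = @_; print "del TXT $fqdn\n"; }

    my ($op, $fqdn, $txtvalue) = @ARGV;
    die "usage: $0 <operation> <fqdn> <txtvalue>\n" if !$op;

    if ($op eq 'pve-setup') {
        add_txt_record($fqdn, $txtvalue);
    } elsif ($op =~ m/^pve-tear/) {    # the counterpart operation, name truncated above
        del_txt_record($fqdn, $txtvalue);
    } else {
        die "unknown operation '$op'\n";
    }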
On October 21, 2019 12:11 pm, Wolfgang Link wrote:
>
> On 10/18/19 11:28 AM, Fabian Grünbichler wrote:
>> On October 14, 2019 1:08 pm, Wolfgang Link wrote:
>>> ---
>>> src/PVE/ACME/ACME_sh.pm | 7 +++
>>> 1 file changed, 7 insertions(+)
>>>
>>> diff --git a/src/PVE/ACME/ACME_sh.pm b/src/PVE
On October 21, 2019 12:11 pm, Wolfgang Link wrote:
> comment inline
>
> On 10/18/19 11:28 AM, Fabian Grünbichler wrote:
>> so this got a bit longer than expected - just high-level feedback, I
>> haven't actually tested anything yet since there are too many open
>> general design questions for th
On 10/22/19 9:26 AM, Dominik Csapak wrote:
> On 10/21/19 5:31 PM, Thomas Lamprecht wrote:
>> Thanks to Dietmar's patch[0] those VMs can now be backed up
>> successfully, so remove this aborting check.
>>
>> [0]:
>> https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=69cb18950a705b54f438f4659b603b3f5
We cannot activate a path with activate_volumes, only volume IDs
(duh)
fixes commit 5c1d42b7f825fa124ff3701b32f9ecc011bece95
Signed-off-by: Thomas Lamprecht
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7b22
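The distinction in a nutshell, as a minimal sketch (the storage name and volume ID below are made up):

    use strict;
    use warnings;
    use PVE::Storage;

    my $storecfg = PVE::Storage::config();

    # wrong: activate_volumes() expects volume IDs, not filesystem paths
    # PVE::Storage::activate_volumes($storecfg, ['/dev/zvol/rpool/data/vm-100-disk-0']);

    # right: pass the "<storage>:<volname>" volume ID instead
    PVE::Storage::activate_volumes($storecfg, ['local-zfs:vm-100-disk-0']);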
or any other variant of the word 'pending'
note that we can actually allow such snapshot names after PVE 7.0, since
the pending section and snapshots will be properly namespaced ([pve:pending] and
[snap:$snapname] or similar)
Signed-off-by: Oguz Bektas
---
PVE/QemuServer.pm | 5 -
1 file changed, 4
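A minimal sketch of what such a check boils down to (the actual hunk is truncated above, so this is illustrative only):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $snapname = shift // die "usage: $0 <snapname>\n";
    # refuse any case variant of the word 'pending' as a snapshot name
    die "illegal snapshot name '$snapname'\n" if lc($snapname) eq 'pending';
    print "snapshot name '$snapname' is acceptable\n";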
to make the pct/qemu config formats more similar, we can namespace the
pending section using the 'pve:' prefix, like in the pct parser.
the new format is optional in the parser, but the default in the writer.
with PVE 7.0, we can make it the default in the parser too.
Signed-off-by: Oguz Bektas
---
PVE/QemuSer
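As a rough illustration of the namespaced section header (the regexes here are assumptions, not the actual qemu-server parser):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $section = 'main';
    while (my $line = <STDIN>) {
        if ($line =~ m/^\[(?:pve:)?pending\]\s*$/i) {
            $section = 'pending';              # old '[PENDING]' or new '[pve:pending]'
        } elsif ($line =~ m/^\[([a-zA-Z0-9_\-]+)\]\s*$/) {
            $section = "snapshot:$1";          # plain '[name]' headers stay snapshots
        } else {
            print "[$section] $line";
        }
    }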
On 10/22/19 12:12 PM, Oguz Bektas wrote:
> or any other variant of the word 'pending'
> note that we can actually allow such snapshot names after PVE 7.0, since
> the pending section and snapshots will be properly namespaced ([pve:pending] and
> [snap:$snapname] or similar)
> Signed-off-by: Oguz Bektas
> ---
> PVE
On Tue, Oct 22, 2019 at 12:15:35PM +0200, Stefan Reiter wrote:
> On 10/22/19 12:12 PM, Oguz Bektas wrote:
> > or any other variant of the word 'pending'
> >
> > note that we can actually allow such snapshot names after PVE 7.0, since
> > the pending section and snapshots will be properly namespaced ([pve:p
fixes a bug where 'detach' caused disks to be destroyed immediately,
because the $force parameter was always true, since a hash is true in boolean context.
Signed-off-by: Oguz Bektas
---
PVE/QemuServer.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index
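The underlying Perl gotcha as a self-contained sketch (simplified, not the actual qemu-server code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    sub destroy_or_detach {
        my ($disk, $force) = @_;
        return $force ? "destroy $disk" : "detach $disk";
    }

    my $pending_delete = { force => 0 };

    # buggy: passing the whole hash reference - a reference is always true
    print destroy_or_detach('scsi0', $pending_delete), "\n";            # destroy scsi0

    # fixed: pass the actual flag instead
    print destroy_or_detach('scsi0', $pending_delete->{force}), "\n";   # detach scsi0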
Signed-off-by: Fabian Ebner
---
local-zfs.adoc | 44
1 file changed, 44 insertions(+)
diff --git a/local-zfs.adoc b/local-zfs.adoc
index b4fb7db..378cbee 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -431,3 +431,47 @@ See the `encryptionroot`, `
the format of pending_delete_hash changed in guest-common, so we have to
use the new format while looping over the hash.
Signed-off-by: Oguz Bektas
---
src/PVE/LXC/Config.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index
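For illustration, the old vs. new shape of the hash and the adjusted loop (shapes assumed from the description above; see pve-guest-common for the real thing):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # old format: { $opt => $force }
    # new format: { $opt => { force => $force } }
    my $pending_delete_hash = {
        net0 => { force => 1 },
        mp0  => { force => 0 },
    };

    foreach my $opt (sort keys %$pending_delete_hash) {
        my $force = $pending_delete_hash->{$opt}->{force};   # new format accessor
        print "delete pending option '$opt' (force=$force)\n";
    }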
On 10/22/19 12:34 PM, Oguz Bektas wrote:
> fixes a bug where 'detach' caused disks to be destroyed immediately,
> because the $force parameter was always true, since a hash is true in boolean context.
>
> Signed-off-by: Oguz Bektas
> ---
> PVE/QemuServer.pm | 6 --
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
On 10/22/19 12:40 PM, Oguz Bektas wrote:
> the format of pending_delete_hash changed in guest-common, so we have to
> use the new format while looping over the hash.
>
> Signed-off-by: Oguz Bektas
> ---
> src/PVE/LXC/Config.pm | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff
Joining a little late on this, but I would vote for an option where we inform
the client that this endpoint needs some sort of re-authentication to be
accessible, similar to what Thomas already proposed. I discussed this a little
bit with Thomas (off-list), but for me the only remaining differ
Do we really want an enable/disable property?
Wouldn't it be enough to delete the token?
> Fabian Grünbichler wrote on October 17, 2019 at 15:14:
>
>
> and integration for user API endpoints.
>
> Signed-off-by: Fabian Grünbichler
> ---
>
> Notes:
> pveum integration will come i
On Tue, 22 Oct 2019 09:22:24 +0200
Thomas Lamprecht wrote:
> On 10/21/19 10:11 PM, Stoiko Ivanov wrote:
> > by:
> > * running /usr/bin/newaliases (generating /etc/aliases.db)
> > * setting the compatibility_level to 2
> > ** otherwise a warning was issued with broken aliases.db that the system
>
otherwise, having multiple ipconfigX entries can lead to different
instance-ids on different startups, which is not desired
Signed-off-by: Dominik Csapak
---
2 issues I have with this:
* we have a cyclic dependency between PVE::QemuServer and
PVE::QemuServer::Cloudinit, and this patch increase
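A rough sketch of the deterministic-ordering idea described above (simplified, not the actual PVE::QemuServer::Cloudinit code):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Digest::SHA qw(sha1_hex);

    my $conf = {
        ipconfig1 => 'ip=10.0.0.2/24,gw=10.0.0.1',
        ipconfig0 => 'ip=dhcp',
    };

    # hash the network settings in a fixed (sorted) key order, so the
    # resulting instance-id does not change between VM starts
    my $data = join "\n",
        map { "$_=$conf->{$_}" }
        sort grep { /^ipconfig\d+$/ } keys %$conf;

    print "instance-id: ", sha1_hex($data), "\n";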
On October 22, 2019 1:41 pm, Tim Marx wrote:
> Joining a little late on this, but I would vote for an option where we
> inform the client that this endpoint needs some sort of re-authentication to
> be accessible, similar to what Thomas already proposed. I discussed this a
> little bit with Th
On October 22, 2019 1:44 pm, Tim Marx wrote:
> Do we really want an enable/disable property?
> Wouldn't it be enough to delete the token?
there's a difference though. I might have configured the token on X
systems, but want to temporarily disable it. since the actual token
value is generated on c
On 10/22/19 3:22 PM, Fabian Grünbichler wrote:
> On October 22, 2019 1:44 pm, Tim Marx wrote:
>> Do we really want an enable/disable property?
>> Wouldn't it be enough to delete the token?
>
> there's a difference though. I might have configured the token on X
> systems, but want to temporarily di
On October 22, 2019 3:32 pm, Thomas Lamprecht wrote:
> On 10/22/19 3:22 PM, Fabian Grünbichler wrote:
>> On October 22, 2019 1:44 pm, Tim Marx wrote:
>>> Do we really want an enable/disable property?
>>> Wouldn't it be enough to delete the token?
>>
>> there's a difference though. I might have conf
On 10/22/19 3:50 PM, Fabian Grünbichler wrote:
> On October 22, 2019 3:32 pm, Thomas Lamprecht wrote:
>> On 10/22/19 3:22 PM, Fabian Grünbichler wrote:
>>> On October 22, 2019 1:44 pm, Tim Marx wrote:
>>>> Do we really want an enable/disable property?
>>>> Wouldn't it be enough to delete the token?
Signed-off-by: Christian Ebner
---
version 5:
* only show checkbox for CT/VM destroy dialog (as suggested)
* added qtip to checkbox
www/manager6/window/SafeDestroy.js | 22 ++
1 file changed, 22 insertions(+)
diff --git a/www/manager6/window/SafeDestroy.js
b/www/mana
Signed-off-by: Alexandre Derumier
---
test/documentation.txt | 22 +++---
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/test/documentation.txt b/test/documentation.txt
index 3f70987..8b78d46 100644
--- a/test/documentation.txt
+++ b/test/documentation.txt
@@ -2,3
Split the code into different plugins (vnets/zones/controllers) for better
maintainability and readability
Alexandre Derumier (3):
split transport/controllers/vnet to separate plugins
api2 : split vnets/zones/controllers
update documentation.txt
PVE/API2/Network/Makefile | 1 -
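As a rough idea of what such a plugin split typically looks like in PVE (the module name and properties below are assumptions following the usual PVE::SectionConfig pattern, not the actual patch content):

    package PVE::Network::SDN::VnetPlugin;

    use strict;
    use warnings;
    use base qw(PVE::SectionConfig);

    sub type { return 'vnet'; }

    sub properties {
        return {
            zone => { type => 'string', description => 'associated zone' },
        };
    }

    sub options {
        return { zone => { optional => 0 } };
    }

    1;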
Following the pve-network patch series
Alexandre Derumier (1):
add sdn vnets/zones/controllers.cfg
data/PVE/Cluster.pm | 8 ++--
data/src/status.c | 8 ++--
2 files changed, 12 insertions(+), 4 deletions(-)
--
2.20.1
Signed-off-by: Alexandre Derumier
---
PVE/API2/Network.pm | 27 +--
1 file changed, 9 insertions(+), 18 deletions(-)
diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
index fa605ba7..5e5cb5fd 100644
--- a/PVE/API2/Network.pm
+++ b/PVE/API2/Network.pm
@@ -18,7 +18,8 @
Signed-off-by: Alexandre Derumier
---
PVE/API2/Nodes.pm | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 9e731e05..fa33ae00 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -52,7 +52,7 @@ use Socket;
my $have_sdn;
Following the pve-network patch series
Alexandre Derumier (3):
pvestatd: fix require PVE::Network::SDN
api2 : reload : use zones/controllers sdn plugins
api2 : nodes : use zones api status
PVE/API2/Network.pm | 27 +--
PVE/API2/Nodes.pm | 7 ---
PVE/S
Signed-off-by: Alexandre Derumier
---
PVE/API2/Network/Makefile | 1 -
PVE/API2/Network/SDN.pm | 310 +++-
PVE/API2/Network/SDN/Controllers.pm | 288 ++
PVE/API2/Network/SDN/Makefile | 4 +-
PVE/API2/N
Signed-off-by: Alexandre Derumier
---
PVE/Service/pvestatd.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index bad1b73d..2723f442 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -39,7 +39,7 @@ use bas
Signed-off-by: Alexandre Derumier
---
PVE/Network/SDN.pm | 297 +-
PVE/Network/SDN/Controllers.pm| 158 ++
.../FaucetPlugin.pm} | 14 +-
.../FrrEvpnPlugin.pm} | 21 +-
PVE/Network
Signed-off-by: Alexandre Derumier
---
data/PVE/Cluster.pm | 8 ++--
data/src/status.c | 8 ++--
2 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index 9cb68d8..3a0a35d 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/Cluster.pm
@@
As mentioned in #2408, live-migrating a VM between storages that use
different scsi backends (scsi-hd, scsi-generic, scsi-block) breaks.
To fix this, from QEMU 4.1 machine types onward (so as not to break current
behaviour any more), only use scsi-hd, as in recent versions there is
almost no difference betw
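A minimal sketch of the version gate (the helper below is an assumption, not the actual qemu-server code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # decide whether to unconditionally use scsi-hd, based on the machine
    # type version (e.g. parsed from 'pc-i440fx-4.1')
    sub use_scsi_hd_only {
        my ($major, $minor) = @_;
        return $major > 4 || ($major == 4 && $minor >= 1);
    }

    print use_scsi_hd_only(4, 1) ? "scsi-hd only\n" : "legacy device selection\n";
    print use_scsi_hd_only(4, 0) ? "scsi-hd only\n" : "legacy device selection\n";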
- On 22 Oct 19, at 17:25, Stefan Reiter s.rei...@proxmox.com wrote:
>
> @Daniel Berteaud: You also mentioned using scsi-hd fixes #2335 (which you
> already have submitted a patch for previously) and #2380. Is this correct?
> Just for reference, so we can keep them in sync on the bugtracker
On October 21, 2019 5:31 pm, Thomas Lamprecht wrote:
> Thanks to Dietmar's patch[0] those VMs can now be backed up
> successfully, so remove this aborting check.
>
> [0]:
> https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=69cb18950a705b54f438f4659b603b3f52901c2f
a bit late to the party, but isn
On 10/23/19 7:33 AM, Fabian Grünbichler wrote:
> On October 21, 2019 5:31 pm, Thomas Lamprecht wrote:
>> Thanks to Dietmar's patch[0] those VMs can now be backed up
>> successfully, so remove this aborting check.
>>
>> [0]:
>> https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=69cb18950a705b54f438f