On 26.09.20 07:29, Alexandre DERUMIER wrote:
> I was thinking about another way, where the user could also manually edit
> the /etc/pve/sdn/*.cfg files
> (or use some automation tools like puppet, ansible, ... to manage their
> network).
>
> I was thinking about this:
>
> sdn/*.cfg are the pending config
>>
>>Having two versions, the enacted and a pending, could be enough
>>
>>* if both are the same all is applied
>>* if pending is newer we can show it, but new changes should not further
>> increase the version, they are seen as part of the current pending stuff.
>>* if pending is older, bug but d
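A minimal Perl sketch of those three cases (just an illustration; the version fields and where they would be stored are assumptions, not the actual SDN config schema):

#!/usr/bin/perl
use strict;
use warnings;

# Compare the enacted config version with the pending one, as outlined above.
sub pending_state {
    my ($enacted_version, $pending_version) = @_;
    return 'applied' if $pending_version == $enacted_version;
    return 'pending' if $pending_version > $enacted_version;
    return 'error';    # pending older than enacted should never happen
}

print pending_state(4, 4), "\n";    # applied
print pending_state(4, 5), "\n";    # pending; further edits stay in this pending version
print pending_state(4, 3), "\n";    # error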
here is a new hang:
http://odisoweb1.odiso.net/test4/
This time on corosync start.
node1:
-
start corosync : 17:22:02
node2
-
/etc/pve locked 17:22:07
Something new: when doing a coredump or bt-full on pmxcfs on node1,
this unlocked /etc/pve on the other nodes
/etc/pve unlocked(wit
On 25.09.20 15:36, Fabian Grünbichler wrote:
>
>> Thomas Lamprecht wrote on 25.09.2020 at 15:23:
>>
>>
>> On 25.09.20 14:53, Fabian Grünbichler wrote:
>>> dfsm_send_state_message_full always returns != 0, since it returns
>>> cs_error_t which starts with CS_OK at 1, with values >1 representing
>>> errors.
> Thomas Lamprecht wrote on 25.09.2020 at 15:23:
>
>
> On 25.09.20 14:53, Fabian Grünbichler wrote:
> > dfsm_send_state_message_full always returns != 0, since it returns
> > cs_error_t which starts with CS_OK at 1, with values >1 representing
> > errors.
> >
> > Signed-off-by: Fabi
On 25.09.20 14:53, Fabian Grünbichler wrote:
> dfsm_send_state_message_full always returns != 0, since it returns
> cs_error_t which starts with CS_OK at 1, with values >1 representing
> errors.
>
> Signed-off-by: Fabian Grünbichler
> ---
> unfortunately not the cause of Alexandre's shutdown/res
dfsm_send_state_message_full always returns != 0, since it returns
cs_error_t which starts with CS_OK at 1, with values >1 representing
errors.
Signed-off-by: Fabian Grünbichler
---
unfortunately not the cause of Alexandre's shutdown/restart issue, but
might have caused some hangs as well since
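To make the broken check concrete: the real code is C in pmxcfs, but the same logic error can be sketched in a few lines of standalone Perl, with the constants simply mirroring corosync's cs.h (CS_OK is 1, error codes are larger), so comparing against 0 can never distinguish success from failure:

#!/usr/bin/perl
use strict;
use warnings;

# Constants mirror corosync's cs.h: success is CS_OK == 1 and errors start
# above it, so a cs_error_t value is never 0.
use constant {
    CS_OK            => 1,
    CS_ERR_LIBRARY   => 2,
    CS_ERR_TRY_AGAIN => 6,
};

# stand-in for dfsm_send_state_message_full(); always succeeds here
sub send_state_message { return CS_OK; }

my $res = send_state_message();

# buggy check: since CS_OK is 1, "!= 0" flags even a successful send as an error
print "'!= 0' check: treated as failure although the send succeeded\n" if $res != 0;

# correct check: only real error codes take the failure path
if ($res != CS_OK) {
    print "'!= CS_OK' check: real error $res\n";
} else {
    print "'!= CS_OK' check: success\n";
}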
On September 25, 2020 11:46 am, Alexandre DERUMIER wrote:
>>>I will rebuild once more modifying the send code a bit to log a lot more
>>>details when sending state messages, it would be great if you could
>>>repeat with that as we are still unable to reproduce the issue.
>
> ok, no problem, I'm ab
Signed-off-by: Alwin Antreich
---
Note: The footnote in the title section broke the link building for that
footnote when used with a variable at the beginning of the URL.
The parser seems to look for http(s) and otherwise considers it text.
But interestingly it worked with the
* use a variable instead of hardcoded url+release name
* ceph migrated to readthedocs with a minor uri change
https://lists.ceph.io/hyperkitty/list/ceph-us...@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/
Signed-off-by: Alwin Antreich
---
pve-storage-cephfs.adoc | 2 +-
pveceph.adoc
>>I will rebuild once more modifying the send code a bit to log a lot more
>>details when sending state messages, it would be great if you could
>>repeat with that as we are still unable to reproduce the issue.
ok, no problem, I'm able to easily reproduce it, I'll do a new test when you
send the
The subject should probably be 'fix clone_disk failing for nonexistent
cloudinit disk'. Want me to send a v2?
On 9/25/20 10:51 AM, Mira Limbeck wrote:
After migration or a rollback the cloudinit disk might not be allocated, so
volume_size_info() fails. As we override the value anyway for cloudi
On September 25, 2020 9:15 am, Alexandre DERUMIER wrote:
>
> Another hang, this time on corosync stop, coredump available
>
> http://odisoweb1.odiso.net/test3/
>
>
>
> node1
>
> s
On 25.09.20 10:35, Alexandre DERUMIER wrote:
>>> but how do you detect pending changes now?
>
> Well, the feature was mainly to detect pending changes after a reload,
> i.e. if a reload wasn't applied correctly on a node, or if a node was down.
>
> I don't know if we want to display to the user "pendi
After migration or a rollback the cloudinit disk might not be allocated, so
volume_size_info() fails. As we override the value anyway for cloudinit
and efi disks, simply move the volume_size_info() call into the 'else'
branch.
Signed-off-by: Mira Limbeck
---
PVE/QemuServer.pm | 4 +++-
1 file cha
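As a rough standalone illustration of that restructuring (not the actual QemuServer.pm hunk; the stub below only mimics PVE::Storage::volume_size_info() failing on a missing volume, and the drive flags and sizes are made up): moving the size query into the else branch means a missing cloudinit volume no longer aborts the operation.

#!/usr/bin/perl
use strict;
use warnings;

# Stub that behaves like a size query on a volume that may not exist:
# it dies for an unallocated cloudinit disk after a migration or rollback.
my $existing = { 'local-lvm:vm-100-disk-0' => 34359738368 };
sub volume_size_info {
    my ($volid) = @_;
    die "no logical volume '$volid' found\n" if !exists $existing->{$volid};
    return $existing->{$volid};
}

my @drives = (
    { volid => 'local-lvm:vm-100-disk-0',    is_cloudinit => 0, is_efidisk => 0 },
    { volid => 'local-lvm:vm-100-cloudinit', is_cloudinit => 1, is_efidisk => 0 },
);

for my $drive (@drives) {
    my $size;
    if ($drive->{is_cloudinit}) {
        $size = 4 * 1024 * 1024;      # overridden anyway, volume may not exist
    } elsif ($drive->{is_efidisk}) {
        $size = 128 * 1024;           # likewise overridden
    } else {
        # only regular disks need the volume to actually exist
        $size = volume_size_info($drive->{volid});
    }
    print "$drive->{volid}: $size bytes\n";
}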
All volumes contained in $vollist are activated, in this case a snapshot
of the volume. For cloudinit disks no snapshots are created, so don't add
them to the list of volumes to activate, as it otherwise fails with "no
logical volume found".
Signed-off-by: Mira Limbeck
---
PVE/API2/Qemu.pm | 1 +
1 fi
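A standalone sketch of that filtering (again not the PVE::API2::Qemu code; the cloudinit test and drive names are simplified stand-ins): the snapshot volume of a cloudinit drive simply never makes it into the activation list.

#!/usr/bin/perl
use strict;
use warnings;

# Build the list of snapshot volumes to activate, skipping cloudinit drives:
# no snapshot is created for them, so activating "<volid>@<snap>" would fail
# with "no logical volume found".
sub volumes_to_activate {
    my ($drives, $snapname) = @_;
    my @vollist;
    for my $drive (@$drives) {
        next if $drive->{is_cloudinit};    # simplified stand-in for drive_is_cloudinit()
        push @vollist, "$drive->{volid}\@$snapname";
    }
    return \@vollist;
}

my $drives = [
    { volid => 'local-lvm:vm-100-disk-0',    is_cloudinit => 0 },
    { volid => 'local-lvm:vm-100-cloudinit', is_cloudinit => 1 },
];

my $vollist = volumes_to_activate($drives, 'snap1');
print "activate: $_\n" for @$vollist;    # only the regular disk's snapshot volume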
Also,
for example, when you add a new vnet in a zone,
it was displaying a pending-changes warning for all vnets/zones,
as I currently don't have enough granularity (a global version info in
/etc/network/interfaces.d/sdn; or we should have some kind of versioning info
per vnet in /etc/network/in
> Stephan Leemburg wrote on 25.09.2020 at 10:26:
>
>
> Hi,
>
> For me on a 2 node cluster it shows
>
> LXC Container 19
>
> Online 15
>
> Offline -13
>
> And there are 2 VM's. They show correctly
>
> Virtual Machines 2
>
> Online 1
>
> Offline 1
>
Already fixed, but thanks
>>but how do you detect pending changes now?
Well, the feature was mainly to detect pending changes after a reload,
i.e. if a reload wasn't applied correctly on a node, or if a node was down.
I don't know if we want to display to the user "pending config" changes
that are not yet applied?
Before this commit,
Hi,
For me on a 2 node cluster it shows
LXC Container 19
Online 15
Offline -13
And there are 2 VM's. They show correctly
Virtual Machines 2
Online 1
Offline 1
Next to that, when I want to open a console, it says that a spice client
needs to be installed.
Do the developers of the app ha
Hi,
On 21.09.20 18:51, Alexandre Derumier wrote:
> can you update the ifupdown2 mirror to the latest master?
> I have reworked the postinst/preinst to generate
> /etc/network/interfaces.new
> only at the first install of ifupdown2.
>
> I have added a small patch to allow vlan inside vxlan tunnel,
Hi,
On 9/24/20 11:29 PM, René Jochum wrote:
Hi,
I have atm.:
5 QEMU VM's
3 - Running
2 - Offline
2 LXC Containers both running.
It seems the app counts "Offline LXC" wrong and incorporates QEMU
online or something; it shows:
VM's:
3 Online
2 Offline
LXC's:
2
On 24.09.20 10:40, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier
> ---
> PVE/API2/Network/SDN.pm | 3 +++
> PVE/API2/Network/SDN/Controllers.pm | 6 --
> PVE/API2/Network/SDN/Subnets.pm | 3 ---
> PVE/API2/Network/SDN/Vnets.pm | 3 ---
> PVE/API2/Network/
Another hang, this time on corosync stop, coredump available
http://odisoweb1.odiso.net/test3/
node1
stop corosync : 09:03:10
node2: /etc/pve locked
--
Current time : 09:03:1
On 24.09.20 09:27, Alwin Antreich wrote:
> ceph migrated their documentation to readthedocs with a minor uri change
> https://lists.ceph.io/hyperkitty/list/ceph-us...@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/
>
> Signed-off-by: Alwin Antreich
> ---
> pveceph.adoc | 28 ++--
On 24.09.20 15:18, Dominik Csapak wrote:
> if the checkbox is not checked, we set the value of the vmid filter to ''
> but leave 'exactMatch' enabled, which means we filter out everything where
> the vmid is not ''
>
> what we instead want is to also remove the exactMatch so that we
> get *all* entries ba