On 09/10/2015 20:27, Gilou wrote:
> On 09/10/2015 20:14, Gilou wrote:
>> On 09/10/2015 18:36, Gilou wrote:
>>> On 09/10/2015 18:21, Dietmar Maurer wrote:
> So I tried again.. HA doesn't work.
> Both resources are now frozen (?), and they didn't restart... Even after
> 5 minutes...
On 09/10/2015 20:14, Gilou wrote:
> On 09/10/2015 18:36, Gilou wrote:
>> On 09/10/2015 18:21, Dietmar Maurer wrote:
So I tried again.. HA doesn't work.
Both resources are now frozen (?), and they didn't restart... Even after
5 minutes...
service vm:102 (pve1, freeze)
>>
On Fri, 9 Oct 2015 20:14:07 +0200
Gilou wrote:
>
> OK... Let's try pve3, cold migrate, without ha, enable ha again..
> interesting, now we have:
> # ha-manager status
> quorum OK
> master pve1 (active, Fri Oct 9 20:09:46 2015)
> lrm pve1 (old timestamp - dead?, Fri Oct 9 19:58:57 2015)
> lrm p
On 09/10/2015 18:36, Gilou wrote:
> On 09/10/2015 18:21, Dietmar Maurer wrote:
>>> So I tried again.. HA doesn't work.
>>> Both resources are now frozen (?), and they didn't restart... Even after
>>> 5 minutes...
>>> service vm:102 (pve1, freeze)
>>> service vm:303 (pve1, freeze)
>>
>> The question is why they are frozen.
On 09/10/2015 19:01, Gilou wrote:
> On 09/10/2015 18:43, Thomas Lamprecht wrote:
>> What's your cluster setup? Three nodes?
>
> 3 nodes, yes
>
>>
>> Output from:
>> # ha-manager status
>
> root@pve2:~# date
> Fri Oct 9 19:00:32 CEST 2015
> root@pve2:~# ha-manager status
> quorum OK
> master pve2 (old timestamp - dead?, Fri Oct 9 18:57:03 2015)
On 09/10/2015 18:43, Thomas Lamprecht wrote:
> What's your cluster setup? Three nodes?
3 nodes, yes
>
> Output from:
> # ha-manager status
root@pve2:~# date
Fri Oct 9 19:00:32 CEST 2015
root@pve2:~# ha-manager status
quorum OK
master pve2 (old timestamp - dead?, Fri Oct 9 18:57:03 2015)
lrm
On 09/10/2015 18:55, Michael Rasmussen wrote:
> On Fri, 9 Oct 2015 18:17:52 +0200
> Gilou wrote:
>
>>
>> Maybe related, I have a lot of that in the logs:
>> Oct 09 18:14:56 pve2 pve-ha-lrm[1224]: watchdog update failed - Broken pipe
>>
> Are you running your nodes in VirtualBox?
>
> When I test in VirtualBox I see the same in the syslog window.
On Fri, 9 Oct 2015 18:17:52 +0200
Gilou wrote:
>
> Maybe related, I have a lot of that in the logs:
> Oct 09 18:14:56 pve2 pve-ha-lrm[1224]: watchdog update failed - Broken pipe
>
Are you running your nodes in VirtualBox?
When I test in VirtualBox I see the same in the syslog window.
Use the name subkey from the cmap keys by default; if it is not set,
fall back to the ring0_addr.
This fixes some issues when we move the corosync communication to
a different network and use a specific address or a new hostname
for that. Without this patch, the nodename in the .members special
file changes.
Adapt some functions to prefer the 'name' subkey over 'ring0_addr'.
Add a function to get a hash representation of the totem corosync
config item.
Signed-off-by: Thomas Lamprecht
---
data/PVE/CLI/pvecm.pm | 51 +--
1 file changed, 45 insertions(+), 6 deletions(-)
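For context, a corosync.conf nodelist entry where the name subkey differs from the ring0_addr might look like the following (hostname and address are illustrative, not taken from the patch):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }
}
```

With the change described above, the nodename in the .members file would follow name (pve1) even when ring0_addr points into a separate corosync network.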
This patch allows configuring RRP (= redundant ring protocol)
at cluster creation time, and also sets the ring 0 and ring 1 addresses
when adding a new node. This helps and fixes some bugs when corosync runs
completely separated on its own network.
Changing rrp configs, or the bindnet addresses automatical
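As a rough sketch, a passive-mode RRP setup with two rings could end up with a totem section like this (the networks are placeholders; the exact config generated by pvecm may differ):

```
totem {
  version: 2
  rrp_mode: passive
  interface {
    ringnumber: 0
    bindnetaddr: 10.10.10.0
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.10.20.0
  }
}
```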
Does not change functionality; makes some code a bit more readable
and reduces code duplication.
Signed-off-by: Thomas Lamprecht
---
data/src/dcdb.c | 6 +++---
data/src/dfsm.c | 21 +
data/src/quorum.c | 23 +++
3 files changed, 23 insertions(+), 27 deletions(-)
Successfully tested the fourth patch; the following tests were done:
-) cluster creation with one corosync network (NO RRP)
-) cluster creation with RRP in passive mode on two different networks
-) addition of nodes to the two setups from above
This series has the intention to simplify and solve som
On 10/09/2015 06:36 PM, Gilou wrote:
On 09/10/2015 18:21, Dietmar Maurer wrote:
So I tried again.. HA doesn't work.
Both resources are now frozen (?), and they didn't restart... Even after
5 minutes...
service vm:102 (pve1, freeze)
service vm:303 (pve1, freeze)
The question is why they are frozen.
On 09/10/2015 18:21, Dietmar Maurer wrote:
>> So I tried again.. HA doesn't work.
>> Both resources are now frozen (?), and they didn't restart... Even after
>> 5 minutes...
>> service vm:102 (pve1, freeze)
>> service vm:303 (pve1, freeze)
>
> The question is why they are frozen. The only action which puts them to 'freeze' is when you shut down a node.
> So I tried again.. HA doesn't work.
> Both resources are now frozen (?), and they didn't restart... Even after
> 5 minutes...
> service vm:102 (pve1, freeze)
> service vm:303 (pve1, freeze)
The question is why they are frozen. The only action which
puts them to 'freeze' is when you shut down a node.
On 02/10/2015 11:59, Gilou wrote:
> Hi,
>
> I just installed PVE 4 Beta2 (43 ?), and played a bit with it.
>
> I do not see the bug I had on 3.4 with snapshot rollbacks of
> Windows 2012: it just works, which is great.
>
> However, I keep on getting an error on different pages: "Too m
fixed, thanks.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
---
src/PVE/LXC/Migrate.pm | 4
1 file changed, 4 insertions(+)
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index 1a51829..58e4ea2 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -113,6 +113,10 @@ sub phase1 {
PVE::LXC::umount_all($vmid, $self->{st
The correct brand name for rtl8139 chipset cards is Realtek, not
Realtec as it was written before.
Signed-off-by: Alvaro Gonzalez [Andor]
---
www/manager/form/NetworkCardSelector.js | 2 +-
www/manager5/form/NetworkCardSelector.js | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff -
applied, thanks!
https://git.proxmox.com/?p=pve-manager.git;a=commit;h=c7f3e2abe2626c27147be84f4bd09a7c91348297
Should sending stats to influxdb (PVE 4.0) be working, or are there still
some missing pieces?
At least it seems pretty undocumented. :-)
Thanks,
Herman
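Not an authoritative answer, but for what it's worth: on PVE 4 the external metric server appears to be configured via /etc/pve/status.cfg; a minimal influxdb entry might look like this (the server address and port are placeholders):

```
influxdb:
        server 192.168.2.43
        port 8089
```

As far as I can tell, PVE ships the stats over UDP, so the InfluxDB side would also need a UDP listener enabled.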
Hi all,
Due to the lack of non-anonymous bind, I solved it by building a
replicating LDAP instance bound only to localhost on each Proxmox node.
This is a pain in the ass and very error-prone, especially on schema
changes, which have to be propagated to all nodes.
On Thu, Oct 8, 2015 at 11:57 AM,