On Thu, Sep 22, 2016 at 08:30:18AM +0200, Stefan Priebe - Profihost AG wrote:
>
> Is that one already in PVE?
>
yes, since pve-qemu-kvm 2.6.1-1
https://git.proxmox.com/?p=pve-qemu-kvm.git;a=commit;h=6e9e99dd1de7f0ea0376c9c16bb49d2cbffe1267
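For context: with cache=unsafe, QEMU drops the guest's flush requests, so librbd would otherwise stay in writethrough mode indefinitely, waiting for a first flush that never arrives. A hedged usage example, assuming an RBD storage named 'rbd-storage' and VM 100 (both placeholders):

    # set cache mode 'unsafe' on an RBD-backed disk of VM 100
    qm set 100 --scsi0 rbd-storage:vm-100-disk-1,cache=unsafe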
> Forwarded Message
> Subject: Re: [Qemu-devel] [PATCH] rbd: disable rbd_cache_writethrough_until_flush for cache=unsafe
and add some more notes
---
Note: IMHO, after this the wiki article could point to this section instead?
pct.adoc | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
diff --git a/pct.adoc b/pct.adoc
index f596d99..14e2d37 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -386,16
---
pct.adoc | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/pct.adoc b/pct.adoc
index 14e2d37..a9c90db 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -418,8 +418,15 @@ achieve the same result.
Device mount points
^^^^^^^^^^^^^^^^^^^
-Similar to bind mounts, device mou
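For illustration, a device mount point entry in a container config would look roughly like this (a sketch; the VMID, device, and target path are placeholders):

    # /etc/pve/lxc/100.conf: pass a host block device into the container
    mp0: /dev/sdb1,mp=/mnt/data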
Is that one already in PVE?
Forwarded Message
Subject: Re: [Qemu-devel] [PATCH] rbd: disable
rbd_cache_writethrough_until_flush for cache=unsafe
Date: Wed, 21 Sep 2016 18:01:44 -0700
From: Josh Durgin
To: Alexandre Derumier , qemu-de...@nongnu.org
CC: ceph-de...
memory leak in usb_xhci_exit
---
...usb-xhci-fix-memory-leak-in-usb_xhci_exit.patch | 32 ++++++++++++++++++++++++++++++++
debian/patches/series | 1 +
2 files changed, 33 insertions(+)
create mode 100644 debian/patches/extra/CVE-2016-7466-usb-xhci-fix-memory-leak-in-usb_xhci_exit.patch
On 20 September 2016 at 07:43, Alexandre DERUMIER wrote:
> One thing that I think would be great
>
> is to be able to have unique VMIDs across different Proxmox clusters,
>
> maybe with a letter prefix, for example (cluster1: vmid a100, cluster2:
> vmid b100).
>
> Like this, it could be poss
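A rough sketch of how such a prefixed ID could be parsed (purely hypothetical; nothing like this exists in PVE today, and the helper name is made up):

    # split a cluster-prefixed ID like 'a100' into cluster letter and numeric vmid
    sub parse_global_vmid {
        my ($id) = @_;
        my ($prefix, $vmid) = $id =~ /^([a-z])(\d+)$/
            or die "not a prefixed vmid: $id\n";
        return ($prefix, int($vmid));
    }

    my ($cluster, $vmid) = parse_global_vmid('a100');    # yields ('a', 100)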
This is a complementary fix for #1105 (Create Linux VM Wizard: use scsi
as default bus/device) and adds some logic to the list of controllers
presented in the ControllerSelector combo box.
Since we can have IDE, SCSI, or Virtio(blk) as a controller during installation,
based on OS detection and persona
applied
the package has been removed from the list of lxcfs dependencies
since 0.12-pve1
---
PVE/API2/APT.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/APT.pm b/PVE/API2/APT.pm
index cf63c4c..3672053 100644
--- a/PVE/API2/APT.pm
+++ b/PVE/API2/APT.pm
@@ -534,7 +534,7 @@ _
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c3a53c9..1244c02 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4605,7 +4605,7 @@ sub vm_commandline {
my $cmd = config_to_command($storecfg, $vm
---
Makefile | 1 +
index.adoc | 1 +
pve-admin-guide.adoc | 2 ++
pve-improve.adoc | 37 +++++++++++++++++++++++++++++++++++++
4 files changed, 41 insertions(+)
create mode 100644 pve-improve.adoc
diff --git a/Makefile b/Makefile
index 0acaedf..a8205b0 100644
--- a/
>>OK, multicast traffic may still be hindered when on the same network with
>>heavy users (e.g. VM storage), even if the network itself is not saturated.
>>A second totem ring through the redundant ring protocol (RRP) in passive
>>mode could boost performance, as it almost doubles the speed
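For reference, a second ring in passive mode would look roughly like this in corosync.conf (a sketch assuming corosync 2.x syntax; the network addresses are placeholders):

    totem {
        version: 2
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 10.10.10.0
        }
        interface {
            ringnumber: 1
            bindnetaddr: 10.10.20.0
        }
    }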
On 09/21/2016 10:51 AM, Alexandre DERUMIER wrote:
Note that I have around 1000 VMs, so I don't know the impact of the number of
messages/s.
A simple tcpdump gives me an average of:
udp/5404: 500 packets/s
udp/5405: 1300 packets/s
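One hedged way to reproduce such a measurement (the interface name is a placeholder): count packets on each corosync port over ten seconds, then divide by ten.

    # rough packets/s per corosync port: count over 10s, divide by 10
    timeout 10 tcpdump -i eth0 -nn 'udp port 5404' 2>/dev/null | wc -l
    timeout 10 tcpdump -i eth0 -nn 'udp port 5405' 2>/dev/null | wc -l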
- Original Message -
From: "Alexandre Derumier"
To: "pve-devel"
Sent:
applied all 3 patches, thanks.
>>Note that I have around 1000 VMs, so I don't know the impact of the number of
>>messages/s.
A simple tcpdump gives me an average of:
udp/5404: 500 packets/s
udp/5405: 1300 packets/s
- Original Message -
From: "Alexandre Derumier"
To: "pve-devel"
Sent: Wednesday, 21 September 2016 09:57:42
Subject:
---
Note: removed redundant information about rootfs and mpX, rest is just moving
and adding headings.
pct.adoc | 43 ++++++++++++++++++++++++++-----------------
1 file changed, 26 insertions(+), 17 deletions(-)
diff --git a/pct.adoc b/pct.adoc
index 0678c58..2b72f96 100644
--- a/pct.adoc
+++ b/
---
pct.adoc | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/pct.adoc b/pct.adoc
index 2b72f96..40028b7 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -455,6 +455,13 @@ they can contain the following setting:
include::pct-network-opts.adoc[]
+Backup and Restore
+------------------
---
pct.adoc | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/pct.adoc b/pct.adoc
index 40028b7..51b15cc 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -458,9 +458,64 @@ include::pct-network-opts.adoc[]
Backup and Restore
------------------
>>Exactly. Corosync is much better/faster and can replicate to more nodes. So I
>>would prefer to keep the better technology (corosync), and improve it.
>>
>>Maybe it is even possible to implement satellite code directly inside
>>pmxcfs...
I agree too. Better to improve the wheel than recreate it.
>>I would like to know how long it takes to synchronize data
>>between 3 datacenters (each about 16 nodes). Are there any
>>timing guarantees? And how does it handle quorum if the connection
>>between datacenters is broken?
From what I read, they use one quorum in each DC.
https://www.consul.io/docs/inte
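For what it's worth, consul runs an independent server quorum per datacenter, joined over WAN gossip. A minimal per-DC server config could look roughly like this (a sketch; datacenter names, hostnames, and server counts are placeholders):

    /etc/consul.d/server.json:
    {
      "datacenter": "dc1",
      "server": true,
      "bootstrap_expect": 3,
      "retry_join_wan": ["consul.dc2.example.com", "consul.dc3.example.com"]
    }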
> Note that for scaling, zookeeper, consul, ... have some kind of master nodes for
> the quorum, and client nodes (same as the corosync satellites).
> I don't think it's technically possible to scale with full-mesh master nodes
> with lots of nodes.
Exactly. Corosync is much better/faster and can repl
>>@Alexandre, you say that with 16 nodes the cluster is quite at its maximum;
>>can I get some more info from you, as I currently do not have the
>>hardware to test this :)
>>
>>Do you use IGMP snooping/queriers?
>>On which network does corosync communicate, an independent one? And how fast
>>is it?
> On Wed, 21 Sep 2016 01:45:18 +0200
> Michael Rasmussen wrote:
>
> > https://github.com/hashicorp/consul
> >
> Forgot to mention that consul supports multiple clusters and/or multi-center
> clusters out of the box.
I would like to know how long it takes to synchronize data
between 3 datacenters (each about 16 nodes)...
> About corosync scaling,
> I found a discussion about implementation of satellites nodes
>
> http://discuss.corosync.narkive.com/Uh97uGyd/rfc-extending-corosync-to-high-node-counts
Sure, such things can extend the node count of a single cluster. But I am not
100% sure if that solves all problems.
On 09/21/2016 08:50 AM, Alexandre DERUMIER wrote:
> Forgot to mention that consul supports multiple clusters and/or multi-center
> clusters out of the box.
yes, I read the doc yesterday. It seems very interesting.
Most of the work would be to replace pmxcfs with the consul KV store. I have seen some
consul FUSE
> On September 21, 2016 at 1:25 AM Alexandre DERUMIER
> wrote:
>
>
> Another question about my first idea (replacing corosync):
>
> is it really difficult to replace corosync with something else?
>
> Sheepdog storage for example, have support for corosync and zookeeper.
AFAIR they always talk