Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Dietmar Maurer
> On April 21, 2017 at 8:04 AM Alexandre DERUMIER wrote: > > > >>ip=could be an ip of the cluster. > >>(But I think we need to connect first to this ip, and find where the vm is > >>located (in case the vm is moving), and reconnect to the vm node. > >>Don't know how to manage this first ip con

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Dietmar Maurer
> > Maybe it would be easier to define replication on the target VM only? For > > example, > > add something like: > > > > replication-source: ip=1.2.3.4,sourcevmid=123,storage=mystorage > > > > ??? > > > How about introducing a cluster ip (floating)? This could then also be > used for mana
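A minimal sketch of how such a replication-source property string could be handled on the target side, assuming the simple key=value,... format from the example above; nothing here is an existing PVE API, and the accepted keys are just the three from the proposal:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Parse a hypothetical "replication-source" property string of the form
    # proposed above: ip=1.2.3.4,sourcevmid=123,storage=mystorage
    sub parse_replication_source {
        my ($value) = @_;
        my $res = {};
        foreach my $kv (split(/,/, $value)) {
            my ($k, $v) = split(/=/, $kv, 2);
            die "malformed replication-source option '$kv'\n"
                if !defined($v) || $k !~ /^(ip|sourcevmid|storage)$/;
            $res->{$k} = $v;
        }
        die "replication-source needs ip, sourcevmid and storage\n"
            if grep { !defined($res->{$_}) } qw(ip sourcevmid storage);
        return $res;
    }

    my $cfg = parse_replication_source('ip=1.2.3.4,sourcevmid=123,storage=mystorage');
    print "pull from $cfg->{ip}, vmid $cfg->{sourcevmid}, storage $cfg->{storage}\n";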

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Michael Rasmussen
On Fri, 21 Apr 2017 08:04:40 +0200 (CEST) Alexandre DERUMIER wrote: > > Maybe it would be easier to define replication on the target VM only? For > example, > add something like: > > replication-source: ip=1.2.3.4,sourcevmid=123,storage=mystorage > > ??? > How about introducing a cluster

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Alexandre DERUMIER
>>ip=could be an ip of the cluster. >>(But I think we need to connect first to this ip, and find where the vm is >>located (in case the vm is moving), and reconnect to the vm node. >>Don't know how to manage this first ip connection? (do we allow defining multiple ips if 1 host is down?) Maybe

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Alexandre DERUMIER
>>Maybe it would be easier to define replication on the target VM only? For >>example, >>add something like: >> >>replication-source: ip=1.2.3.4,sourcevmid=123,storage=mystorage , yes, indeed ! (and run the cron on target cluster) ip=could be an ip of the cluster. (But I think we need to co

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Dietmar Maurer
> maybe: > > set up the first replication with an api call to the target storage > > - create remotecopy vmid hostnameoftargetnode storeidoftargetnode. > > in target vmid.conf, store something like: sourcevmid:xxx and maybe > sourceclusterid:xxx (could be defined somewhere in /etc/pve/... with a

Re: [pve-devel] [RFC qemu-server 0/4] Fix live-migration with local disks for Qemu 2.9

2017-04-20 Thread Alexandre DERUMIER
I have tested your patches, works fine for me. Thanks ! (I have sent an extra patch to fix --targetstorage, currently we check availability on the source node, and if the target storeid is not present on the source node, it's not working)

[pve-devel] [PATCH] live storage migration : fix check of target storage availability

2017-04-20 Thread Alexandre Derumier
If we define a different target storeid for the remote node, and that storage is not available on the source node, the check fails. Signed-off-by: Alexandre Derumier --- PVE/API2/Qemu.pm | 7 ++- PVE/QemuMigrate.pm | 6 -- 2 files changed, 10 insertions(+), 3 deletions(-) diff --git a/PVE/API2/Qemu.pm b/PVE/AP
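A rough illustration of the idea behind the fix: validate the requested --targetstorage against the storages visible on the *target* node rather than the source node. The availability map and helper below are invented for the example and are not the PVE::Storage API:

    use strict;
    use warnings;

    # Hypothetical availability map: storeid => nodes that can see it.
    # In PVE this information would come from storage.cfg ('nodes' property).
    my $storage_nodes = {
        'local-zfs' => { node1 => 1, node2 => 1 },
        'fast-nvme' => { node2 => 1 },   # only exists on the target node
    };

    sub check_storage_on_node {
        my ($storeid, $node) = @_;
        die "storage '$storeid' is not available on node '$node'\n"
            if !$storage_nodes->{$storeid} || !$storage_nodes->{$storeid}->{$node};
    }

    # wrong: checking the target storeid on the source node rejects 'fast-nvme'
    # right: check it on the node we migrate to
    check_storage_on_node('fast-nvme', 'node2');
    print "target storage is usable on the target node\n";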

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Alexandre DERUMIER
>>then on the source cluster, define a group of host(s) of the target cluster + storage. >>maybe add login/password or some kind of special token (we could define it >>in the target cluster) >>something like: >>remotesharedstorage: targetstoreid, host=targetnode1,targetnode2, >>token=x (like this

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Alexandre DERUMIER
also, the target vmid can be different from the source vmid. so, I think when we initiate the first replication, we generate a new vmid on the target cluster, then we store it in the source vmid.conf

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Alexandre DERUMIER
@mir > Could a drive-mirror not use an SSH tunnel? yes, we are already doing it for local disk live migration. (drive-mirror to an nbd target) >>I do not talk about the technical implementation. I am >>just unsure how to store the configuration for the >>remote cluster/storage. I think we can simp
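As an aside, the tunnel part can be sketched like this: forward a local port to the NBD export on the remote node over ssh and point the mirror at localhost. Host name, ports and the drive name are placeholders; this is only a sketch of the idea, not the code path qemu-server actually uses:

    use strict;
    use warnings;

    # Forward local port 60000 to the NBD server listening on the remote node.
    # A drive-mirror job could then target nbd://127.0.0.1:60000/drive-scsi0.
    my $remote     = 'root@target-node.example.com';   # placeholder
    my $local_port = 60000;
    my $nbd_port   = 10809;                            # default NBD port

    my @cmd = ('ssh', '-N', '-o', 'ExitOnForwardFailure=yes',
               '-L', "$local_port:127.0.0.1:$nbd_port", $remote);

    my $pid = fork() // die "fork failed: $!\n";
    if (!$pid) {
        exec(@cmd) or die "exec ssh failed: $!\n";
    }
    print "ssh tunnel running with pid $pid; mirror to nbd://127.0.0.1:$local_port/...\n";
    # ... start the drive-mirror here, then kill('TERM', $pid) once it has converged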

[pve-devel] megaraid interrupts

2017-04-20 Thread Waschbüsch IT-Services GmbH
Hi all, In case this is helpful to anyone else: I just installed PVE 4.4 on a box with a megaraid controller (9261-8i). For some reason, the device's interrupts were not distributed among CPU cores. After digging a little, I found that the version of the megaraid driver that comes with the curr
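A quick way to check whether the controller's interrupts are spread across cores is to sum the per-CPU columns in /proc/interrupts. A small sketch, assuming the megaraid_sas vectors show up under the name 'megasas' in /proc/interrupts (adjust the pattern if the driver names them differently):

    use strict;
    use warnings;

    # Sum per-CPU interrupt counts for megaraid_sas ("megasas") vectors.
    open(my $fh, '<', '/proc/interrupts') or die "cannot read /proc/interrupts: $!\n";
    my @cpu_totals;
    while (my $line = <$fh>) {
        next if $line !~ /megasas/;
        my @fields = split(/\s+/, $line);
        shift @fields if $fields[0] eq '';   # drop empty field from leading spaces
        shift @fields;                       # drop the "NN:" IRQ column
        for my $i (0 .. $#fields) {
            last if $fields[$i] !~ /^\d+$/;  # stop at the chip/driver name columns
            $cpu_totals[$i] = ($cpu_totals[$i] // 0) + $fields[$i];
        }
    }
    close($fh);
    printf "CPU%-3d %12d\n", $_, $cpu_totals[$_] // 0 for 0 .. $#cpu_totals;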

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Michael Rasmussen
On Thu, 20 Apr 2017 21:44:55 +0200 (CEST) Dietmar Maurer wrote: > > I do not talk about the technical implementation. I am > just unsure how to store the configuration for the > remote cluster/storage. > Does it need to be stored? Could it not be received on demand through /cluster/config --

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Dietmar Maurer
> > But IMHO it is still unclear how to configure multi-cluster > > replication? Ideas welcome ... > > > Could a drive-mirror not use an SSH tunnel? I do not talk about the technical implementation. I am just unsure how to store the configuration for the remote cluster/storage.

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Michael Rasmussen
On Thu, 20 Apr 2017 19:49:42 +0200 (CEST) Dietmar Maurer wrote: > > But IMHO it is still unclear how to configure multi-cluster > replication? Ideas welcome ... > Could a drive-mirror not use an SSH tunnel?

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Dietmar Maurer
> >>this is planned, but first we will make a VM/CT-centric storage > >>replication (cluster-internal replica). But IMHO it is still unclear how to configure multi-cluster replication? Ideas welcome ...

[pve-devel] openvswitch 2.7 stretch, add dpdk support ?

2017-04-20 Thread Alexandre DERUMIER
Hi, Could we add support for dpdk for ovs 2.7 on debian stretch ? dpdk 16 is available in the stretch packages. also fedora has just added dpdk support by default, https://mail.openvswitch.org/pipermail/ovs-discuss/2017-February/043797.html and support for ovs 2.7 is no longer experimental

Re: [pve-devel] RFC V2 Storage Replica

2017-04-20 Thread Alexandre DERUMIER
>>Hi Alexandre, >>this is planned, but first we will make a VM/CT-centric storage >>replication (cluster-internal replica). Ok, thanks !

Re: [pve-devel] [RFC_V2 pve-storage 6/8] This patch will include storage asynchronous replication.

2017-04-20 Thread Wolfgang Bumiller
On Wed, Apr 19, 2017 at 11:52:07AM +0200, Wolfgang Link wrote: > It is possible to synchronise a volume to another node in a defined interval. > So if a node fails there will be a copy of the volumes from a VM > on another node. > With this copy it is possible to start the VM on this node. > ---

Re: [pve-devel] [RFC_V2 pve-storage 1/8] Include new storage function volume_send.

2017-04-20 Thread Wolfgang Bumiller
On Wed, Apr 19, 2017 at 11:52:02AM +0200, Wolfgang Link wrote: > If the storage backend supports import and export > we can send the content to a remote host. > --- > PVE/Storage.pm | 18 > PVE/Storage/Plugin.pm| 8 > PVE/Storage/ZFSPlugin.pm |
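To illustrate the export/import direction such a volume_send is meant to cover, here is a stripped-down sketch that pipes a ZFS snapshot (optionally incremental) to a remote host over ssh. It only shows the shape of the idea, not the plugin code from this series; dataset and host names are made up:

    use strict;
    use warnings;

    # Send $snap of $dataset to $target_host, incrementally on top of $base_snap
    # if one is given.
    sub zfs_send_to_remote {
        my ($dataset, $snap, $target_host, $target_dataset, $base_snap) = @_;

        my @send = ('zfs', 'send');
        push @send, '-i', "$dataset\@$base_snap" if defined($base_snap);
        push @send, "$dataset\@$snap";

        my @recv = ('ssh', "root\@$target_host", 'zfs', 'recv', '-F', $target_dataset);

        # naive shell pipe for the sketch; real code would use run_command with pipes
        my $cmd = join(' ', @send) . ' | ' . join(' ', @recv);
        system($cmd) == 0 or die "replication of $dataset\@$snap failed\n";
    }

    zfs_send_to_remote('rpool/data/vm-123-disk-1', 'replica_2017-04-20',
                       'target-node', 'rpool/data/vm-123-disk-1', undef);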

Re: [pve-devel] [RFC_V2 pve-storage 2/8] Include new storage function volume_snapshot_list.

2017-04-20 Thread Wolfgang Bumiller
On Wed, Apr 19, 2017 at 11:52:03AM +0200, Wolfgang Link wrote: > Returns a list of snapshots (youngest snap first) from a given volid. > It is possible to use a prefix to filter the list. > --- > PVE/Storage.pm | 17 + > PVE/Storage/Plugin.pm| 9 + >
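The ZFS flavour of such a snapshot list can be sketched as follows (a standalone example, not the patch itself; it assumes 'zfs list' output with one snapshot name per line and flips the creation order to get youngest first):

    use strict;
    use warnings;

    # List snapshots of $dataset, youngest first, optionally filtered by $prefix.
    sub zfs_snapshot_list {
        my ($dataset, $prefix) = @_;

        my @out = `zfs list -H -t snapshot -o name -s creation -d 1 $dataset`;
        die "zfs list failed\n" if $?;

        my @snaps;
        foreach my $line (@out) {
            chomp $line;
            next if $line !~ /^\Q$dataset\E\@(.+)$/;
            my $snapname = $1;
            next if defined($prefix) && $snapname !~ /^\Q$prefix\E/;
            push @snaps, $snapname;
        }
        return [ reverse @snaps ];   # -s creation is oldest first, so flip it
    }

    my $snaps = zfs_snapshot_list('rpool/data/vm-123-disk-1', 'replica_');
    print "$_\n" for @$snaps;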

Re: [pve-devel] pveceph : disabling cephx auth + debug when we init ceph.conf ?

2017-04-20 Thread Fabian Grünbichler
On Thu, Apr 20, 2017 at 08:02:48AM +0200, Alexandre DERUMIER wrote: > Hi, > > to have good performance with ceph, there are 2 possible boosts: > Maybe we could add them as a command option in ceph init ? > > > 1)disabling cephx auth. > > [global] > > auth_cluster_required = none > auth_service_re
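For reference, the two tweaks mentioned above amount to ceph.conf [global] entries roughly like the following (option names as documented by upstream Ceph; whether pveceph init should write them, and behind which command option, is exactly the open question):

    [global]
        # 1) disable cephx authentication (only acceptable on a trusted network)
        auth_cluster_required = none
        auth_service_required = none
        auth_client_required = none

        # 2) silence most debug logging
        debug_ms = 0/0
        debug_osd = 0/0
        debug_auth = 0/0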

[pve-devel] applied: [PATCH common] cpuset: remove max-cpu range check

2017-04-20 Thread Wolfgang Bumiller
applied On Thu, Apr 20, 2017 at 10:50:52AM +0200, Wolfgang Bumiller wrote: > CpuSets usually come from (or are built using) values read > from cgroups anyway. (Eg. for container balancing we only > use ids found in lxc/cpuset.effective_cpus.) > --- > Requires the accompanying pve-manager patch. > >

[pve-devel] applied: [PATCH manager] statd: rebalance: don't use CpuSet::max_cpuids

2017-04-20 Thread Wolfgang Bumiller
Applied with counts fixed up:
    my $max_cpuid = $allowed_cpus[-1];        # removed +1 here
    my @cpu_ctcount = (0) x ($max_cpuid+1);   # added it here
On Thu, Apr 20, 2017 at 10:50:51AM +0200, Wolfgang Bumiller wrote: > We're already limiting CPUs to lxc/cpuset.effective_cpus, > so let's us
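The fixup is the usual highest-id versus element-count distinction; a tiny self-contained illustration, with variable names borrowed from the lines quoted above:

    use strict;
    use warnings;

    my @allowed_cpus = (0, 1, 2, 5, 7);         # e.g. ids from cpuset.effective_cpus

    my $max_cpuid   = $allowed_cpus[-1];        # highest id, here 7
    my @cpu_ctcount = (0) x ($max_cpuid + 1);   # need 8 slots so index 7 is valid

    $cpu_ctcount[$_]++ for @allowed_cpus;       # indexing by cpuid stays in range
    print scalar(@cpu_ctcount), " slots, highest id $max_cpuid\n";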

[pve-devel] [PATCH manager] fix #1359: change to vm when double clicking in pool member view

2017-04-20 Thread Dominik Csapak
like in the resource grid Signed-off-by: Dominik Csapak --- www/manager6/grid/PoolMembers.js | 4 1 file changed, 4 insertions(+) diff --git a/www/manager6/grid/PoolMembers.js b/www/manager6/grid/PoolMembers.js index ce97b59e..5d020f7b 100644 --- a/www/manager6/grid/PoolMembers.js +++ b/ww

[pve-devel] [PATCH common] cpuset: remove max-cpu range check

2017-04-20 Thread Wolfgang Bumiller
CpuSets usually come from (or are built using) values read from cgroups anyway. (Eg. for container balancing we only use ids found in lxc/cpuset.effective_cpus.) --- Requires the accompanying pve-manager patch. src/PVE/CpuSet.pm | 21 + 1 file changed, 1 insertion(+), 20 deletio

[pve-devel] [PATCH manager] statd: rebalance: don't use CpuSet::max_cpuids

2017-04-20 Thread Wolfgang Bumiller
We're already limiting CPUs to lxc/cpuset.effective_cpus, so let's use the highest cpuid from that set as a maximum to initialize the container count array. --- PVE/Service/pvestatd.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestat