> On April 21, 2017 at 8:04 AM Alexandre DERUMIER wrote:
>
>
> >>ip=could be an ip of the cluster.
> >>(But I think we need to connect first to this ip, and find where the vm is
> >>located (in case the vm is moving), and reconnect to the vm node.
> >>Don't know how to manage this first ip connect? (do we allow to define
> >>multiple ips if 1 host is down?)
> > Maybe it would be easier to define replication on the target VM only? For
> > example,
> > add something like:
> >
> > replication-source: ip=1.2.3.4,sourcevmid=123,storage=mystorage
> >
> > ???
> >
> How about introducing a cluster ip (floating)? This could then also be
> used for management.
On Fri, 21 Apr 2017 08:04:40 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> Maybe it would be easier to define replication on the target VM only? For
> example,
> add something like:
>
> replication-source: ip=1.2.3.4,sourcevmid=123,storage=mystorage
>
> ???
>
How about introducing a cluster ip (floating)? This could then also be used for
management.
>>ip=could be an ip of the cluster.
>>(But I think we need to connect first to this ip, and find where the vm is
>>located (in case the vm is moving), and reconnect to the vm node.
>>Don't know how to manage this first ip connect? (do we allow to define
>>multiple ips if 1 host is down?)
Maybe it would be easier to define replication on the target VM only?
>>Maybe it would be easier to define replication on the target VM only? For
>>example,
>>add something like:
>>
>>replication-source: ip=1.2.3.4,sourcevmid=123,storage=mystorage
Yes, indeed! (and run the cron on the target cluster)
ip=could be an ip of the cluster.
(But I think we need to connect first to this ip, and find where the vm is
located (in case the vm is moving), and reconnect to the vm node.
Don't know how to manage this first ip connect? (do we allow to define
multiple ips if 1 host is down?)
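For illustration, the proposal above would keep all replication state on the
receiving side, so a target VM config would only need one extra line (the key
name and fields are just the ones floated in this thread, nothing of this is
implemented):

    # /etc/pve/qemu-server/456.conf on the target cluster (sketch only)
    scsi0: mystorage:vm-456-disk-1,size=32G
    replication-source: ip=1.2.3.4,sourcevmid=123,storage=mystorage

A cron job on the target cluster could then walk all configs containing such an
entry and pull the next snapshot from ip/sourcevmid, matching the "run the cron
on the target cluster" remark.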
> maybe:
>
> setup the first replication with an api call to target storage
>
> - create remotecopy vmid hostnameoftargetnode storeidoftargetnode.
>
> in the target vmid.conf, store something like: sourcevmid: xxx and maybe
> sourceclusterid: xxx (could be defined somewhere in /etc/pve/... with a
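A rough sketch of what that flow could produce, using only the names suggested
above (remotecopy, sourcevmid and sourceclusterid are hypothetical, none of
this exists yet):

    # 1) one-time call against the target cluster to set up the copy
    create remotecopy 123 targetnode1 targetstore1

    # 2) entries this would leave in the target vmid.conf
    sourcevmid: 123
    sourceclusterid: clusterA    # clusterA defined somewhere under /etc/pve/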
I have tested your patches, works fine for me. Thanks!
(I have sent an extra patch to fix --targetstorage; currently we check
availability on the source node, and if the target storeid is not present on
the source node, it's not working.)
- Original Message -
From: "Fabian Grünbichler"
To: "pve-devel"
Cc:
if we define a different target storeid for the remote node,
and that storage is not available on the source node
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 7 ++-
PVE/QemuMigrate.pm | 6 --
2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
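The idea of the fix, as a minimal sketch rather than the actual patch (it
assumes the existing PVE::Storage::storage_check_node() helper; if the real
helper differs, substitute accordingly):

    my $storecfg = PVE::Storage::config();
    if (my $targetstorage = $param->{targetstorage}) {
        # a storage passed via --targetstorage only has to exist on the
        # *target* node, so check it there instead of on the source node
        PVE::Storage::storage_check_node($storecfg, $targetstorage, $target);
    }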
>>then on source cluster, define group of host(s) of target cluster + storage.
>>maybe add login/password or some kind of special token (we could define it
>>in target cluster)
>>something like:
>>remotesharedstorage: targetstoreid, host=targetnode1,targetnode2,
>>token=x (like this
also, the target vmid can be different from the source vmid.
so, I think when we initiate the first replication,
we generate a new vmid on the target cluster,
then we store it in the source vmid.conf
- Original Message -
From: "aderumier"
To: "dietmar", "pve-devel"
Cc: "datanom.net"
Sent: Friday, 21 April
@mir
> Could a drive-mirror not use a SSH tunnel?
yes, we are already doing it for local disk live migration (drive-mirror to an
NBD target; rough sketch below).
>>I am not talking about the technical implementation. I am
>>just unsure how to store the configuration for the
>>remote cluster/storage.
I think we can simp
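Roughly how the existing local-disk live migration already does it, heavily
simplified (QMP shown by hand; socket paths and device ids are made up):

    # target VM: export the destination disk over NBD on a unix socket
    { "execute": "nbd-server-start",
      "arguments": { "addr": { "type": "unix",
                               "data": { "path": "/run/qemu-server/456_nbd.sock" } } } }
    { "execute": "nbd-server-add",
      "arguments": { "device": "drive-scsi0", "writable": true } }

    # forward that socket through the ssh tunnel the migration opens anyway
    ssh -N -L /run/qemu-server/456_nbd.local:/run/qemu-server/456_nbd.sock root@target

    # source VM: mirror the local disk into the forwarded NBD export
    { "execute": "drive-mirror",
      "arguments": { "device": "drive-scsi0", "mode": "existing", "sync": "full",
                     "format": "raw",
                     "target": "nbd:unix:/run/qemu-server/456_nbd.local:exportname=drive-scsi0" } }

A cross-cluster replica could reuse the same building blocks; the open question
in this thread is only where the remote cluster's endpoints/credentials live.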
Hi all,
In case this is helpful to anyone else:
I just installed PVE 4.4 on a box with a megaraid controller (9261-8i).
For some reason, the device's interrupts were not distributed among CPU cores.
After digging a little, I found that the version of the megaraid driver that
comes with the curr
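In case someone wants to check for the same symptom, it shows up directly in
/proc/interrupts (all counts piling up in a single CPU column), and modinfo
tells you which driver build is loaded:

    grep -i megasas /proc/interrupts        # one column per CPU core
    modinfo megaraid_sas | grep -i '^version'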
On Thu, 20 Apr 2017 21:44:55 +0200 (CEST)
Dietmar Maurer wrote:
>
> I am not talking about the technical implementation. I am
> just unsure how to store the configuration for the
> remote cluster/storage.
>
Does it need to be stored? Could it not be received on demand
through /cluster/config
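For example, the node list is already exposed over the API (assuming the usual
/cluster/config/nodes index endpoint):

    pvesh get /cluster/config/nodes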
> > But IMHO it is still unclear how to configure multi-cluster
> > replication? Ideas welcome ...
> >
> Could a drive-mirror not use a SSH tunnel?
I am not talking about the technical implementation. I am
just unsure how to store the configuration for the
remote cluster/storage.
On Thu, 20 Apr 2017 19:49:42 +0200 (CEST)
Dietmar Maurer wrote:
>
> But IMHO it is still unclear how to configure multi-cluster
> replication? Ideas welcome ...
>
Could a drive-mirror not use a SSH tunnel?
> >>this is planned, but first we will make VM/CT centric storage
> >>replication as a cluster-internal replica.
But IMHO it is still unclear how to configure multi-cluster
replication? Ideas welcome ...
Hi,
Could we add support for DPDK for OVS 2.7 on Debian Stretch?
DPDK 16 is available in the Stretch packages.
Also, Fedora has just added DPDK support by default:
https://mail.openvswitch.org/pipermail/ovs-discuss/2017-February/043797.html
and DPDK support in OVS 2.7 is no longer experimental.
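A rough sketch of what enabling it could look like on Stretch, assuming Debian
ships the usual openvswitch-switch-dpdk split (package names and paths may
differ):

    apt-get install openvswitch-switch-dpdk dpdk
    update-alternatives --set ovs-vswitchd \
        /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
    # since OVS 2.7, DPDK is configured through ovsdb instead of --dpdk options
    ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"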
>>Hi Alexandre,
>>this is planned, but first we will make VM/CT centric storage
>>replication as a cluster-internal replica.
Ok, thanks !
- Original Message -
From: "Wolfgang Link"
To: "pve-devel"
Sent: Thursday, 20 April 2017 07:42:04
Subject: Re: [pve-devel] RFC V2 Storage Replica
Hi Alexandre,
On Wed, Apr 19, 2017 at 11:52:07AM +0200, Wolfgang Link wrote:
> It is possible to synchronise a volume to another node at a defined interval.
> So if a node fails there will be a copy of the volumes of a VM
> on another node.
> With this copy it is possible to start the VM on this node.
> ---
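Conceptually this is the classic incremental send/receive loop, shown here with
manual ZFS commands just to illustrate the mechanism (not the actual
implementation):

    # first run: full copy of the initial snapshot
    zfs snapshot rpool/data/vm-100-disk-1@rep_1
    zfs send rpool/data/vm-100-disk-1@rep_1 | \
        ssh root@othernode zfs recv -F rpool/data/vm-100-disk-1

    # every interval after that: send only the delta since the last common snapshot
    zfs snapshot rpool/data/vm-100-disk-1@rep_2
    zfs send -i @rep_1 rpool/data/vm-100-disk-1@rep_2 | \
        ssh root@othernode zfs recv -F rpool/data/vm-100-disk-1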
On Wed, Apr 19, 2017 at 11:52:02AM +0200, Wolfgang Link wrote:
> If the storage backend supports import and export
> we can send the content to a remote host.
> ---
> PVE/Storage.pm | 18
> PVE/Storage/Plugin.pm| 8
> PVE/Storage/ZFSPlugin.pm |
On Wed, Apr 19, 2017 at 11:52:03AM +0200, Wolfgang Link wrote:
> Returns a list of snapshots (youngest snap first) from a given volid.
> It is possible to use a prefix to filter the list.
> ---
> PVE/Storage.pm | 17 +
> PVE/Storage/Plugin.pm| 9 +
>
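For ZFS that listing essentially boils down to something like this (youngest
first via -S creation, prefix filter on the snapshot name):

    zfs list -H -d 1 -t snapshot -o name -S creation \
        rpool/data/vm-100-disk-1 | grep '@rep_'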
On Thu, Apr 20, 2017 at 08:02:48AM +0200, Alexandre DERUMIER wrote:
> Hi,
>
> to have good performance with ceph, there are 2 possible boosts:
> Maybe we could add them as command options in ceph init?
>
>
> 1)disabling cephx auth.
>
> [global]
>
> auth_cluster_required = none
> auth_service_required = none
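For reference, turning cephx off completely means setting all three options
(they must match on every node, and the daemons need a restart to pick them up):

    [global]
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none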
applied
On Thu, Apr 20, 2017 at 10:50:52AM +0200, Wolfgang Bumiller wrote:
> CpuSets usually come from (or are built using) values read
> from cgroups anyway. (Eg. for container balancing we only
> use ids found in lxc/cpuset.effective_cpus.)
> ---
> Requires the accompanying pve-manager patch.
>
>
Applied with counts fixed up (cpu ids are used directly as array indices, so
covering ids 0..$max_cpuid needs $max_cpuid+1 slots):
my $max_cpuid = $allowed_cpus[-1]; # removed +1 here
my @cpu_ctcount = (0) x ($max_cpuid+1); # added it here
On Thu, Apr 20, 2017 at 10:50:51AM +0200, Wolfgang Bumiller wrote:
> We're already limiting CPUs to lxc/cpuset.effective_cpus,
> so let's use the highest cpuid from that set as a maximum to
> initialize the container count array.
like in the resource grid
Signed-off-by: Dominik Csapak
---
www/manager6/grid/PoolMembers.js | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/www/manager6/grid/PoolMembers.js b/www/manager6/grid/PoolMembers.js
index ce97b59e..5d020f7b 100644
--- a/www/manager6/grid/PoolMembers.js
+++ b/www/manager6/grid/PoolMembers.js
CpuSets usually come from (or are built using) values read
from cgroups anyway. (Eg. for container balancing we only
use ids found in lxc/cpuset.effective_cpus.)
---
Requires the accompanying pve-manager patch.
src/PVE/CpuSet.pm | 21 +
1 file changed, 1 insertion(+), 20 deletions(-)
We're already limiting CPUs to lxc/cpuset.effective_cpus,
so let's use the highest cpuid from that set as a maximum to
initialize the container count array.
---
PVE/Service/pvestatd.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
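The set both patches rely on can be inspected directly on a cgroup-v1 host (the
exact path depends on the cgroup layout):

    cat /sys/fs/cgroup/cpuset/lxc/cpuset.effective_cpus
    # prints a range list such as 0-31; its highest id bounds the count array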