We run Ceph and we've been very happy with it.
From: Engelmann Florian
Sent: Tuesday, April 4, 2017 10:28 AM
To: [email protected]
Subject: Re: KVM + CLVM and Snapshots
Hi,
thank you both for your feedback! So we will drop CLVM from our test matrix.
A ScaleIO driver would be the perfect solution, but I guess that won't happen...
too time consuming :(
Ceph looks promising right now - lower performance than ScaleIO, but well supported
and integrated into ACS.
All the best,
Florian
Florian,
It does work, but can be challenging when it comes to stability. We still have
2 zones built with CLVM and we're actively migrating away from it. The major
problem is that it complicates host configurations, and you really need a quorum
disk in order to fence correctly. It
Hi Florian,
I’ve spent quite a bit of time getting CLVM to work with KVM (in this case on
CentOS 6 and 7) - and my experience is that it's not fit for purpose, suffering a lot of
stability issues. I do know there are others in the community who may have got
this to work in the past, so interested to
Hi,
As the CloudStack documentation explains, CLVM is not supported with CloudStack:
"The use of the Cluster Logical Volume Manager (CLVM) for KVM is not officially
supported with CloudStack."
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.9/storage_setup.html
I guess I need to activate it first ‘cause it’s CLVM. :-)
lvchange -ay and then deactivate it with lvchange -an.
Will try now. Hopefully someone somewhere sometime might find this post useful.
Regards,
F.
On 09 Dec 2015, at 13:56, France wrote:
Hi guys,
I want to check corrupted VHD on a XenServer cluster with LVM over ISCSI.
This is the VHD I want to check:
[root@x3 ~]# lvs | grep 1a240d45-ee0a-4c30-809b-3114dfaf85ba
  VHD-1a240d45-ee0a-4c30-809b-3114dfaf85ba VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586 -ri--- 136.00M
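For anyone finding this thread later, the activate/check/deactivate sequence discussed here can be sketched as below. This is only a sketch: the VG/LV names are the ones from the lvs output above, vhd-util is XenServer's VHD consistency checker, and dry_run just prints the commands so nothing is touched until you swap it out.

```shell
#!/bin/sh
# Sketch only: check a VHD that lives on an (inactive) LVM-over-iSCSI LV.
# VG/LV names taken from the lvs output in this thread.
VG="VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586"
LV="VHD-1a240d45-ee0a-4c30-809b-3114dfaf85ba"

dry_run() { echo "+ $*"; }   # replace the body with "$@" to actually run

check_vhd() {
    dry_run lvchange -ay "$VG/$LV"              # activate: creates /dev/$VG/$LV
    dry_run vhd-util check -n "/dev/$VG/$LV"    # check the VHD chain
    dry_run lvchange -an "$VG/$LV"              # deactivate again when done
}

check_vhd
```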
Jeremy,
Secondary Storage has to be NFS so the easiest way to achieve this would be to
create a VM which can present the CLVM storage via NFS.
Regards
Geoff Higginbottom
CTO / Cloud Architect
D: +44 20 3603 0542 | S: +44 20 3603 0540
| M: +447968161581
[email protected]
I was curious whether I could use CLVM as secondary storage.
I want to use the same clustered storage solution that I am using for primary.
How would I go about adding that option into the CloudStack UI?
Is it possible to do it via CloudMonkey? Could I manually add it via the DB?
jeremy
Hi
I'm trying to setup CloudStack with KVM-HA with iSCSI Shared and CLVM
option.
As per the documentation, CLVM is accepted as primary storage.
However, a quick search on how to configure CLVM for KVM (on CentOS 6.x) is
not giving proper results.
For CLVM I hope cman (or HA Cluster is must fo
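Not an authoritative answer, but the stock CentOS 6 pieces are roughly these: cman for cluster membership/fencing and clvmd for cluster-wide LVM locking, as suspected above. dry_run only prints the commands, and the VG name and device are placeholders:

```shell
#!/bin/sh
# Sketch of the usual CentOS 6 CLVM prerequisites. Package and service
# names are the stock RHEL6 ones; vg_primary and /dev/sdb are placeholders.
dry_run() { echo "+ $*"; }   # replace the body with "$@" to actually run

setup_clvm() {
    dry_run yum install -y cman lvm2-cluster fence-agents
    dry_run lvmconf --enable-cluster          # sets locking_type = 3 in lvm.conf
    dry_run service cman start                # cluster membership + fencing
    dry_run service clvmd start               # cluster-wide LVM locking
    dry_run chkconfig cman on
    dry_run chkconfig clvmd on
    dry_run vgcreate -cy vg_primary /dev/sdb  # -cy = clustered volume group
}

setup_clvm
```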
Hi,
Could anyone help explain the difference between using SharedMountPoint/OCFS2
and CLVM as primary storage for CloudStack clusters?
In SharedMountPoint/OCFS2 mode, when OCFS2 has a problem on a host, the
CloudStack manager does not know about the situation, and the VMs on this host
cannot be migrated to other
Hi all,
Our team is building CloudStack with iSCSI/CLVM as primary storage.
In this environment a virtual machine cannot be given the HA option; the
option stays dimmed.
I know that HA for CLVM storage is not officially supported by the CloudStack
documentation, but my question would be: has anybody
Hi devs,
I think I have found a bug affecting primary storage migration when
the source and destination volumes are on CLVM storage.
So far I could follow the trail:
- secondary storage is mounted on the source host with a new template id
- cp is issued to copy the /dev/volgroup/uuid CLVM LV
On 28.04.2014 23:58, Salvatore Sciacco wrote:
Hello Lucian,
did you have any chance to try to reproduce my setup? :-)
Best,
Sorry Salvatore, have not had the time to do it, I'll try to make some
time today and test.
Lucian
--
Sent from the Delta quadrant using Borg technology!
Nux!
www.n
Hello Lucian,
did you have any chance to try to reproduce my setup? :-)
Best,
S.
Does anyone know how to set up CLVM and use it for primary storage on
CentOS 6? I have an iSCSI SAN attached to my hypervisor (KVM) hosts and
wanted to use that LUN as primary storage, and for the KVM hypervisor I think
I have to use some kind of clustered volume mechanism. Please share if anyone
has done
Has anyone configured CLVM for primary storage? Can you please help me
configure CLVM with iSCSI LUNs?
Ram
On Sun, Apr 20, 2014 at 2:57 AM, Salvatore Sciacco wrote:
Thanks!
:-)
I suppose there aren't many people running different CLVM pools in the same
zone...
S.
On 20.04.2014 13:24, Salvatore Sciacco wrote:
2014-04-20 12:31 GMT+02:00 Nux! :
It looks like a bug, "qemu-img convert" should be used instead of "cp -f",
among others.
I suppose that some code was added to do a simple copy when the format is the
same; this wasn't the case with the 4.1.1 version.
2014-04-20 12:31 GMT+02:00 Nux! :
> It looks like a bug, "qemu-img convert" should be used instead of "cp -f",
> among others.
>
I suppose that some code was added to do a simple copy when the format is the
same; this wasn't the case with the 4.1.1 version.
>
Do you mind opening an issue in https://is
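To make the difference being discussed concrete (a sketch only; both paths are placeholders): on a CLVM pool the destination volume is a block device, so the agent needs a format-aware copy rather than a plain cp of the file onto the device node.

```shell
#!/bin/sh
# Sketch of the two behaviours discussed (placeholder paths only).
SRC="/mnt/secondary/template.qcow2"   # source file on secondary storage
DST="/dev/vg_primary/volume-uuid"     # destination CLVM logical volume

dry_run() { echo "+ $*"; }   # replace the body with "$@" to actually run

migrate_volume() {
    # what 4.2.1 reportedly does when it believes the formats match:
    dry_run cp -f "$SRC" "$DST"
    # what the thread suggests it should do instead; qemu-img understands
    # the source format and writes proper raw data into the LV:
    dry_run qemu-img convert -O raw "$SRC" "$DST"
}

migrate_volume
```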
On 20.04.2014 10:57, Salvatore Sciacco wrote:
ACS version: 4.2.1
Hypervisors: KVM
Storage pool type: CLVM
Since we upgraded from 4.1 to 4.2.1, moving volumes to a different primary
storage pool fails. I've enabled debug on the agent side and I think there
is a problem with the format
ACS version: 4.2.1
Hypervisors: KVM
Storage pool type: CLVM
Since we upgraded from 4.1 to 4.2.1, moving volumes to a different primary
storage pool fails. I've enabled debug on the agent side and I think there
is a problem with the format type conversion.
Volume on database has format
You could use OCFS2.
- Chris
Sent from my iPhone
> On 09 Jan 2014, at 00:59, Nux! wrote:
On 08.01.2014 22:50, Marcus Sorensen wrote:
You'd create a sharedmountpoint style primary storage, which would
host qcow2 files. You can do this via iscsi, fibrechannel, or any
other SAN tech.
Hi Marcus!
This would work with 1 hypervisor, but with 2+ you need a cluster-aware
filesystem. Recom
to qcow2 file on secondary storage". So there is no real LVM snapshot,
> and if there were, it wouldn't be copied internally.
>
> On Wed, Jan 8, 2014 at 3:47 PM, Nux! wrote:
>> Hi,
>>
>> I've just watched Marcus Sorensen's presentation on CLVM on yout
Hi,
I've just watched Marcus Sorensen's presentation on CLVM on YouTube, and
he mentioned that migrating a VM with snapshots will make the
snapshots disappear.
Can anyone confirm whether this is still the case?
While at it, are there any alternative ways of using a multipathed
iSCS
I'm trying to join a new node into an existing 5-node CLVM cluster, but I
just can't get it to work.
Whenever I add a new node (I put it into cluster.conf and reloaded with
cman_tool version -r -S), I end up with situations where the new node wants
to gain quorum and starts to
lume group, but... still the same
error. That was on CS 4.1. This does work when I create the volume
specifically on the NFS share, but not on the CLVM share.
We also have a test env with CS 4.2, and the same issue. But here I got
the following error:
"Unable to find storage pool when create
On 13/04/2013 23:37, Milamber wrote:
On 13/04/2013 23:29, Milamber wrote:
On 13/04/2013 23:29, Milamber wrote:
Hello,
I have an issue with a CloudStack 4.1 installation on CentOS 6.4 with the
KVM hypervisor.
The primary storage is CLVM on an iSCSI array (Dell EqualLogic) with 3 nodes
(hosts) and 1 Cloudmgr server.
I've installed and configured CLVM with locking_type = 3 and
wait_for_locks
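For anyone searching later: the locking_type = 3 / wait_for_locks settings mentioned above live in /etc/lvm/lvm.conf. An excerpt might look roughly like this, assuming clvmd is running:

```
# /etc/lvm/lvm.conf (excerpt)
global {
    locking_type = 3       # 3 = clustered locking through clvmd
    wait_for_locks = 1     # block until the cluster lock is granted
    fallback_to_local_locking = 0   # fail instead of silently going local
}
```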