Hi Andrija,
I think Ceph has a versioning convention where x.2.z means a stable release
for production: http://docs.ceph.com/docs/master/releases/schedule/
x.0.z - development releases (for early testers and the brave at heart)
x.1.z - release candidates (for test clusters, brave users)
x.2.z - stable/bugfix releases (for users)
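For example, the 13.2.5 running in Li's cluster below is an x.2.z, i.e.
stable, release; a quick way to check (sample output, commit hash elided):

[root@cn01-nodea ~]# ceph --version
ceph version 13.2.5 (...) mimic (stable)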
The docs never mention anything about stability :) and, as usual, if a user
wants to upgrade this is generally OK, but you risk compatibility issues with
external software (ACS), as Li is seeing.
The LTS comment was based on a couple of slides from the latest Cephalocon; I
didn't actually follow the release notes.
Hi Wido
The key I filled in for CloudStack is the following:
[root@cn01-nodeb ~]# ceph auth get client.cloudstack
exported keyring for client.cloudstack
[client.cloudstack]
key = AQDTh7pcIJjNIhAAwk8jtxilJWXQR7osJRFMLw==
caps mon = "allow r"
caps osd = "allow rwx pool=rbd"
From: Wido
On 5/28/19 6:16 AM, li jerry wrote:
> Hello guys
>
> we’ve deployed an environment with CloudStack 4.11.2 and KVM (CentOS 7.6),
> and Ceph 13.2.5 is deployed as the primary storage.
> We found some issues with the HA solution, and we are here to ask for your
> suggestions.
>
> We’ve both enabled
Thx Wido!
After executing the following command on the Ceph admin node, my problem was
solved:
[root@cn01-nodea ~]# ceph auth caps client.cloudstack mon 'allow profile rbd' \
    osd 'allow profile rbd pool=rbd'
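To verify (assuming ceph auth get simply echoes the caps as stored), the
client should now look like this:

[root@cn01-nodea ~]# ceph auth get client.cloudstack
exported keyring for client.cloudstack
[client.cloudstack]
key = AQDTh7pcIJjNIhAAwk8jtxilJWXQR7osJRFMLw==
caps mon = "allow profile rbd"
caps osd = "allow profile rbd pool=rbd"

As I understand it, the mon 'profile rbd' additionally grants the 'osd
blacklist' permission, which an RBD client needs in order to blacklist a dead
host and break its stale exclusive locks during HA failover; the previous mon
cap 'allow r' does not include that.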
From: Wido den Hollander
Sent: Tuesday, May 28, 2019