I have found that it's better to follow these links from the
documentation rather than from the Ceph Blog:
http://docs.ceph.com/docs/nautilus/releases/nautilus/
Here the links are working.
On 23/5/19 10:56, Andres Rojas Guerrero wrote:
Hi all, I have followed the Ceph documentation in order to update from
Mimic to Nautilus:
https://ceph.com/releases/v14-2-0-nautilus-released/
The process went well, but I have seen that two links with important
information don't work:
"v2 network protocol"
"Updating ceph.conf and mon_host"
network to access to Ceph resources.
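In case it helps others while those links are broken, this is roughly
what I understand the two sections cover (a sketch only, please verify
against the release notes once the links work; the addresses below are
simply our MON addresses):

# ceph mon enable-msgr2

and then list both the v2 and v1 address of each MON in ceph.conf, e.g.:

mon host = [v2:10.100.190.9:3300,v1:10.100.190.9:6789],[v2:10.100.190.10:3300,v1:10.100.190.10:6789],[v2:10.100.190.11:3300,v1:10.100.190.11:6789]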
Regards!
On 22/3/19 11:32, Andres Rojas Guerrero wrote:
Hi, thanks for the answer. We have seen that the client only has the
OSD map for the first public network ...
# cat /sys/kernel/debug/ceph/88f62260-b8de-499f-b6fe-5eb66a967083.client360894/osdmap
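A quick way to cross-check what the MONs are serving is to dump the map
on a node with an admin keyring; the osd.* lines show which
public-network address each OSD has actually registered (just a rough
sanity check):

# ceph osd dump | head -5
# ceph osd dump | grep '^osd\.'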
You say that the CephFS clients have the same view of the cluster as
the MONs; does that mean that the MON
Ok, thank you for the answer. Yes, we have noted that we need to add a
frontend and a backend in HAProxy to allow access to the MDS.
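For reference, a minimal sketch of the kind of TCP frontend/backend
pair we mean (plain TCP pass-through; 10.100.190.100 is a purely
hypothetical bind address, the MON addresses are the ones from our
ceph.conf, and the MDS would need an equivalent pair on its own port,
since MDS daemons bind in the 6800-7300 range by default):

frontend ceph_mon_frontend
    bind 10.100.190.100:6789
    mode tcp
    default_backend ceph_mon_backend

backend ceph_mon_backend
    mode tcp
    server mon1 10.100.190.9:6789 check
    server mon2 10.100.190.10:6789 check
    server mon3 10.100.190.11:6789 check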
On the other hand, we have tested another, simpler architecture (easier
to understand how Ceph works at this level), replacing the HAProxy with
a simple gateway. Now the clients contact
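To make the "simple gateway" idea concrete, this is roughly the sort of
setup we mean (a sketch with hypothetical addresses: 10.100.200.0/24
stands in for the client network, 10.100.200.1 for the gateway's leg on
it, and 10.100.190.0/24 is just the subnet that holds the MON
addresses):

# sysctl -w net.ipv4.ip_forward=1                   (on the gateway)
# ip route add 10.100.190.0/24 via 10.100.200.1     (on each client)

The Ceph nodes also need a return route to the client network through
the same gateway, or NAT on it.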
Hi all, we have deployed a Ceph cluster configured with two public networks:
[global]
cluster network = 10.100.188.0/23
fsid = 88f62260-b8de-499f-b6fe-5eb66a967083
mon host = 10.100.190.9,10.100.190.10,10.100.190.11
mon initial members = mon1,mon2,mon3
osd_pool_default_pg_num = 4096
public netwo
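For two public networks, the public network option is normally written
as a comma-separated list of CIDRs, for example (both subnets here are
illustrative; the first is simply a /24 that contains our mon host
addresses):

public network = 10.100.190.0/24, 10.100.200.0/24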
Hi all, I have another newbie question. We are trying to deploy a Ceph
Mimic cluster with BlueStore, with the WAL and DB data on SSD disks.
For this we are using the ceph-ansible approach; we have seen that
ceph-ansible has a playbook to create the LVM structure
(lv-create.yml), but it seems it only
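For context, the kind of group_vars entry we are aiming at looks
roughly like this (a sketch only; the VG/LV names are hypothetical, and
in this form the volumes must already exist, which is why we were
looking at lv-create.yml):

osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1
    data_vg: data-vg1
    db: db-lv1
    db_vg: db-vg1
    wal: wal-lv1
    wal_vg: wal-vg1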