Hello,
The way Wido explained it is the correct way. I won't deny, however, that
last year we had problems with our SSD disks and they did not perform well,
so we decided to replace all disks. As the replacement done by Ceph
caused high load/downtime on the clients (which was the reason we wanted
to replace ...
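For anyone facing the same client impact during a mass disk replacement, the
usual knob is to throttle backfill/recovery while the data moves. A rough
sketch, assuming the stock option names and that slower recovery is acceptable:

    # keep client IO responsive while PGs backfill onto the new disks
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # optionally also lower the priority of recovery ops relative to client ops
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1'

Relax the values again once the replacement has finished, otherwise recovery
will stay slow.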
I don't have the specific crash info, but I have seen crashes with tgt when
the Ceph cluster was slow to respond to I/O.
It was things like this that pushed me to use another iSCSI-to-Ceph
solution (FreeNAS running in a KVM Linux hypervisor).
Jake
On Fri, Dec 16, 2016 at 9:16 PM ZHONG wrote:
>
I've tested this on the latest Kraken RC (installed on RHEL from the el7
repo) and it seemed promising at first, but the OSDs still gradually consume
all available memory until they are OOM-killed; they just do so more slowly.
It takes them a couple of hours to go from 500 MB each to >2 GB each. After
they're restarted
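For anyone trying to characterize the leak, a quick way to sample per-OSD
memory on a storage node (assuming the OSDs are linked against tcmalloc and
you have shell access on that host):

    # resident set size of every ceph-osd process on this host
    ps -o pid,rss,cmd -C ceph-osd

    # tcmalloc heap statistics for a single OSD
    ceph tell osd.0 heap stats

Sampling these every few minutes gives concrete growth numbers to attach to a
tracker ticket.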
tracker.ceph.com is having issues. I'm looking at it.
You don't need to disconnect any clients from the RADOS cluster.
Tiering configuration should be transparent to Ceph clients.
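For reference, the overlay can be put in place while clients stay connected;
a minimal sketch with placeholder pool names:

    ceph osd tier add base-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay base-pool cache-pool

Clients keep addressing base-pool and the OSDs redirect their ops to the cache
tier, which is why no reconnect is needed.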
On Fri, Dec 16, 2016 at 5:57 PM, JiaJia Zhong wrote:
> hi skinjo,
> forgot to ask if it's necessary to disconnect all the clients before
> doing set-overlay? we didn't
Thank you for your reply.
> On Dec 17, 2016, at 22:21, Jake Young wrote:
>
> FreeNAS running in KVM Linux hypervisor
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com