Re: [ceph-users] Power Failure

2017-05-02 Thread Tomáš Kukrál
Hi, It really depends on the type of power failure ... A normal poweroff of the cluster is fine ... I've been managing a large cluster and we were forced to do a total poweroff twice a year. It worked fine: we just safely unmounted all clients, then set the noout flag and powered the machines down. Pow
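A minimal sketch of that sequence, assuming a CephFS client mount and placeholder hostnames:

  # 1. Safely unmount / stop all clients first (CephFS example)
  umount /mnt/cephfs

  # 2. Keep OSDs from being marked out while the nodes are down
  ceph osd set noout

  # 3. Power the machines down
  ssh osd-node-01 poweroff
  # ...repeat for the remaining nodes, monitors last...

  # After power returns and the cluster is healthy again
  ceph osd unset noout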

Re: [ceph-users] Monitoring Overhead

2016-10-25 Thread Tomáš Kukrál
Hi Ashley, feel free to use/fork/copy my ceph_watch project: https://github.com/tomkukral/ceph_watch It wraps the stdout of `ceph -w` and exports this information to the Prometheus node_exporter. Regards, Tom On 10-24 03:10, Ashley Merrick wrote: Hello, This may come across as a simple question but
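A rough sketch of the general idea (not the actual ceph_watch code), assuming node_exporter runs with --collector.textfile.directory=/var/lib/node_exporter/textfile:

  TEXTFILE_DIR=/var/lib/node_exporter/textfile

  ceph -w | while read -r line; do
    case "$line" in
      *"slow request"*)
        # count slow-request lines as a simple example metric
        prev=$(awk '{print $2}' "$TEXTFILE_DIR/ceph_watch.prom" 2>/dev/null)
        echo "ceph_watch_slow_requests_total $(( ${prev:-0} + 1 ))" \
          > "$TEXTFILE_DIR/ceph_watch.prom"
        ;;
    esac
  done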

Re: [ceph-users] Ceph orchestration tool

2016-10-11 Thread Tomáš Kukrál
Hi, I wanted to have more control over the configuration than ceph-deploy provides and tried Ceph-ansible https://github.com/ceph/ceph-ansible. However, it was too complicated, so I created ceph-ansible-simple https://github.com/tomkukral/ceph-ansible-simple Feel free to use it and le

Re: [ceph-users] Ceph mirrors wanted!

2016-02-07 Thread Tomáš Kukrál
Hi, We can build a new mirror in the Czech Republic. Would it help even though there are already mirrors in the Netherlands and (Sweden)? tom On 01-30 15:01, Wido den Hollander wrote: > Hi, > > My PR was merged with a script to mirror Ceph properly: > https://github.com/ceph/ceph/tree/master/mirroring > > Cur

Re: [ceph-users] double rebalance when removing osd

2016-01-12 Thread Tomáš Kukrál
Hi, I don't recommend setting the weight to zero, because you may see MAX_AVAIL=0 in `ceph df` due to issue #13840: http://tracker.ceph.com/issues/13840 Any small & non-zero value is fine. Tom On 01-12 09:01, Rafael Lopez wrote: > I removed some osds from a host yesterday using the reweight method and it
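A sketch of the drain-and-remove sequence with a small non-zero weight (osd.12 is a placeholder id):

  # drain with a tiny non-zero CRUSH weight instead of 0
  ceph osd crush reweight osd.12 0.001
  # wait for the resulting backfill to finish (watch `ceph -s`)

  ceph osd out 12
  systemctl stop ceph-osd@12        # on the host that owns osd.12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12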