[ceph-users] Teuthology: Need some input on how to add an OSD after cluster setup is done using Teuthology

2014-07-02 Thread Shambhu Rajak
ask or any method available in ceph.py. Thanks & Regards, Shambhu Rajak
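A minimal sketch, assuming the cluster is already up and the target node has a spare disk at /dev/sdb (hypothetical device), of bringing up an extra OSD by hand with the pre-Luminous ceph-disk tooling that matches this era; hooking it into a teuthology task is a separate step:
    ceph-disk prepare /dev/sdb       # partition and format the spare disk for a new OSD
    ceph-disk activate /dev/sdb1     # start the OSD daemon and mark it up/in
    ceph osd tree                    # confirm the new OSD appears in the CRUSH tree
    ceph -s                          # wait for the cluster to settle after backfill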

[ceph-users] Ceph Giant installation fails on RHEL 7.0

2015-06-11 Thread Shambhu Rajak
I am trying to install Ceph Giant on RHEL 7.0; while installing ceph-common-0.87.2-0.el7.x86_64.rpm I am getting the following dependency packages]$ sudo yum install ceph-common-0.87.2-0.el7.x86_64.rpm Loaded plugins: amazon-id, priorities, rhui-lb Examining ceph-common-0.87.2-0.el7.x86_64.rpm: 1:cep
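On RHEL 7 the usual cause is that several of ceph-common's dependencies (leveldb, assorted python bindings) live in EPEL, which is not enabled by default; a minimal sketch, assuming the host has outbound internet access:
    # enable EPEL so yum can resolve the non-RHEL dependencies
    sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    sudo yum clean all
    # retry the install so yum pulls the dependencies alongside the local RPM
    sudo yum install ceph-common-0.87.2-0.el7.x86_64.rpm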

[ceph-users] Cache pool for OpenStack (Nova & Glance)

2017-07-25 Thread Shambhu Rajak
Hi Cephers, I'd like some advice on whether or not I should use a cache pool in writeback mode for OpenStack (Nova & Glance). So far I have two approaches: 1. Create SSD pools for Nova and Glance, each OSD node having a mix of SSD & HDD 2. Create a cache pool (cache tier) of fast SSDs and use it
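A minimal sketch of the second approach, assuming a backing pool named nova and an SSD-only pool named nova-cache (both names hypothetical) already exist and are mapped to the appropriate CRUSH rules:
    ceph osd tier add nova nova-cache                 # attach the cache pool to the backing pool
    ceph osd tier cache-mode nova-cache writeback     # writes land on SSD first, flush to HDD later
    ceph osd tier set-overlay nova nova-cache         # route client I/O for "nova" through the cache
    ceph osd pool set nova-cache hit_set_type bloom   # required before the cache tier admits objects
    ceph osd pool set nova-cache target_max_bytes 107374182400   # ~100 GiB; tune to SSD capacity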

[ceph-users] Available tools for deploying a ceph cluster as backend storage?

2017-05-18 Thread Shambhu Rajak
there anything else available that could be much easier to use and suitable for a production deployment. Thanks, Shambhu Rajak
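One common option is ceph-deploy, which later threads from the same author show in use; a minimal sketch, assuming three hosts node1, node2 and node3 (hypothetical names) with passwordless SSH and sudo already configured:
    ceph-deploy new node1 node2 node3            # write an initial ceph.conf and monitor keyring
    ceph-deploy install node1 node2 node3        # install the ceph packages on each host
    ceph-deploy mon create-initial               # bring up the monitors and gather keys
    ceph-deploy osd create node1:sdb node2:sdb node3:sdb   # pre-Luminous host:disk syntax
    ceph-deploy admin node1 node2 node3          # push the admin keyring so "ceph -s" works everywhere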

Re: [ceph-users] Available tools for deploying a ceph cluster as backend storage?

2017-05-18 Thread Shambhu Rajak
Hi Wes, since I want a production deployment, full-fledged management would be necessary for administration and maintenance; could you suggest something along these lines? Thanks, Shambhu From: Wes Dillingham [mailto:wes_dilling...@harvard.edu] Sent: Thursday, May 18, 2017 6:08 PM To: Shambhu Rajak Cc: ceph-users

Re: [ceph-users] Available tools for deploying a ceph cluster as backend storage?

2017-05-18 Thread Shambhu Rajak
Let me explore the code for my needs. Thanks, Chris. Regards, Shambhu From: Bitskrieg [mailto:bitskr...@bitskrieg.net] Sent: Thursday, May 18, 2017 6:40 PM To: Shambhu Rajak; wes_dilling...@harvard.edu Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Available tools for deploying ceph cluster

[ceph-users] Some monitors have still not reached quorum

2017-05-22 Thread Shambhu Rajak
w to solve this? Thanks, Shambhu Rajak

Re: [ceph-users] Some monitors have still not reached quorum

2017-05-22 Thread Shambhu Rajak
, tries left: 4 [ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying [shambhucephnode2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.shambhucephnode2.asok mon_status [ceph_deploy.mon][WARNIN] mon.shambhucephnode2 monitor is not yet in quorum, trie

Re: [ceph-users] Some monitors have still not reached quorum

2017-05-23 Thread Shambhu Rajak
Hi Alfredo, this is solved: all the listening ports were blocked in my setup, and it worked after allowing the monitor/OSD ports. Thanks, Shambhu -Original Message- From: Shambhu Rajak Sent: Tuesday, May 23, 2017 10:33 AM To: 'Alfredo Deza' Cc: ceph-users@lists.ceph.com Subject: RE: [
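A minimal sketch of the kind of firewall change described, assuming firewalld on the monitor and OSD hosts (the standard ports are 6789/tcp for monitors and 6800-7300/tcp for OSDs):
    sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent       # monitor port
    sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent  # OSD/MDS port range
    sudo firewall-cmd --reload
    # re-run the failing step afterwards, e.g. ceph-deploy mon create-initial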

[ceph-users] rbd map fails, ceph release jewel

2017-05-31 Thread Shambhu Rajak
P mod_unload modversions signer: Magrathea: Glacier signing key sig_key:51:D5:D7:73:F1:07:BA:1B:C0:9D:33:68:38:C4:3C:DE:74:9E:4E:05 sig_hashalgo: sha512 Thanks, Shambhu Rajak

Re: [ceph-users] rbd map fails, ceph release jewel

2017-06-01 Thread Shambhu Rajak
Thanks David, I upgraded the kernel version and the rbd map worked. Regards, Shambhu From: David Turner [mailto:drakonst...@gmail.com] Sent: Wednesday, May 31, 2017 9:35 PM To: Shambhu Rajak; ceph-users@lists.ceph.com Subject: Re: [ceph-users] rbd map fails, ceph release jewel You are trying to
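The kernel upgrade works because the Jewel default image features (exclusive-lock, object-map, fast-diff, deep-flatten) are not understood by older krbd clients; an alternative sketch that keeps the old kernel, assuming the image is rbd/myimage (hypothetical name) and the installed rbd CLI allows disabling these features after creation:
    rbd feature disable rbd/myimage deep-flatten fast-diff object-map exclusive-lock
    sudo rbd map rbd/myimage
    # or create new images with only the kernel-supported feature from the start:
    rbd create rbd/newimage --size 10240 --image-feature layering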