As it is a lab environment, can I install the setup in a way that gives
less redundancy (a lower replication factor) and more usable capacity?

How can I achieve that?
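
From what I have read, replication is a per-pool setting, so I assume it
would be something like the following (the pool name "volumes" is just an
example):

ceph osd pool set volumes size 2
ceph osd pool set volumes min_size 1

Is setting size 2 and min_size 1 the right approach for a lab, given that
it means any single disk failure leaves only one copy of the data?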




On Wed, Aug 17, 2016 at 7:47 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
wrote:

> Hello,
>
> Awaiting any suggestions, please!
>
>
>
>
> Regards
>
> On Wed, Aug 17, 2016 at 9:59 AM, Gaurav Goyal <er.gauravgo...@gmail.com>
> wrote:
>
>> Hello Brian,
>>
>> Thanks for your response!
>>
>> Can you please elaborate on this?
>>
>> Do you mean I must use
>>
>> 4 x 1 TB HDDs on each node rather than 2 x 2 TB?
>>
>> This is going to be a lab environment. Can you please suggest the best
>> possible design for it?
>>
>>
>>
>> On Wed, Aug 17, 2016 at 9:54 AM, Brian :: <b...@iptel.co> wrote:
>>
>>> You're going to see pretty slow performance on a cluster this size
>>> with spinning disks.
>>>
>>> Ceph scales very well, but at this cluster size it can be challenging
>>> to get good throughput and IOPS.
>>>
>>> For something small like this, either use all-SSD OSDs or consider
>>> having more spinning OSDs per node backed by NVMe or SSD journals.
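>>>
>>> For what it's worth, with ceph-deploy that would look something along
>>> the lines of the following per HDD (node and device names are only
>>> placeholders, e.g. one spinner with its journal on an NVMe partition):
>>>
>>> ceph-deploy osd create node1:/dev/sdb:/dev/nvme0n1p1
>>>
>>> repeated for each HDD, with each journal on its own partition of the
>>> fast device.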
>>>
>>>
>>>
>>> On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
>>> wrote:
>>> > Dear Ceph Users,
>>> >
>>> > Can you please address my scenario and suggest a solution?
>>> >
>>> > Regards
>>> > Gaurav Goyal
>>> >
>>> > On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal <er.gauravgo...@gmail.com>
>>> > wrote:
>>> >>
>>> >> Hello
>>> >>
>>> >>
>>> >> I need your help to redesign my Ceph storage network.
>>> >>
>>> >> As suggested in earlier discussions, I must not use SAN storage, so we
>>> >> have decided to remove it.
>>> >>
>>> >> Now we are ordering Local HDDs.
>>> >>
>>> >> My network would be:
>>> >>
>>> >> Host 1 --> Controller + Compute --> Local Disk 600 GB
>>> >> Host 2 --> Compute 2 --> Local Disk 600 GB
>>> >> Host 3 --> Compute 3
>>> >>
>>> >> Is this the right setup for a Ceph network? On Host 1 and Host 2 we are
>>> >> using one 600 GB disk for the basic filesystem.
>>> >>
>>> >> Should we use the same size of storage disks for the Ceph environment,
>>> >> or can I order 2 TB disks for the Ceph cluster?
>>> >>
>>> >> Making it:
>>> >>
>>> >> 2 TB x 2 on Host 1, 2 TB x 2 on Host 2, 2 TB x 2 on Host 3
>>> >>
>>> >> That is 12 TB raw in total. A replication factor of 2 should make it
>>> >> 6 TB usable?
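>>> >>
>>> >> (My rough math: usable capacity ~= raw capacity / replication factor,
>>> >> i.e. 12 TB / 2 = 6 TB, less whatever headroom the near-full ratios
>>> >> require.)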
>>> >>
>>> >>
>>> >> Regards
>>> >>
>>> >> Gaurav Goyal
>>> >>
>>> >>
>>> >> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <bkris...@walmartlabs.com>
>>> >> wrote:
>>> >>>
>>> >>> Hi Gaurav,
>>> >>>
>>> >>> There are several ways to do it, depending on how you deployed your
>>> >>> Ceph cluster. The easiest way is to use ceph-ansible with its
>>> >>> ready-made purge-cluster playbook to wipe off Ceph:
>>> >>>
>>> >>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>>> >>>
>>> >>> You may need to configure the Ansible inventory with your Ceph hosts.
>>> >>>
>>> >>> Otherwise, if you want to purge manually, you can do it with ceph-deploy:
>>> >>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
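>>> >>>
>>> >>> Roughly, something like this (host names are only placeholders for
>>> >>> your monitor/OSD nodes):
>>> >>>
>>> >>> # ceph-ansible, with an inventory file listing the Ceph hosts
>>> >>> ansible-playbook -i inventory purge-cluster.yml
>>> >>>
>>> >>> # or manually with ceph-deploy, from the admin node
>>> >>> ceph-deploy purge node1 node2 node3
>>> >>> ceph-deploy purgedata node1 node2 node3
>>> >>> ceph-deploy forgetkeys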
>>> >>>
>>> >>>
>>> >>> Thanks
>>> >>> Bharath
>>> >>>
>>> >>> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
>>> >>> Gaurav Goyal <er.gauravgo...@gmail.com>
>>> >>> Date: Thursday, August 4, 2016 at 8:19 AM
>>> >>> To: David Turner <david.tur...@storagecraft.com>
>>> >>> Cc: ceph-users <ceph-users@lists.ceph.com>
>>> >>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage
>>> >>> to Local Disks
>>> >>>
>>> >>> Could you please suggest a procedure for this uninstallation process?
>>> >>>
>>> >>>
>>> >>> Regards
>>> >>> Gaurav Goyal
>>> >>>
>>> >>> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal
>>> >>> <er.gauravgo...@gmail.com> wrote:
>>> >>>
>>> >>> Thanks for your prompt response!
>>> >>>
>>> >>> The situation is a bit different now. The customer wants us to remove
>>> >>> the Ceph storage configuration completely, let the OpenStack system
>>> >>> work without Ceph, and later on install Ceph with local disks.
>>> >>>
>>> >>> So I need to know a procedure to uninstall Ceph and unconfigure it from
>>> >>> OpenStack.
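>>> >>>
>>> >>> I expect that also means reverting the RBD settings on the OpenStack
>>> >>> side, e.g. (the exact options depend on our current configs):
>>> >>>
>>> >>> # cinder.conf: switch volume_driver away from
>>> >>> #   cinder.volume.drivers.rbd.RBDDriver
>>> >>> # nova.conf, [libvirt] section: remove images_type = rbd and the
>>> >>> #   rbd_* options
>>> >>> # glance-api.conf: set default_store back to file
>>> >>>
>>> >>> but please confirm the proper procedure.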
>>> >>>
>>> >>> Regards
>>> >>> Gaurav Goyal
>>> >>> On 03-Aug-2016 4:59 pm, "David Turner"
>>> >>> <david.tur...@storagecraft.com> wrote:
>>> >>> If I'm understanding your question correctly and you're asking how to
>>> >>> actually remove the SAN OSDs from Ceph, then it doesn't matter what is
>>> >>> using the storage (i.e. OpenStack, CephFS, krbd, etc.), as the steps
>>> >>> are the same.
>>> >>>
>>> >>> I'm going to assume that you've already added the new storage/OSDs to
>>> >>> the cluster, weighted the SAN OSDs to 0.0, and that the backfilling has
>>> >>> finished. If that is true, then the used disk space on the SANs should
>>> >>> be basically empty, while the new OSDs on the local disks should have a
>>> >>> fair amount of data. If that is the case, then for every SAN OSD you
>>> >>> just run the following commands, replacing OSD_ID with the OSD's id:
>>> >>>
>>> >>> # On the server with the osd being removed
>>> >>> sudo stop ceph-osd id=OSD_ID
>>> >>> ceph osd down OSD_ID
>>> >>> ceph osd out OSD_ID
>>> >>> ceph osd crush remove osd.OSD_ID
>>> >>> ceph auth del osd.OSD_ID
>>> >>> ceph osd rm OSD_ID
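>>> >>>
>>> >>> If you haven't already drained the SAN OSDs, the weighting to 0.0 I
>>> >>> mentioned above would look something like this for each SAN OSD (run
>>> >>> it first and let the backfill finish before removing anything):
>>> >>>
>>> >>> ceph osd crush reweight osd.OSD_ID 0.0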
>>> >>>
>>> >>> Test the removal commands above on a single OSD first. If you had set
>>> >>> the weight of the OSD to 0.0 previously and the backfilling had
>>> >>> finished, then what you should see is that your cluster has one less
>>> >>> OSD than it used to, and no PGs should be backfilling.
>>> >>>
>>> >>> HOWEVER, if my assumptions above are incorrect, please provide the
>>> >>> output of the following commands and try to clarify your question.
>>> >>>
>>> >>> ceph status
>>> >>> ceph osd tree
>>> >>>
>>> >>> I hope this helps.
>>> >>>
>>> >>> > Hello David,
>>> >>> >
>>> >>> > Can you help me with the steps/procedure to uninstall Ceph storage
>>> >>> > from the OpenStack environment?
>>> >>> >
>>> >>> >
>>> >>> > Regards
>>> >>> > Gaurav Goyal
>>> >>> ________________________________
>>> >>>
>>> >>> David Turner | Cloud Operations Engineer | StorageCraft Technology
>>> >>> Corporation<https://storagecraft.com>
>>> >>> 380 Data Drive Suite 300 | Draper | Utah | 84020
>>> >>> Office: 801.871.2760 | Mobile: 385.224.2943
>>> >>>
>>> >>> ________________________________
>>> >>> If you are not the intended recipient of this message or received it
>>> >>> erroneously, please notify the sender and delete it, together with
>>> any
>>> >>> attachments, and be advised that any dissemination or copying of this
>>> >>> message is prohibited.
>>> >>>
>>> >>> ________________________________
>>> >>>
>>> >>
>>> >
>>> >
>>> >
>>>
>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
