12 TB in total. Replication factor 2 should make it 6 TB?

Regards
Gaurav Goyal
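
(A quick sanity check of that arithmetic, assuming a single replicated pool with size=2, and ignoring filesystem overhead and near-full/full ratio headroom: usable capacity is roughly raw capacity divided by the replication factor.)

    # raw TB / replication factor ~= usable TB
    $ echo '12 / 2' | bc
    6
    $ ceph df    # compare the GLOBAL raw figures with per-pool MAX AVAIL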

On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <bkris...@walmartlabs.com> wrote:

Hi Gaurav,

There are several ways to do it, depending on how you deployed your ceph cluster. The easiest way is ceph-ansible, which has a purge-cluster yaml ready made to wipe off Ceph:

https://github.com/ceph/ceph-ansible

Run the purge-cluster playbook against an inventory with your ceph hosts.
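
(A minimal sketch of such a run. The playbook name purge-cluster.yml matches the ceph-ansible repo, but its exact path has moved between releases, and the inventory file name "hosts" is a placeholder.)

    $ git clone https://github.com/ceph/ceph-ansible.git
    $ cd ceph-ansible
    # point -i at an inventory listing your mon/osd hosts, then run the purge play
    $ ansible-playbook -i hosts purge-cluster.yml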

Else if you want to purge manually, you can do it using:
http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
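
(Roughly, per that page; node1/node2/node3 are placeholder hostnames.)

    $ ceph-deploy purge node1 node2 node3       # uninstall the ceph packages
    $ ceph-deploy purgedata node1 node2 node3   # wipe /var/lib/ceph and /etc/ceph
    $ ceph-deploy forgetkeys                    # drop the locally cached keyrings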

Thanks
Bharath

From: ceph-users on behalf of Gaurav Goyal
Date: Thursday, August 4, 2016 at 8:19 AM
To: David Turner
Cc: ceph-users
Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

Please suggest a procedure for this uninstallation process?

Regards
Gaurav Goyal

On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal <er.gauravgo...@gmail.com> wrote:

Thanks for your prompt response!

The situation is a bit different now. The customer wants us to remove the ceph storage configuration from scratch and let the openstack system work without ceph; later on, install ceph with local disks. So I need to know a procedure to uninstall ceph and unconfigure it from openstack.
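
(For the "unconfigure it from openstack" half, a hedged sketch: the option names below are the standard rbd-driver settings, but the replacement backends and exact file paths depend on your deployment.)

    # cinder.conf - stop pointing the volume service at ceph, e.g.:
    #   [DEFAULT]
    #   enabled_backends = lvm          # instead of the rbd backend section
    # glance-api.conf - store images locally again:
    #   default_store = file            # instead of rbd
    # nova.conf - stop putting ephemeral disks on rbd:
    #   [libvirt]
    #   images_type = default           # instead of rbd
    # then restart cinder-volume, glance-api and nova-compute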

David Turner wrote:

If I'm understanding your question correctly, that you're asking how to actually remove the SAN osds from ceph, then it doesn't matter what is using the storage (i.e. openstack, cephfs, krbd, etc.), as the steps are the same. I'm going to assume that you've already added the new storage/osds to the cluster [...]
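
(A sketch of those removal steps for one OSD, following the add-or-rm-osds doc linked below; osd.12 is a placeholder id, and the stop command depends on your init system.)

    $ ceph osd out 12                    # stop new writes landing on it, start draining
    # wait for HEALTH_OK / active+clean before continuing
    $ sudo systemctl stop ceph-osd@12    # pre-systemd: sudo /etc/init.d/ceph stop osd.12
    $ ceph osd crush remove osd.12       # take it out of the crush map
    $ ceph auth del osd.12               # delete its cephx key
    $ ceph osd rm 12                     # remove it from the osd map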

Hello David,

Can you help me with the steps/procedure to uninstall Ceph storage from the openstack environment?

Regards
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:57 AM, Gaurav Goyal wrote:

Hello David,

Thanks a lot for the detailed information! This is going to help me.

Regards
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:46 AM, David Turner wrote:

I'm going to assume you know how to add and remove storage:
http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/. The only other part of this process is reweighting the crush map for the old osds to a new weight of 0.0:
http://docs.ceph.com/docs/master/rados/operations/crush-map/. [...]
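
(A sketch of that reweighting for one old SAN osd; osd.3 is a placeholder.)

    # setting the crush weight to 0.0 makes the cluster backfill all of
    # osd.3's data onto the remaining (new, local-disk) osds
    $ ceph osd crush reweight osd.3 0.0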

On Tue, Aug 2, 2016 at 11:24 AM, David Turner wrote:

Just add the new storage and weight the old storage to 0.0 so all data will move off of the old storage to the new storage. It's not unique to migrating from SANs to local disks; you would do the same any time you wanted to migrate to newer servers and retire old servers. After the backfilling [...]
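
(One way to watch that backfill finish before removing the old osds:)

    $ ceph -w              # live cluster event stream
    $ ceph health detail   # lists pgs still backfilling/recovering
    $ ceph pg stat         # summary of active+clean vs. backfilling pgs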

Hi David,

Thanks for your comments! Can you please help to share the procedure/document, if available?

Regards
Gaurav Goyal