In a past life I had a bunch of SAN gear dumped in my lap; it had been spec'd 
by someone else misinterpreting vague requirements.  It was SAN gear with an 
AoE driver.  I wasn't using Ceph, but sending it back and getting a proper 
solution wasn't an option, so I ended up using the SAN gear as a NAS with a 
single client, effectively DAS.

It was a nightmare:  cabling, monitoring, maintenance.

This could be done, but as Kai says, latency would be an issue.  One would 
also need to pay *very* close attention to mappings and failure domains; 
there is considerable opportunity here to shoot oneself in the foot when a 
component has issues.
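
If they do go ahead, I'd at least measure the LUN latency up front, before 
Ceph adds its own round trips.  A minimal sketch, assuming the LUN shows up 
as a multipath device (the /dev/mapper path and job name are placeholders), 
using fio's queue-depth-1 synchronous writes as a rough proxy for what an 
OSD commit would see:

    # 4k random writes at queue depth 1 with O_DIRECT + O_SYNC:
    # approximates per-commit latency on the LUN.
    # Destructive; point it at a scratch LUN!
    fio --name=lun-latency --filename=/dev/mapper/mpatha \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --direct=1 --sync=1 --runtime=60 --time_based

If the completion latencies are already in the milliseconds here, the 
cluster will only feel worse.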
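
On the failure-domain point: if several hosts' OSDs all live on the same 
array, losing that array takes them all out at once, and CRUSH has to know 
that.  One hedged sketch of expressing it, borrowing the rack bucket type to 
stand in for an array (the bucket, host, and pool names here are made up for 
illustration):

    # group the SAN-backed hosts under one bucket representing the array
    ceph osd crush add-bucket san-array-1 rack
    ceph osd crush move san-array-1 root=default
    ceph osd crush move host1 rack=san-array-1
    ceph osd crush move host2 rack=san-array-1

    # replicate across rack buckets so no two copies share an array
    ceph osd crush rule create-replicated by-array default rack
    ceph osd pool set mypool crush_rule by-array

Whether "rack" is the right type to borrow depends on the rest of the 
topology; the point is that the array, not the host, is the real failure 
domain.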
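
Brett's point below about network speed is also worth putting numbers on.  
As a rough illustration with made-up figures: two bonded 10GbE links give 
about 2 x 1.25 GB/s = 2.5 GB/s of raw wire speed, some of which replication 
traffic will consume.  If each LUN sustains around 250 MB/s, roughly ten 
LUNs already saturate the NICs, and every LUN beyond that adds CRUSH weight 
and blast radius without adding throughput.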

> Just my quick two cents here.
> 
> Technically this is possible, which doesn't mean it's a good idea. I 
> wouldn't use such a setup in a production environment. I don't think they'd 
> really save a lot, and with the added latency etc. on top, I'm not sure 
> this is what they're really looking for.
> 
> Just for testing and giving it a try, sure, but otherwise I would go 
> with a clear "no" instead of encouraging them to do it. 
> 
> Maybe you can tell us more about their use case? What are they looking 
> for, how large should this get, which access protocols, etc.? 
> 
> Kai
> 
> On 22.08.19 17:12, Brett Chancellor wrote:
>> It's certainly possible, though it makes things a little more complex. 
>> Some questions you may want to consider during the design:
>> - Is the customer aware that this won't preserve any data on the LUNs 
>> they are hoping to reuse?
>> - Is the plan to eventually replace the SAN with JBODs in the same 
>> systems? If so, you may want to make your LUNs match the eventual drive 
>> size and count.
>> - Is the plan to use a few systems with SAN and add standalone systems 
>> later? Then you need to calculate the expected speeds and divide them 
>> between failure domains.
>> - Is the plan to use a couple of hosts with SAN to save money, and have 
>> the rest be traditional Ceph storage? If so, consider putting the SAN 
>> hosts all in one failure domain.
>> - Depending on the SAN, you may consider aligning your failure domains to 
>> different arrays, switches, or even array directors.
>> - Remember to take the hosts' network speed into consideration when 
>> calculating how many LUNs to put on each host.
>> 
>> Hope that helps.
>> 
>> -Brett
>> 
>> On Thu, Aug 22, 2019, 4:14 AM Mohsen Mottaghi <mohsenmotta...@outlook.com> 
>> wrote:
>> Hi
>> 
>> 
>> Yesterday one of our customers came to us with a strange request.  He 
>> asked us to use SAN as the Ceph storage space, adding the SAN storage he 
>> currently has to the cluster to reduce further disk purchase costs.
>> 
>> 
>> Does anybody know whether we can do this or not? And if it is possible, 
>> how should we start to architect this strange Ceph? Is it a good idea or 
>> not?
>> 
>> Thanks for your help.
>> 
>> Mohsen Mottaghi
>> 
> -- 
> SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg
> Geschäftsführer: Felix Imendörffer (HRB 247165, AG München)
> 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
