Yeah, that's what I said at first, but they want to keep everything managed
inside the OpenStack ecosystem, so I guess they'll be keen to test Manila
integration!
On Friday, May 22, 2015, Gregory Farnum wrote:
If you guys have stuff running on Hadoop, you might consider testing
out CephFS too. Hadoop is a predictable workload that we haven't seen
break at all in several years, and the bindings handle data locality
and such properly. :)
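
For anyone who wants to try it, the wiring is roughly the following in
core-site.xml (a rough sketch only; the monitor address, auth id and
keyring path are placeholders, and it's worth double-checking the
property names against the cephfs-hadoop docs for your plugin version):

  <property>
    <name>fs.defaultFS</name>
    <value>ceph://mon1.example.com:6789/</value>
  </property>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
  <property>
    <name>ceph.auth.id</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>ceph.auth.keyring</name>
    <value>/etc/ceph/ceph.client.hadoop.keyring</value>
  </property>

You also need the cephfs-hadoop jar and the libcephfs JNI library on the
Hadoop classpath/library path on every node.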
-Greg
On Thu, May 21, 2015 at 11:24 PM, Wang, Warren wrote:
On 5/21/15, 5:04 AM, "Blair Bethwaite" wrote:
Hi Warren,
On 20 May 2015 at 23:23, Wang, Warren wrote:
> We've contemplated doing something like that, but we also realized that
> it would result in manual work in Ceph every time we lose a drive or
> server, and a pretty bad experience for the customer when we have to do
> maintenance.
Yeah
We've contemplated doing something like that, but we also realized that
it would result in manual work in Ceph every time we lose a drive or
server, and a pretty bad experience for the customer when we have to do
maintenance.
We also kicked around the idea of leveraging the notion of a Hadoop rack
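
For reference, a Hadoop "rack" is just whatever the topology script
reports for a host, so the mapping is entirely up to the operator. A
minimal sketch of that mechanism (the script path and rack labels here
are made up):

In core-site.xml:

  <property>
    <name>net.topology.script.file.name</name>
    <value>/etc/hadoop/conf/rack-topology.sh</value>
  </property>

And the script itself, which has to print one rack path per host/IP it
is handed:

  #!/bin/bash
  # Map each host/IP that Hadoop passes in to a rack path;
  # anything unrecognised falls back to /default-rack.
  for host in "$@"; do
    case "$host" in
      10.0.1.*) echo /rack-a ;;
      10.0.2.*) echo /rack-b ;;
      *)        echo /default-rack ;;
    esac
  done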
Hi Warren,
Following our brief chat after the Ceph Ops session at the Vancouver
summit today, I added a few more notes to the etherpad
(https://etherpad.openstack.org/p/YVR-ops-ceph).
I wonder whether you'd considered setting up crush layouts so you can
have multiple cinder AZs or volume-types th
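
The volume-type flavour of that is roughly: carve out a CRUSH rule per
class of hardware, give each rule its own pool, point a separate cinder
backend at each pool, and expose those as volume types. A rough sketch
(rule, pool, backend and type names and the PG counts are all made up,
and the CRUSH roots "ssd" and "hdd" are assumed to already exist in your
map):

  # One CRUSH rule per hardware class
  ceph osd crush rule create-simple ssd-rule ssd host
  ceph osd crush rule create-simple hdd-rule hdd host

  # A pool per rule
  ceph osd pool create volumes-ssd 1024 1024 replicated ssd-rule
  ceph osd pool create volumes-hdd 4096 4096 replicated hdd-rule

Then in cinder.conf, one backend per pool:

  [DEFAULT]
  enabled_backends = ceph-ssd,ceph-hdd

  [ceph-ssd]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-ssd
  rbd_user = cinder
  rbd_ceph_conf = /etc/ceph/ceph.conf
  volume_backend_name = ceph-ssd

  [ceph-hdd]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-hdd
  rbd_user = cinder
  rbd_ceph_conf = /etc/ceph/ceph.conf
  volume_backend_name = ceph-hdd

And finally the volume types users actually see:

  cinder type-create ssd
  cinder type-key ssd set volume_backend_name=ceph-ssd
  cinder type-create hdd
  cinder type-key hdd set volume_backend_name=ceph-hdd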