> On Jan 4, 2017, at 10:12 AM, Orit Wasserman wrote:
>
> On Wed, Jan 4, 2017 at 7:08 PM, Brian Andrus wrote:
>> Regardless of whether it worked before, have you verified your RadosGWs have
>> write access to monitors? They will need it if you want the RadosGW to
>> create its own pools.
>>
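If it helps, the caps on the gateway key can be inspected and widened from the ceph CLI. A minimal sketch, assuming the key is named client.radosgw.gateway (substitute your actual key name):

    # show the current caps on the gateway key
    ceph auth get client.radosgw.gateway

    # grant mon write access so RadosGW can create its own pools,
    # along with the usual osd caps
    ceph auth caps client.radosgw.gateway mon 'allow rw' osd 'allow rwx'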
I think your best approach would be to create a smaller RBD pool, migrate the
10% of RBDs that will remain RBDs into it, and then use the old pool just for
CephFS.
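A rough sketch of such a migration with the rbd CLI, assuming hypothetical pool names (rbd and rbd-small) and an example image name; note that rbd cp only copies the head of an image, so snapshots are not carried over:

    # create the smaller replacement pool (PG count is only illustrative)
    ceph osd pool create rbd-small 128 128

    # copy one image into the new pool; repeat for each image staying on RBD
    rbd cp rbd/volume-foo rbd-small/volume-foo

    # after verifying the copy, remove the original
    rbd rm rbd/volume-foo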
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of David Turner
Sent: 07 January 2017 23:55
To:
There are very few configuration settings passed between Cinder and
Nova when attaching a volume. I think the only real possibility
(untested) would be to configure two Cinder backends against the same
Ceph cluster using two different auth user IDs -- one for cache
enabled and another for cache disabled.
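A sketch of what that could look like (untested, and all names below are placeholders): two backend sections in cinder.conf pointing at the same pool but different users, with the per-client cache setting carried in ceph.conf on the compute nodes:

    # cinder.conf
    [DEFAULT]
    enabled_backends = rbd-cached, rbd-nocache

    [rbd-cached]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder-cached
    rbd_ceph_conf = /etc/ceph/ceph.conf

    [rbd-nocache]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder-nocache
    rbd_ceph_conf = /etc/ceph/ceph.conf

    # ceph.conf on the compute nodes
    [client.cinder-cached]
    rbd cache = true
    rbd cache writethrough until flush = true

    [client.cinder-nocache]
    rbd cache = false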
Why would you still be using journals when running OSDs fully on SSDs?
When using a journal, the data is first written to the journal, and then that
same data is (later on) written again to disk.
This is based on the assumption that the time to write the journal is only a
fraction of the time it costs to write the data to its final location on disk.
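As a back-of-envelope illustration of that assumption (the throughput figure is made up): with the journal collocated on the same SSD as the data, every client byte is written twice, so

    raw SSD write throughput      ~500 MB/s   (example figure)
    journal write + data write    = 2x write amplification
    sustained client throughput   ~500 / 2 = 250 MB/s

which is why a journal only pays off when writing it is much faster than writing to the data device.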
On 7-1-2017 15:03, Lionel Bouton wrote:
> On 07/01/2017 at 14:11, kevin parrikar wrote:
>> Thanks for your valuable input.
>> We were using these SSDs in our NAS box (Synology), where they were giving
>> 13k IOPS for our fileserver in RAID1. We had a few spare disks which we
>> added to our Ceph nodes ho