Hi, first off: long-time reader, first-time poster :)
I have a 4-node Ceph cluster (~12 TB in total) and an OpenStack cloud
(Juno) running.
Everything we have is SUSE-based, with Ceph 0.80.8.
Now, the cluster itself works fine:
cluster 54636e1e-aeb2-47a3-8cc6-684685264b63
health HEALTH_OK
Hi Eric, thanks for the reply.
As far as I can tell, client.glance already has all the rights
it needs on the images pool?
//f
> Glance needs some additional permissions, including write access to the
> pool you want to add images to. See the docs at:
>
> http://ceph.com/docs/master/rbd/rbd-
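For reference, the caps Glance is usually given (per the rbd-openstack docs for the 0.80.x line) can be checked, and if needed recreated, roughly like this; the pool name "images" is assumed from the message above:

# show the current caps for the glance user
ceph auth get client.glance

# caps as the docs create them for Glance; 'images' is the target pool
ceph auth get-or-create client.glance mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'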
Hi, I have a bit of a problem. I have a fully functioning Ceph cluster. Each
server has an SSD drive that we would like to use as a cache pool, and six 1.7 TB
data drives that we would like to put into an erasure-coded pool.
Yes, I would like to put a cache pool overlaid on the erasure-coded pool.
Ceph version
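In case it helps anyone following along, a minimal sketch of how that is usually wired up; the pool names, PG counts and k/m values below are placeholders, not taken from the mail:

# erasure-code profile and data pool (k/m and failure domain are examples only)
ceph osd erasure-code-profile set ecprofile k=4 m=2 ruleset-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ecprofile

# replicated pool on the SSDs, to be used as the cache tier
ceph osd pool create cachepool 128 128

# attach the cache tier and make it the overlay for the EC pool
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool

# basic sizing/flushing knobs for the cache tier
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool target_max_bytes 200000000000
ceph osd pool set cachepool cache_target_dirty_ratio 0.4
ceph osd pool set cachepool cache_target_full_ratio 0.8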
Hi, we are just testing our new Ceph cluster, and to optimise our spinning disks
we created an erasure-coded pool and an SSD cache pool.
We modified the CRUSH map to make an SSD pool, as each server contains 1 SSD
drive and 5 spinning drives.
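For anyone wondering how that CRUSH split is typically done, a rough sketch; the bucket, host and rule names are invented for illustration:

# dump, decompile, edit, recompile and inject the CRUSH map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# in crush.txt, add an SSD-only root plus a rule that uses it, e.g.
#   root ssd {
#       id -10
#       alg straw
#       hash 0  # rjenkins1
#       item host1-ssd weight 0.5
#       item host2-ssd weight 0.5
#   }
#   rule ssd {
#       ruleset 1
#       type replicated
#       min_size 1
#       max_size 10
#       step take ssd
#       step chooseleaf firstn 0 type host
#       step emit
#   }

crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new

# point the cache pool at the SSD rule (ruleset 1 in this example)
ceph osd pool set cachepool crush_ruleset 1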
Stress testing the cluster in terms of read performance
r this task.
>> I think that 200-300MB/s is actually not bad (without knowing anything about
>> the hardware setup, as you didn’t give details…) coming from those drives,
>> but expect to replace them soon.
>>
>>> On 11 Dec 2015, at 13:44, Florian Rommel
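For completeness, the sort of read stress test usually quoted in these threads is rados bench; the pool name and runtimes below are just examples:

# write some objects first so the read phases have something to fetch
rados bench -p testpool 60 write --no-cleanup

# sequential and random read phases against those objects
rados bench -p testpool 60 seq
rados bench -p testpool 60 rand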
Hi all, after basically throwing away the SSDs we had because of very poor
journal write performance, I tested our test systems with spindle drives only.
The results are quite horrifying, and I get the distinct feeling that I am doing
something wrong somewhere.
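As an aside, the usual yardstick for whether an SSD can cope with Ceph journal traffic is a direct, synchronous 4k write test with fio; the device path below is a placeholder, and the test overwrites whatever is on that device:

# O_DIRECT + O_DSYNC 4k writes, roughly the pattern a Ceph journal produces
# WARNING: destructive to the target device
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=ssd-journal-test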
So read performance is great, giving
Ah, totally forgot the additional details :)
OS is SUSE Linux Enterprise Server 12.0 with all patches,
Ceph version 0.94.3.
4-node cluster with 2x 10GbE networking, one for the cluster and one for the public
network, plus 1 additional server purely as an admin server.
The test machine is also connected over 10GbE.
ceph.conf
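For a split public/cluster network setup like the one described, a ceph.conf typically looks something like the sketch below; every address and value here is an invented placeholder, not the poster's real config:

[global]
fsid = <cluster fsid>
mon_host = 10.0.1.1,10.0.1.2,10.0.1.3
# client-facing 10GbE
public_network = 10.0.1.0/24
# replication/backfill 10GbE
cluster_network = 10.0.2.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[osd]
osd_journal_size = 10240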
drives are doing
>
> On Wed, Dec 23, 2015 at 4:35 PM Florian Rommel
> <mailto:florian.rom...@datalounges.com>>
> wrote:
> Ah, totally forgot the additional details :)
>
> OS is SUSE Enterprise Linux 12.0 with all patches,
> Ceph version 0.94.3
> 4 node cluster with 2x
the resources are saturating
>
> Thanks & Regards
> Somnath
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Tyler Bishop
> Sent: Saturday, December 26, 2015 8:38 AM
> To: Florian Rommel
OK, weird problem(s), if you want to call it that..
So I run a 10-OSD Ceph cluster on 4 hosts with SSDs (Intel DC S3700) as journals.
I have a lot of mixed workloads running, and the Linux machines seem to get
corrupted somehow in a weird way, and the performance kind of sucks.
First off:
All hosts
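To narrow down where the slowness sits, the per-OSD latency counters and a quick on-OSD bench are usually the first stop; the OSD id below is just an example:

# commit/apply latency per OSD as tracked by the cluster
ceph osd perf

# raw write throughput of one OSD's backing store (writes 1 GB by default)
ceph tell osd.0 bench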