Re: [ceph-users] Upgrade osd ceph version

2017-03-05 Thread Sean Redmond
Hi,

You should upgrade them all to the latest point release if you don't want to upgrade to the latest major release. Start with the mons, then the osds.

Thanks

On 3 Mar 2017 18:05, "Curt Beason" wrote:
> Hello,
>
> So this is going to be a noob question probably. I read the
> documentation,
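For reference, a minimal sketch of that order on a Debian/Ubuntu cluster; the package names, unit names and the one-node-at-a-time pacing are assumptions, not from the thread:

    # Monitors first, one node at a time; wait for HEALTH_OK between nodes:
    apt-get update && apt-get install --only-upgrade ceph ceph-mon
    systemctl restart ceph-mon@$(hostname -s)
    ceph -s

    # Then the OSD nodes, one at a time; let recovery settle between nodes:
    apt-get install --only-upgrade ceph ceph-osd
    systemctl restart ceph-osd@<id>    # repeat per OSD id on the node
    ceph -s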

[ceph-users] rgw multizone installation: many admin requests to each other

2017-03-05 Thread K K
Hello, all!

I have successfully created a 2-zone cluster (se and se2). But my radosgw machines are sending many GET /admin/log requests to each other after I put 10k items into the cluster via radosgw. It looks like:

2017-03-03 17:31:17.897872 7f21b9083700 1 civetweb: 0x7f222001f660: 10.30.18.24 - - [03
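Those GET /admin/log calls are the zones polling each other's metadata and data logs for multisite sync, so some steady traffic between the gateways is expected. A hedged way to check whether the zones are actually in sync (zone names taken from the message):

    # On an rgw host in the secondary zone:
    radosgw-admin sync status --rgw-zone=se2
    # Reports metadata and data sync state relative to the master zone;
    # "caught up" on both suggests the polling is normal sync activity.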

[ceph-users] can an OSD affect performance of pool X when it has blocking/slow requests on PGs from pool Y?

2017-03-05 Thread Alejandro Comisario
Hi, we have a 7-node Ubuntu Ceph Hammer cluster (78 OSDs to be exact). This weekend we experienced a huge outage for our customers' VMs (located on pool CUSTOMERS, replica size 3) when lots of OSDs started to slow-request/block PGs on pool PRIVATE (replica size 1); basically all PGs blocked wh
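Since every pool maps onto the same OSDs unless CRUSH says otherwise, an OSD stuck on slow requests stalls client I/O for all pools with PGs on it, which would match what is described here. A sketch for pinning down the culprits (osd.12 is a placeholder id; these commands existed on Hammer):

    ceph health detail                       # lists OSDs with blocked/slow requests
    ceph pg dump_stuck                       # any PGs stuck inactive/unclean/stale
    # On the host of a flagged OSD, see what it is actually stuck on:
    ceph daemon osd.12 dump_ops_in_flight
    ceph daemon osd.12 dump_historic_ops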

Re: [ceph-users] Mix HDDs and SSDs together

2017-03-05 Thread jiajia zhong
we are using a mix too: 8 × Intel PCIe 400G SSDs for the metadata pool and the cache tier pool of our cephfs. Plus: 'osd crush update on start = false', as Vladimir replied.

2017-03-03 20:33 GMT+08:00 Дробышевский, Владимир :
> Hi, Matteo!
>
> Yes, I'm using a mixed cluster in production but it's
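For context, the pre-Luminous way to dedicate SSDs to specific pools is a separate CRUSH root plus that config option; a sketch with assumed bucket, host and pool names:

    # ceph.conf on the SSD hosts, so the OSDs don't get moved back
    # under the default root when they restart:
    [osd]
    osd crush update on start = false

    # Build a parallel SSD hierarchy (all names are assumptions):
    ceph osd crush add-bucket ssd-root root
    ceph osd crush add-bucket node1-ssd host
    ceph osd crush move node1-ssd root=ssd-root
    ceph osd crush set osd.8 1.0 root=ssd-root host=node1-ssd   # weight is an assumption

    # A rule that only draws from the SSD root, applied to the metadata pool:
    ceph osd crush rule create-simple ssd-rule ssd-root host
    ceph osd pool set cephfs_metadata crush_ruleset <rule-id>   # 'crush_rule' on Luminous+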

[ceph-users] Error with ceph to cloudstack integration.

2017-03-05 Thread frank
Hi,

We have set up a Ceph cluster and a CloudStack server. All the OSDs are up and ceph status is currently OK.

[root@admin-ceph ~]# ceph status
    cluster ebac75fc-e631-4c9f-a310-880cbcdd1d25
     health HEALTH_OK
     monmap e1: 1 mons at {mon1=10.10.48.7:6789/0}
            election epoch
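The Ceph-side prerequisites CloudStack expects for RBD primary storage are roughly the following; the pool name, PG count and user name are assumptions, since the message doesn't show them:

    # A dedicated pool and a cephx user restricted to it:
    ceph osd pool create cloudstack 128
    ceph auth get-or-create client.cloudstack \
        mon 'allow r' \
        osd 'allow rwx pool=cloudstack'
    # Then add RBD primary storage in the CloudStack UI using
    # monitor 10.10.48.7, the pool name, the user 'cloudstack',
    # and the key printed by the command above.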

Re: [ceph-users] Error with ceph to cloudstack integration.

2017-03-05 Thread Wido den Hollander
> On 6 March 2017 at 6:26, frank wrote:
>
>
> Hi,
>
> We have set up a Ceph cluster and a CloudStack server. All the OSDs are up
> and ceph status is currently OK.
>
>
>
> [root@admin-ceph ~]# ceph status
> cluster ebac75fc-e631-4c9f-a310-880cbcdd1d25
> health HEALTH_OK
>