[ceph-users] Filestore update script?

2016-06-07 Thread WRIGHT, JON R (JON R)
I'm trying to recover an OSD after running xfs_repair on the disk. It seems to be ok now. There is a log message that includes the following: "Please run the FileStore update script before starting the OSD, or set filestore_update_to to 4" What is the FileStore update script? Google search d
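For context, the option named in that log message can be set in ceph.conf rather than running a script; a minimal sketch, assuming the recovered OSD is osd.12 (hypothetical id) on an Ubuntu/Upstart host of that era:

    # /etc/ceph/ceph.conf -- under [osd], or [osd.12] to limit it to the one OSD
    [osd]
        filestore update to = 4

    # then start the OSD and watch its log for the upgrade messages
    sudo start ceph-osd id=12
    tail -f /var/log/ceph/ceph-osd.12.log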

Re: [ceph-users] Filestore update script?

2016-06-08 Thread WRIGHT, JON R (JON R)
Wido, Thanks for that advice, and I'll follow it. To your knowledge, is there a FileStore update script around somewhere? Jon On 6/8/2016 3:11 AM, Wido den Hollander wrote: On 7 June 2016 at 23:08, "WRIGHT, JON R (JON R)" wrote: I'm trying to recover an OSD after r

[ceph-users] jewel blocked requests

2016-09-12 Thread WRIGHT, JON R (JON R)
Since upgrading to Jewel from Hammer, we've started to see HEALTH_WARN because of 'blocked requests > 32 sec'. Seems to be related to writes. Has anyone else seen this? Or can anyone suggest what the problem might be? Thanks!
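A hedged way to narrow down where the blocked requests sit, using standard Jewel-era commands (osd.12 is a hypothetical id; the daemon commands run on the host carrying that OSD):

    ceph health detail                      # names the OSDs with blocked/slow requests
    ceph daemon osd.12 dump_ops_in_flight   # ops currently stuck on that OSD
    ceph daemon osd.12 dump_historic_ops    # recently completed slow ops, with timings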

Re: [ceph-users] jewel blocked requests

2016-09-13 Thread WRIGHT, JON R (JON R)
Yes, I do have old clients running. The clients are all vms. Is it typical that vm clients have to be rebuilt after a ceph upgrade? Thanks, Jon On 9/12/2016 4:05 PM, Wido den Hollander wrote: On 12 September 2016 at 18:47, "WRIGHT, JON R (JON R)" wrote: Since upgrading to
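One way to check whether the hypervisors are still running an old client library; a sketch assuming Ubuntu compute nodes (QEMU only picks up a new librbd after the package is upgraded and the guest is restarted or live-migrated):

    dpkg -l librbd1 librados2                 # client library versions on the compute node
    ceph daemon mon.$(hostname -s) sessions   # on a monitor host: currently connected clients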

Re: [ceph-users] jewel blocked requests

2016-09-13 Thread WRIGHT, JON R (JON R)
On 12 September 2016 at 18:47, "WRIGHT, JON R (JON R)" wrote: Since upgrading to Jewel from Hammer, we've started to see HEALTH_WARN because of 'blocked requests > 32 sec'. Seems to be related to w

Re: [ceph-users] jewel blocked requests

2016-09-13 Thread WRIGHT, JON R (JON R)
VM Client OS: ubuntu 14.04 Openstack: kilo libvirt: 1.2.12 nova-compute-kvm: 1:2015.1.4-0ubuntu2 Jon On 9/13/2016 11:17 AM, Wido den Hollander wrote: On 13 September 2016 at 15:58, "WRIGHT, JON R (JON R)" wrote: Yes, I do have old clients running. The clients are all v

Re: [ceph-users] jewel blocked requests

2016-09-19 Thread WRIGHT, JON R (JON R)
d and are replacing a disk, and I think the blocked requests may have all been associated with PGs that included the bad OSD/disk. Would this make sense? Jon On 9/15/2016 3:49 AM, Wido den Hollander wrote: On 13 September 2016 at 18:54, "WRIGHT, JON R (JON R)" wrote: VM Cl
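A quick cross-check of that theory is to list the PGs whose acting set includes the failing OSD and compare against the blocked-request warnings; a sketch with osd.12 as a hypothetical id:

    ceph pg ls-by-osd 12                    # PGs that map to osd.12
    ceph health detail | grep -i blocked    # which OSDs are reporting blocked requests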

Re: [ceph-users] [EXTERNAL] Re: jewel blocked requests

2016-09-22 Thread WRIGHT, JON R (JON R)
rate. Most of the current messages are associated with two hosts. Jon On 9/19/2016 7:45 PM, Will.Boege wrote: Sorry make that 'ceph tell osd.* version' On Sep 19, 2016, at 2:55 PM, WRIGHT, JON R (JON R) wrote: When you say client, we're actually doing everything throu
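For reference, the check being suggested, plus a per-monitor equivalent (daemon versions should all report the upgraded release; clients still have to be checked on the compute nodes):

    ceph tell osd.* version                    # running version of every OSD
    ceph daemon mon.$(hostname -s) version     # run locally on each monitor host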

[ceph-users] monitors at 100%; cluster out of service

2017-02-28 Thread WRIGHT, JON R (JON R)
I currently have a situation where the monitors are running at 100% CPU, and can't run any commands because authentication times out after 300 seconds. I stopped the leader, and the resulting election picked a new leader, but that monitor shows exactly the same behavior. Now both monitors *th
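When cluster authentication times out, the monitor admin socket is still reachable locally; a sketch assuming the default socket path and the short hostname as the mon id:

    ceph daemon mon.$(hostname -s) mon_status      # state, quorum membership, election epoch
    ceph daemon mon.$(hostname -s) quorum_status   # who the mons think is in quorum
    # same thing via the socket path directly:
    ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status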

[ceph-users] hb in and hb out from pg dump

2016-02-04 Thread WRIGHT, JON R (JON R)
New ceph user, so a basic question :) I have a newly setup Ceph cluster. Seems to be working ok. But . . . I'm looking at the output of ceph pg dump, and I see that in the osdstat list at the bottom of the output, there are empty brackets [] in the 'hb out' column for all of the OSDs. It
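To look at just that osdstat table without the rest of the pg dump output, something like this should work on a Hammer/Jewel cluster:

    ceph pg dump osds                  # per-OSD stats only, including the hb in / hb out columns
    ceph pg dump osds -f json-pretty   # same data; the heartbeat peer lists are easier to read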

[ceph-users] pg dump question

2016-02-04 Thread WRIGHT, JON R (JON R)
New ceph user, so a basic question. I have a newly setup Ceph cluster. Seems to be working ok. But . . . I'm looking at the output of ceph pg dump, and I see that in the osdstat list at the bottom of the output, there are empty brackets [] in the 'hb out' column for all of the OSDs. It seem

[ceph-users] erasure code backing pool, replication cache, and openstack

2016-02-09 Thread WRIGHT, JON R (JON R)
New user. :) I'm interested in exploring how to use an erasure coded pool as block storage for Openstack. Instructions are on this page. http://docs.ceph.com/docs/master/rados/operations/erasure-code/ Of course, it says "It is not possible to create an RBD image on an erasure coded pool b
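The usual workaround in that era was to front the erasure coded pool with a replicated cache tier, so that RBD (and therefore Cinder/Glance) writes land on the cache; a minimal sketch with illustrative pool names, pg counts, and EC profile:

    # erasure coded base pool
    ceph osd erasure-code-profile set ecprofile k=4 m=2
    ceph osd pool create ecpool 128 128 erasure ecprofile

    # replicated cache pool layered on top of it
    ceph osd pool create cachepool 128 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
    ceph osd pool set cachepool hit_set_type bloom

    # RBD images are then created against the base pool and served through the cache;
    # OpenStack (cinder/glance) would be pointed at the base pool name
    rbd create --pool ecpool --size 10240 testimage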