Source can be found here: https://github.com/Mosibi/ceph_usage
--
With regards,
Richard Arends.
Snow BV / http://snow.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
: rados --pool testpool rm testfile.0001
: ...
This works for me, thanks!
: Please open a tracker for this so it can be investigated further.
Done: http://tracker.ceph.com/issues/20233
-Yenya
[...] career, but a new opportunity has come
along that I just couldn't pass up.
Thanks for what you have done for the Ceph community.
ceph tell osd.* injectargs '--osd_scrub_begin_hour 0'
ceph tell osd.* injectargs '--osd_scrub_end_hour 24'
I always change the config settings in our config management system
(Puppet) first, and after that inject them with 'ceph tell'.
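A sketch of that two-step approach (the ceph.conf section layout is illustrative; the injectargs change is runtime-only and is lost on OSD restart, which is why the persistent change goes into the managed ceph.conf first):

```shell
# 1. Persist in ceph.conf (deployed via Puppet), e.g.:
#    [osd]
#        osd scrub begin hour = 0
#        osd scrub end hour = 24
# 2. Apply at runtime without restarting the OSDs:
ceph tell osd.* injectargs '--osd_scrub_begin_hour 0'
ceph tell osd.* injectargs '--osd_scrub_end_hour 24'
```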
[...] running with root user
permissions. Then stop the daemons, do the chown again, but then only on
the changed files ('find /var/lib/ceph/ ! -uid 64045 -print0 | xargs -0
chown ceph:ceph') and start the Ceph daemons with setuser and setgroup
set to ceph.
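The procedure above, sketched as commands. A sketch only: the systemd target name is an assumption (older init systems differ), and 64045 is the 'ceph' uid on Debian-based systems; verify both for your distribution.

```shell
# First pass while the daemons are still running as root (can take long):
chown -R ceph:ceph /var/lib/ceph
# Stop the daemons:
systemctl stop ceph.target
# Second pass, only on files written (as root) since the first pass
# (64045 is the 'ceph' uid on Debian-based systems):
find /var/lib/ceph/ ! -uid 64045 -print0 | xargs -0 chown ceph:ceph
# Start the daemons again, now with setuser/setgroup set to ceph:
systemctl start ceph.target
```

The point of the two passes is to keep the downtime short: the long recursive chown happens while the cluster is still serving I/O, and only the files that changed in the meantime need fixing while the daemons are stopped.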
I didn't know this option. Good tip!
On 01/17/2017 09:21 PM, David Turner wrote:
Looking through the additional OSD config options for scrubbing shows a
couple of options that can prevent a PG from scrubbing immediately.
osd_scrub_during_recovery - default true - If false, no new scrub can
be scheduled while there is active recovery
On 01/17/2017 05:15 PM, David Turner wrote:
David,
All OSDs with a copy of the PG must not be involved in any other scrub
for the scrub to start immediately. It is not just the primary OSD,
but all secondary OSDs as well, that must be free for a scrub to run on a PG.
I thought of that and checked if th
On 01/17/2017 04:09 PM, David Turner wrote:
Hi,
You want to look into the setting osd_max_scrubs, which indicates how
many different scrub operations an OSD can be involved in at once.
That's still at the default, thus 1. A PG that I wanted to deep-scrub this
afternoon should be done by an OSD
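To check this on a live cluster, something like the following should work (PG id and OSD number are examples from the thread; the injectargs change is runtime-only):

```shell
ceph pg map 1.18d5                               # which OSDs hold this PG (primary + replicas)
ceph daemon osd.34 config get osd_max_scrubs     # current value, run on that OSD's host
ceph tell osd.* injectargs '--osd_max_scrubs 2'  # temporarily allow 2 concurrent scrubs per OSD
```

With the mapping from 'ceph pg map' you can then check whether any of the listed OSDs is already busy scrubbing another PG, which would explain the delay.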
Hi,
When I start a deep scrub on a PG by hand ('ceph pg deep-scrub 1.18d5'),
sometimes the deep scrub is executed directly after the command is
entered, but often it is not: there is a lot of time between starting and
executing. For example:
2017-01-17 05:25:31.786 session 01162017 :: Starting d
Hi,
Lately I am doing a lot of data migration to Ceph, using rados with the
--striper option, and sometimes an upload to a Ceph pool gets interrupted,
resulting in a corrupt object that cannot be re-uploaded or removed
using 'rados --striper rm'. Trying that will result in an error message.
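The workaround that came out of this thread was to remove the underlying RADOS objects with plain rados (without --striper), as quoted above. A sketch, with the example pool and object names from the thread:

```shell
# The striper stores an upload as plain RADOS objects named <name>.<suffix>,
# plus striper metadata; list the pieces of the interrupted upload:
rados --pool testpool ls | grep '^testfile\.'
# ...and remove them one by one without the striper:
rados --pool testpool rm testfile.0001
```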
On 03/03/2016 09:04 PM, Philip S. Hempel wrote:
On 03/03/2016 03:00 PM, Richard Arends wrote:
Do you have more info before and after this message?
There are about 40 lines above like this; this is the last few lines:
-8> 2016-03-03 14:47:54.244421 7f5b57c01840 5 osd.34 pg_epoch: 89
On 03/03/2016 08:32 PM, Philip S. Hempel wrote:
On 03/03/2016 01:49 PM, Richard Arends wrote:
On 03/03/2016 07:21 PM, Philip S. Hempel wrote:
Philip,
Sorry, can't help you with the segfault. What I would do is set debug
options in ceph.conf and start the OSD; maybe that extra debug info will
give you something you can work with.
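For example (debug levels here are illustrative; 20 is very verbose and will fill the log quickly, so turn it back down afterwards):

```ini
[osd]
    debug osd = 20
    debug filestore = 20
    debug ms = 1
```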
On 03/03/2016 01:15 PM, Richard Arends wrote:
On 03/03/2016 06:56 PM, Philip Hempel wrote:
I did the import after using ceph-objectstore-tool to remove the pg, and that
osd (34) segfaults now.
Segfault output is not my cup of tea, but is that exactly the same
segfault as you posted earlier?
--
Regards,
Richard.
On 03/03/2016 06:40 PM, Philip Hempel wrote:
osd 45. But that import causes a segfault on the osd
Did that OSD already have info (files) for that PG?
On 03/03/2016 06:12 PM, Philip Hempel wrote:
Philip,
I forgot to CC the list, now I did...
To export the data I used ceph-objectstore-tool with the export
command.
I am trying to repair a cluster that has 74 pgs that are down. I have
seen that the pgs in question are presently w
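A sketch of such a PG export/import with ceph-objectstore-tool. This assumes filestore-era paths and systemd unit names, and the pgid is a placeholder; the OSDs must be stopped while the tool runs.

```shell
# Export the PG from the source OSD (here osd.34, as in the thread):
systemctl stop ceph-osd@34
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-34 \
    --journal-path /var/lib/ceph/osd/ceph-34/journal \
    --op export --pgid <pgid> --file /tmp/pg.export
# Import on the target OSD (also stopped; here osd.45):
systemctl stop ceph-osd@45
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-45 \
    --journal-path /var/lib/ceph/osd/ceph-45/journal \
    --op import --file /tmp/pg.export
```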