Hi Maarten,
On Tue, Jul 4, 2017 at 9:46 PM, Maarten De Quick
wrote:
> Hi,
>
> Background: We're having issues with our index pool (slow requests / timeouts
> cause an OSD crash and a recovery -> application issues). We
> know we have very big buckets (e.g. a bucket of 77 million objects wit
Refer to http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/
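For the record, the remove-and-recreate sequence from that page looks roughly
like this (a sketch only; the mon name, paths and address are placeholders, so
check the doc for your exact release):

  ceph mon remove mon-c                    # drop the stuck mon from the monmap
  ceph auth get mon. -o /tmp/mon.keyring   # fetch the mon keyring
  ceph mon getmap -o /tmp/monmap           # fetch the current monmap
  ceph-mon -i mon-c --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  ceph-mon -i mon-c --public-addr 192.0.2.3:6789   # start it and let it sync from scratch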
I recall we encountered the same issue after upgrading to Jewel :(.
2017-07-05 11:21 GMT+08:00 许雪寒 :
> Hi, everyone.
>
> Recently, we upgraded one of our clusters from Hammer to Jewel. However, after
> the upgrade one of our m
Hello,
I noticed the same behaviour in our cluster.
ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)
 cluster 0a9f2d69-5905-4369-81ae-e36e4a791831
  health HEALTH_WARN
         1 pgs backfill_toofull
         4366 pgs backfill_wait
         11 pgs backfillin
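In case it is useful: a sketch of what can relieve backfill_toofull on Jewel
(the 0.90 value and the pre-Luminous option name are assumptions, verify them
before running; injectargs may also warn that a restart is needed):

  ceph osd df                                                   # check the %USE column for nearly-full OSDs
  ceph tell osd.* injectargs '--osd_backfill_full_ratio 0.90'   # temporarily raise the backfill threshold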
Hi, everyone.
Recently, we upgraded one of our clusters from Hammer to Jewel. However, after
the upgrade one of our monitors cannot finish the bootstrap procedure and is
stuck in “synchronizing”. Does anyone have any clue about this?
Thank you☺
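A quick way to watch what the stuck mon is doing is its admin socket (a
sketch; the mon name is a placeholder):

  ceph daemon mon.$(hostname -s) mon_status   # 'state' should eventually move past 'synchronizing'
  ceph -s                                     # quorum as seen from the healthy mons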
On a test cluster with 994GB used, via collectd I get an incorrect
9.3362651136e+10 (93GB) reported in influxdb; this should be 933GB (or
actually 994GB). Cluster.osdBytes is reported correctly:
3.3005833027584e+13 (30TB)
  cluster:
    health: HEALTH_OK
  services:
    mon: 3 daemons, q
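A way to cross-check what collectd should be sending (a sketch; the JSON field
name is what I believe current releases emit, verify on your version). Note
that 9.3362651136e+10 is almost exactly a factor of 10 short of the expected
~9.33e+11 bytes:

  ceph df --format json | python -c 'import sys, json; print(json.load(sys.stdin)["stats"]["total_used_bytes"])'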
Hi!
After getting some other stuff done, I finally got around to continuing here.
I set up a whole new cluster with ceph-deploy, but adding the first OSD fails:
ceph-deploy osd create --bluestore ${HOST}:/dev/sdc --block-wal
/dev/cl/ceph-waldb-sdc --block-db /dev/cl/ceph-waldb-sdc
.
.
.
[WARNIN
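One thing worth ruling out before digging further (an assumption on my part,
since the output is cut off): --block-wal and --block-db point at the same LV
above. A sketch with two separate LVs instead (sizes are illustrative only):

  lvcreate -L 1G  -n ceph-wal-sdc cl       # small LV for the WAL
  lvcreate -L 10G -n ceph-db-sdc  cl       # larger LV for the RocksDB data
  ceph-deploy osd create --bluestore ${HOST}:/dev/sdc \
      --block-wal /dev/cl/ceph-wal-sdc --block-db /dev/cl/ceph-db-sdc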
Hi,
Background: We're having issues with our index pool (slow requests / timeouts
cause an OSD crash and a recovery -> application issues). We
know we have very big buckets (e.g. a bucket of 77 million objects with only
16 shards) that need a reshard so we were looking at the resharding proce
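For the archive, the offline reshard in question is along these lines (a
sketch; it assumes a Jewel build that already ships the 'bucket reshard'
subcommand, and the shard count just follows the usual ~100k objects per shard
rule of thumb, so 77 million objects wants several hundred shards):

  radosgw-admin bucket reshard --bucket=BIGBUCKET --num-shards=769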
On 04/07/2017 at 19:00, Jack wrote:
> You may just upgrade to Luminous, then replace filestore by bluestore
You don't just "replace" filestore by bluestore on a production cluster:
you transition from the first to the second over several weeks/months.
The two must be rock stable and have predic
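Per OSD that transition is roughly the following (a sketch for Luminous; 'osd
purge' and ceph-volume only exist there, so double-check against your release):

  ceph osd out 12                            # drain the filestore OSD
  # ...wait until all PGs are active+clean again...
  systemctl stop ceph-osd@12
  ceph osd purge 12 --yes-i-really-mean-it
  ceph-volume lvm create --bluestore --data /dev/sdX   # recreate it as bluestore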
You may just upgrade to Luminous, then replace filestore by bluestore
Don't be scared, as Sage said:
> The only good(ish) news is that we aren't touching FileStore if we can
> help it, so it's less likely to regress than other things. And we'll
> continue testing filestore+btrfs on jewel for some
On 30/06/2017 at 18:48, Sage Weil wrote:
> On Fri, 30 Jun 2017, Lenz Grimmer wrote:
>> Hi Sage,
>>
>> On 06/30/2017 05:21 AM, Sage Weil wrote:
>>
>>> The easiest thing is to
>>>
>>> 1/ Stop testing filestore+btrfs for luminous onward. We've recommended
>>> against btrfs for a long time and are
Hello,
Are you running 10.0.2.5 or 10.2.5?
If you are running 10.2 you can read this documentation:
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#pgs-inconsistent
'rados list-inconsistent-obj' will give you the reason for this scrub error.
And I would not use
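Concretely, the sequence is something like this (a sketch; the pool name and
pgid are placeholders, and only repair once the cause is understood):

  rados list-inconsistent-pg rbd                            # find the inconsistent PG(s) in a pool
  rados list-inconsistent-obj 2.17f --format=json-pretty    # see which object/shard and why
  ceph pg repair 2.17f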
On 02.07.2017 at 13:23, Hauke Homburg wrote:
> Hello,
>
> I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2
> and ceph 10.0.2.5. All OSDs run in a RAID6.
> In this cluster I have a deep scrub error:
> /var/log/ceph/ceph-osd.6.log-20170629.gz:389 .356391 7f1ac4c57700 -1
> log_cha
On 07/04/2017 06:57 AM, Z Will wrote:
Hi:
I am testing ceph-mon split brain. I have read the code. If I
understand it right, it won't split brain. But I think
there is still another problem. My ceph version is 0.94.10. And here
are my test details:
3 ceph-mons, their ranks are 0,
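For reference, with 3 mons Paxos needs floor(3/2)+1 = 2 of them for quorum, so
a clean two-way partition can never form quorum on both sides at once. A quick
check (sketch):

  ceph quorum_status --format json-pretty   # 'quorum' lists the ranks currently in quorum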
Awesome, that did it.
I am considering creating a separate Bareos device with striping, testing there,
and then phasing out the old non-striped pool... Maybe that would also fix the
suboptimal throughput...
But from the Ceph side of things, it looks like I'm good now.
Thanks again :)
Cheers,
Martin
On Tue, Jul 4, 2017 at 1:49 PM, Florent B wrote:
> Hi everyone,
>
> I would like to remove an MDS from the map. How do I do this?
>
> # ceph mds rm mds.$ID;
> Invalid command: mds.host1 doesn't represent an int
> mds rm : remove nonactive mds
> Error EINVAL: invalid command
Avoid "mds rm" (you never
2017-07-04 12:10 GMT+00:00 Martin Emrich :
...
> So as striping is not backwards-compatible (and this pool is indeed for
> backup/archival purposes where large objects are no problem):
>
> How can I restore the behaviour of jewel (allowing 50GB objects)?
>
> The only option I found was "osd max w
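Whether that is the right knob I cannot tell from here, but any such osd_*
limit can be tried at runtime before committing it to ceph.conf. Purely as an
illustration (osd_max_write_size is my guess at a relevant option; its value is
in MB):

  ceph tell osd.* injectargs '--osd_max_write_size 51200'   # 50 GB expressed in MB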
Hi!
I dug deeper, and apparently striping is not backwards-compatible with
"non-striping":
* "rados ls --stripe" lists only objects where striping was used to write them
in the first place.
* If I enable striping in Bareos (I tried different values for stripe_unit and
stripe_count), it crashes he
Hi,
thanks for the explanation! I am just now diving into the C code of Bareos; it
seems there is already code in there to use libradosstriper, I would just have
to turn it on ;)
There are two parameters (stripe_unit and stripe_count), but there are no
default values.
What would be sane d
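Not knowing Bareos' intent, here is at least a worked example of what the
numbers mean (all values are assumptions for illustration, not known defaults;
the one hard constraint is that object_size must be a multiple of stripe_unit):

  # assume stripe_unit = 4 MiB, stripe_count = 8, object_size = 64 MiB
  echo $(( 50 * 1024 / 64 ))   # a 50 GiB volume then lands in 800 RADOS objects of 64 MiB each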
OK, I noticed there is a new command to set these.
I tried these and the ratios are still showing as 0, plus an error that the
full ratios are out of order: "ceph osd set-{full,nearfull,backfillfull}-ratio"
,Ashley
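For reference, the commands and the ordering they enforce (nearfull <
backfillfull < full, presumably the "out of order" complaint) should be, on a
Luminous test build:

  ceph osd set-nearfull-ratio 0.85
  ceph osd set-backfillfull-ratio 0.90
  ceph osd set-full-ratio 0.95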