Hi,
I also tried:
$ ceph mon ok-to-stop all
No luck again. It seems Ceph ignores this.
The other Ceph cluster, which has 9 nodes (and 3 mons), upgraded successfully.
Gencer.
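As far as I know, ok-to-stop expects one or more concrete mon names rather
than the keyword "all", so it may simply be treating "all" as a (non-existent)
mon id. A minimal sketch, assuming mons named a, b and c:
$ ceph mon stat          # shows the actual mon names in quorum
$ ceph mon ok-to-stop a
$ ceph mon ok-to-stop b
$ ceph mon ok-to-stop c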
Why do you think the performance is slow? Compared to what?
I have already mentioned, here and at a Ceph Day, that we (the Ceph
community + Red Hat) should put some benchmark tests out there, so people
can compare and know what to expect. It would also be nice if the
release notes mentioned
Hello,
Thank you very much.
I was a bit worried about all the other messages, especially those two from a
different container (started before the right one?):
Jun 03 08:22:23 testnode1 bash[3169]: rados_connect: -13
Jun 03 08:22:23 testnode1 bash[3169]: Can't connect to cluster: -13
Neverth
rados_connect() is used by the recovery and/or grace code. It's
configured separately from CephFS, so its errors are unrelated to
CephFS issues.
Daniel
On 6/3/20 8:54 AM, Simon Sutter wrote:
Hello,
Thank you very much.
I was a bit worried about all the other messages, especially those tw
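A side note on the error code itself: -13 is EACCES (permission denied),
which usually points at the cephx key or caps of whichever client id that
other container was started with. A quick way to check, with
client.testclient as a purely hypothetical client name:
$ ceph auth get client.testclient   # show the key and the mon/osd/mds caps it has
$ ceph auth ls                      # list all keys if unsure which id the container uses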
Hello,
I think I misunderstood the internal / public network concepts in the docs
https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/.
Now there are two questions:
- Is it somehow possible to bind the MON daemon to 0.0.0.0?
I tried manually adding the IP in /var/li
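As far as I understand it, the mon cannot listen on 0.0.0.0: the address it
binds to is also the address recorded in the monmap and handed out to clients,
so it has to be a concrete IP on the public network. A sketch of the relevant
ceph.conf options, with made-up subnets and hostname:
[global]
    public network = 192.168.0.0/24    # mons, MDS, RGW and client traffic
    cluster network = 10.0.0.0/24      # OSD replication and heartbeats only
[mon.testnode1]
    public addr = 192.168.0.10         # the one address this mon binds to and publishes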
Hi,
I've been using pg-upmap items both in the ceph balancer and by hand
running osdmaptool for a while now (on Ceph 12.2.13).
But I've noticed a side effect of pg-upmap-items which can sometimes lead to
some unnecessary data movement.
My understanding is that the ceph osdmap keeps track of upmap-
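For context, the osdmaptool/upmap workflow referred to here looks roughly
like this (pool name and pg id are made up):
$ ceph osd getmap -o om                            # grab the current osdmap
$ osdmaptool om --upmap out.txt --upmap-pool rbd   # propose pg-upmap-items entries
$ cat out.txt                                      # plain "ceph osd pg-upmap-items ..." commands
$ bash out.txt                                     # apply them
$ ceph osd rm-pg-upmap-items 1.2f                  # drop a single mapping again if needed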
The last two days we've experienced a couple short outages shortly after
setting both 'noscrub' and 'nodeep-scrub' on one of our largest Ceph clusters
(~2,200 OSDs). This cluster is running Nautilus (14.2.6) and setting/unsetting
these flags has been done many times in the past without a problem.
Hello,
I have a live Ceph cluster, and I need to modify the bucket
hierarchy. I am currently using the default CRUSH rule (i.e. keep each replica
on a different host). I need to add a “chassis” level and keep replicas
at the chassis level.
From what I read in the documentati
Thanks Frank,
I don’t have too much experience editing crush rules, but I assume the
chooseleaf step would also have to change to:
step chooseleaf firstn 0 type chassis
Correct? Is that the only other change that is needed? It looks like the rule
change can happen both inside and out
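As far as I know that chooseleaf edit is the key change, assuming the chassis
buckets already exist in the map. A sketch of the edited rule as it would
appear in a decompiled crushmap, plus the CLI alternative of creating a new
rule and repointing the pools (rule and pool names are made up):
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type chassis   # was: type host
        step emit
}
$ ceph osd crush rule create-replicated replicated_chassis default chassis
$ ceph osd pool set mypool crush_rule replicated_chassis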
Hi all,
I'm wondering if there is a plan to add Nautilus builds for CentOS 8 [0]. Right
now I see there are builds for CentOS 7, but for CentOS 8 there are only builds
for Octopus and master.
Thanks,
V
[0]
https://shaman.ceph.com/api/search/?project=ceph&distros=centos/8&ref=nautilus&sha1=la
cbs.centos.org offers 14.2.7 packages for el8, e.g.
https://cbs.centos.org/koji/buildinfo?buildID=28564 but I don’t know anything
about their provenance or nature.
For sure a downloads.ceph.com package would be desirable.
> On Jun 3, 2020, at 4:04 PM, Victoria Martinez de la Cruz
> wrote:
>
> Hi
Thanks Anthony. Yes, it's intended to be used in a CI job in which we are
already relying on shaman builds. I'm curious if there is a reason we are
not building Nautilus (and below) on CentOS 8.
On Wed, Jun 3, 2020 at 8:17 PM Anthony D'Atri
wrote:
> cbs.centos.org offers 14.2.7 packages for el8
I think 'chassis' is OK. If you change host to chassis, you should have
chassis declarations in the crushmap, just as osds and hosts do.
For example, commands like "ceph osd crush add-bucket chassis-1 chassis"
and "ceph osd crush move host-1 chassis=chassis-1" should be executed.
Did you change mon_host in ceph.conf when you set the IP back to
192.168.0.104?
I did a monitor IP change on a live cluster, but I had 3 mons and I
modified only one IP and then submitted the new monmap.
wrote on Fri, May 29, 2020 at 11:55 PM:
> ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautil
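For reference, the documented monmap-editing procedure looks roughly like
this (mon name "a" is a placeholder; the affected mon has to be stopped for
the extract/inject steps):
$ ceph mon getmap -o /tmp/monmap             # or: ceph-mon -i a --extract-monmap /tmp/monmap
$ monmaptool --print /tmp/monmap
$ monmaptool --rm a /tmp/monmap
$ monmaptool --add a 192.168.0.104:6789 /tmp/monmap
$ ceph-mon -i a --inject-monmap /tmp/monmap  # then start the mon again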
Yes, that’s what I had in mind.
Thank you!
George
> On Jun 3, 2020, at 8:41 PM, Zhenshi Zhou wrote:
>
> I think 'chassis' is OK. If you change host to chassis, you should have
> chassis declaration in the crushmap, as osds and hosts do.
> Using command for example, "ceph osd crush add-bucket
Maybe you could try moving the options to the global or osd section.
陈旭 wrote on Fri, May 29, 2020 at 11:29 PM:
> Hi guys, I deployed an EFK cluster and use Ceph as block storage in
> Kubernetes, but RBD write IOPS sometimes drops to zero and stays there for a
> few minutes. I want to check the RBD logs, so I added some confi
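For what it's worth, a sketch of the librbd client-side logging options meant
here, placed in the global section as suggested; this assumes the volumes go
through librbd rather than the kernel rbd module, and the values and path are
just examples:
[global]
    debug rbd = 20
    debug rados = 10
    log file = /var/log/ceph/$name.$pid.log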
You can use the command-line without editing the crush map. Look at the
documentation of commands like
ceph osd crush add-bucket ...
ceph osd crush move ...
Before starting this, set "ceph osd set norebalance" and unset after you are
happy with the crush tree. Let everything peer. You should se
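Putting the add-bucket/move commands together with the norebalance advice,
the whole sequence might look something like this (bucket and host names are
made up):
$ ceph osd set norebalance
$ ceph osd crush add-bucket chassis-1 chassis
$ ceph osd crush move chassis-1 root=default
$ ceph osd crush move host-1 chassis=chassis-1
$ ceph osd crush tree        # verify the hierarchy before letting data move
$ ceph osd unset norebalance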
Try using the kernel client instead of the FUSE client. The FUSE client
is known to be slow for a variety of reasons and I suspect you may see
faster performance with the kernel client.
Thanks,
Mark
On 6/2/20 8:00 PM, Derrick Lin wrote:
Hi guys,
We just deployed a CEPH 14.2.9 cluster wit
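For completeness, the two mount flavours being compared, with made-up monitor
address and paths:
$ ceph-fuse /mnt/cephfs                           # FUSE client
$ mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret   # kernel client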
Hi all,
I'm going to deploy rbd-mirror in order to sync an image from clusterA to
clusterB.
The image will be used while syncing. I'm not sure whether rbd-mirror syncs
the image continuously or not. If not, I will inform clients not to write
data to it.
Thanks. Regards
On 6/3/20 4:49 PM, Simon Sutter wrote:
> Hello,
>
>
> I think I misunderstood the internal / public network concepts in the docs
> https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/.
>
> Now there are two questions:
>
> - Is it somehow possible to bind the MON daemon
On 6/4/20 12:24 AM, Frank Schilder wrote:
> You can use the command-line without editing the crush map. Look at the
> documentation of commands like
>
> ceph osd crush add-bucket ...
> ceph osd crush move ...
>
> Before starting this, set "ceph osd set norebalance" and unset after you are
>
Hi,
that's the point of rbd-mirror, to constantly replay changes from the
primary image to the remote image (if the rbd journaling feature is
enabled).
Quoting Zhenshi Zhou:
Hi all,
I'm gonna deploy a rbd-mirror in order to sync image from clusterA to
clusterB.
The image will be used wh
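For reference, a rough sketch of enabling journal-based mirroring on a single
image (pool/image names are made up; the peer configuration and the rbd-mirror
daemon on the target cluster are assumed to be in place already):
$ rbd feature enable poolA/image1 journaling   # journaling is required for journal-based mirroring
$ rbd mirror pool enable poolA image           # per-image mirroring mode on the pool
$ rbd mirror image enable poolA/image1
$ rbd mirror image status poolA/image1         # shows whether the remote peer is replaying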