Dear Greg,
On 30.11.18 at 18:38, Gregory Farnum wrote:
> I’m pretty sure the monitor command there won’t move intermediate buckets
> like the host. This is so that if an OSD has incomplete metadata it doesn’t
> inadvertently move 11 other OSDs into a different rack/row/whatever.
>
> So in this case
The only relevant component for this issue is the OSDs. Upgrading the
monitors first as usual is fine. If your OSDs are all on 12.2.8, moving
them to 12.2.10 has no chance of hitting this bug.
If you upgrade the OSDs to 13.2.2, which does have the PG hard limit
patches, you may hit the bug as noted here.
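To double-check which versions the OSDs actually report before upgrading, a sketch along these lines can help (hypothetical, not from the thread; it assumes an admin keyring and that "ceph versions", available since Luminous, prints JSON):

    import json
    import subprocess

    # "ceph versions" summarizes which release each daemon type is running.
    out = subprocess.check_output(["ceph", "versions"])
    versions = json.loads(out)

    # The "osd" section maps full version strings to daemon counts.
    for ver, count in versions.get("osd", {}).items():
        print("{} OSD(s) running: {}".format(count, ver))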
On 28/11/2018 19:06, Maxime Guyot wrote:
> Hi Florian,
>
> You assumed correctly: the "test" container (private) was created with
> "openstack container create test"; I am then using the S3 API to
> enable/disable object versioning on it.
> I use the following Python snippet to enable/disable versioning:
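The snippet itself is cut off in the digest. A minimal sketch of the idea, assuming boto3 and placeholder endpoint and credentials (the original may well have used a different S3 library):

    import boto3

    # Placeholder RGW endpoint and credentials; substitute your own.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://radosgw.example.com:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Enable versioning on the Swift-created "test" container/bucket;
    # use "Suspended" instead of "Enabled" to disable it again.
    s3.put_bucket_versioning(
        Bucket="test",
        VersioningConfiguration={"Status": "Enabled"},
    )

    print(s3.get_bucket_versioning(Bucket="test"))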
On Fri, Nov 30, 2018 at 3:10 PM Paul Emmerich wrote:
>
> On Mon, Oct 8, 2018 at 23:34, Alfredo Deza wrote:
> >
> > On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich wrote:
> > >
> > > ceph-volume unfortunately doesn't handle completely hanging IOs too
> > > well compared to ceph-disk.
> >
> > N
On Mon, Oct 8, 2018 at 23:34, Alfredo Deza wrote:
>
> On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich wrote:
> >
> > ceph-volume unfortunately doesn't handle completely hanging IOs too
> > well compared to ceph-disk.
>
> Not sure I follow, would you mind expanding on what you mean by
> "ceph-v
I’m pretty sure the monitor command there won’t move intermediate buckets
like the host. This is so that if an OSD has incomplete metadata it doesn’t
inadvertently move 11 other OSDs into a different rack/row/whatever.
So in this case, it finds the host osd0001 and matches it, but since the
crush map a
radosgw-admin likes to create these pools; some monitoring tool might
be trying to use it?
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Fri, Nov 30, 2018 at 12:
Is that one big xfs filesystem? Are you able to mount with krbd?
On Tue, 27 Nov 2018, 13:49 Vikas Rana wrote:
> Hi There,
>
> We are replicating a 100TB RBD image to DR site. Replication works fine.
>
> rbd --cluster cephdr mirror pool status nfs --verbose
>
> health: OK
>
> images: 1 total
>
> 1 repl
Dear Cephalopodians,
sorry for the spam, but I found the following in mon logs just now and am
finally out of ideas:
--
2018-11-30 15:43:05.207 7f9d64aac700 0 mon.mon001@0(leader) e3 handle_command mon_command
Dear Cephalopodians,
further experiments revealed that the crush-location-hook is indeed called!
It's just my check (writing to a file in tmp from inside the hook) which somehow failed.
Using "logger" works for debugging.
So now, my hook outputs:
host=osd001 datacenter=FTD root=default
as expla
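For reference, a minimal sketch of such a hook (not the poster's actual script; it assumes the usual contract that the hook, referenced from ceph.conf via the crush location hook option, simply prints space-separated key=value pairs on stdout, and that the datacenter "FTD" is site-specific):

    #!/usr/bin/env python
    # Minimal crush location hook sketch: print the OSD's crush location
    # as space-separated key=value pairs on stdout.
    import socket
    import sys
    import syslog

    def main():
        host = socket.gethostname().split(".")[0]
        location = "host={} datacenter=FTD root=default".format(host)

        # Same debugging idea as "logger": log to syslog instead of a file in /tmp.
        syslog.openlog("crush-location-hook")
        syslog.syslog("args: {!r} -> {}".format(sys.argv[1:], location))

        sys.stdout.write(location + "\n")

    if __name__ == "__main__":
        main()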
Hello,
how can I disable the automatic creation of the rgw pools?
I have no radosgw instances running, and currently do not intend to do
so on this cluster. But these pools keep reappearing:
.rgw.root
default.rgw.meta
default.rgw.log
I just don't want them to eat up pgs for no reason...
My ve
Dear Cephalopodians,
I'm probably missing something obvious, but I am at a loss here on how to
actually make use of a customized crush location hook.
I'm currently on "ceph version 13.2.1" on CentOS 7 (i.e. the last version
before the upgrade-preventing bugs). Here's what I did:
1. Write a sc
Does anybody know how to install ceph-fuse on CentOS 5?
On Thu, Nov 29, 2018 at 4:50 PM Zhenshi Zhou wrote:
> Hi,
>
> I have a CentOS 5 server with kernel version 2.6.18.
> Does it support mounting CephFS with ceph-fuse?
>
> Thanks
>