Probably same problem here. When I try to add another MON, "ceph health"
becomes mostly unresponsive. One of the existing ceph-mon processes uses
100% CPU for several minutes. Tried it on 2 test clusters (14.2.4, 3
MONs, 5 storage nodes with around 2 hdd osds each). To avoid errors like
"lease
Hi,
Does anybody have the same problem as in my case?
Best regards,
On Tue, Oct 8, 2019, 19:52 Lazuardi Nasution wrote:
> Hi,
>
> I get the following weird negative object counts on tiering. Why is this
> happening? How can I get back to normal?
>
> Best regards,
>
> [root@management-a ~]# ceph df detail
> GL
Hi Lazuardi,
never seen that. Just wondering what Ceph version are you running?
Thanks,
Igor
On 10/8/2019 3:52 PM, Lazuardi Nasution wrote:
Hi,
I get the following weird negative object counts on tiering. Why is this
happening? How can I get back to normal?
Best regards,
[root@management-a ~]
@Mike
Did you have the chance to update download.ceph.com repositories for the new
version?
I just tested the packages from shaman in our DEV environment and they seem to
fix the issue - after updating the packages I was not able to reproduce the
error again and tcmu-runner starts up without
On Thu, Oct 10, 2019 at 2:23 PM huxia...@horebdata.cn
wrote:
>
> Hi, folks,
>
> I have a mid-sized Ceph cluster serving as Cinder backup for OpenStack (Queens).
> During testing, one Ceph node went down unexpectedly and was powered up again about 10
> minutes later, and the Ceph cluster started PG recovery. To my surpri
On Mon, Oct 14, 2019 at 01:40:19PM +0200, Harald Staub wrote:
> Probably same problem here. When I try to add another MON, "ceph
> health" becomes mostly unresponsive. One of the existing ceph-mon
> processes uses 100% CPU for several minutes. Tried it on 2 test
> clusters (14.2.4, 3 MONs, 5 storag
Hi Igor,
It is the old Jewel (v10.2.11). This happened after I ran
cache-try-flush-evict-all or cache-flush-evict-all on the respective tier
pool.
Best regards,
On Mon, Oct 14, 2019 at 7:38 PM Igor Fedotov wrote:
> Hi Lazuardi,
>
> never seen that. Just wondering what Ceph version are you runn
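For completeness, the flush/evict step is typically run like this (the pool
name is a placeholder), with ceph df detail checked again afterwards:

  rados -p <cache-pool> cache-try-flush-evict-all
  rados -p <cache-pool> cache-flush-evict-all
  ceph df detail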
On Mon, Oct 14, 2019 at 04:31:22PM +0200, Nikola Ciprich wrote:
> On Mon, Oct 14, 2019 at 01:40:19PM +0200, Harald Staub wrote:
> > Probably same problem here. When I try to add another MON, "ceph
> > health" becomes mostly unresponsive. One of the existing ceph-mon
> > processes uses 100% CPU for
How big is the mon's DB? As in just the total size of the directory you copied
FWIW I recently had to perform mon surgery on a 14.2.4 (or was it
14.2.2?) cluster with 8 GB mon size and I encountered no such problems
while syncing a new mon which took 10 minutes or so.
Paul
--
Paul Emmerich
Lo
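For a quick check, the store size can be read straight from the mon's data
directory, and a compaction sometimes shrinks it again (the path assumes the
default layout; the mon ID is a placeholder):

  du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
  ceph tell mon.<id> compact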
Hi folks,
Mimic cluster here, RGW pool with only default zone. I have a
persistent error here
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'default.rgw.log'
Search the cluster log for 'Large omap object found' for more
details.
I think I've narrowed it
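One way to track it down (a sketch; the cluster log path and the object name
are placeholders):

  grep 'Large omap object found' /var/log/ceph/ceph.log
  # then count the keys on the object the warning names
  rados -p default.rgw.log listomapkeys <object-name> | wc -l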
Looks like the usage log (radosgw-admin usage show); how often do you trim it?
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Mon, Oct 14, 2019 at 11:55 PM Troy Ablan w
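For reference, inspecting and trimming it are radosgw-admin operations; a rough
sketch with placeholder dates:

  radosgw-admin usage show
  radosgw-admin usage trim --start-date=2018-01-01 --end-date=2019-09-30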
Paul,
Apparently never. It appears to (potentially) contain every request from the
beginning of time (late last year, in my case). In our use case, we
don't really need this data (not multi-tenant), so I might simply clear it.
But in the case where this were an extremely high-transaction cluster
Yeah, the number of shards is configurable ("rgw usage num shards"? or
something).
Are you sure you aren't using it? This feature is not enabled by
default; someone had to explicitly set "rgw enable usage log" for you
to run into this problem.
Paul
--
Paul Emmerich
Looking for help with your C
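If you do want to keep the feature, the relevant knobs would be roughly these
in ceph.conf on the RGW nodes (option names from memory, please double-check
against the docs; the section name is a placeholder):

  [client.rgw.gateway1]
  rgw enable usage log = true
  rgw usage max user shards = 8
  rgw usage max shards = 32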
Yep, that's on me. I did enable it in the config originally; I
thought at the time that it might be useful, but I wasn't
aware of the sharding caveat, given that most of our traffic happens under
one rgw user.
I think I know what I need to do to fix it now though.
Thanks again!
Dear ceph users,
we're experiencing a segfault during MDS startup (the replay process), which is
making our FS inaccessible.
MDS log messages:
Oct 15 03:41:39.894584 mds1 ceph-mds: -472> 2019-10-15 00:40:30.201
7f3c08f49700 1 -- 192.168.8.195:6800/3181891717 <== osd.26
192.168.8.209:6821/2419345 3
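In case more detail is needed than the snippet above, a common first step (a
sketch, assuming ceph.conf-based configuration) is to raise MDS debug levels
before the next start attempt:

  [mds]
  debug mds = 20
  debug ms = 1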
On Mon, Oct 14, 2019 at 11:52:55PM +0200, Paul Emmerich wrote:
> How big is the mon's DB? As in just the total size of the directory you
> copied
>
> FWIW I recently had to perform mon surgery on a 14.2.4 (or was it
> 14.2.2?) cluster with 8 GB mon size and I encountered no such problems
> wh
On Tue, Oct 15, 2019 at 06:50:31AM +0200, Nikola Ciprich wrote:
>
>
> On Mon, Oct 14, 2019 at 11:52:55PM +0200, Paul Emmerich wrote:
> > How big is the mon's DB? As in just the total size of the directory you
> > copied
> >
> > FWIW I recently had to perform mon surgery on a 14.2.4 (or was it
Hi all,
I also hit bug #24866 in my test environment. According to the logs, the
last_clean_epoch for the affected PG on this OSD is 17703, but the interval starts at
17895, so the OSD fails to start. There are some other OSDs in the same state.
2019-10-14 18:22:51.908 7f0a275f1700 -1 osd.21 pg
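A sketch of how the PG epochs can also be inspected offline with
ceph-objectstore-tool while the OSD is stopped (the data path matches osd.21
from the log above; the pgid is a placeholder):

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 --op list-pgs
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 --pgid 18.0 --op info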