On 13.06.19 00:29, Sage Weil wrote:
On Thu, 13 Jun 2019, Simon Leinen wrote:
Sage Weil writes:
2019-06-12 23:40:43.555 7f724b27f0c0 1 rocksdb: do_open column families:
[default]
Unrecognized command: stats
ceph-kvstore-tool: /build/ceph-14.2.1/src/rocksdb/db/version_set.cc:356:
rocksdb::Ve
On 13.06.19 00:33, Sage Weil wrote:
[...]
One other thing to try before taking any drastic steps (as described
below):
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-NNN fsck
This gives: fsck success
and the large alloc warnings:
tcmalloc: large alloc 2145263616 bytes == 0x562412e1
Hi everyone,
I am a bit confused about the number of degraded objects that ceph -s shows
during recovery.
ceph -s output is as follows:
[root@ceph-25 src]# ./ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
cluster 3d52f70a-d82f-46e3-9f03-be03e5e68e33
health HEALTH_WARN
Hi,
20067 objects of actual data, with 3x replication = 60201 object copies.
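A trivial shell cross-check of the arithmetic above:

  echo $(( 20067 * 3 ))   # prints 60201, the total given for 3x replication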
On 13/06/2019 08:36, zhanrzh...@teamsun.com.cn wrote:
And the total number of objects is 20067:
[root@ceph-25 src]# ./rados -p rbd ls| wc -l
20013
[root@ceph-25 src]# ./rados -p cephfs_data ls | wc -l
0
[root@ceph-25 src]# ./r
We want to change the index pool (radosgw) rule from SATA to SSD. When we run
ceph osd pool set default.rgw.buckets.index crush_ruleset x
all of the index PGs migrate to SSD, but one PG is still stuck on SATA and
cannot be migrated; its status is active+undersized+degraded+remapped+backfilling.
ceph
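A few commands that may help narrow down why that one PG stays behind (a
sketch; <pgid> stands for the stuck PG's id, and on older releases the pool
property is crush_ruleset rather than crush_rule):

  ceph pg dump_stuck unclean    # list PGs that are not active+clean
  ceph pg <pgid> query          # show which OSDs the PG wants and why it is undersized
  ceph osd pool get default.rgw.buckets.index crush_rule   # confirm the rule change applied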
Idea received from Wido den Hollander:
bluestore rocksdb options = "compaction_readahead_size=0"
With this option, I just tried to start 1 of the 3 crashing OSDs, and it
came up! I did it with "ceph osd set noin" for now.
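For reference, a sketch of where such an override might live, assuming it
is added under the [osd] section of ceph.conf on the affected host:

  [osd]
  # RocksDB option override suggested above (Wido's idea)
  bluestore rocksdb options = "compaction_readahead_size=0"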
Later it aborted:
2019-06-13 13:11:11.862 7f2a19f5f700 1 heartbeat_map re
I'm running Ceph 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972)
nautilus (stable) on a Kubernetes cluster using Rook
(https://github.com/rook/rook), and my OSD daemons do not start.
Each OSD process runs inside a Kubernetes pod, and each pod gets its
own IP address. I spotted the following log
Hello - can we list the objects in rgw by last modified date?
For example, I want to list all the objects which were modified on 01 Jun
2019.
Thanks
Swami
http://docs.ceph.com/docs/master/rbd/rbd-config-ref/
From: Trilok Agarwal
To: ceph-users@lists.ceph.com
Date: 06/12/2019 07:31 PM
Subject: [EXTERNAL] [ceph-users] Enable buffered write for bluestore
Sent by: "ceph-users"
Hi
How can we enable bluestore_default_buffer
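The option name above is cut off; assuming it is
bluestore_default_buffered_write, a sketch of changing an OSD option at
runtime (verify the exact name and its implications for your release first):

  # hypothetical: option name guessed from the truncated line above
  ceph config set osd bluestore_default_buffered_write true
  ceph daemon osd.0 config get bluestore_default_buffered_write   # confirm on one OSD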
There's no (useful) internal ordering of these entries, so there isn't a
more efficient way than getting everything and sorting it :(
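As a sketch of that brute-force approach, one way is to go through the S3
API, e.g. with s3cmd pointed at the RGW endpoint (the bucket name is a
placeholder):

  # list every object with its mtime, then keep those modified on 01 Jun 2019
  s3cmd ls --recursive s3://mybucket | sort | awk '$1 == "2019-06-01"'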
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49
On Thu, 13 Jun 2019, Harald Staub wrote:
> Idea received from Wido den Hollander:
> bluestore rocksdb options = "compaction_readahead_size=0"
>
> With this option, I just tried to start 1 of the 3 crashing OSDs, and it came
> up! I did it with "ceph osd set noin" for now.
Yay!
> Later it aborted:
>
Something I had suggested off-list (repeated here if anyone else finds
themselves in a similar scenario):
since only one PG is dead and the OSD now seems to be alive enough to
start/mount: consider taking a backup of the affected PG with
ceph-objectstore-tool --op export --pgid X.YY
(That might
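A sketch of what that export could look like, with NNN and X.YY as
placeholders for the OSD id and PG id (the OSD has to be stopped while
ceph-objectstore-tool runs):

  systemctl stop ceph-osd@NNN
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-NNN \
      --pgid X.YY --op export --file /root/pg-X.YY.export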
On Thu, 13 Jun 2019, Paul Emmerich wrote:
> Something I had suggested off-list (repeated here if anyone else finds
> themselves in a similar scenario):
>
> since only one PG is dead and the OSD now seems to be alive enough to
> start/mount: consider taking a backup of the affected PG with
>
> cep
On 13.06.19 15:52, Sage Weil wrote:
On Thu, 13 Jun 2019, Harald Staub wrote:
[...]
I think that increasing the various suicide timeout options will allow
it to stay up long enough to clean up the ginormous objects:
ceph config set osd.NNN osd_op_thread_suicide_timeout 2h
ok
It looks heal
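Once the cleanup is done, the override can be checked and dropped again; a
sketch, reusing the osd.NNN placeholder:

  ceph config get osd.NNN osd_op_thread_suicide_timeout
  ceph config rm osd.NNN osd_op_thread_suicide_timeout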
On Thu, 13 Jun 2019, Harald Staub wrote:
> On 13.06.19 15:52, Sage Weil wrote:
> > On Thu, 13 Jun 2019, Harald Staub wrote:
> [...]
> > I think that increasing the various suicide timeout options will allow
> > it to stay up long enough to clean up the ginormous objects:
> >
> > ceph config set
Hello,
I would like to modify the Bluestore label of an OSD, is there a way to do
this?
I saw that we could display them with "ceph-bluestore-tool show-label" but I
did not find any way to modify them...
Is it possible?
I changed LVM tags but that doesn't help with bluestore labels..
# ceph-bluestore
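A sketch of what may help here, assuming ceph-bluestore-tool's
set-label-key subcommand (the device path and key are examples; check the
show-label output for the real key names):

  ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block
  ceph-bluestore-tool set-label-key --dev /var/lib/ceph/osd/ceph-0/block \
      -k <key> -v <new-value>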
Wow, OK, thanks a lot, I missed that in the doc...
On Thu, 13 Jun 2019 at 16:49, Konstantin Shalygin wrote:
> Hello,
>
> I would like to modify the Bluestore label of an OSD, is there a way to do this
> ?
>
> I saw that we could display them with "ceph-bluestore-tool show-label" but I
> did not find
I'm using mimic, which I thought was supported. Here's the full version:
# ceph -v
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic
(stable)
# ceph daemon osd.0 config show | grep memory
"debug_deliberately_leak_memory": "false",
"mds_cache_memory_limit": "1073741824
I think this option was added in 13.2.4 (or 13.2.5?)
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Thu, Jun 13, 2019 at 7:00 PM Jorge Garcia wrote:
> I'm using
Hi everyone,
The Ceph Day Netherlands schedule is now available!
https://ceph.com/cephdays/netherlands-2019/
Registration is free and still open, so please come join us for some
great content and discussion with members of the community of all
levels!
https://www.eventbrite.com/e/ceph-day-nethe
Looks fine (at least so far), thank you all!
After having exported all 3 copies of the bad PG, we decided to try it
in-place. We also set norebalance to make sure that no data is moved.
When the PG was up, the resharding finished with a "success" message.
The large omap warning is gone after d
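For completeness, a sketch of how the flags mentioned in this thread are
set and cleared again once the repair is done:

  ceph osd set noin            # keep restarted OSDs from being marked "in"
  ceph osd set norebalance     # keep data from moving during the repair
  # ... after the PG is healthy again:
  ceph osd unset norebalance
  ceph osd unset noin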
In case you missed these events on the community calendar, here are
the recordings:
https://www.youtube.com/playlist?list=PLrBUGiINAakPCrcdqjbBR_VlFa5buEW2J
--
Mike Perez (thingee)
Thanks! That's the correct solution. I upgraded to 13.2.6 (latest mimic)
and the option is now there...
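Assuming the option in question is osd_memory_target (the grep output
above is cut off, so this is a guess), a sketch of setting it on 13.2.6:

  ceph config set osd osd_memory_target 4294967296    # e.g. 4 GiB per OSD
  ceph daemon osd.0 config get osd_memory_target       # confirm the running value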
On 6/13/19 10:22 AM, Paul Emmerich wrote:
I think this option was added in 13.2.4 (or 13.2.5?)
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
Hi everyone,
There has been some interest in a feature that helps users to mute
health warnings. There is a trello card[1] associated with it and
we've had some discussion[2] in the past in a CDM about it. In
general, we want to understand a few things:
1. what is the level of interest in this fe
Hi,
Is it normal that an osd beacon could be without PGs, like below? This
drive contains data, but I cannot make it run.
Ceph v12.2.4
{
"description": "osd_beacon(pgs [] lec 857158 v869771)",
"initiated_at": "2019-06-14 06:39:37.972795",
"age": 189.310037,