Hi,
Our cluster (14.2.6) has shown sporadic slow-ops warnings since we upgraded from
Jewel a month ago. Today I checked the OSD log files and found a lot of entries
like:
ceph-osd.5.log:2020-03-04 10:33:31.592 7f18ca41f700 0
bluestore(/var/lib/ceph/osd/ceph-5) log_latency_fn slow operation observed …
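A quick way to see how widespread these are is to count matches per OSD log;
this assumes the default log location, so adjust the path if yours differs:

    grep -c 'log_latency' /var/log/ceph/ceph-osd.*.log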
Hi,
I've just upgraded my cluster from Jewel to Nautilus (still running filestore).
Since I had stopped deep scrubbing for several days during this upgrade, I now
get the warning "3 pgs not deep-scrubbed in time". I tried to increase
osd_max_scrubs to 3, osd_scrub_load_threshold to 5.0 and osd_deep_scrub_s
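For what it's worth, on Nautilus these can be applied cluster-wide through the
config database; a sketch with the two values mentioned above (the third option
is cut off in the preview, so it is omitted here):

    ceph config set osd osd_max_scrubs 3
    ceph config set osd osd_scrub_load_threshold 5.0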
> On 1/20/20 1:07 AM, 徐蕴 wrote:
>> Hi,
>>
>> We upgraded our cluster from Jewel to Luminous, and it turned out that more
>> than 80% of the objects are misplaced. Since our cluster holds 130 TB of data,
>> backfilling seems to take forever. We didn't modify any crushmap. Any thoughts
Hi,
We upgraded our cluster from Jewel to Luminous, and it turned out that more
than 80% of the objects are misplaced. Since our cluster holds 130 TB of data,
backfilling seems to take forever. We didn't modify any crushmap. Any thoughts
about this issue?
br,
Xu Yun
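If the CRUSH tunables were switched to a newer profile (e.g. 'optimal') as part
of the upgrade, that alone can remap most objects even with an untouched
crushmap; the active profile can be inspected, read-only, with:

    ceph osd crush show-tunables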
ed but there were existing
> clones so basically just the openstack database was updated, but the base
> image still existed within ceph.
>
> Try to figure out if that is also the case. If it's something else, check the
> logs in your openstack environment, maybe they reveal something.
No, not every volume. It seems that volumes with a large capacity have a higher
probability of triggering this problem.
> On Jan 15, 2020, at 4:28 PM, Eugen Block wrote:
>
> Then it's probably something different. Does that happen with every
> volume/image or just this one time?
>
>
> Zitat von 徐蕴 :
> If it's something else, check the
> logs in your openstack environment, maybe they reveal something. Also check
> the ceph logs.
>
> Regards,
> Eugen
>
>
> Zitat von 徐蕴 :
>
>> Hello,
>>
>> My setup is Ceph working with OpenStack Pike. When I deleted an image, I
>
Hello,
My setup is Ceph working with OpenStack Pike. When I deleted an image, I found
that the space was not reclaimed. I checked with rbd ls and confirmed that the
image had disappeared. But when I checked the objects with rados ls, most
objects named rbd_data.xxx still exist in my cluster.
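A minimal sketch of that check (the pool name 'images' is hypothetical, and the
rbd_data prefix is specific to each image):

    rbd ls images                                   # the deleted image no longer shows up
    rados -p images ls | grep '^rbd_data\.' | head  # ...yet its data objects are still there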
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896
Hello,
We are planning to upgrade our cluster from Jewel to Nautilus. From my
understanding, the monitors' leveldb and the OSDs' filestore will not be
converted to RocksDB and BlueStore automatically. So do you suggest converting
them manually after upgrading the software? Is there any document or guide?
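For the OSD side, the usual route is the documented "mark out and replace"
migration, done one OSD at a time; a rough sketch for a single OSD, not a
drop-in script (the id and the device are placeholders):

    ID=5
    ceph osd out ${ID}
    # wait until the data has drained off the OSD
    while ! ceph osd safe-to-destroy osd.${ID}; do sleep 60; done
    systemctl stop ceph-osd@${ID}
    ceph osd destroy ${ID} --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdX
    ceph-volume lvm create --bluestore --data /dev/sdX --osd-id ${ID}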
2019-10-22 19:20:17.139452 7f1e0e9de680 -1 unable to read magic from mon data
Any suggestions?
> On Oct 22, 2019, at 6:03 PM, 徐蕴 wrote:
>
> It seems that ceph-kvstore-tool is not available in 10.2.10/Jewel.
>
>
>> On Oct 22, 2019, at 5:28 PM, huang jun wrote:
>>
>> Try this https://docs.ceph.com/docs/mimic/man/8/ceph-kvstore-tool/ and
>> use the 'repair' operation
It seems that ceph-kvstore-tool is not available in 10.2.10/Jewel.
> On Oct 22, 2019, at 5:28 PM, huang jun wrote:
>
> Try this https://docs.ceph.com/docs/mimic/man/8/ceph-kvstore-tool/ and
> use the 'repair' operation
>
> On Tue, Oct 22, 2019 at 3:51 PM, 徐蕴 wrote:
>>
>> Hi,
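For reference, the suggested invocation would look roughly like this (the mon
path is taken from the logs below; note that the subcommand is named
'destructive-repair' in newer releases, and, as said above, the tool does not
ship with Jewel 10.2.10, so it would have to come from a newer package):

    cp -a /var/lib/ceph/mon/ceph-10.10.198.11/store.db /root/store.db.bak
    ceph-kvstore-tool leveldb /var/lib/ceph/mon/ceph-10.10.198.11/store.db repair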
Hi,
Our cluster had an unexpected power outage, and ceph-mon cannot start after
that. The log shows:
Running command: '/usr/bin/ceph-mon -f -i 10.10.198.11 --public-addr
10.10.198.11:6789'
Corruption: 15 missing files; e.g.:
/var/lib/ceph/mon/ceph-10.10.198.11/store.db/2676107.sst
Is there any way to recover?
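One possible recovery path, assuming at least one healthy monitor is still in
quorum (the mon id 10.10.198.11 comes from the log above; move the damaged
store aside rather than deleting it):

    # on a healthy node: drop the corrupted mon from the monmap
    ceph mon remove 10.10.198.11
    # on the broken node: set the damaged store aside and re-create the mon
    mv /var/lib/ceph/mon/ceph-10.10.198.11 /var/lib/ceph/mon/ceph-10.10.198.11.bak
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon --mkfs -i 10.10.198.11 --monmap /tmp/monmap --keyring /tmp/mon.keyring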
Hi Igor,
Got it. Thank you!
> On Sep 26, 2019, at 11:27 PM, Igor Fedotov wrote:
>
> Hi Xu Yun!
>
> You might want to use "ceph osd metadata" command and check
> "ceph_objectstore" parameter in the output.
>
>
> Thanks,
>
> Igor
>
> On 9/26/
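The check from Igor's reply, spelled out (osd.0 is just an example; grepping
loosely because the exact key name can differ between releases):

    ceph osd metadata 0 | grep objectstore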
Hi,
Is there a command to check whether an OSD is running filestore or bluestore?
BR,
Xu Yun
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
3/09/2019 08:27, 徐蕴 wrote:
>> Hi ceph experts,
>>
>> I deployed Nautilus (v14.2.4) and Luminous (v12.2.11) on the same hardware,
>> and made a rough performance comparison. The result suggests that Luminous is
>> much better, which is unexpected.
>>
>>
Hi ceph experts,
I deployed Nautilus (v14.2.4) and Luminous (v12.2.11) on the same hardware and
made a rough performance comparison. The result suggests that Luminous is much
better, which is unexpected.
My setup:
3 servers, each with 3 HDD OSDs and 1 SSD as DB, plus two separate 1G networks
for the cluster and public traffic
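The preview cuts off before the method, but a rough comparison of this kind is
typically done with the built-in benchmark, run identically against both
clusters (the pool name is hypothetical):

    rados bench -p bench 60 write --no-cleanup
    rados bench -p bench 60 seq
    rados bench -p bench 60 rand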
Is it related to https://tracker.ceph.com/issues/39671?
> On Sep 21, 2019, at 6:13 PM, 徐蕴 wrote:
>
> Hello Ceph Users,
>
> I deployed a ceph cluster (v14.2.1, using Docker); the cluster status seems
> OK, but the write performance tested by rados bench seems bad. When I check
> t
Hello Ceph Users,
I deployed a ceph cluster (v14.2.1, using Docker); the cluster status seems OK,
but the write performance tested by rados bench seems bad. When I checked the
network connections using netstat -nat, I found that there were no connections
established on the cluster network interface.
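A sketch of those checks (the pool name and the cluster-network subnet are
hypothetical):

    rados bench -p testpool 30 write
    netstat -nat | grep '10.10.1\.'        # substitute your cluster-network subnet
    ceph osd metadata 0 | grep back_addr   # the address an OSD advertises on the cluster network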