Thank you for sharing your experience. Glad to hear that someone has already
used this strategy and it works well.
> On Oct 27, 2020, at 03:10, Reed Dier wrote:
>
> Late reply, but I have been using what I refer to as a "hybrid" crush
> topology for some data for a while now.
>
> Initially with just rados objects, and later with RBD.
The ceph mon logs... my log keeps filling with many of these, non-stop:
--
2020-10-26T15:40:28.875729-0400 osd.23 [WRN] slow request
osd_op(client.86168166.0:9023356 5.56 5.1cd5a6d6 (undecoded)
ondisk+retry+write+known_if_redirected e159644) initiated
2020-
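A rough sketch of how one might inspect those slow requests on the affected
OSD, assuming admin-socket access on the host running osd.23 (the second
command only exists on newer releases):
---snip---
# run on the host carrying osd.23
ceph daemon osd.23 dump_ops_in_flight       # ops currently blocked or in flight
ceph daemon osd.23 dump_historic_slow_ops   # recently completed ops that were slow (newer releases)
---snip---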
I had 3 mons, but I have 2 physical datacenters and one of them broke with
no short-term fix, so I removed all of its OSDs and ceph mons (2 of them);
now I only have the OSDs of one datacenter with a single monitor. I had
stopped the ceph manager, but I saw that when I restart a ceph manager then
ceph -s
The recovery process (ceph -s) is independent of the MGR service and
depends only on the MON service. It seems you only have the one MON;
if the MGR is overloading it (not clear why), it could help to leave the
MGR off and see whether the MON service then has enough RAM to proceed
with the recovery.
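A minimal sketch of stopping the MGR, assuming a non-containerized,
systemd-managed host (the unit name is the packaging default, not confirmed
for this deployment):
---snip---
# on the host running the active mgr: stop all ceph-mgr instances
systemctl stop ceph-mgr.target
# then watch whether the MON-driven recovery proceeds
ceph -s
---snip---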
On 2020-10-26 15:16, Eugen Block wrote:
You could stop the MGRs and wait for the recovery to finish; MGRs are
not a critical component. You won’t have a dashboard or metrics
during that time, but it would prevent the high RAM usage.
Quoting "Ing. Luis Felipe Domínguez Vega":
On 2020-10
You could stop the MGRs and wait for the recovery to finish; MGRs are
not a critical component. You won’t have a dashboard or metrics
during that time, but it would prevent the high RAM usage.
Quoting "Ing. Luis Felipe Domínguez Vega":
On 2020-10-26 12:23, 胡 玮文 wrote:
On Oct 26, 2020,
Late reply, but I have been using what I refer to as a "hybrid" crush topology
for some data for a while now.
Initially with just rados objects, and later with RBD.
We found that we were able to accelerate reads to roughly all-ssd performance
levels, while bringing up the tail end of the write
Hi Dave,
On 23/10/20 at 22:28, Dave Hall wrote:
Eneko,
# ceph health detail
HEALTH_WARN BlueFS spillover detected on 7 OSD(s)
BLUEFS_SPILLOVER BlueFS spillover detected on 7 OSD(s)
osd.1 spilled over 648 MiB metadata from 'db' device (28 GiB used
of 124 GiB) to slow device
osd
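As an aside, a hedged way to see how much of the DB device an OSD actually
uses is its bluefs perf counters, assuming admin-socket access on the OSD
host (db_used_bytes vs slow_used_bytes shows the spillover):
---snip---
ceph daemon osd.1 perf dump | grep -A 20 '"bluefs"'
---snip---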
On 2020-10-26 12:23, 胡 玮文 wrote:
On Oct 26, 2020, at 23:29, Ing. Luis Felipe Domínguez Vega wrote:
mgr: fond-beagle(active, since 39s)
Your manager seems to be crash looping; it only started 39s ago. Looking
at the mgr logs may help you identify why your cluster is not recovering.
You may have hit some bug in mgr.
Please share benchmark data if you test this out. I am sure many would
be interested.
> On Oct 26, 2020, at 23:29, Ing. Luis Felipe Domínguez Vega wrote:
>
> mgr: fond-beagle(active, since 39s)
Your manager seems to be crash looping; it only started 39s ago. Looking at the mgr
logs may help you identify why your cluster is not recovering. You may have hit some
bug in mgr.
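A sketch of where one might look, assuming a systemd deployment (unit and
host names will differ) and that the crash module is enabled:
---snip---
# recent mgr daemon log on the host running it
journalctl -u ceph-mgr@fond-beagle --since "1 hour ago"
# crashes recorded by the cluster's crash module
ceph crash ls
---snip---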
> On Oct 26, 2020, at 15:43, Frank Schilder wrote:
>
>
>> I’ve never seen anything that implies that lead OSDs within an acting set
>> are a function of CRUSH rule ordering.
>
> This is actually a good question. I believed that I had seen/heard that
> somewhere, but I might be wrong.
>
> Looking at the definition of a PG, it states that a PG is an ordered set
Hi,
could this be of help?
---snip---
ceph1:~ # ceph config ls | grep registr
mgr/cephadm/registry_password
mgr/cephadm/registry_url
mgr/cephadm/registry_username
# set configs
ceph config-key set mgr/cephadm/registry_username $user
ceph config-key set mgr/cephadm/registry_password $password
---snip---
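If the credentials are stored via config-key as above, they can be read back
the same way to verify them, e.g.:
---snip---
ceph config-key get mgr/cephadm/registry_url
ceph config-key get mgr/cephadm/registry_username
---snip---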
On 2020-09-27 22:11, Igor Fedotov wrote:
>
> On 9/25/2020 6:07 PM, sa...@planethoster.info wrote:
>> Hi Igor,
>>
>> The only thing abnormal about this osdstore is that it was created by
>> Mimic 13.2.8, and I can see that the OSD sizes of this osdstore are not
>> the same as the others in the cluster
On 2020-09-14 16:22, Igor Fedotov wrote:
> Thanks!
>
> Now got the root cause. The fix is on its way...
What is the commit / PR for this fix? Is this fixed in 14.2.12?
Gr. Stefan
Exactly, the cluster is recovering from a huge outage, but I don't see any
progress on "recovering"; the recovery progress is not shown.
--
cluster:
id: 039bf268-b5a6-11e9-bbb7-d06726ca4a78
h
On 26/10/2020 14:13, Ing. Luis Felipe Domínguez Vega wrote:
How can I free the ceph monitor's store?:
root@fond-beagle:/var/lib/ceph/mon/ceph-fond-beagle# du -h -d1
542G ./store.db
542G .
Hi Kristof,
I missed that: why do you need to do manual compaction?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Kristof Coucke
Sent: 26 October 2020 11:33:52
To: Frank Schilder; a.jazdzew...@googlemail.com
Cc
Sorry, they have backported this. The fix should be shipped with 15.2.6.
https://github.com/ceph/ceph/pull/37436
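To check whether the running daemons already carry a release with the
backport (15.2.6 or later), one quick sketch is:
---snip---
ceph versions        # per-daemon-type version summary as seen by the mons
---snip---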
> On Oct 26, 2020, at 23:03, 胡 玮文 wrote:
>
> This is a known bug from a quick search.
> https://tracker.ceph.com/issues/47438
>
> I’m not very familiar with the development procedure, but
Hi,
>> Your first mail shows 67T (instead of 62)
I had just given an approximate number; the first number given is the
right one.
I have deleted all pools and created a fresh test pool with PG num 128,
and now it's showing a full size of 248 TB.
Output from "ceph df":
--- RAW STORA
This is a known bug from a quick search. https://tracker.ceph.com/issues/47438
I’m not very familiar with the development procedure, but maybe they should
backport it?
> On Oct 26, 2020, at 22:40, Marco Venuti wrote:
>
> I indeed have very small osds, but shortly I will be able to test ceph on
> much l
Only 9M
--
root@fond-beagle:/var/lib/ceph/mon/ceph-fond-beagle/store.db# ls -lh *.log
-rw--- 1 ceph ceph 9.9M Oct 26 10:57 1443554.log
---
On Oct 26, 2020, at 22:30, Amudhan P wrote:
Hi Jane,
I agree with you, and I was trying to say that a disk which has more PGs will
fill up more quickly.
But my question is: even though the RAW disk space is 262 TB, the pool's
2-replica max storage is showing only 132 TB in the dashboard, and when
mounting the pool using cephfs
I indeed have very small osds, but shortly I will be able to test ceph on
much larger osds.
However, looking in syslog I found this
Oct 23 23:45:14 ceph0 bash[2265]: debug 2020-10-23T21:45:14.918+
7fe8de931700 -1 mgr load Failed to construct class in 'cephadm'
Oct 23 23:45:14 ceph0 bash[2265]
Hi Jane,
I agree with you, and I was trying to say that a disk which has more PGs will
fill up more quickly.
But my question is: even though the RAW disk space is 262 TB, the pool's
2-replica max storage is showing only 132 TB in the dashboard, and when
mounting the pool using cephfs it's showing 62 TB. I could understand tha
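A rough sanity check of those numbers, assuming 2x replication: 262 TB raw
/ 2 ≈ 131 TB of writable data, which matches the ~132 TB shown in the
dashboard; the smaller figure seen through the cephfs mount is typically
derived from the data pool's MAX AVAIL, which is further scaled down by the
most-full OSD and the configured full ratios.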
How can I free the ceph monitor's store?:
root@fond-beagle:/var/lib/ceph/mon/ceph-fond-beagle# du -h -d1
542G    ./store.db
542G    .
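A sketch of the usual options; note that while the cluster is unhealthy the
mons keep old osdmaps, so the store may not shrink much until the PGs are
clean again:
---snip---
# online compaction of this monitor's RocksDB store
ceph tell mon.fond-beagle compact
# or compact the store every time the mon daemon starts
ceph config set mon mon_compact_on_start true
---snip---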
Okay, so far I have figured out that the value in the Ceph dashboard is gathered
from a Prometheus metric (*ceph_osd_numpg*). Does anyone here know how this
is populated?
On Mon, Oct 26, 2020 at 12:52, Kristof Coucke wrote:
> Hi Frank,
>
> We have a lot of small objects in the cluster
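One way to see the raw metric, assuming the mgr prometheus module is enabled
on its default port 9283 (<active-mgr-host> is a placeholder):
---snip---
curl -s http://<active-mgr-host>:9283/metrics | grep ceph_osd_numpg
# compare with the PGS column of
ceph osd df tree
---snip---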
Here it is: https://tracker.ceph.com/issues/47443
Available in master and pending backport for both O & N.
On 10/26/2020 2:44 PM, Stefan Kooman wrote:
On 2020-09-14 16:22, Igor Fedotov wrote:
Thanks!
Now got the root cause. The fix is on its way...
What is the commit / PR for this fix? Is this fixed in 14.2.12?
Hi Frank,
We have a lot of small objects in the cluster... RocksDB has issues
with the compaction, causing high disk load... That's why we are performing
manual compaction...
See https://github.com/ceph/ceph/pull/37496
Br,
Kristof
On Mon, Oct 26, 2020 at 12:14, Frank Schilder wrote:
> Hi
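For reference, a sketch of triggering RocksDB compaction on an individual
OSD, not necessarily the exact procedure used here (<id> is a placeholder;
the path assumes a non-containerized deployment):
---snip---
# online, via the OSD's admin socket on its host
ceph daemon osd.<id> compact
# offline, with the OSD stopped
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> compact
---snip---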
> I’ve never seen anything that implies that lead OSDs within an acting set are
> a function of CRUSH rule ordering.
This is actually a good question. I believed that I had seen/heard that
somewhere, but I might be wrong.
Looking at the definition of a PG, it states that a PG is an ordered set
Hi Ansgar, Frank, all,
Thanks for the feedback in the first place.
In the meantime, I've added all the disks and the cluster is rebalancing
itself... which will take ages, as you've mentioned. Last week after this
conversation it was around 50% (a little bit more); today it's around 44.5%.
Every day
Interesting. What do you see in the MGR logs? There should be
something in there.
Quoting Marco Venuti:
Yes, this is the status
# ceph -s
cluster:
id: ab471d92-14a2-11eb-ad67-525400bbdc0d
health: HEALTH_OK
services:
mon: 5 daemons, quorum ceph0.starfleet.sns.it,ceph1
Hi Wladim,
If the "unable to find keyring" message disappeared, what was the error after
that fix?
If it's still failing to fetch the mon config, check your authentication (you
might have to add the OSD key to the keyring again), and/or that the mon IPs
are correct in your OSD's ceph.conf file.
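A sketch of that check, assuming a non-containerized deployment (<id> is a
placeholder):
---snip---
# the key the cluster expects for this OSD
ceph auth get osd.<id>
# the key the daemon will actually present
cat /var/lib/ceph/osd/ceph-<id>/keyring
# also confirm mon_host in the OSD host's /etc/ceph/ceph.conf lists the current mon addresses
---snip---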
On Sun, Oct 25, 2020 at 15:18, Amudhan P wrote:
> Hi,
>
> For my quick understanding: how are PGs responsible for allowing space
> allocation to a pool?
>
An object's name will decide which PG (from the list of PGs in the pool) it
will end up on, so if you have very few PGs, the hashed/pseudorando
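To see this mapping for a concrete object (placeholders to be replaced), the
cluster can be asked directly:
---snip---
# shows the PG an object name hashes to, and the acting OSD set
ceph osd map <poolname> <objectname>
---snip---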