e you start doing a ton of maintenance so old PG maps can be trimmed. That's the best I can ascertain from the logs for now.
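If you want to check whether old maps are actually being trimmed, something like the following should do it (the exact field names can differ between releases, so treat this as a sketch):
# committed osdmap epoch range held by the mons; a large, growing gap suggests maps are not being trimmed
ceph report 2>/dev/null | grep -E '"osdmap_(first|last)_committed"'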
-Original Message-
From: Frank Schilder
Sent: Tuesday, August 4, 2020 8:35 AM
To: Eric Smith ; ceph-users
Subject: Re: Ceph does not recover from OSD restart
Do you have any monitor / OSD logs from the maintenance when the issues occurred?
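If they were not captured at the time, the default locations are usually enough to pull them afterwards; a sketch, assuming a systemd deployment and using osd.288 from elsewhere in the thread as the example:
# monitor log on a mon host (path depends on how logging is configured)
less /var/log/ceph/ceph-mon.$(hostname -s).log
# OSD log via journald on the OSD host, limited to the maintenance window
journalctl -u ceph-osd@288 --since "2020-08-03" --until "2020-08-05"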
Original message
From: Frank Schilder
Date: 8/4/20 8:07 AM (GMT-05:00)
To: Eric Smith , ceph-users
Subject: Re: Ceph does not recover from OSD restart
Hi Eric,
thanks for the clarification, I
/ rebalancing ongoing, it's not unexpected. You should not, however, have to move OSDs in and out of the CRUSH tree in order to solve any data placement problems (this is the baffling part).
-Original Message-
From: Frank Schilder
Sent: Tuesday, August 4, 2020 7:45 AM
To: Eric Smith ;
From: Frank Schilder
Sent: Tuesday, August 4, 2020 7:10 AM
To: Eric Smith ; ceph-users
Subject: Re: Ceph does not recover from OSD restart
Hi Eric,
> Have you adjusted the min_size for pool sr-rbd-data-one-hdd
Yes. For all EC pools located in datacenter ServerRoom, we currently set
min_size=k=6, because
Building 109, room S14
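For reference, the per-pool setting can be inspected and changed like this (pool name taken from the question above; k+1, i.e. 7 with k=6, is the usual recommendation once maintenance is over, but that is a general rule rather than something decided in this thread):
ceph osd pool get sr-rbd-data-one-hdd min_size
ceph osd pool set sr-rbd-data-one-hdd min_size 7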
From: Frank Schilder
Sent: 03 August 2020 16:59:04
To: Eric Smith; ceph-users
Subject: [ceph-users] Re: Ceph does not recover from OSD restart
Hi Eric,
the procedure for re-discovering all objects is:
# Flag: norebalance
ceph osd crush move osd.288 host=bb-04
ceph osd crush mov
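The second command is cut off above; purely as an illustration of the general move-out-and-back pattern (not necessarily the exact commands used here), it would look something like:
# Flag: norebalance, so the temporary move does not trigger data movement
ceph osd set norebalance
# temporarily move the OSD out of its host bucket, then back, to force re-peering
ceph osd crush move osd.288 root=default
ceph osd crush move osd.288 host=bb-04
ceph osd unset norebalance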
You said you had to move some OSDs out and back in for Ceph to go back to
normal (The OSDs you added). Which OSDs were added?
-Original Message-
From: Frank Schilder
Sent: Monday, August 3, 2020 9:55 AM
To: Eric Smith ; ceph-users
Subject: Re: Ceph does not recover from OSD restart
Can you post the output of these commands:
ceph osd pool ls detail
ceph osd tree
ceph osd crush rule dump
-Original Message-
From: Frank Schilder
Sent: Monday, August 3, 2020 9:19 AM
To: ceph-users
Subject: [ceph-users] Re: Ceph does not recover from OSD restart
After moving the newly
Can you post the output of a couple of commands:
ceph df
ceph osd pool ls detail
Then we can probably explain the utilization you're seeing.
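As a rough sketch of where the numbers usually come from: with an EC profile of k data and m coding chunks, usable capacity is about raw * k/(k+m) (e.g. k=4, m=2 leaves 2/3 of raw). The profile can be read off the cluster; <profile-name> below is a placeholder:
ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get <profile-name>   # shows k, m and the plugin
ceph df detail                                     # MAX AVAIL already factors in the k/(k+m) overhead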
-Original Message-
From: Mateusz Skała
Sent: Saturday, July 18, 2020 1:35 AM
To: ceph-users@ceph.io
Subject: [ceph-users] EC profile datastore us
We're lucky that we are in the process of expanding the cluster, instead of
expanding we'll just build a new Bluestore cluster and migrate data to it.
-Original Message-
From: Dan van der Ster
Sent: Tuesday, July 14, 2020 9:17 AM
To: Eric Smith
Cc: ceph-users@ceph.io
S
FWIW Bluestore is not affected by this problem!
-Original Message-
From: Eric Smith
Sent: Saturday, July 11, 2020 6:40 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Luminous 12.2.12 - filestore OSDs take an hour to boot
It does appear that long file names and filestore seem to be
If you run (substitute your pool name for <pool>):
rados -p <pool> list-inconsistent-obj 1.574 --format=json-pretty
You should get some detailed information about which piece of data actually has
the error and you can determine what to do with it from there.
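If the output shows a plain read or checksum error on a single copy, the generic next step (after you have verified the output) is a repair of that PG:
ceph pg repair 1.574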
-Original Message-
From: Abhimnyu Dhobal
see if it's also susceptible to these boot / read issues.
Eric
-Original Message-
From: Eric Smith
Sent: Friday, July 10, 2020 1:46 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Luminous 12.2.12 - filestore OSDs take an hour to boot
For what it's worth - all of our ob
82400-1593486601\u\uwzdchd3._0bfd7c716b839cb7b3ad_0_long
Does this matter? AFAICT it sees this as a long file name and has to look up the object name in the xattrs? Is that bad?
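If it helps to see what the OSD is resolving, the full object name behind such a hashed *_long file is kept in an xattr (a user.cephos.lfn* attribute; the exact name depends on the LFN index version), so something like this on the OSD host shows it:
# <id> and <pgid> are placeholders, path shortened
getfattr -d -m 'user.cephos' /var/lib/ceph/osd/ceph-<id>/current/<pgid>_head/<...>_long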
-Original Message-
From: Eric Smith
Sent: Friday, July 10, 2020 6:59 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Luminous 12.2.12 - filestore OSDs take an hour to boot
I have a cluster running Luminous 12.2.12 with Filestore and it takes my OSDs
somewhere around an hour to start (They do start successfully - eventually). I
have the following log entries that seem to show the OSD process attempting to
descend into the PG directory on disk and create an object l
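One way to see where the hour goes is to start a single OSD in the foreground with filestore debugging raised; a sketch, with osd.12 and the debug level chosen arbitrarily:
# stop the unit first, then run the daemon in the foreground with extra logging
systemctl stop ceph-osd@12
ceph-osd -d -i 12 --debug-filestore 10 --setuser ceph --setgroup ceph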