Hi all,
For your information, I updated my Luminous cluster to the latest
version, 12.2.12, two weeks ago and, since then, I have not run into
any inconsistent PG problems :)
Regards,
rv
On 03/05/2019 at 11:54, Hervé Ballans wrote:
On 24/04/2019 at 10:06, Janne Johansson wrote:
On Wed 2
Hi Reed and Brad,
Did you ever learn more about this problem?
We currently have a few inconsistencies appearing in the same
environment (CephFS, v13.2.5) and with the same symptoms.
PG Repair doesn't fix the inconsistency, nor does Brad's omap
workaround earlier in the thread.
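For anyone following along, one way to see which object a scrub flagged
(the PG id below is just a placeholder) is:

    ceph health detail
    rados list-inconsistent-obj 2.1f --format=json-pretty

For a CephFS data pool, the object name starts with the file's inode
number in hex, which points at the file involved.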
In our case, we can fix by cp'ing the fi
Hi Jake,
I would definitely go for the "leave the rest unused" solution.
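A minimal sketch of what that looks like in practice (device, VG and LV
names here are made up, not taken from your setup):

    vgcreate ceph-db /dev/nvme0n1
    lvcreate -L 60G -n db-osd0 ceph-db    # one fixed-size DB/WAL LV per OSD
    ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db/db-osd0

Whatever is left on the SSD simply stays unallocated; that extra
over-provisioning also helps with endurance and sustained write
performance.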
Regards,
Mattia
On 5/29/19 4:25 PM, Jake wrote:
> Thank you for a lot of detailed and useful information :)
>
> I'm tempted to ask a related question on SSD endurance...
>
> If 60GB is the sweet spot for each DB/WAL par
Hi List / James,
In Ceph master (and also Ceph 14.2.1), in src/common/options.cc at
line 192:
Option::size_t sz{strict_iecstrtoll(val.c_str(), error_message)};
On 32-bit ARM with Clang 7.1.0, compilation fails hard at this line.
The reason is that strict_iecs
On Mon, May 27, 2019 at 2:36 AM Oliver Freyermuth
wrote:
>
> Dear Cephalopodians,
>
> in the process of migrating a cluster from Luminous (12.2.12) to Mimic
> (13.2.5), we have upgraded the FUSE clients first (we took the chance during
> a time of low activity),
> thinking that this should not c
Hello Wesley,
On Wed, May 29, 2019 at 8:35 AM Wesley Dillingham
wrote:
> On further thought, I'm now thinking this is telling me which rank is stopped
> (2), not that two ranks are stopped.
Correct!
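For reference, one place that set is visible is the MDSMap, e.g.:

    ceph fs dump | grep stopped

The "stopped" line lists rank numbers, so a "2" there means rank 2 was
stopped, not that two ranks are.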
> I guess I am still curious about why this information is retained here
Time has claimed that
Hello,
sorry to jump in.
I'm looking to expand an HDD cluster with SSDs.
I'm thinking about moving cephfs_metadata to the SSDs (maybe with a
device class?) or using them as a cache layer in front of the cluster.
Any tips on how to do it with ceph-ansible?
I can share the config I currently have
The cephfs_metadata pool makes sense on ssd, but it won't need a lot of
space. Chances are that you'll have plenty of ssd storage to spare for
other uses.
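If you go the device class route, the core of it is just a CRUSH rule
restricted to ssd plus a pool setting (the rule name below is made up):

    ceph osd crush rule create-replicated replicated-ssd default host ssd
    ceph osd pool set cephfs_metadata crush_rule replicated-ssd

Once the rule is applied, the metadata PGs backfill onto the ssd-class
OSDs on their own; whether you drive that through ceph-ansible or by
hand, the end state is the same.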
Personally, I'm migrating away from a cache tier and rebuilding my OSDs. I
am finding that performance with Bluestore OSDs with the block.db o
On Mon, Jun 3, 2019 at 3:06 PM James Wilkins
wrote:
>
> Hi all,
>
> We're after a bit of advice to ensure we're approaching this the right way.
>
> (version: 12.2.12, multi-mds, dirfrag is enabled)
>
> We have corrupt meta-data as identified by ceph
>
> health: HEALTH_ERR
> 2 MDSs report