Hi David,
Thank you for pointing out the option.
On http://docs.ceph.com/docs/infernalis/release-notes/ one can read:
* Ceph daemons now run as user and group ceph by default. The ceph
  user has a static UID assigned by Fedora and Debian (also used by
  derivative distributions like RHE
On Fri, Aug 17, 2018 at 7:05 PM, Daznis wrote:
> Hello,
>
>
> I have replaced one of our failed OSD drives and recreated a new OSD
> with ceph-deploy, and it fails to start.
Is it possible you haven't zapped the journal on nvme0n1p13?
>
> Command: ceph-deploy --overwrite-conf osd create --file
The reason to separate the items is to make one change at a time so you
know what might have caused your problems. Good luck.
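If a leftover journal header on the old partition is the suspect, zapping it before re-deploying rules that out. A sketch — the hostname osd-host is a placeholder, and both commands are destructive, so triple-check the device name first:

```shell
# ceph-deploy 2.x syntax (older 1.x releases use the host:device form instead):
ceph-deploy disk zap osd-host /dev/nvme0n1p13

# Alternatively, clear the old journal header in place on the OSD host:
dd if=/dev/zero of=/dev/nvme0n1p13 bs=1M count=10 oflag=direct
```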
On Sat, Aug 18, 2018, 4:52 AM Kees Meijs wrote:
> Hi David,
>
> Thank you for pointing out the option.
>
> On http://docs.ceph.com/docs/infernalis/release-notes/ one can
Hi again,
After listing all placement groups the problematic OSD (osd.0) being
part of, I forced a deep-scrub for all those PGs.
A few hours later (and some other deep scrubbing as well) the result
seems to be:
HEALTH_ERR 8 pgs inconsistent; 14 scrub errors
pg 3.6c is active+clean+inconsis
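The procedure described above — list every PG that osd.0 participates in, then force a deep-scrub on each — can be scripted roughly like this; the awk column assumes the pgid is the first field of the listing, which may differ between releases:

```shell
# List PGs whose acting set contains osd.0, then deep-scrub each one.
ceph pg ls-by-osd 0 | awk 'NR > 1 { print $1 }' | while read -r pg; do
    ceph pg deep-scrub "$pg"
done
```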
You can't change the file permissions while the OSDs are still running. The
instructions you pasted earlier said to stop the OSD and then run the chmod.
What do the logs of the primary OSD say about the PGs that are
inconsistent?
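The stop-then-fix-permissions sequence referred to above might look like this on a systemd host; osd.0 and the default data path are assumptions, and chown (rather than chmod) matches the Infernalis "run as user ceph" change quoted earlier in the thread:

```shell
systemctl stop ceph-osd@0
# Hand the OSD's data (and its journal symlink target, if any) to the ceph user:
chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0
```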
On Sat, Aug 18, 2018, 11:43 AM Kees Meijs wrote:
> Hi again,
>
> A
Hi everyone,
I am new to Ceph and am trying to test my understanding of the CRUSH
map. Attached is a hypothetical cluster diagram with 3 racks. In each
rack, the first host runs 3 SSD-based OSDs and the second runs 3
HDD-based OSDs. My goal is to create two rules that separate the SSD
and HDD performance doma
The previous email had mistakes in the rack a2 and a3 bucket
definitions: I missed the a2-2 and a3-2 hosts. They should be:
rack a2 {
        id -8
        alg straw
        hash 0
        item a2-1 weight 3.0
        item a2-2 weight 3.0
}
rack a3 {
        id -9
        alg straw
        hash 0
        item a3
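For the two-rule SSD/HDD split asked about above, pre-Luminous clusters (which lack device classes) usually carry two parallel hierarchies and one rule per root. A sketch, assuming the SSD hosts are named a1-1/a2-1/a3-1 and reusing straw buckets as in the corrected definitions; an analogous root hdd and hdd-rule would cover the second host in each rack:

```
root ssd {
        id -20
        alg straw
        hash 0
        item a1-1 weight 3.0
        item a2-1 weight 3.0
        item a3-1 weight 3.0
}

rule ssd-rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
```

Placing the hosts directly under the root gives host-level failure domains; to keep rack-level separation, one would insert SSD-only rack buckets between the root and the hosts.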
Good morning,
And... the results:
2018-08-18 17:45:08.927387 7fa3cbe0d700 0 log_channel(cluster) log
[INF] : 3.32 repair starts
2018-08-18 17:45:12.350343 7fa3c9608700 -1 log_channel(cluster) log
[ERR] : 3.32 soid -5/0032/temp_3.32_0_16187756_293/head: failed to
pick suitable auth object