[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-27 Thread Frank Schilder
Thanks a lot! I will fix the pool metadata and clean up my keys. Best regards, Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Ilya Dryomov Sent: 25 January 2020 09:01 To: Frank Schilder Cc: Yoann Moulin; ceph-users Subject:

[ceph-users] EC pool creation results in incorrect M value?

2020-01-27 Thread Smith, Eric
I have a Ceph Luminous (12.2.12) cluster with 6 nodes. I'm attempting to create an EC 3+2 pool with the following commands: 1. Create the EC profile: * ceph osd erasure-code-profile set es32 k=3 m=2 plugin=jerasure w=8 technique=reed_sol_van crush-failure-domain=host crush-root=sgshared
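A sketch of the steps, using the profile command quoted in the message; the verification step and the pool-creation line (including the PG count of 128 and the pool name ecpool32) are illustrative additions, not taken from the thread:

    # Create the erasure-code profile as in the message
    ceph osd erasure-code-profile set es32 k=3 m=2 plugin=jerasure w=8 \
        technique=reed_sol_van crush-failure-domain=host crush-root=sgshared

    # Confirm k and m were stored as intended before creating the pool
    ceph osd erasure-code-profile get es32

    # Hypothetical pool creation using the profile (PG count is illustrative)
    ceph osd pool create ecpool32 128 128 erasure es32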

[ceph-users] Re: EC pool creation results in incorrect M value?

2020-01-27 Thread Paul Emmerich
min_size in the crush rule and min_size in the pool are completely different things that happen to share the same name. Ignore min_size in the crush rule; it has virtually no meaning in almost all cases (like this one). Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us
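To see the two values Paul is distinguishing side by side (a sketch; the pool and rule names are placeholders for whatever the EC pool and its rule are actually called):

    # Pool-level min_size: the number of shards that must be available for I/O
    ceph osd pool get <pool-name> min_size

    # Rule-level min_size/max_size: legacy bounds in the rule definition,
    # not consulted for I/O availability decisions
    ceph osd crush rule dump <rule-name>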

[ceph-users] Re: EC pool creation results in incorrect M value?

2020-01-27 Thread Smith, Eric
Thanks for the info regarding min_size in the crush rule - does this seem like a bug to you then? Is anyone else able to reproduce this? -----Original Message----- From: Paul Emmerich Sent: Monday, January 27, 2020 11:15 AM To: Smith, Eric Cc: ceph-users@ceph.io Subject: Re: [ceph-users] EC po

[ceph-users] Re: EC pool creation results in incorrect M value?

2020-01-27 Thread Smith, Eric
OK I see this: https://github.com/ceph/ceph/pull/8008 Perhaps it's just to be safe... -----Original Message----- From: Smith, Eric Sent: Monday, January 27, 2020 11:22 AM To: Paul Emmerich Cc: ceph-users@ceph.io Subject: [ceph-users] Re: EC pool creation results in incorrect M value? Thanks f
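Following the PR's reasoning, it is the pool's min_size, not the rule's, that governs availability. For a k=3, m=2 pool that would look something like this (a sketch; ecpool32 is a placeholder name, and whether min_size should be k or k+1 depends on the release's defaults and your tolerance for running without redundancy):

    # Check the pool's effective min_size
    ceph osd pool get ecpool32 min_size

    # For k=3, m=2: min_size=4 (k+1) refuses writes once only k shards remain,
    # so the pool never accepts data with zero redundancy left
    ceph osd pool set ecpool32 min_size 4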

[ceph-users] data loss on full file system?

2020-01-27 Thread Håkan T Johansson
Hi, for test purposes I have set up two 100 GB OSDs, one holding a data pool and the other a metadata pool for CephFS. I'm running 14.2.6-1-gffd69200ad-1 with packages from https://mirror.croit.io/debian-nautilus. I'm then running a program that creates a lot of 1 MiB files by calling fopen()
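A rough shell approximation of the described test (an assumption: the original program uses fopen() directly, and /mnt/cephfs stands in for the actual mount point). The point is to check every write's status as the data pool fills; a write that reports success but whose data later disappears would indicate silent data loss:

    # Write 1 MiB files until the filesystem fills, checking each write's status
    i=0
    while :; do
        dd if=/dev/zero of=/mnt/cephfs/file.$i bs=1M count=1 conv=fsync 2>/dev/null \
            || { echo "write failed at file $i"; break; }
        i=$((i+1))
    done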

[ceph-users] Nautilus 14.2.6 ceph-volume bluestore _read_fsid unparsable uuid

2020-01-27 Thread Dave Hall
All, I've just spent a significant amount of time unsuccessfully chasing the _read_fsid unparsable uuid error on Debian 10 / Nautilus 14.2.6. Since this is a brand-new cluster, last night I gave up and moved back to Debian 9 / Luminous 12.2.11. In both cases I'm using the packages from Debi
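A common way to retry after this error is to wipe the leftovers of the failed attempt and re-create the OSD with ceph-volume (a sketch; /dev/sdX is a placeholder for the affected device, and --destroy also removes the LVM structures the failed run left behind):

    # Remove any leftover LVM/bluestore metadata from the failed attempt
    ceph-volume lvm zap /dev/sdX --destroy

    # Re-create the bluestore OSD from scratch
    ceph-volume lvm create --bluestore --data /dev/sdX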