[ceph-users] How do I get a sector marked bad?

2020-03-30 Thread David Herselman
Hi, We have a single inconsistent placement group where I then subsequently triggered a deep scrub and tried doing a 'pg repair'. The placement group remains in an inconsistent state. How do I discard the objects for this placement group only on the one OSD and get Ceph to essentially write th

[ceph-users] Odd CephFS Performance

2020-03-30 Thread Gabryel Mason-Williams
We have been benchmarking CephFS and comparing it to RADOS to see the performance difference and how much overhead CephFS has. However, we are getting odd results when using more than 1 OSD server (each OSD server has only one disk) using CephFS, but using RADOS everything appears normal. These tests are

[ceph-users] Re: Unable to use iscsi gateway with https | iscsi-gateway-add returns errors

2020-03-30 Thread Mike Christie
On 03/29/2020 04:43 PM, givemeone wrote: > Hi all, > I am installing ceph Nautilus and getting constantly errors while adding > iscsi gateways > It was working using http schema but after moving to https with wildcard > certs gives API errors > > Below some of my configurations > Thanks for you

[ceph-users] Re: Odd CephFS Performance

2020-03-30 Thread Mark Nelson
Hi Gabryel, Are the pools always using 1X replication?  The rados results are scaling like it's using 1X but the CephFS results definitely look suspect.  Have you tried turning up the iodepth in addition to tuning numjobs?  Also is this kernel cephfs or fuse?  The fuse client is far slower. 
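The tuning Mark suggests can be sketched as an fio invocation; the mount point, job name, and the particular iodepth/numjobs values below are placeholders, not anything from the thread:

```shell
# Hypothetical CephFS mount at /mnt/cephfs; raise iodepth *and* numjobs
# so the client can keep multiple OSD servers busy in parallel.
fio --name=cephfs-randread --directory=/mnt/cephfs \
    --rw=randread --bs=4k --size=1G --direct=1 \
    --ioengine=libaio --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```

With only `iodepth=1` and a single job, the benchmark measures round-trip latency rather than aggregate throughput, which can make extra OSD servers look like they add nothing.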

[ceph-users] Terrible IOPS performance

2020-03-30 Thread Jarett DeAngelis
Hi folks, I have a three-node cluster on a 10G network with very little traffic. I have a six-OSD flash-only pool with two devices — a 1TB NVMe drive and a 256GB SATA SSD — on each node, and here’s how it benchmarks: Oof. How can I troubleshoot this? Anthony mentioned that I might be able to ru
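One way to separate raw-cluster performance from client-side issues is a `rados bench` baseline against the pool; the pool name `flashpool` below is a placeholder:

```shell
# 30-second 4 KiB write test with 16 concurrent ops, keeping the
# objects so the read test has something to fetch:
rados bench -p flashpool 30 write -b 4096 -t 16 --no-cleanup
# Random-read pass against the objects written above:
rados bench -p flashpool 30 rand -t 16
# Remove the benchmark objects when done:
rados -p flashpool cleanup
```

If `rados bench` also shows poor IOPS, the problem is in the cluster (devices, network, replication settings) rather than in the client or filesystem layer.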

[ceph-users] Re: Terrible IOPS performance

2020-03-30 Thread Marc Roos
Your system is indeed slow, benchmark results are still not here ;)

[ceph-users] Re: How do I get a sector marked bad?

2020-03-30 Thread Dan van der Ster
Hi, I have a feeling that the pg repair didn't actually run yet. Sometimes if the OSDs are busy scrubbing, the repair doesn't start when you ask it to. You can force it through with something like: ceph osd set noscrub ceph osd set nodeep-scrub ceph config set osd_max_scrubs 3 ceph pg repair cep
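Dan's command sequence is truncated in the preview; a sketch of how it presumably continues, with `1.2f` as a placeholder PG id:

```shell
# Pause scheduled scrubs so the repair is dispatched immediately:
ceph osd set noscrub
ceph osd set nodeep-scrub
ceph config set osd osd_max_scrubs 3
ceph pg repair 1.2f
# Watch the cluster log for "repair starts" / "repair ok", then undo:
ceph osd unset noscrub
ceph osd unset nodeep-scrub
ceph config rm osd osd_max_scrubs
```

The `noscrub`/`nodeep-scrub` flags stop new scheduled scrubs from claiming the limited scrub slots, and raising `osd_max_scrubs` gives the manual repair a free slot even while existing scrubs finish.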

[ceph-users] Re: ceph cephadm generate-key => No such file or directory: '/tmp/tmp4ejhr7wh/key'

2020-03-30 Thread Sage Weil
On Mon, 30 Mar 2020, Ml Ml wrote: > Hello List, > > is this a bug? > > root@ceph02:~# ceph cephadm generate-key > Error EINVAL: Traceback (most recent call last): > File "/usr/share/ceph/mgr/cephadm/module.py", line 1413, in _generate_key > with open(path, 'r') as f: > FileNotFoundError: [E

[ceph-users] ceph cephadm generate-key => No such file or directory: '/tmp/tmp4ejhr7wh/key'

2020-03-30 Thread Ml Ml
Hello List, is this a bug? root@ceph02:~# ceph cephadm generate-key Error EINVAL: Traceback (most recent call last): File "/usr/share/ceph/mgr/cephadm/module.py", line 1413, in _generate_key with open(path, 'r') as f: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp4ejhr7wh
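The traceback shows the mgr module opening a key file in a temp directory that was never written. A minimal sketch of that failure mode (this is illustrative code, not the actual `cephadm/module.py` implementation): the key file only exists if the external keygen command ran and succeeded, so opening it unconditionally raises `FileNotFoundError`, e.g. when `ssh-keygen` is not installed in the mgr container.

```python
import os
import subprocess
import tempfile

def generate_key(keygen_cmd="ssh-keygen"):
    """Write a key into a temp dir via an external command, then read it
    back. If the command is missing or fails, no file exists to open."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "key")
        try:
            subprocess.run(
                [keygen_cmd, "-q", "-t", "rsa", "-N", "", "-f", path],
                check=True,
            )
        except (OSError, subprocess.CalledProcessError):
            # Command missing or failed: the key file was never written,
            # which is what surfaces as FileNotFoundError in the mgr.
            return None
        with open(path) as f:
            return f.read()
```

Checking for (and reporting) the keygen failure before the `open()` turns the opaque `FileNotFoundError` into an actionable error message.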

[ceph-users] Multiple CephFS creation

2020-03-30 Thread Jarett DeAngelis
Hi guys, Multiple filesystems are documented as an experimental feature, but the docs don’t explain how to ensure that affinity for a given MDS sticks to the second filesystem you create. Has anyone had success implementing a second CephFS? In my case it will be based on a completely different pool from my first on

[ceph-users] Re: samba ceph-vfs and scrubbing interval

2020-03-30 Thread David Disseldorp
Hi Marco and Jeff, On Fri, 27 Mar 2020 08:04:56 -0400, Jeff Layton wrote: > > i‘m running a 3 node ceph cluster setup with collocated mons and mds > > for actually 3 filesystems at home since mimic. I’m planning to > > downgrade to one FS and use RBD in the future, but this is another > > story.

[ceph-users] Re: Unable to use iscsi gateway with https | iscsi-gateway-add returns errors

2020-03-30 Thread Matthew Oliver
*sigh* and this time reply to all. rbd-target-api is a little opinionated on where the ssl cert and key files live and what they're named. It expects: cert_files = ['/etc/ceph/iscsi-gateway.crt', '/etc/ceph/iscsi-gateway.key'] So make sure these exist, and are named corre
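So in practice the fix is to copy the wildcard cert and key into the hard-coded names on every gateway node; the source paths below are placeholders for wherever your wildcard cert lives:

```shell
# rbd-target-api only looks for these exact paths/names:
cp /path/to/wildcard.crt /etc/ceph/iscsi-gateway.crt
cp /path/to/wildcard.key /etc/ceph/iscsi-gateway.key
chmod 600 /etc/ceph/iscsi-gateway.key
systemctl restart rbd-target-api
```

Repeat on each gateway, since every node's API instance validates its own local copy of the certificate.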

[ceph-users] Re: Multiple CephFS creation

2020-03-30 Thread Eugen Block
Hi, to create a second filesystem you have to use different pools anyway. If you already have one CephFS up and running then you also should have at least one standby daemon, right? If you create a new FS and that standby daemon is not configured to any specific rank then it will be used f
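A sketch of creating the second filesystem and pinning an MDS to it; all names (`cephfs2`, the pool names, `mds.b`) are placeholders, and the `mds_join_fs` affinity setting assumes a release that supports it (Octopus or later):

```shell
# Multiple filesystems must be enabled explicitly:
ceph fs flag set enable_multiple true --yes-i-really-mean-it
# Separate pools for the new FS:
ceph osd pool create cephfs2_meta 16
ceph osd pool create cephfs2_data 32
ceph fs new cephfs2 cephfs2_meta cephfs2_data
# Pin a standby daemon so it prefers the new filesystem:
ceph config set mds.b mds_join_fs cephfs2
```

Without an affinity setting, any unpinned standby daemon can be grabbed by whichever filesystem loses a rank first, which is the behaviour Eugen describes.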