Hi,
We have a single inconsistent placement group. I subsequently triggered a deep
scrub and tried a 'pg repair', but the placement group remains in an
inconsistent state.
How do I discard the objects for this placement group only on the one OSD and
get Ceph to essentially write th
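(Not part of the quoted message, just a sketch of how the inconsistency is
usually inspected first; <pool> and <pgid> are placeholders:)
rados list-inconsistent-pg <pool>                        # which PGs in the pool are inconsistent
rados list-inconsistent-obj <pgid> --format=json-pretty  # which objects/shards report the errors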
We have been benchmarking CephFS and comparing it to RADOS to see the
performance difference and how much overhead CephFS has. However, we are
getting odd results when using more than 1 OSD server (each OSD server has only
one disk) with CephFS, whereas with RADOS everything appears normal. These
tests are
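(A sketch of the RADOS-level side of such a comparison, with a placeholder
pool name; not the poster's actual test commands:)
rados bench -p testpool 60 write -t 16 --no-cleanup   # raw RADOS writes, 16 concurrent ops
rados bench -p testpool 60 seq -t 16                  # read back the objects just written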
On 03/29/2020 04:43 PM, givemeone wrote:
> Hi all,
> I am installing Ceph Nautilus and constantly getting errors while adding
> iscsi gateways.
> It was working using the http scheme, but after moving to https with wildcard
> certs the API gives errors.
>
> Below some of my configurations
> Thanks for you
Hi Gabryel,
Are the pools always using 1X replication? The rados results are
scaling like it's using 1X but the CephFS results definitely look
suspect. Have you tried turning up the iodepth in addition to tuning
numjobs? Also, is this kernel cephfs or fuse? The fuse client is far
slower.
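For example, a minimal fio invocation that raises both knobs might look like
this (just a sketch, assuming a kernel-mounted CephFS at /mnt/cephfs and
libaio; adjust paths and sizes to your setup):
fio --name=randwrite --directory=/mnt/cephfs --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --size=2G --runtime=60 --time_based \
    --iodepth=32 --numjobs=4 --group_reporting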
Hi folks,
I have a three-node cluster on a 10G network with very little traffic. I have a
six-OSD flash-only pool with two devices — a 1TB NVMe drive and a 256GB SATA
SSD — on each node, and here’s how it benchmarks:
Oof. How can I troubleshoot this? Anthony mentioned that I might be able to ru
Your system is indeed slow, but the benchmark results are still not here ;)
-----Original Message-----
Sent: 27 March 2020 19:44
To: ceph-users@ceph.io
Subject: [ceph-users] Terrible IOPS performance
Hi folks,
I have a three-node cluster on a 10G network with very little traffic. I
have a six-OS
Hi,
I have a feeling that the pg repair didn't actually run yet. Sometimes
if the OSDs are busy scrubbing, the repair doesn't start when you ask
it to.
You can force it through with something like:
ceph osd set noscrub
ceph osd set nodeep-scrub
ceph config set osd_max_scrubs 3
ceph pg repair <pgid>
cep
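(Not in the quoted message, but presumably the flags get cleared again once
the repair has run:)
ceph osd unset noscrub
ceph osd unset nodeep-scrub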
On Mon, 30 Mar 2020, Ml Ml wrote:
> Hello List,
>
> is this a bug?
>
> root@ceph02:~# ceph cephadm generate-key
> Error EINVAL: Traceback (most recent call last):
> File "/usr/share/ceph/mgr/cephadm/module.py", line 1413, in _generate_key
> with open(path, 'r') as f:
> FileNotFoundError: [E
Hello List,
is this a bug?
root@ceph02:~# ceph cephadm generate-key
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/module.py", line 1413, in _generate_key
with open(path, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp4ejhr7wh
Hi guys,
This is documented as an experimental feature, but the documentation doesn't
explain how to ensure that a given MDS's affinity sticks to the second
filesystem you create. Has anyone had success implementing a second CephFS? In
my case it will
be based on a completely different pool from my first on
Hi Marco and Jeff,
On Fri, 27 Mar 2020 08:04:56 -0400, Jeff Layton wrote:
> > I'm running a 3-node ceph cluster setup with collocated mons and mds,
> > currently for 3 filesystems at home, since Mimic. I'm planning to
> > go down to one FS and use RBD in the future, but this is another
> > story.
*sigh* and this time reply to all.
rbd-target-api is a little opinionated about where the ssl cert and key files
live and what they're named. It expects:
cert_files = ['/etc/ceph/iscsi-gateway.crt',
              '/etc/ceph/iscsi-gateway.key']
So make sure these exist, and are named corre
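Assuming the wildcard cert and key are already in PEM form (the source paths
below are placeholders), that boils down to something like:
cp /path/to/wildcard.crt /etc/ceph/iscsi-gateway.crt
cp /path/to/wildcard.key /etc/ceph/iscsi-gateway.key
systemctl restart rbd-target-api   # on each gateway, so the new files are picked up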
Hi,
to create a second filesystem you have to use different pools anyway.
If you already have one CephFS up and running then you should also
have at least one standby daemon, right? If you create a new FS and
that standby daemon is not configured for any specific rank then it
will be used f
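A rough sketch of how that can be wired up (fs, pool and MDS names below are
placeholders, and mds_join_fs needs Octopus or newer):
ceph fs flag set enable_multiple true --yes-i-really-mean-it
ceph fs new cephfs2 cephfs2_metadata cephfs2_data
ceph config set mds.b mds_join_fs cephfs2   # make this daemon prefer the new fs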