Hi Joe and Mehmet!
Thanks for your responses!
The requested outputs are at the end of the message.
But to make my question clearer:
What we are actually after is not the CURRENT usage of our OSDs, but
stats on the total GBs written in the cluster, per OSD, and the read/write ratio.
With those num
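For what it's worth, one way to get at per-OSD cumulative client I/O is the OSD perf counters (a rough sketch only; osd.12 is just an example id, jq is only used for readability, and these counters reset whenever the OSD restarts):

ceph tell osd.12 perf dump | jq '.osd | {op_r, op_w, op_in_bytes, op_out_bytes}'

op_in_bytes is client data written to the OSD and op_out_bytes is client data read from it, so the ratio of the two gives a rough read/write byte ratio.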
Hi Mario,
On Mon, Feb 10, 2020 at 07:50:15PM +0100, Ml Ml wrote:
> Hello List,
>
> First of all: yes, I made mistakes. Now I am trying to recover :-/
>
> I had a healthy 3-node cluster which I wanted to convert to a single one.
> My goal was to reinstall a fresh 3-node cluster and start with 2
Say I think my CephFS is slow when I rsync to it, slower than it used to
be. First of all, I do not get why it reads so much data. I assume the
file attributes need to come from the MDS server, so the rsync backup
should mostly cause writes, should it not?
I think it started being slow after enabling s
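One way to see where those reads actually land (a rough check, not a fix; cephfs_metadata and cephfs_data are placeholders for whatever the pools are called here) is to watch per-pool client I/O while the rsync runs:

ceph osd pool stats cephfs_metadata
ceph osd pool stats cephfs_data

If the read rate shows up on the data pool, rsync itself is reading file contents back (its delta algorithm does that for changed files); if it shows up on the metadata pool, it is MDS traffic.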
> On 11 Feb 2020, at 14:53, Marc Roos wrote:
>
>
> Say I think my CephFS is slow when I rsync to it, slower than it used to
> be. First of all, I do not get why it reads so much data. I assume the
> file attributes need to come from the MDS server, so the rsync backup
> should mostly cause writes, should it not?
Quick update in case anyone reads my previous post.
No ideas were forthcoming on how to fix the assert that was flapping the
OSD (caused by deleting unfound objects).
The affected PG was readable, so we decided to recycle the OSD...
Destroy the flapping primary OSD:
# ceph osd destroy 443 --force
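After that, the recreate step would presumably be along these lines; this is only a sketch assuming ceph-volume lvm, with /dev/sdX standing in for the real device and the destroyed id being reused:

# ceph-volume lvm zap /dev/sdX --destroy
# ceph-volume lvm create --osd-id 443 --data /dev/sdX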
Igor,
You are exactly right - it is my fault, I have failed to read the code
correctly.
Cheers,
Boris.
>>> And it seems smartctl on our Seagate ST4000NM0034 drives does not give us
>>> data on total bytes written or read
If it's a SAS device, it's not always obvious where to find this information.
You can use Seagate's openSeaChest toolset.
For any (SAS/SATA, HDD/SSD) device, the --deviceInfo will give
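In practice that looks something like the following (the binary name openSeaChest_Basics and /dev/sg1 are assumptions; adjust for how the package names its tools and for the real device):

openSeaChest_Basics --scan
openSeaChest_Basics -d /dev/sg1 --deviceInfo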
Following the list migration I need to re-open this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2020-January/038014.html
...
Upgraded to 14.2.7; it doesn't appear to have affected the behavior. As requested:
~$ ceph tell mds.mds1 heap stats
2020-02-10 16:52:44.313 7fbda2cae700 0 c
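If the stats show a lot of memory sitting in tcmalloc's freelists rather than in use, one thing worth trying (only a suggestion, it does not address the underlying growth) is:

~$ ceph tell mds.mds1 heap release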
Thanks Samy, I will give this a try.
It would be helpful if there were some value that shows cache misses or
the like, so you would have a more precise idea of how much to increase
the cache by. I have now added a couple of GBs to see if it is being used and
helps speed things up.
PS. I have been look
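For reference, a rough way to see how full the cache is and to raise the limit (mds.a is a placeholder for the daemon name, and the first command has to be run on the MDS host):

ceph daemon mds.a cache status
ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB

cache status reports the current cache memory usage, which at least shows whether the extra GBs are actually being consumed.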