Last week I asked about a rogue inode that was causing ceph-mds to segfault
during replay. We didn't get any suggestions from this list, so we have been
familiarizing ourselves with the ceph source code, and have added the following
patch:
--- a/src/mds/CInode.cc
+++ b/src/mds/CInode.cc
@@ -7
I have probably misunderstood how to create erasure coded pools, so I may be in
need of some theory, and I would appreciate it if you could point me to
documentation that may clarify my doubts.
So far I have 1 cluster with 3 hosts and 30 OSDs (10 per host).
I tried to create an erasure code profile like so:
The default failure domain in Ceph is "host" (see the EC profile), i.e., you
need at least k+m hosts (at least k+m+1 is better for production
setups).
You can change that to OSD, but that's not a good idea for a
production setup for obvious reasons. It's slightly better to write a
crush rule that expli
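For what it's worth, a minimal sketch of that on a 3-host cluster (the profile
name "ec21" and pool name "ecpool" are just examples; k=2/m=1 is chosen so that
k+m fits 3 hosts with the default host failure domain):

# ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host
# ceph osd erasure-code-profile get ec21
# ceph osd pool create ecpool 128 128 erasure ec21

The first command defines the profile, the second shows the resulting settings,
and the third creates an erasure-coded pool that uses it (the PG numbers are
placeholders).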
Ok, I'm lost here.
How am I supposed to write a crush rule?
So far I managed to run:
# ceph osd crush rule dump test -o test.txt
So I can edit the rule. Now I have two problems:
1. What are the functions and operations to use here? Is there documentation
anywhere about this?
2. How may I create a
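To give an idea of what such a rule can look like once the map is decompiled,
here is only a sketch (the rule name and id are made up); it spreads the 6
chunks of a k=4/m=2 profile as 2 OSDs on each of 3 hosts:

rule ec_k4m2 {
        id 2
        type erasure
        min_size 3
        max_size 6
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 3 type host
        step chooseleaf indep 2 type osd
        step emit
}

The "step choose indep 3 type host" / "step chooseleaf indep 2 type osd" pair is
what lets k+m exceed the number of hosts while still limiting how many chunks a
single host failure can take out.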
Hello Lei,
On Thu, Oct 17, 2019 at 8:43 PM Lei Liu wrote:
>
> Hi cephers,
>
> We have some Ceph clusters that use CephFS in production (mounted with the
> kernel CephFS client), but several clients often keep a lot of caps (millions)
> unreleased. I know this is due to the client's inability to complete the cach
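For anyone trying to see where the caps are piling up: the per-session cap
counts can be read from the MDS admin socket, e.g. (assuming a daemon named
mds.a; the socket command is "session ls"):

# ceph daemon mds.a session ls | grep -E '"id"|"num_caps"'

Each session entry lists the client id together with its current num_caps.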
Full disclosure - I have not created an erasure code pool yet!
I have been wanting to do the same thing that you are attempting and
have these links saved. I believe this is what you are looking for.
This link is for decompiling the CRUSH rules and recompiling:
https://docs.ceph.com/docs/lumi
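The cycle those docs describe is roughly the following (file names below are
just placeholders):

# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
(edit crushmap.txt, e.g. add or adjust a rule)
# crushtool -c crushmap.txt -o crushmap.new
# ceph osd setcrushmap -i crushmap.new

getcrushmap/setcrushmap move the compiled map in and out of the cluster, and
crushtool -d / -c convert it to and from the editable text form.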
Thanks for your reply.
Yes, I already set it.
[mds]
mds_max_caps_per_client = 10485760  # default is 1048576
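One way to double-check the value the running MDS has actually picked up is to
ask its admin socket (mds.a here is just a placeholder for the daemon name):

# ceph daemon mds.a config get mds_max_caps_per_client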
I think the current configuration is already big enough per client. Do I need
to keep increasing this value?
Thanks.
Patrick Donnelly wrote on Sat, Oct 19, 2019 at 6:30 AM:
> Hello Lei,
>
> On
Only the OSDs are on v12.2.8; all of the MDS and MON daemons use v12.2.12.
# ceph versions
{
    "mon": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 4