[ceph-users] Re: Erasure Profile Pool caps at pg_num 1024

2020-02-17 Thread Bandelow, Gunnar
Hi Eugen, thank you for your contribution. I will definitely think about leaving a number of spare hosts, very good point. My main problem remains the health warning "Too few PGs". This implies that the PG number in the pool is too low, and I can't increase it with an erasure profile. I al
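A minimal sketch of how pg_num on a pool is normally inspected and raised ("ecpool" is a placeholder name, not taken from this thread; the last command assumes the pg_autoscaler mgr module is enabled):

# ceph osd pool get ecpool pg_num           # current value
# ceph osd pool set ecpool pg_num 2048      # request the increase; on Nautilus pgp_num follows automatically
# ceph osd pool autoscale-status            # what the autoscaler thinks the pool should have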

[ceph-users] Re: bluestore compression questions

2020-02-17 Thread Igor Fedotov
Hi Andras, please find my answers inline. On 2/15/2020 12:27 AM, Andras Pataki wrote: We're considering using bluestore compression for some of our data, and I'm not entirely sure how to interpret compression results. As an example, one of the osd perf dump results shows: "bluestore
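A sketch of how those counters are usually pulled and read (osd.0 is a placeholder, and this assumes the counters sit under the bluestore section of perf dump as in Andras's snippet): bluestore_compressed_original is the logical size of the data before compression, bluestore_compressed is what it compressed down to, and bluestore_compressed_allocated is what was actually allocated on disk for it.

# ceph daemon osd.0 perf dump | jq '.bluestore | {bluestore_compressed, bluestore_compressed_allocated, bluestore_compressed_original}'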

[ceph-users] Re: Excessive write load on mons after upgrade from 12.2.13 -> 14.2.7

2020-02-17 Thread Georg F
Hi Peter, could be a totally different problem, but did you run the command "ceph osd require-osd-release nautilus" after the upgrade? We had poor performance after upgrading to Nautilus, and running this command fixed it. The same was reported by others for previous updates. Here is my original
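A sketch of the finalization step being referred to, plus a quick check that it took effect (Dan shows a jq variant of the same check further down):

# ceph osd require-osd-release nautilus
# ceph osd dump | grep require_osd_release     # should now report nautilus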

[ceph-users] Re: centos7 / nautilus where to get kernel 5.5 from?

2020-02-17 Thread Konstantin Shalygin
On 2/14/20 9:18 PM, Marc Roos wrote: I have a default centos7 setup with nautilus. I have been asked to install 5.5 to check a 'bug'. Where should I get this from? I read that the elrepo kernel is not compiled like the RHEL one. http://elrepo.org/tiki/kernel-ml k
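For reference, a sketch of the usual way to get kernel-ml onto CentOS 7 from ELRepo (steps per the page linked above; the grub entry index is an assumption and may differ on your system):

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# yum --enablerepo=elrepo-kernel install kernel-ml
# grub2-set-default 0 && reboot                # new kernel is usually menu entry 0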

[ceph-users] Re: Excessive write load on mons after upgrade from 12.2.13 -> 14.2.7

2020-02-17 Thread Marc Roos
How do you check if you issued this command in the past? -Original Message- To: ceph-users@ceph.io Subject: [ceph-users] Re: Excessive write load on mons after upgrade from 12.2.13 -> 14.2.7 Hi Peter, could be a totally different problem but did you run the command "ceph osd require

[ceph-users] Re: Excessive write load on mons after upgrade from 12.2.13 -> 14.2.7

2020-02-17 Thread Dan van der Ster
This means it has been applied: # ceph osd dump -f json | jq .require_osd_release (output: "nautilus") -- dan On Mon, Feb 17, 2020 at 11:10 AM Marc Roos wrote: > How do you check if you issued this command in the past? > -Original Message- > To: ceph-users@ceph.io > Subject: [ceph-users]

[ceph-users] Re: centos7 / nautilus where to get kernel 5.5 from?

2020-02-17 Thread Marc Roos
Ok, if elrepo is fine, I will use it. This warning sort of made sense to me: "Warning: Please note that installing a new kernel is not officially supported by either the RHEL or CentOS project. It is also possible that your system may not boot. As the kernel-ml/lt packages are built from the source tar

[ceph-users] Re: MDS: obscene buffer_anon memory use when scanning lots of files

2020-02-17 Thread Dan van der Ster
On Mon, Feb 10, 2020 at 8:31 PM John Madden wrote: > > Upgraded to 14.2.7, doesn't appear to have affected the behavior. As > requested: In case it wasn't clear -- the fix that Patrick mentioned was postponed to 14.2.8. -- dan

[ceph-users] Identify slow ops

2020-02-17 Thread Thomas Schneider
Hi, the current output of ceph -s reports a warning: "2 slow ops, oldest one blocked for 347335 sec, mon.ld5505 has slow ops". This time is increasing.
root@ld3955:~# ceph -s
  cluster:
    id: 6b1b5117-6e08-4843-93d6-2da3cf8a6bae
    health: HEALTH_WARN
            9 daemons have recently crash
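A sketch of how those blocked ops can be inspected directly on the monitor that reports them (run on the host carrying mon.ld5505; assumes the admin socket is reachable as usual):

# ceph health detail                 # confirms which daemon reports SLOW_OPS
# ceph daemon mon.ld5505 ops         # dumps the in-flight/blocked ops with their age and current state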

[ceph-users] cephfs metadata

2020-02-17 Thread Frank R
Hi all, Is there a way to estimate how much storage space is required for CephFS metadata given an expected number of files in the filesystem? thx Frank
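There is no fixed per-file number, but a rough empirical figure can be taken from an existing filesystem by dividing the metadata pool's usage by its inode count; a sketch, with "cephfs_metadata" as a placeholder pool name:

# ceph df detail | grep cephfs_metadata     # bytes currently stored in the metadata pool
# ceph fs status                            # per-rank inode (inos) and dentry (dns) counts

The resulting bytes-per-inode value depends heavily on directory layout, xattrs and snapshots, so treat it as an order-of-magnitude estimate only.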

[ceph-users] Fwd: Casual survey on the successful usage of CephFS in production

2020-02-17 Thread huxia...@horebdata.cn
Dear Folks, I am planning a file systems project and CephFS is under serious consideration. But browsing the web I found some negative comments on CephFS about stability and losing data. So I would like to learn more about the latest developments in CephFS stability, either a successful

[ceph-users] Re: Fwd: Casual survey on the successful usage of CephFS in production

2020-02-17 Thread Marc Roos
CephFS is considered to be stable. I have been using it with only one MDS for 2-3 years in a low-load environment without any serious issues. -Original Message- Sent: 17 February 2020 16:07 To: ceph-users Subject: [ceph-users] Fwd: Casual survey on the successful usa

[ceph-users] Re: CephFS hangs with access denied

2020-02-17 Thread Dietmar Rieder
Hi, we have now set mds_session_blacklist_on_timeout to false, mds_session_blacklist_on_evict to false and mds_cap_revoke_eviction_timeout to 900. For now there has been no loss of mount or kernel crash. However, one of our big computation jobs has finished, so the load on the fs is lower as well. We will ke
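For reference, a sketch of applying those three settings at runtime through the centralized config on Nautilus (they can equally be placed in the [mds] section of ceph.conf and the MDS restarted):

# ceph config set mds mds_session_blacklist_on_timeout false
# ceph config set mds mds_session_blacklist_on_evict false
# ceph config set mds mds_cap_revoke_eviction_timeout 900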

[ceph-users] Re: Identify slow ops

2020-02-17 Thread Paul Emmerich
That's probably just https://tracker.ceph.com/issues/43893 (a harmless bug). Restart the mons to get rid of the message. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90
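A sketch of that restart, one mon at a time so quorum is never lost (ld5505 is the mon named in the warning; the systemd unit name assumes a standard package install):

# systemctl restart ceph-mon@ld5505     # run on the host carrying that mon
# ceph -s                               # wait for quorum to re-form before restarting the next mon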

[ceph-users] Performance of old vs new hw?

2020-02-17 Thread jesper
Hi, we have some oldish servers with SSDs, all on 25 Gbit NICs: R815, AMD, 2.4 GHz+. Are there significant performance benefits in moving to new NVMe-based hardware with new CPUs? +20% IOPS? +50% IOPS? Jesper Sent from myMail for iOS
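Rather than guessing a percentage, one concrete approach is to run the same small-block benchmark on the existing cluster and on an evaluation node and compare; a sketch using rados bench ("testbench" is a throwaway placeholder pool):

# ceph osd pool create testbench 32
# rados bench -p testbench 60 write -b 4096 -t 16 --no-cleanup   # 4 KB write IOPS baseline
# rados bench -p testbench 60 rand -t 16                         # read side, reusing the objects just written
# rados -p testbench cleanup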