[ceph-users] bluestore compression enabled but no data compressed

2018-09-18 Thread Frank Schilder
00 All as it should be, except for compression. Am I overlooking something? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-12 Thread Frank Schilder
possibly provide a source or sample commands? Thanks and best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: David Turner Sent: 09 October 2018 17:42 To: Frank Schilder Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-12 Thread Frank Schilder
compression happening. If you know about something other than the "ceph osd pool set" commands, please let me know. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: David Turner Sent: 12 October 2018 15:47:20 To:
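For reference, the pool-level settings referred to here take the following form; a minimal sketch with example values (the pool name con-fs-data is borrowed from a later message in this archive):

    # per-pool bluestore compression settings (example values only)
    ceph osd pool set con-fs-data compression_mode aggressive
    ceph osd pool set con-fs-data compression_algorithm snappy
    ceph osd pool set con-fs-data compression_required_ratio 0.875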

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-12 Thread Frank Schilder
you know. Thanks and have a nice weekend, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: David Turner Sent: 12 October 2018 16:50:31 To: Frank Schilder Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] bluestore compression enabled but n

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-19 Thread Frank Schilder
e questions, this would be great. The most important thing right now is that I got it to work. Thanks for your help, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Frank Schilder Sent: 12 October 2018 17:00

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-23 Thread Frank Schilder
(see question marks in table above, what is the resulting mode?). What I would like to do is enable compression on all OSDs, enable compression on all data pools and disable compression on all metadata pools. Data and metadata pools might share OSDs in the future. The above ta
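A sketch of that intended end state, assuming the Mimic-style centralized config store is available and using the pool names as examples only:

    # cluster-wide default for all bluestore OSDs
    ceph config set osd bluestore_compression_mode aggressive
    # enable on the data pool, disable on the metadata pool
    ceph osd pool set con-fs-data compression_mode aggressive
    ceph osd pool set con-fs-meta compression_mode none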

Re: [ceph-users] bluestore compression enabled but no data compressed

2019-03-16 Thread Frank Schilder
Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Ragan, Tj (Dr.) Sent: 14 March 2019 11:22:07 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] bluestore compression enabled but no

Re: [ceph-users] Checking cephfs compression is working

2019-03-26 Thread Frank Schilder
helps, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Rhian Resnick Sent: 16 November 2018 16:58:04 To: ceph-users@lists.ceph.com Subject: [ceph-users] Checking cephfs compression is working How do you confirm
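One way to confirm compression is actually taking effect is to look at the bluestore performance counters on an OSD that backs the pool; a sketch, assuming osd.0 is such an OSD:

    # compressed vs. original bytes as tracked by bluestore on this OSD
    ceph daemon osd.0 perf dump | grep -E 'compress'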

[ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-13 Thread Frank Schilder
user running the benchmark. Only IO to particular files/a particular directory stopped, so this problem seems to remain isolated. Also, the load on the servers was not high during the test. The fs remained responsive to other users. Also, the MDS daemons never crashed. There was no fail-over e
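When diagnosing this kind of hang, the blocked operations and their event timelines can be inspected on the MDS admin socket; a sketch, assuming the active daemon is mds.ceph-08 (the hostname is taken from a later message in this thread):

    ceph daemon mds.ceph-08 dump_ops_in_flight   # in-flight ops with per-event timestamps
    ceph daemon mds.ceph-08 objecter_requests    # outstanding requests to the OSDs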

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-15 Thread Frank Schilder
"time": "2019-05-15 11:38:36.511381", "event": "header_read" }, { "time": "2019-05-15 11:38:36.511383", "event": "throttled"

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-15 Thread Frank Schilder
relevant if multiple MDS daemons are active on a file system. = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 16 May 2019 05:50 To: Frank Schilder Cc: Stefan Kooman; ceph-users@lists.ceph.com Subject: Re: [ceph

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-16 Thread Frank Schilder
single-file-read load on it. I hope it doesn't take too long. Thanks for your input! = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 16 May 2019 09:35 To: Frank Schilder Subject: Re: [ceph-users] mimic

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-16 Thread Frank Schilder
be, keeping in mind that we are in a pilot production phase already and need to maintain integrity of user data? Is there any counter showing if such operations happened at all? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-18 Thread Frank Schilder
0b~1,10d~1,10f~1,111~1] The relevant pools are con-fs-meta and con-fs-data. Best regards, Frank = Frank Schilder AIT Risø Campus Bygning 109, rum S14
[root@ceph-08 ~]# cat /etc/tuned/ceph/tuned.conf
[main]
summary=Settings for ceph cluster. Derived from throughput-performance.
inc

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-18 Thread Frank Schilder
versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} Sorry, I should have checked this first. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-18 Thread Frank Schilder
00 PGs per OSD. I actually plan to give the cephfs a bit higher share for performance reasons. It's on the list. Thanks again and have a good weekend, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Stefan Kooman Sent: 18 May 201

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-20 Thread Frank Schilder
Dear Yan, thank you for taking care of this. I removed all snapshots and stopped snapshot creation. Please keep me posted. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 20 May 2019 13:34:07

Re: [ceph-users] Default min_size value for EC pools

2019-05-20 Thread Frank Schilder
sion. Either min_size=k is safe or not. If it is not, it should never be used anywhere in the documentation. I hope I marked my opinions and hypotheses clearly and that the links are helpful. If anyone could shed some light on as to why exactly min_size=k+1 is important, I would be grateful. Best r
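For context, min_size is an ordinary pool property that can be inspected and raised; a minimal sketch for a hypothetical 6+2 erasure-coded pool, where k+1 = 7:

    ceph osd pool get con-fs-data min_size
    ceph osd pool set con-fs-data min_size 7   # k+1 for k=6, m=2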

Re: [ceph-users] Default min_size value for EC pools

2019-05-20 Thread Frank Schilder
an interesting feature? Is there any reason for not remapping all PGs (if possible) prior to starting recovery? It would eliminate the lack of redundancy for new writes (at least for new objects). Thanks again and best regards, ===== Frank Schilder AIT Risø Campus Bygni

Re: [ceph-users] Default min_size value for EC pools

2019-05-20 Thread Frank Schilder
10/18/surviving-a-ceph-cluster-outage-the-hard-way/ . You will easily find more. The deeper problem here is called "split-brain" and there is no real solution to it except to avoid it at all cost. Best regards, ===== Frank Schilder AIT Risø Campus Bygni

Re: [ceph-users] Default min_size value for EC pools

2019-05-20 Thread Frank Schilder
Dear Maged, thanks for elaborating on this question. Is there already information on which release this patch will land in? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] Default min_size value for EC pools

2019-05-21 Thread Frank Schilder
that cannot be questioned by a single OSD trying to mark itself as in. At least, the only context in which I have heard of OSD flapping was in connection with 2/1 pools. I have never seen such a report for, say, 3/2 pools. Am I overlooking something here? Best regards, ===== Frank Schilder AIT Risø

Re: [ceph-users] cephfs causing high load on vm, taking down 15 min later another cephfs vm

2019-05-23 Thread Frank Schilder
high-network-load scheduled tasks on your machines (host or VM), or something somewhere else affecting the relevant network traffic (backups etc.)? Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Marc Roos Se

[ceph-users] Pool configuration for RGW on multi-site cluster

2019-06-17 Thread Frank Schilder
ng crush rules to adjust locations of pools, etc. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14
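A minimal sketch of steering an RGW pool to a particular device class with a dedicated crush rule (the rule name and pool name are only examples):

    # replicated rule restricted to SSD OSDs, failure domain host
    ceph osd crush rule create-replicated rgw-ssd default host ssd
    ceph osd pool set default.rgw.buckets.index crush_rule rgw-ssd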

[ceph-users] ceph fs: stat fails on folder

2019-06-17 Thread Frank Schilder
stable) I can't see anything unusual in the logs or health reports. Thanks for your help! ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] ceph fs: stat fails on folder

2019-06-17 Thread Frank Schilder
Please ignore the message below, it has nothing to do with ceph. Sorry for the spam. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Frank Schilder Sent: 17 June 2019 20:33 To: ceph

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Frank Schilder
replicated pools, the aggregate IOPS might be heavily affected. I have, however, no data on that case. Hope that helps, Frank = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Dan van der Ster Sent: 20

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Frank Schilder
Typo below, I meant "I doubled bluestore_compression_min_blob_size_hdd ..." From: Frank Schilder Sent: 20 June 2019 19:02 To: Dan van der Ster; ceph-users Subject: Re: [ceph-users] understanding the bluestore blob, chunk and compression para
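Assuming the Mimic-style centralized config store is used and the usual 128 KiB default for this parameter, doubling it would look roughly like this (OSDs may need a restart for the change to take effect):

    # 2 x 131072; example value only
    ceph config set osd bluestore_compression_min_blob_size_hdd 262144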

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-06-21 Thread Frank Schilder
Dear Yan, Zheng, does mimic 13.2.6 fix the snapshot issue? If not, could you please send me a link to the issue tracker? Thanks and best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 20 May 2019

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-08 Thread Frank Schilder
e works well for the majority of our use cases. We can still build small expensive pools to accommodate special performance requests. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of David Sent

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-09 Thread Frank Schilder
_size=object_size/k. Coincidentally, for spinning disks this also seems to imply best performance. If this is wrong, maybe a disk IO expert can provide a better explanation as a guide for EC profile choices? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, ru

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-09 Thread Frank Schilder
integer. alloc_size should be an integer multiple of object_size/k. = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Frank Schilder Sent: 09 July 2019 09:22 To: Nathan Fish; ceph-users@lists.ceph.com Subject: Re: [ceph-users] What's
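A quick worked example of the divisibility point, assuming the common 4 MiB object size for RBD/CephFS data:

    # k=4 divides the default 4 MiB object evenly, k=6 does not:
    echo $(( 4194304 / 4 ))   # 1048576 -> whole 1 MiB chunks
    echo $(( 4194304 % 6 ))   # 4 -> 4194304/6 is not a whole number of bytes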

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-11 Thread Frank Schilder
config, kernel parameters, etc. One needs to test what one has. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Lars Marowsky-Bree Sent: 11 July 2019 10:14:04 To: ceph-users@lists.ceph.com

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-11 Thread Frank Schilder
being powers of 2. Yes, the 6+2 is a bit surprising. I have no explanation for the observation. It just seems a good argument for "do not trust what you believe, gather facts". And to try things that seem non-obvious - just to be sure. Best regards, ===== Frank Schilde

Re: [ceph-users] What if etcd is lost

2019-07-15 Thread Frank Schilder
node) against a running cluster with mons in quorum. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Oscar Segarra Sent: 15 July 2019 11:55 To: ceph-users Subject: [ceph-users] What if

Re: [ceph-users] cephfs snapshot scripting questions

2019-07-19 Thread Frank Schilder
snapshots due to a not-yet-fixed bug; see this thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg54233.html Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Robert Ruge Sen
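For readers unfamiliar with the mechanism being scripted here: CephFS snapshots are created and removed through the hidden .snap directory of any folder; a sketch with a hypothetical mount point and directory name:

    mkdir /mnt/cephfs/groupdir/.snap/daily_2019-07-19   # create a snapshot
    rmdir /mnt/cephfs/groupdir/.snap/daily_2019-07-19   # remove it again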

Re: [ceph-users] Error Mounting CephFS

2019-08-07 Thread Frank Schilder
On CentOS 7, the "secretfile" mount option requires installation of ceph-fuse. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Yan, Zheng Sent: 07 August 2019 10:10:19
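A minimal sketch of the kind of kernel mount referred to here, with hypothetical monitor address, user name and secret file path:

    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=cephfs_user,secretfile=/etc/ceph/cephfs_user.secret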

Re: [ceph-users] Failure to start ceph-mon in docker

2019-08-29 Thread Frank Schilder
  password: "!"
  comment: "ceph-container daemons"
  uid: 167
  group: ceph
  shell: "/sbin/nologin"
  home: "/var/lib/ceph"
  create_home: no
  local: yes
  state: present
  system: yes

This should error out if a group and user ceph already exist with IDs

Re: [ceph-users] Can't create erasure coded pools with k+m greater than hosts?

2019-10-24 Thread Frank Schilder
e and what compromises are you willing to make with regards to sleep and sanity. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Salsa Sent: 21 October 2019 17:31 To: Martin Verges Cc: ceph-use

Re: [ceph-users] Erasure coded pools on Ambedded - advice please

2019-10-24 Thread Frank Schilder
ing. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of John Hearns Sent: 24 October 2019 08:21:47 To: ceph-users Subject: [ceph-users] Erasure coded pools on Ambedded - advice please I am se

Re: [ceph-users] v13.2.7 osds crash in build_incremental_map_msg

2019-12-04 Thread Frank Schilder
Is this issue now a no-go for updating to 13.2.7 or are there only some specific unsafe scenarios? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Dan van der Ster Sent: 03 December

Re: [ceph-users] Beginner questions

2020-01-17 Thread Frank Schilder
worst-case situations. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Bastiaan Visser Sent: 17 January 2020 06:55:25 To: Dave Hall Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-20 Thread Frank Schilder
disk_activate" && -n "${OSD_DEVICE}" ]] ; then
    echo "Disabling write cache on ${OSD_DEVICE}"
    /usr/sbin/smartctl -s wcache=off "${OSD_DEVICE}"
fi

This works for both SAS and SATA drives and ensures that write cache is disabled before an OSD daemon st
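To verify the result, the cache state can be queried before and after; a sketch with a placeholder device name:

    /usr/sbin/smartctl -g wcache /dev/sdX   # report write cache state (SAS and SATA)
    hdparm -W /dev/sdX                      # SATA alternative: show write-caching flag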

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-21 Thread Frank Schilder
the OSD is started. Why and how else would one want this to happen? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-21 Thread Frank Schilder
the setting while the OSD is down. During benchmarks on raw disks I just switched the cache on and off as needed. There was nothing running on the disks, and the fio benchmark is destructive anyway. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14
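The kind of raw-disk test referred to is typically something like the following; note that it writes directly to the device and destroys its contents (the device name is a placeholder):

    fio --name=sync-write --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based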