Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-21 Thread Frank Schilder
the setting while the OSD is down. During benchmarks on raw disks I just switched cache on and off when I needed. There was nothing running on the disks and the fio benchmark is destructive anyway. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-21 Thread Frank Schilder
the OSD is started. Why and how else would one want this to happen? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-20 Thread Frank Schilder
disk_activate" && -n "${OSD_DEVICE}" ]] ; then echo "Disabling write cache on ${OSD_DEVICE}" /usr/sbin/smartctl -s wcache=off "${OSD_DEVICE}" fi This works for both, SAS and SATA drives and ensures that write cache is disabled before an OSD daemon st

Re: [ceph-users] Beginner questions

2020-01-17 Thread Frank Schilder
worst-case situations. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Bastiaan Visser Sent: 17 January 2020 06:55:25 To: Dave Hall Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-

Re: [ceph-users] v13.2.7 osds crash in build_incremental_map_msg

2019-12-04 Thread Frank Schilder
Is this issue now a no-go for updating to 13.2.7 or are there only some specific unsafe scenarios? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Dan van der Ster Sent: 03 December

Re: [ceph-users] Erasure coded pools on Ambedded - advice please

2019-10-24 Thread Frank Schilder
ing. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of John Hearns Sent: 24 October 2019 08:21:47 To: ceph-users Subject: [ceph-users] Erasure coded pools on Ambedded - advice please I am se

Re: [ceph-users] Can't create erasure coded pools with k+m greater than hosts?

2019-10-24 Thread Frank Schilder
e and what compromises are you willing to make with regards to sleep and sanity. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Salsa Sent: 21 October 2019 17:31 To: Martin Verges Cc: ceph-use

Re: [ceph-users] Failure to start ceph-mon in docker

2019-08-29 Thread Frank Schilder
ord: "!" comment: "ceph-container daemons" uid: 167 group: ceph shell: "/sbin/nologin" home: "/var/lib/ceph" create_home: no local: yes state: present system: yes This should err if a group and user ceph already exist with IDs

Re: [ceph-users] Error Mounting CephFS

2019-08-07 Thread Frank Schilder
On Centos7, the option "secretfile" requires installation of ceph-fuse. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Yan, Zheng Sent: 07 August 2019 10:10:19
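
For reference, a kernel-client mount using that option looks roughly like this (a sketch; the monitor address, client name and file paths are placeholders — the secretfile is read by the userspace mount helper, which is why a missing package breaks the mount):

  # Kernel CephFS mount with the key stored in a file instead of on the command line.
  mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs \
      -o name=cephfs_user,secretfile=/etc/ceph/cephfs_user.secret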

[ceph-users] Adding block.db afterwards

2019-07-26 Thread Frank Rothenstein
or path I tried different versions Any help an this would be appreciated. Frank Frank Rothenstein  Systemadministrator Fon: +49 3821 700 125 Fax: +49 3821 700 190Internet: www.bodden-kliniken.de E-Mail: f.rothenst...@bodden-kliniken.de _ BODD

Re: [ceph-users] cephfs snapshot scripting questions

2019-07-19 Thread Frank Schilder
pshots due to a not yet fixed bug; see this thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg54233.html Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Robert Ruge Sen

Re: [ceph-users] What if etcd is lost

2019-07-15 Thread Frank Schilder
node) against a running cluster with mons in quorum. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Oscar Segarra Sent: 15 July 2019 11:55 To: ceph-users Subject: [ceph-users] What if

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-11 Thread Frank Schilder
being powers of 2. Yes, the 6+2 is a bit surprising. I have no explanation for the observation. It just seems a good argument for "do not trust what you believe, gather facts". And to try things that seem non-obvious - just to be sure. Best regards, = Frank Schilde

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-11 Thread Frank Schilder
fig, kernel parameters etc, etc. One needs to test what one has. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Lars Marowsky-Bree Sent: 11 July 2019 10:14:04 To: ceph-users@lists.ceph.com

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-09 Thread Frank Schilder
integer. alloc_size should be an integer multiple of object_size/k. = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Frank Schilder Sent: 09 July 2019 09:22 To: Nathan Fish; ceph-users@lists.ceph.com Subject: Re: [ceph-users] What&#

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-09 Thread Frank Schilder
_size=object_size/k. Coincidentally, for spinning disks this also seems to imply best performance. If this is wrong, maybe a disk IO expert can provide a better explanation as a guide for EC profile choices? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, ru
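
A small worked example of that rule (illustrative numbers only; 4 MiB is the default RADOS object size):

  # chunk_size = object_size / k
  object_size=$((4 * 1024 * 1024))
  for k in 2 4 6 8; do
      echo "k=$k  chunk_size=$(( object_size / k )) bytes"
  done
  # k=2 -> 2097152, k=4 -> 1048576, k=8 -> 524288 divide evenly;
  # k=6 -> 699050 (rounded down), i.e. not an integer multiple of a 64 KiB alloc size.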

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-08 Thread Frank Schilder
e works well for the majority of our use cases. We can still build small expensive pools to accommodate special performance requests. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of David Sent

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-06-21 Thread Frank Schilder
Dear Yan, Zheng, does mimic 13.2.6 fix the snapshot issue? If not, could you please send me a link to the issue tracker? Thanks and best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 20 May 2019

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Frank Schilder
Typo below, I meant "I doubled bluestore_compression_min_blob_size_hdd ..." ____ From: Frank Schilder Sent: 20 June 2019 19:02 To: Dan van der Ster; ceph-users Subject: Re: [ceph-users] understanding the bluestore blob, chunk and compression para
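
For reference, that parameter (and its max counterpart) can be changed at runtime roughly like this (values are illustrative; already-written blobs are not rewritten, and a ceph.conf entry plus restart makes the change permanent):

  ceph tell osd.\* injectargs '--bluestore_compression_min_blob_size_hdd=262144'
  ceph tell osd.\* injectargs '--bluestore_compression_max_blob_size_hdd=1048576'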

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Frank Schilder
replicated pools, the aggregated IOPs might be heavily affected. I have, however, no data on that case. Hope that helps, Frank = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Dan van der Ster Sent: 20

Re: [ceph-users] ceph fs: stat fails on folder

2019-06-17 Thread Frank Schilder
Please ignore the message below, it has nothing to do with ceph. Sorry for the spam. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Frank Schilder Sent: 17 June 2019 20:33 To: ceph

[ceph-users] ceph fs: stat fails on folder

2019-06-17 Thread Frank Schilder
stable) I can't see anything unusual in the logs or health reports. Thanks for your help! ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14

[ceph-users] Pool configuration for RGW on multi-site cluster

2019-06-17 Thread Frank Schilder
ng crush rules to adjust locations of pools, etc. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Frank Yu
nly be providing CephFS, fairly large > > files, and will use erasure encoding. > > > > many thanks for any advice, > > > > Jake
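
The 4% rule from the subject line works out roughly as follows (illustrative arithmetic only):

  # block.db at 4% of the data device (sizes in GB):
  for hdd_gb in 4000 8000 12000; do
      echo "${hdd_gb} GB HDD -> $(( hdd_gb * 4 / 100 )) GB DB/WAL"
  done
  # 4000 -> 160, 8000 -> 320, 12000 -> 480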

Re: [ceph-users] cephfs causing high load on vm, taking down 15 min later another cephfs vm

2019-05-23 Thread Frank Schilder
gh-network-load scheduled tasks on your machines (host or VM) or somewhere else affecting relevant network traffic (backups etc?) Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Marc Roos Se

Re: [ceph-users] Default min_size value for EC pools

2019-05-21 Thread Frank Schilder
that cannot be questioned by a single OSD trying to mark itself as in. At least the only context I have heard of OSD flapping was in connection to 2/1-pools. I have never seen such a report for, say, 3/2 pools. Am I overlooking something here? Best regards, = Frank Schilder AIT Risø

Re: [ceph-users] Default min_size value for EC pools

2019-05-20 Thread Frank Schilder
Dear Maged, thanks for elaborating on this question. Is there already information in which release this patch will be deployed? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] Default min_size value for EC pools

2019-05-20 Thread Frank Schilder
10/18/surviving-a-ceph-cluster-outage-the-hard-way/ . You will easily find more. The deeper problem here is called "split-brain" and there is no real solution to it except to avoid it at all cost. Best regards, ===== Frank Schilder AIT Risø Campus Bygni

Re: [ceph-users] Default min_size value for EC pools

2019-05-20 Thread Frank Schilder
an interesting feature? Is there any reason for not remapping all PGs (if possible) prior to starting recovery? It would eliminate the lack of redundancy for new writes (at least for new objects). Thanks again and best regards, ===== Frank Schilder AIT Risø Campus Bygni

Re: [ceph-users] Default min_size value for EC pools

2019-05-20 Thread Frank Schilder
sion. Either min_size=k is safe or not. If it is not, it should never be used anywhere in the documentation. I hope I marked my opinions and hypotheses clearly and that the links are helpful. If anyone could shed some light on as to why exactly min_size=k+1 is important, I would be grateful. Best r
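
For a k=4, m=2 profile the distinction discussed above is just the difference between these two settings (pool name is a placeholder):

  ceph osd pool get ec-pool min_size        # inspect the current value
  ceph osd pool set ec-pool min_size 5      # k+1: IO stops once the last redundant shard is gone
  # min_size 4 (= k) would keep accepting writes with zero surviving redundancy.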

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-20 Thread Frank Schilder
Dear Yan, thank you for taking care of this. I removed all snapshots and stopped snapshot creation. Please keep me posted. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 20 May 2019 13:34:07

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-18 Thread Frank Schilder
00 PGs per OSD. I actually plan to give the cephfs a bit higher share for performance reasons. It's on the list. Thanks again and have a good weekend, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Stefan Kooman Sent: 18 May 201

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-18 Thread Frank Schilder
versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} Sorry, I should have checked this first. Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-18 Thread Frank Schilder
0b~1,10d~1,10f~1,111~1] The relevant pools are con-fs-meta and con-fs-data. Best regards, Frank = Frank Schilder AIT Risø Campus Bygning 109, rum S14 [root@ceph-08 ~]# cat /etc/tuned/ceph/tuned.conf [main] summary=Settings for ceph cluster. Derived from throughput-performance. inc

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-16 Thread Frank Schilder
be, keeping in mind that we are in a pilot production phase already and need to maintain integrity of user data? Is there any counter showing if such operations happened at all? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-16 Thread Frank Schilder
single-file-read load on it. I hope it doesn't take too long. Thanks for your input! = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 16 May 2019 09:35 To: Frank Schilder Subject: Re: [ceph-users] mimic

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-15 Thread Frank Schilder
relevant if multiple MDS daemons are active on a file system. = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 16 May 2019 05:50 To: Frank Schilder Cc: Stefan Kooman; ceph-users@lists.ceph.com Subject: Re: [ceph

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-15 Thread Frank Schilder
"time": "2019-05-15 11:38:36.511381", "event": "header_read" }, { "time": "2019-05-15 11:38:36.511383", "event": "throttled"

[ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-05-13 Thread Frank Schilder
ser running the benchmark. Only IO to particular files/a particular directory stopped, so this problem seems to remain isolated. Also, the load on the servers was not high during the test. The fs remained responsive to other users. Also, the MDS daemons never crashed. There was no fail-over e

Re: [ceph-users] Checking cephfs compression is working

2019-03-26 Thread Frank Schilder
helps, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Rhian Resnick Sent: 16 November 2018 16:58:04 To: ceph-users@lists.ceph.com Subject: [ceph-users] Checking cephfs compression is working How do you confirm

Re: [ceph-users] v14.2.0 Nautilus released

2019-03-25 Thread Frank Yu
forgive me, it's my mistake - - On Sat, Mar 23, 2019 at 4:28 PM Frank Yu wrote: > Hi guys, > > I have try to setup a cluster with this version, I found the mgr > prometheus metrics has been changed a lot compared with version 13.2.x. > e.g: there is no ceph_mds_* related

Re: [ceph-users] v14.2.0 Nautilus released

2019-03-23 Thread Frank Yu
gt;> ceph-users mailing list >> > >> ceph-users@lists.ceph.com >> > >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com >> > > ___ >> > > ceph-users mailing list >> > > ceph-users@lists.ceph.com >> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com >> > >> > >> > ___ >> > ceph-users mailing list >> > ceph-users@lists.ceph.com >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com >> ___ >> ceph-users mailing list >> ceph-users@lists.ceph.com >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com >> > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Regards Frank Yu ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] bluestore compression enabled but no data compressed

2019-03-16 Thread Frank Schilder
Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: ceph-users on behalf of Ragan, Tj (Dr.) Sent: 14 March 2019 11:22:07 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] bluestore compression enabled but no

Re: [ceph-users] Possible data damage: 1 pg inconsistent

2018-12-21 Thread Frank Ritchie
s return zero > and this will lead to the error message. > > I have set nodeep-scrub and i am waiting for 12.2.11. > > Thanks > Christoph > > On Fri, Dec 21, 2018 at 03:23:21PM +0100, Hervé Ballans wrote: > > Hi Frank, > > > > I encounter exactly the same iss

[ceph-users] Possible data damage: 1 pg inconsistent

2018-12-18 Thread Frank Ritchie
daily? Can the errors possibly be due to deep scrubbing too aggressively? I realize these errors indicate potential failing drives but I can't replace a drive daily. thx Frank

Re: [ceph-users] pg 17.36 is active+clean+inconsistent head expected clone 1 missing?

2018-11-15 Thread Frank Yu
7.36 deep-scrub 1 errors

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-23 Thread Frank Schilder
(see question marks in table above, what is the resulting mode?). What I would like to do is enable compression on all OSDs, enable compression on all data pools and disable compression on all meta data pools. Data and meta data pools might share OSDs in the future. The above ta

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-19 Thread Frank Schilder
tore_compressed_original=0.04 or bluestore_compressed_allocated/bluestore_compressed_original=0.5? The second ratio does not look too impressive given the file contents. 4) Is there any way to get uncompressed data compressed as a background task like scrub? If you have the time to look at thes
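
The ratios quoted above come from per-OSD counters that can be read from the admin socket, roughly like this (osd.0 is a placeholder; run on the host carrying that OSD):

  ceph daemon osd.0 perf dump | grep -E 'bluestore_compressed'
  # bluestore_compressed            - compressed bytes stored
  # bluestore_compressed_allocated  - space allocated for the compressed data
  # bluestore_compressed_original   - original size of the data that was compressed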

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-12 Thread Frank Schilder
ou know. Thanks and have a nice weekend, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: David Turner Sent: 12 October 2018 16:50:31 To: Frank Schilder Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] bluestore compression enabled but n

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-12 Thread Frank Schilder
ssion happening. If you know about something else than "ceph osd pool set" - commands, please let me know. Best regards, ===== Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: David Turner Sent: 12 October 2018 15:47:20 To:
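
For completeness, the "ceph osd pool set" style compression settings referred to above look like this (pool name and values are examples only):

  ceph osd pool set mypool compression_algorithm snappy
  ceph osd pool set mypool compression_mode aggressive
  ceph osd pool set mypool compression_required_ratio 0.875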

Re: [ceph-users] bluestore compression enabled but no data compressed

2018-10-12 Thread Frank Schilder
possibly provide a source or sample commands? Thanks and best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: David Turner Sent: 09 October 2018 17:42 To: Frank Schilder Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users

Re: [ceph-users] rados rm objects, still appear in rados ls

2018-09-28 Thread Frank de Bot (lists)
John Spray wrote: > On Fri, Sep 28, 2018 at 2:25 PM Frank (lists) wrote: >> >> Hi, >> >> On my cluster I tried to clear all objects from a pool. I used the >> command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench >> cleanup doe

[ceph-users] rados rm objects, still appear in rados ls

2018-09-28 Thread Frank (lists)
g the object is in, but the problem persists. What causes this? I use Centos 7.5 with mimic 13.2.2 regards, Frank de Bot
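
For reference, the commands involved (pool name 'bench' as in the post; shown only as a sketch, since the stale listing itself is the open question here):

  rados -p bench cleanup                           # removes objects written by 'rados bench'
  rados -p bench ls | xargs -n1 rados -p bench rm  # remove remaining objects one by one
  rados df                                         # per-pool object counts, to cross-check 'ls'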

[ceph-users] bluestore compression enabled but no data compressed

2018-09-18 Thread Frank Schilder
00 All as it should be, except for compression. Am I overlooking something? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14

[ceph-users] bluestore_prefer_deferred_size

2018-09-15 Thread Frank Ritchie
Hi all, I was wondering if anyone out there has increased the value of bluestore_prefer_deferred_size to effectively defer all writes. If so, did you experience any unforeseen side effects? thx Frank

Re: [ceph-users] [need your help] How to Fix unclean PG

2018-09-15 Thread Frank Yu
"ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 79 } } On Sat, Sep 15, 2018 at 10:45 PM Paul Emmerich wrote: > Well, that's not a lot of information to troubleshoot such a problem. > > Please post the output of the following command

[ceph-users] [need your help] How to Fix unclean PG

2018-09-15 Thread Frank Yu
n MB/s. Is there any way to fix the unclean pg quickly? -- Regards Frank Yu

[ceph-users] Mimic 13.2.1 released date?

2018-07-13 Thread Frank Yu
Hi there, Any plan for the release of 13.2.1? -- Regards Frank Yu

Re: [ceph-users] FreeBSD Initiator with Ceph iscsi

2018-06-30 Thread Frank de Bot (lists)
I session at any time. Would any of those 2 options be possible to configure on the ceph iscsi gateway solution? Regards, Frank Jason Dillaman wrote: > Conceptually, I would assume it should just work if configured correctly > w/ multipath (to properly configure the ALUA settings on the LUNs).

Re: [ceph-users] FreeBSD Initiator with Ceph iscsi

2018-06-28 Thread Frank (lists)
On Tue, Jun 26, 2018 at 6:06 PM Frank de Bot (lists) mailto:li...@searchy.net>> wrote: Hi, In my test setup I have a ceph iscsi gateway (configured as in http://docs.ceph.com/docs/luminous/rbd/iscsi-overview/ ) I would like to use this with a FreeBSD (11.1) initiat

[ceph-users] FreeBSD Initiator with Ceph iscsi

2018-06-26 Thread Frank de Bot (lists)
with this gateway setup? Regards, Frank

Re: [ceph-users] Frequent slow requests

2018-06-19 Thread Frank de Bot (lists)
Frank (lists) wrote: > Hi, > > On a small cluster (3 nodes) I frequently have slow requests. When > dumping the inflight ops from the hanging OSD, it seems it doesn't get a > 'response' for one of the subops. The events always look like: > I've done some

[ceph-users] Frequent slow requests

2018-06-14 Thread Frank (lists)
ommit_rec from 18"     }             ] The OSD id's are not the same. Looking at osd.20, the OSD process runs, it accepts requests ('ceph tell osd.20 bench' runs fine). When I restart the process for the OSD, the requests is completed. I could no

[ceph-users] Expected performane with Ceph iSCSI gateway

2018-05-28 Thread Frank (lists)
does iscsi perform compared to krbd? I've already done some benchmarking, but it didn't perform anywhere near what krbd is doing. krbd easily saturates the public network, iscsi about 75%. tcmu-runner is running during a benchmark at a load of 50 to 75% on the (owner) target Re

Re: [ceph-users] Data recovery after loosing all monitors

2018-05-22 Thread Frank Li
Just having reliable hardware isn’t enough for monitor failures. I’ve had a case where a wrongly typed command brought down all three monitors via segfault, with no way to bring them back since the command caused the monitor database to be corrupt. I wish there was a checkpoint implemented in the

[ceph-users] performance tuning

2018-04-23 Thread Frank Ritchie
and error? thx Frank

[ceph-users] BlueStore questions

2018-03-03 Thread Frank Ritchie
? Would love to hear some actual numbers from users. thx Frank

[ceph-users] planning a new cluster

2018-02-26 Thread Frank Ritchie
here https://ceph.com/pgcalc/) along with pools for Kubernetes and RGW. 2. Define a single block storage pool (to be used by OpenStack and Kubernetes) and an object pool (for RGW). I am not sure how much space each component will require at this time. thx Frank

[ceph-users] OSD stuck in booting state while monitor show it as been up

2018-02-02 Thread Frank Li
Running ceph 12.2.2 on CentOS 7.4. The cluster was in healthy condition until a command caused all the monitors to crash. Applied a private build to fix the issue (thanks!) https://tracker.ceph.com/issues/22847 the monitors are all started, and all the OSDs are reported as being up in ceph

Re: [ceph-users] Help ! how to recover from total monitor failure in lumnious

2018-02-02 Thread Frank Li
Thanks, I’m downloading it right now -- Efficiency is Intelligent Laziness From: "ceph.nov...@habmalnefrage.de" Date: Friday, February 2, 2018 at 12:37 PM To: "ceph.nov...@habmalnefrage.de" Cc: Frank Li , "ceph-users@lists.ceph.com" Subject: Aw: Re: [ceph-use

Re: [ceph-users] Help ! how to recover from total monitor failure in lumnious

2018-02-02 Thread Frank Li
Sure, please let me know where to get and run the binaries. Thanks for the fast response ! -- Efficiency is Intelligent Laziness On 2/2/18, 10:31 AM, "Sage Weil" wrote: On Fri, 2 Feb 2018, Frank Li wrote: > Yes, I was dealing with an issue where OSD are not peerin

Re: [ceph-users] Help ! how to recover from total monitor failure in lumnious

2018-02-02 Thread Frank Li
b47f9427c6c97e2144b094b7e5ba) luminous (stable) -- Efficiency is Intelligent Laziness On 2/2/18, 9:45 AM, "Sage Weil" wrote: On Fri, 2 Feb 2018, Frank Li wrote: > Hi, I ran the ceph osd force-create-pg command in luminious 12.2.2 to recover a failed pg, and it >

[ceph-users] Help ! how to recover from total monitor failure in lumnious

2018-02-02 Thread Frank Li
Hi, I ran the ceph osd force-create-pg command in luminous 12.2.2 to recover a failed pg, and it instantly caused all of the monitors to crash. Is there any way to revert to an earlier state of the cluster? Right now, the monitors refuse to come up; the error message is as follows: I’ve file

Re: [ceph-users] Rename iscsi target_iqn

2017-11-20 Thread Frank Brendel
Am 20.11.2017 um 15:10 schrieb Jason Dillaman: Recommended way to do what, exactly? If you are attempting to rename the target while keeping all other settings, at step (3) you could use "rados get" to get the current config, modify it, and then "rados put" to uploaded before continuing to step

Re: [ceph-users] Rename iscsi target_iqn

2017-11-20 Thread Frank Brendel
e gateway.conf from rbd pool 'rados -p rbd rm gateway.conf' 4. Start the iSCSI gateway on all nodes 'systemctl start rbd-target-api' Is this the recommended way? Thank you Frank
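
A sketch of the edit-in-place variant suggested in the reply, rather than deleting the object (object and pool name as in the thread; not an officially documented procedure, so keep a backup):

  systemctl stop rbd-target-api                      # on all gateway nodes
  rados -p rbd get gateway.conf gateway.conf.json    # fetch the current config object
  cp gateway.conf.json gateway.conf.json.bak         # keep a backup
  # edit gateway.conf.json and change the target IQN
  rados -p rbd put gateway.conf gateway.conf.json
  systemctl start rbd-target-api                     # on all gateway nodes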

[ceph-users] Rename iscsi target_iqn

2017-11-17 Thread Frank Brendel
Hi, how can I rename an iscsi target_iqn? And where is the configuration that I made with gwcli stored? Thank you Frank

Re: [ceph-users] ceph auth doesn't work on cephfs?

2017-10-12 Thread Frank Yu
John, I tried to write some data to the newly created files, and it failed, just as you said. Thanks very much. On Thu, Oct 12, 2017 at 6:20 PM, John Spray wrote: > On Thu, Oct 12, 2017 at 11:12 AM, Frank Yu wrote: > > Hi, > > I have a ceph cluster with three nodes, and I have a c

[ceph-users] ceph auth doesn't work on cephfs?

2017-10-12 Thread Frank Yu
n on pool cephfs_data, does this mean I shouldn't be able to write data under the mountpoint /mnt/ceph/? Or am I wrong? thanks -- Regards Frank Yu
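
A capability of the kind being discussed might be created like this (a sketch; the client name and pool name are placeholders) — the MDS cap allows file creation, but data writes fail once the client flushes to the read-only data pool:

  ceph auth get-or-create client.demo \
      mon 'allow r' \
      mds 'allow rw' \
      osd 'allow r pool=cephfs_data'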

Re: [ceph-users] Error with ceph to cloudstack integration.

2017-03-08 Thread frank
Hi, We have made sure that the key, ceph user, and ceph admin keys are correct. Could you let us know if there is any other possibility that would mess up the integration? Regards, Frank On 03/06/2017 01:22 PM, Wido den Hollander wrote: On 6 March 2017 at 6:26, frank wrote: Hi, We have

[ceph-users] Error with ceph to cloudstack integration.

2017-03-05 Thread frank
and jewel as its ceph version. Any help will be greatly appreciated. Regards, Frank

[ceph-users] Ceph server with errors while deployment -- on jewel

2017-02-13 Thread frank
se let me know the details about the ceph installation steps that I should follow to troubleshoot this issue. Regards, Frank

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
"bucket_index_max_shards": 0, "read_only": "false" } ], "placement_targets": [ { "name": "default-placement", "tags": [] } ], "defau

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
up set --rgw-zonegroup=default <mailto:owass...@redhat.com> Date: 26 July 2016 at 12:32:58 To: Frank Enderle <mailto:frank.ende...@anamica.de> Cc: ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>, Shilpa Manjarabad Jagannath <mailto:smanj...@redhat.com> Subj

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
732357 Geschäftsführer: Yvonne Holzwarth, Frank Enderle From: Orit Wasserman <mailto:owass...@redhat.com> Date: 26 July 2016 at 12:13:21 To: Frank Enderle <mailto:frank.ende...@anamica.de> Cc: ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>, Shilpa Manjarabad Jagan

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
} ], "placement_targets": [ { "name": "default-placement", "tags": [] } ], "default_placement": "default-placement", "realm_id": "" } and radosgw-admin -

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-26 Thread Frank Enderle
Heppacher Str. 39 71404 Korb Telefon: +49 7151 1351565 0 Telefax: +49 7151 1351565 9 E-Mail: frank.ende...@anamica.de Internet: www.anamica.de Handelsregister: AG Stuttgart HRB 732357 Geschäftsführer: Yvonne Holzwarth, Frank Enderle From: Orit Wasserman <mailto:owass...@redhat.com> Da

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-25 Thread Frank Enderle
It most certainly looks very much like the same problem.. Is there a way to patch the configuration by hand to get the cluster back in a working state? -- From: Shilpa Manjarabad Jagannath <mailto:smanj...@redhat.com> Date: 25 July 2016 at 10:34:42 To: Frank Enderle <mailto:f

Re: [ceph-users] Problem with RGW after update to Jewel

2016-07-25 Thread Frank Enderle
t-placement", "val": { "index_pool": ".rgw.buckets.index", "data_pool": ".rgw.buckets", "data_extra_pool": ".rgw.buckets.extra", "index

[ceph-users] Problem with RGW after update to Jewel

2016-07-24 Thread Frank Enderle
mixed up with the zone/zonegroup stuff during the update. Would somebody be able to take a look at this? I'm happy to provide all the required files; just name them. Thanks, Frank

Re: [ceph-users] mds not starting ?

2015-09-21 Thread Frank, Petric (Petric)
Hello John, that was the info I missed (both - create pools and fs). Works now. Thank you very much. Kind regards Petric > -Original Message- > From: John Spray [mailto:jsp...@redhat.com] > Sent: Montag, 21. September 2015 14:41 > To: Frank, Petric (Petric) >
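
For anyone hitting the same thing, the missing step amounts to creating the pools and the filesystem before expecting the MDS to leave standby (pool names and PG counts are examples):

  ceph osd pool create cephfs_data 128
  ceph osd pool create cephfs_metadata 32
  ceph fs new cephfs cephfs_metadata cephfs_data
  ceph mds stat      # should now show the daemon going active instead of standby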

[ceph-users] mds not starting ?

2015-09-21 Thread Frank, Petric (Petric)
Hello, I'm facing a problem where the mds does not seem to start. I started the mds in debug mode "ceph-mds -f -i storage08 --debug_mds 10", which outputs in the log: -- cut - 2015-09-21 14:12:14.313534 7ff47983d780 0 ceph version 0.94.3 (95cefea9fd9ab740

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-30 Thread Frank Brendel
If you don't need LACP you could use round-robin bonding mode. With 4x1Gbit NICs you can get a bandwidth of 4Gbit per TCP connection. Either create trunks on stacked switches (e.g. Avaya) or use single switches (e.g. HP 1810-24) and a locally managed MAC address per node/bond. The latter is some
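
For illustration, an ad-hoc round-robin bond with iproute2 looks roughly like this (interface names are placeholders; make it persistent with your distribution's network scripts, and note balance-rr needs the switch-side setup described above):

  ip link add bond0 type bond mode balance-rr miimon 100
  ip link set eth0 down; ip link set eth0 master bond0
  ip link set eth1 down; ip link set eth1 master bond0
  ip link set bond0 up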

Re: [ceph-users] Introducing "Learning Ceph" : The First ever Book on Ceph

2015-02-13 Thread Frank Yu
Specialist , Storage Platforms > CSC - IT Center for Science, > Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland > mobile: +358 503 812758 > tel. +358 9 4572001 > fax +358 9 4572302 > http://www.csc.fi/ > **** >

Re: [ceph-users] Introducing "Learning Ceph" : The First ever Book on Ceph

2015-02-12 Thread Frank Yu
_______ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > -- Regards Frank Yu ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] RBD backup and snapshot

2015-01-23 Thread Frank Yu
-- Regards Frank Yu

Re: [ceph-users] Official CentOS7 support

2014-12-02 Thread Frank Even
On Tue, Dec 2, 2014 at 12:42 PM, Gregory Farnum wrote: > > On Tue, Dec 2, 2014 at 10:55 AM, Ken Dreyer wrote: > > On 12/02/2014 10:59 AM, Gregory Farnum wrote: > >> We aren't currently doing any of the ongoing testing which that page > >> covers on CentOS 7. I think that's because it's going to f

[ceph-users] Official CentOS7 support

2014-12-02 Thread Frank Even
ons/ Its absence is currently causing great amounts of consternation in a discussion about using and deploying Ceph in an environment I deal with, and I'm curious if there are any particular reasons it's absent from the list. Thanks, Frank

[ceph-users] ERROR: failed to create bucket: XmlParseFailure

2014-11-26 Thread Frank Li
Hi, can anyone help me resolve the following error? Thanks a lot. rest-bench --api-host=172.20.10.106 --bucket=test --access-key=BXXX --secret=z --protocol=http --uri_style=path --concurrent-ios=3 --block-size=4096 write host=172.20.10.106 ERROR: failed to c

[ceph-users] rest-bench error : XmlParseFailure

2014-11-22 Thread Frank Li
1. Is there anyone who has the answer for this error? 2. rest-bench --api-host=s3-website-us-east-1.amazonaws.com --bucket=frank-s3-test --access-key=XXX --secret=IzuCXXXDDObLU --block-size=8 --protocol=http --uri_style=path write 3. host=s3

[ceph-users] rest-bench ERROR: failed to create bucket: XmlParseFailure

2014-11-21 Thread Frank Li
Hi, can anyone help me resolve the following error? Thanks a lot. rest-bench --api-host=172.20.10.106 --bucket=test --access-key=BXXX --secret=z --protocol=http --uri_style=path --concurrent-ios=3 --block-size=4096 write host=172.20.10.106 ERROR: failed to c

  1   2   >