Re: [ceph-users] Delete unused RBD volume takes to long.

2017-07-16 Thread David Turner
In Hammer, ceph does not keep track of which objects exist and which ones don't. The delete is trying to delete every possible object in the rbd. In Jewel, the object_map keeps track of which objects exist, and the delete of that rbd would be drastically faster. If you wrote 1PB of data to the rbd, i…
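
For reference, a minimal sketch of checking for and enabling the object map on an existing image under Jewel (pool "rbd" and image "myimage" are placeholders, not names from the thread):

    # show which features are enabled on the image (look for object-map)
    rbd info rbd/myimage
    # object-map depends on exclusive-lock; fast-diff is optional but usually wanted
    rbd feature enable rbd/myimage exclusive-lock object-map fast-diff
    # rebuild the map so it covers objects written before the feature was turned on
    rbd object-map rebuild rbd/myimage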

Re: [ceph-users] Broken Ceph Cluster when adding new one - Proxmox 5.0 & Ceph Luminous

2017-07-16 Thread Phil Schwarz
On 15/07/2017 at 23:09, Udo Lembke wrote: Hi, On 15.07.2017 16:01, Phil Schwarz wrote: Hi, ... While investigating, I wondered about my config. Question relative to the /etc/hosts file: should I use the private_replication_LAN IPs or the public ones? private_replication_LAN!! And the pve-cluster shou…
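
For illustration only (hostnames and addresses below are invented, not taken from the thread), an /etc/hosts that resolves the node names to the private replication LAN would look something like:

    # private replication / cluster network
    10.10.10.1   pve1
    10.10.10.2   pve2
    10.10.10.3   pve3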

Re: [ceph-users] Broken Ceph Cluster when adding new one - Proxmox 5.0 & Ceph Luminous

2017-07-16 Thread Udo Lembke
Hi, On 16.07.2017 15:04, Phil Schwarz wrote: > ... > Same result, the OSD is known by the node, but not by the cluster. > ... Firewall? Or a mismatch in /etc/hosts or DNS? Udo
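
A few quick checks along those lines (host names are placeholders; adjust to the actual setup):

    # does name resolution agree on every node?
    getent hosts pve-newnode
    # can the new node reach the monitors and the other OSD hosts?
    nc -zv mon1 6789        # monitor port
    nc -zv osdhost1 6800    # OSDs listen in the 6800-7300 range
    # does the cluster itself see the new OSD?
    ceph osd tree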

Re: [ceph-users] Broken Ceph Cluster when adding new one - Proxmox 5.0 & Ceph Luminous

2017-07-16 Thread Phil Schwarz
On 16/07/2017 at 17:02, Udo Lembke wrote: Hi, On 16.07.2017 15:04, Phil Schwarz wrote: ... Same result, the OSD is known by the node, but not by the cluster. ... Firewall? Or a mismatch in /etc/hosts or DNS? Udo OK, - No FW, - No DNS issue at this point. - Same procedure followed with the…

[ceph-users] Systemd dependency cycle in Luminous

2017-07-16 Thread Michael Andersen
Hi all, I recently upgraded two separate ceph clusters from Jewel to Luminous (the OS is Ubuntu Xenial). Everything went smoothly except that on one of the monitors in each cluster I had a problem shutting down/starting up. It seems the systemd dependencies are messed up. I get: systemd[1]: ceph-osd.target…
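
A sketch of how such a cycle can be confirmed (unit names are those shipped by the Ceph packages; the exact output is not reproduced here):

    # systemd reports an "Ordering cycle" and drops one unit to break it
    journalctl -b | grep -i "ordering cycle"
    # inspect how ceph-osd.target is wired into the other ceph units
    systemctl list-dependencies ceph-osd.target
    systemctl show -p After -p Before -p Requires -p WantedBy ceph-osd.target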

[ceph-users] How's cephfs going?

2017-07-16 Thread 许雪寒
Hi, everyone. We intend to use the Jewel version of CephFS, but we don't know its current status. Is it production ready in Jewel? Does it still have lots of bugs? Is it a major focus of current ceph development? And who is using CephFS now?

Re: [ceph-users] How's cephfs going?

2017-07-16 Thread Blair Bethwaite
It works and can reasonably be called "production ready". However, in Jewel there are still some features (e.g. directory sharding, multiple active MDS, and some security constraints) that may limit widespread usage. Also note that userspace client support in e.g. nfs-ganesha and samba is a mixed bag a…
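
If it is useful as a starting point, the MDS layout of a Jewel cluster (which filesystems exist, which MDS is active and which are standby) can be checked with:

    ceph fs ls
    ceph mds stat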

[ceph-users] Re: How's cephfs going?

2017-07-16 Thread 许雪寒
Hi, thanks for the quick reply :-) May I ask which company you are with? I'm asking because we are collecting information on CephFS usage as the basis for our decision on whether to use it. And also, how are you using it? Are you using a single MDS, in the so-called active-standby mode? And…

Re: [ceph-users] Re: How's cephfs going?

2017-07-16 Thread Blair Bethwaite
I work at Monash University. We are using active-standby MDS. We don't yet have it in full production, as we need some of the newer Luminous features before we can roll it out more broadly; however, we are moving towards letting a subset of users on (just slowly ticking off related work like putting…

[ceph-users] Long OSD restart after upgrade to 10.2.9

2017-07-16 Thread Anton Dmitriev
Hi, all! After upgrading from 10.2.7 to 10.2.9 I see that restarting OSDs with 'restart ceph-osd id=N' or 'restart ceph-osd-all' takes about 10 minutes to get an OSD from DOWN to UP. The situation is the same on all 208 OSDs across 7 servers. OSD start is also very slow after rebooting the servers. Before up…

Re: [ceph-users] Long OSD restart after upgrade to 10.2.9

2017-07-16 Thread Irek Fasikhov
Hi, Anton. You need to run the OSD with debug_ms = 1/1 and debug_osd = 20/20 for detailed information. 2017-07-17 8:26 GMT+03:00 Anton Dmitriev : > Hi, all! > > After upgrading from 10.2.7 to 10.2.9 I see that restarting osds by > 'restart ceph-osd id=N' or 'restart ceph-osd-all' takes about 10 m
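
One way to apply those debug levels, either at runtime or persistently so that the next (slow) start is captured (osd.N is a placeholder id):

    # at runtime, for one OSD:
    ceph tell osd.N injectargs '--debug_ms 1/1 --debug_osd 20/20'

    # or in /etc/ceph/ceph.conf before restarting, so the startup itself is logged:
    [osd]
        debug ms = 1/1
        debug osd = 20/20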

Re: [ceph-users] Long OSD restart after upgrade to 10.2.9

2017-07-16 Thread Anton Dmitriev
Thanks for the reply. I restarted an OSD with debug_ms = 1/1 and debug_osd = 20/20. Look at this: 2017-07-17 08:57:52.077481 7f4db319c840 0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-167) detect_feature: extsize is disabled by conf 2017-07-17 09:04:04.345065 7f4db319c840 0 filestore(/var/lib/ceph/o…

Re: [ceph-users] Long OSD restart after upgrade to 10.2.9

2017-07-16 Thread Anton Dmitriev
During startup it consumes ~90% CPU, and strace shows that the OSD process is doing something with LevelDB. Compaction on mount is disabled: r...@storage07.main01.ceph.apps.prod.int.grcc:~$ cat /etc/ceph/ceph.conf | grep compact #leveldb_compact_on_mount = true But with debug_leveldb = 20 I see that compaction is runn…
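
For comparison, the relevant ceph.conf knobs would look roughly like the following (a sketch; whether mount-time compaction is really what is running here is exactly the open question in this thread):

    [osd]
        # commented out, i.e. left at the default (false), as in the config above
        #leveldb_compact_on_mount = true
        # LevelDB debug output, which shows compaction activity in the OSD log
        debug leveldb = 20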