In Hammer, Ceph does not keep track of which objects exist and which ones
don't, so the delete tries to delete every possible object in the RBD image. In
Jewel, the object_map keeps track of which objects exist, and deleting
that RBD image would be drastically faster. If you wrote 1PB of data to the rbd,
i
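(If slow deletes on a Jewel-era cluster are the issue, a minimal sketch of checking and enabling the object-map feature on an image; the pool/image name rbd/myimage is a placeholder, and object-map requires exclusive-lock to be enabled first:)

  # show which features the image currently has
  rbd info rbd/myimage
  # enable exclusive-lock, then object-map (and optionally fast-diff)
  rbd feature enable rbd/myimage exclusive-lock
  rbd feature enable rbd/myimage object-map fast-diff
  # rebuild the map so it covers objects written before the feature was enabled
  rbd object-map rebuild rbd/myimage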
On 15/07/2017 at 23:09, Udo Lembke wrote:
Hi,
On 15.07.2017 16:01, Phil Schwarz wrote:
Hi,
...
While investigating, I wondered about my config:
A question about the /etc/hosts file:
Should I use private_replication_LAN IPs or public ones?
private_replication_LAN!! And the pve-cluster shou
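(A minimal sketch of the usual split, with placeholder subnets: monitor/client traffic on the public network, OSD replication and heartbeats on the private/cluster network, and /etc/hosts resolving hostnames consistently with whichever network the cluster stack expects:)

  # /etc/ceph/ceph.conf
  [global]
  # clients and monitors
  public network = 192.168.1.0/24
  # OSD replication / heartbeats
  cluster network = 10.10.10.0/24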
Hi,
On 16.07.2017 15:04, Phil Schwarz wrote:
> ...
> Same result, the OSD is known by the node, but not by the cluster.
> ...
Firewall? Or a mismatch in /etc/hosts or DNS??
Udo
On 16/07/2017 at 17:02, Udo Lembke wrote:
Hi,
On 16.07.2017 15:04, Phil Schwarz wrote:
...
Same result, the OSD is known by the node, but not by the cluster.
...
Firewall? Or a mismatch in /etc/hosts or DNS??
Udo
OK,
- No FW,
- No DNS issue at this point.
- Same procedure followed with the
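(For the "OSD known by the node but not by the cluster" symptom, a minimal sketch of the checks implied above; node3 and the OSD id 3 are placeholders:)

  # name resolution as each node sees it - run on every node and compare
  getent hosts node3
  # does the cluster map contain the OSD at all?
  ceph osd tree
  ceph -s
  # on systemd-based installs: is the daemon running and talking to the monitors?
  systemctl status ceph-osd@3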
Hi all,
I recently upgraded two separate Ceph clusters from Jewel to Luminous (the OS
is Ubuntu xenial). Everything went smoothly, except that on one of the monitors
in each cluster I had problems shutting down/starting up. It seems the
systemd dependencies are messed up. I get:
systemd[1]: ceph-osd.target
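(A minimal sketch of inspecting the unit wiring on xenial, assuming the stock ceph systemd units: ceph.target, ceph-mon.target/ceph-osd.target and the per-instance ceph-mon@<hostname> unit; the hostname is taken from the local machine:)

  # what does ceph.target pull in on this box?
  systemctl list-dependencies ceph.target
  # is the monitor instance enabled and wired under its target?
  systemctl is-enabled ceph-mon@$(hostname -s)
  systemctl status ceph-mon@$(hostname -s)
  # re-enable it if the upgrade left the instance unlinked
  systemctl enable ceph-mon@$(hostname -s)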
Hi, everyone.
We intend to use the Jewel version of CephFS, but we don't know its status. Is
it production ready in Jewel? Does it still have lots of bugs? Is it a major
focus of current Ceph development? And who is using CephFS now?
It works and can reasonably be called "production ready". However, in
Jewel there are still some features (e.g. directory sharding, multiple
active MDS, and some security constraints) that may limit widespread
usage. Also note that userspace client support in e.g. nfs-ganesha and
samba is a mixed bag a
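(If it helps to see what a Jewel cluster actually reports, a minimal sketch of the status commands; nothing here assumes a particular deployment:)

  # which filesystems exist and which pools back them
  ceph fs ls
  # MDS state: active vs standby daemons
  ceph mds stat
  # overall cluster status, including the fsmap line
  ceph -s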
Hi, thanks for the quick reply :-)
May I ask which company you are at? I'm asking because we are collecting
CephFS usage information as the basis of our judgement about whether to use
CephFS. Also, how are you using it? Are you using a single MDS, the so-called
active-standby mode? And
I work at Monash University. We are using active-standby MDS. We don't
yet have it in full production as we need some of the newer Luminous
features before we can roll it out more broadly, however we are moving
towards letting a subset of users on (just slowly ticking off related
work like putting
Hi, all!
After upgrading from 10.2.7 to 10.2.9 I see that restarting OSDs with
'restart ceph-osd id=N' or 'restart ceph-osd-all' takes about 10 minutes
for an OSD to go from DOWN to UP. The same situation on all 208 OSDs across 7
servers.
OSD start is also very long after rebooting the servers.
Before up
Hi, Anton.
You need to run the OSD with debug_ms = 1/1 and debug_osd = 20/20 for
detailed information.
2017-07-17 8:26 GMT+03:00 Anton Dmitriev :
> Hi, all!
>
> After upgrading from 10.2.7 to 10.2.9 I see that restarting osds by
> 'restart ceph-osd id=N' or 'restart ceph-osd-all' takes about 10 m
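(A minimal sketch of the two usual ways to apply those debug levels; osd.167 is just taken from the log quoted in the next message. injectargs only affects a daemon that is already running, so for a slow start the values need to be in ceph.conf before the restart:)

  # /etc/ceph/ceph.conf, before restarting the OSD
  [osd]
  debug ms = 1/1
  debug osd = 20/20

  # or, for a daemon that is already up
  ceph tell osd.167 injectargs '--debug_ms 1/1 --debug_osd 20/20'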
Thanks for the reply.
I restarted the OSD with debug_ms = 1/1 and debug_osd = 20/20.
Look at this:
2017-07-17 08:57:52.077481 7f4db319c840 0
xfsfilestorebackend(/var/lib/ceph/osd/ceph-167) detect_feature: extsize
is disabled by conf
2017-07-17 09:04:04.345065 7f4db319c840 0
filestore(/var/lib/ceph/o
During startup it consumes ~90% CPU, and strace shows that the OSD process is
doing something with LevelDB.
Compaction is disabled:
r...@storage07.main01.ceph.apps.prod.int.grcc:~$ cat /etc/ceph/ceph.conf
| grep compact
#leveldb_compact_on_mount = true
But with debug_leveldb=20 I see that compaction is runn
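(For reference, a minimal sketch of the options being discussed, assuming they live in the [osd] section; the commented-out leveldb_compact_on_mount line in the quoted ceph.conf just leaves that option at its default, which is off:)

  [osd]
  # do not force a full LevelDB compaction at mount time (the default)
  leveldb compact on mount = false
  # verbose LevelDB logging, as used above to watch the background compaction
  debug leveldb = 20/20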