[ceph-users] Need help with synchronizing ceph mons

2015-05-13 Thread Manuel Lausch
…going on and how I can try to fix this? I am using ceph version 0.67.11 (bc8b67bef6309a32361be76cd11fb56b057ea9d2), 5 monitor nodes with SSDs as leveldb store, 24 OSD hosts with 1416 OSDs. Thank you, Manuel
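
For context, a monitor's sync and quorum state on a dumpling-era cluster (0.67.x) can be inspected through the admin socket; a minimal sketch, assuming the default socket path:

  ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
  ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok quorum_status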

[ceph-users] Question about how to start ceph OSDs with systemd

2016-07-08 Thread Manuel Lausch
Hi, in the last few days I have been playing around with ceph jewel on Debian Jessie and CentOS 7. Now I have a question about systemd on these systems. I installed ceph jewel (ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)) on Debian Jessie and prepared some OSDs. While playing around I de…
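
For reference, jewel ships ceph-osd@.service instance units, so each OSD is handled per ID; a sketch with OSD id 0 as an illustrative example:

  systemctl enable ceph-osd@0      # make OSD 0 start at boot
  systemctl start ceph-osd@0
  systemctl status ceph-osd@0
  systemctl start ceph-osd.target  # or start all enabled OSDs at once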

[ceph-users] Try to install ceph hammer on CentOS7

2016-07-21 Thread Manuel Lausch
Hi, I am trying to install ceph hammer on CentOS 7, but something with the RPM repository seems to be wrong. In my yum.repos.d/ceph.repo file I have the following configuration:

  [ceph]
  name=Ceph packages for $basearch
  baseurl=baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
  enabled=1
  priorit…
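
The doubled baseurl= key is the visible problem here (acknowledged as a copy&paste error in the follow-up below). A sketch of the intended stanza; the priority and gpgkey values are assumptions based on the usual upstream examples, and priority only takes effect with yum-plugin-priorities installed:

  [ceph]
  name=Ceph packages for $basearch
  baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
  enabled=1
  priority=2
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc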

Re: [ceph-users] Try to install ceph hammer on CentOS7

2016-07-25 Thread Manuel Lausch
…Fri, Jul 22, 2016 at 3:40 PM, Manuel Lausch wrote: oh. This was a copy&paste failure. Of course I checked my config again. Some other variations of the configuration didn't help either. Finally I put the ceph-0.94.7-0.el7.x86_64.rpm in a directory and created with createrepo the n…
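
The local-repository workaround described here would look roughly like this (paths are illustrative):

  mkdir -p /srv/ceph-hammer
  cp ceph-0.94.7-0.el7.x86_64.rpm /srv/ceph-hammer/
  createrepo /srv/ceph-hammer
  # then point a .repo file at it:
  #   baseurl=file:///srv/ceph-hammer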

Re: [ceph-users] Blocked requests problem

2017-08-23 Thread Manuel Lausch
…2950171 2017-08-20 04:46:59.208792 > Active scrub does not finish (about 24 hours). I did not restart any OSD meanwhile. I'm thinking of setting the noscrub, nodeep-scrub, norebalance, nobackfill, and norecover flags and restarting OSDs 3, 29, and 31. Would this solve my problem? Or does anyone ha…
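
For reference, these flags are set and cleared cluster-wide, and the deep-scrub flag is spelled nodeep-scrub. A sketch of the sequence, assuming systemd-managed OSDs and the OSD ids from the quoted mail:

  ceph osd set noscrub
  ceph osd set nodeep-scrub
  ceph osd set norebalance
  ceph osd set nobackfill
  ceph osd set norecover
  systemctl restart ceph-osd@3 ceph-osd@29 ceph-osd@31
  # afterwards, clear the flags again:
  for f in noscrub nodeep-scrub norebalance nobackfill norecover; do ceph osd unset $f; done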

[ceph-users] ceph-osd restartd via systemd in case of disk error

2017-09-19 Thread Manuel Lausch
…is this a bug which should be fixed? We use ceph jewel (10.2.9). Regards, Manuel

Re: [ceph-users] ceph-osd restartd via systemd in case of disk error

2017-09-19 Thread Manuel Lausch
…a bug in the OSD itself or maybe the OOM killer or something. > Perhaps using something like RestartPreventExitStatus and defining a specific exit code for the OSD to exit on when it is exiting due to an IO error. Another idea: the OSD daemon keeps running in a def…
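
The RestartPreventExitStatus idea from the quoted mail could be sketched as a systemd drop-in. Exit status 5 is purely hypothetical here; the OSD would first have to be taught to exit with a distinct code on I/O errors:

  # /etc/systemd/system/ceph-osd@.service.d/no-restart-on-eio.conf
  [Service]
  RestartPreventExitStatus=5
  # then: systemctl daemon-reload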

Re: [ceph-users] Very slow start of osds after reboot

2017-09-20 Thread Manuel Lausch
…that randomizes the startup process of OSDs running on the same node? > Kind regards, Piotr Dzionek
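
One way to stagger OSD starts on a node, assuming plain systemd instance units, is a drop-in that sleeps a random few seconds before each instance starts. This is only a sketch, not something jewel or luminous ships:

  # /etc/systemd/system/ceph-osd@.service.d/stagger.conf
  [Service]
  ExecStartPre=/bin/bash -c 'sleep $((RANDOM % 30))'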

Re: [ceph-users] tunable question

2017-10-02 Thread Manuel Lausch
…it, but the backfilling traffic is too high to be handled without impacting other services on the network. Does someone know if it is necessary to enable this tunable? And could it be a problem in the future if we want to upgrade to newer versions without it enabled? Regards, Manuel Lausch Am Thu, 28 Sep 201…
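
For reference, the currently active tunables can be inspected without any data movement; switching the profile is what triggers the backfill traffic discussed here:

  ceph osd crush show-tunables
  # changing the profile causes rebalancing; the name depends on the target version:
  ceph osd crush tunables hammer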

[ceph-users] resolve split brain situation in ceph cluster

2016-10-14 Thread Manuel Lausch
…data are harmed. Regards, Manuel

Re: [ceph-users] resolve split brain situation in ceph cluster

2016-10-17 Thread Manuel Lausch
Gregory Farnum: On Fri, Oct 14, 2016 at 7:27 AM, Manuel Lausch wrote: Hi, I need some help to fix a broken cluster. I think we broke the cluster, but I want to know your opinion and if you see a possibility to recover it. Let me explain what happened. We have a cluster (version 0.94.9) in two…
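
Recovering quorum in a situation like this usually comes down to monmap surgery on the surviving side; a rough sketch with illustrative mon names, to be attempted only with all mons stopped and after taking backups:

  ceph-mon -i mon1 --extract-monmap /tmp/monmap
  monmaptool --print /tmp/monmap
  monmaptool --rm mon4 /tmp/monmap   # drop the unreachable side
  monmaptool --rm mon5 /tmp/monmap
  ceph-mon -i mon1 --inject-monmap /tmp/monmap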

[ceph-users] osd down detection broken in jewel?

2016-11-30 Thread Manuel Lausch
…in an appropriate time? The cluster: ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b), 24 hosts with 60 OSDs each -> 1440 OSDs, 2 pools with replication factor 4, 65536 PGs, 5 mons
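
For reference, these are the jewel-era options that govern down detection, shown with their defaults; whether tuning them is appropriate depends on the failure mode discussed in the thread:

  [osd]
  osd heartbeat grace = 20          # seconds without heartbeat before a peer is reported down
  [mon]
  mon osd min down reporters = 2    # distinct OSDs required to report a peer down
  mon osd report timeout = 900      # mark an OSD down if it stops reporting to the mons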

Re: [ceph-users] osd down detection broken in jewel?

2016-11-30 Thread Manuel Lausch
…On Wed, Nov 30, 2016 at 6:39 AM, Manuel Lausch <manuel.lau...@1und1.de…

[ceph-users] memory usage ceph jewel OSDs

2017-03-24 Thread Manuel Lausch
…  0.1   0.8%  92.4%   0.1   0.8%  ceph::buffer::create_aligned
   0.1   0.8%  93.2%   0.1   0.8%  std::string::_Rep::_S_create

Is this normal? Am I doing something wrong? Is there a bug? Why do my OSDs need so much RAM? Thanks for your help. Regards, Manuel
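
The numbers above look like tcmalloc heap-profiler output; the same data can be pulled from a running OSD, and heap release hands freed-but-retained memory back to the OS:

  ceph tell osd.0 heap stats
  ceph tell osd.0 heap release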

[ceph-users] releasedate for 10.2.8?

2017-05-30 Thread Manuel Lausch
Hi, is there a release date for the next jewel release (10.2.8)? I have been waiting for it for a few weeks because it includes some fixes related to snapshot deletion and snap trim sleep. Thanks, Manuel

[ceph-users] purpose of ceph-mgr daemon

2017-06-14 Thread Manuel Lausch
…information about it. Regards, Manuel

[ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-22 Thread Manuel Lausch
  …", line 145, in __enter__
      self.path
    File "/usr/lib/python2.7/site-packages/ceph_volume/process.py", line 153, in run
      raise RuntimeError(msg)
  RuntimeError: command returned non-zero exit status: 32

ceph version 12.2.10 (177915764b752804194937482a39e95e0ca3de94) luminous

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Manuel Lausch
…> plain and luks keys. > Looking through the code, it is very tightly coupled to storing/retrieving keys from the monitors, and I don't know what workarounds might be possible here other than throwing away the OSD and deploying a new one (I take it th…

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Manuel Lausch
On Wed, 23 Jan 2019 14:25:00 +0100 Jan Fajerski wrote: > I might be wrong on this, since it's been a while since I played with that. But IIRC you can't migrate a subset of ceph-disk OSDs to ceph-volume on one host. Once you run ceph-volume simple activate, the ceph-disk systemd units and ud…
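
For reference, the ceph-disk takeover described here is ceph-volume's "simple" mode: scan writes a JSON descriptor per OSD and activate then disables the ceph-disk units and udev rules, which is why it is effectively all-or-nothing per host. A sketch with an illustrative device path:

  ceph-volume simple scan /dev/sdb1   # writes /etc/ceph/osd/<id>-<uuid>.json
  ceph-volume simple activate --all   # takes over systemd activation for all scanned OSDs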

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Manuel Lausch
On Wed, 23 Jan 2019 08:11:31 -0500 Alfredo Deza wrote: > I don't know what that would look like, but I think it is worth a try if re-deploying OSDs is not feasible for you. Yes, if there is a working way to migrate this, I will give it a try. > The key API for encryption is *very* odd and a lo…

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-24 Thread Manuel Lausch
On Wed, 23 Jan 2019 16:32:08 +0100 Manuel Lausch wrote: >> The key API for encryption is *very* odd and a lot of its quirks are undocumented. For example, ceph-volume is stuck supporting naming files and keys 'lockbox' (for backw…

[ceph-users] chown -R on every osd activating

2019-03-05 Thread Manuel Lausch
Hi, we recently updated to ceph luminous 12.2.11 after running into this bug: http://tracker.ceph.com/issues/37784. But that is another story. Now after rebooting a host I see there is a chown -R ceph:ceph running on each OSD before the OSD daemon starts. This takes a lot of time (-> millions of o…

Re: [ceph-users] chown -R on every osd activating

2019-03-05 Thread Manuel Lausch
On Tue, 5 Mar 2019 11:04:16 +0100 Paul Emmerich wrote: > On Tue, Mar 5, 2019 at 10:51 AM Manuel Lausch wrote: >> Now after rebooting a host I see there is a chown -R ceph:ceph running on each OSD before the OSD daemon starts. This takes a lot of ti…
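
A quick way to check whether such a recursive chown actually has anything left to do is to look for files not already owned by ceph; a sketch with an illustrative OSD path:

  find /var/lib/ceph/osd/ceph-0 \( ! -user ceph -o ! -group ceph \) | head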

Re: [ceph-users] Please help: change IP address of a cluster

2019-07-23 Thread Manuel Lausch
…> b. Stop entire cluster daemons and change IP addresses.
> c. For each mon node: ceph-mon -i {mon-id} --inject-monmap {tmp}/{filename}
> d. Restart cluster daemons.
> 3. Or any better method...
> Would…
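
A worked sketch of the monmap edit behind step c, with hypothetical mon names and addresses; this follows the "messy way" from the ceph documentation and requires all mons to be stopped before injecting:

  ceph mon getmap -o /tmp/monmap                       # grab the map while quorum still exists
  monmaptool --print /tmp/monmap
  monmaptool --rm mon1 /tmp/monmap                     # drop the old address
  monmaptool --add mon1 192.168.1.11:6789 /tmp/monmap  # re-add under the new address
  # with all mons stopped, inject on every mon node:
  ceph-mon -i mon1 --inject-monmap /tmp/monmap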