[ceph-users] How safe is k=2, m=1, min_size=2?
Hi all,

We have said in the past that an EC pool should have min_size=k+1, for the same reasons that a replica 3 pool needs min_size=2. And we've heard several stories about replica 3, min_size=1 leading to incomplete PGs.

Taking a quick poll -- did anyone ever suffer an outage on a pool with k=2, m=1, min_size=2?

Thanks!
Dan
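For anyone wanting to reproduce the configuration being polled about, such a pool would typically be created along these lines (profile name, pool name and PG counts are just placeholders):

  # ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host
  # ceph osd pool create ecpool 64 64 erasure ec21
  # ceph osd pool set ecpool min_size 2

The last command sets min_size=2 explicitly, which is the case the poll asks about.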
[ceph-users] bluestore worries
Hi all,

I am thinking about converting a Filestore cluster to Bluestore. The OSD nodes have 16x 4TB 7200 RPM SATA OSDs with NVMe write journals. The NVMe drives should be large enough to house ~30G DB/WAL partitions per OSD.

I am worried that I will see a significant performance hit when the deferred writes to the NVMe journals are eliminated with Bluestore. Has anyone converted a similar setup to Bluestore? If so, what was the performance impact?

thx
Frank
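For context, the per-OSD conversion with ceph-volume would look roughly like this, after the old FileStore OSD has been marked out, stopped and destroyed (device names are examples; the --block.db partition on the NVMe takes over the role of the old journal and would hold the ~30G DB/WAL):

  # ceph-volume lvm zap /dev/sdb --destroy
  # ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1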
[ceph-users] Re: ceph-mon using 100% CPU after upgrade to 14.2.5
Adding the dev list since it seems like a bug in 14.2.5.

I was able to capture the output from perf top:

  21.58%  libceph-common.so.0   [.] ceph::buffer::v14_2_0::list::append
  20.90%  libstdc++.so.6.0.19   [.] std::getline<char, std::char_traits<char>, std::allocator<char> >
  13.25%  libceph-common.so.0   [.] ceph::buffer::v14_2_0::list::append
  10.11%  libstdc++.so.6.0.19   [.] std::istream::sentry::sentry
   8.94%  libstdc++.so.6.0.19   [.] std::basic_ios<char, std::char_traits<char> >::clear
   3.24%  libceph-common.so.0   [.] ceph::buffer::v14_2_0::ptr::unused_tail_length
   1.69%  libceph-common.so.0   [.] std::getline<char, std::char_traits<char>, std::allocator<char> >@plt
   1.63%  libstdc++.so.6.0.19   [.] std::istream::sentry::sentry@plt
   1.21%  [kernel]              [k] __do_softirq
   0.77%  libpython2.7.so.1.0   [.] PyEval_EvalFrameEx
   0.55%  [kernel]              [k] _raw_spin_unlock_irqrestore

I increased mon debugging to 20 and nothing stuck out to me.

Bryan

> On Dec 12, 2019, at 4:46 PM, Bryan Stillwell wrote:
>
> On our test cluster after upgrading to 14.2.5 I'm having problems with the mons pegging a CPU core while moving data around. I'm currently converting the OSDs from FileStore to BlueStore by marking the OSDs out in multiple nodes, destroying the OSDs, and then recreating them with ceph-volume lvm batch. This seems to get the ceph-mon process into a state where it pegs a CPU core on one of the mons:
>
>   1764450 ceph  20  0  4802412  2.1g  16980 S  100.0  28.1  4:54.72 ceph-mon
>
> Has anyone else run into this with 14.2.5 yet? I didn't see this problem while the cluster was running 14.2.4.
>
> Thanks,
> Bryan
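For anyone who wants to gather the same data on an affected monitor, something along these lines should do it (run on the node hosting the busy mon; the exact mon id will differ):

  # perf top -p $(pidof ceph-mon)
  # ceph daemon mon.$(hostname -s) config set debug_mon 20/20
  # ceph daemon mon.$(hostname -s) config set debug_mon 1/5

The last command restores the usual default afterwards.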
[ceph-users] Re: Ceph on CentOS 8?
Hi,

I am interested in el8 packages as well. Is there any plan to provide el8 packages in the near future?

Regards
Manuel

On Mon, 2 Dec 2019 11:16:01 +0100 Jan Kasprzak wrote:
> Hello, Ceph users,
>
> does anybody use Ceph on the recently released CentOS 8? Apparently there are no el8 packages either at download.ceph.com or in the native CentOS package tree. I am thinking about upgrading my cluster to C8 (because of other software running on it apart from Ceph). Do the el7 packages simply work? Can they be rebuilt using rpmbuild --rebuild? Or is running Ceph on C8 more complicated than that?
>
> Thanks,
>
> -Yenya
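For what it's worth, a rebuild attempt on C8 would look roughly like this (the SRPM URL and filename are indicative only, and as the reply below explains, missing python2 build dependencies on el8 make it unlikely to work out of the box):

  # dnf install -y rpm-build 'dnf-command(builddep)'
  # curl -O https://download.ceph.com/rpm-nautilus/el7/SRPMS/ceph-14.2.5-0.el7.src.rpm
  # dnf builddep ./ceph-14.2.5-0.el7.src.rpm
  # rpmbuild --rebuild ./ceph-14.2.5-0.el7.src.rpm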
[ceph-users] Re: Ceph on CentOS 8?
On Fri, 13 Dec 2019, Manuel Lausch wrote:
> Hi,
>
> I am interested in el8 packages as well.
> Is there any plan to provide el8 packages in the near future?

Ceph Octopus will be based on CentOS 8. It's due out in March.

The centos8 transition is awkward because our python 2 dependencies don't exist in centos8, and it is a huge amount of effort to produce them. Octopus switches to python 3, but those dependencies cannot be produced for centos7. So the nautilus->octopus upgrade will either involve a transition to the new containerized deployment (either cephadm or ceph-ansible's container mode) or a simultaneous upgrade of the OS and Ceph.

sage

> Regards
> Manuel
>
> On Mon, 2 Dec 2019 11:16:01 +0100 Jan Kasprzak wrote:
>
> > Hello, Ceph users,
> >
> > does anybody use Ceph on the recently released CentOS 8? Apparently there are no el8 packages either at download.ceph.com or in the native CentOS package tree. I am thinking about upgrading my cluster to C8 (because of other software running on it apart from Ceph). Do the el7 packages simply work? Can they be rebuilt using rpmbuild --rebuild? Or is running Ceph on C8 more complicated than that?
> >
> > Thanks,
> >
> > -Yenya
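To give a rough idea of what the containerized path means in practice, adopting existing daemons with cephadm is expected to look something like this per host (the interface is still subject to change before the Octopus release):

  # cephadm adopt --style legacy --name mon.$(hostname -s)
  # cephadm adopt --style legacy --name mgr.$(hostname -s)
  # cephadm adopt --style legacy --name osd.3

The last step is repeated for each OSD id on the host.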
[ceph-users] Re: Ceph on CentOS 8?
Hello Manuel,

if you want to get rid of all the OS type problems, you can use our free community version to deploy Ceph. We make sure every dependency is met and you do not need to worry about anything like that anymore.

How to do that?
- Deploy the croit docker container on an independent management node
- Import your cluster using our assistant/wizard
- Reboot host by host with the boot-from-network option
- Done

After that, whenever you want to migrate to a newer version or release, there will be a button that you just click, and that's all you need to do from that point on. No hassle, no pain, no OS trouble. It all comes with absolutely no cost!

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

On Fri, Dec 13, 2019 at 17:44, Sage Weil wrote:
> On Fri, 13 Dec 2019, Manuel Lausch wrote:
> > Hi,
> >
> > I am interested in el8 packages as well.
> > Is there any plan to provide el8 packages in the near future?
>
> Ceph Octopus will be based on CentOS 8. It's due out in March.
>
> The centos8 transition is awkward because our python 2 dependencies don't exist in centos8, and it is a huge amount of effort to produce them. Octopus switches to python 3, but those dependencies cannot be produced for centos7. So the nautilus->octopus upgrade will either involve a transition to the new containerized deployment (either cephadm or ceph-ansible's container mode) or a simultaneous upgrade of the OS and Ceph.
>
> sage
>
> > Regards
> > Manuel
> >
> > On Mon, 2 Dec 2019 11:16:01 +0100 Jan Kasprzak wrote:
> >
> > > Hello, Ceph users,
> > >
> > > does anybody use Ceph on the recently released CentOS 8? Apparently there are no el8 packages either at download.ceph.com or in the native CentOS package tree. I am thinking about upgrading my cluster to C8 (because of other software running on it apart from Ceph). Do the el7 packages simply work? Can they be rebuilt using rpmbuild --rebuild? Or is running Ceph on C8 more complicated than that?
> > >
> > > Thanks,
> > >
> > > -Yenya
[ceph-users] v13.2.8 Mimic released
This is the eighth backport release in the Ceph Mimic stable release series. Its sole purpose is to fix a regression that found its way into the previous release.

Notable Changes
---------------
* Due to a missed backport, clusters in the process of being upgraded from 13.2.6 to 13.2.7 might suffer an OSD crash in build_incremental_map_msg. This regression was reported in https://tracker.ceph.com/issues/43106 and is fixed in 13.2.8 (this release). Users of 13.2.6 can upgrade to 13.2.8 directly - i.e., skip 13.2.7 - to avoid this.

Changelog
---------
* osd: fix sending incremental map messages (issue#43106, pr#32000, Sage Weil)
* tests: added missing point release versions (pr#32087, Yuri Weinstein)
* tests: rgw: add missing force-branch: ceph-mimic for swift tasks (pr#32033, Casey Bodley)

For a blog post with links to PRs and issues please check out https://ceph.io/releases/v13-2-8-mimic-released/

Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.8.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0

--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
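The usual rolling-upgrade pattern applies; sketched here for an RPM-based node, untested, so adjust the package manager and restart order (mons first, then OSDs, then MDS/RGW) to your environment:

  # ceph osd set noout
  # yum update ceph
  # systemctl restart ceph-mon.target
  # systemctl restart ceph-osd.target
  # ceph osd unset noout
  # ceph versions

Restart OSD nodes one at a time, and use "ceph versions" at the end to confirm all daemons report 13.2.8.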
[ceph-users] Re: Can't create new OSD
[ Re-adding users list ]

On Thu, Dec 12, 2019 at 10:12 AM Rodrigo Severo - Fábrica wrote:
>
> On Thu, Dec 12, 2019 at 12:55, Gregory Farnum wrote:
> >
> > On Wed, Dec 11, 2019 at 12:04 PM Rodrigo Severo - Fábrica wrote:
> >>
> >> Hi,
> >>
> >> Trying to create a new OSD following the instructions available at
> >> https://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
> >>
> >> On step 3 I'm instructed to run "ceph-osd -i {osd-num} --mkfs --mkkey". Unfortunately it doesn't work:
> >>
> >> # ceph-osd -i 3 --mkfs --mkkey
> >> 2019-12-11 16:59:58.257 7fac4597fc00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
> >> 2019-12-11 16:59:58.257 7fac4597fc00 -1 AuthRegistry(0x55ad976ea140) no keyring found at /var/lib/ceph/osd/ceph-3/keyring, disabling cephx
> >> 2019-12-11 16:59:58.261 7fac4597fc00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
> >> 2019-12-11 16:59:58.261 7fac4597fc00 -1 AuthRegistry(0x7fffac4075e8) no keyring found at /var/lib/ceph/osd/ceph-3/keyring, disabling cephx
> >> failed to fetch mon config (--no-mon-config to skip)
>
> I'm full of questions.
>
> > This is the important bit. Since you're running the command without a way to access the Ceph monitors,
>
> Should I provide a way to access the Ceph monitors? I was just following the docs and there is no mention of it in the above page.
>
> Anyway I tried to do it but the results were exactly the same:
>
> # ceph-osd -m 192.168.109.233:3300 -i 3 --mkfs --mkkey
> 2019-12-12 14:56:18.408 7f392c0a1c00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
> 2019-12-12 14:56:18.408 7f392c0a1c00 -1 AuthRegistry(0x564545484140) no keyring found at /var/lib/ceph/osd/ceph-3/keyring, disabling cephx
> 2019-12-12 14:56:18.408 7f392c0a1c00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
> 2019-12-12 14:56:18.408 7f392c0a1c00 -1 AuthRegistry(0x7ffccdef0958) no keyring found at /var/lib/ceph/osd/ceph-3/keyring, disabling cephx
> failed to fetch mon config (--no-mon-config to skip)
>
> I also tried on port 6789 and without providing a port at all. Always the same results. Tried also on the other monitors. Always the same results.
>
> > it can't use the cluster default configuration.

You need to move the client.admin keyring onto the node (or another one, if you configure the ceph.conf or pass in the name on each command invocation). I haven't been through the docs in a while so maybe it's missing or lost, but it should describe that somewhere. This is necessary whenever you are making changes to the cluster, such as adding an OSD -- without a keyring, the cluster has no idea if the user making the change is authorized!

> Shouldn't it automatically read /etc/ceph/ceph.conf and figure everything out?
>
> I also tried explicitly setting the conf path but, again, same results:

It's reading the ceph.conf but still needs cluster access, which you haven't given it.
> # ceph-osd -i 3 --mkfs --mkkey -c /etc/ceph/ceph.conf
> 2019-12-12 14:57:35.740 7fee29b61c00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
> 2019-12-12 14:57:35.740 7fee29b61c00 -1 AuthRegistry(0x56457c784140) no keyring found at /var/lib/ceph/osd/ceph-3/keyring, disabling cephx
> 2019-12-12 14:57:35.740 7fee29b61c00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
> 2019-12-12 14:57:35.740 7fee29b61c00 -1 AuthRegistry(0x7ffc6eccb0b8) no keyring found at /var/lib/ceph/osd/ceph-3/keyring, disabling cephx
> failed to fetch mon config (--no-mon-config to skip)
>
> > You can pass the given option to not worry about that,
>
> Tried that. Apparently it worked, despite several disturbing messages:
>
> # ceph-osd -i 3 --mkfs --mkkey --no-mon-config
> 2019-12-12 14:57:50.256 7f31d77b2c00 -1 auth: error reading file: /var/lib/ceph/osd/ceph-3/keyring: can't open /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
> 2019-12-12 14:57:50.260 7f31d77b2c00 -1 created new key in keyring /var/lib/ceph/osd/ceph-3/keyring
> 2019-12-12 14:57:50.260 7f31d77b2c00 -1 bluestore(/var/lib/ceph/osd/ceph-3/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-3/block: (2) No such file or directory
> 2019-12-12 14:57:50.260 7f31d77b2c00 -1 bluestore(/var/lib/ceph/osd/ceph-3/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-3/block: (2) No such file or directory
> 2019-12-12 14:57:50.260 7f31d77b2c00 -1 bluestore(/var/lib/ceph/osd/ceph-3/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-3/block: (2) No such file or directory
> 2019-12-12 14:57:50.629 7f31d77b2c00 -1 bluestore(/var/lib/ceph/osd/ceph-3) _read_fsid unparsable uuid
> # ll /var/lib/ceph/osd/ceph-3
> tota
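For the record, the minimal version of what's being suggested would look something like this (the host name is a placeholder; mind the permissions on the admin keyring). On a node that already has admin access, e.g. a monitor, export the keyring and copy it over:

  # ceph auth get client.admin -o /tmp/ceph.client.admin.keyring
  # scp /tmp/ceph.client.admin.keyring osd-node:/etc/ceph/ceph.client.admin.keyring

Then, on the new OSD node, retry the manual step from the docs, this time with working cluster access:

  # ceph-osd -i 3 --mkfs --mkkey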
[ceph-users] Re: atime with cephfs
Hi together,

I had a look at the ceph-fuse code and, if I read it correctly, it does indeed not implement the relatime behaviour that kernels have provided since 2.6.30. Should I open a ticket on this?

Cheers,
Oliver

On 02.12.19 at 14:31, Oliver Freyermuth wrote:
> I was thinking about the behaviour of relatime on kernels since 2.6.30 (quoting mount(8)):
>
> "Update inode access times relative to modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time. (Similar to noatime, but it doesn't break mutt or other applications that need to know if a file has been read since the last time it was modified.)
>
> Since Linux 2.6.30, the kernel defaults to the behavior provided by this option (unless noatime was specified), and the strictatime option is required to obtain traditional semantics. In addition, since Linux 2.6.30, the file's last access time is always updated if it is more than 1 day old."
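For anyone who wants to check the behaviour of a given mount themselves, something along these lines should show it (paths are placeholders):

  # stat -c 'atime=%x  mtime=%y' /mnt/cephfs/somefile
  # cat /mnt/cephfs/somefile > /dev/null
  # stat -c 'atime=%x  mtime=%y' /mnt/cephfs/somefile

With relatime semantics the second stat should show an updated atime only if the old atime was not newer than mtime/ctime, or was more than a day old; with the behaviour I see in ceph-fuse it appears to change on every read, as with strictatime.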