[ceph-users] cephadm adopting osd failed

2020-04-16 Thread bbk
Hi, from a healthy Nautilus cluster (version 14.2.9) on CentOS 7 I am trying to follow the upgrade procedure to the containerized Octopus setup with cephadm. * https://docs.ceph.com/docs/octopus/cephadm/adoption/ Every step went fine until I wanted to adopt the OSDs; then I get an error. Does a
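For reference, the adoption step that fails at this point is roughly the following (a minimal sketch based on the linked adoption guide; OSD id 0 is a placeholder):

    # list the legacy (non-containerized) daemons cephadm can see on this host
    cephadm ls
    # adopt a legacy OSD into a container-managed daemon
    cephadm adopt --style legacy --name osd.0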

[ceph-users] Re: how to fix num_strays?

2020-04-16 Thread Dan van der Ster
On Thu, Apr 16, 2020 at 3:53 AM Yan, Zheng wrote: > > On Thu, Apr 16, 2020 at 12:15 AM Dan van der Ster wrote: > > > > On Wed, Apr 15, 2020 at 5:13 PM Yan, Zheng wrote: > > > > > > On Wed, Apr 15, 2020 at 2:33 AM Dan van der Ster > > > wrote: > > > > > > > > Hi all, > > > > > > > > Following s

[ceph-users] Re: cephadm adopting osd failed

2020-04-16 Thread bbk
Hi again, it's not the first time this has happened: just after I posted my question I found a solution :-) What I needed to do was stop the OSD first: systemctl stop ceph-osd@0 Then unmount the tmpfs: umount /var/lib/ceph/osd/ceph-0 So now the script is able to remove the folder and adopt the
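Put together, the sequence described above looks roughly like this (a sketch assuming OSD id 0; the adopt invocation follows the adoption guide referenced earlier):

    # stop the running legacy OSD so its tmpfs can be released
    systemctl stop ceph-osd@0
    # unmount the tmpfs that backs the OSD directory
    umount /var/lib/ceph/osd/ceph-0
    # re-run the adoption, which can now remove the old directory
    cephadm adopt --style legacy --name osd.0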

[ceph-users] Re: cephadm adopting osd failed

2020-04-16 Thread John Zachary Dover
This is a comment for documentation purposes. Note to slightly-future Zac: Add to https://docs.ceph.com/docs/octopus/cephadm/adoption/ a step directing the reader to stop the OSD and unmount the tmpfs as described in this email thread. CEPH DOCUMENTATION INITIATIVE On Thu, Apr 16, 2020 at 5:47

[ceph-users] Re: radosgw-admin error: "could not fetch user info: no user info saved"

2020-04-16 Thread Janne Johansson
On Wed 15 Apr 2020 at 21:01, Mathew Snyder < mathew.sny...@protonmail.com> wrote: > I'm running into a problem that I've found around the Internet, but for > which I'm unable to find a solution: > $ sudo radosgw-admin user info > could not fetch user info: no user info saved > radosgw-ad
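The usual cause of this message is simply that no user was specified: radosgw-admin user info needs a --uid argument, for example (the user name here is a placeholder):

    sudo radosgw-admin user info --uid=testuser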

[ceph-users] Issues with RGW PUT performance after upgrade to 14.2.8

2020-04-16 Thread Katarzyna Myrek
Hi, after upgrading to 14.2.8 I can see that PUT operations are significantly slower. GET and DELETE still have the same performance. I double-checked the OSD nodes and I cannot find anything suspicious there. No extreme iowaits etc. Does anyone have the same problem? Kind regards / Pozdrawiam, Katarzyna

[ceph-users] Re: New to ceph / Very unbalanced cluster

2020-04-16 Thread Simon Sutter
Thank you very much, I couldn't see the forest for the trees. Now I have moved a disk and added another one, and the problem is gone; I have 8 TB to use. Thanks again. Simon Sutter From: Reed Dier Sent: Wednesday, 15 April 2020 22:59:12 To: Simon Sutter Cc: ceph

[ceph-users] Re: radosgw multisite

2020-04-16 Thread Ignazio Cassano
Hello Casey, I solved it by going to the primary site and executing: radosgw-admin zone modify --rgw-zone nivolazonegroup-primarysite --access-key=blablabla --secret=blablabla Ignazio On Wed 15 Apr 2020 at 19:45, Casey Bodley wrote: > On Wed, Apr 15, 2020 at 12:06 PM Ignazio Cassano > wr
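For anyone following along: after a zone modification like this, the change normally also has to be committed to the current period before the other site picks it up (a sketch with placeholder credentials, not a confirmed part of Ignazio's fix):

    radosgw-admin zone modify --rgw-zone=nivolazonegroup-primarysite \
        --access-key=<system-access-key> --secret=<system-secret>
    radosgw-admin period update --commit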

[ceph-users] RGW and the orphans

2020-04-16 Thread Katarzyna Myrek
Hi, is there any new way to find and remove orphans from RGW pools on Nautilus? I have found information that "orphans find" is now deprecated. I can see that I have tons of orphans in one of our clusters. I was wondering how to safely remove them and make sure that they are really orphans. Does anyone have

[ceph-users] MDS_CACHE_OVERSIZED warning

2020-04-16 Thread jesper
Hi. I have a cluster that has been running for close to 2 years now - pretty much with the same settings - but over the past day I'm seeing this warning (and the cache seems to keep growing). Can I figure out which clients are accumulating the inodes? Ceph 12.2.8 - is it OK just to "bump" the memo
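To see which clients hold the caps/inodes, the per-session counters on the active MDS can be inspected, and the cache limit can be raised at runtime (a sketch for a Luminous 12.2.x cluster; the MDS name and the 4 GiB value are placeholders):

    # list client sessions, including their num_caps counters
    ceph daemon mds.<name> session ls
    # raise the MDS cache memory limit at runtime (persist it in ceph.conf as well)
    ceph tell mds.<name> injectargs '--mds_cache_memory_limit 4294967296'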

[ceph-users] Re: RGW and the orphans

2020-04-16 Thread EDH - Manuel Rios
Hi, from my experience "orphans find" hasn't worked for several releases, and the command should be re-coded or deprecated because it's not working. In our case it loops over the generated shards until the RGW daemon crashes. I'm interested in this post; in our case orphans find takes more than 24 hours i

[ceph-users] Re: cephadm adopting osd failed

2020-04-16 Thread bbk
As I progressed with the migration I found out that my problem is more of a rare case. On the 3 nodes where I had the problem, I had once moved /var/lib/ceph to another partition and symlinked it back. The kernel, however, is mounting the tmpfs at the real path (/whatever/lib/ceph is mounte
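A quick way to spot this kind of mismatch is to compare the symlink target with what the kernel actually has mounted (a sketch, using OSD 0 as the example):

    # where does the symlink really point?
    readlink -f /var/lib/ceph
    # which tmpfs mounts exist, and at which (real) path?
    findmnt -t tmpfs | grep ceph-0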

[ceph-users] Re: radosgw-admin error: "could not fetch user info: no user info saved"

2020-04-16 Thread Mathew Snyder
As simple as that, eh? Cripes. Thank you, Mathew Sent with [ProtonMail](https://protonmail.com) Secure Email. ‐‐‐ Original Message ‐‐‐ On Thursday, April 16, 2020 4:40 AM, Janne Johansson wrote: > On Wed 15 Apr 2020 at 21:01, Mathew Snyder > wrote: > >> I'm running into a problem t

[ceph-users] Re: RGW and the orphans

2020-04-16 Thread Katarzyna Myrek
Hi, thanks for the quick response. To be honest my cluster is getting full because of that trash and I am at the point where I have to do the removal manually ;/. Kind regards / Pozdrawiam, Katarzyna Myrek On Thu, 16 Apr 2020 at 13:09, EDH - Manuel Rios wrote: > > Hi, > > From my experience or

[ceph-users] Re: cephadm adopting osd failed

2020-04-16 Thread Marco Savoca
Hi, I have a similar issue. After migration to cephadm, the OSD services have to be started manually after every cluster reboot. Marco > On 16.04.2020 at 15:11, b...@nocloud.ch wrote: > > As i progressed with the migration i found out, that my problem is more of a > rare case. > > On my 3
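If the units are simply not enabled, one possible workaround (an assumption, not a confirmed fix for this report) is to check and enable the cephadm-managed units, which are named after the cluster fsid:

    # list the cephadm-managed units on this host
    systemctl list-units 'ceph-*'
    # enable a specific OSD unit so it starts on boot (fsid and OSD id are placeholders)
    systemctl enable ceph-<fsid>@osd.0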

[ceph-users] Re: MDS: cache pressure warnings with Ganesha exports

2020-04-16 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

[ceph-users] Re: MDS: cache pressure warnings with Ganesha exports

2020-04-16 Thread Stolte, Felix
Hi Jeff, my Ganesha instances are running on Ubuntu 18.04 with packages from http://ppa.launchpad.net/nfs-ganesha/nfs-ganesha-3.0/ubuntu Unfortunately they do not provide the debug package, so I cannot poke around due to the missing debug symbols. Do you have another approach to get the informa

[ceph-users] Re: RGW and the orphans

2020-04-16 Thread Eric Ivancich
There is currently a PR for an “orphans list” capability. I’m working on the testing side to make sure it’s part of our teuthology suite. See: https://github.com/ceph/ceph/pull/34148 Eric > On Apr 16, 2020, at 9:26 AM, Katarzyna Myrek wrote
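Until that lands, the general approach behind it can be approximated by hand: list everything in the RGW data pool, list everything RGW itself can account for, and diff the two (a rough sketch; the pool name is a placeholder, and "bucket radoslist" may not be available on older Nautilus builds):

    # all RADOS objects in the data pool
    rados ls -p default.rgw.buckets.data | sort > rados-objects.txt
    # all RADOS objects that RGW knows about
    radosgw-admin bucket radoslist | sort > rgw-objects.txt
    # objects present in the pool but unknown to RGW (orphan candidates)
    comm -23 rados-objects.txt rgw-objects.txt > orphan-candidates.txt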

[ceph-users] Re: RGW and the orphans

2020-04-16 Thread EDH - Manuel Rios
Hi Eric, is there any ETA for getting that script backported, maybe in 14.2.10? Regards Manuel From: Eric Ivancich Sent: Thursday, 16 April 2020 19:05 To: Katarzyna Myrek ; EDH - Manuel Rios CC: ceph-users@ceph.io Subject: Re: [ceph-users] RGW and the orphans There is currently a PR f

[ceph-users] v13.2.9 Mimic released

2020-04-16 Thread Abhishek Lekshmanan
We're glad to announce the availability of the ninth, and very likely the last, stable release in the Ceph Mimic release series. This release fixes bugs across all components and also contains an RGW security fix. We recommend that all Mimic users upgrade to this version. We thank everyone for m

[ceph-users] Re: PGs unknown (osd down) after conversion to cephadm

2020-04-16 Thread Sebastian Wagner
Hi Marco, # ceph orch upgrade start --ceph-version 15.2.1 should do the trick. On 15.04.20 at 17:40, Dr. Marco Savoca wrote: > Hi Sebastian, > > as I said, the orchestrator does not seem to be reachable after the > cluster's reboot. The requested output could only be gathered after > manu
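The progress of such an upgrade can be followed from the same orchestrator interface (a short usage note using standard cephadm/orchestrator commands in Octopus):

    ceph orch upgrade start --ceph-version 15.2.1
    # show the state of the running upgrade
    ceph orch upgrade status
    # per-daemon view, including the version each daemon is running
    ceph orch ps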

[ceph-users] Re: MDS: what's the purpose of using LogEvent with empty metablob?

2020-04-16 Thread Xinying Song
Hi, Yan: I agree with the idea that a log event can be used to reconstruct the cache when a crash happens. But the master can reconstruct its cache by replaying its EUpdate log event. The ESlaveUpdate::OP_COMMIT log event seems to have nothing to do with the cache of the master; it's on the slave. Besides, that log event o