[ceph-users] Nautilus RadosGW "One Zone" like AWS

2019-12-18 Thread Florian Engelmann
Hello, is it already possible, or planned, to enable RadosGW to have a storage class like AWS's "one zone", where objects only exist in one zone and don't get mirrored to any other zone? E.g. by creating the placement pool only in one zone? If this feature isn't planned, is it worth
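[For context, one way this is sometimes attempted today is a dedicated placement target whose pools are only created in a single zone; a rough sketch with made-up zonegroup, zone, and pool names (whether multisite sync really leaves such objects alone is exactly the open question here):

    radosgw-admin zonegroup placement add \
        --rgw-zonegroup default \
        --placement-id one-zone-placement        # placement-id is hypothetical
    radosgw-admin zone placement add \
        --rgw-zone zone-a \
        --placement-id one-zone-placement \
        --data-pool zone-a.rgw.one-zone.data \
        --index-pool zone-a.rgw.one-zone.index \
        --data-extra-pool zone-a.rgw.one-zone.non-ec
    radosgw-admin period update --commit         # push the updated period to the realm
]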

[ceph-users] Re: list CephFS snapshots

2019-12-18 Thread Lars Täuber
Hi Frank, thanks for your hint. The find for the inode is really fast. At least fast enough for me: $ time find /mnt/point -inum 1093514215110 -print -quit real 0m3,009s user 0m0,037s sys 0m0,032s Cheers, Lars Tue, 17 Dec 2019 15:08:06 + Frank Schilder ==> Marc Roos , taeuber :

[ceph-users] Re: list CephFS snapshots

2019-12-18 Thread Frank Schilder
I found it, should have taken a note: Command: rados -p <pool> getxattr <hex inum>. parent | ceph-dencoder type inode_backtrace_t import - decode dump_json Note: <hex inum> is hex encoded; use 'printf "%x\n" INUM' to convert from the decimal numbers obtained with dump snaps. Explanation:

[ceph-users] radosgw - Etags suffixed with #x0e

2019-12-18 Thread Ingo Reimann
Hi, We had a strange problem with some buckets. After an s3cmd sync, some objects got ETags with the suffix "#x0e". This rendered the XML output of "GET /" (e.g. s3cmd du) invalid. Unfortunately, this behaviour was not reproducible but could be fixed by "GET /{object}" + "PUT /{object}" (s3cmd g
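[The GET-plus-PUT workaround mentioned above amounts to rewriting the affected object in place so radosgw recalculates its ETag; a minimal sketch with s3cmd, using a made-up bucket and key:

    s3cmd get s3://mybucket/affected-key /tmp/affected-key     # bucket/key are examples
    s3cmd put /tmp/affected-key s3://mybucket/affected-key     # re-uploading rewrites the object and its ETag
]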

[ceph-users] Re: list CephFS snapshots

2019-12-18 Thread Lars Täuber
Hi Frank, the command takes the metadata pool, not the data pool, as argument! rados -p <metadata pool> getxattr $(printf "%x\n" <INUM>). parent | ceph-dencoder type inode_backtrace_t import - decode dump_json The result is a not very human-readable JSON output: { "ino": 1099511755110, "ancestors": [

[ceph-users] Re: list CephFS snapshots

2019-12-18 Thread Marc Roos
This is working! rados -p fs_meta getxattr $(printf "%x" 1099519896355). parent | ceph-dencoder type inode_backtrace_t import - decode dump_json { "ino": 1099519896355, "ancestors": [ { "dirino": 1099519874624, "dname": "", "ver
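[Putting the thread together, a minimal sketch for resolving a decimal inode number to its ancestry via the backtrace xattr, assuming a metadata pool named cephfs_metadata and the common "<hex ino>.00000000" head/dirfrag object naming (both are assumptions; adjust to your cluster):

    INO=1099519896355                       # decimal inode number, e.g. from the snapshot listing
    OBJ="$(printf '%x' "$INO").00000000"    # object name is an assumption; check with: rados -p <pool> ls | grep $(printf '%x' "$INO")
    rados -p cephfs_metadata getxattr "$OBJ" parent \
        | ceph-dencoder type inode_backtrace_t import - decode dump_json
]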

[ceph-users] Use Wireshark to analysis ceph network package

2019-12-18 Thread Xu Chen
Hi guys, I want to use tcpdump and Wireshark to capture and analyze packets between clients and the Ceph cluster. But the protocol column only shows TCP, not Ceph, so I cannot read the data between client and cluster. The Wireshark version is 3.0.7. Hope for your help. Thank you.
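[For what it's worth, Wireshark ships a Ceph dissector for the v1 messenger protocol; a rough capture recipe, with the interface name and ports as assumptions (defaults: mon 6789, daemons 6800-7300; msgr v2 on port 3300 may not decode in 3.0.x):

    tcpdump -i eth0 -s 0 -w ceph.pcap 'port 6789 or portrange 6800-7300'
    # Open ceph.pcap in Wireshark; if packets still show as plain TCP,
    # right-click one -> "Decode As..." -> map the TCP port to the Ceph protocol.
]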

[ceph-users] re-balancing resulting in unexpected availability issues

2019-12-18 Thread steve . nolen
Hi! We've found ourselves in a state with our Ceph cluster that we haven't seen before, and are looking for a bit of expertise to chime in. We're running a (potentially unusually laid out) moderately large Luminous-based Ceph cluster in a public cloud, with 234*8TB OSDs, with a single OSD per clo

[ceph-users] High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread Bryan Stillwell
After upgrading one of our clusters from Nautilus 14.2.2 to Nautilus 14.2.5 I'm seeing 100% CPU usage by a single ceph-mgr thread (found using 'top -H'). Attaching to the thread with strace shows a lot of mmap and munmap calls. Here's the distribution after watching it for a few minutes: 48.7
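[A sketch of how one might reproduce that measurement; the thread ID is a placeholder and the 60-second window is arbitrary:

    top -H -p "$(pidof ceph-mgr)"            # find the spinning thread and note its TID
    timeout -s INT 60 strace -c -p <TID>     # sample its syscalls for a minute; -c prints the summary on exit
]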

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread Sage Weil
On Wed, 18 Dec 2019, Bryan Stillwell wrote: > After upgrading one of our clusters from Nautilus 14.2.2 to Nautilus 14.2.5 > I'm seeing 100% CPU usage by a single ceph-mgr thread (found using 'top -H'). > Attaching to the thread with strace shows a lot of mmap and munmap calls. > Here's the dis

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread eric
Hey, That sounds very similar to what I described there: https://tracker.ceph.com/issues/43364 Best, Eric

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread Bryan Stillwell
On Dec 18, 2019, at 1:48 PM, e...@lapsus.org wrote: > > That sounds very similar to what I described there: > https://tracker.ceph.com/issues/43364 I would agree that they're quite similar, if not the same thing! Now that you mention it, I see the thread is named mgr-fin in 'top -H' as well. I

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread Bryan Stillwell
On Dec 18, 2019, at 11:58 AM, Sage Weil <s...@newdream.net> wrote: On Wed, 18 Dec 2019, Bryan Stillwell wrote: After upgrading one of our clusters from Nautilus 14.2.2 to Nautilus 14.2.5 I'm seeing 100% CPU usage by a single ceph-mgr thread (found using 'top -H'). Attaching to the threa

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread Paul Mezzanini
Just wanted to say that we are seeing the same thing on our large cluster. It manifested mainly in the form of Prometheus stats being totally broken (they take too long to return, if at all, so the requesting program just gives up) -- Paul Mezzanini Sr Systems Administrator / Engineer, Research
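[One quick way to confirm that symptom is to time the mgr's Prometheus endpoint directly (hostname is a placeholder; 9283 is the prometheus module's default port):

    time curl -s -o /dev/null http://ceph-mgr-host:9283/metrics
    # If this takes longer than Prometheus' scrape timeout, the scrape is dropped and the graphs go silent.
]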

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread Bryan Stillwell
That's how we noticed it too. Our graphs went silent after the upgrade completed. Is your large cluster over 350 OSDs? Bryan On Dec 18, 2019, at 2:59 PM, Paul Mezzanini <pfm...@rit.edu> wrote: Just wanted to say that we are seeing the sa

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread Andras Pataki
We are also running into this issue on one of our clusters - balancer mode upmap, about 950 OSDs. Andras On 12/18/19 4:44 PM, Bryan Stillwell wrote: On Dec 18, 2019, at 11:58 AM, Sage Weil wrote: On Wed, 18 Dec 2019, Bryan Stillwell wrote: After upgrading one of o

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2019-12-18 Thread Paul Mezzanini
From memory, we are in the 700s. -- Paul Mezzanini Sr Systems Administrator / Engineer, Research Computing Information & Technology Services Finance & Administration Rochester Institute of Technology o:(585) 475-3245 | pfm...@rit.edu Sent from my phone. Please excuse any brevity or typos. CONF