[ceph-users] Bucket index OMAP keys unevenly distributed among shards

2021-05-20 Thread James, GleSYS
Hi, we're running 15.2.7 and our cluster is warning us about LARGE_OMAP_OBJECTS (1 large omap objects). Here is what the distribution looks like for the bucket in question; as you can see, all but 3 of the keys reside in shard 2. .dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.0
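A quick way to see that distribution is to count the OMAP keys on each index shard object directly. A minimal sketch, assuming the default index pool name and an 11-shard bucket (both illustrative; the marker is taken from the object name above):

    # Count OMAP keys per bucket index shard (pool name and shard count assumed)
    for i in $(seq 0 10); do
      printf 'shard %s: ' "$i"
      rados -p default.rgw.buckets.index listomapkeys \
        ".dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.$i" | wc -l
    done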

[ceph-users] Re: Can see objects with "rados ls" but cannot delete them with "rados rm"

2021-01-29 Thread James, GleSYS
this by repairing the pg. I'm not sure if this is a good way of fixing this problem. In my case I'm looking for another solution which will be faster (removing 1 object was taking about 1-2 min per osd on hdd drives). Best regards, Bartosz Skotnicki
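For reference, a minimal sketch of the repair approach mentioned above; the PG id is a placeholder, not one from this thread:

    # Identify inconsistent PGs, inspect them, then ask the primary OSD to repair
    ceph health detail | grep -i inconsist
    rados list-inconsistent-obj <pgid> --format=json-pretty
    ceph pg repair <pgid>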

[ceph-users] Can see objects with "rados ls" but cannot delete them with "rados rm"

2021-01-28 Thread James, GleSYS
Hi, We have an issue in our cluster (Octopus 15.2.7) where we’re unable to remove orphaned objects from a pool, despite the fact that these objects can be listed with “rados ls”. Here is an example of an orphaned object which we can list (not sure why multiple objects are returned with the same nam
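The full example was truncated by the archive; the general shape of the check, with the pool and object name as placeholders, would be:

    # List the orphaned object, then attempt removal from the same pool
    rados -p <pool> ls | grep '<object-name>'
    rados -p <pool> rm '<object-name>'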

[ceph-users] bucket radoslist stuck in a loop while listing objects

2020-12-04 Thread James, GleSYS
Hi, I recently attempted to run the 'rgw-orphan-list' tool against our cluster (Octopus 15.2.7) to identify any orphans and noticed that the 'radosgw-admin bucket radoslist' command appeared to be stuck in a loop. I saw in the 'radosgw-admin-XX.intermediate' output file the same sequence o
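A simple way to confirm the loop is to capture the radoslist output and look for entries repeating far more often than expected. A sketch, with the bucket name as a placeholder:

    # Repeated object names in the output suggest the listing is looping
    radosgw-admin bucket radoslist --bucket=<bucket> > radoslist.out
    sort radoslist.out | uniq -c | sort -rn | head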

[ceph-users] Re: RGW listing slower on nominally faster setup

2020-06-12 Thread James, GleSYS
Hi, I’m experiencing the same symptoms as OP. We’re running Ceph Octopus 15.2.1 with RGW, and have seen on multiple occasions the bucket index pool go up to 500MB/s read throughput / 100K read IOPS. Our logs during this time are flooded with these entries: 2020-06-09T07:11:18.070+0200 7f2676efd
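To quantify the read load on the index pool while this is happening, something along these lines can help (pool name assumed to be the default):

    # Watch client I/O against the bucket index pool
    ceph osd pool stats default.rgw.buckets.index
    rados df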

[ceph-users] Re: radosgw garbage collection error

2020-05-06 Thread James, GleSYS
while reading the queue head data, but to debug that further, I need rgw logs and osd logs at debug level 20, specifically debug_objclass=20 on the osds. Although you have mentioned that you installed a new Ceph cluster with Octopus v15.2.1, I just wanted to make sure that rgw,
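A sketch of raising those debug levels at runtime via the monitor config store (revert afterwards, as level 20 is very verbose):

    # Verbose class/RGW logging for the GC investigation
    ceph config set osd debug_objclass 20
    ceph config set client.rgw debug_rgw 20
    # ... reproduce the error and collect logs, then revert:
    ceph config rm osd debug_objclass
    ceph config rm client.rgw debug_rgw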

[ceph-users] Re: radosgw garbage collection error

2020-05-05 Thread James, GleSYS
3b554a700 10 osd.15 pg_epoch: 5395 pg[5.9( v 5395'481462 (5387'478000,5395'481462] local-lis/les=5394/5395 n=48 ec=67/67 lis/c=5394/5394 les/c/f=5395/5395/0 sis=5394 pruub=12.023579210s) [15,21,26] r=0 lpr=5394 crt=5395'481460 lcod 5395'481461 mlcod 5395'481461 acti

[ceph-users] radosgw garbage collection error

2020-05-05 Thread James, GleSYS
Hi, We’ve recently installed a new Ceph cluster running Octopus 15.2.1, and we’re using RGW with an erasure-coded pool. I started to get a suspicion that deleted objects were not getting cleaned up properly, and I wanted to verify this by checking the garbage collector. That’s when I di
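For anyone following along, the GC state can be inspected with the standard radosgw-admin tooling:

    # List all GC entries, including those not yet due for processing
    radosgw-admin gc list --include-all
    # Optionally trigger a GC pass manually
    radosgw-admin gc process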

[ceph-users] Re: Netplan bonding configuration

2020-04-02 Thread James, GleSYS
:41, Robert Sander wrote: On 01.04.20 08:29, James, GleSYS wrote: The reason I want to create two bonds is to have enp179s0f0 as active for the public network, and enp179s0f1 as active for the cluster network, therefore spreading the traffic across the

[ceph-users] Re: Netplan bonding configuration

2020-03-31 Thread James, GleSYS
Hi Gilles, Yes, your configuration works with Netplan on Ubuntu 18 as well. However, this would use only one of the physical interfaces (the current active interface for the bond) for both networks. The reason I want to create two bonds is to have enp179s0f0 as active for the public network,
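To make the intent concrete, here is a minimal netplan sketch of the layout being described: two active-backup bonds over per-NIC VLAN sub-interfaces, each bond preferring a different physical port. VLAN IDs, file name, and addresses are purely illustrative, and note that the later message in this thread reports boot-time problems with exactly this kind of setup:

    # Two active-backup bonds, each preferring a different NIC (illustrative values)
    cat <<'EOF' | sudo tee /etc/netplan/01-ceph-bonds.yaml
    network:
      version: 2
      ethernets:
        enp179s0f0: {}
        enp179s0f1: {}
      vlans:
        vlan100f0: {id: 100, link: enp179s0f0}
        vlan100f1: {id: 100, link: enp179s0f1}
        vlan200f0: {id: 200, link: enp179s0f0}
        vlan200f1: {id: 200, link: enp179s0f1}
      bonds:
        bond-public:
          interfaces: [vlan100f0, vlan100f1]
          parameters: {mode: active-backup, primary: vlan100f0}
          addresses: [192.0.2.10/24]
        bond-cluster:
          interfaces: [vlan200f0, vlan200f1]
          parameters: {mode: active-backup, primary: vlan200f1}
          addresses: [198.51.100.10/24]
    EOF
    sudo netplan apply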

[ceph-users] Re: Netplan bonding configuration

2020-03-31 Thread James, GleSYS
Thanks for the suggestion, Paul. I renamed “bond0” to “zbond0” but unfortunately this did not solve the problem in our Ubuntu 18 environment. There is still an issue during boot when adding the vlan interfaces to the bond. Regards, James. On 31 Mar 2020, at 16:08, Paul Mezzanini wrote: We r
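When debugging this kind of boot-ordering problem it can help to look at what netplan hands to systemd-networkd; a sketch:

    # Render the backend config and inspect link states after boot
    netplan generate
    networkctl list
    journalctl -b -u systemd-networkd | grep -i -e bond -e vlan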