[ceph-users] Re: rgw multisite sync not syncing data, error: RGW-SYNC:data:init_data_sync_status: ERROR: failed to read remote data log shards

2023-07-20 Thread david.piper
Hey Christian, What does sync look like on the first site? And does restarting the RGW instances on the first site fix up your issues? We saw issues in the past that sound a lot like yours. We've adopted the practice of restarting the RGW instances in the first cluster after deploying a seco
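A minimal sketch of how one might check multisite sync health and restart the first site's RGW daemons, as suggested above. The bucket name and systemd unit name are assumptions that vary by deployment; containerized setups restart the RGW container instead.

```
# Check overall replication state (run on an RGW node in either zone):
radosgw-admin sync status

# Per-bucket view, useful when only some buckets lag:
radosgw-admin bucket sync status --bucket=<bucket-name>

# Restart the RGW daemons on the first site (unit name is deployment-specific;
# this is the typical non-containerized pattern):
sudo systemctl restart ceph-radosgw@rgw.$(hostname -s).service
```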

[ceph-users] pubsub RGW and OSD processes suddenly start using much more CPU

2020-08-19 Thread david.piper
Hi all, I've got a 3-node cluster in a lab environment running on ceph version 14.2.9 (containerized). Each node is running an OSD, MON, MGR, MDS and 2 x RGW (we're using a second RGW instance to host a pubsub endpoint). I've been monitoring my nodes with the ceph dashboard, and noticed that al

[ceph-users] Bucket index logs (bilogs) not being trimmed automatically (multisite, ceph nautilus 14.2.9)

2020-07-09 Thread david.piper
Hi all, We're seeing a problem in our multisite Ceph deployment, where bilogs aren't being trimmed for several buckets. This is causing bilogs to accumulate over time, leading to large OMAP object warnings for the indexes on these buckets. In every case, Ceph reports that the bucket is in sync
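For reference, a sketch of the commands typically used to inspect and trim bucket index logs in a multisite deployment like the one described. The bucket name is a placeholder; in normal operation trimming should happen automatically once both zones have caught up.

```
# List bilog entries for a bucket to gauge how much has accumulated:
radosgw-admin bilog list --bucket=<bucket-name>

# Confirm the bucket is fully synced on both zones before trimming:
radosgw-admin bucket sync status --bucket=<bucket-name>

# Trigger the automatic bilog trim logic manually (Nautilus and later):
radosgw-admin bilog autotrim
```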

[ceph-users] Re: rgw multisite with https endpoints

2020-04-06 Thread david.piper
Hi Richard, We've got a (also relatively small) multisite deployment working with HTTPS endpoints - so it's certainly possible. Differences in how we've set this up compared with your description: 1) We're using beast rather than civetweb, so the content of ceph.conf is quite different e.g.
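A minimal sketch of a beast HTTPS frontend in ceph.conf, as mentioned above. The section name, port, and certificate paths are assumptions for illustration; the zone/zonegroup endpoints in the period configuration also need to use https:// URLs for multisite sync over TLS.

```ini
[client.rgw.site1]
rgw_frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.crt ssl_private_key=/etc/ceph/rgw.key
```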

[ceph-users] Commands on cephfs mounts getting stuck in uninterruptible sleep

2020-04-06 Thread david.piper
Hello, I am seeing some commands running on CephFS mounts getting stuck in an uninterruptible sleep, at which point I can only terminate them by rebooting the client. Has anyone experienced anything similar and found a way to safeguard against this? My mount is using the ceph kernel driver, w
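One way to confirm a process is in uninterruptible sleep (state `D`, which cannot be killed and usually indicates blocking on I/O such as a stalled CephFS mount) is to inspect process states on the client. This is a generic Linux diagnostic sketch, not specific to the thread above:

```shell
# List processes in uninterruptible sleep (state 'D'); the WCHAN column
# hints at the kernel function the process is blocked in:
ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'

# For a stuck PID, the kernel stack shows where it is blocked (root only):
# cat /proc/<pid>/stack
```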