[ceph-users] Re: Huge amounts of objects orphaned by lifecycle policy.

2024-06-27 Thread Casey Bodley
hi Adam, On Thu, Jun 27, 2024 at 4:41 AM Adam Prycki wrote: > > Hello, > > I have a question. Do people use rgw lifecycle policies in production? > I had big hopes for this technology but in practice it seems to be very > unreliable. > > Recently I've been testing different pool layouts and using

[ceph-users] Re: squid 19.1.0 RC QE validation status

2024-07-01 Thread Casey Bodley
On Mon, Jul 1, 2024 at 10:23 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/66756#note-1 > > Release Notes - TBD > LRC upgrade - TBD > > (Reruns were not done yet.) > > Seeking approvals/reviews for: > > smoke > rados - Radek, Laura >

[ceph-users] Re: reef 18.2.3 QE validation status

2024-07-03 Thread Casey Bodley
(cc Thomas Goirand) in April, an 18.2.3 tarball was uploaded to https://download.ceph.com/tarballs/ceph_18.2.3.orig.tar.gz. that's been picked up and packaged by the Debian project under the assumption that it was a supported release when we do finally release 18.2.3, we will presumably overwrite

[ceph-users] Re: reef 18.2.3 QE validation status

2024-07-09 Thread Casey Bodley
this was discussed in the ceph leadership team meeting yesterday, and we've agreed to re-number this release to 18.2.4 On Wed, Jul 3, 2024 at 1:08 PM wrote: > > > On Jul 3, 2024 5:59 PM, Kaleb Keithley wrote: > > > > > > > > > Replacing the tar file is problematic too, if only because it's a pot

[ceph-users] Re: Large omap in index pool even if properly sharded and not "OVER"

2024-07-09 Thread Casey Bodley
in general, these omap entries should be evenly spread over the bucket's index shard objects. but there are two features that may cause entries to clump on a single shard: 1. for versioned buckets, multiple versions of the same object name map to the same index shard. this can become an issue if a

[ceph-users] Re: Large omap in index pool even if properly sharded and not "OVER"

2024-07-10 Thread Casey Bodley
but the secondary zone isn't processing them in this case > > Thank you > > > > From: Casey Bodley > Sent: Tuesday, July 9, 2024 10:39 PM > To: Szabo, Istvan (Agoda) > Cc: Eugen Block ; ceph-users@ceph.io > Subject: Re: [ceph-users]

[ceph-users] Re: Large omap in index pool even if properly sharded and not "OVER"

2024-07-10 Thread Casey Bodley
--bucket={bucket_name}` to fix up the bucket object count and object > sizes at the end > > This process takes quite some time and I can't say if it's 100% > perfect but it enabled us to get to a state where we could delete the > buckets and clean up the objects. > I h

[ceph-users] Re: v19.1.0 Squid RC0 released

2024-07-19 Thread Casey Bodley
On Fri, Jul 19, 2024 at 9:04 AM Stefan Kooman wrote: > > Hi, > > On 12-07-2024 00:27, Yuri Weinstein wrote: > > ... > > > * For packages, see https://docs.ceph.com/en/latest/install/get-packages/ > > I see that only packages have been built for Ubuntu 22.04 LTS. Will > there also be packages built

[ceph-users] Re: [Query] Safe to discard bucket lock objects in reshard pool?

2021-01-19 Thread Casey Bodley
On Tue, Jan 19, 2021 at 10:57 AM Prasad Krishnan wrote: > > Dear Ceph users, > > We have a slightly dated version of Luminous cluster in which dynamic > bucket resharding was accidentally enabled due to a misconfig (we don't use > this feature since the number of objects per bucket is capped). > >

[ceph-users] Re: Storage-class split objects

2021-02-10 Thread Casey Bodley
On Wed, Feb 10, 2021 at 8:31 AM Marcelo wrote: > > Hello all! > > We have a cluster where there are HDDs for data and NVMEs for journals and > indexes. We recently added pure SSD hosts, and created a storage class SSD. > To do this, we create a default.rgw.hot.data pool, associate a crush rule > u

[ceph-users] Re: Storage-class split objects

2021-02-11 Thread Casey Bodley
> On Wed, Feb 10, 2021 at 11:43 AM, Casey Bodley wrote: >

[ceph-users] Re: Rados gateway static website

2021-03-30 Thread Casey Bodley
this error 2039 is ERR_NO_SUCH_WEBSITE_CONFIGURATION. if you want to access a bucket via rgw_dns_s3website_name, you have to set a website configuration on the bucket - see https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html On Tue, Mar 30, 2021 at 10:05 AM Marcel Kuiper wro
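The fix Casey points to can be sketched with awscli (the endpoint, credentials profile, and bucket name below are placeholders, not details from the thread):

```shell
# sketch: attach a website configuration to the bucket so requests via
# rgw_dns_s3website_name stop failing with ERR_NO_SUCH_WEBSITE_CONFIGURATION
aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-website \
    --bucket my-bucket \
    --website-configuration '{"IndexDocument": {"Suffix": "index.html"}, "ErrorDocument": {"Key": "error.html"}}'
```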

[ceph-users] Re: RGW failed to start after upgrade to pacific

2021-04-06 Thread Casey Bodley
thanks for the details. this is a regression from changes to the datalog storage for multisite - this -5 error is coming from the new 'fifo' backend. as a workaround, you can set the new 'rgw_data_log_backing' config variable back to 'omap' Adam has fixes already merged to the pacific branch; be a
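A minimal sketch of that workaround (the config section name and restart method vary per deployment and are assumptions here):

```shell
# revert the datalog backend to omap until the fifo fixes land in pacific
ceph config set client.rgw rgw_data_log_backing omap
# restart the radosgw daemons so the setting takes effect
systemctl restart ceph-radosgw.target
```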

[ceph-users] Re: Revisit Large OMAP Objects

2021-04-14 Thread Casey Bodley
On Wed, Apr 14, 2021 at 11:44 AM wrote: > > Konstantin; > > Dynamic resharding is disabled in multisite environments. > > I believe you mean radosgw-admin reshard stale-instances rm. > > Documentation suggests this shouldn't be run in a multisite environment. > Does anyone know the reason for th

[ceph-users] Re: Configuring an S3 gateway

2021-04-22 Thread Casey Bodley
On Thu, Apr 22, 2021 at 2:26 PM Fabrice Bacchella wrote: > > I'm trying to configure an S3 gateway with pacific and can't wrap my mind > around. > > In the configuration file, my configuration is: > > [client.radosgw.fa41] > rgw_data = /data/ceph/data/radosgw/$cluster.$id > log_file = /data/c

[ceph-users] Re: [v15.2.11] radosgw / RGW crash at start, Segmentation Fault

2021-05-07 Thread Casey Bodley
this is https://tracker.ceph.com/issues/50218, a radosgw build issue specific to ubuntu bionic that affected all of our releases. the build issue has been resolved, so the next point releases should resolve the crashes On Fri, May 7, 2021 at 10:51 AM Gilles Mocellin wrote: > > Hello, > > Since I

[ceph-users] Re: Do people still use LevelDBStore?

2021-10-13 Thread Casey Bodley
+1 from a dev's perspective. we don't test leveldb, and we don't expect it to perform as well as rocksdb in ceph, so i don't see any value in keeping it. the rados team put a ton of effort into converting existing clusters to rocksdb, so i would be very surprised if removing leveldb left any users

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Casey Bodley
hey Tim, your changes to rgw_admin_entry probably aren't taking effect on the running radosgws. you'd need to restart them in order to set up the new route. there also seems to be some confusion about the need for a bucket named 'default'. radosgw just routes requests with paths starting with '/{r
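A sketch of the setting-plus-restart step described above (the config section and orch service name are placeholders for illustration):

```shell
# set the admin ops entry point, then restart so the route is registered
ceph config set client.rgw rgw_admin_entry admin
ceph orch restart rgw.default
```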

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Casey Bodley
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this release

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Casey Bodley
't configured. But knowing where to inject > the magic that activates that interface eludes me and whether to do it > directly on the RGW container host (and how) or on my master host is > totally unclear to me. It doesn't help that this is an item that has > multiple values, not

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Casey Bodley
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this release

[ceph-users] Re: Modify user op status=-125

2023-10-24 Thread Casey Bodley
errno 125 is ECANCELED, which is the code we use when we detect a racing write. so it sounds like something else is modifying that user at the same time. does it eventually succeed if you retry? On Tue, Oct 24, 2023 at 9:21 AM mahnoosh shahidi wrote: > > Hi all, > > I couldn't understand what doe
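A simple retry sketch for an admin operation that intermittently fails with ECANCELED due to a racing write (the uid and modified fields are placeholders):

```shell
# retry the modify a few times; radosgw-admin exits non-zero on ECANCELED
for attempt in $(seq 1 10); do
    radosgw-admin user modify --uid=myuser --display-name="New Name" && break
    echo "racing write detected, retrying (attempt $attempt)"
    sleep 1
done
```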

[ceph-users] Re: Modify user op status=-125

2023-10-24 Thread Casey Bodley
idi wrote: > > Thanks Casey for your explanation, > > Yes it succeeded eventually. Sometimes after about 100 retries. It's odd that > it stays in racing condition for that much time. > > Best Regards, > Mahnoosh > > On Tue, Oct 24, 2023 at 5:17 PM Casey Bodley wrote: >&

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-10-25 Thread Casey Bodley
if you have an administrative user (created with --admin), you should be able to use its credentials with awscli to delete or overwrite this bucket policy On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham wrote: > > I have a bucket which got injected with bucket policy which locks the > bucket e
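The recovery Casey suggests can be sketched as follows (endpoint, keys, and bucket name are placeholders):

```shell
# sketch: use the admin user's keys to drop the policy that locked
# the owner out of the bucket
AWS_ACCESS_KEY_ID=<admin-access-key> \
AWS_SECRET_ACCESS_KEY=<admin-secret-key> \
aws --endpoint-url http://rgw.example.com:8080 \
    s3api delete-bucket-policy --bucket locked-bucket
```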

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-10-25 Thread Casey Bodley
dillingham.com > LinkedIn > > > On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley wrote: >> >> if you have an administrative user (created with --admin), you should >> be able to use its credentials with awscli to delete or overwrite this >> bucket policy >> >&

[ceph-users] Re: RGW access logs with bucket name

2023-10-30 Thread Casey Bodley
another option is to enable the rgw ops log, which includes the bucket name for each request the http access log line that's visible at log level 1 follows a known apache format that users can scrape, so i've resisted adding extra s3-specific stuff like bucket/object names there. there was some re

[ceph-users] Ceph Leadership Team Meeting: 2023-11-1 Minutes

2023-11-01 Thread Casey Bodley
quincy 17.2.7: released! * major 'dashboard v3' changes causing issues? https://github.com/ceph/ceph/pull/54250 did not merge for 17.2.7 * planning a retrospective to discuss what kind of changes should go in minor releases when members of the dashboard team are present reef 18.2.1: * most PRs alr

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-07 Thread Casey Bodley
On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63443#note-1 > > Seeking approvals/reviews for: > > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures) > rados - Neha, Radek, Travis, Ernesto,

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Casey Bodley
 PM Wesley Dillingham > wrote: >> >> Thank you, this has worked to remove the policy. >> >> Respectfully, >> >> *Wes Dillingham* >> w...@wesdillingham.com >> LinkedIn <http://www.linkedin.com/in/wesleydillingham> >> >> >> On W

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Casey Bodley
were in v16.2.12. >>>> We upgraded the cluster to v17.2.7 two days ago and it seems obvious that >>>> the IAM error logs are generated the next minute rgw daemon upgraded from >>>> v16.2.12 to v17.2.7. Looks like there is some issue with parsing. >>>>

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-09 Thread Casey Bodley
ithi/7450325/ > >> > >> Seems to be related to nfs-ganesha. I've reached out to Frank Filz > >> (#cephfs on ceph slack) to have a look. WIll update as soon as > >> possible. > >> > >> > orch - Adam King > >> > rbd - Ilya app

[ceph-users] Re: RGW: user modify default_storage_class does not work

2023-11-13 Thread Casey Bodley
my understanding is that default placement is stored at the bucket level, so changes to the user's default placement only take effect for newly-created buckets On Sun, Nov 12, 2023 at 9:48 PM Huy Nguyen wrote: > > Hi community, > I'm using Ceph version 16.2.13. I tried to set default_storage_clas

[ceph-users] Re: Help on rgw metrics (was rgw_user_counters_cache)

2024-01-31 Thread Casey Bodley
On Wed, Jan 31, 2024 at 3:43 AM garcetto wrote: > > good morning, > i was struggling trying to understand why i cannot find this setting on > my reef version, is it because is only on latest dev ceph version and not > before? that's right, this new feature will be part of the squid release. we

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-01-31 Thread Casey Bodley
On Mon, Jan 29, 2024 at 4:39 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/64151#note-1 > > Seeking approvals/reviews for: > > rados - Radek, Laura, Travis, Ernesto, Adam King > rgw - Casey rgw approved, thanks > fs - Venky > rbd -

[ceph-users] Re: Debian 12 (bookworm) / Reef 18.2.1 problems

2024-02-02 Thread Casey Bodley
On Fri, Feb 2, 2024 at 11:21 AM Chris Palmer wrote: > > Hi Matthew > > AFAIK the upgrade from quincy/deb11 to reef/deb12 is not possible: > > * The packaging problem you can work around, and a fix is pending > * You have to upgrade both the OS and Ceph in one step > * The MGR will not run un

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-08 Thread Casey Bodley
thanks, i've created https://tracker.ceph.com/issues/64360 to track these backports to pacific/quincy/reef On Thu, Feb 8, 2024 at 7:50 AM Stefan Kooman wrote: > > Hi, > > Is this PR: https://github.com/ceph/ceph/pull/54918 included as well? > > You definitely want to build the Ubuntu / debian pac

[ceph-users] Re: How to solve data fixity

2024-02-09 Thread Casey Bodley
i've cc'ed Matt who's working on the s3 object integrity feature https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html, where rgw compares the generated checksum with the client's on ingest, then stores it with the object so clients can read it back for later integrit

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-21 Thread Casey Bodley
run here, approved > > ceph-volume - Guillaume, fixed by > https://github.com/ceph/ceph/pull/55658 retesting > > On Thu, Feb 8, 2024 at 8:43 AM Casey Bodley wrote: > > > > thanks, i've created https://tracker.ceph.com/issues/64360 to track > > these backpo

[ceph-users] Ceph Leadership Team Meeting: 2024-2-21 Minutes

2024-02-21 Thread Casey Bodley
Estimate on release timeline for 17.2.8? - after pacific 16.2.15 and reef 18.2.2 hotfix (https://tracker.ceph.com/issues/64339, https://tracker.ceph.com/issues/64406) Estimate on release timeline for 19.2.0? - target April, depending on testing and RCs - Testing plan for Squid beyond dev freeze (r

[ceph-users] Re: list topic shows endpoint url and username e password

2024-02-23 Thread Casey Bodley
thanks Giada, i see that you created https://tracker.ceph.com/issues/64547 for this unfortunately, this topic metadata doesn't really have a permission model at all. topics are shared across the entire tenant, and all users have access to read/overwrite those topics a lot of work was done for htt

[ceph-users] Re: Hanging request in S3

2024-03-06 Thread Casey Bodley
hey Christian, i'm guessing this relates to https://tracker.ceph.com/issues/63373 which tracks a deadlock in s3 DeleteObjects requests when multisite is enabled. rgw_multi_obj_del_max_aio can be set to 1 as a workaround until the reef backport lands On Wed, Mar 6, 2024 at 2:41 PM Christian Kugler
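The workaround from the message above, as a config sketch (the section name may differ per deployment):

```shell
# serialize multi-object deletes until the reef backport lands
ceph config set client.rgw rgw_multi_obj_del_max_aio 1
```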

[ceph-users] Re: Disable signature url in ceph rgw

2024-03-07 Thread Casey Bodley
anything we can do to narrow down the policy issue here? any of the Principal, Action, Resource, or Condition matches could be failing here. you might try replacing each with a wildcard, one at a time, until you see the policy take effect On Wed, Dec 13, 2023 at 5:04 AM Marc Singer wrote: > > Hi

[ceph-users] v17.2.7 Quincy now supports Ubuntu 22.04 (Jammy Jellyfish)

2024-03-29 Thread Casey Bodley
Ubuntu 22.04 packages are now available for the 17.2.7 Quincy release. The upcoming Squid release will not support Ubuntu 20.04 (Focal Fossa). Ubuntu users planning to upgrade from Quincy to Squid will first need to perform a distro upgrade to 22.04. Getting Ceph * Git at git://githu

[ceph-users] Re: Upgraded to Quincy 17.2.7: some S3 buckets inaccessible

2024-04-03 Thread Casey Bodley
On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote: > > Hi everybody, > > we upgraded our containerized Red Hat Pacific cluster to the latest > Quincy release (Community Edition). i'm afraid this is not an upgrade path that we try to test or support. Red Hat makes its own decisions about what to

[ceph-users] Re: Upgraded to Quincy 17.2.7: some S3 buckets inaccessible

2024-04-03 Thread Casey Bodley
object names when trying to list those buckets. 404 NoSuchKey is the response i would expect in that case On Wed, Apr 3, 2024 at 12:20 PM Casey Bodley wrote: > > On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote: > > > > Hi everybody, > > > > we upgraded our contain

[ceph-users] Re: Upgraded to Quincy 17.2.7: some S3 buckets inaccessible

2024-04-03 Thread Casey Bodley
On Wed, Apr 3, 2024 at 3:09 PM Lorenz Bausch wrote: > > Hi Casey, > > thank you so much for analysis! We tested the upgrade intensively, but > the buckets in our test environment were probably too small to get > dynamically resharded. > > > after upgrading to the Quincy release, rgw would > > loo

[ceph-users] Re: Migrating from S3 to Ceph RGW (Cloud Sync Module)

2024-04-11 Thread Casey Bodley
unfortunately, this cloud sync module only exports data from ceph to a remote s3 endpoint, not the other way around: "This module syncs zone data to a remote cloud service. The sync is unidirectional; data is not synced back from the remote zone." i believe that rclone supports copying from one s
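The rclone approach can be sketched as a one-liner (remote names and buckets are placeholders, and both remotes are assumed to already be defined in rclone.conf):

```shell
# one-way copy from a remote s3 service into a ceph rgw bucket
rclone copy aws-remote:source-bucket ceph-remote:dest-bucket --progress
```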

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-12 Thread Casey Bodley
On Fri, Apr 12, 2024 at 2:38 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/65393#note-1 > Release Notes - TBD > LRC upgrade - TBD > > Seeking approvals/reviews for: > > smoke - infra issues, still trying, Laura PTL > > rados - Radek,

[ceph-users] Re: Best practice regarding rgw scaling

2024-05-23 Thread Casey Bodley
On Thu, May 23, 2024 at 11:50 AM Szabo, Istvan (Agoda) wrote: > > Hi, > > Wonder what is the best practice to scale RGW, increase the thread numbers or > spin up more gateways? > > > * > Let's say I have 21000 connections on my haproxy > * > I have 3 physical gateway servers so let's say each

[ceph-users] Ceph Leadership Team Weekly Minutes 2024-06-10

2024-06-10 Thread Casey Bodley
# quincy now past estimated 2024-06-01 end-of-life will 17.2.8 be the last point release? maybe not, depending on timing # centos 8 eol * Casey tried to summarize the fallout in https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thread/H7I4Q4RAIT6UZQNPPZ5O3YB6AUXLLAFI/ * c8 builds were disabled

[ceph-users] Re: radosgw API issues

2022-07-15 Thread Casey Bodley
are you running quincy? it looks like this '/admin/info' API was new to that release https://docs.ceph.com/en/quincy/radosgw/adminops/#info On Fri, Jul 15, 2022 at 7:04 AM Marcus Müller wrote: > > Hi all, > > I’ve created a test user on our radosgw to work with the API. I’ve done the > followin

[ceph-users] Re: radosgw API issues

2022-07-18 Thread Casey Bodley
are running Pacific, that was my issue here. > > Can someone share a example of a full API request and answer with curl? I’m > still having issues, now getting 401 or 403 answers (but providing Auth-User > and Auth-Key). > > Regards > Marcus > > > > Am 15.07.2022

[ceph-users] Re: [EXTERNAL] Re: RGW Bucket Notifications and MultiPart Uploads

2022-07-20 Thread Casey Bodley
On Wed, Jul 20, 2022 at 12:57 AM Yuval Lifshitz wrote: > > yes, that would work. you would get a "404" until the object is fully > uploaded. just note that you won't always get 404 before multipart complete, because multipart uploads can overwrite existing objects

[ceph-users] Re: octopus v15.2.17 QE Validation status

2022-07-25 Thread Casey Bodley
On Sun, Jul 24, 2022 at 11:33 AM Yuri Weinstein wrote: > > Still seeking approvals for: > > rados - Travis, Ernesto, Adam > rgw - Casey rgw approved > fs, kcephfs, multimds - Venky, Patrick > ceph-ansible - Brad pls take a look > > Josh, upgrade/client-upgrade-nautilus-octopus failed, do we need

[ceph-users] rgw: considering deprecation of SSE-KMS integration with OpenStack Barbican

2022-08-05 Thread Casey Bodley
Barbican was the first key management server used for rgw's Server Side Encryption feature. its integration is documented in https://docs.ceph.com/en/quincy/radosgw/barbican/ we've since added SSE-KMS support for Vault and KMIP, and the SSE-S3 feature (coming soon to quincy) requires Vault our B

[ceph-users] Re: Problem adding secondary realm to rados-gw

2022-08-22 Thread Casey Bodley
On Mon, Aug 22, 2022 at 12:37 PM Matt Dunavant wrote: > > Hello, > > > I'm trying to add a secondary realm to my ceph cluster but I'm getting the > following error after running a 'radosgw-admin realm pull --rgw-realm=$REALM > --url=http://URL:80 --access-key=$KEY --secret=$SECRET': > > > reques

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-13 Thread Casey Bodley
On Tue, Sep 13, 2022 at 4:03 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/57472#note-1 > Release Notes - https://github.com/ceph/ceph/pull/48072 > > Seeking approvals for: > > rados - Neha, Travis, Ernesto, Adam > rgw - Casey rgw ap

[ceph-users] Re: Public RGW access without any LB in front?

2022-09-19 Thread Casey Bodley
hi Boris, it looks like your other questions have been covered but i'll snipe this one: On Fri, Sep 16, 2022 at 7:55 AM Boris Behrens wrote: > > How good is it handling bad HTTP request, sent by an attacker?) rgw relies on the boost.beast library to parse these http requests. that library has ha

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-30 Thread Casey Bodley
On Thu, Sep 29, 2022 at 12:40 PM Neha Ojha wrote: > > > > On Mon, Sep 19, 2022 at 9:38 AM Yuri Weinstein wrote: >> >> Update: >> >> Remaining => >> upgrade/octopus-x - Neha pls review/approve > > > Both the failures in > http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_16:33:35-upgrade:octop

[ceph-users] Re: octopus 15.2.17 RGW daemons begin to crash regularly

2022-10-06 Thread Casey Bodley
hey Boris, that looks a lot like https://tracker.ceph.com/issues/40018 where an exception was thrown when trying to read a socket's remote_endpoint(). i didn't think that local_endpoint() could fail the same way, but i've opened https://tracker.ceph.com/issues/57784 to track this and the fix shoul

[ceph-users] Re: Rgw compression any experience?

2022-10-17 Thread Casey Bodley
On Mon, Oct 17, 2022 at 6:12 AM Szabo, Istvan (Agoda) wrote: > > Hi, > > I’m looking in ceph octopus in my existing cluster to have object compression. > Any feedback/experience appreciated. > Also I’m curious is it possible to set after cluster setup or need to setup > at the beginning? it's fi
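Enabling compression on an existing cluster can be sketched like this (the zone name, placement id, and the choice of zstd are placeholders; only newly written objects are compressed):

```shell
# turn on compression for an existing placement target
radosgw-admin zone placement modify --rgw-zone=default \
    --placement-id=default-placement --compression=zstd
# in multisite setups, commit the period for the change to propagate
radosgw-admin period update --commit
```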

[ceph-users] Re: Too strong permission for RGW in OpenStack

2022-10-18 Thread Casey Bodley
On Tue, Oct 18, 2022 at 4:01 AM Michal Strnad wrote: > > Hi. > > We have ceph cluster with a lot of users who use S3 and RBD protocols. > Now we need to give access to one user group with OpenStack, so they run > RGW on their side, but we have to set "ceph caps" for this RGW. In the > documentation

[ceph-users] Ceph Leadership Team Meeting Minutes - 2022 Oct 19

2022-10-19 Thread Casey Bodley
only one agenda item discussed today: * 17.2.5 is almost ready, Upgrade testing has been completed in upstream gibba and LRC clusters! ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Ceph Leadership Team Meeting Minutes - 2022 Oct 26

2022-10-26 Thread Casey Bodley
lab issues blocking centos container builds and teuthology testing: * https://tracker.ceph.com/issues/57914 * delays testing for 16.2.11 upcoming events: * Ceph Developer Monthly (APAC) next week, please add topics: https://tracker.ceph.com/projects/ceph/wiki/CDM_02-NOV-2022 * Ceph Virtual 2022 st

[ceph-users] Re: Configuring rgw connection timeouts

2022-11-16 Thread Casey Bodley
hi Thilo, you can find a 'request_timeout_ms' frontend option documented in https://docs.ceph.com/en/quincy/radosgw/frontends/ On Wed, Nov 16, 2022 at 12:32 PM Thilo-Alexander Ginkel wrote: > > Hi there, > > we are using Ceph Quincy's rgw S3 API to retrieve one file ("GET") over a > longer time p

[ceph-users] Re: Configuring rgw connection timeouts

2022-11-17 Thread Casey Bodley
it doesn't look like cephadm supports extra frontend options during deployment. but these are stored as part of the `rgw_frontends` config option, so you can use a command like 'ceph config set' after deployment to add request_timeout_ms On Thu, Nov 17, 2022 at 11:18 AM Thilo-Alexander Ginkel wro
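A sketch of that post-deployment change (the config section and values below are examples; cephadm deployments may store `rgw_frontends` under a per-daemon section instead):

```shell
# check the currently stored frontend line first
ceph config get client.rgw rgw_frontends
# then re-set it with the timeout appended (120s here as an example)
ceph config set client.rgw rgw_frontends "beast port=8080 request_timeout_ms=120000"
```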

[ceph-users] Re: failure resharding radosgw bucket

2022-11-23 Thread Casey Bodley
hi Jan, On Wed, Nov 23, 2022 at 12:45 PM Jan Horstmann wrote: > > Hi list, > I am completely lost trying to reshard a radosgw bucket which fails > with the error: > > process_single_logshard: Error during resharding bucket > 68ddc61c613a4e3096ca8c349ee37f56/snapshotnfs:(2) No such file or > direc

[ceph-users] Re: 16.2.11 pacific QE validation status

2022-12-20 Thread Casey Bodley
thanks Yuri, rgw approved based on today's results from https://pulpito.ceph.com/yuriw-2022-12-20_15:27:49-rgw-pacific_16.2.11_RC2-distro-default-smithi/ On Mon, Dec 19, 2022 at 12:08 PM Yuri Weinstein wrote: > If you look at the pacific 16.2.8 QE validation history ( > https://tracker.ceph.com/

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Casey Bodley
On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein wrote: > > The overall progress on this release is looking much better and if we > can approve it we can plan to publish it early next week. > > Still seeking approvals > > rados - Neha, Laura > rook - Sébastien Han > cephadm - Adam > dashboard - Ern

[ceph-users] CLT meeting summary 2023-02-01

2023-02-01 Thread Casey Bodley
distro testing for reef * https://github.com/ceph/ceph/pull/49443 adds centos9 and ubuntu22 to supported distros * centos9 blocked by teuthology bug https://tracker.ceph.com/issues/58491 - lsb_release command no longer exists, use /etc/os-release instead - ceph stopped depending on lsb_release

[ceph-users] Re: Migrate a bucket from replicated pool to ec pool

2023-02-11 Thread Casey Bodley
hi Boris, On Sat, Feb 11, 2023 at 7:07 AM Boris Behrens wrote: > > Hi, > we use rgw as our backup storage, and it basically holds only compressed > rbd snapshots. > I would love to move these out of the replicated into a ec pool. > > I've read that I can set a default placement target for a user

[ceph-users] Re: Migrate a bucket from replicated pool to ec pool

2023-02-13 Thread Casey Bodley
On Mon, Feb 13, 2023 at 4:31 AM Boris Behrens wrote: > > Hi Casey, > >> changes to the user's default placement target/storage class don't >> apply to existing buckets, only newly-created ones. a bucket's default >> placement target/storage class can't be changed after creation > > > so I can easi

[ceph-users] Re: [RGW - octopus] too many omapkeys on versioned bucket

2023-02-13 Thread Casey Bodley
On Mon, Feb 13, 2023 at 8:41 AM Boris Behrens wrote: > > I've tried it the other way around and let cat give out all escaped chars > and the did the grep: > > # cat -A omapkeys_list | grep -aFn '/' > 9844:/$ > 9845:/^@v913^@$ > 88010:M-^@1000_/^@$ > 128981:M-^@1001_/$ > > Did anyone ever saw somet

[ceph-users] Re: OpenSSL in librados

2023-02-26 Thread Casey Bodley
On Sun, Feb 26, 2023 at 8:20 AM Ilya Dryomov wrote: > > On Sun, Feb 26, 2023 at 2:15 PM Patrick Schlangen > wrote: > > > > Hi Ilya, > > > > > Am 26.02.2023 um 14:05 schrieb Ilya Dryomov : > > > > > > Isn't OpenSSL 1.0 long out of support? I'm not sure if extending > > > librados API to support

[ceph-users] Re: CompleteMultipartUploadResult has empty ETag response

2023-02-28 Thread Casey Bodley
On Tue, Feb 28, 2023 at 8:19 AM Lars Dunemark wrote: > > Hi, > > I notice that CompleteMultipartUploadResult does return an empty ETag > field when completing an multipart upload in v17.2.3. > > I haven't had the possibility to verify from which version this changed > and can't find in the changel

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-22 Thread Casey Bodley
On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/59070#note-1 > Release Notes - TBD > > The reruns were in the queue for 4 days because of some slowness issues. > The core team (Neha, Radek, Laura, and others

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-03-23 Thread Casey Bodley
hi Ernesto and lists, > [1] https://github.com/ceph/ceph/pull/47501 are we planning to backport this to quincy so we can support centos 9 there? enabling that upgrade path on centos 9 was one of the conditions for dropping centos 8 support in reef, which i'm still keen to do if not, can we find

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-23 Thread Casey Bodley
On Wed, Mar 22, 2023 at 9:27 AM Casey Bodley wrote: > > On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/59070#note-1 > > Release Notes - TBD > > > >

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-27 Thread Casey Bodley
On Fri, Mar 24, 2023 at 3:46 PM Yuri Weinstein wrote: > > Details of this release are updated here: > > https://tracker.ceph.com/issues/59070#note-1 > Release Notes - TBD > > The slowness we experienced seemed to be self-cured. > Neha, Radek, and Laura please provide any findings if you have them.

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-03-27 Thread Casey Bodley
t question, I don't know who's the maintainer of those > > packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620) > > requesting that specific package, but that's only one out of the dozen of > > missing packages (plus transitive dependenc

[ceph-users] Re: RGW don't use .rgw.root multisite configuration

2023-04-11 Thread Casey Bodley
there's a rgw_period_root_pool option for the period objects too. but it shouldn't be necessary to override any of these On Sun, Apr 9, 2023 at 11:26 PM wrote: > > Up :)

[ceph-users] Re: ceph 17.2.6 and iam roles (pr#48030)

2023-04-11 Thread Casey Bodley
On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote: > > > Hi, > I see that this PR: https://github.com/ceph/ceph/pull/48030 > made it into ceph 17.2.6, as per the change log at: > https://docs.ceph.com/en/latest/releases/quincy/ That's great. > But my scenario is as follows: > I have two

[ceph-users] Re: ceph 17.2.6 and iam roles (pr#48030)

2023-04-11 Thread Casey Bodley
On Tue, Apr 11, 2023 at 3:53 PM Casey Bodley wrote: > > On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote: > > > > > > Hi, > > I see that this PR: https://github.com/ceph/ceph/pull/48030 > > made it into ceph 17.2.6, as per the change log at: > >

[ceph-users] Re: Rados gateway data-pool replacement.

2023-04-19 Thread Casey Bodley
On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND wrote: > > Hi everyone, quick question regarding radosgw zone data-pool. > > I’m currently planning to migrate an old data-pool that was created with > inappropriate failure-domain to a newly created pool with appropriate > failure-domain. > > If I’m do

[ceph-users] Re: quincy user metadata constantly changing versions on multisite slave with radosgw roles

2023-04-20 Thread Casey Bodley
On Wed, Apr 19, 2023 at 7:55 PM Christopher Durham wrote: > > Hi, > > I am using 17.2.6 on rocky linux for both the master and the slave site > I noticed that: > radosgw-admin sync status > often shows that the metadata sync is behind a minute or two on the slave. > This didn't make sense, as the

[ceph-users] Re: Can I delete rgw log entries?

2023-04-20 Thread Casey Bodley
On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote: > > Hi Everyone, > I've been having trouble finding an answer to this question. Basically > I'm wanting to know if stuff in the .log pool is actively used for > anything or if it's just logs that can be deleted. > In particular I was wondering a

[ceph-users] Ceph Leadership Team meeting minutes - 2023 April 26

2023-04-26 Thread Casey Bodley
# ceph windows tests
PR check will be made required once regressions are fixed. windows build currently depends on gcc11, which limits use of c++20 features. investigating newer gcc or clang toolchain
# 16.2.13 release
final testing in progress
# prometheus metric regressions
https://tracker.ceph.c

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-04-26 Thread Casey Bodley
of those >> > packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620) >> > requesting that specific package, but that's only one out of the dozen of >> > missing packages (plus transitive dependencies)... >> > >> > Kind Rega

[ceph-users] Re: Radosgw multisite replication issues

2023-04-27 Thread Casey Bodley
On Thu, Apr 27, 2023 at 11:36 AM Tarrago, Eli (RIS-BCT) wrote: > > After working on this issue for a bit. > The active plan is to fail over master, to the “west” dc. Perform a realm > pull from the west so that it forces the failover to occur. Then have the > “east” DC, then pull the realm data

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-02 Thread Casey Bodley
On Thu, Apr 27, 2023 at 5:21 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/59542#note-1 > Release Notes - TBD > > Seeking approvals for: > > smoke - Radek, Laura > rados - Radek, Laura > rook - Sébastien Han > cephadm - Adam K >

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-08 Thread Casey Bodley
On Sun, May 7, 2023 at 5:25 PM Yuri Weinstein wrote: > > All PRs were cherry-picked and the new RC1 build is: > > https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/ > > Rados, fs and rgw were rerun and results are summarized here: > https://tracker.ceph.c

[ceph-users] Re: Radosgw multisite replication issues

2023-05-11 Thread Casey Bodley
speed less than 1024 Bytes per second during > 300 seconds. > 2023-05-09T15:46:21.069+ 7f20b12b8700 0 WARNING: curl operation timed > out, network average transfer speed less than 1024 Bytes per second during > 300 seconds. > 2023-05-09T15:46:21.069+ 7f2085ff3700 0 rgw a
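
The "less than 1024 Bytes per second during 300 seconds" warning quoted above matches RGW's curl low-speed thresholds. As a hedged sketch (option values shown are the apparent defaults implied by the log text; the section name is hypothetical), these can be tuned in ceph.conf:

```ini
# Hypothetical ceph.conf excerpt: thresholds at which RGW aborts a stalled
# sync connection. 1024 B/s over 300 s matches the warning text above.
[client.rgw.myhost]
rgw_curl_low_speed_limit = 1024
rgw_curl_low_speed_time = 300
```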

[ceph-users] Re: multisite sync and multipart uploads

2023-05-11 Thread Casey Bodley
sync doesn't distinguish between multipart and regular object uploads. once a multipart upload completes, sync will replicate it as a single object using an s3 GetObject request replicating the parts individually would have some benefits. for example, when sync retries are necessary, we might only

[ceph-users] Re: how to enable multisite resharding feature?

2023-05-17 Thread Casey Bodley
i'm afraid that feature will be new in the reef release. multisite resharding isn't supported on quincy On Wed, May 17, 2023 at 11:56 AM Alexander Mamonov wrote: > > https://docs.ceph.com/en/latest/radosgw/multisite/#feature-resharding > When I try this I get: > root@ceph-m-02:~# radosgw-admin zo

[ceph-users] Re: Creating a bucket with bucket constructor in Ceph v16.2.7

2023-05-18 Thread Casey Bodley
On Wed, May 17, 2023 at 11:13 PM Ramin Najjarbashi wrote: > > Hi > > I'm currently using Ceph version 16.2.7 and facing an issue with bucket > creation in a multi-zone configuration. My setup includes two zone groups: > > ZG1 (Master) and ZG2, with one zone in each zone group (zone-1 in ZG1 and >
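
Related to creating buckets in a multi-zonegroup setup: RGW lets an S3 client direct a bucket to a particular zonegroup (and optionally a placement rule) via the CreateBucket LocationConstraint. A hedged sketch of the request body, where "zg2" stands for the target zonegroup's api_name and "fast-placement" is a hypothetical placement rule:

```xml
<!-- Hypothetical CreateBucket request body; names are illustrative. -->
<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <LocationConstraint>zg2:fast-placement</LocationConstraint>
</CreateBucketConfiguration>
```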

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-05-18 Thread Casey Bodley
That final one (logutils) should go to EPEL's stable repo in a week > (faster with karma). > > - Ken > > > > > On Wed, Apr 26, 2023 at 11:00 AM Casey Bodley wrote: > > > > are there any volunteers willing to help make these python packages > > ava

[ceph-users] Re: Encryption per user Howto

2023-05-22 Thread Casey Bodley
rgw supports the 3 flavors of S3 Server-Side Encryption, along with the PutBucketEncryption api for per-bucket default encryption. you can find the docs in https://docs.ceph.com/en/quincy/radosgw/encryption/ On Mon, May 22, 2023 at 10:49 AM huxia...@horebdata.cn wrote: > > Dear Alexander, > > Tha
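
The PutBucketEncryption API mentioned above takes the standard S3 bucket-encryption policy document. A minimal sketch of that document as passed to, e.g., boto3's put_bucket_encryption (an SSE-KMS variant would use "aws:kms" plus a KMSMasterKeyID field instead):

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```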

[ceph-users] Important: RGW multisite bug may silently corrupt encrypted objects on replication

2023-05-26 Thread Casey Bodley
Our downstream QE team recently observed an md5 mismatch of replicated objects when testing rgw's server-side encryption in multisite. This corruption is specific to s3 multipart uploads, and only affects the replicated copy - the original object remains intact. The bug likely affects Ceph releases
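
The md5 mismatch described above was observed on multipart uploads, whose S3 ETags are computed differently from single-part objects: the ETag is the md5 of the concatenated per-part md5 digests, suffixed with the part count. A self-contained sketch of that computation (a hypothetical helper for checking replicas against the source, not part of Ceph):

```python
import hashlib

def multipart_etag(parts):
    """Compute an S3-style multipart ETag: md5 over the concatenated
    per-part md5 digests, with "-<part count>" appended. Comparing this
    value between zones is one way to spot the kind of replica mismatch
    described above."""
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return "{}-{}".format(hashlib.md5(digests).hexdigest(), len(parts))

part_a = b"x" * (5 * 1024 * 1024)  # a 5 MiB part (the usual minimum part size)
part_b = b"tail"
etag = multipart_etag([part_a, part_b])
print(etag)
```

Note that this differs from `hashlib.md5(part_a + part_b).hexdigest()`, which is why a plain whole-object md5 cannot be compared directly against a multipart ETag.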

[ceph-users] Re: Important: RGW multisite bug may silently corrupt encrypted objects on replication

2023-05-30 Thread Casey Bodley
fference is where they get the key > > [1] > https://docs.ceph.com/en/quincy/radosgw/encryption/#automatic-encryption-for-testing-only > > > On 26 May 2023, at 22:45, Casey Bodley wrote: > > > > Our downstream QE team recently observed an md5 mismatch of replica
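
The "automatic encryption for testing only" mode linked above is driven by a single config key. A hedged ceph.conf sketch (the key shown is the example value from the linked docs, not a secret; the section name is hypothetical):

```ini
# Hypothetical ceph.conf excerpt: transparently encrypts all uploads with one
# static key. The linked docs mark this as suitable for testing only. The key
# is a base64-encoded 256-bit value.
[client.rgw.myhost]
rgw_crypt_default_encryption_key = 4YSmvJtBv0aZ7geVgAsdpRnLBEwWSWlMIGnRS8a9TSA=
```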

[ceph-users] Re: Important: RGW multisite bug may silently corrupt encrypted objects on replication

2023-05-31 Thread Casey Bodley
e/overwrite the original copy > > Best regards > Tobias > > On 30 May 2023, at 14:48, Casey Bodley wrote: > > On Tue, May 30, 2023 at 8:22 AM Tobias Urdin > mailto:tobias.ur...@binero.com>> wrote: > > Hello Casey, > > Thanks for the information
