hi Adam,
On Thu, Jun 27, 2024 at 4:41 AM Adam Prycki wrote:
>
> Hello,
>
> I have a question. Do people use rgw lifecycle policies in production?
> I had big hopes for this technology but in practice it seems to be very
> unreliable.
>
> Recently I've been testing different pool layouts and using
On Mon, Jul 1, 2024 at 10:23 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/66756#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
>
> (Reruns were not done yet.)
>
> Seeking approvals/reviews for:
>
> smoke
> rados - Radek, Laura
>
(cc Thomas Goirand)
in April, an 18.2.3 tarball was uploaded to
https://download.ceph.com/tarballs/ceph_18.2.3.orig.tar.gz. that's been
picked up and packaged by the Debian project under the assumption that it
was a supported release
when we do finally release 18.2.3, we will presumably overwrite
this was discussed in the ceph leadership team meeting yesterday, and
we've agreed to re-number this release to 18.2.4
On Wed, Jul 3, 2024 at 1:08 PM wrote:
>
>
> On Jul 3, 2024 5:59 PM, Kaleb Keithley wrote:
> >
> >
> >
>
> > Replacing the tar file is problematic too, if only because it's a pot
in general, these omap entries should be evenly spread over the
bucket's index shard objects. but there are two features that may
cause entries to clump on a single shard:
1. for versioned buckets, multiple versions of the same object name
map to the same index shard. this can become an issue if a
but the secondary zone isn't processing them in this case
>
> Thank you
>
>
>
> From: Casey Bodley
> Sent: Tuesday, July 9, 2024 10:39 PM
> To: Szabo, Istvan (Agoda)
> Cc: Eugen Block ; ceph-users@ceph.io
> Subject: Re: [ceph-users]
--bucket={bucket_name}` to fix up the bucket object count and object
> sizes at the end
>
> This process takes quite some time and I can't say if it's 100%
> perfect but it enabled us to get to a state where we could delete the
> buckets and clean up the objects.
> I h
On Fri, Jul 19, 2024 at 9:04 AM Stefan Kooman wrote:
>
> Hi,
>
> On 12-07-2024 00:27, Yuri Weinstein wrote:
>
> ...
>
> > * For packages, see https://docs.ceph.com/en/latest/install/get-packages/
>
> I see that packages have only been built for Ubuntu 22.04 LTS. Will
> there also be packages built
On Tue, Jan 19, 2021 at 10:57 AM Prasad Krishnan
wrote:
>
> Dear Ceph users,
>
> We have a slightly dated Luminous cluster in which dynamic
> bucket resharding was accidentally enabled due to a misconfig (we don't use
> this feature since the number of objects per bucket is capped).
>
>
On Wed, Feb 10, 2021 at 8:31 AM Marcelo wrote:
>
> Hello all!
>
> We have a cluster where there are HDDs for data and NVMEs for journals and
> indexes. We recently added pure SSD hosts, and created a storage class SSD.
> To do this, we create a default.rgw.hot.data pool, associate a crush rule
> u
>
> On Wed, Feb 10, 2021 at 11:43 AM Casey Bodley
> wrote:
>
>
this error 2039 is ERR_NO_SUCH_WEBSITE_CONFIGURATION. if you want to
access a bucket via rgw_dns_s3website_name, you have to set a website
configuration on the bucket - see
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html
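for example, with awscli it would look something like this (the
endpoint and bucket names here are placeholders):

  aws --endpoint-url http://rgw.example.com s3api put-bucket-website \
      --bucket mybucket \
      --website-configuration '{"IndexDocument": {"Suffix": "index.html"}}'

after that, requests through the rgw_dns_s3website_name endpoint should
serve index.html rather than returning ERR_NO_SUCH_WEBSITE_CONFIGURATION.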
On Tue, Mar 30, 2021 at 10:05 AM Marcel Kuiper wro
thanks for the details. this is a regression from changes to the
datalog storage for multisite - this -5 error is coming from the new
'fifo' backend. as a workaround, you can set the new
'rgw_data_log_backing' config variable back to 'omap'
Adam has fixes already merged to the pacific branch; be a
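a minimal sketch of that workaround (i'm assuming the usual client.rgw
config section here; adjust it to match your daemons and restart them
so the change takes effect):

  ceph config set client.rgw rgw_data_log_backing omap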
On Wed, Apr 14, 2021 at 11:44 AM wrote:
>
> Konstantin;
>
> Dynamic resharding is disabled in multisite environments.
>
> I believe you mean radosgw-admin reshard stale-instances rm.
>
> Documentation suggests this shouldn't be run in a multisite environment.
> Does anyone know the reason for th
On Thu, Apr 22, 2021 at 2:26 PM Fabrice Bacchella
wrote:
>
> I'm trying to configure an S3 gateway with pacific and can't wrap my mind
> around it.
>
> In the configuration file, my configuration is:
>
> [client.radosgw.fa41]
> rgw_data = /data/ceph/data/radosgw/$cluster.$id
> log_file = /data/c
this is https://tracker.ceph.com/issues/50218, a radosgw build issue
specific to ubuntu bionic that affected all of our releases. the build
issue has been resolved, so the next point releases should resolve the
crashes
On Fri, May 7, 2021 at 10:51 AM Gilles Mocellin
wrote:
>
> Hello,
>
> Since I
+1 from a dev's perspective. we don't test leveldb, and we don't
expect it to perform as well as rocksdb in ceph, so i don't see any
value in keeping it
the rados team put a ton of effort into converting existing clusters
to rocksdb, so i would be very surprised if removing leveldb left any
users
hey Tim,
your changes to rgw_admin_entry probably aren't taking effect on the
running radosgws. you'd need to restart them in order to set up the
new route
there also seems to be some confusion about the need for a bucket
named 'default'. radosgw just routes requests with paths starting with
'/{r
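a rough sketch of applying the setting and restarting (the service name
rgw.default and the value 'admin' are placeholders for your deployment):

  ceph config set client.rgw rgw_admin_entry admin
  ceph orch restart rgw.default   # or restart the radosgw systemd units directly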
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this release
't configured. But knowing where to inject
> the magic that activates that interface eludes me and whether to do it
> directly on the RGW container host (and how) or on my master host is
> totally unclear to me. It doesn't help that this is an item that has
> multiple values, not
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this release
errno 125 is ECANCELED, which is the code we use when we detect a
racing write. so it sounds like something else is modifying that user
at the same time. does it eventually succeed if you retry?
On Tue, Oct 24, 2023 at 9:21 AM mahnoosh shahidi
wrote:
>
> Hi all,
>
> I couldn't understand what doe
idi
wrote:
>
> Thanks Casey for your explanation,
>
> Yes it succeeded eventually. Sometimes after about 100 retries. It's odd that
> it stays in racing condition for that much time.
>
> Best Regards,
> Mahnoosh
>
> On Tue, Oct 24, 2023 at 5:17 PM Casey Bodley wrote:
>
if you have an administrative user (created with --admin), you should
be able to use its credentials with awscli to delete or overwrite this
bucket policy
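for example, something like this, assuming the admin user's access and
secret keys are configured in awscli and the names are adjusted:

  aws --endpoint-url http://rgw.example.com s3api delete-bucket-policy \
      --bucket lockedbucket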
On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham
wrote:
>
> I have a bucket which got injected with bucket policy which locks the
> bucket e
dillingham.com
> LinkedIn
>
>
> On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley wrote:
>>
>> if you have an administrative user (created with --admin), you should
>> be able to use its credentials with awscli to delete or overwrite this
>> bucket policy
>>
>
another option is to enable the rgw ops log, which includes the bucket
name for each request
the http access log line that's visible at log level 1 follows a known
apache format that users can scrape, so i've resisted adding extra
s3-specific stuff like bucket/object names there. there was some
re
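a quick sketch of enabling the ops log (double-check these option names
against the docs for your release before relying on them):

  ceph config set client.rgw rgw_enable_ops_log true
  ceph config set client.rgw rgw_ops_log_rados true
  # or send it to a unix socket instead via rgw_ops_log_socket_path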
quincy 17.2.7: released!
* major 'dashboard v3' changes causing issues?
https://github.com/ceph/ceph/pull/54250 did not merge for 17.2.7
* planning a retrospective to discuss what kind of changes should go
in minor releases when members of the dashboard team are present
reef 18.2.1:
* most PRs alr
On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63443#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> rados - Neha, Radek, Travis, Ernesto,
PM Wesley Dillingham
> wrote:
>>
>> Thank you, this has worked to remove the policy.
>>
>> Respectfully,
>>
>> *Wes Dillingham*
>> w...@wesdillingham.com
>> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>>
>>
>> On W
were in v16.2.12.
>>>> We upgraded the cluster to v17.2.7 two days ago and it seems obvious that
>>>> the IAM error logs started appearing the minute the rgw daemon was upgraded from
>>>> v16.2.12 to v17.2.7. Looks like there is some issue with parsing.
>>>
ithi/7450325/
> >>
> >> Seems to be related to nfs-ganesha. I've reached out to Frank Filz
> >> (#cephfs on ceph slack) to have a look. WIll update as soon as
> >> possible.
> >>
> >> > orch - Adam King
> >> > rbd - Ilya app
my understanding is that default placement is stored at the bucket
level, so changes to the user's default placement only take effect for
newly-created buckets
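to illustrate, you can compare the user's current default placement
with what an existing bucket recorded at creation time (names are
placeholders):

  radosgw-admin user info --uid=myuser | grep default_placement
  radosgw-admin bucket stats --bucket=mybucket | grep placement_rule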
On Sun, Nov 12, 2023 at 9:48 PM Huy Nguyen wrote:
>
> Hi community,
> I'm using Ceph version 16.2.13. I tried to set default_storage_clas
On Wed, Jan 31, 2024 at 3:43 AM garcetto wrote:
>
> good morning,
> i was struggling trying to understand why i cannot find this setting on
> my reef version, is it because is only on latest dev ceph version and not
> before?
that's right, this new feature will be part of the squid release. we
On Mon, Jan 29, 2024 at 4:39 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw - Casey
rgw approved, thanks
> fs - Venky
> rbd -
On Fri, Feb 2, 2024 at 11:21 AM Chris Palmer wrote:
>
> Hi Matthew
>
> AFAIK the upgrade from quincy/deb11 to reef/deb12 is not possible:
>
> * The packaging problem you can work around, and a fix is pending
> * You have to upgrade both the OS and Ceph in one step
> * The MGR will not run un
thanks, i've created https://tracker.ceph.com/issues/64360 to track
these backports to pacific/quincy/reef
On Thu, Feb 8, 2024 at 7:50 AM Stefan Kooman wrote:
>
> Hi,
>
> Is this PR: https://github.com/ceph/ceph/pull/54918 included as well?
>
> You definitely want to build the Ubuntu / debian pac
i've cc'ed Matt who's working on the s3 object integrity feature
https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html,
where rgw compares the generated checksum with the client's on ingest,
then stores it with the object so clients can read it back for later
integrit
run here, approved
>
> ceph-volume - Guillaume, fixed by
> https://github.com/ceph/ceph/pull/55658 retesting
>
> On Thu, Feb 8, 2024 at 8:43 AM Casey Bodley wrote:
> >
> > thanks, i've created https://tracker.ceph.com/issues/64360 to track
> > these backpo
Estimate on release timeline for 17.2.8?
- after pacific 16.2.15 and reef 18.2.2 hotfix
(https://tracker.ceph.com/issues/64339,
https://tracker.ceph.com/issues/64406)
Estimate on release timeline for 19.2.0?
- target April, depending on testing and RCs
- Testing plan for Squid beyond dev freeze (r
thanks Giada, i see that you created
https://tracker.ceph.com/issues/64547 for this
unfortunately, this topic metadata doesn't really have a permission
model at all. topics are shared across the entire tenant, and all
users have access to read/overwrite those topics
a lot of work was done for htt
hey Christian, i'm guessing this relates to
https://tracker.ceph.com/issues/63373 which tracks a deadlock in s3
DeleteObjects requests when multisite is enabled.
rgw_multi_obj_del_max_aio can be set to 1 as a workaround until the
reef backport lands
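the workaround would look something like this (assuming the client.rgw
config section; restart the radosgws if it doesn't apply on its own):

  ceph config set client.rgw rgw_multi_obj_del_max_aio 1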
On Wed, Mar 6, 2024 at 2:41 PM Christian Kugler
anything we can do to narrow down the policy issue here? any of the
Principal, Action, Resource, or Condition matches could be failing
here. you might try replacing each with a wildcard, one at a time,
until you see the policy take effect
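as a sketch of that debugging step, you could temporarily push a copy
of the policy with one of the matches loosened, e.g. a wildcard
Principal (bucket and endpoint are placeholders, and a wide-open policy
shouldn't be left in place longer than needed):

  aws --endpoint-url http://rgw.example.com s3api put-bucket-policy \
      --bucket mybucket --policy '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::mybucket/*"
        }]
      }'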
On Wed, Dec 13, 2023 at 5:04 AM Marc Singer wrote:
>
> Hi
Ubuntu 22.04 packages are now available for the 17.2.7 Quincy release.
The upcoming Squid release will not support Ubuntu 20.04 (Focal
Fossa). Ubuntu users planning to upgrade from Quincy to Squid will
first need to perform a distro upgrade to 22.04.
Getting Ceph
* Git at git://githu
On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote:
>
> Hi everybody,
>
> we upgraded our containerized Red Hat Pacific cluster to the latest
> Quincy release (Community Edition).
i'm afraid this is not an upgrade path that we try to test or support.
Red Hat makes its own decisions about what to
object names when trying to list those buckets. 404
NoSuchKey is the response i would expect in that case
On Wed, Apr 3, 2024 at 12:20 PM Casey Bodley wrote:
>
> On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote:
> >
> > Hi everybody,
> >
> > we upgraded our contain
On Wed, Apr 3, 2024 at 3:09 PM Lorenz Bausch wrote:
>
> Hi Casey,
>
> thank you so much for analysis! We tested the upgraded intensively, but
> the buckets in our test environment were probably too small to get
> dynamically resharded.
>
> > after upgrading to the Quincy release, rgw would
> > loo
unfortunately, this cloud sync module only exports data from ceph to a
remote s3 endpoint, not the other way around:
"This module syncs zone data to a remote cloud service. The sync is
unidirectional; data is not synced back from the remote zone."
i believe that rclone supports copying from one s
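as a sketch, with rclone remotes already set up for both sides via
'rclone config' (the remote and bucket names here are made up):

  rclone copy cloud-s3:source-bucket ceph-rgw:dest-bucket --progress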
On Fri, Apr 12, 2024 at 2:38 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/65393#note-1
> Release Notes - TBD
> LRC upgrade - TBD
>
> Seeking approvals/reviews for:
>
> smoke - infra issues, still trying, Laura PTL
>
> rados - Radek,
On Thu, May 23, 2024 at 11:50 AM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> Wonder what is the best practice to scale RGW, increase the thread numbers or
> spin up more gateways?
>
>
> *
> Let's say I have 21000 connections on my haproxy
> *
> I have 3 physical gateway servers so let's say each
# quincy now past estimated 2024-06-01 end-of-life
will 17.2.8 be the last point release? maybe not, depending on timing
# centos 8 eol
* Casey tried to summarize the fallout in
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thread/H7I4Q4RAIT6UZQNPPZ5O3YB6AUXLLAFI/
* c8 builds were disabled
are you running quincy? it looks like this '/admin/info' API was new
to that release
https://docs.ceph.com/en/quincy/radosgw/adminops/#info
On Fri, Jul 15, 2022 at 7:04 AM Marcus Müller wrote:
>
> Hi all,
>
> I’ve created a test user on our radosgw to work with the API. I’ve done the
> followin
are running Pacific, that was my issue here.
>
> Can someone share an example of a full API request and response with curl? I’m
> still having issues, now getting 401 or 403 answers (but providing Auth-User
> and Auth-Key).
>
> Regards
> Marcus
>
>
>
> On 15.07.2022
On Wed, Jul 20, 2022 at 12:57 AM Yuval Lifshitz wrote:
>
> yes, that would work. you would get a "404" until the object is fully
> uploaded.
just note that you won't always get 404 before multipart complete,
because multipart uploads can overwrite existing objects
On Sun, Jul 24, 2022 at 11:33 AM Yuri Weinstein wrote:
>
> Still seeking approvals for:
>
> rados - Travis, Ernesto, Adam
> rgw - Casey
rgw approved
> fs, kcephfs, multimds - Venky, Patrick
> ceph-ansible - Brad pls take a look
>
> Josh, upgrade/client-upgrade-nautilus-octopus failed, do we need
Barbican was the first key management server used for rgw's Server
Side Encryption feature. its integration is documented in
https://docs.ceph.com/en/quincy/radosgw/barbican/
we've since added SSE-KMS support for Vault and KMIP, and the SSE-S3
feature (coming soon to quincy) requires Vault
our B
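for reference, a minimal sketch of pointing rgw at vault for SSE-KMS
(the address and auth method are placeholders; see the encryption docs
for the full set of rgw_crypt_vault_* options):

  ceph config set client.rgw rgw_crypt_s3_kms_backend vault
  ceph config set client.rgw rgw_crypt_vault_addr http://vault.example.com:8200
  ceph config set client.rgw rgw_crypt_vault_auth token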
On Mon, Aug 22, 2022 at 12:37 PM Matt Dunavant
wrote:
>
> Hello,
>
>
> I'm trying to add a secondary realm to my ceph cluster but I'm getting the
> following error after running a 'radosgw-admin realm pull --rgw-realm=$REALM
> --url=http://URL:80 --access-key=$KEY --secret=$SECRET':
>
>
> reques
On Tue, Sep 13, 2022 at 4:03 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/57472#note-1
> Release Notes - https://github.com/ceph/ceph/pull/48072
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
rgw ap
hi Boris, it looks like your other questions have been covered but
i'll snipe this one:
On Fri, Sep 16, 2022 at 7:55 AM Boris Behrens wrote:
>
> How good is it at handling bad HTTP requests sent by an attacker?)
rgw relies on the boost.beast library to parse these http requests.
that library has ha
On Thu, Sep 29, 2022 at 12:40 PM Neha Ojha wrote:
>
>
>
> On Mon, Sep 19, 2022 at 9:38 AM Yuri Weinstein wrote:
>>
>> Update:
>>
>> Remaining =>
>> upgrade/octopus-x - Neha pls review/approve
>
>
> Both the failures in
> http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_16:33:35-upgrade:octop
hey Boris,
that looks a lot like https://tracker.ceph.com/issues/40018 where an
exception was thrown when trying to read a socket's remote_endpoint().
i didn't think that local_endpoint() could fail the same way, but i've
opened https://tracker.ceph.com/issues/57784 to track this and the fix
shoul
On Mon, Oct 17, 2022 at 6:12 AM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> I’m looking in ceph octopus in my existing cluster to have object compression.
> Any feedback/experience appreciated.
> Also I’m curious is it possible to set after cluster setup or need to setup
> at the beginning?
it's fi
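if it helps, compression is normally enabled per placement target,
something like this (the zone and placement names are the defaults and
may differ in your cluster):

  radosgw-admin zone placement modify --rgw-zone=default \
      --placement-id=default-placement --compression=zlib

as far as i know it only applies to objects written after it's enabled;
existing objects aren't rewritten.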
On Tue, Oct 18, 2022 at 4:01 AM Michal Strnad wrote:
>
> Hi.
>
> We have a ceph cluster with a lot of users who use S3 and RBD protocols.
> Now we need to give access to one user group with OpenStack, so they run
> RGW on their side, but we have to set "ceph caps" for this RGW. In the
> documentation
only one agenda item discussed today:
* 17.2.5 is almost ready; upgrade testing has been completed in
upstream gibba and LRC clusters!
lab issues blocking centos container builds and teuthology testing:
* https://tracker.ceph.com/issues/57914
* delays testing for 16.2.11
upcoming events:
* Ceph Developer Monthly (APAC) next week, please add topics:
https://tracker.ceph.com/projects/ceph/wiki/CDM_02-NOV-2022
* Ceph Virtual 2022 st
hi Thilo, you can find a 'request_timeout_ms' frontend option
documented in https://docs.ceph.com/en/quincy/radosgw/frontends/
On Wed, Nov 16, 2022 at 12:32 PM Thilo-Alexander Ginkel
wrote:
>
> Hi there,
>
> we are using Ceph Quincy's rgw S3 API to retrieve one file ("GET") over a
> longer time p
it doesn't look like cephadm supports extra frontend options during
deployment. but these are stored as part of the `rgw_frontends` config
option, so you can use a command like 'ceph config set' after
deployment to add request_timeout_ms
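for example, if the stored value is a plain beast config, something
like this (port and timeout are placeholders; restart the radosgws
afterwards):

  ceph config set client.rgw rgw_frontends "beast port=8000 request_timeout_ms=120000"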
On Thu, Nov 17, 2022 at 11:18 AM Thilo-Alexander Ginkel
wro
hi Jan,
On Wed, Nov 23, 2022 at 12:45 PM Jan Horstmann wrote:
>
> Hi list,
> I am completely lost trying to reshard a radosgw bucket which fails
> with the error:
>
> process_single_logshard: Error during resharding bucket
> 68ddc61c613a4e3096ca8c349ee37f56/snapshotnfs:(2) No such file or
> direc
thanks Yuri, rgw approved based on today's results from
https://pulpito.ceph.com/yuriw-2022-12-20_15:27:49-rgw-pacific_16.2.11_RC2-distro-default-smithi/
On Mon, Dec 19, 2022 at 12:08 PM Yuri Weinstein wrote:
> If you look at the pacific 16.2.8 QE validation history (
> https://tracker.ceph.com/
On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Ern
distro testing for reef
* https://github.com/ceph/ceph/pull/49443 adds centos9 and ubuntu22 to
supported distros
* centos9 blocked by teuthology bug https://tracker.ceph.com/issues/58491
- lsb_release command no longer exists, use /etc/os-release instead
- ceph stopped depending on lsb_release
hi Boris,
On Sat, Feb 11, 2023 at 7:07 AM Boris Behrens wrote:
>
> Hi,
> we use rgw as our backup storage, and it basically holds only compressed
> rbd snapshots.
> I would love to move these out of the replicated into a ec pool.
>
> I've read that I can set a default placement target for a user
On Mon, Feb 13, 2023 at 4:31 AM Boris Behrens wrote:
>
> Hi Casey,
>
>> changes to the user's default placement target/storage class don't
>> apply to existing buckets, only newly-created ones. a bucket's default
>> placement target/storage class can't be changed after creation
>
>
> so I can easi
On Mon, Feb 13, 2023 at 8:41 AM Boris Behrens wrote:
>
> I've tried it the other way around and let cat give out all escaped chars
> and then did the grep:
>
> # cat -A omapkeys_list | grep -aFn '/'
> 9844:/$
> 9845:/^@v913^@$
> 88010:M-^@1000_/^@$
> 128981:M-^@1001_/$
>
> Did anyone ever see somet
On Sun, Feb 26, 2023 at 8:20 AM Ilya Dryomov wrote:
>
> On Sun, Feb 26, 2023 at 2:15 PM Patrick Schlangen
> wrote:
> >
> > Hi Ilya,
> >
> > > Am 26.02.2023 um 14:05 schrieb Ilya Dryomov :
> > >
> > > Isn't OpenSSL 1.0 long out of support? I'm not sure if extending
> > > librados API to support
On Tue, Feb 28, 2023 at 8:19 AM Lars Dunemark wrote:
>
> Hi,
>
> I notice that CompleteMultipartUploadResult does return an empty ETag
> field when completing a multipart upload in v17.2.3.
>
> I haven't had the possibility to verify from which version this changed
> and can't find in the changel
On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The reruns were in the queue for 4 days because of some slowness issues.
> The core team (Neha, Radek, Laura, and others
hi Ernesto and lists,
> [1] https://github.com/ceph/ceph/pull/47501
are we planning to backport this to quincy so we can support centos 9
there? enabling that upgrade path on centos 9 was one of the
conditions for dropping centos 8 support in reef, which i'm still keen
to do
if not, can we find
On Wed, Mar 22, 2023 at 9:27 AM Casey Bodley wrote:
>
> On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/59070#note-1
> > Release Notes - TBD
> >
> >
On Fri, Mar 24, 2023 at 3:46 PM Yuri Weinstein wrote:
>
> Details of this release are updated here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The slowness we experienced seemed to be self-cured.
> Neha, Radek, and Laura please provide any findings if you have them.
t question, I don't know who's the maintainer of those
> > packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620)
> > requesting that specific package, but that's only one out of the dozen of
> > missing packages (plus transitive dependenc
there's a rgw_period_root_pool option for the period objects too. but
it shouldn't be necessary to override any of these
On Sun, Apr 9, 2023 at 11:26 PM wrote:
>
> Up :)
On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote:
>
>
> Hi,
> I see that this PR: https://github.com/ceph/ceph/pull/48030
> made it into ceph 17.2.6, as per the change log at:
> https://docs.ceph.com/en/latest/releases/quincy/ That's great.
> But my scenario is as follows:
> I have two
On Tue, Apr 11, 2023 at 3:53 PM Casey Bodley wrote:
>
> On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote:
> >
> >
> > Hi,
> > I see that this PR: https://github.com/ceph/ceph/pull/48030
> > made it into ceph 17.2.6, as per the change log at:
> >
On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND wrote:
>
> Hi everyone, quick question regarding radosgw zone data-pool.
>
> I’m currently planning to migrate an old data-pool that was created with
> inappropriate failure-domain to a newly created pool with appropriate
> failure-domain.
>
> If I’m do
On Wed, Apr 19, 2023 at 7:55 PM Christopher Durham wrote:
>
> Hi,
>
> I am using 17.2.6 on rocky linux for both the master and the slave site
> I noticed that:
> radosgw-admin sync status
> often shows that the metadata sync is behind a minute or two on the slave.
> This didn't make sense, as the
On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote:
>
> Hi Everyone,
> I've been having trouble finding an answer to this question. Basically
> I'm wanting to know if stuff in the .log pool is actively used for
> anything or if it's just logs that can be deleted.
> In particular I was wondering a
# ceph windows tests
PR check will be made required once regressions are fixed
windows build currently depends on gcc11 which limits use of c++20
features. investigating newer gcc or clang toolchain
# 16.2.13 release
final testing in progress
# prometheus metric regressions
https://tracker.ceph.c
of those
>> > packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620)
>> > requesting that specific package, but that's only one out of the dozen of
>> > missing packages (plus transitive dependencies)...
>> >
>> > Kind Rega
On Thu, Apr 27, 2023 at 11:36 AM Tarrago, Eli (RIS-BCT)
wrote:
>
> After working on this issue for a bit.
> The active plan is to fail over the master to the “west” DC. Perform a realm
> pull from the west so that it forces the failover to occur. Then have the
> “east” DC pull the realm data
On Thu, Apr 27, 2023 at 5:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59542#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Radek, Laura
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
>
On Sun, May 7, 2023 at 5:25 PM Yuri Weinstein wrote:
>
> All PRs were cherry-picked and the new RC1 build is:
>
> https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/
>
> Rados, fs and rgw were rerun and results are summarized here:
> https://tracker.ceph.c
speed less than 1024 Bytes per second during
> 300 seconds.
> 2023-05-09T15:46:21.069+ 7f20b12b8700 0 WARNING: curl operation timed
> out, network average transfer speed less than 1024 Bytes per second during
> 300 seconds.
> 2023-05-09T15:46:21.069+ 7f2085ff3700 0 rgw a
sync doesn't distinguish between multipart and regular object uploads.
once a multipart upload completes, sync will replicate it as a single
object using an s3 GetObject request
replicating the parts individually would have some benefits. for
example, when sync retries are necessary, we might only
i'm afraid that feature will be new in the reef release. multisite
resharding isn't supported on quincy
On Wed, May 17, 2023 at 11:56 AM Alexander Mamonov wrote:
>
> https://docs.ceph.com/en/latest/radosgw/multisite/#feature-resharding
> When I try this I get:
> root@ceph-m-02:~# radosgw-admin zo
On Wed, May 17, 2023 at 11:13 PM Ramin Najjarbashi
wrote:
>
> Hi
>
> I'm currently using Ceph version 16.2.7 and facing an issue with bucket
> creation in a multi-zone configuration. My setup includes two zone groups:
>
> ZG1 (Master) and ZG2, with one zone in each zone group (zone-1 in ZG1 and
>
That final one (logutils) should go to EPEL's stable repo in a week
> (faster with karma).
>
> - Ken
>
>
>
>
> On Wed, Apr 26, 2023 at 11:00 AM Casey Bodley wrote:
> >
> > are there any volunteers willing to help make these python packages
> > ava
rgw supports the 3 flavors of S3 Server-Side Encryption, along with
the PutBucketEncryption api for per-bucket default encryption. you can
find the docs in https://docs.ceph.com/en/quincy/radosgw/encryption/
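for example, a default encryption rule could be set on a bucket with
something like this (endpoint and bucket names are placeholders):

  aws --endpoint-url http://rgw.example.com s3api put-bucket-encryption \
      --bucket mybucket \
      --server-side-encryption-configuration \
      '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'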
On Mon, May 22, 2023 at 10:49 AM huxia...@horebdata.cn
wrote:
>
> Dear Alexander,
>
> Tha
Our downstream QE team recently observed an md5 mismatch of replicated
objects when testing rgw's server-side encryption in multisite. This
corruption is specific to s3 multipart uploads, and only affects the
replicated copy - the original object remains intact. The bug likely
affects Ceph releases
fference is
where they get the key
>
> [1]
> https://docs.ceph.com/en/quincy/radosgw/encryption/#automatic-encryption-for-testing-only
>
> > On 26 May 2023, at 22:45, Casey Bodley wrote:
> >
> > Our downstream QE team recently observed an md5 mismatch of replica
e/overwrite the original copy
>
> Best regards
> Tobias
>
> On 30 May 2023, at 14:48, Casey Bodley wrote:
>
> On Tue, May 30, 2023 at 8:22 AM Tobias Urdin
> <tobias.ur...@binero.com> wrote:
>
> Hello Casey,
>
> Thanks for the information