Hi Casey,
Thanks for the tip. It isn't the ideal solution, since my application is
rgw itself and I would rather avoid changing its code, but it could still help
minimize the change. I will give it a try.
Yixin
Sent from Yahoo Mail on Android
On Wed., Aug. 23, 2023 at 4:54 p.m., Casey Bodley wrote:
Thanks, I understand that. I was just explicitly asking about the
conversion of an existing directory (created without subvolume) with the
xattr, as mentioned in the thread [2]. Anyway, apparently it works as
Anh Phan stated in his response, moving an existing directory to the
subvolumegroup sub
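(For reference, a minimal sketch of what such a conversion can look like, assuming the filesystem is mounted at /mnt/cephfs; the paths below are purely illustrative, not the actual ones from this thread:

# move the existing directory under the target subvolumegroup
mv /mnt/cephfs/mydata /mnt/cephfs/volumes/mygroup/mydata
# mark it as a subvolume via the vxattr discussed in this thread
setfattr -n ceph.dir.subvolume -v 1 /mnt/cephfs/volumes/mygroup/mydata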
On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
> dashb
On Wed, Aug 23, 2023 at 4:41 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
> dashbo
Dashboard approved!
@Laura Flores https://tracker.ceph.com/issues/62559 could be a dashboard
issue. We'll be removing those tests from the orch suite, because we are
already checking them in the Jenkins pipeline. The current one in the
teuthology suite is a bit flaky and not reliable.
Rega
Please review and approve the release notes PR
https://github.com/ceph/ceph/pull/53107/
And approve the remaining test results.
We plan to publish this release early next week.
TIA
On Wed, Aug 23, 2023 at 7:40 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tr
Is there going to be another Pacific point release (16.2.14) in the
pipeline?
- Yes, 16.2.14 is going through QA right now. See
https://www.spinics.net/lists/ceph-users/msg78528.html for updates.
Need pacific backport for https://tracker.ceph.com/issues/59478
- Laura will check on this, although a Pacific backport is unlikely due
to incompatibilities from the scrub backend refactoring.
Hi folks,
we deployed a new Reef cluster in our lab.
All of the nodes are up and running, but we can't allocate a LUN to the target.
On the GUI we got a "disk create/update failed on ceph-iscsigw0. LUN
allocation failure" message.
We created the images on the GUI.
Do you have any idea?
Thanks
root@ceph-mgr0
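(In case it helps narrow this down, these are the kinds of checks often done on the gateway node; the service, pool and image names below are illustrative, assuming the ceph-iscsi stack with rbd-target-api and gwcli:

# check the gateway API logs on ceph-iscsigw0 for the underlying error
journalctl -u rbd-target-api -e
# try creating the disk directly with gwcli instead of the dashboard
gwcli /disks create pool=iscsi-pool image=lun0 size=50g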
FYI - the xattr is indeed required even if the dir is under a subvolumegroup dir;
there's some management involved in the way the subvolume dir is created.
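(A quick way to check whether that flag is actually set on a given directory; the path is illustrative:

getfattr -n ceph.dir.subvolume /mnt/cephfs/volumes/mygroup/mydir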
On Thu, Aug 24, 2023 at 5:10 PM Eugen Block wrote:
>
> Thanks, I understand that. I was just explicitly asking for the
> conversion of an exis
On 24 Aug 2023, at 18:51, Laura Flores wrote:
>
> Need pacific backport for https://tracker.ceph.com/issues/59478
>
> - Laura will check on this, although a Pacific backport is unlikely due
> to incompatibilities from the scrub backend refactoring.
Laura, this fix "for malformed fix" of ear
Hey Konstantin,
Please follow the tracker ticket (https://tracker.ceph.com/issues/59478) for
additional updates as we evaluate how to best aid Pacific clusters with
leaked clones due to this bug.
- Laura Flores
On Thu, Aug 24, 2023 at 11:56 AM Konstantin Shalygin wrote:
> On 24 Aug 2023, at 18
Hi,
I have a multi-zone configuration with 4 zones.
While adding a secondary zone, I am getting this error:
root@cs17ca101ja0702:/# radosgw-admin realm pull --rgw-realm=global
--url=http://10.45.128.139:8080 --default --access-key=sync_user
--secret=sync_secret
request failed: (13) Permission denied
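(For what it's worth, a (13) Permission denied on realm pull often means the credentials used are not those of a system user on the master zone. A hedged sketch of what is usually verified there; the uid and keys are illustrative placeholders:

# on the master zone: the sync user must be a system user
radosgw-admin user create --uid=sync_user --display-name="Sync User" --system
# or, if it already exists:
radosgw-admin user modify --uid=sync_user --system
# then retry the pull on the secondary with that user's real access/secret keys
radosgw-admin realm pull --rgw-realm=global --url=http://10.45.128.139:8080 \
  --access-key=<access_key> --secret=<secret_key> --default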
On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
> dashb
Hi,
Currently, I am trying to create a CNAME record pointing to an S3 website, for
example: s3.example.com => s3.example.com.s3-website.myceph.com. This way, my
subdomain s3.example.com will have HTTPS.
But then only HTTP works. If I go to https://s3.example.com, it shows the
metadata of index.html:
This
This issue doesn't occur when using the S3 website domain of AWS. It seems like
it only happens with Ceph.
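(One thing worth checking - an assumption on my side, not a confirmed diagnosis - is whether the gateway serving HTTPS actually treats s3.example.com as a website hostname; that is controlled by the zonegroup's hostnames_s3website list, roughly like this:

radosgw-admin zonegroup get > zonegroup.json
# add "s3.example.com" to the "hostnames_s3website" list in zonegroup.json, then:
radosgw-admin zonegroup set --infile zonegroup.json
radosgw-admin period update --commit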
On 11.08.23 16:06, Eugen Block wrote:
If you deploy OSDs from scratch, you don't have to create LVs manually;
that is handled entirely by ceph-volume (for example, on cephadm-based
clusters you only provide a drivegroup definition).
By looking at
https://docs.ceph.com/en/latest/man/8/ceph-volu
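(For illustration, a minimal drivegroup, i.e. OSD service spec, of the kind mentioned above might look like this on a cephadm cluster; the service_id and placement below are illustrative:

# osd_spec.yml
service_type: osd
service_id: all_available_devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true

# apply it with:
ceph orch apply -i osd_spec.yml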