[ceph-users] Re: Rados object transformation

2023-08-24 Thread Yixin Jin
Hi Casey, 
Thanks for the tip. Although it isn't the ideal solution, since my application is 
rgw itself and I'm trying to avoid changing its code, it could still help minimize 
the change. I will give it a try. 
Yixin

  On Wed., Aug. 23, 2023 at 4:54 p.m., Casey Bodley wrote:  
 you could potentially create a cls_crypt object class that exposes
functions like crypt_read() and crypt_write() to do this. but your
application would have to use cls_crypt for all reads/writes instead
of the normal librados read/write operations. would that work for you?

On Wed, Aug 23, 2023 at 4:43 PM Yixin Jin  wrote:
>
> Hi folks,
> Is it possible through object classes to transform object content? For 
> example, I'd like this transformer to change the content of the object when 
> it is read and when it is written. In this way, I can potentially encrypt the 
> object content in storage without the need to make ceph/osd do 
> encryption/decryption. It could be taken care of by the object class itself.
> Thanks, Yixin

  
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CephFS: convert directory into subvolume

2023-08-24 Thread Eugen Block
Thanks, I understand that. I was just explicitly asking about the  
conversion of an existing directory (created without the subvolume  
interface) via the xattr, as mentioned in the thread [2]. Anyway, apparently  
it works as Anh Phan stated in his response: moving an existing directory  
into the subvolumegroup subdir makes it a subvolume. So there's no need for  
the xattr here.


Thanks,
Eugen

Quoting Milind Changire:


well, you should've used the ceph command to create the subvol
it's much simpler that way

$ ceph fs subvolume create mycephfs subvol2

The above command creates a new subvol (subvol2) in the default  
subvolume group.

So, in your case the actual path to the subvolume would be

/mnt/volumes/_nogroup/subvol2/


On Tue, Aug 22, 2023 at 4:50 PM Eugen Block  wrote:


Hi,

while writing a response to [1] I tried to convert an existing
directory within a single cephfs into a subvolume. According to [2]
that should be possible, I'm just wondering how to confirm that it
actually worked. Setting the xattr works fine, but the directory
just doesn't show up in the subvolume ls output. This is what I tried
(in Reef and Pacific):

# one "regular" subvolume already exists
$ ceph fs subvolume ls cephfs
[
 {
 "name": "subvol1"
 }
]

# mounted / and created new subdir
$ mkdir /mnt/volumes/subvol2
$ setfattr -n ceph.dir.subvolume -v 1 /mnt/volumes/subvol2

# still only one subvolume
$ ceph fs subvolume ls cephfs
[
 {
 "name": "subvol1"
 }
]

I also tried it directly underneath /mnt:

$ mkdir /mnt/subvol2
$ setfattr -n ceph.dir.subvolume -v 1 /mnt/subvol2

But still no subvolume2 available. What am I missing here?

Thanks
Eugen

[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/G4ZWGGUPPFQIOVB4SFAIK73H3NLU2WRF/
[2] https://www.spinics.net/lists/ceph-users/msg72341.html





--
Milind



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-24 Thread Casey Bodley
On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
>   rook - Sébastien Han
>   cephadm - Adam K
>   dashboard - Ernesto
>
> rgw - Casey

rgw approved

> rbd - Ilya
> krbd - Ilya
> fs - Venky, Patrick
>
> upgrade/pacific-p2p - Laura
> powercycle - Brad (SELinux denials)
>
>
> Thx
> YuriW
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-24 Thread Ilya Dryomov
On Wed, Aug 23, 2023 at 4:41 PM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
>   rook - Sébastien Han
>   cephadm - Adam K
>   dashboard - Ernesto
>
> rgw - Casey
> rbd - Ilya
> krbd - Ilya

Hi Yuri,

rbd and krbd approved (krbd based on

https://pulpito.ceph.com/yuriw-2023-08-22_23:51:56-krbd-pacific-release-testing-default-smithi/

which is not listed on the tracker page).

Thanks,

Ilya
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-24 Thread Nizamudeen A
Dashboard approved!

@Laura Flores  https://tracker.ceph.com/issues/62559,
this could be a dashboard issue. We'll be removing those tests from the
orch suite, because we are already checking them in the Jenkins pipeline.
The current one in the teuthology suite is a bit flaky and not reliable.

Regards,
Nizam
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-24 Thread Yuri Weinstein
Pls review and approve the release notes PR
https://github.com/ceph/ceph/pull/53107/

And approve the remaining test results.
We plan to publish this release early next week.

TIA

On Wed, Aug 23, 2023 at 7:40 AM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
>   rook - Sébastien Han
>   cephadm - Adam K
>   dashboard - Ernesto
>
> rgw - Casey
> rbd - Ilya
> krbd - Ilya
> fs - Venky, Patrick
>
> upgrade/pacific-p2p - Laura
> powercycle - Brad (SELinux denials)
>
>
> Thx
> YuriW
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] User + Dev Monthly Meeting Minutes 2023-08-24

2023-08-24 Thread Laura Flores
Is there going to be another Pacific point release (16.2.14) in the
pipeline?

   - Yes, 16.2.14 is going through QA right now. See
   https://www.spinics.net/lists/ceph-users/msg78528.html for updates.

Need pacific backport for https://tracker.ceph.com/issues/59478

   - Laura will check on this, although a Pacific backport is unlikely due
   to incompatibilities from the scrub backend refactoring.

There are inconsistencies with the `ceph config dump` normal vs. json
output. A fix has been proposed in https://tracker.ceph.com/issues/62379.
Question for users: Will this change break any existing automation?
See the tracker for more details, and reach out to @Sridhar Seshasayee
 with any questions.
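
As a concrete example of the sort of automation in question, a script that
parses the JSON output might do something like this (a rough sketch; assumes
the jq tool is available and that the JSON output is a list of config entries):

ceph config dump --format json | jq '.[0] | keys'   # field names such automation keys off of
ceph config dump --format json | jq length          # number of configured options
ceph config dump                                    # compare with the table-style output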

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage 

Chicago, IL

lflo...@ibm.com | lflo...@redhat.com 
M: +17087388804
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] lun allocation failure

2023-08-24 Thread Opánszki Gábor

Hi folks,

We deployed a new Reef cluster in our lab.

All of the nodes are up and running, but we can't allocate a LUN to a target.

On the GUI we get the "disk create/update failed on ceph-iscsigw0. LUN 
allocation failure" message.


We created the images via the GUI.

Do you have any idea?

Thanks

root@ceph-mgr0:~# ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
  cluster:
    id: ad0aede2-4100-11ee-bc14-1c40244f5c21
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum 
ceph-mgr0,ceph-mgr1,ceph-osd5,ceph-osd7,ceph-osd6 (age 28h)
    mgr: ceph-mgr0.sapbav(active, since 45h), standbys: 
ceph-mgr1.zwzyuc

    osd: 44 osds: 44 up (since 4h), 44 in (since 4h)
    tcmu-runner: 1 portal active (1 hosts)

  data:
    pools:   5 pools, 3074 pgs
    objects: 27 objects, 453 KiB
    usage:   30 GiB used, 101 TiB / 101 TiB avail
    pgs: 3074 active+clean

  io:
    client:   2.7 KiB/s rd, 2 op/s rd, 0 op/s wr

root@ceph-mgr0:~#

root@ceph-mgr0:~# rados lspools
.mgr
ace1
1T-r3-01
ace0
x
root@ceph-mgr0:~# rbd ls 1T-r3-01
111

bb
pool2
teszt
root@ceph-mgr0:~# rbd ls x
x-a
root@ceph-mgr0:~#

root@ceph-mgr0:~# rbd info 1T-r3-01/111
rbd image '111':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 5f927ce161de
    block_name_prefix: rbd_data.5f927ce161de
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Thu Aug 24 17:33:37 2023
    access_timestamp: Thu Aug 24 17:33:37 2023
    modify_timestamp: Thu Aug 24 17:33:37 2023
root@ceph-mgr0:~# rbd info 1T-r3-01/
rbd image '':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 5f926a0e299f
    block_name_prefix: rbd_data.5f926a0e299f
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Thu Aug 24 17:18:06 2023
    access_timestamp: Thu Aug 24 17:18:06 2023
    modify_timestamp: Thu Aug 24 17:18:06 2023
root@ceph-mgr0:~# rbd info x/x-a
rbd image 'x-a':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 5f922dbdf6c6
    block_name_prefix: rbd_data.5f922dbdf6c6
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Thu Aug 24 17:48:28 2023
    access_timestamp: Thu Aug 24 17:48:28 2023
    modify_timestamp: Thu Aug 24 17:48:28 2023
root@ceph-mgr0:~#

root@ceph-mgr0:~# ceph orch ls --service_type iscsi
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
NAME    PORTS   RUNNING  REFRESHED  AGE PLACEMENT
iscsi.gw-1  ?:5000  2/2  4m ago 6m ceph-iscsigw0;ceph-iscsigw1
root@ceph-mgr0:~#



GW:


root@ceph-iscsigw0:~# docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED             STATUS              PORTS     NAMES
d677a8abd2d8   quay.io/ceph/ceph                         "/usr/bin/rbd-target…"   6 seconds ago       Up 5 seconds                  ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-iscsi-gw-1-ceph-iscsigw0-fmuyhi
ead503586cdd   quay.io/ceph/ceph                         "/usr/bin/tcmu-runner"   6 seconds ago       Up 5 seconds                  ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-iscsi-gw-1-ceph-iscsigw0-fmuyhi-tcmu
3ae0014bcc41   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About an hour ago   Up About an hour              ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-crash-ceph-iscsigw0
1a7bc044ed8a   quay.io/ceph/ceph                         "/usr/bin/ceph-expor…"   About an hour ago   Up About an hour              ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-ceph-exporter-ceph-iscsigw0
c746a4da2bbb   quay.io/prometheus/node-exporter:v1.5.0   "/bin/node_exporter …"   About an hour ago   Up About an hour              ceph-ad0aede2-4100-11ee-bc14-1c40244f5c21-node-exporter-ceph-iscsigw0

root@ceph-iscsigw0:~# docker exec -it d677a8abd2d8 /bin/bash
[root@ceph-iscsigw0 /]# gwcli ls
o- / ........................................................................ [...]
  o- cluster ........................................................ [Clusters: 1]
  | o- ceph .......................................................... [HEALTH_OK]
  |   o- pools ......................................................... [Pools: 5]
  |   | o- .mgr ................. [(x3), Commit: 0.00Y/33602764M (0%), Used: 3184K]
  |   | o- 1T-r3-01 .............. [(x3), Commit: 0.00Y/5793684M (0%), Used: 108K]
  |   | o- ace0 .................. [(2+1), Commit: 0.00Y/11587368M (0%), Used: 24K]
  |   | o- ace1 .................. [(2+1), Commit: 0.00Y/55665220M (0%), Used: 12K]
  |   | o- x

[ceph-users] Re: CephFS: convert directory into subvolume

2023-08-24 Thread Milind Changire
FYI - the xattr is indeed required even if the dir is under a subvolumegroup dir.

There's some management involved in the way the subvolume dir is created.
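
As a minimal sketch of the conversion (paths are examples and assume the root
of the volume is mounted at /mnt and the default "_nogroup" subvolume group;
adjust to your layout):

# move the existing directory under the subvolume group ...
$ mv /mnt/mydir /mnt/volumes/_nogroup/mydir
# ... and mark it as a subvolume via the xattr
$ setfattr -n ceph.dir.subvolume -v 1 /mnt/volumes/_nogroup/mydir

# verify: the xattr is set and the converted directory shows up
$ getfattr -n ceph.dir.subvolume /mnt/volumes/_nogroup/mydir
$ ceph fs subvolume ls cephfs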

On Thu, Aug 24, 2023 at 5:10 PM Eugen Block  wrote:
>
> Thanks, I understand that. I was just explicitly asking about the
> conversion of an existing directory (created without the subvolume
> interface) via the xattr, as mentioned in the thread [2]. Anyway, apparently
> it works as Anh Phan stated in his response: moving an existing directory
> into the subvolumegroup subdir makes it a subvolume. So there's no need for
> the xattr here.
>
> Thanks,
> Eugen
>
> Quoting Milind Changire:
>
> > well, you should've used the ceph command to create the subvol
> > it's much simpler that way
> >
> > $ ceph fs subvolume create mycephfs subvol2
> >
> > The above command creates a new subvol (subvol2) in the default
> > subvolume group.
> > So, in your case the actual path to the subvolume would be
> >
> > /mnt/volumes/_nogroup/subvol2/
> >
> >
> > On Tue, Aug 22, 2023 at 4:50 PM Eugen Block  wrote:
> >>
> >> Hi,
> >>
> >> while writing a response to [1] I tried to convert an existing
> >> directory within a single cephfs into a subvolume. According to [2]
> >> that should be possible, I'm just wondering how to confirm that it
> >> actually worked. Setting the xattr works fine, but the directory
> >> just doesn't show up in the subvolume ls output. This is what I tried
> >> (in Reef and Pacific):
> >>
> >> # one "regular" subvolume already exists
> >> $ ceph fs subvolume ls cephfs
> >> [
> >>  {
> >>  "name": "subvol1"
> >>  }
> >> ]
> >>
> >> # mounted / and created new subdir
> >> $ mkdir /mnt/volumes/subvol2
> >> $ setfattr -n ceph.dir.subvolume -v 1 /mnt/volumes/subvol2
> >>
> >> # still only one subvolume
> >> $ ceph fs subvolume ls cephfs
> >> [
> >>  {
> >>  "name": "subvol1"
> >>  }
> >> ]
> >>
> >> I also tried it directly underneath /mnt:
> >>
> >> $ mkdir /mnt/subvol2
> >> $ setfattr -n ceph.dir.subvolume -v 1 /mnt/subvol2
> >>
> >> But still no subvolume2 available. What am I missing here?
> >>
> >> Thanks
> >> Eugen
> >>
> >> [1]
> >> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/G4ZWGGUPPFQIOVB4SFAIK73H3NLU2WRF/
> >> [2] https://www.spinics.net/lists/ceph-users/msg72341.html
> >>
> >
> >
> > --
> > Milind
>
>
>


-- 
Milind
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: User + Dev Monthly Meeting Minutes 2023-08-24

2023-08-24 Thread Konstantin Shalygin
On 24 Aug 2023, at 18:51, Laura Flores  wrote:
> 
> Need pacific backport for https://tracker.ceph.com/issues/59478
> 
>   - Laura will check on this, although a Pacific backport is unlikely due
>   to incompatibilities from the scrub backend refactoring.

Laura, is this a fix for the malformed fix in an earlier Pacific release, or does 
the lack of this fix still prevent deleting snapshots created on a previous 
release (example: a snapshot created on Luminous, then upgraded Nautilus->Pacific)?


Thanks,
k
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: User + Dev Monthly Meeting Minutes 2023-08-24

2023-08-24 Thread Laura Flores
Hey Konstantin,

Please follow the tracker ticket (https://tracker.ceph.com/issues/59478) for
additional updates as we evaluate how to best aid Pacific clusters with
leaked clones due to this bug.

- Laura Flores

On Thu, Aug 24, 2023 at 11:56 AM Konstantin Shalygin  wrote:

> On 24 Aug 2023, at 18:51, Laura Flores  wrote:
> >
> > Need pacific backport for https://tracker.ceph.com/issues/59478
> >
> >   - Laura will check on this, although a Pacific backport is unlikely due
> >   to incompatibilities from the scrub backend refactoring.
>
> Laura, is this a fix for the malformed fix in an earlier Pacific release, or does
> the lack of this fix still prevent deleting snapshots created on a previous
> release (example: a snapshot created on Luminous, then upgraded Nautilus->Pacific)?
>
>
> Thanks,
> k
>
>

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage 

Chicago, IL

lflo...@ibm.com | lflo...@redhat.com 
M: +17087388804
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] radosgw multisite multi-zone configuration: current period realm name not same as in zonegroup

2023-08-24 Thread Adiga, Anantha
Hi,

I have a multi zone configuration with 4 zones.

While adding a secondary zone, getting this error:

root@cs17ca101ja0702:/# radosgw-admin realm pull --rgw-realm=global 
--url=http://10.45.128.139:8080 --default --access-key=sync_user 
--secret=sync_secret
request failed: (13) Permission denied
If the realm has been changed on the master zone, the master zone's gateway may 
need to be restarted to recognize this user.
root@cs17ca101ja0702:/#

The realm name is "global". Is the cause of the error that the primary 
cluster's current period lists the realm name as "default" instead of 
"global"? However, the realm id is that of realm "global", and the zonegroup 
does not list a realm name but does have the correct realm id. See below.

How can I fix this issue?
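
The checks that seem relevant here, as a rough sketch (the uid below is a
placeholder for whichever system user owns the sync credentials, and the rgw
service name is deployment-specific):

# on the master zone: confirm the system user behind the sync credentials exists
# and that its access/secret keys match what is passed to --access-key/--secret
radosgw-admin user info --uid=<sync-user-uid>

# commit any pending realm/zonegroup changes on the master zone
radosgw-admin period update --commit

# restart the master zone gateways so they pick up the user and the new period
ceph orch restart rgw.<service_name>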

root@fl31ca104ja0201:/# radosgw-admin realm get
{
"id": "3da7b5ea-c44b-4d44-aced-fae2aabce97b",
"name": "global",
"current_period": "b8bc1187-2a2d-4d9e-b7be-c4f4667e3fa6",
"epoch": 2
}
root@fl31ca104ja0201:/# radosgw-admin realm get --rgw-realm=global
{
"id": "3da7b5ea-c44b-4d44-aced-fae2aabce97b",
"name": "global",
"current_period": "b8bc1187-2a2d-4d9e-b7be-c4f4667e3fa6",
"epoch": 2
}

root@fl31ca104ja0201:/# radosgw-admin zonegroup list
{
"default_info": "ec8b68db-1900-464f-a21a-2f6e8c107e94",
"zonegroups": [
"alldczg"
]
}

root@fl31ca104ja0201:/# radosgw-admin zonegroup get --rgw-zonegroup=alldczg
{
"id": "ec8b68db-1900-464f-a21a-2f6e8c107e94",
"name": "alldczg",
"api_name": "alldczg",
"is_master": "true",
"endpoints": [
"http://10.45.128.139:8080",
"http://172.18.55.71:8080",
"http://10.239.155.23:8080"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "ae267592-7cd8-4d67-8792-adc57d104cd6",
"zones": [
{
"id": "0962f0b4-beb6-4d07-a64d-07046b81529e",
"name": "CRsite",
"endpoints": [
"http://172.18.55.71:8080"
],
"log_meta": "false",
"log_data": "true",
"bucket_index_max_shards": 11,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": [],
"redirect_zone": ""
},
{
"id": "9129d118-55ac-4859-b339-b8afe0793a80",
"name": "BArga",
"endpoints": [
"http://10.208.11.26:8080"
],
"log_meta": "false",
"log_data": "true",
"bucket_index_max_shards": 11,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": [],
"redirect_zone": ""
},
{
"id": "ae267592-7cd8-4d67-8792-adc57d104cd6",
"name": "ORflex2",
"endpoints": [
"http://10.45.128.139:8080"
],
"log_meta": "false",
"log_data": "true",
"bucket_index_max_shards": 11,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": [],
"redirect_zone": ""
},
{
"id": "f5edeb4b-2a37-413b-8587-0ff40d7647ea",
"name": "SHGrasp",
"endpoints": [
"http://10.239.155.23:8080"
],
"log_meta": "false",
"log_data": "true",
"bucket_index_max_shards": 11,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": [],
"redirect_zone": ""
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": [],
"storage_classes": [
"STANDARD"
]
}
],
"default_placement": "default-placement",
"realm_id": "3da7b5ea-c44b-4d44-aced-fae2aabce97b",
"sync_policy": {
"groups": []
}
}

root@fl31ca104ja0201:/# radosgw-admin period get-current
{
"current_period": "b8bc1187-2a2d-4d9e-b7be-c4f4667e3fa6"
}
root@fl31ca104ja0201:/# radosgw-admin period get
{
"id": "b8bc1187-2a2d-4d9e-b7be-c4f4667e3fa6",
"epoch": 42,
"predecessor_uuid": "2df86f9a-d267-4b52-a13b-def8e5e612a2",
"sync_status": [],
"period_map": {
"id": "b8bc1187-2a2d-4d9e-b7be-c4f4667e3fa6",
"zonegroups": [
{
"id": "ec8b68db-1900-464f-a21a-2f6e8c107e94",
"name": "alldczg",
"api_name": "alldczg",
"is_master": "true",
"endpoints": [
"http://10.45.128.139:8080",
"http://172.18.55.71:8080",
"http://10.239.155.23:8080"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "ae267592-7cd8-4d67-8792-adc57d104cd6",
"zones": [

[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-24 Thread Patrick Donnelly
On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
>   rook - Sébastien Han
>   cephadm - Adam K
>   dashboard - Ernesto
>
> rgw - Casey
> rbd - Ilya
> krbd - Ilya
> fs - Venky, Patrick

approved

https://tracker.ceph.com/projects/cephfs/wiki/Pacific#2023-August-22


-- 
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Problem when configuring S3 website domain go through Cloudflare DNS proxy

2023-08-24 Thread Huy Nguyen
Hi,
Currently, I'm trying to create a CNAME record pointing to an S3 website, for 
example: s3.example.com => s3.example.com.s3-website.myceph.com. This way, my 
subdomain s3.example.com will have HTTPS.

But then only HTTP works. If I go to https://s3.example.com, it shows the 
metadata of index.html:

This XML file does not appear to have any style information associated with it. 
The document tree is shown below.
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>s3.example.com</Name>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>index.html</Key>
    <LastModified>2023-08-24T10:03:14.046Z</LastModified>
    <ETag>"8e26caf000875221bf89d95f7f244927"</ETag>
    <Size>295</Size>
    <StorageClass>STANDARD</StorageClass>
    <Owner>
      <ID>d92ac19d934a4e9b90e7707372c64996</ID>
      <DisplayName>f...@example.com</DisplayName>
    </Owner>
    <Type>Normal</Type>
  </Contents>
</ListBucketResult>

Here is my rgw configuration:

rgw_resolve_cname = true
rgw_enable_static_website = true
rgw_dns_s3website_name = ss-website.example.com
rgw_trust_forwarded_https = true

So how do I make HTTPS show the content of index.html (not its metadata)?
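
One way to see which RGW handler answers a given request, bypassing the proxy
(a rough sketch; RGW_HOST and the Host values are placeholders built from
rgw_dns_s3website_name / rgw_dns_name):

# Host header matching rgw_dns_s3website_name -> static-website handler,
# which serves the rendered index.html
curl -v -H 'Host: <bucket>.<rgw_dns_s3website_name>' http://RGW_HOST:8080/

# Host header matching rgw_dns_name -> plain S3 API handler,
# which returns the ListBucketResult XML shown above
curl -v -H 'Host: <bucket>.<rgw_dns_name>' http://RGW_HOST:8080/

If the proxied HTTPS request reaches RGW with a Host that matches rgw_dns_name
rather than rgw_dns_s3website_name, RGW would answer with the bucket listing,
which matches the symptom above.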

Thanks in advance.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Problem when configuring S3 website domain go through Cloudflare DNS proxy

2023-08-24 Thread Huy Nguyen
This issue doesn't occur when using an S3 website domain on AWS. It seems to 
only happen with Ceph.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Can ceph-volume manage the LVs optionally used for DB / WAL at all?

2023-08-24 Thread Christian Rohmann

On 11.08.23 16:06, Eugen Block wrote:
if you deploy OSDs from scratch you don't have to create LVs manually, 
that is handled entirely by ceph-volume (for example on cephadm based 
clusters you only provide a drivegroup definition). 


Looking at 
https://docs.ceph.com/en/latest/man/8/ceph-volume/#cmdoption-ceph-volume-lvm-prepare-block.db 
it seems that ceph-volume wants an LV or partition, so it's apparently 
not just taking a VG by itself? Also, if there were multiple VGs / devices, 
I would likely need to at least pick those.
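
For example, something like this (a rough sketch with made-up device and VG/LV 
names -- /dev/sdb as the slow data device, /dev/nvme0n1 as the fast device, and 
an arbitrary DB size):

# carve an LV for the DB out of a VG on the fast device by hand
vgcreate ceph-db-nvme0 /dev/nvme0n1
lvcreate -n db-for-sdb -L 60G ceph-db-nvme0

# then hand the data device plus that LV to ceph-volume, one OSD at a time
ceph-volume lvm create --data /dev/sdb --block.db ceph-db-nvme0/db-for-sdb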


But I suppose this orchestration would then require cephadm 
(https://docs.ceph.com/en/latest/cephadm/services/osd/#drivegroups) and 
cannot be done via ceph-volume, which merely takes care of ONE OSD at a time.



I'm not sure if automating db/wal migration has been considered, it 
might be (too) difficult. But moving the db/wal devices to 
new/different devices doesn't seem to be a recurring issue (corner 
case?), so maybe having control over that process for each OSD 
individually is the safe(r) option in case something goes wrong. 


Sorry for the confusion. I was not talking about any migrations, just 
the initial creation of spinning rust OSDs with DB or WAL on fast storage.



Regards


Christian
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io