[ceph-users] Re: Multisite sync: is metadata transferred in plain text?

2024-09-24 Thread Mary Zhang
Got it, thank you Janne.

Best Regards,
Mary

On Mon, Sep 23, 2024, 1:30 AM Janne Johansson wrote:
> > We have a multisite Ceph configuration, with http (not https) sync
> > endpoints. Is all sync traffic in plain text?
>
> For S3 v4 auth, there are things that "obfuscate" the login auth, but m…
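A quick way to check what actually crosses the wire, assuming the sync endpoints listen on plain HTTP port 80 (adjust the interface and port to your setup):

    # print packets in ASCII; headers, JSON bodies, and any keys will be readable
    tcpdump -A -i any 'tcp port 80'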

[ceph-users] Re: [EXTERNAL] Multisite sync: is metadata transferred in plain text?

2024-09-24 Thread Mary Zhang
Thank you Alex. I was hoping the session key might be used to encrypt the metadata.

Thanks,
Mary

On Mon, Sep 23, 2024, 1:24 AM Alex Hussein-Kershaw (HE/HIM) <alex...@microsoft.com> wrote:
> Feels like you answered your own question here - why not just use HTTPS
> for your multisite sync?
>
> I'm not a…
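For reference, a rough sketch of moving sync to HTTPS; the certificate path, port, and file names below are placeholders, and you should review your own zonegroup layout before committing a new period:

    # serve RGW over TLS with the beast frontend (cert path is a placeholder)
    ceph config set client.rgw rgw_frontends "beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.pem"

    # switch the zonegroup endpoints to https:// and publish the change
    radosgw-admin zonegroup get > zonegroup.json
    # edit zonegroup.json: change each "endpoints" entry from http:// to https://
    radosgw-admin zonegroup set --infile zonegroup.json
    radosgw-admin period update --commit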

[ceph-users] Multisite sync: is metadata transferred in plain text?

2024-09-20 Thread Mary Zhang
Hi,

We have a multisite Ceph configuration, with http (not https) sync endpoints. Is all sync traffic in plain text? We have concerns about metadata. For example, when syncing a newly created user and its access key and secret key from the master zone to a secondary zone, are the keys also in plain text…
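For context, this is the kind of metadata at stake: creating a user on the master zone generates keys that metadata sync then replicates to the secondary zones (the uid and display name below are just examples):

    # run on the master zone; the returned access_key/secret_key are part of
    # the user metadata that multisite sync ships to secondary zones
    radosgw-admin user create --uid=alice --display-name="Alice"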

[ceph-users] Re: Remove an OSD with hardware issue caused rgw 503

2024-04-30 Thread Mary Zhang
Sorry Frank, I typed the wrong name.

On Tue, Apr 30, 2024, 8:51 AM Mary Zhang wrote:
> Sounds good. Thank you Kevin and have a nice day!
>
> Best Regards,
> Mary
>
> On Tue, Apr 30, 2024, 8:21 AM Frank Schilder wrote:
>> I think you are panicking way too much. Chan…

[ceph-users] Re: Remove an OSD with hardware issue caused rgw 503

2024-04-30 Thread Mary Zhang
> …and just administrate your cluster with common storage admin sense.
>
> Best regards,
> =========
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> From: Mary Zhang
> Sent: Tuesday, April 30, 2024 5…

[ceph-users] Re: Remove an OSD with hardware issue caused rgw 503

2024-04-30 Thread Mary Zhang
> …to have a chance to recover data.
> Look at the manual of ddrescue for why it is important to stop IO from a
> failing disk as soon as possible.
>
> Best regards,
> =========
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
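For reference, a minimal ddrescue invocation (device names and mapfile are placeholders); the mapfile records progress so the copy can be resumed, which is why minimizing further IO on the failing disk matters:

    # image the failing disk onto a healthy one, using direct IO on the source
    ddrescue -d /dev/sdX /dev/sdY rescue.map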

[ceph-users] Re: Remove an OSD with hardware issue caused rgw 503

2024-04-27 Thread Mary Zhang
> …one or more hosts at a time, you don't need to worry about a single
> disk. Just take it out and remove it (forcefully) so it doesn't have any
> clients anymore. Ceph will immediately assign different primary OSDs and
> your clients will be happy again. ;-)
>
> Quoting Mary…

[ceph-users] Re: Remove an OSD with hardware issue caused rgw 503

2024-04-26 Thread Mary Zhang
> …find a way to have your cake and eat it too in relation to this
> "predicament" in this tracker issue: https://tracker.ceph.com/issues/44400
> but it was deemed "won't fix".
>
> Respectfully,
>
> *Wes Dillingham*
> LinkedIn <http://www.linkedin.com/in/wesleyd…

[ceph-users] Re: Remove an OSD with hardware issue caused rgw 503

2024-04-26 Thread Mary Zhang
> …PGs from this OSD, and in case of hardware failure it might lead to
> slow requests. It might make sense to forcefully remove the OSD without
> draining:
>
> - stop the osd daemon
> - mark it as out
> - osd purge [--force] [--yes-i-really-mean-it]
>
> Regards,
> Euge…
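Spelled out as commands, that sequence might look like the following; the OSD id (5) is a placeholder, and --force bypasses safety checks, so use it with care:

    systemctl stop ceph-osd@5    # stop the osd daemon
    ceph osd out 5               # mark it as out
    ceph osd purge 5 --force --yes-i-really-mean-it    # drop it from the cluster map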

[ceph-users] Remove an OSD with hardware issue caused rgw 503

2024-04-24 Thread Mary Zhang
…availability. Is our expectation reasonable? What's the best way to handle OSDs with hardware failures? Thank you in advance for any comments or suggestions.

Best Regards,
Mary Zhang