[ceph-users] Re: zap an osd and it appears again

2022-03-31 Thread Dhairya Parmar
Can you try using the --force option with your command?
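For reference, a minimal sketch of the sequence I have in mind (assuming a cephadm-managed cluster and OSD id 0; setting the OSD spec to unmanaged *before* zapping should stop the orchestrator from immediately recreating the OSD — treat this as a sketch, not the authoritative procedure):

```
# Tell the orchestrator to stop (re)deploying OSDs on free devices
ceph orch apply osd --all-available-devices --unmanaged=true

# Now remove and zap the OSD; --force skips the usual safety checks
ceph orch osd rm 0 --zap --force

# Check that nothing is still scheduled for removal/redeployment
ceph orch osd rm status
```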

On Thu, Mar 31, 2022 at 1:25 AM Alfredo Rezinovsky 
wrote:

> I want to create osds manually
>
> If I zap osd 0 with:
>
> ceph orch osd rm 0 --zap
>
> as soon as the device is available, the orchestrator creates it again
>
> If I use:
>
> ceph orch apply osd --all-available-devices --unmanaged=true
>
> and then zap osd.0, it also appears again.
>
> Is there a real way to disable the orch apply persistence, or to disable it
> temporarily?
>
> --
> Alfrenovsky
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Best way to keep a backup of a bucket

2022-03-31 Thread William Edwards

Szabo, Istvan (Agoda) wrote on 2022-03-31 08:44:

Hi,


Hi,



I have some critical data in a couple of buckets that I'd like to keep safe
somehow, but I don't see any kind of snapshot solution in Ceph for the
object gateway.


Some work seems to have been done in this area at one point: 
https://tracker.ceph.com/projects/ceph/wiki/Rgw_-_Snapshots



How do you back up RGW buckets or objects (if you do), and what is the best
way to keep some kind of cold copy of the data in case Ceph crashes?


Related: 
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/SHY7OY24E4YI3WSQT4RP7QICYWKUM3PF/


Personally, I've a daily cron that loops through my buckets, and `rclone 
sync`s them: https://rclone.org/commands/rclone_sync/
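Roughly like this, if it helps — a sketch only, assuming an rclone remote named `ceph` configured against the RGW S3 endpoint, a local backup directory, and bucket names without spaces (all names are placeholders):

```
#!/bin/sh
# Mirror every bucket on the remote into a per-bucket local directory.
# `rclone lsd` lists the buckets; the bucket name is the last field.
for bucket in $(rclone lsd ceph: | awk '{print $NF}'); do
    rclone sync "ceph:${bucket}" "/backup/rgw/${bucket}" --fast-list
done
```

Then run it from cron, e.g. `0 2 * * * /usr/local/bin/backup-buckets.sh` (path is just an example).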




Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


--
With kind regards,

William Edwards

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Best way to keep a backup of a bucket

2022-03-31 Thread Burkhard Linke

Hi,


On 3/31/22 08:44, Szabo, Istvan (Agoda) wrote:

Hi,

I have some critical data in a couple of buckets that I'd like to keep safe
somehow, but I don't see any kind of snapshot solution in Ceph for the object
gateway. How do you back up RGW buckets or objects (if you do), and what is
the best way to keep some kind of cold copy of the data in case Ceph crashes?



Bareos (https://www.bareos.com) has a plugin for performing S3-based backups
(using S3 buckets as the source, not as the target). Not sure how well the
plugin works; we haven't tried it yet. We are currently evaluating Bareos as
a replacement for filesystem-based backups, and it works well so far. Just
give it a try for S3 backups.



Regards,

Burkhard


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-03-31 Thread Ernesto Puerta
Hi Yuri,

As mentioned yesterday, we're working on getting the latest Grafana image
built, but we're dealing with this issue
(https://github.com/ceph/ceph/pull/45578).

Thanks!

Kind Regards,
Ernesto


On Wed, Mar 30, 2022 at 7:54 PM Yuri Weinstein  wrote:

> We merged rgw, cephadm and core PRs, but some work is still pending on fs
> and dashboard components.
>
> Seeking approvals for:
>
> smoke - Venky
> fs - Venky
> powercycle - Brag (SELinux denials)
> dashboard - Ernesto
> rook - Sebastian Han
>
> On Mon, Mar 28, 2022 at 2:47 PM Yuri Weinstein 
> wrote:
>
>> We are trying to release v17.2.0 as soon as possible.
>> And need to do a quick approval of tests and review failures.
>>
>> Still outstanding are two PRs:
>> https://github.com/ceph/ceph/pull/45673
>> https://github.com/ceph/ceph/pull/45604
>>
>> The build is failing and I need help to fix it ASAP.
>> (
>>
>> https://shaman.ceph.com/builds/ceph/wip-yuri11-testing-2022-03-28-0907-quincy/61b142c76c991abe3fe77390e384b025e1711757/
>> )
>>
>> Details of this release are summarized here:
>>
>> https://tracker.ceph.com/issues/55089
>> Release Notes - https://github.com/ceph/ceph/pull/45048
>>
>> Seeking approvals for:
>>
>> smoke - Neha, Josh (the failure appears reproducible)
>> rgw - Casey
>> fs - Venky, Gerg
>> rbd - Ilya, Deepika
>> krbd - Ilya, Deepika
>> upgrade/octopus-x - Casey
>> powercycle - Brag (SELinux denials)
>> ceph-volume - Guillaume, David G
>>
>> Please reply to this email with approval and/or trackers of known issues/PRs
>> to address them.
>>
>> Thx
>> YuriW
>>
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-03-31 Thread Casey Bodley
On Wed, Mar 30, 2022 at 9:19 AM Casey Bodley  wrote:
>
> On Mon, Mar 28, 2022 at 5:48 PM Yuri Weinstein  wrote:
> >
> > We are trying to release v17.2.0 as soon as possible.
> > And need to do a quick approval of tests and review failures.
> >
> > Still outstanding are two PRs:
> > https://github.com/ceph/ceph/pull/45673
> > https://github.com/ceph/ceph/pull/45604
> >
> > The build is failing and I need help to fix it ASAP.
> > (
> > https://shaman.ceph.com/builds/ceph/wip-yuri11-testing-2022-03-28-0907-quincy/61b142c76c991abe3fe77390e384b025e1711757/
> > )
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/55089
> > Release Notes - https://github.com/ceph/ceph/pull/45048
> >
> > Seeking approvals for:
> >
> > smoke - Neha, Josh (the failure appears reproducible)
> > rgw - Casey
>
> approved for rgw, based on the latest results in
> https://pulpito.ceph.com/yuriw-2022-03-29_21:32:48-rgw-wip-yuri11-testing-2022-03-28-0907-quincy-distro-default-smithi/
>
> this test includes the arrow submodule PR
> https://github.com/ceph/ceph/pull/45604 which is now ready for merge.
> however, github now requires 6 reviews to merge this for quincy.
> should i just tag a few more people for approval?
>
> > fs - Venky, Gerg
> > rbd - Ilya, Deepika
> > krbd - Ilya, Deepika
> > upgrade/octopus-x - Casey
>
> i see ragweed bootstrap failures from octopus, tracked by
> https://tracker.ceph.com/issues/53829. these are preventing the
> upgrade tests from completing

it looks like we fixed those ragweed failures in the rerun,
https://pulpito.ceph.com/yuriw-2022-03-30_18:13:29-upgrade:octopus-x-quincy-distro-default-smithi/

there are two failures on the rgw multisite tests. i'm happy to
approve based on this rerun

>
> > powercycle - Brag (SELinux denials)
> > ceph-volume - Guillaume, David G
> >
> > Please reply to this email with approval and/or trackers of known issues/PRs
> > to address them.
> >
> > Thx
> > YuriW
> > ___
> > Dev mailing list -- d...@ceph.io
> > To unsubscribe send an email to dev-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-03-31 Thread Venky Shankar
Hi Yuri,

On Wed, Mar 30, 2022 at 11:24 PM Yuri Weinstein  wrote:
>
> We merged rgw, cephadm and core PRs, but some work is still pending on fs and 
> dashboard components.
>
> Seeking approvals for:
>
> smoke - Venky
> fs - Venky

I approved the latest batch for cephfs PRs:
https://trello.com/c/Iq3WtUK5/1494-wip-yuri-testing-2022-03-29-0741-quincy

There is one pending (blocker) PR:
https://github.com/ceph/ceph/pull/45689 - I'll let you know when the
backport is available.

> powercycle - Brag (SELinux denials)
> dashboard - Ernesto
> rook - Sebastian Han
>
> On Mon, Mar 28, 2022 at 2:47 PM Yuri Weinstein  wrote:
>>
>> We are trying to release v17.2.0 as soon as possible.
>> And need to do a quick approval of tests and review failures.
>>
>> Still outstanding are two PRs:
>> https://github.com/ceph/ceph/pull/45673
>> https://github.com/ceph/ceph/pull/45604
>>
>> The build is failing and I need help to fix it ASAP.
>> (
>> https://shaman.ceph.com/builds/ceph/wip-yuri11-testing-2022-03-28-0907-quincy/61b142c76c991abe3fe77390e384b025e1711757/
>> )
>>
>> Details of this release are summarized here:
>>
>> https://tracker.ceph.com/issues/55089
>> Release Notes - https://github.com/ceph/ceph/pull/45048
>>
>> Seeking approvals for:
>>
>> smoke - Neha, Josh (the failure appears reproducible)
>> rgw - Casey
>> fs - Venky, Gerg
>> rbd - Ilya, Deepika
>> krbd - Ilya, Deepika
>> upgrade/octopus-x - Casey
>> powercycle - Brag (SELinux denials)
>> ceph-volume - Guillaume, David G
>>
>> Please reply to this email with approval and/or trackers of known issues/PRs to
>> address them.
>>
>> Thx
>> YuriW
>
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io



-- 
Cheers,
Venky

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Quincy: mClock config propagation does not work properly

2022-03-31 Thread Sridhar Seshasayee
Hi Luis,

I was able to reproduce this issue locally and this looks like a bug. I have
raised a tracker to help track the fix for this:
https://tracker.ceph.com/issues/55153

The issue here is that with the 'custom' profile enabled, a change to the
config parameters is written to the configuration db, as you have noted, but
the values do not take effect on the OSD(s).

I will look into this further and come back with a fix.
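In the meantime, if you want to check what the running daemon actually uses, or try pushing a value straight into it, something along these lines may help (a sketch assuming osd.0 and access to its admin socket on the OSD host; whether injectargs bypasses the propagation issue with the custom profile is not something I have verified):

```
# What the running OSD really uses (may differ from the config db entry)
ceph daemon osd.0 config get osd_mclock_scheduler_background_recovery_res

# Attempt to inject the value into the running daemon directly
ceph tell osd.0 injectargs '--osd_mclock_scheduler_background_recovery_res=100'
```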

Thank you for trying out mclock and for your feedback.

-Sridhar


On Wed, Mar 30, 2022 at 8:40 PM Sridhar Seshasayee 
wrote:

> Hi Luis,
>
> As Neha mentioned, I am trying out your steps and investigating this
> further.
> I will get back to you in the next day or two. Thanks for your patience.
>
> -Sridhar
>
> On Thu, Mar 17, 2022 at 11:51 PM Neha Ojha  wrote:
>
>> Hi Luis,
>>
>> Thanks for testing the Quincy rc and trying out the mClock settings!
>> Sridhar is looking into this issue and will provide his feedback as
>> soon as possible.
>>
>> Thanks,
>> Neha
>>
>> On Thu, Mar 3, 2022 at 5:05 AM Luis Domingues 
>> wrote:
>> >
>> > Hi all,
>> >
>> > As we are doing some tests on our lab cluster, running Quincy 17.1.0,
>> > we observed some strange behavior regarding the propagation of the mClock
>> > parameters to the OSDs. Basically, when the profile has been set to one of
>> > the built-in ones and we then change it to custom, changes to the different
>> > mClock parameters are not propagated.
>> >
>> > For more details, here is how we reproduce the issue on our lab:
>> >
>> > ** Step 1
>> >
>> > We start the OSDs, with this configuration set, using ceph config dump:
>> >
>> > ```
>> >
>> > osd advanced osd_mclock_profile custom
>> > osd advanced osd_mclock_scheduler_background_recovery_lim 512
>> > osd advanced osd_mclock_scheduler_background_recovery_res 128
>> > osd advanced osd_mclock_scheduler_background_recovery_wgt 3
>> > osd advanced osd_mclock_scheduler_client_lim 80
>> > osd advanced osd_mclock_scheduler_client_res 30
>> > osd advanced osd_mclock_scheduler_client_wgt 1
>> > osd advanced osd_op_queue mclock_scheduler *
>> > ```
>> >
>> > And we can observe that this is what the OSD is running, using ceph
>> daemon osd.X config show:
>> >
>> > ```
>> > "osd_mclock_profile": "custom",
>> > "osd_mclock_scheduler_anticipation_timeout": "0.00",
>> > "osd_mclock_scheduler_background_best_effort_lim": "99",
>> > "osd_mclock_scheduler_background_best_effort_res": "1",
>> > "osd_mclock_scheduler_background_best_effort_wgt": "1",
>> > "osd_mclock_scheduler_background_recovery_lim": "512",
>> > "osd_mclock_scheduler_background_recovery_res": "128",
>> > "osd_mclock_scheduler_background_recovery_wgt": "3",
>> > "osd_mclock_scheduler_client_lim": "80",
>> > "osd_mclock_scheduler_client_res": "30",
>> > "osd_mclock_scheduler_client_wgt": "1",
>> > "osd_mclock_skip_benchmark": "false",
>> > "osd_op_queue": "mclock_scheduler",
>> > ```
>> >
>> > At this point, if we change something, the change can be viewed on the
>> > OSD. Let's say we change the background recovery reservation to 100:
>> >
>> > `ceph config set osd osd_mclock_scheduler_background_recovery_res 100`
>> >
>> > The change has been set properly on the OSDs:
>> >
>> > ```
>> > "osd_mclock_profile": "custom",
>> > "osd_mclock_scheduler_anticipation_timeout": "0.00",
>> > "osd_mclock_scheduler_background_best_effort_lim": "99",
>> > "osd_mclock_scheduler_background_best_effort_res": "1",
>> > "osd_mclock_scheduler_background_best_effort_wgt": "1",
>> > "osd_mclock_scheduler_background_recovery_lim": "512",
>> > "osd_mclock_scheduler_background_recovery_res": "100",
>> > "osd_mclock_scheduler_background_recovery_wgt": "3",
>> > "osd_mclock_scheduler_client_lim": "80",
>> > "osd_mclock_scheduler_client_res": "30",
>> > "osd_mclock_scheduler_client_wgt": "1",
>> > "osd_mclock_skip_benchmark": "false",
>> > "osd_op_queue": "mclock_scheduler",
>> > ```
>> >
>> > ** Step 2
>> >
>> > We change the profile to high_recovery_ops, and remove the old
>> configuration
>> >
>> > ```
>> > ceph config set osd osd_mclock_profile high_recovery_ops
>> > ceph config rm osd osd_mclock_scheduler_background_recovery_lim
>> > ceph config rm osd osd_mclock_scheduler_background_recovery_res
>> > ceph config rm osd osd_mclock_scheduler_background_recovery_wgt
>> > ceph config rm osd osd_mclock_scheduler_client_lim
>> > ceph config rm osd osd_mclock_scheduler_client_res
>> > ceph config rm osd osd_mclock_scheduler_client_wgt
>> > ```
>> >
>> > The config contains this now:
>> >
>> > ```
>> > osd advanced osd_mclock_profile high_recovery_ops
>> > osd advanced osd_op_queue mclock_scheduler *
>> > ```
>> >
>> > And we can see that the configuration was propagated to the OSDs:
>> >
>> > ```
>> > "osd_mclock_profile": "high_recovery_ops",
>> > "osd_mclock_scheduler_anticipation_timeout": "0.00",
>> > "osd_mclock_scheduler_background_best_effort_lim": "99",
>> > "osd_mclock_scheduler_background_best_effort_res": "1",
>> >

[ceph-users] Re: Best way to keep a backup of a bucket

2022-03-31 Thread Arno Lehmann

Hi Istvan,


I have some critical data in a couple of buckets that I'd like to keep safe
somehow, but I don't see any kind of snapshot solution in Ceph for the object
gateway.


I think a snapshot is not a backup, but I know my views on this topic are not
necessarily marketing-compatible or modern :-)



How do you back up RGW buckets or objects (if you do), and what is the best
way to keep some kind of cold copy of the data in case Ceph crashes?


The solution I propose at this time is zone syncing between different 
sites, if bandwidth allows.
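Very roughly, adding a second zone on the backup site looks like the sketch below (placeholder names, endpoints and keys; it assumes a realm and master zone already exist on the primary — the RGW multisite documentation has the full procedure):

```
# On the secondary site: pull the realm and current period from the primary
radosgw-admin realm pull --url=http://primary-rgw:8080 \
    --access-key=SYNC_ACCESS_KEY --secret=SYNC_SECRET_KEY
radosgw-admin period pull --url=http://primary-rgw:8080 \
    --access-key=SYNC_ACCESS_KEY --secret=SYNC_SECRET_KEY

# Create the secondary zone in the existing zonegroup and commit the period
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=backup \
    --endpoints=http://backup-rgw:8080 \
    --access-key=SYNC_ACCESS_KEY --secret=SYNC_SECRET_KEY
radosgw-admin period update --commit
```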


Object versioning and locking on top of that will be quite valuable, too.
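From the S3 side, enabling those looks roughly like this (AWS CLI against the RGW endpoint; bucket names, endpoint and retention period are placeholders, and object lock has to be requested when the bucket is created):

```
# Versioning on an existing bucket
aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-versioning \
    --bucket critical-data --versioning-configuration Status=Enabled

# Object lock must be enabled at bucket creation time
aws --endpoint-url http://rgw.example.com:8080 s3api create-bucket \
    --bucket critical-data-locked --object-lock-enabled-for-bucket

# Default retention: keep new objects immutable for 30 days
aws --endpoint-url http://rgw.example.com:8080 s3api put-object-lock-configuration \
    --bucket critical-data-locked --object-lock-configuration \
    'ObjectLockEnabled=Enabled,Rule={DefaultRetention={Mode=COMPLIANCE,Days=30}}'
```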

For S3 objects, I was recently working on integrating with a backup solution
which would then provide some additional user friendliness plus some auditing
and reporting. This was for a specific project and is not (yet) ready for any
release, but may become a product, probably open source.


Me being totally clueless about Ceph, and the project being a paid one for 
a customer, I am reluctant to publish anything at this time, but if 
you're interested, we can definitely discuss :-)


Cheers,

Arno
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: March 2022 Ceph Tech Talk:

2022-03-31 Thread Neha Ojha
The recording of this talk is now available:
https://www.youtube.com/watch?v=wZHcg0oVzhY

Thanks,
Neha

On Thu, Mar 24, 2022 at 10:01 AM Neha Ojha  wrote:
>
> Starting now!
>
> On Fri, Mar 18, 2022 at 6:02 AM Mike Perez  wrote:
>>
>> Hi everyone
>>
>> On March 24 at 17:00 UTC, hear Kamoltat (Junior) Sirivadhna give a
>> Ceph Tech Talk on how Teuthology, Ceph's integration test framework,
>> works!
>>
>> https://ceph.io/en/community/tech-talks/
>>
>> Also, if you would like to present and share with the community what
>> you're doing with Ceph or development, please let me know as we are
>> looking for content. Thanks!
>>
>> --
>> Mike Perez
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Best way to keep a backup of a bucket

2022-03-31 Thread Janne Johansson
Den tors 31 mars 2022 kl 18:31 skrev Szabo, Istvan (Agoda)
:
> Yeah, rclone is a very good tool; it is maybe the fastest method to move
> objects across buckets.

Sorry for not bringing anything else to this discussion but:
"same here, rclone can be made to sync data very fast"

For few-or-single-bucket replication of S3 I think this is the easiest
and "best" solution.

> I have some critical data in couple of buckets I'd like to keep it
> somehow safe, but I don't see any kind of snapshot solution in ceph
> for objectgateway.
>>
> Personally, I've a daily cron that loops through my buckets, and `rclone
> sync`s them: https://rclone.org/commands/rclone_sync/



-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io