If you have multiple RGWs in an HA setup or similar, go one by one.
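For example, with a cephadm-managed cluster a rolling restart could look roughly
like this (just a sketch; the daemon names are placeholders, adjust to your
deployment):

ceph orch ps --daemon_type rgw                      # list the rgw daemons
ceph orch daemon restart rgw.default.host1.aaaaaa   # restart the first one
ceph orch ps --daemon_type rgw                      # wait until it shows "running" again
ceph orch daemon restart rgw.default.host2.bbbbbb   # then the next one

With a non-cephadm install the same idea applies with
systemctl restart ceph-radosgw@rgw.<name>, one host at a time.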
Rok
On Fri, 4 Jul 2025, 13:32 Boris, wrote:
> Hi,
> is there a way to reload the certificate in rgw without downtime? Or, if I
> have multiple rgw daemons, should I do it one by one and wait for the last
> one to be active again?
>
>
>
We are also having the same problem.
On Wed, May 14, 2025 at 10:57 PM Steve Anthony wrote:
> We also started seeing this issue on AlmaLinux 9.5 (presumably Rocky
> Linux and other RHEL derivatives would be impacted too). OpenSSL 3.5.0-1
> as mentioned in the thread seems to be coming from the Ce
What about something like this in the rgw section of ceph.conf?
rgw_enable_ops_log = true
rgw_log_http_headers = http_x_forwarded_for, http_expect, http_content_md5
rgw_ops_log_file_path = /var/log/ceph/mon1.rgw-ops.log
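If your daemons read from the config database, the same settings can also be
applied at runtime (a sketch; a restart of the rgw daemons is likely still
needed before the ops log file gets opened):

ceph config set client.rgw rgw_enable_ops_log true
ceph config set client.rgw rgw_log_http_headers "http_x_forwarded_for, http_expect, http_content_md5"
ceph config set client.rgw rgw_ops_log_file_path /var/log/ceph/mon1.rgw-ops.log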
Rok
On Wed, Feb 12, 2025 at 2:19 PM Paul JURCO wrote:
> Same here, it worked
> I'm aware, this is not really officially supported by AWS S3, but let's
> dream on ;) )
>
> and, again, I'm talking about "static websites" only here... not the usual
> full-fledged S3 RGW endpoint, where you i.e. need to provide your
> secrets b'cau
What would you like to do?
Serve your bucket objects as static files on web?
On Tue, Feb 11, 2025 at 1:31 PM Anthony Fecarotta
wrote:
> Interesting! Will mess with that today.
>
> Regards,
>
>
> * Anthony Fecarotta*
> Founder & President
E.g. delete any objects or pools or anything.
>
> The only way I can think that this is workable would be to restrict
> Ceph to an isolated network and re-export CephFS using NFS Ganesha or
> Samba.
>
> Cheers, Dan
>
> On Tue, Jan 7, 2025 at 8:03 AM Rok Jaklič wrote:
Hi,
is it possible to somehow restrict a client in cephfs to a subdirectory without
cephx enabled?
We do not have any auth requirements enabled in ceph.
auth cluster required = none
auth service required = none
auth client required = none
Kind regards,
Rok
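For reference, the usual way to restrict a client to a subdirectory relies on
cephx caps, e.g. (a sketch, assuming a filesystem named cephfs and a client
called client.restricted):

ceph fs authorize cephfs client.restricted /subdir rw
mount -t ceph mon1:6789:/subdir /mnt/subdir -o name=restricted,secret=<key>

With auth set to none there is nothing that enforces such a restriction, which
is why the NFS Ganesha / Samba re-export mentioned above is probably the only
workable option.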
had to text-edit
> everything by hand :nailbiting:. One can readily diff the before and after
> decompiled text CRUSHmaps to ensure sanity before recompiling and injecting.
>
> I’ve done this myself multiple times since device classes became a thing.
>
>
>
> On Dec 23, 2024, at 5:05 P
k wrote:
> >
> > Don't try to delete a root, that will definitely break something.
> Instead, check the crush rules which don't use a device class and use the
> reclassify of the crushtool to modify the rules. This will trigger only a
> bit of data movement, but not as mu
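The reclassify workflow mentioned above looks roughly like this (a sketch,
assuming a single root named default whose legacy rules should map to the hdd
device class):

ceph osd getcrushmap -o original.map
crushtool -d original.map -o original.txt           # keep a text copy to diff later
crushtool -i original.map --reclassify --reclassify-root default hdd -o adjusted.map
crushtool -i original.map --compare adjusted.map    # sanity check before injecting
ceph osd setcrushmap -i adjusted.map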
>> backfills completes.
>>
>> If you do, be sure to disable the autoscaler for that pool.
>>
>> > Right now pg_num 512 pgp_num 512 is used and I am considering to change
>> it
>> > to 1024. Do you think that would be too aggressive maybe?
>>
ge
> it
> > to 1024. Do you think that would be too aggressive maybe?
>
> Depends on how many OSDs you have and what the rest of the pools are
> like. Send us
>
> `ceph osd dump | grep pool`
>
> These days, assuming that your OSDs are BlueStore, chances are th
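If you do go from 512 to 1024, the usual steps look like this (a sketch, using
default.rgw.buckets.data as the pool name):

ceph osd pool set default.rgw.buckets.data pg_autoscale_mode off   # keep the autoscaler out of the way
ceph osd pool set default.rgw.buckets.data pg_num 1024
ceph osd pool get default.rgw.buckets.data pg_num                  # pgp_num catches up gradually on Nautilus and later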
}
]
},
Is another option just to change the pool's crush_rule directly, e.g.:
ceph osd pool set .mgr crush_rule replicated_ssd ?
Rok
On Mon, Dec 23, 2024 at 3:12 PM Eugen Block wrote:
> Don't try to delete a root, that will definitely break something.
> Instead, check the
I got a similar problem after changing pool class to use only hdd following
https://www.spinics.net/lists/ceph-users/msg84987.html. Data migrated
successfully.
I get warnings like:
2024-12-23T14:39:37.103+0100 7f949edad640 0 [pg_autoscaler WARNING root]
pool default.rgw.buckets.index won't scale
pping root -1...
skipping scaling
ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.934+0100 7f949edad640 0
[pg_autoscaler WARNING root] pool 11 contains an overlapping root -1...
skipping scaling
Rok
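To see which rules still take a class-less root (that is what triggers the
overlapping-roots warning), something like this helps (a sketch):

for r in $(ceph osd crush rule ls); do
    echo "== $r"
    ceph osd crush rule dump "$r" | grep -E '"op"|"item_name"'
done
ceph osd pool ls detail | grep crush_rule    # which pool uses which rule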
On Mon, Dec 23, 2024 at 6:45 AM Rok Jaklič wrote:
> autoscale_mode for pg is on for a pa
m 512 is used and I am considering to change it
to 1024. Do you think that would be too aggressive maybe?
Rok
On Sun, Dec 22, 2024 at 8:46 PM Alwin Antreich
wrote:
> Hi Rok,
>
> On Sun, 22 Dec 2024 at 20:19, Rok Jaklič wrote:
>
>> First I tried with osd reweight, waited a f
bably better to reduce it to 1 in steps, since now much
backfilling is already going on?
Output of commands in attachment.
Rok
On Sun, Dec 22, 2024 at 7:41 PM Alwin Antreich
wrote:
> Hi Rok,
>
> On Sun, 22 Dec 2024 at 16:08, Rok Jaklič wrote:
>
>> Thank you all for your sugge
aimis.juzeliu...@oxylabs.io> wrote:
> Hi Rok,
>
> Try running (122 instead of osd.122):
> ./plankton-swarm.sh source-osds 122 3
> bash swarm-file
>
> Will have to work on the naming conventions, apologies.
> The pgremapper tool also will be ab
>> You could also use the pgremapper to manually reassign PGs to different
>> OSDs. This gives you more control over PG movement. This works by setting
>> upmaps, the balancer needs to be off and the ceph version needs to be
>> throughout newer tha
l OSD is most likely the reason. You can temporarily increase
> the threshold to 0.97 or so, but you need to prevent that to happen.
> The cluster usually starts warning you at 85%.
>
> Quoting Rok Jaklič:
>
> > Hi,
> >
> > for some reason radosgw stopped worki
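For reference, the thresholds mentioned above can be raised temporarily like
this (a sketch; lower them back once backfill has freed up space):

ceph osd set-nearfull-ratio 0.90
ceph osd set-backfillfull-ratio 0.95
ceph osd set-full-ratio 0.97
ceph health detail    # watch the nearfull/full warnings clear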
Hi,
for some reason radosgw stopped working.
Cluster status:
[root@ctplmon1 ~]# ceph -v
ceph version 17.2.8 (f817ceb7f187defb1d021d6328fa833eb8e943b3) quincy
(stable)
[root@ctplmon1 ~]# ceph -s
cluster:
id: 0a6e5422-ac75-4093-af20-528ee00cc847
health: HEALTH_ERR
6 OSD(s)
rarily)
> during backfill.
>
> Quoting Rok Jaklič:
>
> > After a new rule has been set, is it normal that usage is growing
> > significantly while objects number stay pretty much the same?
> >
> > Rok
> >
> > On Mon, Dec 2, 2024 at 10:45 AM Eugen B
mclock?) and it will slowly drain
> the PGs from SSDs to HDDs to minimize client impact.
>
> Quoting Rok Jaklič:
>
> > I didn't have any bad mappings.
> >
> > I'll wait until the backfill completes then try to apply new rules.
> >
> > Then I c
Hi,
I am trying to create an NFS cluster with the following command:
ceph nfs cluster create cephnfs
But I get an error like:
Error EPERM: osd pool create failed: 'pgp_num' must be greater than 0 and
lower or equal than 'pg_num', which in this case is 1 retval: -34
Any ideas why?
I also tried adding p
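A diagnostic sketch, assuming the error comes from the pool defaults that are
applied when the NFS pool gets created (option names are the standard ones):

ceph config get mon osd_pool_default_pg_num
ceph config get mon osd_pool_default_pgp_num   # if this is larger than pg_num, pool creation fails like above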
https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits
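In short, that page boils down to something like this (a sketch for Quincy/Reef
with the mClock scheduler):

ceph config set osd osd_mclock_override_recovery_settings true
ceph config set osd osd_max_backfills 2
ceph config set osd osd_recovery_max_active 4
# or simply switch the built-in profile:
ceph config set osd osd_mclock_profile high_recovery_ops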
On Mon, Dec 2, 2024 at 1:28 PM wrote:
> Hi,
> may i ask which commands did you use to achieve that?
>
> Thank you
>
> On 2 December 2024 at 11:04:19 CET s
client impact.
>
> Quoting Rok Jaklič:
>
> > I didn't have any bad mappings.
> >
> > I'll wait until the backfill completes then try to apply new rules.
> >
> > Then I can probably expect some recovery will start so it can move
> > everythin
o me, I assume you didn’t have any
> bad mappings?
>
> Quoting Rok Jaklič:
>
> > Thx.
> >
> > Can you explain mappings.txt a little bit?
> >
> > I assume that for every line in mappings.txt apply crush rule 1 for osds
> in
> > square brac
sure rule-ec-k3m2 ec-profile-k3m2
>
> And here's the result:
>
> ceph osd crush rule dump rule-ec-k3m2 | grep -A2 take
> "op": "take",
> "item": -2,
> "item_name": "default~hdd"
>
>
Hi,
we are already running the "default" rgw pool with some users.
Data is stored in pool:
pool 9 'default.rgw.buckets.data' erasure profile ec-32-profile size 5
min_size 4 crush_rule 1 object_hash rjenkins pg_num 512 pgp_num 512
autoscale_mode on last_change 309346 lfor 0/127784/214408 flags
has
Hi,
is it possible to set/change the following already-used rule to only use hdd?
{
    "rule_id": 1,
    "rule_name": "ec32",
    "type": 3,
    "steps": [
        {
            "op": "set_chooseleaf_tries",
            "num": 5
        },
        {
            "op": "set_choose_tries",
            "
Hi,
right now the cluster has been doing recovery for the last two weeks and it seems
it will keep doing so for another week or so.
Meanwhile a new quincy update came out, which fixes some things for us,
but we would need to upgrade to AlmaLinux 9.
Has anyone done maintenance or upgrades of nodes
like OOM
> killers or anything else related to the recovery? Are disks saturated?
> Is this cephadm managed? What's the current ceph status?
>
> Thanks,
> Eugen
>
> Quoting Rok Jaklič:
>
> > Hi,
> >
> > we've just updated from pacific(16.2.
l 23 20:01:27 2024
2024-07-23T20:01:07.666+0200 7fc751496700 2 rgw data changes log:
RGWDataChangesLog::ChangesRenewThread: start
2024-07-23T20:01:27.534+0200 7fc740c75700 20 rgw notify: INFO: next queues
processing will happen at: Tue Jul 23 20:01:57 2024
On Tue, Jul 23, 2024 at 7:58 PM Rok Jak
Hi,
we've just updated from pacific (16.2.15) to quincy (17.2.7) and everything
seems to work; however, after some time radosgw stops responding and we have
to restart it.
At first glance, it seems that radosgw sometimes stops responding during
recovery.
Does this maybe have to do something with mclo
re_ssl
... you should be ready to go. :)
Rok
On Mon, Feb 12, 2024 at 6:43 PM Michael Worsham
wrote:
> So, just so I am clear – in addition to the steps below, will I also need
> to also install NGINX or HAProxy on the server to act as the front end?
>
>
>
Hi,
recommended methods of deploying rgw are imho overly complicated. You can
get the service up manually with something as simple as:
[root@mon1 bin]# cat /etc/ceph/ceph.conf
[global]
fsid = 12345678-XXXx ...
mon initial members = mon1,mon3
mon host = ip-mon1,ip-mon2
auth cluster required = non
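For completeness, a minimal rgw section along those lines might look like this
(a sketch only; names, ports and certificate paths are placeholders):

[client.rgw.mon1]
host = mon1
rgw_frontends = beast port=8080
# or with TLS: rgw_frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.pem
rgw_dns_name = s3.example.com
log file = /var/log/ceph/client.rgw.mon1.log

and then start it with radosgw -c /etc/ceph/ceph.conf -n client.rgw.mon1
--setuser ceph --setgroup ceph.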
Hi,
shouldn't the etag of a "parent" object change when "child" objects are added
on s3?
Example:
1. I add an object to test bucket: "example/" - size 0
"example/" has an etag XYZ1
2. I add an object to test bucket: "example/test1.txt" - size 12
"example/test1.txt" has an etag XYZ2
"examp
Hi,
I have set the following permissions for the admin user:
radosgw-admin caps add --uid=admin --tenant=admin --caps="users=*;buckets=*"
Now I would like to use the admin user to upload an object to another
user/tenant (tester1$tester1), into their bucket test1.
The other user has uid tester1 and tenant tester1 an
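For cross-tenant access over S3 the bucket is usually addressed as
"tenant:bucket" (the colon may need to be URL-encoded as %3A depending on the
client). A sketch with the aws CLI, where endpoint and credentials are
placeholders; note that the radosgw-admin caps above only cover the admin APIs,
so the admin user still needs S3-level permission (e.g. a bucket policy) on
that bucket:

aws --endpoint-url http://rgw.example.com:8080 s3api put-object \
    --bucket 'tester1:test1' --key hello.txt --body ./hello.txt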
ples. Let me know if you need more
> information.
>
> Yuval
>
> On Tue, Nov 28, 2023 at 10:21 PM Rok Jaklič wrote:
>
>> Hi,
>>
>> I would like to get info if the bucket or object got updated.
>>
>> I can get this info with a changed etag of an object,
Hi,
I would like to get info on whether a bucket or object got updated.
I can get this info from the changed etag of an object, but I cannot get an
etag for a bucket, so I am looking at
https://docs.ceph.com/en/latest/radosgw/notifications/
How do I create a topic and where do I send request with parame
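Roughly: a topic is created through the SNS-compatible API and then attached to
a bucket with an S3 notification configuration. A sketch with the aws CLI
(endpoint, topic name and HTTP receiver are placeholders, and the zonegroup in
the ARN is assumed to be "default"):

aws --endpoint-url http://rgw.example.com:8080 sns create-topic \
    --name mytopic --attributes push-endpoint=http://receiver.example.com:9000
aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-notification-configuration \
    --bucket mybucket --notification-configuration '{
      "TopicConfigurations": [
        { "Id": "notif1",
          "TopicArn": "arn:aws:sns:default::mytopic",
          "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"] }
      ]}'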
ng for now
...
after this line ... rgw stopped responding. We had to restart it.
We were just about to upgrade to ceph 17.x... but we had to postpone it
because of this.
Rok
On Fri, Oct 6, 2023 at 9:30 AM Rok Jaklič wrote:
> Hi,
>
> yesterday we changed RGW from civetweb to beast and a
Hi,
yesterday we changed RGW from civetweb to beast and at 04:02 RGW stopped
working; we had to restart it in the morning.
In one rgw log for previous day we can see:
2023-10-06T04:02:01.105+0200 7fb71d45d700 -1 received signal: Hangup from
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-
I can confirm this, as we also did the upgrade from .10.
Rok
On Fri, Sep 8, 2023 at 5:26 PM David Orman wrote:
> I would suggest updating: https://tracker.ceph.com/issues/59580
>
> We did notice it with 16.2.13, as well, after upgrading from .10, so
> likely in-between those two releases.
afaik
> has slowed down existing attempts at diagnosing the issue.
>
> Mark
>
> On 9/7/23 05:55, Rok Jaklič wrote:
> > Hi,
> >
> > we have also experienced several ceph-mgr oom kills on ceph v16.2.13 on
> > 120T/200T data.
> >
> > Is there
Hi,
we have also experienced several ceph-mgr oom kills on ceph v16.2.13 on
120T/200T data.
Is there any tracker about the problem?
Does upgrading to 17.x "solve" the problem?
Kind regards,
Rok
On Wed, Sep 6, 2023 at 9:36 PM Ernesto Puerta wrote:
> Dear Cephers,
>
> Today brought us an even
Hi,
I want to move an existing pool with its data to SSDs.
I've created crush rule:
ceph osd crush rule create-replicated replicated_ssd default host ssd
If I apply this rule to the existing pool default.rgw.buckets.index with
180G of data with command:
ceph osd pool set default.rgw.buckets.index
>
> On 2023. Jun 23., at 19:12, Rok Jaklič wrote:
>
We are experiencing something similar (slow GET responses) when sending 1k
delete requests, for example, in ceph v16.2.13.
Rok
On Mon, Jun 12, 2023 at 7:16 PM grin wrote:
> Hello,
>
> ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy
> (stable)
>
> There is a single (test) ra
Hi,
are there any drawbacks to exposing a multi-tenant deployment of RGWs
directly to users, so they can use any S3 client to connect to the service, or
should we put something in front of the RGWs?
How many users can Ceph handle in a multi-tenant deployment?
Kind regards,
Rok
I've searched for rgw_enable_lc_threads and rgw_enable_gc_threads a bit,
but there is little information about those settings. Is there any
documentation in the wild about them?
Are they enabled by default?
On Thu, May 18, 2023 at 9:15 PM Tarrago, Eli (RIS-BCT) <
eli.tarr...@lexisnex
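As far as I know both default to true; you can check the built-in defaults and
the current value like this (a sketch):

ceph config help rgw_enable_lc_threads
ceph config help rgw_enable_gc_threads
ceph config get client.rgw rgw_enable_lc_threads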
g set
>
>
> WHO: client. or client.rgw
>
> KEY: rgw_delete_multi_obj_max_num
>
> VALUE: 1
>
> Regards, Joachim
>
> ___
> ceph ambassador DACH
> ceph consultant since 2012
>
> Clyso GmbH - Premier Ceph Foundation Memb
ete_multi_obj_max_num
>
> rgw_delete_multi_obj_max_num - Max number of objects in a single multi-
> object delete request
> (int, advanced)
> Default: 1000
> Can update at runtime: true
> Services: [rgw]
>
> On Wed, 2023-05-17 at 10:51 +0200, Rok Jaklič wrote:
Hi,
I would like to delete millions of objects in an RGW instance with:
mc rm --recursive --force ceph/archive/veeam
but it seems it allows only 1000 (or exactly 1002) removals per command.
How can I delete/remove all objects with some prefix?
Kind regards,
Rok
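Putting the advice from the replies together: the 1000-object cap per
DeleteObjects request is rgw_delete_multi_obj_max_num, which can be raised at
runtime (a sketch; raise it with care, since a single request then does more
work on the rgw side):

ceph config set client.rgw rgw_delete_multi_obj_max_num 5000
mc rm --recursive --force ceph/archive/veeam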
We deployed jitsi for the public sector during covid and it is still free
to use.
https://vid.arnes.si/
---
However, the landing page is in Slovene, and for future
reservations you need an AAI (SSO) account (which you get if you are part
of a public organization: school, faculty, ...).
Once or twice a year we see a similar problem in a *non*-Ceph disk cluster,
where working but slow disk writes give us slow reads. We somewhat
"understand" it, since slow writes probably fill up queues and buffers.
On Thu, Mar 9, 2023 at 11:37 AM Andrej Filipcic
wrote:
>
> Thanks for the hint
Hi,
I am trying to configure ceph with rgw and a unix socket (based on
https://docs.ceph.com/en/pacific/man/8/radosgw/?highlight=radosgw). In
ceph.conf I have something like this:
[client.radosgw.ctplmon3]
host = ctplmon3
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/lo
The solution was found by a colleague:
ms_mon_client_mode = crc
... because of
https://github.com/ceph/ceph/pull/42587/commits/7e22d2a31d277ab3eecff47b0864b206a32e2332
Rok
On Thu, Sep 8, 2022 at 6:04 PM Rok Jaklič wrote:
> What credentials should RGWs have?
>
> I have inte
Hi,
we are trying to copy a big file (over 400GB) to the ceph cluster using the
minio client. The copy, or rather the transfer, takes a long time (2 days for
example) because of a "slow connection".
Usually somewhere near the end (though it looks random) we get an error like:
Failed to copy `/360GB.bigfile.img`. The req
Every now and then someone comes up with a subject like this.
There is quite a long thread about the pros and cons of using docker and all
the tooling around ceph at
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TTTYKRVWJOR7LOQ3UCQAZQR32R7YADVY/#AT7YQV6RE5SMKDZHXL3ZI2G5BWFUUUXE
Long story sh
-13 error code represents permission denied
> b. You’ve commented out the keyring configuration in ceph.conf
>
> So do your RGWs have appropriate credentials?
>
> Eric
> (he/him)
>
> > On Sep 7, 2022, at 3:04 AM, Rok Jaklič wrote:
> >
> > Hi,
>
Hi,
after upgrading to ceph version 16.2.10 from 16.2.7, radosgw is not
working. We start radosgw with:
radosgw -c /etc/ceph/ceph.conf --setuser ceph --setgroup ceph -n
client.radosgw.ctplmon3
ceph.conf looks like:
[root@ctplmon3 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 0a6e5422-ac75-4093-af2
ven deleting the bucket seems to leave the objects in the rados pool
> forever.
>
> Ciao, Uli
>
> > On 05.09.2022 at 15:19, Rok Jaklič wrote:
> >
> > Hi,
> >
> > when I do:
> > radosgw-admin user stats --uid=X --tenant=Y --sync-stats
> >
Hi,
when I do:
radosgw-admin user stats --uid=X --tenant=Y --sync-stats
I get:
{
    "stats": {
        "size": 2620347285776,
        "size_actual": 2620348436480,
        "size_utilized": 0,
        "size_kb": 2558932897,
        "size_kb_actual": 2558934020,
        "size_kb_utilized": 0,
Hi,
is it possible to get tenant and user id with some python boto3 request?
Kind regards,
Rok
Can I reduce mon_initial_members to one host after already being set to two
hosts?
Actually, some of us tried to contribute to the documentation but were stopped
by failed build checks for some reason.
While most of it is ok, in some places the documentation is vague or missing
(maybe also the reason why this thread is so long).
One example:
https://github.com/ceph/ceph/pull/409
Hi,
is it possible to limit a subuser's access so that they see (read, write)
only "their" bucket? And can they also create a bucket inside that bucket?
Kind regards,
Rok
This thread would not be so long if docker/containers had solved the problems,
but they did not. They solved some, but introduced new ones. So we cannot
really say it's better now.
Again, I think the focus should be more on a working ceph with clean documentation,
while leaving software management and packages to adm
Which mode is that, and where can I set it?
The one described in https://docs.ceph.com/en/latest/radosgw/multitenancy/ ?
On Tue, Jun 8, 2021 at 2:24 PM Janne Johansson wrote:
> On Tue, 8 Jun 2021 at 12:38, Rok Jaklič wrote:
> > Hi,
> > I try to create buckets through rgw in
Hi,
I am trying to create buckets through rgw in the following order:
- *bucket1* with *user1* with *access_key1* and *secret_key1*
- *bucket1* with *user2* with *access_key2* and *secret_key2*
when I try to create a second bucket1 with user2 I get *Error response code
BucketAlreadyExists.*
Why? Should no
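With multi-tenancy each tenant gets its own bucket namespace, so two users in
different tenants can both own a bucket1. A sketch (names are just examples):

radosgw-admin user create --tenant t1 --uid user1 --display-name "User One"
radosgw-admin user create --tenant t2 --uid user2 --display-name "User Two"
# each user can now create their own bucket1 within their tenant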
In these gigabyte/terabyte times all this dependency hell can now be avoided
with some static linking. For example, we use statically linked mysql
binaries and it has saved us numerous times. https://youtu.be/5PmHRSeA2c8?t=490
Rok
On Wed, Jun 2, 2021 at 9:57 PM Harry G. Coin wrote:
>
> On 6/2/21 2:
I agree; simplifying "deployment" by adding another layer of complexity
does bring many more problems and hard times when something goes wrong at
runtime. A few additional steps at the "install phase" and a better
understanding of the underlying architecture, commands, whatever ... have much
more pros tha
Hi,
is it normal that radosgw-admin user info --uid=user ... takes around 3s or
more?
Other radosgw-admin commands are also taking quite a lot of time.
Kind regards,
Rok
5 hosts (with
> failure domain host) your PGs become undersized when a host fails and
> won't recover until the OSDs come back. Which ceph version is this?
>
>
> Quoting Rok Jaklič:
>
> > For this pool I have set EC 3+2 (so in total I have 5 nodes) which one
> was
For this pool I have set EC 3+2 (so in total I have 5 nodes), of which one was
temporarily removed; maybe this was the problem?
On Thu, May 27, 2021 at 3:51 PM Rok Jaklič wrote:
> Hi, thanks for quick reply
>
> root@ctplmon1:~# ceph pg dump pgs_brief | grep undersized
> dumped pgs
ph osd pool ls detail
>
> and the crush rule(s) for the affected pool(s).
>
>
> Quoting Rok Jaklič:
>
> > Hi,
> >
> > I have removed one node, but now ceph seems to stuck in:
> > Degraded data redundancy: 67/2393 objects degraded (2.800%), 12 pgs
> >
Hi,
I have removed one node, but now ceph seems to be stuck with:
Degraded data redundancy: 67/2393 objects degraded (2.800%), 12 pgs
degraded, 12 pgs undersized
How do I "force" rebalancing? Or should I just wait a little bit longer?
Kind regards,
rok
00 AM Janne Johansson
wrote:
> On Fri, 21 May 2021 at 10:49, Rok Jaklič wrote:
> > It shows
> > sdb    8:16   0  5.5T  0 disk /var/lib/ceph/osd/ceph-56
>
> That one says osd-56, you asked about why osd 85 was small in ceph osd df
>
>
> >> Den
$ID --mkfs --osd-uuid $UUID --data /dev/sdb
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID/
---
and that is where the 100G block file resides.
On Fri, May 21, 2021 at 9:59 AM Janne Johansson wrote:
> On Fri, 21 May 2021 at 09:41, Rok Jaklič wrote:
> > why would ceph osd df show in SIZE field small
Hi,
why would ceph osd df show a smaller number in the SIZE field than there actually is:
85  hdd  0.8  1.0  100 GiB  96 GiB  95 GiB  289 KiB  952 MiB  4.3 GiB  95.68  3.37  10  up
Instead of 100 GiB there should be 5.5 TiB.
Kind regards,
Rok
I agree. The documentation here is pretty vague. The systemd services for osds on
ubuntu 20.04 and ceph pacific 16.2.1 do not work either, so I
have to run them manually with
/usr/bin/ceph-osd -f --cluster ceph --id some-number --setuser ceph
--setgroup ceph
I think it would be much better if doc
Hi,
installation of the cluster/osds went "by the book" (https://docs.ceph.com/), but
now I want to set up the Ceph Object Gateway, and the documentation at
https://docs.ceph.com/en/latest/radosgw/ seems to lack information about
what and where to restart, for example when setting [client.rgw.gateway-node1]
in /e
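For what it's worth, with a classic package install the daemon behind a
[client.rgw.gateway-node1] section is typically restarted like this after
editing ceph.conf (a sketch; the exact unit instance name depends on how the
daemon was set up):

systemctl restart ceph-radosgw@rgw.gateway-node1
systemctl status ceph-radosgw@rgw.gateway-node1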
Hi,
I installed the ceph object gateway and I have put one test object onto
storage. I can see it with rados -p mytest ls
How do I set up ceph so that users can access (download, upload) files in this
pool?
Kind regards,
Rok
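Objects written with rados go straight into a pool and are not visible through
the S3 API; to let users upload/download through the gateway, a rough sketch:

radosgw-admin user create --uid=testuser --display-name="Test User"
# note the access_key/secret_key in the output, then point any S3 client at the RGW endpoint:
aws --endpoint-url http://rgw-host:8080 s3 mb s3://testbucket
aws --endpoint-url http://rgw-host:8080 s3 cp ./file.txt s3://testbucket/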