[ceph-users] Re: reload SSL certificate in radosgw

2025-07-04 Thread Rok Jaklič
If you have multiple RGWs in HA or similar, go one by one. Rok On Fri, 4 Jul 2025, 13:32 Boris, wrote: > Hi, > is there a way to reload the certificate in rgw without downtime? Or if I > have multiple rgw daemons to do it one by one and wait for the last one to > be active again? > > > > -- > Die
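A minimal sketch of the one-by-one restart, assuming a cephadm-managed cluster (daemon names are placeholders; on package-based installs the equivalent is a per-host systemctl restart of the radosgw unit):
ceph orch ps --daemon-type rgw
ceph orch daemon restart rgw.myrealm.host1   # wait until it is back up before restarting the next one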

[ceph-users] Re: *** Spam *** Re: v18.2.7 Reef released

2025-06-02 Thread Rok Jaklič
We are also having the same problem. On Wed, May 14, 2025 at 10:57 PM Steve Anthony wrote: > We also started seeing this issue on AlmaLinux 9.5 (presumably Rocky > Linux and other RHEL derivatives would be impacted too). OpenSSL 3.5.0-1 > as mentioned in the thread seems to be coming from the Ce

[ceph-users] Re: Radosgw log Custom Headers

2025-02-12 Thread Rok Jaklič
What about something like this in rgw section in ceph.conf? rgw_enable_ops_log = true rgw_log_http_headers = http_x_forwarded_for, http_expect, http_content_md5 rgw_ops_log_file_path = /var/log/ceph/mon1.rgw-ops.log Rok On Wed, Feb 12, 2025 at 2:19 PM Paul JURCO wrote: > Same here, it worked

[ceph-users] Re: RGW - S3 bucket browser and/or S3 explorer

2025-02-11 Thread Rok Jaklič
'm aware, this is not really officially supported by AWS S3, but let's > dream on ;) ) > > and, again, I'm talking about "static websites" only here... not the usual > full-fledged S3 RGW endpoint, where you i.e. need to provide your > secrets b'cau

[ceph-users] Re: RGW - S3 bucket browser and/or S3 explorer

2025-02-11 Thread Rok Jaklič
What would you like to do? Serve your bucket objects as static files on web? On Tue, Feb 11, 2025 at 1:31 PM Anthony Fecarotta wrote: > Interesting! Will mess with that today. > > Regards, > > > * Anthony Fecarotta* > Founder & President > [image: phone-icon] anth...@linehaul.ai > [image: p

[ceph-users] Re: Cephfs path based restricition without cephx

2025-01-07 Thread Rok Jaklič
E.g. delete any objects or pools or anything. > > The only way I can think that this is workable would be to restrict > Ceph to an isolated network and re-export CephFS using NFS Ganesha or > Samba. > > Cheers, Dan > > On Tue, Jan 7, 2025 at 8:03 AM Rok Jaklič wrote: > &

[ceph-users] Cephfs path based restricition without cephx

2025-01-07 Thread Rok Jaklič
Hi, is it possible somehow to restrict a client in CephFS to a subdirectory without cephx enabled? We do not have any auth requirements enabled in ceph. auth cluster required = none auth service required = none auth client required = none Kind regards, Rok __
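For reference, the usual path restriction goes through cephx caps; a minimal sketch (filesystem name, client name, and path are placeholders):
ceph fs authorize cephfs client.restricted /some/subdir rw
With auth cluster/service/client required = none those caps are not enforced, which is why the reply above suggests isolating the network and re-exporting CephFS via NFS Ganesha or Samba instead.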

[ceph-users] Re: Problems with autoscaler (overlapping roots) after changing the pool class

2025-01-06 Thread Rok Jaklič
had to text-edit > everything by hand :nailbiting:. One can readily diff the before and after > decompiled text CRUSHmaps to ensure sanity before recompiling and injecting. > > I’ve done this myself multiple times since device classes became a thing. > > > > On Dec 23, 2024, at 5:05 P

[ceph-users] Re: Problems with autoscaler (overlapping roots) after changing the pool class

2024-12-23 Thread Rok Jaklič
k wrote: > > > > Don't try to delete a root, that will definitely break something. > Instead, check the crush rules which don't use a device class and use the > reclassify of the crushtool to modify the rules. This will trigger only a > bit of data movement, but not as mu
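A hedged sketch of the crushtool reclassify workflow mentioned here (file names are placeholders; root and device class are examples):
ceph osd getcrushmap -o crushmap.orig
crushtool -i crushmap.orig --reclassify --reclassify-root default hdd -o crushmap.new
crushtool -i crushmap.orig --compare crushmap.new   # sanity-check the expected remapping first
ceph osd setcrushmap -i crushmap.new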

[ceph-users] Re: radosgw stopped working

2024-12-23 Thread Rok Jaklič
>> backfills completes. >> >> If you do, be sure to disable the autoscaler for that pool. >> >> > Right now pg_num 512 pgp_num 512 is used and I am considering to change >> it >> > to 1024. Do you think that would be too aggressive maybe? >> &g

[ceph-users] Re: radosgw stopped working

2024-12-23 Thread Rok Jaklič
ge > it > > to 1024. Do you think that would be too aggressive maybe? > > Depends on how many OSDs you have and what the rest of the pools are > like. Send us > > `ceph osd dump | grep pool` > > These days, assuming that your OSDs are BlueStore, chances are th

[ceph-users] Re: Problems with autoscaler (overlapping roots) after changing the pool class

2024-12-23 Thread Rok Jaklič
} ] }, is it maybe another option just to reset pool crush_rule e.g.: ceph osd pool set .mgr crush_rule replicated_ssd ? Rok On Mon, Dec 23, 2024 at 3:12 PM Eugen Block wrote: > Don't try to delete a root, that will definitely break something. > Instead, check the

[ceph-users] Re: Problems with autoscaler (overlapping roots) after changing the pool class

2024-12-23 Thread Rok Jaklič
I got a similar problem after changing pool class to use only hdd following https://www.spinics.net/lists/ceph-users/msg84987.html. Data migrated successfully. I get warnings like: 2024-12-23T14:39:37.103+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool default.rgw.buckets.index won't scale

[ceph-users] Re: radosgw stopped working

2024-12-22 Thread Rok Jaklič
pping root -1... skipping scaling ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.934+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 11 contains an overlapping root -1... skipping scaling Rok On Mon, Dec 23, 2024 at 6:45 AM Rok Jaklič wrote: > autoscale_mode for pg is on for a pa

[ceph-users] Re: radosgw stopped working

2024-12-22 Thread Rok Jaklič
m 512 is used and I am considering to change it to 1024. Do you think that would be too aggressive maybe? Rok On Sun, Dec 22, 2024 at 8:46 PM Alwin Antreich wrote: > Hi Rok, > > On Sun, 22 Dec 2024 at 20:19, Rok Jaklič wrote: > >> First I tried with osd reweight, waited a f

[ceph-users] Re: radosgw stopped working

2024-12-22 Thread Rok Jaklič
bably better to reduce it to 1 in steps, since now much backfilling is already going on? Output of commands in attachment. Rok On Sun, Dec 22, 2024 at 7:41 PM Alwin Antreich wrote: > Hi Rok, > > On Sun, 22 Dec 2024 at 16:08, Rok Jaklič wrote: > >> Thank you all for your sugge

[ceph-users] Re: radosgw stopped working

2024-12-22 Thread Rok Jaklič
aimis.juzeliu...@oxylabs.io> wrote: > Hi Rok, > > Try running (122 instead of osd.122): > ./plankton-swarm.sh source-osds 122 3 > bash swarm-file > > Will have to work on the naming conventions, apologies. > The pgremapper tool also will be ab

[ceph-users] Re: radosgw stopped working

2024-12-22 Thread Rok Jaklič
>> You could also use the pgremapper to manually reassign PGs to different >> OSDs. This gives you more control over PG movement. This works by setting >> upmaps, the balancer needs to be off and the ceph version needs to be >> throughout newer tha

[ceph-users] Re: radosgw stopped working

2024-12-21 Thread Rok Jaklič
l OSD is most likely the reason. You can temporarily increase > the threshold to 0.97 or so, but you need to prevent that to happen. > The cluster usually starts warning you at 85%. > > Zitat von Rok Jaklič : > > > Hi, > > > > for some reason radosgw stopped worki
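A hedged example of temporarily raising the full threshold as suggested above (revert it once backfill has freed space):
ceph osd set-full-ratio 0.97
ceph osd dump | grep full_ratio   # verify the new ratios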

[ceph-users] radosgw stopped working

2024-12-21 Thread Rok Jaklič
Hi, for some reason radosgw stopped working. Cluster status: [root@ctplmon1 ~]# ceph -v ceph version 17.2.8 (f817ceb7f187defb1d021d6328fa833eb8e943b3) quincy (stable) [root@ctplmon1 ~]# ceph -s cluster: id: 0a6e5422-ac75-4093-af20-528ee00cc847 health: HEALTH_ERR 6 OSD(s)

[ceph-users] Re: EC pool only for hdd

2024-12-20 Thread Rok Jaklič
rarily) > during backfill. > > Zitat von Rok Jaklič : > > > After a new rule has been set, is it normal that usage is growing > > significantly while objects number stay pretty much the same? > > > > Rok > > > > On Mon, Dec 2, 2024 at 10:45 AM Eugen B

[ceph-users] Re: EC pool only for hdd

2024-12-20 Thread Rok Jaklič
mclock?) and it will slowly drain > the PGs from SSDs to HDDs to minimize client impact. > > Zitat von Rok Jaklič : > > > I didn't have any bad mappings. > > > > I'll wait until the backfill completes then try to apply new rules. > > > > Then I c

[ceph-users] NFS cluster

2024-12-12 Thread Rok Jaklič
Hi, I am trying to create an NFS cluster with the following command: ceph nfs cluster create cephnfs But I get an error like: Error EPERM: osd pool create failed: 'pgp_num' must be greater than 0 and lower or equal than 'pg_num', which in this case is 1 retval: -34 Any ideas why? I also tried adding p
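One hedged workaround (an assumption, not confirmed in this thread) is to pre-create the pool the NFS module expects with explicit pg/pgp numbers and retry:
ceph osd pool create .nfs 32 32
ceph osd pool application enable .nfs nfs
ceph nfs cluster create cephnfs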

[ceph-users] Re: EC pool only for hdd

2024-12-02 Thread Rok Jaklič
https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits On Mon, Dec 2, 2024 at 1:28 PM wrote: > Hi, > may i ask which commands did you use to achieve that? > > Thank you > > Am 2. Dezember 2024 11:04:19 MEZ s
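For context, the linked procedure roughly boils down to the following (values are examples, not recommendations):
ceph config set osd osd_mclock_override_recovery_settings true
ceph config set osd osd_max_backfills 8
ceph config set osd osd_recovery_max_active_hdd 10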

[ceph-users] Re: EC pool only for hdd

2024-12-02 Thread Rok Jaklič
client impact. > > Zitat von Rok Jaklič : > > > I didn't have any bad mappings. > > > > I'll wait until the backfill completes then try to apply new rules. > > > > Then I can probably expect some recovery will start so it can move > > everythin

[ceph-users] Re: EC pool only for hdd

2024-12-01 Thread Rok Jaklič
o me, I assume you didn’t have any > bad mappings? > > Zitat von Rok Jaklič : > > > Thx. > > > > Can you explain mappings.txt a little bit? > > > > I assume that for every line in mappings.txt apply crush rule 1 for osds > in > > square brac

[ceph-users] Re: EC pool only for hdd

2024-11-30 Thread Rok Jaklič
sure rule-ec-k3m2 ec-profile-k3m2 > > And here's the result: > > ceph osd crush rule dump rule-ec-k3m2 | grep -A2 take > "op": "take", > "item": -2, > "item_name": "default~hdd" > >

[ceph-users] Additional rgw pool

2024-11-29 Thread Rok Jaklič
Hi, we are already running the "default" rgw pool with some users. Data is stored in pool: pool 9 'default.rgw.buckets.data' erasure profile ec-32-profile size 5 min_size 4 crush_rule 1 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode on last_change 309346 lfor 0/127784/214408 flags has

[ceph-users] EC pool only for hdd

2024-11-27 Thread Rok Jaklič
Hi, is it possible to set/change following already used rule to only use hdd? { "rule_id": 1, "rule_name": "ec32", "type": 3, "steps": [ { "op": "set_chooseleaf_tries", "num": 5 }, { "op": "set_choose_tries", "
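One way to do this, as a hedged sketch (profile, rule, and pool names are placeholders), is to create a device-class-aware EC profile and rule and point the pool at the new rule:
ceph osd erasure-code-profile set ec32-hdd k=3 m=2 crush-failure-domain=host crush-device-class=hdd
ceph osd crush rule create-erasure ec32-hdd ec32-hdd
ceph osd pool set <pool> crush_rule ec32-hdd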

[ceph-users] Upgrade of OS and ceph during recovery

2024-11-27 Thread Rok Jaklič
Hi, the cluster has been doing recovery for the last two weeks and it seems it will be doing so for the next week or so as well. Meanwhile a new quincy update came out, which fixes some of the things for us, but we would need to upgrade to AlmaLinux 9. Has anyone done maintenance or an upgrade of nodes

[ceph-users] Re: [RGW] radosgw does not respond after some time after upgrade from pacific to quincy

2024-07-25 Thread Rok Jaklič
like OOM > killers or anything else related to the recovery? Are disks saturated? > Is this cephadm managed? What's the current ceph status? > > Thanks, > Eugen > > Zitat von Rok Jaklič : > > > Hi, > > > > we've just updated from pacific(16.2.

[ceph-users] Re: [RGW] radosgw does not respond after some time after upgrade from pacific to quincy

2024-07-23 Thread Rok Jaklič
l 23 20:01:27 2024 2024-07-23T20:01:07.666+0200 7fc751496700 2 rgw data changes log: RGWDataChangesLog::ChangesRenewThread: start 2024-07-23T20:01:27.534+0200 7fc740c75700 20 rgw notify: INFO: next queues processing will happen at: Tue Jul 23 20:01:57 2024 On Tue, Jul 23, 2024 at 7:58 PM Rok Jak

[ceph-users] [RGW] radosgw does not respond after some time after upgrade from pacific to quincy

2024-07-23 Thread Rok Jaklič
Hi, we've just updated from pacific(16.2.15) to quincy(17.2.7) and everything seems to work, however after some time radosgw stops responding and we have to restart it. At first look, it seems that radosgw stops responding sometimes during recovery. Does this maybe have to do something with mclo

[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-12 Thread Rok Jaklič
re_ssl ... you should be ready to go. :) Rok On Mon, Feb 12, 2024 at 6:43 PM Michael Worsham wrote: > So, just so I am clear – in addition to the steps below, will I also need > to also install NGINX or HAProxy on the server to act as the front end? > > > > -- M > > &

[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-12 Thread Rok Jaklič
Hi, recommended methods of deploying rgw are imho overly complicated. You can get service up manually also with something simple like: [root@mon1 bin]# cat /etc/ceph/ceph.conf [global] fsid = 12345678-XXXx ... mon initial members = mon1,mon3 mon host = ip-mon1,ip-mon2 auth cluster required = non
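The preview above is cut off; a minimal sketch of the idea (the client section name, port, and paths are placeholders):
[client.rgw.mon1]
rgw_frontends = beast port=8080
log file = /var/log/ceph/client.rgw.mon1.log
and then start the daemon by hand:
radosgw -c /etc/ceph/ceph.conf -n client.rgw.mon1 --setuser ceph --setgroup ceph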

[ceph-users] Etag change of a parent object

2023-12-13 Thread Rok Jaklič
Hi, shouldn't etag of a "parent" object change when "child" objects are added on s3? Example: 1. I add an object to test bucket: "example/" - size 0 "example/" has an etag XYZ1 2. I add an object to test bucket: "example/test1.txt" - size 12 "example/test1.txt" has an etag XYZ2 "examp

[ceph-users] Uploading file from admin to other users bucket in multi tenant mode

2023-11-29 Thread Rok Jaklič
Hi, I have set following permission to admin user: radosgw-admin caps add --uid=admin --tenant=admin --caps="users=*;buckets=*" Now I would like to upload some object with admin user to some other user/tenant (tester1$tester1) to his bucket test1. Other user has uid tester1 and tenant tester1 an

[ceph-users] Re: Bucket/object create/update/delete notification

2023-11-29 Thread Rok Jaklič
ples. Let me know if you need more > information. > > Yuval > > On Tue, Nov 28, 2023 at 10:21 PM Rok Jaklič wrote: > >> Hi, >> >> I would like to get info if the bucket or object got updated. >> >> I can get this info with a changed etag of an object,

[ceph-users] Bucket/object create/update/delete notification

2023-11-28 Thread Rok Jaklič
Hi, I would like to get info on whether a bucket or object got updated. I can get this info from the changed etag of an object, but I cannot get an etag for a bucket, so I am looking at https://docs.ceph.com/en/latest/radosgw/notifications/ How do I create a topic and where do I send the request with parame
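A hedged sketch using the SNS-compatible API (endpoint, bucket, and push-endpoint values are placeholders; attribute names follow the radosgw notifications docs):
aws --endpoint-url=http://rgw.example.com:8080 sns create-topic --name bucket-events --attributes push-endpoint=http://consumer.example.com:9000/events
aws --endpoint-url=http://rgw.example.com:8080 s3api put-bucket-notification-configuration --bucket mybucket --notification-configuration '{"TopicConfigurations":[{"Id":"all-events","TopicArn":"arn:aws:sns:default::bucket-events","Events":["s3:ObjectCreated:*","s3:ObjectRemoved:*"]}]}'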

[ceph-users] Re: Received signal: Hangup from killall

2023-10-09 Thread Rok Jaklič
ng for now ... after this line ... rgw stopped responding. We had to restart it. We were just about to upgrade to ceph 17.x... but we had to postpone it because of this. Rok On Fri, Oct 6, 2023 at 9:30 AM Rok Jaklič wrote: > Hi, > > yesterday we changed RGW from civetweb to beast and a

[ceph-users] Received signal: Hangup from killall

2023-10-06 Thread Rok Jaklič
Hi, yesterday we changed RGW from civetweb to beast and at 04:02 RGW stopped working; we had to restart it in the morning. In one rgw log for previous day we can see: 2023-10-06T04:02:01.105+0200 7fb71d45d700 -1 received signal: Hangup from killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-

[ceph-users] Re: ceph_leadership_team_meeting_s18e06.mkv

2023-09-11 Thread Rok Jaklič
I can confirm this. ... as we did the upgrade from .10 also. Rok On Fri, Sep 8, 2023 at 5:26 PM David Orman wrote: > I would suggest updating: https://tracker.ceph.com/issues/59580 > > We did notice it with 16.2.13, as well, after upgrading from .10, so > likely in-between those two releases.

[ceph-users] Re: ceph_leadership_team_meeting_s18e06.mkv

2023-09-08 Thread Rok Jaklič
afaik > has slowed down existing attempts at diagnosing the issue. > > Mark > > On 9/7/23 05:55, Rok Jaklič wrote: > > Hi, > > > > we have also experienced several ceph-mgr oom kills on ceph v16.2.13 on > > 120T/200T data. > > > > Is there

[ceph-users] Re: ceph_leadership_team_meeting_s18e06.mkv

2023-09-07 Thread Rok Jaklič
Hi, we have also experienced several ceph-mgr OOM kills on ceph v16.2.13 with 120T/200T of data. Is there any tracker for the problem? Does an upgrade to 17.x "solve" the problem? Kind regards, Rok On Wed, Sep 6, 2023 at 9:36 PM Ernesto Puerta wrote: > Dear Cephers, > > Today brought us an even

[ceph-users] Applying crush rule to existing live pool

2023-06-27 Thread Rok Jaklič
Hi, I want to move an existing pool with data to SSDs. I've created a crush rule: ceph osd crush rule create-replicated replicated_ssd default host ssd If I apply this rule to the existing pool default.rgw.buckets.index with 180G of data with the command: ceph osd pool set default.rgw.buckets.index
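The command pair under discussion, as a hedged sketch (the completion of the truncated command is an assumption based on the rule name in the post):
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool set default.rgw.buckets.index crush_rule replicated_ssd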

[ceph-users] Re: radosgw hang under pressure

2023-06-26 Thread Rok Jaklič
ices Co., Ltd. > e: istvan.sz...@agoda.com > ------- > > On 2023. Jun 23., at 19:12, Rok Jaklič wrote: > > Email received from the internet. If in doubt, don't click any link nor > open any attachment ! > > >

[ceph-users] Re: radosgw hang under pressure

2023-06-23 Thread Rok Jaklič
We are experiencing something similar (slow GET responses) when sending 1k delete requests, for example, on ceph v16.2.13. Rok On Mon, Jun 12, 2023 at 7:16 PM grin wrote: > Hello, > > ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy > (stable) > > There is a single (test) ra

[ceph-users] RGW: exposing multi-tenant

2023-06-13 Thread Rok Jaklič
Hi, are there any drawbacks to exposing a multi-tenant deployment of RGWs directly to users, so they can use any S3 client to connect to the service, or should we put something in front of the RGWs? How many users can Ceph handle in a multi-tenant deployment? Kind regards, Rok

[ceph-users] Re: Dedicated radosgw gateways

2023-05-18 Thread Rok Jaklič
I've searched for rgw_enable_lc_threads and rgw_enable_gc_threads a bit, but there is little information about those settings. Is there any documentation in the wild about them? Are they enabled by default? On Thu, May 18, 2023 at 9:15 PM Tarrago, Eli (RIS-BCT) < eli.tarr...@lexisnex
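Both options exist and default to true; the usual "dedicated gateways" pattern is to disable lifecycle and garbage-collection processing on the client-facing RGWs and leave it enabled on a housekeeping instance. A hedged sketch (the section name is a placeholder):
[client.rgw.public1]
rgw_enable_lc_threads = false
rgw_enable_gc_threads = false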

[ceph-users] Re: Deleting millions of objects

2023-05-18 Thread Rok Jaklič
g set > > > WHO: client. or client.rgw > > KEY: rgw_delete_multi_obj_max_num > > VALUE: 1 > > Regards, Joachim > > ___ > ceph ambassador DACH > ceph consultant since 2012 > > Clyso GmbH - Premier Ceph Foundation Memb

[ceph-users] Re: Deleting millions of objects

2023-05-17 Thread Rok Jaklič
ete_multi_obj_max_num > > rgw_delete_multi_obj_max_num - Max number of objects in a single multi- > object delete request > (int, advanced) > Default: 1000 > Can update at runtime: true > Services: [rgw] > > On Wed, 2023-05-17 at 10:51 +0200, Rok Jaklič wrote: &g

[ceph-users] Deleting millions of objects

2023-05-17 Thread Rok Jaklič
Hi, I would like to delete millions of objects in RGW instance with: mc rm --recursive --force ceph/archive/veeam but it seems it allows only 1000 (or 1002 exactly) removals per command. How can I delete/remove all objects with some prefix? Kind regards, Rok
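As the replies above note, the 1000-object cap comes from rgw_delete_multi_obj_max_num. A hedged sketch of raising it at runtime (the value is an example; the client still has to batch its requests accordingly):
ceph config set client.rgw rgw_delete_multi_obj_max_num 10000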

[ceph-users] Re: Moving From BlueJeans to Jitsi for Ceph meetings

2023-03-22 Thread Rok Jaklič
We deployed Jitsi for the public sector during covid and it is still free to use. https://vid.arnes.si/ --- However, the landing page is in the Slovene language, and for future reservations you need an AAI (SSO) account (which you get if you are part of a public organization (school, faculty, ...)).

[ceph-users] Re: rbd on EC pool with fast and extremely slow writes/reads

2023-03-14 Thread Rok Jaklič
Once or twice a year we see a similar problem in a *non*-ceph disk cluster, where working but slow disk writes give us slow reads. We somehow "understand" it, since slow writes probably fill up queues and buffers. On Thu, Mar 9, 2023 at 11:37 AM Andrej Filipcic wrote: > > Thanks for the hint

[ceph-users] rgw with unix socket

2022-10-17 Thread Rok Jaklič
Hi, I try to configure ceph with rgw and unix socket (based on https://docs.ceph.com/en/pacific/man/8/radosgw/?highlight=radosgw). I have in ceph.conf something like this: [client.radosgw.ctplmon3] host = ctplmon3 rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock log file = /var/lo

[ceph-users] Re: RGW problems after upgrade to 16.2.10

2022-09-21 Thread Rok Jaklič
Solution was found by colleague and it was: ms_mon_client_mode = crc ... because of https://github.com/ceph/ceph/pull/42587/commits/7e22d2a31d277ab3eecff47b0864b206a32e2332 Rok On Thu, Sep 8, 2022 at 6:04 PM Rok Jaklič wrote: > What credentials should RGWs have? > > I have inte

[ceph-users] Requested range is not satisfiable

2022-09-17 Thread Rok Jaklič
Hi, we are trying to copy a big file (over 400GB) to the ceph cluster using a minio client. The copy, or rather the transfer, takes a lot of time (2 days for example) because of a "slow connection". Usually somewhere near the end (but it looks random) we get an error like: Failed to copy `/360GB.bigfile.img`. The req

[ceph-users] Re: Manual deployment, documentation error?

2022-09-15 Thread Rok Jaklič
Every now and then someone comes up with a subject like this. There is quite a long thread about pros and cons using docker and all tools around ceph on https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TTTYKRVWJOR7LOQ3UCQAZQR32R7YADVY/#AT7YQV6RE5SMKDZHXL3ZI2G5BWFUUUXE Long story sh

[ceph-users] Re: RGW problems after upgrade to 16.2.10

2022-09-08 Thread Rok Jaklič
-13 error code represents permission denied > b. You’ve commented out the keyring configuration in ceph.conf > > So do your RGWs have appropriate credentials? > > Eric > (he/him) > > > On Sep 7, 2022, at 3:04 AM, Rok Jaklič wrote: > > > > Hi, >

[ceph-users] RGW problems after upgrade to 16.2.10

2022-09-07 Thread Rok Jaklič
Hi, after upgrading to ceph version 16.2.10 from 16.2.7 rados gw is not working. We start rados gw with: radosgw -c /etc/ceph/ceph.conf --setuser ceph --setgroup ceph -n client.radosgw.ctplmon3 ceph.conf looks like: [root@ctplmon3 ~]# cat /etc/ceph/ceph.conf [global] fsid = 0a6e5422-ac75-4093-af2

[ceph-users] Re: Wrong size actual?

2022-09-06 Thread Rok Jaklič
ven deleting the bucket seems to leave the objects in the rados pool > forever. > > Ciao, Uli > > > Am 05.09.2022 um 15:19 schrieb Rok Jaklič : > > > > Hi, > > > > when I do: > > radosgw-admin user stats --uid=X --tenant=Y --sync-stats > > &

[ceph-users] Wrong size actual?

2022-09-05 Thread Rok Jaklič
Hi, when I do: radosgw-admin user stats --uid=X --tenant=Y --sync-stats I get: { "stats": { "size": 2620347285776, "size_actual": 2620348436480, "size_utilized": 0, "size_kb": 2558932897, "size_kb_actual": 2558934020, "size_kb_utilized": 0,

[ceph-users] Tenant and user id

2022-02-16 Thread Rok Jaklič
Hi, is it possible to get tenant and user id with some python boto3 request? Kind regards, Rok ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] reducing mon_initial_members

2021-09-29 Thread Rok Jaklič
Can I reduce mon_initial_members to one host after already being set to two hosts? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: we're living in 2005.

2021-07-27 Thread Rok Jaklič
Actually, some of us tried to contribute to the documentation but were stopped by failed build checks for some reason. While most of it is ok, in some places the documentation is vague or missing (maybe also the reason why this thread is so long). One example: https://github.com/ceph/ceph/pull/409

[ceph-users] Limiting subuser to his bucket

2021-07-21 Thread Rok Jaklič
Hi, is it possible to limit a subuser's access so that he sees (reads, writes) only "his" bucket? And also be able to create a bucket inside that bucket? Kind regards, Rok ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to c

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-25 Thread Rok Jaklič
This thread would not be so long if docker/containers had solved the problems, but they did not. They solved some, but introduced new ones. So we cannot really say it's better now. Again, I think the focus should be more on a working ceph with clean documentation, while leaving software management and packages to adm

[ceph-users] Re: ceph buckets

2021-06-08 Thread Rok Jaklič
Which mode is that and where can I set it? This one described in https://docs.ceph.com/en/latest/radosgw/multitenancy/ ? On Tue, Jun 8, 2021 at 2:24 PM Janne Johansson wrote: > Den tis 8 juni 2021 kl 12:38 skrev Rok Jaklič : > > Hi, > > I try to create buckets through rgw in
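With the multi-tenant mode from that link, each tenant gets its own bucket namespace, so two users under different tenants can both own a bucket1. A hedged sketch (tenant and user names are placeholders):
radosgw-admin user create --tenant t1 --uid user1 --display-name "User One"
radosgw-admin user create --tenant t2 --uid user2 --display-name "User Two"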

[ceph-users] ceph buckets

2021-06-08 Thread Rok Jaklič
Hi, I try to create buckets through rgw in following order: - *bucket1* with *user1* with *access_key1* and *secret_key1* - *bucket1* with *user2* with *access_key2* and *secret_key2* when I try to create a second bucket1 with user2 I get *Error response code BucketAlreadyExists.* Why? Should no

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-02 Thread Rok Jaklič
In these gigabyte/terabyte times, all this dependency hell can be avoided with some static linking. For example, we use statically linked mysql binaries and it has saved us numerous times. https://youtu.be/5PmHRSeA2c8?t=490 Rok On Wed, Jun 2, 2021 at 9:57 PM Harry G. Coin wrote: > > On 6/2/21 2:

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-02 Thread Rok Jaklič
I agree, simplifying "deployment" by adding another layer of complexity brings many more problems and hard times when something goes wrong at runtime. A few additional steps at the "install phase" and a better understanding of the underlying architecture, commands, whatever ... have much more pros tha

[ceph-users] time duration of radosgw-admin

2021-06-01 Thread Rok Jaklič
Hi, is it normal that radosgw-admin user info --uid=user ... takes around 3s or more? Other radosgw-admin commands also take quite a lot of time. Kind regards, Rok ___ ceph-users mailing list -- ceph-user

[ceph-users] Re: rebalancing after node more

2021-05-27 Thread Rok Jaklič
5 hosts (with > failure domain host) your PGs become undersized when a host fails and > won't recover until the OSDs come back. Which ceph version is this? > > > Zitat von Rok Jaklič : > > > For this pool I have set EC 3+2 (so in total I have 5 nodes) which one > was &g

[ceph-users] Re: rebalancing after node more

2021-05-27 Thread Rok Jaklič
For this pool I have set EC 3+2 (and in total I have 5 nodes), of which one was temporarily removed, but maybe this was the problem? On Thu, May 27, 2021 at 3:51 PM Rok Jaklič wrote: > Hi, thanks for the quick reply > > root@ctplmon1:~# ceph pg dump pgs_brief | grep undersized > dumped pgs

[ceph-users] Re: rebalancing after node more

2021-05-27 Thread Rok Jaklič
ph osd pool ls detail > > and the crush rule(s) for the affected pool(s). > > > Zitat von Rok Jaklič : > > > Hi, > > > > I have removed one node, but now ceph seems to stuck in: > > Degraded data redundancy: 67/2393 objects degraded (2.800%), 12 pgs > >

[ceph-users] rebalancing after node more

2021-05-27 Thread Rok Jaklič
Hi, I have removed one node, but now ceph seems to be stuck in: Degraded data redundancy: 67/2393 objects degraded (2.800%), 12 pgs degraded, 12 pgs undersized How do I "force" rebalancing? Or should I just wait a little bit more? Kind regards, rok ___ ceph

[ceph-users] Re: ceph osd df size shows wrong, smaller number

2021-05-21 Thread Rok Jaklič
00 AM Janne Johansson wrote: > Den fre 21 maj 2021 kl 10:49 skrev Rok Jaklič : > > It shows > > sdb8:16 0 5.5T 0 disk /var/lib/ceph/osd/ceph-56 > > That one says osd-56, you asked about why osd 85 was small in ceph osd df > > > >> Den

[ceph-users] Re: ceph osd df size shows wrong, smaller number

2021-05-21 Thread Rok Jaklič
$ID --mkfs --osd-uuid $UUID --data /dev/sdb chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID/ --- and there 100G block file resides. On Fri, May 21, 2021 at 9:59 AM Janne Johansson wrote: > Den fre 21 maj 2021 kl 09:41 skrev Rok Jaklič : > > why would ceph osd df show in SIZE field small
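For a whole-device bluestore OSD the more common route (a hedged alternative, not what was done above) is to let ceph-volume consume the device, which avoids ending up with a fixed-size block file on the root filesystem:
ceph-volume lvm create --data /dev/sdb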

[ceph-users] ceph osd df size shows wrong, smaller number

2021-05-21 Thread Rok Jaklič
Hi, why would ceph osd df show a smaller number in the SIZE field than there actually is: 85 hdd 0.8 1.0 100 GiB 96 GiB 95 GiB 289 KiB 952 MiB 4.3 GiB 95.68 3.37 10 up Instead of 100GiB there should be 5.5TiB. Kind regards, Rok ___ c

[ceph-users] Re: Configuring an S3 gateway

2021-04-22 Thread Rok Jaklič
I agree. The documentation here is pretty vague. The systemd services for OSDs on Ubuntu 20.04 with Ceph Pacific 16.2.1 do not work either, so I have to run them manually with /usr/bin/ceph-osd -f --cluster ceph --id some-number --setuser ceph --setgroup ceph I think it would be much better if doc

[ceph-users] Ceph Object Gateway setup/tutorial

2021-03-02 Thread Rok Jaklič
Hi, installation of the cluster/OSDs went "by the book" https://docs.ceph.com/, but now I want to set up the Ceph Object Gateway, and the documentation on https://docs.ceph.com/en/latest/radosgw/ seems to lack information about what and where to restart, for example when setting [client.rgw.gateway-node1] in /e
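With a [client.rgw.gateway-node1] section in ceph.conf, the matching packaged systemd instance on that node is usually restarted like this (a hedged sketch; the exact unit name depends on how the daemon was deployed):
systemctl restart ceph-radosgw@rgw.gateway-node1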

[ceph-users] File listing with browser

2019-10-16 Thread Rok Jaklič
Hi, I installed the ceph object gateway and I have put one test object onto storage. I can see it with rados -p mytest ls How do I set up ceph so that users can access (download, upload) files in this pool? Kind regards, Rok ___ ceph-users mailing list -- ceph
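Note that objects written with rados -p land in a bare RADOS pool and are not visible through RGW; for browser/S3 access you typically create an RGW user and upload through the S3 API instead. A hedged sketch (user, alias, and bucket names are placeholders):
radosgw-admin user create --uid=testuser --display-name="Test User"
mc alias set ceph http://rgw-host:8080 <access_key> <secret_key>
mc mb ceph/testbucket
mc cp ./file.txt ceph/testbucket/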