>>> On Wed, 17 May 2023 16:52:28 -0500, Harry G Coin said:
> I have two autofs entries that mount the same cephfs file
> system to two different mountpoints. Accessing the first of
> the two fails with 'stale file handle'. The second works
> normally. [...]
Something pretty close to that w
Hi,
the config options you mention should work, but not in the ceph.conf.
You should set it via 'ceph config set …' and then restart the daemons
(ceph orch daemon restart osd).
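A sketch of that pattern (assuming, from the slurmd port mentioned in the
original mail, that the options in question are the messenger bind ports;
the port value below is purely illustrative):

# set the option in the cluster config database instead of ceph.conf
ceph config set osd ms_bind_port_min 6830
# confirm the value is active in the config database
ceph config get osd ms_bind_port_min
# then restart the affected daemons, e.g. the whole OSD service
ceph orch restart osd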
Quoting Renata Callado Borges:
Dear all,
How are you?
I have a 3-node Pacific cluster, and the machines do
Hi Ben,
After chown to 472, “systemctl daemon-reload” changes it back to 167.
I also notice that these are still from docker.io while the rest are from quay.
/home/general# docker ps --no-trunc | grep docker
93b8c3aa33580fb6f4951849a6ff9c2e66270eb913b8579aca58371ef41f2d6c
docker.io/grafana/
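A quick way to see which registry each running container was pulled from
(plain docker CLI, nothing cephadm-specific):

# list container name and image reference; docker.io/... was pulled from
# Docker Hub, quay.io/... from Quay
docker ps --format '{{.Names}} {{.Image}}' | sort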
Hi,
> On 18 May 2023, at 23:04, Rok Jaklič wrote:
>
> I've searched for rgw_enable_lc_threads and rgw_enable_gc_threads a bit.
>
> but there is little information about those settings. Is there any
> documentation in the wild about those settings?
This is Life Cycle (see S3 lifecycle policy do
I've searched for rgw_enable_lc_threads and rgw_enable_gc_threads a bit.
but there is little information about those settings. Is there any
documentation in the wild about those settings?
Are they enabled by default?
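(One way to check this on a running cluster is the built-in option help,
which prints the description and the default value; standard ceph CLI:)

# built-in documentation and default for each setting
ceph config help rgw_enable_lc_threads
ceph config help rgw_enable_gc_threads
# what a given gateway section currently has configured
ceph config get client.rgw rgw_enable_lc_threads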
On Thu, May 18, 2023 at 9:15 PM Tarrago, Eli (RIS-BCT) <eli.tarr...@lexisnex
Adding a bit more context to this thread.
I added an additional radosgw to each cluster. Radosgw 1-3 are customer-facing.
Radosgw #4 is dedicated to syncing.
Radosgw 1-3 now have the additional lines:
rgw_enable_lc_threads = False
rgw_enable_gc_threads = False
Radosgw4 has the additional line:
rgw
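(For reference, roughly how the same split could be applied through the
config database rather than per-host config files; the gateway section
names below are placeholders, not the real instance names:)

# client-facing gateways: disable lifecycle and garbage-collection threads
ceph config set client.rgw.rgw1 rgw_enable_lc_threads false
ceph config set client.rgw.rgw1 rgw_enable_gc_threads false
# the sync-dedicated gateway keeps both enabled (the defaults)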
Hey all,
Here are the minutes from today's meeting:
Reef RC0 status?
- ETA is the last week of May
- Several users have volunteered to help with Reef scale testing; efforts
are documented in this etherpad: https://pad.ceph.com/p/reef_scale_testing
[Ben Gao] feature request: for non root user wit
Dear all,
How are you?
I have a 3-node Pacific cluster, and the machines do double-duty as
Ceph nodes and as Slurm clients. (I am well aware that this is not
desirable, but my client wants it like this anyway).
Our Slurm install uses port 6818 for slurmd everywhere.
In one of our Ceph
Hey all,
Ceph Quarterly announcement [Josh and Zac]
One-page digest that may be published quarterly
Planning for the 1st of June, September, and December
Reef RC
https://pad.ceph.com/p/reef_scale_testing
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes#L17
ETA last week of May
Missing
Thanks, Ken! Using copr sounds like a great way to unblock testing for
Reef until everything lands in EPEL.
For the teuthology part, I raised a pull request against teuthology's
install task to add support for copr repositories
(https://github.com/ceph/teuthology/pull/1844) and updated my ceph PR
th
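(In case it helps anyone testing locally, enabling a copr repository is the
standard dnf plugin workflow; the owner/project name below is a placeholder:)

# install the copr plugin and enable a copr repository
dnf install -y 'dnf-command(copr)'
dnf copr enable -y someowner/someproject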
Since 1000 is the hard-coded limit in AWS, maybe you need to set
something on the client as well? "client.rgw" should work for setting
the config in RGW.
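Something along these lines should do it (the value is only an example;
1000 is the default and matches the AWS per-request cap):

# raise the per-request multi-object delete limit for all RGW daemons
ceph config set client.rgw rgw_delete_multi_obj_max_num 10000
# verify
ceph config get client.rgw rgw_delete_multi_obj_max_num
# note: many S3 clients still split DeleteObjects requests at 1000 keys on
# their side, so the client may need its own adjustment as well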
Daniel
On 5/18/23 03:01, Rok Jaklič wrote:
Thanks for the input.
I tried several config sets, e.g.:
ceph config set client.radosgw.mon2 rgw_d
Thanks, Casey.
Currently, when I create a new bucket and specify the bucket location as
zone group 2, I expect the request to be handled by the master zone in zone
group 1, as that is the expected behavior. However, I noticed that regardless
of the specified bucket location, the zone group ID for all b
On Wed, May 17, 2023 at 11:13 PM Ramin Najjarbashi wrote:
>
> Hi
>
> I'm currently using Ceph version 16.2.7 and facing an issue with bucket
> creation in a multi-zone configuration. My setup includes two zone groups:
>
> ZG1 (Master) and ZG2, with one zone in each zone group (zone-1 in ZG1 and
>
> [...] We have this slow and limited delete issue also. [...]
That usually happens, apart from command-list length limitations,
because so many Ceph storage backends have too little committed IOPS
(for writes, but not only) for mass metadata (and
equally small data) operations, never mind for runnin
Hi, I don’t have a good explanation for this yet, but I’ll soon get
the opportunity to play around with a decommissioned cluster. I’ll try
to get a better understanding of the LRC plugin, but it might take
some time, especially since my vacation is coming up. :-)
I have some thoughts about th
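(If it helps with the experiments: a throwaway LRC profile like the one
below, with purely illustrative parameters, is a cheap way to poke at the
plugin on a test cluster; note that k+m must be divisible by l:)

# create and inspect a test LRC erasure-code profile
ceph osd erasure-code-profile set lrctest plugin=lrc k=4 m=2 l=3 crush-failure-domain=osd
ceph osd erasure-code-profile get lrctest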
Thanks for the input.
I tried several config sets, e.g.:
ceph config set client.radosgw.mon2 rgw_delete_multi_obj_max_num 1
ceph config set client.radosgw.mon1 rgw_delete_multi_obj_max_num 1
ceph config set client.rgw rgw_delete_multi_obj_max_num 1
where client.radosgw.mon2 is the same as