Hi,
I used cephadm to set up my Ceph cluster and now I've noticed that it
installed everything in Docker containers.
Is there any documentation or comparison about the differences between
containerized and non-containerized installs? Where are the config
files in both setups? (I noticed that there are
>
> I used cephadm to set up my Ceph cluster and now I've noticed that it
> installed everything in Docker containers.
> Is there any documentation or comparison about the differences between
> containerized and non-containerized installs?
Check the mailing list history; quite a lot has been written about this. If
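In short (a rough sketch only; the fsid and daemon names below are
placeholders): with cephadm most settings live in the cluster's central config
database rather than in a hand-edited ceph.conf, and each containerized daemon
gets a minimal config under its fsid directory, e.g.:

    # query/set options in the central config database (works in both setups)
    ceph config get mon public_network
    ceph config set osd osd_memory_target 4294967296
    # generate a minimal ceph.conf for clients
    ceph config generate-minimal-conf
    # per-daemon files of a cephadm deployment live under the fsid directory
    ls /var/lib/ceph/<fsid>/mon.<hostname>/
    # a classic package-based install keeps its config in the usual place
    cat /etc/ceph/ceph.conf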
Thanks for the info.
Is there any bug report open?
On Mon, Sep 5, 2022 at 4:44 PM Ulrich Klein wrote:
> Looks like the old problem of lost multipart upload fragments. Has been
> haunting me in all versions for more than a year. Haven't found any way of
> getting rid of them.
> Even deleting the
You could use `rgw-orphan-list` to determine which RADOS objects aren't
referenced by any bucket index. Those objects could then be removed, but only
after verification, since this is an experimental feature.
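For example (just a sketch; the pool name is whatever your RGW data pool is
actually called, e.g. default.rgw.buckets.data):

    # produce a list of RADOS objects not referenced by any bucket index
    rgw-orphan-list default.rgw.buckets.data
    # inspect a few entries from the generated list before deleting anything
    rados -p default.rgw.buckets.data stat <object-name-from-the-list>
    # only once you are sure an object really is orphaned:
    rados -p default.rgw.buckets.data rm <object-name-from-the-list>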
Eric
(he/him)
> On Sep 5, 2022, at 10:44 AM, Ulrich Klein wrote:
>
> Looks like the old problem of
Hi Daniel,
My installation was also done with cephadm, in Docker containers. If you do all
your operations (for instance adding or removing services) with `ceph orch`,
cephadm manages all the services perfectly (see the example below). Pay close
attention to the documentation available on the internet. A lot of sources
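For example, all of our day-to-day changes are just `ceph orch` calls (the
service and daemon names below are only illustrative):

    # see what cephadm is running and where
    ceph orch ls
    ceph orch ps
    # add or scale a service through the orchestrator, never by hand
    ceph orch apply mgr 2
    # remove a single daemon
    ceph orch daemon rm mgr.host1.abcdef --force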
Hm, there are a couple, like https://tracker.ceph.com/issues/44660, but none
with a resolution.
It’s a real problem for us because it accumulates “lost” space and screws up
space accounting.
But the ticket is classified as “3 - minor”, i.e. apparently not seen as urgent
for the last couple of years
I’m not sure anymore, but I think I tried that on a test system. Afterwards I
had to recreate the RGW pools and start over, so I didn’t try it on a “real”
system.
But I can try again in about 2 weeks. It’s dead simple to recreate the problem
(https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io
Yes. Rotational drives can generally do 100-200 IOPS (some outliers, of
course). Do you have all forms of caching disabled on your storage
controllers/disks?
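If you want to sanity-check what mclock measured, something like this (osd.0 is
just an example):

    # the IOPS capacity mclock is currently assuming for a spinner
    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd
    # run the OSD bench by hand and compare
    ceph tell osd.0 bench
    # or force the benchmark to run again at the next start
    ceph config set osd.0 osd_mclock_force_run_benchmark_on_init true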
On Tue, Sep 6, 2022 at 11:32 AM Vladimir Brik <vladimir.b...@icecube.wisc.edu> wrote:
> Setting osd_mclock_force_run_benchmark_on_init to true
Hello everyone,
We are looking at clustering Samba with CTDB to provide highly available
access to CephFS for clients.
I wanted to see how others have implemented this, and what their experiences
have been so far.
I would welcome all feedback, and of course if you happen to have any
documentation on what you did so that
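For context, the rough shape of what we have in mind (hostnames and paths are
just placeholders, nothing tested yet):

    # every Samba node mounts the same CephFS
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=samba,secretfile=/etc/ceph/samba.secret
    # smbd runs with 'clustering = yes' and shares a path under that mount,
    # while CTDB coordinates the nodes and floats the public IPs
    ctdb status
    ctdb ip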
Our cluster has not had any data written to it externally in several weeks,
yet the overall data usage has been growing.
Is this due to heavy recovery activity? If so, what can be done (if anything)
to reduce the data generated during recovery?
We've been trying to move PGs away from high
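The growth we're seeing is just from the standard views, e.g.:

    ceph df detail      # per-pool stored vs. raw usage
    ceph osd df tree    # per-OSD utilization and imbalance
    ceph status         # ongoing recovery/backfill and misplaced objects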