Hi,
have you read https://docs.ceph.com/en/reef/cephadm/install/ ?
Bootstrapping a new cluster should be as easy as
# cephadm bootstrap --mon-ip **
if the nodes fulfill the requirements:
- Python 3
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or ntpd)
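If it helps, a minimal sketch of the flow once those requirements are met (the IP addresses and hostname below are placeholders, not values from this thread):

  # cephadm bootstrap --mon-ip 10.0.0.1            # create the first monitor/manager on this host
  # ceph orch host add node2 10.0.0.2              # enroll additional hosts into the cluster
  # ceph orch apply osd --all-available-devices    # optionally let cephadm create OSDs on free disks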
Hi community,
I'm running a large S3 service with ingress and backend RGW; the total real-time
traffic of my cluster is ~30 Gbps.
I have a problem with lifecycle in RGW: objects can't be deleted, with the
following log
Nov 27 13:46:29 ceph-osd-211 bash[1836289]: debug
2023-11-27T06:46:29.933+ 7f0d
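For context, the stock radosgw-admin lifecycle commands one would normally check first in this situation (a generic debugging sketch, not taken from the original message):

  # radosgw-admin lc list        # show per-bucket lifecycle processing status
  # radosgw-admin lc process     # force a lifecycle run instead of waiting for the work window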
Hi team,
Any update on this?
Thanks & Regards
Arihant Jain
On Mon, 27 Nov, 2023, 8:07 am AJ_ sunny, wrote:
> ++adding
> @ceph-users-confirm+4555fdc6282a38c849f4d27a40339f1b7e4bd...@ceph.io
>
> ++Adding d...@ceph.io
>
>
> Thanks & Regards
> Arihant Jain
>
> On Mon, 27 Nov, 2023, 7:48 am AJ_ sunny, wrote:
On 11/27/23 13:12, zxcs wrote:
Currently, we are using `ceph config set mds mds_bal_interval 3600` to set a
fixed interval (1 hour).
We also have a question about how to disable balancing with multiple active MDS,
i.e. we will enable multiple active MDS (to improve throughput) but want no
balancing between these MDS,
and
Currently, we are using `ceph config set mds mds_bal_interval 3600` to set a fixed
interval (1 hour).
We also have a question about how to disable balancing with multiple active MDS,
i.e. we will enable multiple active MDS (to improve throughput) but want no
balancing between these MDS,
and if we set mds_bal_interval as big
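For anyone looking for the usual way to keep multiple active MDS while avoiding dynamic balancing, the CephFS docs describe subtree pinning via extended attributes; a minimal sketch, assuming a CephFS mount at /mnt/cephfs with purely illustrative directory names:

  # pin one top-level directory to MDS rank 0 and another to rank 1
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/project-a
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/project-b
  # or distribute a directory's immediate children across ranks (ephemeral distributed pinning)
  setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home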
Dear Cephers,
With the dmclock improvements in 17.2.7, we are considering upgrading from
17.2.5 to 17.2.7.
However, we are seeing this, which is worrisome:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/SG7CKALU3AIWEIVN7QENIY3KRETUQKM7/
Any suggestions for a smooth upgrade?
Regards,
Ben
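For what it's worth, on a cephadm-managed cluster the version change itself is usually driven by the orchestrator; a minimal sketch, assuming cephadm (the target version is just the one mentioned above):

  # ceph orch upgrade start --ceph-version 17.2.7   # begin a rolling, daemon-by-daemon upgrade
  # ceph orch upgrade status                        # monitor progress
  # ceph orch upgrade pause                         # pause (or resume/stop) if something looks off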
On 11/24/23 21:37, Frank Schilder wrote:
Hi Xiubo,
thanks for the update. I will test your scripts in our system next week.
Something important: running both scripts on a single client will not produce a
difference. You need 2 clients. The inconsistency is between clients, not on
the same client.
with the same MDS configuration, we see exactly the same (problem, log and
solution) with 17.2.5, constantly happening again and again at intervals of a
couple of days. The MDS servers get stuck somewhere, yet ceph status reports no
issue. We need to restart some of the MDS daemons (if not all of them) to restore
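For readers hitting the same symptom, a generic sketch of what restarting MDS daemons typically looks like on a cephadm-managed cluster (the daemon name below is a placeholder, not from this thread):

  # ceph orch ps --daemon-type mds                    # list the MDS daemons and their exact names
  # ceph orch daemon restart mds.cephfs.host1.abcdef  # restart one of them
  # ceph fs status                                    # verify the ranks become active again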
++adding
@ceph-users-confirm+4555fdc6282a38c849f4d27a40339f1b7e4bd...@ceph.io
++Adding d...@ceph.io
Thanks & Regards
Arihant Jain
On Mon, 27 Nov, 2023, 7:48 am AJ_ sunny, wrote:
> Hi team,
>
> After doing the above changes I am still getting the issue in which the machine
> continuously shuts down
I'm pulling my hair out trying to get a simple cluster going. I first tried
Gluster, but I have an old system that can't handle the latest version, so I
resorted to Ceph. However, I can't get my cluster to work. I tried to find
tutorials, but everything uses tools on top of Ceph, whereas I'm trying to
use