ntion to the correct configs etc.
> Each cluster has its own UUID and separate MONs, it should just work.
> If not, let us know. ;-)
>
> Quoting David Yang:
>
> > Hi Eugen
> >
> > Do you mean that it is possible to create multiple clusters on one
> > infrastructure?
up clusters and configure mirroring on both of them. You
> should already have enough space (enough OSDs) to mirror both pools,
> so that would work. You can colocate MONs with OSDs so you don't need
> additional hardware for that.
>
> Regards,
> Eugen
>
> Quoting David Yang:
Hello, everyone.
Under normal circumstances, we synchronize from PoolA of ClusterA to
PoolA of ClusterB (same pool name), which is also easy to configure.
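For reference, the usual same-name setup is roughly this (a minimal sketch assuming one-way, journal-based mirroring; 'PoolA', 'image1' and the site names 'site-a'/'site-b' are placeholder examples, and an rbd-mirror daemon must be running on the receiving cluster):

# on ClusterA (primary)
rbd mirror pool enable PoolA image
rbd mirror pool peer bootstrap create --site-name site-a PoolA > /tmp/bootstrap_token
rbd mirror image enable PoolA/image1 journal

# on ClusterB (backup), after copying the token over
rbd mirror pool enable PoolA image
rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only PoolA /tmp/bootstrap_token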
Now the requirements are as follows:
ClusterA/Pool synchronizes to BackCluster/PoolA
ClusterB/Pool synchronizes to BackCluster/PoolB
After re
We have Ceph clusters in multiple regions providing RBD services.
We are currently preparing a remote backup plan: synchronizing the pools,
which have the same name in each region, to different pools in one
backup cluster.
For example:
Cluster A Pool synchronized to backup cluster poolA
Cluster B Pool sy
Hello everyone.
I have a cluster with 8321 PGs, and recently I started to get "pgs not
deep-scrubbed in time" warnings.
The reason is that I reduced osd_max_scrubs to limit the impact of
scrubbing on client IO.
Here is my current scrub configuration:
~]# ceph tell osd.1 config show|grep scrub
"mds_max_scrub_ops_in_progress":
Hi Erich,
When MDS cache usage is very high, recovery is very slow.
So I use this command to drop the MDS cache:
ceph tell mds.* cache drop 600
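To confirm it helps, cache usage can be checked on the MDS host before and after the drop (a sketch; mds.<name> stands for whichever daemon you are inspecting):

ceph daemon mds.<name> cache status
ceph daemon mds.<name> perf dump mds_mem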
On Tue, Apr 23, 2024 at 16:36, Lars Köppel wrote:
>
> Hi Erich,
>
> great that you recovered from this.
> It sounds like you had the same problem I had a few months ago.
> mds cr
This is great; we are currently using the SMB protocol heavily to
export kernel-mounted CephFS.
But I encountered a problem: when many SMB clients enumerate or list
the same directory, the SMB server experiences high load and the smbd
process ends up in the D (uninterruptible sleep) state.
This problem has
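When it happens, it can help to see what the D-state smbd threads are actually blocked on (a sketch using standard Linux tools, nothing Samba- or Ceph-specific assumed):

# list processes in uninterruptible sleep and the kernel function they wait in
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
# for one stuck PID, dump its kernel stack (often points at a CephFS/MDS wait)
cat /proc/<pid>/stack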
You can use the "ceph health detail" command to see which clients are
not responding.
It is recommended to disconnect the client first and then observe
whether the cluster's slow requests recover.
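A minimal sketch of that workflow (the MDS name and client id are placeholders):

# find the client ids behind the "failing to respond" warnings
ceph health detail
# map the client id to a hostname/mount on the affected MDS
ceph tell mds.<name> client ls
# evict (disconnect) the unresponsive client, then watch whether the slow requests clear
ceph tell mds.<name> client evict id=<client-id>
ceph -s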
On Tue, Mar 26, 2024 at 05:02, Erich Weiler wrote:
>
> Hi Y'all,
>
> I'm seeing this warning via 'ceph -s' (this is on Reef):
>
> # ceph -s
>cluster:
> id: 58bde08a-d7ed-11ee-9098-506b4b4d
The 2*10Gbps shared network seems to be full (1.9GB/s).
Is it possible to reduce part of the workload and wait for the cluster
to return to a healthy state?
Tip: erasure-coded pools need to collect data chunks from the surviving
OSDs when recovering, so recovery consumes a lot of network bandwidth
and CPU resources.
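If the workload cannot be reduced, recovery/backfill traffic can be throttled instead (a sketch; the values are only examples, and on releases using the mClock scheduler these classic options may be ignored unless explicitly overridden):

# limit concurrent backfills and recovery ops per OSD
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
# add a small sleep between recovery ops to leave headroom for client IO
ceph config set osd osd_recovery_sleep 0.1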
> with manual pinning
> directories to mds ranks.
>
> Best regards,
> Sake
>
>
> On 31 Dec 2023 09:01, David Yang wrote:
>
> I hope this message finds you well.
>
> I have a cephfs cluster with 3 active mds, and use 3-node samba to
> export through the kernel.
>
>
I hope this message finds you well.
I have a CephFS cluster with 3 active MDS daemons, and I use a 3-node
Samba setup to export the kernel-mounted filesystem.
Currently, two of the MDS daemons are experiencing slow requests. We have
tried restarting the MDS; after a few hours in the replay state, its
status became active again.
But the slow requ
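To see what the remaining slow requests are stuck on, the op tracker on the affected MDS can be dumped (a sketch; run on the MDS host, with mds.<name> as a placeholder):

ceph daemon mds.<name> ops
ceph daemon mds.<name> dump_historic_ops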
VmScan : nos=0 gon=0 bsy=0 can=0 wt=0
Ops : pend=0 run=0 enq=0 can=0 rej=0
Ops : ini=0 dfr=0 rel=0 gc=0
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
On Sat, Sep 24, 2022 at 15:08, David Yang wrote:
CacheOp: inv=0 upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
CacheEv: nsp=0 stl=0 rtr=0 cul=0
Does this feature need to be used together with cachefiles?
On Fri, Sep 23, 2022 at 21:34, David Yang wrote:
> I found in some articles on the net that in their ceph.ko it depends on
>
sig_id: PKCS#7
signer:
sig_key:
sig_hashalgo: md4
On Fri, Sep 23, 2022 at 12:17, David Yang wrote:
> hi,
> I am using kernel client to mount cephFS filesystem on Centos8.2.
> But my ceph's kernel module does not contain fscache.
>
>
> [root@host ~]# uname -r
> 5.4.163-1.el8.elrepo.x86_64
> [
Hi,
I am using the kernel client to mount a CephFS filesystem on CentOS 8.2,
but my ceph kernel module does not include fscache support.
[root@host ~]# uname -r
5.4.163-1.el8.elrepo.x86_64
[root@host ~]# lsmod|grep ceph
ceph 446464 0
libceph 368640 1 ceph
dns_resolver 16384 1 libceph
libcrc32c 16384 2 xfs,libceph
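Whether fscache support was compiled into ceph.ko can be checked against the running kernel's configuration (a sketch; the config path assumes a standard el8/elrepo kernel package):

# ceph.ko only uses fscache when the kernel is built with CONFIG_CEPH_FSCACHE
grep -E 'CONFIG_FSCACHE|CONFIG_CEPH_FSCACHE' /boot/config-$(uname -r)
# if fscache was built as a module, it also shows up in the module dependencies
modinfo ceph | grep -i depends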
Dear all,
I have a CephFS storage cluster (Pacific) mounted on a Linux server
using the kernel client.
The mounted directory is then shared with Windows clients by deploying
the Samba service.
Sometimes we find that some workloads from Windows generate a lot of
metadata
Did you add the configuration directly to ceph.conf?
From other people's posts it looks like Ceph needs to be recompiled after adding RDMA.
I'm also going to try RDMA mode now, but haven't found much more information.
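For reference, the messenger settings people usually put in ceph.conf for RDMA look roughly like this (a sketch only; mlx5_0 is an example device name, and it still has to be verified that the installed packages were built with RDMA support):

[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0
# alternatively, keep client-facing traffic on TCP and use RDMA only on the cluster network
# ms_cluster_type = async+rdma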
On Tue, Feb 1, 2022 at 20:31, sascha a. wrote:
> Hey,
>
> I Recently found this RDMA feature of ceph. Which I'm curr
Hi, I have also encountered this problem before. I did not do anything
else, I just added as large an SSD as possible to create a swap partition.
At the peak, while the OSDs were recovering, a storage node used up to
2 TB of swap. Then, after the OSDs booted back to normal, the memory was
released and returned to
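For anyone wanting to try the same workaround, the swap itself is just standard Linux setup (a sketch; /dev/sdx stands for the spare SSD):

# dedicate the spare SSD as swap so OSD memory peaks during recovery do not hit the OOM killer
mkswap /dev/sdx
swapon /dev/sdx
# make it persistent across reboots
echo '/dev/sdx none swap defaults 0 0' >> /etc/fstab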
hi, buddy
I have a ceph file system cluster, using ceph version 15.2.14.
But the current status of the cluster is HEALTH_ERR.
health: HEALTH_ERR
Module 'devicehealth' has failed:
The content in the mgr log is as follows:
2021-09-05T13:20:32.922+0800 7f2b8621b700 0 log_channel(a
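As a first step it usually helps to grab the module's traceback and fail over the mgr so the module is reloaded (a sketch; these are generic mgr commands, not a guaranteed fix):

# the traceback is usually shown here, and a crash report may have been recorded
ceph health detail
ceph crash ls
# fail over to a standby mgr, which reloads the modules
ceph mgr fail <active-mgr-name>
# confirm module state afterwards
ceph mgr module ls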
osd.66 up 0 1.0
On Wed, Aug 11, 2021 at 2:35 PM, Eneko Lacunza wrote:
> Hi David,
>
> You need to provide the details for each node; OSDs with their size and
> pool configuration.
>
> > On 11/8/21 at 5:30, David Yang wrote:
> > There is also a set of mo
There is also a set of mon+mgr+mds running on one of the storage nodes.
On Wed, Aug 11, 2021 at 11:24 AM, David Yang wrote:
> hi
> I have a cluster of 5 storage nodes + 2 (mon+mds+mgr) nodes for file
> system storage. The usage is very good.
>
> The cluster is now being expanded by adding storage
hi
I have a cluster of 5 storage nodes + 2 (mon+mds+mgr) nodes for file
system storage, and it has been working very well.
The cluster is now being expanded by adding storage nodes.
But while the data was being backfilled, I found that the total space of
the storage pool was decreasing.
I had to mark the newly ad
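One thing worth checking here: the MAX AVAIL that 'ceph df' reports for a pool is projected from the fullest OSD the pool maps to, so while backfill temporarily leaves some OSDs much fuller than others, the pool's reported capacity can shrink even though raw capacity grew. A quick way to watch this (a sketch):

# per-pool capacity; MAX AVAIL follows the most-full OSD in the pool's CRUSH subtree
ceph df detail
# per-OSD fill levels, to spot the OSDs constraining MAX AVAIL during backfill
ceph osd df tree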