Hi All,
We set rgw_lc_max_rules to 1, but we see an issue when the XML rule
length is > 1 MB: it returns InvalidRange. The format is below,
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
... Enabled ... test_1/1 ...
.
.
.
Any reason why Ceph does not allow an LC rule length > 1 MB?
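For context, a minimal sketch of the kind of lifecycle configuration and upload
involved (bucket name, prefix and expiry days below are placeholders, and s3cmd
is just one way to push the XML to RGW):

cat > lifecycle.xml <<'EOF'
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>test_1</ID>
    <Prefix>test_1/1</Prefix>
    <Status>Enabled</Status>
    <Expiration><Days>30</Days></Expiration>
  </Rule>
  <!-- ...many more Rule blocks; with enough of them the body exceeds 1 MB... -->
</LifecycleConfiguration>
EOF

# assumes s3cmd is already configured against the RGW endpoint and the bucket exists
s3cmd setlifecycle lifecycle.xml s3://mybucket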
Thanks,
AmitG
Eugen, thanks for the tips.
I tried appending the key directly in the mount command
(secret=) and that produced the same error.
I took a look at the thread you suggested and ran the commands that Paul at
Croit suggested, even though the Ceph dashboard showed "cephfs" as already set
as the ap
I just remembered there was a thread [1] about that a couple of weeks
ago. Seems like you need to add the capabilities to the client.
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/23FDDSYBCDVMYGCUTALACPFAJYITLOHJ/#I6LJR72AJGOCGINVOVEVSCKRIWV5TTZ2
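I don't have the exact commands from that thread at hand, but the usual shape
of such a caps update is something like this (client name and pool name below
are only examples, adjust them to your cluster):

ceph auth caps client.1 \
  mon 'allow r' \
  mds 'allow rw' \
  osd 'allow rw pool=cephfs_data'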
Quote from Eugen Block:
Hi,
have you tried mounting with the secret directly instead of a secret file?
mount -t ceph ceph-n4:6789:/ /ceph -o name=client.1,secret=
If that works, your secret file is not right. If not, you should check
whether the client actually has access to the CephFS pools ('ceph auth
list').
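To look at just this one client instead of the full listing, 'ceph auth get
client.1' prints only its key and caps; for a client created with 'ceph fs
authorize' the caps usually look roughly like this (exact strings vary by
release):

ceph auth get client.1
# [client.1]
#         key = <redacted>
#         caps mds = "allow rw"
#         caps mon = "allow r"
#         caps osd = "allow rw tag cephfs data=cephfs"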
Quote from
I understand, so I expected slow requests (like "X slow requests are blocked > 32 sec"), but I was not expecting that heartbeats would be missed or
OSDs would be restarted.
Maybe this "hard recovery" was not tested enough.
Also I'm concerned that this OSD restart caused data degradation and recovery - cl
I am still very new to Ceph and I have just set up my first small test cluster.
I have CephFS enabled (named cephfs) and everything is good in the dashboard. I
added an authorized user key for cephfs with:
ceph fs authorize cephfs client.1 / r / rw
I then copied the key to a file with:
ceph au
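Roughly the flow I was following (the mon address and file path below are just
examples of the sort of thing I used):

# put only the key for client.1 into a root-readable file
ceph auth get-key client.1 > /etc/ceph/client.1.secret
chmod 600 /etc/ceph/client.1.secret

# then mount with the secret file
mount -t ceph ceph-n4:6789:/ /ceph -o name=client.1,secretfile=/etc/ceph/client.1.secret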
This is an expensive operation. You want to slow it down, not burden the OSDs.
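If what is hammering the OSDs is the recovery/backfill that follows, the usual
throttles look like this (the values are only conservative examples, not tuned
recommendations for your hardware):

ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_recovery_sleep_hdd 0.2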
> On Mar 21, 2020, at 5:46 AM, Jan Pekař - Imatic wrote:
>
> Each node has 64GB RAM so it should be enough (12 OSDs = 48GB used).
>
>> On 21/03/2020 13.14, XuYun wrote:
>> Bluestore requires more than 4G memory p
We had a similar problem caused by insufficient RAM: we have 6 OSDs and
32 GB RAM per host, and somehow the swap partition was used by the OS, which led
to sporadic performance problems.
> On Mar 21, 2020, at 8:45 PM, Jan Pekař - Imatic wrote:
>
> Each node has 64GB RAM so it should be enough (12 OSDs = 48GB used)
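In case it helps, checking whether swap is actually being used and turning it
off is quick (standard Linux commands, shown only as an illustration):

free -h         # how much swap is in use right now
swapon --show   # which swap devices are active
swapoff -a      # disable swap until reboot; remove it from /etc/fstab to make that permanent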
Hi Marc,
Indeed PXE boot makes a lot of sense in a large cluster, cutting down OS deployment
and management burden, but only if no single point of failure is guaranteed...
best regards,
samuel
huxia...@horebdata.cn
From: Marc Roos
Date: 2020-03-21 14:13
To: ceph-users; huxiaoyu; martin.verges
Subjec
I would say it is not a 'proven technology', otherwise you would see
widespread implementation and adoption of this method. However, if you
really need the physical disk space, it is a solution. Although I also
would have questions about creating an extra redundant environment to
service remot
Hello, Martin,
I notice that Croit advocates the use of a Ceph cluster without OS disks, but with
PXE boot.
Do you use an NFS server to serve the root file system for each node, e.g. for
hosting configuration files, users and passwords, log files, etc.? My question is,
will the NFS server be a single
Each node has 64GB RAM so it should be enough (12 OSDs = 48GB used).
On 21/03/2020 13.14, XuYun wrote:
Bluestore requires more than 4G memory per OSD, do you have enough memory?
On Mar 21, 2020, at 8:09 PM, Jan Pekař - Imatic wrote:
Hello,
I have ceph cluster version 14.2.7 (3d58626ebeec02d8385a4cefb92c
Bluestore requires more than 4G memory per OSD, do you have enough memory?
> On Mar 21, 2020, at 8:09 PM, Jan Pekař - Imatic wrote:
>
> Hello,
>
> I have ceph cluster version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8)
> nautilus (stable)
>
> 4 nodes - each node 11 HDD, 1 SSD, 10Gbit network
>
> Clu
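The per-OSD memory budget is osd_memory_target (default 4 GiB); if you need to
trim it to fit 11 OSDs plus everything else into 64 GB, something like the
following works (the 3 GiB value is only an example):

# check what one OSD is currently targeting (bytes)
ceph config get osd.0 osd_memory_target
# lower the target for all OSDs, e.g. to 3 GiB
ceph config set osd osd_memory_target 3221225472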
Hello,
I have a Ceph cluster, version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8)
nautilus (stable)
4 nodes - each node 11 HDD, 1 SSD, 10Gbit network
The cluster was empty, a fresh install. We filled the cluster with data (small
blocks) using RGW.
The cluster is now used for testing, so no client was us