Hi all,
Has anyone tried setting the cache-tier to forward mode in Luminous 12.2.1? Our
cluster cannot write to the rados pool once the mode is set to forward. We set up
the cache-tier with forward mode and then ran rados bench. However, the
throughput from rados bench is 0, and iostat shows no disk usage.
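A minimal sequence to reproduce this, assuming a cache pool "cachepool" tiered over a base pool "rbdpool" (names illustrative, not from the original report):

# put the cache tier into forward mode; Luminous asks for confirmation
ceph osd tier cache-mode cachepool forward --yes-i-really-mean-it
# then run a 10-second write benchmark against the base pool
rados bench -p rbdpool 10 write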
Hi Ean,
I don't have any experience with fewer than 8 drives per OSD node, and
the setup heavily depends on what you want to use it for. Assuming a
small proof of concept with no great performance requirement (due
to the low spindle count), I would do this:
On Mon, Jan 22, 2018 at 1:28 PM, Ean Pric
ceph osd
Original Message
From: Karun Josy
To: Jean-Charles Lopez
Cc: ceph-users@lists.ceph.com
Sent: Thursday, January 25, 2018 04:42
Subject: Re: [ceph-users] Full Ratio
Thank you!
Ceph version is 12.2
Also, can you let me know the format to set osd_backfill_full_ratio ?
On 25 January 2018 at 04:53, Warren Wang wrote:
> The other thing I can think of is if you have OSDs locking up and getting
> corrupted, there is a severe XFS bug where the kernel will throw a NULL
> pointer dereference under heavy memory pressure. Again, it's due to memory
> issues, but you wi
Thank you!
Ceph version is 12.2
Also, can you let me know the format to set osd_backfill_full_ratio ?
Is it " ceph osd set -backfillfull-ratio .89 " ?
Karun Josy
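For reference, on Luminous the ratio commands are single hyphenated words, with no space after "set"; a sketch using the value from the question:

ceph osd set-backfillfull-ratio 0.89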
On Thu, Jan 25, 2018 at 1:29 AM, Jean-Charles Lopez
wrote:
> Hi,
>
> if you are using an older Ceph version note tha
Since upgrading from Jewel to Ceph Luminous (12.2.2) we get scrub mismatch
errors every day at the same time (19:25). How can we fix them? It seems to
be the same problem as described at
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023202.html
(we can't reply to archived messages).
ceph osd pool application enable XXX rbd
-----Original Message-----
From: Steven Vacaroaia [mailto:ste...@gmail.com]
Sent: Wednesday, January 24, 2018 19:47
To: David Turner
Cc: ceph-users
Subject: Re: [ceph-users] Luminous - bad performance
Hi,
I have bundled the public NICs and added 2 more monitors (running on 2 of the 3 OSD hosts)
Hi,
if you are using an older Ceph version, note that mon_osd_nearfull_ratio
and mon_osd_full_ratio must be set in the config file on the MON hosts first,
and the MONs then restarted one after the other.
If using a recent version, there are the commands ceph osd set-full-ratio and ceph
osd set-nearfull-ratio
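For the older-version path, the config change would look roughly like this in ceph.conf on each MON host, followed by a rolling restart of the MONs (values illustrative):

[mon]
mon_osd_nearfull_ratio = 0.88
mon_osd_full_ratio = 0.95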
Hi,
I am trying to increase the full ratio of OSDs in a cluster.
While adding a new node, one of the new disks got backfilled to more than 95%
and the cluster froze. So I am trying to keep that from happening again.
I tried the pg set command, but it is not working:
$ ceph pg set_nearfull_ratio 0.88
Error
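On Luminous the pg set_nearfull_ratio command appears to have been removed; per the replies in this thread, the replacement is the OSD-level command:

ceph osd set-nearfull-ratio 0.88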
Hi,
I have bundled the public NICs and added 2 more monitors (running on 2 of
the 3 OSD hosts).
This seems to improve things, but I still have high latency.
Also, performance of the SSD pool is worse than the HDD pool, which is very
confusing. The SSD pool is using one Toshiba PX05SMB040Y per server (for a total
Hello,
We are running Luminous 12.2.2: 6 OSD hosts, each with 12 x 1TB drives and 64GB
RAM. Each host has an SSD for BlueStore's block.wal and block.db.
There are 5 monitor nodes as well, with 32GB RAM. All servers run
Gentoo with kernel 4.12.12-gentoo.
When I export an image using:
rbd export pool-name/volu
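For reference, the full export syntax is:

rbd export pool-name/volume-name /path/to/output.img

(the destination path here is illustrative).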
I know this may be a bit vague, but it also suggests the "try a newer kernel"
approach. We had constant problems with hosts mounting a number of RBD volumes
formatted with XFS. The servers would start aggressively swapping even though
the actual memory in use was nowhere near even 50%, and eventuall
Forgot to mention another hint. If kswapd is constantly using CPU, and your sar
-r ALL and sar -B stats look like it's thrashing, kswapd is probably busy
evicting things from memory in order to make a larger-order allocation.
The other thing I can think of is if you have OSDs locking up and getting
corrupted: there is a severe XFS bug where the kernel will throw a NULL
pointer dereference under heavy memory pressure.
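For reference, the sar invocations mentioned above, from the sysstat package (interval and count illustrative):

sar -B 1 10      # paging stats: watch pgscank/s, pgscand/s and %vmeff
sar -r ALL 1 10  # extended memory utilization report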
Hello all,
I was looking at the Client Config Reference page (
http://docs.ceph.com/docs/master/cephfs/client-config-ref/) and there was
mention of a flag --client_with_uid. The way I read it is that you can
specify the UID of a user on a cephfs and the user mounting the filesystem
will act as the
Hi Tom,
thanks for the detailed steps.
I think our problem literally vanished. A couple of days after my
email I noticed that the web interface suddenly listed only one
cephFS. Also the command "ceph fs status" doesn't return an error
anymore but shows the correct output.
I guess Ceph is
Hi Eugen,
From my experience, to truly delete and recreate the Ceph FS *cephfs*
file system I've done the following:
1. Remove the file system:
ceph fs rm cephfs --yes-i-really-mean-it
ceph fs rm_data_pool cephfs cephfs_data
2. Remove the associated pools:
ce
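The truncated step 2 is presumably the pool deletion; a sketch, assuming the default pool names from step 1 and mon_allow_pool_delete=true:

ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it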
Jorge,
I'd suggest starting with a regular (non-SPDK) configuration and deploying a
test cluster. Then do some benchmarking against it and check whether the NVMe
drive is the actual bottleneck. I doubt it is, though. I did some
experiments a while ago and didn't see any benefit from SPDK in my case
- probably
Hey, sorry if the question doesn't really make a lot of sense; I am
talking from almost complete ignorance of the topic, but there is not a
lot of info about it.
I am planning on creating a cluster with 7~10 NL-SAS HDDs and 1 NVMe
per host.
The NVMe would be used as RocksDB and journal for each OSD (h
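For what it's worth, carving the NVMe into one DB/WAL slice per OSD would look roughly like this with ceph-volume (a sketch; device and partition names are illustrative):

ceph-volume lvm create --bluestore --data /dev/sda \
    --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2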
Hi,
For the above issue we found a work-around.
1) We created a directory 'ceph-osd.target.wants'
2) Created symlinks to the OSD service for all OSDs. Sample below:
cn7.chn6us1c1.cdn /etc/systemd/system/ceph-osd.target.wants# ll
total 0
lrwxrwxrwx 1 root root 41 Jan 23 09:36 ceph-osd@102.service ->
/us
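For anyone reproducing the workaround, the steps presumably amount to the following (the template-unit path is an assumption; it is the standard location on RPM-based installs):

mkdir -p /etc/systemd/system/ceph-osd.target.wants
ln -s /usr/lib/systemd/system/ceph-osd@.service \
    /etc/systemd/system/ceph-osd.target.wants/ceph-osd@102.service
systemctl daemon-reload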