Thanks, Liang. Unfortunately this no longer helps since Ceph 17. Setting the
mclock profile to "high recovery" speeds things up a little, but the main
problem remains: 95% of the recovery time is spent on just one PG. This was
not the case before Quincy.
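For anyone searching the archives, the profile switch looks roughly like this
(osd.0 below is just an example target for verification):

  # switch all OSDs to the recovery-focused mClock profile
  ceph config set osd osd_mclock_profile high_recovery_ops

  # check what an individual OSD is actually using
  ceph config show osd.0 osd_mclock_profile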
郑亮 wrote on Mon., 26 Dec 2022, 03:52:
> Hi Erich,
> You
Hi Isaiah,
A simple solution for multi-site redundancy is to have two nearby sites
with < 3 ms latency and set up the CRUSH map [0] for datacenter-level
redundancy instead of the default host-level failure domain.
Performance was adequate in my testing for a large number of small files
if the latency between all n
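As a rough sketch (the datacenter bucket, host, and pool names below are made
up for illustration):

  # create datacenter buckets and move hosts under them
  ceph osd crush add-bucket dc1 datacenter
  ceph osd crush add-bucket dc2 datacenter
  ceph osd crush move dc1 root=default
  ceph osd crush move dc2 root=default
  ceph osd crush move node1 datacenter=dc1

  # replicated rule with "datacenter" as the failure domain, assigned to a pool
  ceph osd crush rule create-replicated replicated_dc default datacenter
  ceph osd pool set mypool crush_rule replicated_dc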
1. This is a guess, but check /var/[lib|run]/ceph for any lock files.
2. This is more straightforward to fix: add a faster WAL/block device/LV
for each OSD, or create a fast storage pool just for metadata. Also,
experiment with the MDS cache size/trim [0] settings (rough sketch below).
[0]: https://docs.ceph.com/en/lates
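A minimal sketch of the kind of settings meant above (the value and device
paths are just examples, not recommendations):

  # raise the MDS cache limit (default is 4 GiB)
  ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB, example value

  # example of creating an OSD with its DB/WAL on a faster device
  ceph-volume lvm create --data /dev/sdX --block.db /dev/nvme0n1p1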
Hi,
Just to add to the previous discussion, consumer SSDs like these can
unfortunately be significantly *slower* than plain old HDDs for Ceph. This is
because Ceph always uses SYNC writes to guarantee that data is on disk before
returning.
Unfortunately NAND writes are intrinsically quite slow
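A quick way to see this effect, assuming fio is available and /dev/sdX is a
scratch device you can overwrite, is a single-threaded sync write test:

  # DESTRUCTIVE: writes directly to the device. 4k sync writes at queue depth 1,
  # which approximates the small synchronous writes Ceph's WAL issues.
  fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

Enterprise drives with power-loss protection usually hold up well here; many
consumer drives drop to a few hundred IOPS.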
We are on: 17.2.4
Output of "ceph fs volume ls":
[
    {
        "name": "k8s_ssd"
    },
    {
        "name": "inclust"
    },
    {
        "name": "inclust_ssd"
    }
]
I'd like to create a subvolume in the inclust_ssd volume. I can create a
subvolume with the same name in inclust without any problems.
B
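For clarity, the command form in question looks like this (the subvolume name
below is just a placeholder):

  ceph fs subvolume create inclust_ssd testsubvol   # "testsubvol" is a placeholder name
  ceph fs subvolume ls inclust_ssd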
Hi Pavin,
Here are some additional developments. There's one PG that's stuck and
unable to recover. I've attached the relevant ceph -s / health detail and
pg stat outputs below.
- There were some leftover lock files in /var/run/ceph/ pertaining to
rgw, as you suggested. I removed the service, d
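For reference, the stuck PG can be inspected like this (the PG id below is a
placeholder):

  # list PGs stuck in an unclean state
  ceph pg dump_stuck unclean

  # query a specific PG for its recovery_state and the OSDs it is waiting on
  ceph pg 2.1f query   # 2.1f is a placeholder PG id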
Hello everyone,
After the upgrade from Pacific to Quincy, the radosgw service is no longer
listening on its network port, although the process is running. I get the
following in the log:
2022-12-29T02:07:35.641+ 7f5df868ccc0 0 ceph version 17.2.5
(98318ae89f1a893a6ded3a640405cdbb33e08757) quincy
Hi,
Just try to read your logs:
> 2022-12-29T02:07:38.953+ 7f5df868ccc0 0 WARNING: skipping unknown
> framework: civetweb
You are trying to use `civetweb`, which was removed in the Quincy release. You
need to update your config and use `beast` instead.
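A minimal sketch of that change (the instance name and port below are examples):

  # ceph.conf style
  [client.rgw.myhost]
      rgw_frontends = beast port=7480

  # or via the config database
  ceph config set client.rgw rgw_frontends "beast port=7480"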
k
> On 29 Dec 2022, at 09:20, Andrei Mikhail
>> Thanks. I am planning to change all of my disks. But do you know which
>> enterprise SSDs offer the best trade-off between cost and IOPS performance?
In my prior response I meant to ask what your workload is like. RBD? RGW?
Write-heavy? Mostly reads? This influences what drives make sense.
—
Hello,
after reinstalling one node (ceph06) from backup, the OSDs on that node
do not show any disk information with "ceph osd df tree":
https://pastebin.com/raw/7zeAx6EC
Any hint on how I could fix this?
Thanks,
Mario