Hi Rainer,
On Fri, Sep 24, 2021 at 8:33 AM Rainer Krienke wrote:
>
> Hello Dan,
>
> I am also running a production 14.2.22 cluster with 144 HDD OSDs, and I
> am wondering whether I should stay on this release or upgrade to Octopus. So
> your info is very valuable...
>
> One more question: You describ
Hi,
I wonder how you guys handle this, since we will always be limited by the network
bandwidth of the load balancer.
Or, if there is no balancer, what should we monitor to tell when a single RGW is maxed out? I'm running 15 RGWs.
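Is watching each daemon's perf counters the right approach? As a rough sketch of what I have in mind (the socket path below is just a placeholder for one of my instances):
# dump the perf counters of a single RGW; "req", "qlen" and "qactive" in the
# "rgw" section give an idea of how busy that instance is
ceph daemon /var/run/ceph/ceph-client.rgw.HOST.asok perf dump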
Ty
Hello Everyone,
If you have any suggestions on the cause, or what we can do, I'd certainly
appreciate it.
I'm seeing the following on a newly stood up cluster using Podman on Ubuntu
20.04.3 HWE:
Thank you very much
Marco
Sep 24, 2021, 1:24:30 PM [ERR] cephadm exited with an error code: 1,
std
On 9/24/21 08:33, Rainer Krienke wrote:
Hello Dan,
I am also running a production 14.2.22 cluster with 144 HDD OSDs, and I
am wondering whether I should stay on this release or upgrade to Octopus. So
your info is very valuable...
One more question: You described that OSDs do an expected fsck and
It looks like the output from a ceph-volume command was too long to handle.
If you run "cephadm ceph-volume -- inventory --format=json" (add
"--with-lsm" if you've turned on device_enhanced_scan) manually on each
host do any of them fail in a similar fashion?
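If it helps, here is a rough way to run that on every host at once (just a sketch; it assumes root SSH access to the hosts and that the hostnames come from the orchestrator):
# run the inventory on each host the orchestrator knows about and flag failures
for host in $(ceph orch host ls --format json | jq -r '.[].hostname'); do
    echo "=== $host ==="
    ssh "$host" "cephadm ceph-volume -- inventory --format=json" > /dev/null || echo "inventory FAILED on $host"
done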
On Fri, Sep 24, 2021 at 1:37 PM Marco
Awesome! I had no idea that's where this was pulling it from! However...
Both of the SSDs do have rotational set to 0 :(
root@ceph05:/sys/block# cat sd{r,s}/queue/rotational
0
0
I found a line in cephadm.log that also agrees; this one is from docker:
"sys_api": {
"removable": "0",
"ro
Thanks Everyone! Updating the clients to 4.18.0.305.19.1 did indeed fix
the issue.
-Dave
On 2021-09-21 11:42 a.m., Dan van der Ster wrote:
>
> It's this:
> https://tracker.ceph.com/issues/51948
With recent releases, 'ceph config' is probably a better option; do
keep in mind that this sets things cluster-wide. If you just want to
target specific daemons, then 'ceph tell' may be better for your use case.
# get current value
ceph config get osd osd_max_backfills
# set new value to 2, for example
ceph config set osd osd_max_backfills 2
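If you do want to target specific daemons instead, something along these lines works (osd.12 is just an example id):
# override a single OSD at runtime (not persisted across restarts)
ceph tell osd.12 config set osd_max_backfills 2
# or all running OSDs, without writing it to the config database
ceph tell osd.* injectargs '--osd_max_backfills=2'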