Hi,
I wanted to report back to you that splitting worked *exactly* as you
described. Running "ceph osd pool set default.rgw.buckets.data pg_num 32",
the whole process took approximately 2 minutes to split the placement
groups and re-peer them from 8 to 32 for 10 OSDs on 5 hosts.
I had an OSD c
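For anyone wanting to repeat this, a minimal sketch of the sequence (pool
name as above; the before/after checks are my own addition):

  ceph osd pool get default.rgw.buckets.data pg_num    # confirm the current value
  ceph osd pool set default.rgw.buckets.data pg_num 32
  ceph -s                                              # watch until all PGs are active+clean

Note that since Nautilus, pgp_num follows pg_num automatically, so no
separate pgp_num step is needed.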
Hello ceph community,
First of all, I want to thank the developers for coming up with such
remarkable software. I think cephadm was a great addition to minimize
the time invested in deployment, and it also works great as a way to
decouple the service from the underlying infrastructure.
I also had the same issue, but just with Prometheus, during the bootstrapping
of my first node on Pacific 16.2.5. What I did was simply reboot, and that was it.
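If a reboot is too disruptive, redeploying just the failed service might be
enough; an untested sketch:

  ceph orch ps | grep prometheus    # check the daemon's state
  ceph orch redeploy prometheus     # ask cephadm to redeploy it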
‐‐‐ Original Message ‐‐‐
On Sunday, July 11th, 2021 at 9:58 PM, Robert W. Eckert wrote:
> I had the same issue for Prometheu
Actually, it's complaining about docker0 as well. Not sure how to change
the MTU on that one, though. It's not even up.
-Paul
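For what it's worth, docker0's MTU can be set through the Docker daemon
config; a sketch, assuming 9000 is the value you want to match:

  # /etc/docker/daemon.json
  { "mtu": 9000 }

  systemctl restart docker

Since docker0 is down and unused, simply muting the alert for that interface
may be the easier option.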
> On Aug 4, 2021, at 5:24 PM, Paul Giralt wrote:
>
> Yes - you're right. It's complaining about eno1 and eno2, which I'm not using.
> I'll change those and it will
Yes - you're right. It's complaining about eno1 and eno2, which I'm not using.
I'll change those, and it will probably make the error go away. I'm guessing
something changed between 16.2.4 and 16.2.5, because I didn't start seeing this
error until after the upgrade.
-Paul
> On Aug 4, 2021, at 5
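For the unused eno interfaces, a non-persistent sketch (making it survive a
reboot depends on your distro's tooling, e.g. nmcli or netplan):

  ip link set dev eno1 mtu 9000
  ip link set dev eno2 mtu 9000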
On 04.08.2021 22:06, Paul Giralt (pgiralt) wrote:
I did notice that docker0 has an MTU of 1500, as do the eno1 and eno2
interfaces, which I'm not using. I'm not sure if that's related to the
error. I've been meaning to try changing the MTU on the eno interfaces
just to see if that makes a differen
I'm seeing the same issue. I'm not familiar with where to access the
"Prometheus UI". Can you point me to some instructions on how to do this?
I'll gladly collect the output of that command.
FWIW, here are the interfaces on my machine:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group d
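In a cephadm deployment, Prometheus runs as its own container; a sketch for
finding its UI (9095 is cephadm's default Prometheus port, adjust if you
changed it):

  ceph orch ps | grep prometheus    # shows which host it runs on
  # then browse to http://<that-host>:9095 and run the query there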
Hi,
We're running Ceph 16.2.5 Pacific and, in the Ceph dashboard, we keep
getting an MTU mismatch alert. However, all our hosts have the same
network configuration:
=> bond0: mtu 9000 qdisc noqueue state UP group default qlen 1000
=> vlan.24@bond0: mtu 9000 qdisc noqueue state UP group defa
I'm replying to my own message as it appears we have "fixed" the issue.
Basically, we restarted all OSD hosts and all the presumed lost data
reappeared. It's likely that some OSDs were stuck unreachable but were
somehow never flagged as such in the cluster.
On 8/3/21 8:15 PM, J-P Methot wrote:
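A side note for anyone doing the same: setting noout before a rolling
restart of OSD hosts avoids needless rebalancing while each host is down;
a minimal sketch:

  ceph osd set noout      # before the restarts
  # ...restart the hosts one by one...
  ceph osd unset noout    # once everything is back up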
Hi J-P,
Could you please go to the Prometheus UI and share the output of the
following query: "node_network_mtu_bytes"? That'd be useful for understanding
the issue. Could you also open a tracker issue here:
https://tracker.ceph.com/projects/dashboard/issues/new ?
In the meantime you should be able to mut
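For reference, the query can also be narrowed down directly in the
Prometheus UI; a sketch, assuming 9000 is the intended MTU everywhere:

  node_network_mtu_bytes            # raw per-interface values
  node_network_mtu_bytes != 9000    # only the interfaces that deviate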
Hello,
When attempting to enable dashboard SSO with a cert file, I only receive the
following error:
Error EINVAL: `./sp.crt` not found.
The file is, however, most definitely there. I have tried it in string format
as well. Not sure what I'm doing wrong.
Also tried this by exec into the mgr cont
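One thing worth checking (a sketch; the paths and URLs below are
placeholders, not from the original report): the path appears to be resolved
by the active mgr rather than by your local shell, so in a containerized
deployment the file has to be visible inside the active mgr's container and
referenced by an absolute path:

  ceph mgr stat    # identify the active mgr
  ceph dashboard sso setup saml2 https://ceph.example.com:8443 https://idp.example.com/metadata uid urn:example:idp /var/lib/ceph/sp.crt /var/lib/ceph/sp.key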
Hi everyone,
Recently, I noticed that there are a lot of log messages about Broken pipe
errors from all RGW nodes.
Log:
2021-08-04T06:25:05.997+0000 7f4f15f7b700 1 ====== starting new request req=0x7f4fac3d7670 ======
2021-08-04T06:25:05.997+0000 7f4f15f7b700 0 ERROR: client_io->complete_request()
On 8/4/21 10:12 AM, Szabo, Istvan (Agoda) wrote:
Hi,
I've just got the chance to have a look again and triple (or more) check the
config. (I have 3 zones but for this use case only 2 needed to be used).
Followed your suggestions but it errored with:
WARNING: cannot find source zone id for name
Hello Harry!
Is the workaround still working for you? Until now I haven't found a permanent
fix. After a few days, the "deployment" starts again.
Best,
Alex
On Sunday, 11.07.2021 at 19:58 +0000, Robert W. Eckert wrote:
> I had the same issue for Prometheus and Grafana, the same work
Hello Ceph list!
Is there a known problem with NFS4 ACLs and CephFS? After I set the
permissions with a Windows Samba client, everything seems to be fine.
But if I try:
$ getfattr -n security.NTACL -d /xyz
/xyz: security.NTACL: No such attribute
CephFS (Ceph 16.2.5) is mounted via the Ub
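A quick way to see which ACL-related xattrs are actually stored (the '-'
pattern matches all attribute namespaces; security.* usually requires root):

  getfattr -d -m - /xyz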
Hi,
I've just got the chance to have a look again and triple (or more) check the
config. (I have 3 zones but for this use case only 2 needed to be used).
Followed your suggestions but it errored with:
WARNING: cannot find source zone id for name=*
WARNING: cannot find source zone id for name=*
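When radosgw-admin cannot resolve a zone name to an id, comparing the local
zone list against the current period usually narrows it down; a sketch (the
zone names are elided above, so none are assumed here):

  radosgw-admin zone list        # zones known locally
  radosgw-admin zonegroup get    # zone names/ids in the zonegroup
  radosgw-admin period get       # what the current period believes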