Thank you Igor!
Tony
From: Igor Fedotov
Sent: November 1, 2022 04:34 PM
To: Tony Liu; ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] Re: Is it a bug that OSD crashed when it's full?
Hi Tony,
first of all let me share my understanding of the issue you're facing.
If it's the same issue, I'd check the fragmentation score on the entire cluster
asap. You may have other OSDs close to the limit, and it's harder to fix when all
your OSDs cross the line at once. If you drain this one, it may push the other
ones into the red zone if you're too close, making the problem worse.
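For reference, the fragmentation score can be read via each OSD's admin socket
on its host (osd id and socket path are just examples; under cephadm the sockets
live under /var/run/ceph/<fsid>/ and you'd run this inside "cephadm shell"):

  # values approach 1.0 as the free space gets more fragmented
  ceph daemon osd.3 bluestore allocator score block
  # rough loop over all OSDs local to this host
  for s in /var/run/ceph/ceph-osd.*.asok; do echo "$s"; ceph daemon "$s" bluestore allocator score block; done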
Hi Tony,
first of all let me share my understanding of the issue you're facing.
This reminds me of an upstream ticket and I presume my root cause analysis
from there (https://tracker.ceph.com/issues/57672#note-9) is applicable
in your case as well.
So generally speaking your OSD isn't 100% full
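To see how close each OSD really is to the limits, something like this should
help (osd.3 is just an example; the bluefs stats admin socket command may not
exist on older releases):

  ceph osd df tree                 # %USE and PGS per OSD
  ceph daemon osd.3 bluefs stats   # BlueFS / DB space usage for one OSD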
I managed to solve this problem.
To document the resolution: The firewall was blocking communication. After
disabling everything related to it and restarting the machine, everything
went back to normal.
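Rather than disabling the firewall entirely, opening the Ceph ports should also
work. On a firewalld-based host that would be roughly:

  firewall-cmd --permanent --add-service=ceph-mon   # mon: 3300, 6789
  firewall-cmd --permanent --add-service=ceph       # osd/mgr/mds: 6800-7300
  firewall-cmd --reload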
On Tue, Nov 1, 2022 at 10:46 AM, Murilo Morais
wrote:
> Good morning everyone!
>
> Today
I have a ceph cluster that shows a different space utilization in its status
output than in its bucket stats. When I copy the contents of this cluster to a
different ceph cluster, the bucket stats totals there are as expected and match
the status totals.
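For comparing the two views, I'd look at something like the following (the
bucket name is just an example); note that objects still awaiting RGW garbage
collection can make the cluster-side numbers temporarily larger than the bucket
totals:

  ceph df detail                                  # stored/used per pool, incl. the RGW data pool
  radosgw-admin bucket stats --bucket=mybucket    # size_actual / num_objects per bucket
  radosgw-admin gc list | head                    # objects still pending garbage collection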
output of "ceph -s" on the first ceph cluster
That is correct, just omit the wal_devices and they will be placed on
the db_devices automatically.
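For example, a cephadm OSD spec along these lines (service_id and filters are
just examples) puts the data on the rotational disks and DB+WAL on the flash
devices:

  # osd-spec.yaml
  service_type: osd
  service_id: hdd_with_flash_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    # no wal_devices here -> the WAL is co-located on the db_devices

  ceph orch apply -i osd-spec.yaml --dry-run   # drop --dry-run to apply for real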
Zitat von "Fox, Kevin M" :
I haven't done it, but had to read through the documentation a
couple months ago and what I gathered was:
1. if you have a db device specified but no wal device, it will put the wal
on the same volume as the db.
It looks like I hit some flavour of https://tracker.ceph.com/issues/51034,
since the issue (which I could reproduce pretty consistently) disappeared
once I set `bluefs_buffered_io=false`.
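For the record, this is roughly how I changed it (osd.3 is just an example;
the OSDs may still need a restart for it to fully take effect):

  ceph config set osd bluefs_buffered_io false
  ceph config show osd.3 bluefs_buffered_io   # verify what a running OSD actually uses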
Oleksiy
On Tue, Nov 1, 2022 at 3:02 AM Eugen Block wrote:
> As I said, I would recommend to really wipe the OSDs clean
The actual question is: is a crash expected when an OSD is full?
My focus is more on how to prevent this from happening.
My expectation is that the OSD rejects write requests when it's full, rather than crashing.
Otherwise, there is no point in having the ratio thresholds.
Please let me know whether this is by design or a bug.
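For reference, by ratio thresholds I mean the cluster-wide settings, e.g.:

  ceph osd dump | grep ratio        # full_ratio / backfillfull_ratio / nearfull_ratio
  ceph osd set-nearfull-ratio 0.85
  ceph osd set-backfillfull-ratio 0.90
  ceph osd set-full-ratio 0.95      # reaching this should block writes (OSD_FULL), not crash the OSD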
T
I haven't done it, but had to read through the documentation a couple months
ago and what I gathered was:
1. if you have a db device specified but no wal device, it will put the wal on
the same volume as the db.
2. the recommendation seems to be to not have a separate volume for db and wal
if on
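For non-cephadm deployments, the equivalent with ceph-volume would be something
like this (device names are just examples):

  # DB on the NVMe, no --wal-devices -> WAL is co-located with the DB
  ceph-volume lvm batch --report /dev/sdb /dev/sdc --db-devices /dev/nvme0n1
  # drop --report to actually create the OSDs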
Hi,
we are using ceph version 17.2.0 on Ubuntu 22.04.1 LTS.
We've got several servers with the same setup and are facing a problem
with OSD deployment and db-/wal-device placement.
Each server consists of ten rotational disks (10 TB each) and two NVMe
devices (3 TB each).
We would like to d
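Assuming the goal is to put the DB/WAL of the ten HDD OSDs on the two NVMe
devices, each NVMe would carry five DB LVs of roughly 600 GB (3 TB / 5), i.e.
about 6% of a 10 TB OSD. A spec sketch for that split (names are examples):

  # osd-spec.yaml
  service_type: osd
  service_id: hdd_osd_nvme_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    db_slots: 5   # 5 DB slots per NVMe -> 10 DBs per host, ~600 GB each

  ceph orch apply -i osd-spec.yaml --dry-run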
Good morning everyone!
Today there was an atypical situation in our cluster where the three
machines shut down.
On powering up, the cluster came up and formed quorum with no problems, but
the PGs are all stuck in "working" and I don't see any disk activity on the
machines. No PG is active.
[ceph:
If the GB per PG is high, the balancer module won't be able to help.
Your PG count per OSD also looks low (30s), so increasing PGs per pool
would help with both problems.
You can use the PG calculator to determine which pools need what
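To check and adjust, something like this (pool name is a placeholder):

  ceph osd df                          # PGS column = PGs per OSD
  ceph osd pool ls detail              # current pg_num per pool
  ceph osd pool set <pool> pg_num 128  # raise where the calculator suggests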
On Tue, Nov 1, 2022, 08:46 Denis Polom wrote:
> Hi
>
> I
Hi
I observed on my Ceph cluster running the latest Pacific that same-sized OSDs
are utilized differently even though the balancer is running and reports its
status as perfectly balanced:
{
"active": true,
"last_optimize_duration": "0:00:00.622467",
"last_optimize_started": "Tue Nov 1 12:49:36 20
As I said, I would recommend to really wipe the OSDs clean
(ceph-volume lvm zap --destroy /dev/sdX), maybe reboot (on VMs it was
sometimes necessary during my tests if I had too many failed
attempts). And then also make sure you don't have any leftovers in the
filesystem (under /var/lib/cep
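For reference, a cleanup pass along those lines might look like this (device
and paths are examples; double-check you have the right disk):

  ceph-volume lvm zap --destroy /dev/sdX   # wipes data, partitions and LVM metadata
  lvs ; vgs                                # confirm no leftover ceph-* LVs/VGs
  ls /var/lib/ceph/osd/                    # remove any leftover ceph-<id> directories
  # under cephadm the per-OSD dirs live in /var/lib/ceph/<fsid>/ instead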