Hello,
Data on a volume should be the same regardless of how it is being
accessed.
I would think the volume was previously initialized with an LVM layer; does
"lvs" show any logical volumes on the system?
On Sun, Feb 4, 2024, 08:56 duluxoz wrote:
> Hi All,
>
> All of this is using the la
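Something like the following, run on the affected host, should show whether
LVM metadata is present on that volume (just a sketch, device names depend on
your setup; ceph-volume only matters if the disk was prepared as an OSD):

pvs
lvs -o lv_name,vg_name,lv_size,devices
ceph-volume lvm list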
Hello,
Following an upgrade from Nautilus (14.2.22) to Pacific (16.2.13), we
encountered an issue with a cache pool becoming completely stuck; the
relevant messages are below:
pg xx.x has invalid (post-split) stats; must scrub before tier agent
can activate
In the OSD logs, scrubs are starting in a loop witho
not
available yet, as PGs need to be scrubbed before the cache tier can be
activated.
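For reference, a scrub on one of the flagged PGs can be forced manually; xx.x
below is only a placeholder for the actual PG id:

ceph pg scrub xx.x
ceph pg deep-scrub xx.x
ceph pg dump pgs | grep xx.x   # check the scrub timestamps afterwards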
As we are struggling to make this cluster work again, any help would be
greatly appreciated.
Cédric
> On 20 Feb 2024, at 20:22, Cedric wrote:
>
> Thanks Eugen, sorry about the missed rep
setting (and unsetting after a while) noscrub and
> nodeep-scrub has any effect. Have you tried that?
>
> Zitat von Cedric :
>
> > Update: we have run fsck and re-shard on all bluestore volumes; it seems
> > sharding was not applied.
> >
> > Unfortunately scrubs and de
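For anyone following along, the commands involved look roughly like this (OSD
id and path are examples; the OSD must be stopped before running the BlueStore
tools, and the sharding string should match bluestore_rocksdb_cfs):

ceph osd set noscrub
ceph osd set nodeep-scrub
systemctl stop ceph-osd@12
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12
ceph-bluestore-tool reshard --path /var/lib/ceph/osd/ceph-12 --sharding "<bluestore_rocksdb_cfs value>"
systemctl start ceph-osd@12
ceph osd unset noscrub
ceph osd unset nodeep-scrub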
is your current
> setting on that?
>
> ceph config get osd osd_scrub_invalid_stats
> true
>
> The config reference states:
>
> > Forces extra scrub to fix stats marked as invalid.
>
> But the default seems to be true, so I'd expect it's true in your case
&
question, are all services running pacific already and on
> the same version (ceph versions)?
Yes, all daemons run 16.2.13
>
> Zitat von Cedric :
>
> > Yes the osd_scrub_invalid_stats is set to true.
> >
> > We are thinking about the use of "ceph pg_mark_unfound_
Hello,
Sorry for the late reply. So yes, we finally found a solution, which was to split
the cache_pool off onto dedicated OSDs. This had the effect of clearing the slow
ops and allowing the cluster to serve clients again after 5 days of lockdown;
fortunately the majority of VMs resumed fine, thanks to the
u have any pointers, well they will be greatly appreciated.
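In case it helps others: isolating a cache pool comes down to a dedicated
device class plus a CRUSH rule; the names below are only examples, not our
actual setup:

ceph osd crush rm-device-class osd.10 osd.11 osd.12
ceph osd crush set-device-class cache osd.10 osd.11 osd.12
ceph osd crush rule create-replicated cache_rule default host cache
ceph osd pool set cache_pool crush_rule cache_rule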
Cheers
On Wed, Feb 28, 2024 at 9:50 PM Eugen Block wrote:
>
> Hi,
>
> great that you found a solution. Maybe that also helps to get rid of
> the cache-tier entirely?
>
> Zitat von Cedric :
>
> > Hello,
>
Does the balancer have any pools enabled? "ceph balancer pool ls"
Actually I am wondering if the balancer does anything when no pools have been
added.
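If the list comes back empty, it may be worth adding the pools explicitly and
checking the status again (pool name is an example):

ceph balancer pool ls
ceph balancer pool add rbd_data
ceph balancer status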
On Mon, Mar 4, 2024, 11:30 Ml Ml wrote:
> Hello,
>
> i wonder why my autobalancer is not working here:
>
> root@ceph01:~# ceph -s
> cluster:
> id:
What about drive IOPS? HDDs top out at around 150 on average; you can use iostat
-xmt to get these values (the last column also shows disk utilization, which is
very useful).
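For example, with a 5 second interval, optionally restricted to the OSD data
disks (device names are examples); r/s and w/s give the IOPS, %util the
saturation:

iostat -xmt 5
iostat -xmt 5 sdb sdc sdd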
On Sun, May 26, 2024, 09:37 Mazzystr wrote:
> I can't explain the problem. I have to recover three discs that are hdds.
> I figured on
Also, osd_max_backfills and osd_recovery_max_active can play a role, but I
wonder if they still have an effect with the new mpq feature.
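If it helps, this is what I would check first (values are examples; as far as I
know, with the mClock scheduler these knobs are ignored unless the override is
enabled):

ceph config show osd.0 osd_op_queue
ceph config set osd osd_max_backfills 3
ceph config set osd osd_recovery_max_active 5
ceph config set osd osd_mclock_override_recovery_settings true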
On Sun, May 26, 2024, 09:37 Mazzystr wrote:
> I can't explain the problem. I have to recover three discs that are hdds.
> I figured on just replacing one to give
Not sure you need to (or should) prepare the block device manually; Ceph
can handle these tasks. Did you try to clean up and retry by providing
/dev/sda6 to ceph orch daemon add?
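Something along these lines, assuming cephadm accepts the partition directly
(host name is an example, and zap wipes the device, so double check first):

ceph orch device zap ceph01 /dev/sda6 --force
ceph orch daemon add osd ceph01:/dev/sda6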
On Sun, May 26, 2024, 10:50 duluxoz wrote:
> Hi All,
>
> Is the following a bug or some other problem (I can't
FYI, you can also set the balancer mode to crush-compat; this way, even if
the balancer is re-enabled for any reason, the error messages will not occur.
https://docs.ceph.com/en/pacific/rados/operations/balancer/
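For example:

ceph balancer mode crush-compat
ceph balancer status

or simply keep it off with "ceph balancer off".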
On Thu, Dec 12, 2024, 15:28 Janne Johansson wrote:
> I have clusters that have been upgr
Encountered this issue recently; restarting the mgrs did the trick.
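On a cephadm deployment that would be something like the following (the mgr
daemon name is an example), or simply failing over to a standby:

ceph orch daemon restart mgr.ceph01.abcdef
ceph mgr fail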
Cheers
On Sat, Jan 25, 2025, 06:26 Devender Singh wrote:
> Thanks for your reply… but those commands are not working as it's an always-on
> module.. but strangely it is still showing the error,
>
> # ceph mgr module enable orchestrator
> module 'orchest
Could it be related to automatic OSD deployment?
https://docs.ceph.com/en/reef/cephadm/services/#disabling-automatic-deployment-of-daemons
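That is, checking whether an OSD service spec is active and, if so, setting it
to unmanaged, roughly as per the linked page:

ceph orch ls osd --export
ceph orch apply osd --all-available-devices --unmanaged=true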
On Fri, Feb 14, 2025, 08:40 Eugen Block wrote:
> Good morning,
>
> this week I observed something new, I think. At least I can't recall
> having seen tha