>
> The question here is a rather simple one:
> when you add a new node to an existing Ceph cluster whose disks (12TB)
> are twice the size of the existing disks (6TB), how do you get Ceph to
> distribute the data evenly across all disks?
Ceph already does this. If you have 6TB+12TB in one node you wi
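A quick, hedged way to verify this (assuming the default behaviour where each OSD's CRUSH weight is derived from its capacity, so a 12TB OSD takes roughly twice the data of a 6TB one) is to compare per-OSD weight and utilization, and to check that the balancer is enabled:
$ ceph osd df tree       # CRUSH weight, size and %USE per OSD
$ ceph balancer status   # the balancer evens out residual PG imbalance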
Hi,
you can change the report interval with this config option (default 2
seconds):
$ ceph config get mgr mgr_tick_period
2
$ ceph config set mgr mgr_tick_period 10
Regards,
Eugen
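Should the longer interval not be needed after all, the override can be dropped again so the mgr falls back to the 2-second default:
$ ceph config rm mgr mgr_tick_period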
Quoting Chris Palmer:
I have just checked 2 quincy 17.2.6 clusters, and I see exactly the
same. The pgma
Hi Stefan,
the jobs ended and the warning disappeared as expected. However, a new job
started and the warning showed up again. There is something very strange going
on and, maybe, you can help out here:
We have a low client CAPS limit configured for performance reasons:
# ceph config dump | gr
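The exact option is cut off above; purely as an illustration (this may not be the setting meant here), the MDS-side per-client cap limit can be read back with:
$ ceph config get mds mds_max_caps_per_client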
Thanks, Eugen. This is a useful setting.
/Z
On Thu, 19 Oct 2023 at 10:43, Eugen Block wrote:
> Hi,
>
> you can change the report interval with this config option (default 2
> seconds):
>
> $ ceph config get mgr mgr_tick_period
> 2
>
> $ ceph config set mgr mgr_tick_period 10
>
> Regards,
> Euge
I just tried what sending SIGSTOP and SIGCONT do. After stopping the process 3
caps were returned. After resuming the process these 3 caps were allocated
again. There seems to be a large number of stale caps that are not released.
While the process was stopped the kworker thread continued to sho
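For anyone trying to reproduce this, a hedged way to watch the cap counts from both sides (the mds name and the debugfs directory are placeholders to fill in):
$ ceph tell mds.<name> session ls                   # num_caps per client session
$ cat /sys/kernel/debug/ceph/<fsid.clientid>/caps   # caps held by the kernel client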
Hello Eugen,
before answering your questions:
Creating a new simple replicated crush rule and setting it to the
metadata pool was the solution.
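For reference, the commands involved look roughly like the following (rule, pool and device-class names are illustrative, not the exact ones used here):
$ ceph osd crush rule create-replicated replicated-ssd default host ssd
$ ceph osd pool set <metadata-pool> crush_rule replicated-ssd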
> - Your cache tier was on SSDs which need to be removed.
Yes, the cache tier pool and the metadata pool for the RBDs.
> - Cache tier was removed s
Greetings -
Forgive me if this is an elementary question - I am fairly new to running Ceph.
I have searched but didn't find anything specific on this.
Is there any way to disable the disk space warnings (CephNodeDiskspaceWarning)
for specific drives or filesystems on my CEPH servers?
Runni
We are still finishing off:
- revert PR https://github.com/ceph/ceph/pull/54085, needs smoke suite rerun
- removed s3tests https://github.com/ceph/ceph/pull/54078 merged
Venky, Casey FYI
On Wed, Oct 18, 2023 at 9:07 PM Venky Shankar wrote:
>
> On Tue, Oct 17, 2023 at 12:23 AM Yuri Weinstein wr
Hi Yuri,
On Thu, Oct 19, 2023 at 9:32 PM Yuri Weinstein wrote:
>
> We are still finishing off:
>
> - revert PR https://github.com/ceph/ceph/pull/54085, needs smoke suite rerun
> - removed s3tests https://github.com/ceph/ceph/pull/54078 merged
>
> Venky, Casey FYI
https://github.com/ceph/ceph/pul
> [...] (>10k OSDs, >60 PB of data).
6TB on average per OSD? Hopefully SSDs, or RAID10 (or RAID5 with a low
member count, 3-5 drives).
> It is entirely dedicated to object storage with S3 interface.
> Maintenance and its extension are getting more and more
> problematic and time consuming.
Ah the joys of a single lar
Hi,
I assume it's this prometheus alert rule definition:
- alert: "CephNodeDiskspaceWarning"
  annotations:
    description: "Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }}
      will be full in less than 5 days based on the 48 hour trailing fill rate."
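If the goal is only to mute this alert for particular filesystems, one option (a sketch, assuming the bundled Alertmanager listens on its usual port, with the mountpoint value adapted to your drives) is a silence with a matcher on that label:
$ amtool silence add --alertmanager.url=http://localhost:9093 \
    --comment="scratch disk, expected to fill" --duration=720h \
    alertname=CephNodeDiskspaceWarning mountpoint="/var/scratch"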
Yuri pointed me to a new failure in the quincy-p2p suite from 17.2.7
testing: https://tracker.ceph.com/issues/63257
RADOS is currently investigating.
On Thu, Oct 19, 2023 at 12:21 PM Venky Shankar wrote:
> Hi Yuri,
>
> On Thu, Oct 19, 2023 at 9:32 PM Yuri Weinstein
> wrote:
> >
> > We are stil
Hello Nicolas,
On Wed, Sep 27, 2023 at 9:32 AM Nicolas FONTAINE wrote:
>
> Hi everyone,
>
> Is there a way to specify which MGR and which MDS should be the active one?
With respect to the MDS, if your reason for asking is because you want
to have the better provisioned MDS as the active then I d
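A hedged sketch of the usual approach (daemon and filesystem names are placeholders): the active mgr cannot be pinned directly, but failing the current active lets the preferred standby take over, and for the MDS an affinity can be set per daemon:
$ ceph mgr fail                               # active mgr steps down, a standby takes over
$ ceph config set mds.a mds_join_fs cephfs    # prefer mds.a as active for filesystem 'cephfs'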
Igor, I noticed that there's no roadmap for the next 16.2.x release. May I
ask what time frame we are looking at with regards to a possible fix?
We're experiencing several OSD crashes caused by this issue per day.
/Z
On Mon, 16 Oct 2023 at 14:19, Igor Fedotov wrote:
> That's true.
> On 16/10/2
Hi Yuri,
On Thu, Oct 19, 2023 at 10:48 PM Venky Shankar wrote:
>
> Hi Yuri,
>
> On Thu, Oct 19, 2023 at 9:32 PM Yuri Weinstein wrote:
> >
> > We are still finishing off:
> >
> > - revert PR https://github.com/ceph/ceph/pull/54085, needs smoke suite rerun
> > - removed s3tests https://github.com/