Hello Jaemin,
the OSD scheduler is mclock now.
Read this:
https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/
and follow the steps here:
https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits
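In short, the relevant steps from that page look roughly like this (the values are
only examples, pick what fits your cluster):

# allow osd_max_backfills / osd_recovery_max_active to override the mclock defaults
$ ceph config set osd osd_mclock_override_recovery_settings true
# then set the limits as usual, e.g.
$ ceph config set osd osd_max_backfills 3
$ ceph config set osd osd_recovery_max_active_hdd 5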
Best,
Malte
On 28.06.24 08:54, Jaemin Joo wrote:
On 27-06-2024 10:56, Frédéric Nass wrote:
Hi Torkil, Ruben,
Hi Frédéric
I see two theoretical ways to do this without an additional OSD service. One that
probably doesn't work :-) and another one that could work depending on how the
orchestrator prioritizes its actions based on services crite
Hello Torkil,
I didn't want to suggest using multiple OSD services from the start as you were
trying to avoid adding more.
Here, we've been using per-host OSD specs (listing hosts rather than using a wildcard
pattern), because, as we bought new hardware over time, our cluster became more
heterogeneous than b
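For illustration, a minimal sketch of such a per-host spec (host names, service id
and device filters are made up, adjust to your hardware):

$ cat > osd_host1_host2.yaml <<EOF
service_type: osd
service_id: osd_host1_host2
placement:
  hosts:
    - host1
    - host2
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
$ ceph orch apply -i osd_host1_host2.yaml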
Hi Dietmar,
I understand the option to be set is 'wsync', not 'nowsync'. See
https://docs.ceph.com/en/latest/man/8/mount.ceph/
nowsync enables async dirops, which is what triggers the assertion in
https://tracker.ceph.com/issues/61009
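As an example, an explicit mount with async dirops disabled would look something like
this (monitor address and paths are placeholders):

$ mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,wsync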
The reason why you don't see it in /proc/mounts is because
Hi Dhairya,
I would be more than happy to share our corrupted journal. Has the host
key changed for drop.ceph.com? The fingerprint I'm being sent is
7T6dSMcUUa5refV147WEZR99UgW8Y1qYEXZr8ppvog4 which is different to the
one in our /usr/share/ceph/known_hosts_drop.ceph.com.
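For reference, a quick way to compare the two fingerprints (a sketch; it assumes
ssh-keyscan can reach drop.ceph.com from here):

$ ssh-keyscan drop.ceph.com 2>/dev/null | ssh-keygen -lf -
$ ssh-keygen -lf /usr/share/ceph/known_hosts_drop.ceph.com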
Thank you for your
Hello,
I’m using NFSGW on a Reef release, and I managed to mount CephFS volumes over NFSv4
with no problems.
Now I need to export a CephFS volume using NFSv3 (I need to mount it from a
client that only supports v3), and I tried everything I can imagine to make it
work.
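Roughly, what I'm attempting looks like this (cluster id, pseudo path and volume name
are placeholders; adding 3 to the protocols list is the part I can't get to work):

$ ceph nfs export create cephfs --cluster-id nfsgw --pseudo-path /myfs --fsname myfs
$ ceph nfs export info nfsgw /myfs > export.json
# edit export.json so that "protocols" includes 3, e.g. "protocols": [3, 4]
$ ceph nfs export apply nfsgw -i export.json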
I unchecked the flag create
- On 26 Jun 24, at 10:50, Torkil Svensgaard tor...@drcmr.dk wrote:
> On 26/06/2024 08:48, Torkil Svensgaard wrote:
>> Hi
>>
>> We have a bunch of HDD OSD hosts with DB/WAL on PCI NVMe, either 2 x
>> 3.2TB or 1 x 6.4TB. We used to have 4 SSDs per node for journals before
>> bluestore and t
>>
>> But this in a spec doesn't match it:
>>
>> size: '7000G:'
>>
>> This does:
>>
>> size: '6950G:'
There definitely is some rounding within Ceph, and base 2 vs base 10
shenanigans.
>
> $ cephadm shell ceph-volume inventory /dev/sdc --format json | jq
> .sys_api.human_readable_size
>
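For illustration, assuming a drive sold as 7.68 TB (7.68e12 bytes, base 10):

$ python3 -c 'print(7.68e12 / 2**30)'   # ~7152.6 GiB
$ python3 -c 'print(7.68e12 / 2**40)'   # ~6.98 TiB

Depending on which convention and rounding the size filter compares against, the same
drive can land on either side of a 7000G threshold, while 6950G: still matches.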
- On 28 Jun 24, at 15:27, Anthony D'Atri anthony.da...@gmail.com wrote:
>>>
>>> But this in a spec doesn't match it:
>>>
>>> size: '7000G:'
>>>
>>> This does:
>>>
>>> size: '6950G:'
>
> There definitely is some rounding within Ceph, and base 2 vs base 10
> shenanigans.
>
>>
>> $ ce
On Fri, Jun 28, 2024 at 6:02 PM Ivan Clayson wrote:
> Hi Dhairya,
>
> I would be more than happy to share our corrupted journal. Has the host
> key changed for drop.ceph.com? The fingerprint I'm being sent is
> 7T6dSMcUUa5refV147WEZR99UgW8Y1qYEXZr8ppvog4 which is different to the one
> in our /us
We came to the same conclusions as Alexander when we studied replacing Ceph's
iSCSI implementation with Ceph's NFS-Ganesha implementation: HA was not working.
During failovers, vmkernel would fail with messages like this:
2023-01-14T09:39:27.200Z Wa(180) vmkwarning: cpu18:2098740)WARNING: NFS41: