Just for completeness for anyone who is following this thread: Igor
added that setting in Octopus, so unfortunately I am unable to use it,
as I am still on Nautilus.
Thanks,
Rich
On Wed, 6 Apr 2022 at 10:01, Richard Bade wrote:
>
> Thanks Igor for the tip. I'll see if I can use this to reduce the
> number of tweaks I need.
Thanks, this should help me with some debugging around the setting
Igor suggested.
Rich
On Tue, 5 Apr 2022 at 21:20, Rudenko Aleksandr wrote:
>
> OSD uses sysfs device parameter "rotational" for detecting device type
> (HDD/SSD).
>
> You can see it:
>
> ceph osd metadata {osd_id}
>
> On 05.04.
Thanks Igor for the tip. I'll see if I can use this to reduce the
number of tweaks I need.
Rich
On Tue, 5 Apr 2022 at 21:26, Igor Fedotov wrote:
>
> Hi Richard,
>
> just FYI: one can use "bluestore debug enforce settings=hdd" config
> parameter to manually enforce HDD-related settings for a BlueStore
32GB for a dedicated node that only runs mon / mgr daemons; no OSDs. I’ve
experienced a cluster that grew over time such that 32GB was enough to run
steady-state, but as OSDs and PGs were added to the cluster it was no longer
enough to *boot* the daemons and I had to do emergency upgrades to 6
On 4/5/2022 2:47 AM, Ernesto Puerta wrote:
Hi Josh,
I'm stuck with the Grafana (ceph/ceph-grafana) image issue. I'm
discussing this with Dan & David just to see how to move forward:
* Our Docker hub credentials are no longer working (it seems we don't
push cephadm images to Docker hub a
Hello everybody,
the official documentation recommends 32GB for the monitor nodes of a
small cluster. Is that per node?
So I would need 3 nodes with 32GB RAM each in addition to the OSD nodes?
My cluster will consist of 3 replicated OSD nodes (12 OSDs each); how can
I calculate the required a
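For what it's worth, a rough back-of-the-envelope sketch (my own assumptions:
the default osd_memory_target of 4 GiB per OSD, and that the documented 32GB
figure is per monitor host; please verify both against the docs for your release):

    # per OSD node: 12 OSDs x 4 GiB osd_memory_target ~= 48 GiB,
    # plus a few GiB for the OS and per-daemon overhead, so 64 GiB is a comfortable choice
    ceph config get osd osd_memory_target   # prints the current per-OSD memory target in bytes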
Hi,
I've set up a ceph cluster using cephadm on three Ubuntu servers. Everything
went great until I tried to activate an OSD prepared on an LVM volume.
I have prepared 4 volumes with the command:
ceph-volume lvm prepare --data vg/lv
Now I try to activate one of them with the command (followed by th
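For reference, a typical activation sequence after ceph-volume lvm prepare
looks roughly like this (osd-id and osd-fsid are placeholders; ceph-volume
lvm list prints the real values):

    ceph-volume lvm list                            # show prepared OSDs with their id and fsid
    ceph-volume lvm activate <osd-id> <osd-fsid>    # activate a single prepared OSD
    ceph-volume lvm activate --all                  # or activate everything prepared on this host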
Yes, I did end up destroying and recreating the monitor.
As I wanted to reuse the same IP, it was somewhat tedious, since I had to
restart every OSD so that they would pick up the new value of mon_host.
Is there any way to tell all OSDs that mon_host has a new value without
restarting them?
On 4/4/22 16
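Not an answer to the restart question, but for checking which value a running
OSD currently has, something like this should work when run on the OSD host
(osd.0 is just an example id):

    ceph daemon osd.0 config get mon_host    # value the running daemon is currently using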
Hello Robert,
thank you for your reply, so what am I missing?
I thought that if I have 3 nodes, each with 16TB on 4 OSDs, so 16 OSDs with 44TB in total, that at size 3/2 would lead me to:
Either nearly 14TB of total pool size, knowing that in case of a lost node there will be no re-distribution du
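Taking the numbers quoted above at face value, the arithmetic being applied is
presumably:

    44 TB raw / 3 replicas ~= 14.6 TB usable (size=3)

and the practically usable amount is further reduced by the nearfull/full
ratios (0.85/0.95 by default), so planning for somewhat less is safer.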
Den tis 5 apr. 2022 kl 11:26 skrev Ali Akil :
> Hello everybody,
> I have two questions regarding bluestore. I am struggling to understand
> the documentation :/
>
> I am planning to deploy 3 ceph nodes with 10xHDDs for OSD data, Raid 0
> 2xSSDs for block.db with replication on host level.
>
> Firs
Hi Josh,
I'm stuck with the Grafana (ceph/ceph-grafana) image issue. I'm discussing
this with Dan & David just to see how to move forward:
- Our Docker hub credentials are no longer working (it seems we don't
push cephadm images to Docker hub anymore).
- The Quay.io credentials (Dan's) d
Hi Richard,
just FYI: one can use "bluestore debug enforce settings=hdd" config
parameter to manually enforce HDD-related settings for a BlueStore
Thanks,
Igor
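For anyone wanting to try this: my understanding is that the option Igor
mentions maps to bluestore_debug_enforce_settings (please verify the exact
name on your release); applying it would look roughly like:

    ceph config set osd bluestore_debug_enforce_settings hdd
    # an OSD restart is likely needed for BlueStore to pick it up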
On 4/5/2022 1:07 AM, Richard Bade wrote:
Hi Everyone,
I just wanted to share a discovery I made about running bluestore on
top of
Hello everybody,
I have two questions regarding BlueStore. I am struggling to understand
the documentation :/
I am planning to deploy 3 Ceph nodes with 10x HDDs for OSD data and 2x SSDs
in RAID 0 for block.db, with replication at the host level.
First question:
Is it possible to deploy block.db on a RAID 0 p
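Not answering the RAID question itself, but for reference, pointing block.db
at a separate (e.g. SSD-backed) logical volume with ceph-volume looks roughly
like this; vg_hdd/lv_osd0 and vg_ssd/lv_db0 are placeholder LV names:

    ceph-volume lvm prepare --data vg_hdd/lv_osd0 --block.db vg_ssd/lv_db0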
The OSD uses the sysfs device parameter "rotational" to detect the device
type (HDD/SSD).
You can see it:
ceph osd metadata {osd_id}
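For illustration, two ways to check what has been detected (sda and osd id 0
are placeholders):

    cat /sys/block/sda/queue/rotational       # 1 = rotational (HDD), 0 = non-rotational (SSD)
    ceph osd metadata 0 | grep -i rotational  # the values the OSD recorded at startup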
On 05.04.2022, 11:49, "Richard Bade" wrote:
Hi Frank, yes I changed the device class to HDD but there seems to be some
smarts in the background that apply the
Hi Frank, yes, I changed the device class to HDD, but there seem to be some
smarts in the background that apply the different settings, based not on
the class but on some other internal mechanism.
However, I did apply the class after creating the OSD, rather than during.
If someone knows how to
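In case it helps anyone following along: the device class itself can be
changed after OSD creation (osd.12 is a placeholder id), though as noted above
this only affects CRUSH placement, not the BlueStore tuning being discussed:

    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class hdd osd.12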
Hi,
On 05.04.22 at 02:53, Felix Joussein wrote:
As the command outputs below show, ceph-iso_metadata consumes 19TB
according to ceph df; however, the mounted ceph-iso filesystem is
only 9.2TB.
The values nearly add up.
ceph-vm has 2.7 TiB stored and 8.3 TiB used (3x replication).
cep
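Spelling out the arithmetic for the ceph-vm example above:

    2.7 TiB stored x 3 replicas ~= 8.1 TiB, close to the 8.3 TiB reported as USED

with the small difference typically being allocation and metadata overhead.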
Hi everyone.
I am trying to understand the concept of RBD group snapshots, but I can't.
Regular snaps (not group ones) can be used like regular RBD images: we can
export them, use them directly in qemu-img, or create a new image based on
a snap (a clone).
But if we talk about group snaps – we can’t do any
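For context, the basic group-snapshot CLI workflow being discussed looks
roughly like this (pool, image and group names are placeholders):

    rbd group create mypool/mygroup
    rbd group image add mypool/mygroup mypool/image1
    rbd group snap create mypool/mygroup@groupsnap1
    rbd group snap list mypool/mygroup

    # versus a plain per-image snapshot, which can be cloned or exported:
    rbd snap create mypool/image1@snap1
    rbd clone mypool/image1@snap1 mypool/image1-clone   # may require protecting the snap on older releases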