Just in case anybody is interested: Using dm-cache works and boosts
performance -- at least for my use case.
The "challenge" was to get 100 (identical) Linux VMs started on a three-node
hyperconverged cluster. The hardware is nothing special; each node
has a Supermicro server board with a sing
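For anyone wanting to try the same, a minimal sketch of attaching a dm-cache via LVM's lvmcache front end (the VG/LV/device names here are placeholders, not the poster's actual layout -- adjust to your setup):

```shell
# Add the fast device (e.g. an NVMe SSD) to the volume group that
# holds the slow LV, carve out a cache volume on it, and attach it.
vgextend vg0 /dev/nvme0n1
lvcreate -L 100G -n cache0 vg0 /dev/nvme0n1

# Convert the existing slow LV into a cached LV (writethrough by
# default; --cachemode writeback trades safety for speed).
lvconvert --type cache --cachevol cache0 vg0/slowlv

# Inspect the result: the LV's segment type should now be "cache".
lvs -a -o name,size,segtype,devices vg0
```

Detaching later with `lvconvert --uncache vg0/slowlv` flushes the cache and returns the LV to its plain state.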
Has anyone used AlmaLinux 9 to install Ceph? Have you encountered any problems?
Other tips on this installation are also welcome.
I have installed Ceph on AlmaLinux 9.1 (both plain Ceph and later Ceph/Rook)
on a three-node VM cluster and then a three-node bare-metal cluster
(with 4 OSDs each) without
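For reference, a rough sketch of a cephadm-based install on AlmaLinux 9; the repo package name below comes from the CentOS Storage SIG and is an assumption -- check the Ceph installation docs for the repo matching your release:

```shell
# Prerequisites cephadm expects on each node
dnf install -y podman python3 chrony lvm2

# Enable the Storage SIG repo (assumed package name) and get cephadm
dnf install -y centos-release-ceph-quincy
dnf install -y cephadm

# Bootstrap the first monitor; <first-node-ip> is a placeholder
cephadm bootstrap --mon-ip <first-node-ip>
```

After bootstrapping, further nodes are added with `ceph orch host add`.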
I'm running Quincy and my journal fills with messages that I consider
"debug" level such as:
* ceph-mgr[1615]: [volumes INFO mgr_util] scanning for idle connections..
* ceph-mon[1617]: pgmap v1176995: 145 pgs: 145 active+clean; ...
* ceph-mgr[1615]: [dashboard INFO request] ...
* ceph-mgr
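One way to quiet these from the Ceph side is to raise the relevant log thresholds; a sketch, with the caveat that option names and defaults vary by release (verify each with `ceph config help <option>` first):

```shell
# The pgmap "active+clean" summaries travel via the cluster log;
# raising its file-level threshold from the default "debug" drops them:
ceph config set mon mon_cluster_log_file_level info

# Lower the mgr's generic debug level (memory/output pair syntax),
# which should also quiet chatty mgr modules:
ceph config set mgr debug_mgr 1/5
```

This reduces what the daemons emit in the first place, rather than filtering on the journald side.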
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/operations_guide/management-of-monitoring-stack-using-the-ceph-orchestrator
Quoting Michael Lipp:
Hi,
I've just set up a test cluster with cephadm using Quincy. Things work
nicely. However, I'm not sure how to "handle" alertmanager and prometheus.
Both services obviously aren't crucial to the working of the storage,
fine. But there seems to be no built-in failover concept.
By default, t
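While cephadm offers no true HA for the monitoring stack, it can at least run redundant instances via placement specs; a sketch (note this gives independent, parallel instances that each scrape on their own, not coordinated failover):

```shell
# Run two instances each of prometheus and alertmanager;
# cephadm picks the hosts unless you name them explicitly.
ceph orch apply prometheus --placement="count:2"
ceph orch apply alertmanager --placement="count:2"

# Verify where the daemons landed
ceph orch ps --service-name prometheus
```

If an instance's host dies, cephadm reschedules the daemon elsewhere on its own, which covers many practical failure cases even without shared state.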