1 1 25 MiB 320 ...
sparci-ec 2 32 0 B 0 ...
sparci-rbd 3 32 19 B 1 ...
Have I missed some extra installation steps needed on the Ceph machines?
Cheers
Mevludin
--
Mevludin Blazevic
University of Koblenz-Landau
Computin
Hi all,
after performing "ceph orch host drain" on one of our hosts with only the
mgr container left, I find that another mgr daemon is indeed
deployed on another host, but the "old" one does not get removed by the
drain command. The same happens if I edit the mgr service via the UI to
define d
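A rough sketch of how this is usually handled with cephadm; the host and daemon names below are placeholders, not values from the thread:

# list the daemons still reported on the drained host
ceph orch ps <hostname>
# pin the mgr service to the remaining hosts so the orchestrator
# stops scheduling a mgr on the drained one
ceph orch apply mgr --placement="host1 host2"
# if the stale mgr daemon is still listed afterwards, remove it explicitly
ceph orch daemon rm mgr.<hostname>.<suffix> --force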
Hi all,
I am planning to set up an RBD pool on my Ceph cluster for the virtual
machines created in my CloudStack environment. In parallel, a CephFS
pool should be used as secondary storage for VM snapshots, ISOs, etc.
Are there any performance issues when using both RBD and CephFS, or is it
bett
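For reference, a minimal sketch of creating both side by side; the pool and filesystem names are made up for illustration:

# RBD pool for CloudStack primary storage
ceph osd pool create cloudstack
rbd pool init cloudstack
# CephFS volume for secondary storage (ISOs, snapshots, templates);
# this creates the data and metadata pools automatically
ceph fs volume create secondary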
Hi all,
on my Ceph admin machine, a lot of large files are produced by
Prometheus, e.g.:
./var/lib/ceph/8c774934-1535-11ec-973e-525400130e4f/prometheus.cephadm/data/wal/00026165
./var/lib/ceph/8c774934-1535-11ec-973e-525400130e4f/prometheus.cephadm/data/wal/00026166
./var/lib/ceph/8c774934-153
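If it helps anyone hitting the same thing, a quick way to see how much space the cephadm-deployed Prometheus is using; <FSID> stands for the cluster fsid visible in the paths above:

# total size of the Prometheus time-series database, including the WAL
du -sh /var/lib/ceph/<FSID>/prometheus.*/data
# confirm which host runs the prometheus daemon
ceph orch ps | grep prometheus

The WAL segments are rotated and compacted by Prometheus itself, so how much accumulates depends on the retention settings of the Prometheus instance cephadm deployed.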
Hi all,
I'm running Pacific with cephadm.
After installation, Ceph automatically provisioned 5 Ceph monitor nodes across
the cluster. After a few OSDs crashed due to a hardware-related issue with the
SAS interface, 3 monitor services stopped and won't restart again. Is it
related to the OS
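Not an answer, but the places one would usually look first on a cephadm cluster; host name and FSID are placeholders:

# which mon daemons does the orchestrator think are running?
ceph orch ps | grep mon
# logs of the failed mon container, run on the affected host
cephadm logs --name mon.<hostname>
# or directly via systemd on that host
journalctl -u ceph-<FSID>@mon.<hostname>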
. But without any logs or more details
it's just guessing.
Regards,
Eugen
Quoting Mevludin Blazevic:
Hi all,
I'm running Pacific with cephadm.
After installation, Ceph automatically provisioned 5 Ceph monitor
nodes across the cluster. After a few OSDs crashed due to a hardware
long
to the ceph user. Can you check
ls -l /var/lib/ceph/FSID/mon.sparci-store1/
Compare the keyring file with the ones on the working mon nodes.
Quoting Mevludin Blazevic:
Hi Eugen,
I assume the mon db is stored on the "OS disk". I could not find any
error-related lines in cephad
g file with the ones on the working mon nodes.
Quoting Mevludin Blazevic:
Hi Eugen,
I assume the mon db is stored on the "OS disk". I could not find any
error-related lines in cephadm.log; here is what journalctl -xe tells
me:
Dec 13 11:24:21 sparci-store1
ceph-8c774934-1535-11e
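To make the ownership check concrete, a sketch assuming the ceph user/group of the cephadm containers (uid/gid 167); host names and FSID are placeholders:

# compare permissions and keyring contents with a working mon host
ls -l /var/lib/ceph/<FSID>/mon.<failed-host>/
ls -l /var/lib/ceph/<FSID>/mon.<working-host>/
# if the files on the failed host are not owned by the ceph user (167:167),
# fixing ownership before restarting the daemon may help
chown -R 167:167 /var/lib/ceph/<FSID>/mon.<failed-host>/
ceph orch daemon restart mon.<failed-host>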
Hi all,
in Ceph Pacific 16.2.5, the MDS failover function does not work. The
one host with the active MDS had to be rebooted, and after that the
standby daemons did not jump in. The fs was not accessible; instead, all
MDS daemons have remained in standby until now. Also, the cluster remains in Ceph Error
du
ndby seq 1
join_fscid=1 addr
[v2:192.168.50.133:1a90/49cb4e4,v1:192.168.50.133:1a91/49cb4e4] compat
{c=[1],r=[1],i=[1]}]
dumped fsmap epoch 60
On 13.12.2022 at 20:11, Patrick Donnelly wrote:
On Tue, Dec 13, 2022 at 2:02 PM Mevludin Blazevic
wrote:
Hi all,
in Ceph Pacific 16.2.5, the MD
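Commands that usually show why a standby does not take over; the filesystem name is a placeholder:

# ranks, states and standby count
ceph fs status
# full fsmap, including max_mds, the joinable flag and standby daemons
ceph fs dump
# if the filesystem was left unjoinable (e.g. by an interrupted upgrade),
# standbys will not pick up a rank until it is re-enabled
ceph fs set <fsname> joinable true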
Hi all,
while trying to perform an update from Ceph Pacific to the current patch
version, errors occur due to failed OSD daemons which are still present
and installed on some Ceph hosts, although I purged the corresponding OSDs
using the GUI. I am using a Red Hat environment; what is the safe wa
the safe way to tell Ceph to also delete the specific daemon IDs
(not OSD IDs)?
Regards,
Mevludin
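For what it's worth, a sketch of how leftover daemons are usually removed once the OSD itself has already been purged; IDs and the FSID are placeholders:

# find daemons in error state that the orchestrator still knows about
ceph orch ps | grep -i error
# remove the stale daemon record/container via the orchestrator
ceph orch daemon rm osd.<id> --force
# or, on the affected host itself, if the orchestrator no longer manages it
cephadm rm-daemon --fsid <FSID> --name osd.<id> --force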
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
--
Mevludin Blazevic, M.Sc.
University of Koblenz
Update: it was removed from the dashboard after 6 minutes.
On 14.12.2022 at 12:11, Stefan Kooman wrote:
On 12/14/22 11:40, Mevludin Blazevic wrote:
Hi,
the strange thing is that on 2 different hosts, an OSD daemon with the
same ID is present, as seen by doing ls on /var/lib/ceph/FSID, e.g. I am
afraid
remove these daemons or what could be the
preferred workaround?
Regards,
Mevludin
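A sketch of how one would usually check which copy is the live one before removing anything; the ID, host and FSID are placeholders:

# the orchestrator's view: only one host should list osd.<id> as running
ceph orch ps | grep "osd.<id> "
# on the host with the stale copy, remove only the leftover daemon
# directory/container, not the OSD itself
cephadm rm-daemon --fsid <FSID> --name osd.<id> --force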
On 13.12.2022 at 20:32, Patrick Donnelly wrote:
On Tue, Dec 13, 2022 at 2:21 PM Mevludin Blazevic
wrote:
Hi,
thanks for the quick response!
CEPH STATUS:
cluster:
id: 8c774934-1535-11ec-973e
ue, but it seems none of the running standby daemons is
responding.
On 15.12.2022 at 19:08, Patrick Donnelly wrote:
On Thu, Dec 15, 2022 at 7:24 AM Mevludin Blazevic
wrote:
Hi,
while upgrading to Ceph Pacific 16.2.7, the upgrade process got stuck exactly
at the MDS daemons. Before that, I had tried t
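The usual way to inspect a cephadm upgrade that appears stuck; no cluster-specific values are needed here:

# current target image, progress and any error message
ceph orch upgrade status
# follow what the cephadm module is doing in real time
ceph -W cephadm
# the upgrade can also be paused and resumed if it needs manual intervention
ceph orch upgrade pause
ceph orch upgrade resume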
rting the other MONs did resolve it,
have you tried that?
[1] https://tracker.ceph.com/issues/52760
Quoting Mevludin Blazevic:
It's very strange. The keyring of the ceph monitor is the same as on
one of the working monitor hosts. The failed mon and the working mons
also have the same
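For completeness, the orchestrator way to restart the remaining mons, as suggested earlier in the thread; host names are placeholders:

ceph orch daemon restart mon.<host1>
ceph orch daemon restart mon.<host2>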
Hi all,
I have a similar question regarding a cluster configuration consisting
of HDDs, SSDs and NVMes. Let's say I would set up an OSD configuration in
a YAML file like this:
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
spec:
  data_devices:
    model: HDD-Model-XY
  db_devi
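A quick way to check what such a spec would actually do before applying it; the file name is arbitrary and the file is assumed to contain the completed spec sketched above:

# preview which disks on which hosts would become OSDs, without creating them
ceph orch apply -i osd_spec_default.yml --dry-run
# apply it for real once the preview looks right
ceph orch apply -i osd_spec_default.yml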
Hi all,
for a Ceph cluster with 256 GB of RAM per node, I would like to increase
the osd_memory_target from the default 4 GB up to 12 GB. Through the Ceph
dashboard, different scopes are offered for setting the new value (global, mon,
..., osd). Is there any difference between them? From my point of view,
I
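A sketch of how those scopes map to the config database when set from the CLI; the host name nodeA is a placeholder and 12884901888 is 12 GiB in bytes:

# applies to every OSD in the cluster
ceph config set osd osd_memory_target 12884901888
# applies only to OSDs on one host, overriding the broader setting
ceph config set osd/host:nodeA osd_memory_target 12884901888
# show the value an individual daemon actually resolves to
ceph config get osd.0 osd_memory_target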
Hi all,
after I performed a minor RHEL package upgrade (8.7 -> 8.7) on one of
our Ceph hosts, I get a Ceph warning saying that cephadm "Can't
communicate with remote host `...`, possibly because python3 is not
installed there: [Errno 12] Cannot allocate memory", although Python3 is
instal
OK, the host seems to be online again, but it took quite a long time...
On 08.05.2023 at 13:22, Mevludin Blazevic wrote:
Hi all,
after I performed a minor RHEL package upgrade (8.7 -> 8.7) on one of
our Ceph hosts, I get a Ceph warning saying that cephadm "Can't
communicat
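In case it helps the next person, the checks one would normally run when cephadm reports it cannot reach a host; the host name is a placeholder:

# re-run cephadm's own host checks (python3, container engine, time sync, ...)
ceph cephadm check-host <hostname>
# list hosts and their status as the orchestrator sees them
ceph orch host ls
# failing over the mgr restarts the cephadm module, which often clears stale state
ceph mgr fail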
Dear Ceph users,
I have a small Ceph cluster where each host consists of a small number of
SSDs and a larger number of HDDs. Is there a way to use the SSDs for
performance optimization, such as putting OSD journals on the SSDs and/or
using the SSDs for caching?
Best regards,
Mevludin
--
Mevludin
be a better and more stable
option, although it is unlikely that you will be able to automate this
with the Ceph toolset.
Best regards,
Z
On Fri, Oct 22, 2021 at 12:30 PM Mevludin Blazevic
<mblaze...@uni-koblenz.de> wrote:
Dear Ceph users,
I have a small Ceph cluster wher
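A minimal sketch of the usual cephadm approach: an OSD spec that puts data on the HDDs and the RocksDB/WAL on the SSDs. The file name is arbitrary and the device filters below use the rotational flag rather than concrete models, purely for illustration:

cat > osd_hdd_with_ssd_db.yml <<'EOF'
service_type: osd
service_id: hdd_with_ssd_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
# preview first, then apply
ceph orch apply -i osd_hdd_with_ssd_db.yml --dry-run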
Hi all,
after updating a host address via "ceph orch host set-addr x y", the alert
OSD_UNREACHABLE appears, although the OSDs are accessible. The issue
persists after I delete the OSDs and add them again after zapping the disks.
How can I resolve this?
Best, Mejdi
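A sketch of what is usually checked in this situation; the OSD ID is a placeholder:

# does the orchestrator show the new address for the host?
ceph orch host ls
# which address and hostname does the OSD itself report?
ceph osd metadata <osd-id> | grep -E 'addr|hostname'
# failing over the mgr forces cephadm to refresh its host/daemon inventory,
# which often clears stale health checks
ceph mgr fail
ceph health detail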