a, 4.7 TiB used, 12 TiB / 16 TiB avail; 2.7 KiB/s rd, 1.3 MiB/s wr, 56 op/s
I am running Ceph Quincy 17.2.5 on a test system with a dedicated
1 Gbit / MTU 9000 storage network, while the public Ceph network
(1 Gbit / MTU 1500) is shared with the VM network.
I am looking forward to your suggestions.
Hi,
On 2022-11-19 17:32, Anthony D'Atri wrote:
I’m not positive that the options work with hyphens in them. Try
ceph tell osd.* injectargs '--osd_max_backfills 1
--osd_recovery_max_active 1 --osd_recovery_max_single_start 1
--osd_recovery_op_priority=1'
Did so.
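(Aside: a sketch of the same throttles set persistently via the config database; the option names are the standard OSD recovery settings, not something taken from this thread:

# persistent equivalents of the injectargs above (assumed, not what was actually run here)
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_recovery_max_single_start 1
ceph config set osd osd_recovery_op_priority 1

On Quincy with the default mClock scheduler these limits may be overridden by the scheduler's own recovery profile, which is worth keeping in mind if they appear to have no effect.)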
With Quincy the following sh
Regards
ppa. Martin Konold
--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Geschäftsführer: Andreas Mack
Im Köller 3, 70794 Filderstadt, Germany
On 2022-12-09 21:10, Murilo Morais wrote:
Hi Martin, thanks for replying.
I'm using v17.2.3.
E
error
I verified that the hardware of the new NVMe is working fine.
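As a sketch of how such a check is commonly done (the device name is a placeholder, not from the original post):

smartctl -x /dev/nvme0n1        # SMART health and error log, assuming smartmontools is installed
nvme smart-log /dev/nvme0n1     # media errors and wear indicators, assuming nvme-cli is installed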
--
Regards,
ppa. Martin Konold
--
Kind regards
ppa. Martin Konold
--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Geschäftsführer: Andreas Mack
Im Köller 3, 70794 Filderstadt
7f99aa28f3c0 1 bdev(0x5565c261fc00 /var/lib/ceph/osd/ceph-43/block) close
2023-09-11T16:30:04.940+0200 7f99aa28f3c0 -1 osd.43 0 OSD:init: unable to mount object store
2023-09-11T16:30:04.940+0200 7f99aa28f3c0 -1 ** ERROR: osd init failed: (5) Input/output error
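An "osd init failed: (5) Input/output error" at mount time generally points at the block device or the BlueStore metadata on it. A sketch of the usual next checks, using the OSD path from the log above (everything else here is an assumption, not from this thread):

# kernel-level I/O errors on the underlying device
dmesg | grep -iE 'i/o error|nvme|blk'
# offline consistency check of the OSD's BlueStore store
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-43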
--
Regards,
ppa. Martin Konold
your response.
Regards
ppa. Martin Konold
--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Geschäftsführer: Andreas Mack
Im Köller 3, 70794 Filderstadt, Germany
On 2023-09-11 22:08, Igor Fedotov wrote:
Hi Martin,
could you please share the
ts, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs: 100.000% pgs unknown
448 unknown
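When all PGs report as unknown like this, it is usually the mgr that is not reporting PG stats rather than the PGs themselves being broken. A quick way to confirm (standard commands, not from the original message):

ceph mgr stat   # is there an active mgr at all?
ceph osd stat   # are the OSDs up/in, or is the map empty as well?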
--
Kind Regards
ppa. Martin Konold
--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Geschäftsführer: Andreas Mack
Im Köller
--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Geschäftsführer: Andreas Mack
Im Köller 3, 70794 Filderstadt, Germany
On 2022-04-01 11:17, Janne Johansson wrote:
On Fri, 1 Apr 2022 at 11:15, Konold, Martin wrote:
Hi,
running Ceph 16.2.7 on a
ppa. Martin Konold
--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
Amtsgericht Stuttgart, HRB 23690
Geschäftsführer: Andreas Mack
Im Köller 3, 70794 Filderstadt, Germany
On 2022-04-02 03:36, York Huang wrote:
Hi,
How about this "osd: 7 osds: 6 up (since 3h), 6 in (since 6w)"
Run status group 0 (all jobs):
   READ: bw=210MiB/s (220MB/s), 210MiB/s-210MiB/s (220MB/s-220MB/s), io=12.3GiB (13.2GB), run=60001-60001msec

Disk stats (read/write):
  sdd: ios=3224017/2, sectors=25792136/3, merge=0/0, ticks=168114/14, in_queue=168141, util=99.03%

This was HDD (3/2 replication).
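For context, the counters above (ios=3224017 over 60 s, 8 sectors per I/O) fit a 4 KiB sequential read job against /dev/sdd; the original fio command is not in the post, so the following is only an assumed reconstruction:

fio --name=seqread --filename=/dev/sdd --rw=read --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting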
--
--martin konold
Hi,
I am working on a small 3-node Ceph cluster which used to work as expected.
When creating a new Ceph OSD, the ceph-volume command throws some errors
and a filestore OSD is created instead of a bluestore one. (The drive was
cleaned with blkdiscard beforehand and no traces were left in /etc from
previous attempts.)
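A sketch of pinning the store type explicitly when invoking ceph-volume by hand (the device path is a placeholder, not from the original post):

# --bluestore is the default, but passing it makes the intent explicit
ceph-volume lvm create --bluestore --data /dev/sdX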
On 2025-05-16 18:46, Eugen Block wrote:
Hi,
Which Ceph version is this? It's apparently not managed by cephadm.
It is Ceph 19.2.1 from Proxmox running on Debian 12 (bookworm).
Regards
--
--martin konold
ppa. Martin Konold
--
Martin Konold - Prokurist, CTO
KONSEC GmbH - make things real
On 2025-05-16 18:01, Anthony D'Atri wrote:
I wouldn't think blkdiscard would necessarily fully clean. I would try
sgdisk --zap-all or ceph-volume lvm zap
I gave this a try, in addition to a reboot, but nothing changed; still no
bluestore OSD is created as intended.
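For completeness, the zap step usually looks like this (the device path is a placeholder; --destroy additionally removes LVM metadata and partitions):

sgdisk --zap-all /dev/sdX               # wipe GPT/MBR structures
ceph-volume lvm zap /dev/sdX --destroy  # wipe LVM metadata and leftover Ceph labels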
I guess this is the culprit:
2