Hello.
Thanks to advice from bauen1 I now have OSDs on Debian/Nautilus and have
been able to move on to MDS and CephFS. Also, looking around in the
Dashboard I noticed the options for Crush Failure Domain and further
that it's possible to select 'OSD'.
As I mentioned earlier our cluster is
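For reference, the CLI equivalent of that Dashboard choice is a CRUSH rule whose failure domain is 'osd'. A minimal sketch, assuming a replicated pool; the rule name 'rep-by-osd' and pool name 'mypool' are placeholders, not from this thread:

# create a replicated rule that picks individual OSDs as the failure domain
ceph osd crush rule create-replicated rep-by-osd default osd
# point an existing pool at that rule
ceph osd pool set mypool crush_rule rep-by-osd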
Hi,
Only now has it been possible to continue with this. Below is the information required.
Thanks in advance,
Gesiel
On Mon, Jan 20, 2020 at 15:06, Mike Christie wrote:
> On 01/20/2020 10:29 AM, Gesiel Galvão Bernardes wrote:
> > Hi,
> >
> > Only now have I been able to act on this problem. My environ
Hi.
This is the cluster information.
-- /var/log/ceph/ceph.osd.1.log ---
2020-02-01 03:47:20.635504 7f86f4e40700 1 heartbeat_map is_healthy
'OSD::osd_op_tp thread 0x7f86fe35e700' had timed out after 15
2020-02-01 03:47:20.635521 7f86f4f41700 1 heartbeat_m
Servers: 6 (7 OSDs each), 42 OSDs in total
OS: CentOS 7
Ceph: 10.2.5
Hi, everyone
The cluster is used for VM image storage and object storage.
And I have a bucket which has more than 20 million objects.
Now I have a problem where the cluster blocks operations.
Suddenly the cluster blocked operations, then
Hi Philipp,
More nodes are better: more availability, more CPU, and more RAM. But I agree
that your 1GbE link will be the most limiting factor, especially if there are some
SSDs. I suggest you upgrade your networking to 10GbE (or 25GbE, since it will
cost you nearly the same as 10GbE). Upgrading you
Hi Francois,
I'm afraid you need more rooms to get that availability. For the data pool
you will need 5 rooms due to your 3+2 erasure profile, and for metadata you will
need 3 rooms due to your 3-replica rule. If you have only 2 rooms, there is a
possibility of corrupted data whenever you l
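For illustration, this is roughly how such a room-level layout is expressed; a sketch only, assuming rooms already exist as CRUSH buckets, with placeholder profile and rule names:

# EC profile with k=3 data + m=2 coding chunks, one chunk per room (needs 5 rooms)
ceph osd erasure-code-profile set ec-3-2-room k=3 m=2 crush-failure-domain=room
# replicated rule for the metadata pool, spread across rooms (needs 3 rooms for size=3)
ceph osd crush rule create-replicated rep-by-room default room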
Hi Willi,
Since you still need iSCSI/NFS/Samba for Windows clients, I think it is better to
have virtual ZFS storage backed by Ceph (RBD). I have experience running
FreeNAS virtually with some volumes making up the ZFS pool. The performance is
pretty satisfying, almost 10Gbps of iSCSI throughput on 2
Hello,
answering myself in case someone else stumbles upon this thread in the
future. I was able to remove the unexpected snap; here is the recipe:
How to remove the unexpected snapshots:
1.) Stop the OSD
ceph-osd -i 14 --flush-journal
... flushed journal /var/lib/ceph/osd/ceph-14/journal fo
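For context, stopping the daemon in step 1 is normally done through systemd before the flush; a small sketch using the osd id from the excerpt:

systemctl stop ceph-osd@14        # make sure the daemon is down first
ceph-osd -i 14 --flush-journal    # then flush its FileStore journal, as in the recipe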
So I recently updated Ceph and rebooted the OSD nodes; the two OSDs are now
even more unbalanced, and Ceph is actually currently moving more PGs to the
OSDs in question (OSD 7 and 8). Any ideas?
ceph balancer status
{
"last_optimize_duration": "0:00:00.000289",
"plans": [],
"mod
Update: When repairing the PG I get a different error:
osd.14 80.69.45.76:6813/4059849 27 : cluster [INF] 7.374 repair starts
osd.14 80.69.45.76:6813/4059849 28 : cluster [ERR] 7.374 recorded data
digest 0xebbbfb83 != on disk 0x43d61c5d on
7/a29aab74/rbd_data.59cb9c679e2a9e3.3096/29c4
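For reference, the inspection and repair steps that produce output like the above are usually issued as follows; a sketch, with the pgid taken from the log lines:

# list the objects the scrub flagged as inconsistent, with their digests
rados list-inconsistent-obj 7.374 --format=json-pretty
# kick off the repair again (this is what logs 'repair starts')
ceph pg repair 7.374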
Hello,
for those stumbling upon a similar issue: I was able to mitigate the
issue by setting
=== 8< ===
[osd.14]
osd_pg_max_concurrent_snap_trims = 0
=== 8< ===
in ceph.conf. You don't need to restart the osd; the osd crash +
systemd will do it for you :)
Now the osd in question does no tr
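As an aside, the same option can usually be applied to the running daemon without touching ceph.conf; a sketch for osd.14, using the pre-Mimic injectargs syntax:

# inject the setting into the live OSD so no restart (or crash) is needed
ceph tell osd.14 injectargs '--osd_pg_max_concurrent_snap_trims 0'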