Hi,
I have invested in three Samsung PM983 (MZ1LB960HAJQ-7) drives to run a fast pool on.
However, I am only getting 150 MB/s from them.
fio results directly on the NVMe drives:
https://docs.google.com/spreadsheets/d/1LXupjEUnNdf011QNr24pkAiDBphzpz5_MwM0t9oAl54/edit?usp=sharing
Config and Results of cep
I activated the autoscaler on all my pools but found this error message for the cache_pool:
ceph-mgr.pve22.log.3:2020-05-01 23:59:24.014 7f5120eda700 0 mgr[pg_autoscaler]
pg_num adjustment on cache_pool to 512 failed: (-1, '', 'splits in cache pools
must be followed by scrubs and leave suff
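For anyone hitting the same message: one way around it (a sketch, assuming the pool really is named cache_pool) is to take the autoscaler out of the loop for that pool and split pg_num manually, scrubbing between increases as the error demands:

```shell
# Disable the autoscaler on the cache pool only; other pools keep autoscaling.
ceph osd pool set cache_pool pg_autoscale_mode off

# Split in steps; "splits in cache pools must be followed by scrubs",
# so scrub the pool and wait for it to finish between increases.
ceph osd pool set cache_pool pg_num 256
ceph osd pool deep-scrub cache_pool
ceph osd pool set cache_pool pg_num 512
```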
Hi,
I finally got my Samsung PM983 [1] to use as a journal for about six drives, plus drive cache, replacing a consumer SSD (Kingston SV300).
But I can't for the life of me figure out how to move an existing journal to this NVMe on my Nautilus cluster.
# Created a new big partition on the NVMe
sgdi
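The post is cut off here, but for completeness, a rough sketch of moving a FileStore journal to a new NVMe partition (assuming OSD id 3; the partition UUID is a placeholder) looks like:

```shell
systemctl stop ceph-osd@3
ceph-osd -i 3 --flush-journal        # flush the old journal to the data disk
ln -sf /dev/disk/by-partuuid/<new-uuid> /var/lib/ceph/osd/ceph-3/journal
ceph-osd -i 3 --mkjournal            # initialize the journal on the NVMe
systemctl start ceph-osd@3
```

Note this only applies to FileStore OSDs; BlueStore has no journal, and the rough equivalent there is migrating the DB/WAL device with `ceph-bluestore-tool bluefs-bdev-migrate`.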
Hi Wido,
It was one of the first things I checked, yes, and it was synced properly. I have the full logs, but since everything works now, I am unsure whether I should upload them to the tracker?
Thanks,
A
___
ceph-users mailing list -- ceph-users@ceph.io
To
Final update.
I switched the settings below from false to true and everything magically started working!
cephx_require_signatures = true
cephx_cluster_require_signatures = true
cephx_sign_messages = true
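For anyone following along, these options live in the [global] section of ceph.conf (or can be set via the mon config database with `ceph config set global ...`); a minimal fragment:

```ini
[global]
cephx_require_signatures = true
cephx_cluster_require_signatures = true
cephx_sign_messages = true
```

Daemons and clients need to be restarted (or remounted) to pick the change up.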
Hi,
I am still having issues accessing my CephFS and managed to pull out more interesting logs. I have also raised the log level to 20/20 and intend to upload those logs as soon as my Ceph tracker account gets accepted.
Oct 17 16:35:22 pve21 kernel: libceph: read_partial_message 8ae0e636
signature chec
Hi list,
We had a power outage that killed the whole cluster. CephFS will not start at all, but RBD works just fine.
I did have 4 unfound objects that I eventually had to revert or delete, which I don't really understand, as I should have had a copy of those objects on the other drives?
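The commands for that last step (a sketch; the PG id 2.5 here is hypothetical) are:

```shell
# Revert unfound objects to their last known on-disk version...
ceph pg 2.5 mark_unfound_lost revert
# ...or give them up entirely if no prior version exists.
ceph pg 2.5 mark_unfound_lost delete
```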
2/3 mons and
Hi,
I am trying to figure out why Portainer and Pi-hole in Docker keep ending up with corrupted databases. All other Docker applications are working flawlessly, but not these.
I am running Ubuntu 18.04 with a kernel CephFS mount for the data directory.
Have looked at how others do it, and they seem to all u
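A common culprit with SQLite-backed apps like Portainer and Pi-hole on a network filesystem is database locking. One hedged workaround is to keep the SQLite files on local disk (or an RBD-backed volume) and only the bulk data on CephFS; a hypothetical docker-compose fragment, with placeholder paths:

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    volumes:
      # SQLite databases on local storage to avoid lock/corruption issues
      - /var/lib/pihole-db:/etc/pihole
      # bulk/shared data can stay on the CephFS kernel mount
      - /mnt/cephfs/pihole/dnsmasq.d:/etc/dnsmasq.d
```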