[ceph-users] Re: cephfs ha mount expectations

2022-10-26 Thread mj
Oh. So that should perhaps help a bit. Is there any difference between an 'uncontrolled' ceph server (accidental) reboot, and a controlled reboot, where we (for example) first fail over the MDS in a controlled, gentle way? MJ On 26-10-2022 at 14:40, Eugen Block wrote: Just one comment
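
[Editor's note: for reference, a controlled MDS failover usually comes down to failing the active rank and letting a standby take over. A minimal sketch, assuming rank 0 and at least one standby daemon:

    # show the current active/standby MDS daemons
    ceph fs status
    # fail rank 0; a standby should take over within seconds
    ceph mds fail 0
    # watch the cluster settle
    ceph -s
]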

[ceph-users] cephfs ha mount expectations

2022-10-26 Thread mj
timeout the client switches monitor node, and /mnt/ha-pool/ will respond again? Of course we hope the answer is: in such a setup, cephfs clients should not notice a reboot at all. :-) All the best! MJ
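
[Editor's note: the usual way to make a cephfs mount survive a monitor reboot is to list all monitors in the mount source, so the kernel client can switch between them. A sketch, assuming three mons and a hypothetical client keyring:

    mount -t ceph mon1,mon2,mon3:/ /mnt/ha-pool \
      -o name=cephfs-client,secretfile=/etc/ceph/cephfs.secret
]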

[ceph-users] Re: SATA SSD recommendations.

2021-11-22 Thread mj
) MJ On 22-11-2021 at 16:25, Luke Hall wrote: They seem to work quite nicely, and their wearout (after one year) is still at 1% for our use. Thanks, that's really useful to know.
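
[Editor's note: the wearout figure can be read from SMART. A sketch — attribute names differ per vendor; Wear_Leveling_Count is what Samsung drives typically report:

    smartctl -A /dev/sda | grep -i -e wear -e percent
]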

[ceph-users] Re: SATA SSD recommendations.

2021-11-22 Thread mj
Hi, We were in the same position as you, and replaced our 24 4TB harddisks with Samsung PM883. They seem to work quite nicely, and their wearout (after one year) is still at 1% for our use. MJ On 22-11-2021 at 13:57, Luke Hall wrote: Hello, We are looking to replace the 36 aging 4TB

[ceph-users] failing dkim

2021-10-25 Thread mj
and or appreciated here. MJ

[ceph-users] Re: How to make HEALTH_ERR quickly and pain-free

2021-10-23 Thread mj
On 21-01-2021 at 11:57, George Shuklin wrote: I have a hell of a question: how to produce HEALTH_ERR status for a cluster without consequences? I'm working on CI tests and I need to check if our reaction to HEALTH_ERR is good. For this I need to take an empty cluster with an empty pool and d
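
[Editor's note: one possible trick on a disposable test cluster is to lower the full ratio so every OSD reports full — OSD_FULL is an error-level health check — and then restore it. A sketch, assuming a throwaway cluster with the default 0.95 full ratio:

    # force HEALTH_ERR: any OSD with any usage is now considered 'full'
    ceph osd set-full-ratio 0.001
    ceph health detail    # should show OSD_FULL / HEALTH_ERR
    # undo
    ceph osd set-full-ratio 0.95
]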

[ceph-users] Re: Do people still use LevelDBStore?

2021-10-17 Thread mj
Hi Janne and Dan, Thanks for sharing your insights! MJ On 15-10-2021 at 16:07, Janne Johansson wrote: On Fri 15 Oct 2021 at 13:38, mj wrote: On 15-10-2021 at 11:03, Dan van der Ster wrote: Bluestore will be rocks (100% of the time IIUC). FileStore may be level -- check `ceph daemon

[ceph-users] Re: Do people still use LevelDBStore?

2021-10-15 Thread mj
"rocksdb_write_pre_and_post_time": { Does it make sense? MJ ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Do people still use LevelDBStore?

2021-10-15 Thread mj
though at this moment, we don't find much info on converting from leveldb to rocksdb on the pve wiki) MJ On 15-10-2021 at 09:30, Dan van der Ster wrote: For a mon:
    # cat /var/lib/ceph/mon/ceph-xxx/kv_backend
    rocksdb
For an OSD, look for leveldb/rocksdb messages in the log. -- dan On Fr
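
[Editor's note: putting Dan's hints together, a quick check might look like the following sketch — socket paths and OSD ids are examples, and filestore_omap_backend only applies to FileStore OSDs:

    # monitor: the kv backend is recorded on disk
    cat /var/lib/ceph/mon/ceph-$(hostname)/kv_backend
    # FileStore OSD: ask the daemon which omap backend it uses
    ceph daemon osd.0 config get filestore_omap_backend
]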

[ceph-users] Re: Do people still use LevelDBStore?

2021-10-15 Thread mj
This is a very basic question, but how do you quickly check whether you still run leveldb instead of rocksdb? On 14-10-2021 at 16:55, Konstantin Shalygin wrote: +1 we converted all leveldb monstores to rocksdb on luminous k Sent from my iPhone On 14 Oct 2021, at 10:42, Dan van der Ster wrote: +1 f

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-06-04 Thread mj
Hi, On 6/4/21 12:57 PM, mhnx wrote: I wonder: when an OSD comes back from a power loss, the data all gets scrubbed and there are 2 other copies. PLP is important mostly for block storage; Ceph should easily recover from that situation. That's why I don't understand why I should pay more for PLP and o
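
[Editor's note: the usual argument for PLP is latency rather than recovery — Ceph's journal/WAL does synchronous writes, and only drives with capacitor-backed caches can safely ack an fsync quickly. A commonly used test that makes the difference visible; destructive, so only on an empty device, and /dev/sdX is a placeholder:

    fio --name=sync-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based
]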

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-06-04 Thread mj
99 Samsung 860 pro (512GB) = 5 Years or 600 TBW - $99 But do these not lack power-loss protection..? We are running the Samsung PM883, as I was told that these would do much better as OSDs. MJ

[ceph-users] Re: Suitable 10G Switches for ceph storage - any recommendations?

2021-05-19 Thread mj
The complete thread on the subject, including many more recommendations, is here: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/5DH57H4VO2772LTXDVD4APMPK3DRZDKD/#5DH57H4VO2772LTXDVD4APMPK3DRZDKD Best, MJ On 5/19/21 2:10 PM, Max Vernimmen wrote: Hermann, I think there was a discussi

[ceph-users] Re: 10G stackabe lacp switches

2021-02-22 Thread mj
can confirm or deny..? (other than: "You need to buy it, and then try it") Thanks! MJ On 21/02/2021 11:37, Martin Verges wrote: Hello MJ, Arista has a good documentation available for example at https://www.arista.com/en/um-eos/eos-multi-chassis-link-aggregation or https://eos.arist

[ceph-users] Re: 10G stackabe lacp switches

2021-02-20 Thread mj
accepted to ask arista-specific (MLAG) config questions here on this list... Have a nice weekend all! MJ On 2/15/21 1:41 PM, Sebastian Trojanowski wrote: 3y ago I bought it on ebay to my home lab for 750$ with transport and duty and additional tax, so it's possible https://www.ebay.com/

[ceph-users] Re: 10G stackabe lacp switches

2021-02-15 Thread mj
On 2/15/21 1:38 PM, Eneko Lacunza wrote: Do you really need MLAG? (the 2x10G bandwidth?). If not, just use 2 simple switches (Mikrotik for example) and in Proxmox use an active-passive bond, with the default interface on all nodes pointing to the same switch. Since we are now on SSD OSDs only, and our aim
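
[Editor's note: for reference, an active-passive bond on Proxmox boils down to a few lines in /etc/network/interfaces. A sketch with hypothetical NIC names and addressing:

    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-primary eno1
        bond-miimon 100
]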

[ceph-users] Re: 10G stackabe lacp switches

2021-02-15 Thread mj
from UTP to SFP+, I guess? Will check it out. Thanks again! MJ On 2/15/21 12:50 PM, Stefan Kooman wrote: On 2/15/21 12:16 PM, mj wrote: As we would like to be able to add more storage hosts, we need to lose the meshed network setup. My idea is to add two stacked 10G ethernet switches t

[ceph-users] 10G stackabe lacp switches

2021-02-15 Thread mj
re, and also performance-wise we're happy with what we currently have. Last December I wrote to Mikrotik support, asking if they will support stacking / LACP any time soon, and their answer was: probably 2nd half of 2021. So, anyone here with interesting insights to share for ceph 10G etherne

[ceph-users] using secondary OSDs for reading

2021-02-09 Thread mj
multiple copies of the same data, to try and use the nearest copy? Curious :-) MJ
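
[Editor's note: Ceph clients always read from a PG's primary OSD, so the closest you can get is steering which OSDs become primaries, via primary affinity. A sketch with a hypothetical OSD id (on very old releases this additionally needed mon_osd_allow_primary_affinity=true):

    # make osd.12 less likely to be chosen as primary
    ceph osd primary-affinity osd.12 0.5
    # or take it out of primary duty entirely
    ceph osd primary-affinity osd.12 0
]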

[ceph-users] Re: osd recommended scheduler

2021-02-02 Thread mj
Is there also something we need to change accordingly in ceph.conf? We simply added to rc.local:
    echo cfq > /sys/block/sda/queue/scheduler
    echo cfq > /sys/block/sdf/queue/scheduler
Anything else to do, besides changing cfq to noop in the above..? Thanks for the tip! MJ On 2/2
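
[Editor's note: the I/O scheduler is a block-layer setting, so ceph.conf itself shouldn't need changes for this. A more persistent alternative to rc.local is a udev rule; a sketch, with the device match as an example:

    # /etc/udev/rules.d/60-scheduler.rules
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
]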

[ceph-users] Re: Samsung PM883 3.84TB SSD performance

2021-01-19 Thread mj
Hi both, Thanks for both quick replies, and both (of course) 100% spot-on! With 4k, IOPS is around 18530 :-) Thank you both, and apologies for the noise! Best, MJ On 19/01/2021 14:57, Marc Roos wrote: You should test with 4k not 4M. -Original Message- From: mj Sent: 19 January
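
[Editor's note: for anyone finding this later, a 4k random-write test producing this kind of IOPS figure looks roughly like the sketch below — destructive on the target device, /dev/sdX is a placeholder, and the exact iodepth used in the thread is unknown:

    fio --name=4k-randwrite --filename=/dev/sdX --direct=1 \
        --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
        --numjobs=1 --runtime=60 --time_based
]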

[ceph-users] Samsung PM883 3.84TB SSD performance

2021-01-19 Thread mj
, read cache: enabled, supports DPO and FUA [4145961.915996] sd 0:0:16:0: [sdd] Attached SCSI disk Anyone with an idea what we could be doing wrong? Or are these disks really unsuitable for OSD use? MJ

[ceph-users] Re: osd gradual reweight question

2021-01-11 Thread mj
already set the osd_op_queue_cut_off and recovery/backfill settings to 1. Thank you both for your answers! We'll continue with the gradual weight decreases. :-) MJ On 1/9/21 12:28 PM, Frank Schilder wrote: One reason for such observations is swap usage. If you have swap configured, you s
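
[Editor's note: the throttles mentioned above can be applied at runtime; a sketch, with option names as in Nautilus-era releases:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # and the op queue cut-off, so client ops keep priority over recovery
    ceph tell osd.* injectargs '--osd-op-queue-cut-off high'
]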

[ceph-users] osd gradual reweight question

2021-01-08 Thread mj
gradual step-by-step decrease? I would assume the impact to be similar, only the time it takes to reach HEALTH_OK to be longer. Thanks, MJ
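
[Editor's note: a gradual drain sketched as a loop, with a hypothetical OSD id and step sizes — wait for HEALTH_OK between each step:

    for w in 0.8 0.6 0.4 0.2 0.0; do
        ceph osd reweight 12 $w
        # wait until recovery finishes before the next step
        while ! ceph health | grep -q HEALTH_OK; do sleep 60; done
    done
]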

[ceph-users] add OSDs to cluster

2020-12-01 Thread mj
ally not be any degraded data redundancy, right..? MJ
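
[Editor's note: one common way to keep control when adding OSDs is to block rebalancing first and then raise the crush weight in steps. A sketch with hypothetical ids and weights:

    ceph osd set norebalance
    # ... add the new OSDs ...
    ceph osd crush reweight osd.24 0.5   # bring them in gradually
    ceph osd crush reweight osd.24 1.0
    ceph osd unset norebalance           # let backfill start
]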

[ceph-users] ssd suggestion

2020-11-23 Thread mj
r not...) they perform. We just wanted to ask here: anyone with suggestions on alternative SSDs we should consider? Or other tips we should take into consideration..? Thanks, MJ ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an ema

[ceph-users] Re: Cluster network and public network

2020-05-12 Thread mj
address 192.168.0.5 netmask 255.255.255.0, and add our cluster IP as a second IP, like:
    auto bond0:1
    iface bond0:1 inet static
        address 192.168.10.160
        netmask 255.255.255.0
On all nodes, reboot, and everything will work? Or are there ceph spec
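
[Editor's note: on the Ceph side, the split is two settings in ceph.conf; a sketch, with subnets mirroring the example above (daemons need a restart to pick it up):

    [global]
        public network = 192.168.0.0/24
        cluster network = 192.168.10.0/24
]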

[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-03-16 Thread mj
as a VM on our proxmox/ceph cluster. I see the advantage of having vfs_ceph_snapshots of the samba user-data. But then again: re-sharing data using samba vfs_ceph adds a layer of complexity to the setup. Anyone here running samba with vfs_ceph? Experiences? MJ
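
[Editor's note: for the archives, a vfs_ceph share is only a few lines of smb.conf. A sketch, assuming a cephfs client keyring for a hypothetical user 'samba' and an example path:

    [data]
        path = /shares/data
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
]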

[ceph-users] Re: Ceph Performance of Micron 5210 SATA?

2020-03-11 Thread mj
lable) works very well. What can be done to make proxmox display the WEAR level? Best, MJ On 06/03/2020 10:53, Marc Roos wrote: If you are asking, maybe run this?
    [global]
    ioengine=libaio
    invalidate=1
    ramp_time=30
    iodepth=1
    runtime=180
    time_based
    direct=1
    filename=/dev/sdf
    [write-4k-seq] s

[ceph-users] Re: Ceph Performance of Micron 5210 SATA?

2020-03-06 Thread mj
B/sec) for long lasting *continuous* writing? (after filling up a write buffer or such) But given time to empty that buffer again, it should again write at the normal higher speed? So in applications with enough variation between reading and writing, they could still perform well enough?

[ceph-users] Re: Ceph Performance of Micron 5210 SATA?

2020-03-06 Thread mj
645579
    Max latency(s): 0.336118
    Min latency(s): 0.0117049
Do let me know what else you'd want me to do. MJ
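
[Editor's note: latency figures like these come from rados bench; the invocation is along these lines, with a hypothetical pool name:

    rados bench -p testpool 60 write -t 16 --no-cleanup
    rados bench -p testpool 60 seq -t 16
    rados -p testpool cleanup
]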

[ceph-users] Re: Ceph Performance of Micron 5210 SATA?

2020-03-05 Thread mj
I have just ordered two of them to try. (the 3.47GB ION's) If you want, next week I could perhaps run some commands on them..? MJ On 3/5/20 9:38 PM, Hermann Himmelbauer wrote: Hi, Does someone know if the following harddisk has a decent performance in a ceph cluster: Micron 5210 ION 1

[ceph-users] Re: extract disk usage stats from running ceph cluster

2020-02-13 Thread mj
osd.21  up  1.0  1.0
    22  hdd  3.64000  osd.22  up  1.0  1.0
    23  hdd  3.63689  osd.23  up  1.0  1.0
MJ

[ceph-users] Re: extract disk usage stats from running ceph cluster

2020-02-12 Thread mj
On 2/12/20 11:23 AM, mj wrote: Better layout for the disk usage stats: https://pastebin.com/8V5VDXNt

[ceph-users] Re: extract disk usage stats from running ceph cluster

2020-02-12 Thread mj
nodes. How can the reported disk stats for node2 be SO different from the other two nodes, whereas for the rest everything seems to be running as it should? Or are we missing something? Thanks! MJ
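
[Editor's note: one way to cross-check suspect per-disk counters is against the drives' SMART lifetime counters; a sketch, with device names as examples and attribute names varying by vendor:

    for d in /dev/sd{a..f}; do
        echo "== $d =="
        smartctl -A $d | grep -i -e 'Total_LBAs' -e 'Host_'
    done
]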

[ceph-users] Re: Benchmark results for Seagate Exos2X14 Dual Actuator HDDs

2020-01-16 Thread mj
More details, different capacities etc: https://www.seagate.com/nl/nl/support/internal-hard-drives/enterprise-hard-drives/exos-X/ MJ On 1/16/20 9:51 AM, Konstantin Shalygin wrote: On 1/15/20 11:58 PM, Paul Emmerich wrote: we ran some benchmarks with a few samples of Seagate's new HDDs

[ceph-users] Re: Benchmark results for Seagate Exos2X14 Dual Actuator HDDs

2020-01-16 Thread mj
Hi, Interesting technology! It seems they have only one capacity: 14TB? Or are they planning different sizes as well? Also the linked pdf mentions just this one disk. And obviously the price would be interesting to know... MJ On 1/16/20 9:51 AM, Konstantin Shalygin wrote: On 1/15/20 11:58