O. So that should perhaps
help a bit.
Is there any difference between an 'uncontrolled' (accidental) ceph server
reboot and a controlled reboot, where we (for example) first fail over the
MDS in a controlled, gentle way?
MJ
On 26-10-2022 at 14:40, Eugen Block wrote:
Just one comment
timeout the client switches monitor node, and /mnt/ha-pool/
will respond again?
Of course we hope the answer is: in such a setup, cephfs clients should
not notice a reboot at all. :-)
All the best!
MJ
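For what it's worth, a typical controlled-reboot sequence might look roughly
like this -- a sketch, assuming a single active MDS at rank 0 and that
rebalancing should be avoided while the node is down; the rank and service
names are examples:
ceph osd set noout                        # keep CRUSH from rebalancing during the reboot
ceph mds fail 0                           # hand the active MDS rank over to a standby
systemctl stop ceph-mds@$(hostname -s)    # stop the local MDS so it stays out of the way
# ...reboot the node...
ceph osd unset noout                      # allow normal recovery handling again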
MJ
On 22-11-2021 at 16:25, Luke Hall wrote:
They seem to work quite nicely, and their wearout (after one year) is
still at 1% for our use.
Thanks, that's really useful to know.
Hi,
We were in the same position as you, and replaced our 24 4TB hard disks
with Samsung PM883s.
They seem to work quite nicely, and their wearout (after one year) is
still at 1% for our use.
MJ
On 22-11-2021 at 13:57, Luke Hall wrote:
Hello,
We are looking to replace the 36 aging 4TB
and or appreciated here.
MJ
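For reference, the wear figure can also be read directly from SMART -- a
sketch; the device name is an example and the reported attribute name differs
per vendor and interface:
smartctl -A /dev/sda | grep -Ei 'wear|percentage used'   # e.g. Wear_Leveling_Count on SATA, "Percentage Used" on NVMe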
On 21-01-2021 at 11:57, George Shuklin wrote:
I have a hell of a question: how can I put a cluster into HEALTH_ERR status
without consequences?
I'm working on CI tests and I need to check whether our reaction to
HEALTH_ERR is good. For this I need to take an empty cluster with an
empty pool and d
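One way to do this on a throwaway test cluster is to temporarily lower the
full ratio so the (nearly empty) OSDs are flagged as full, which raises
OSD_FULL and therefore HEALTH_ERR -- a sketch; the ratio may need tuning
depending on how much baseline metadata the OSDs carry, and the default
should be restored afterwards:
ceph osd set-full-ratio 0.001   # cluster should now report HEALTH_ERR (OSD_FULL)
ceph health detail              # verify which check fired
ceph osd set-full-ratio 0.95    # restore the default once the CI check has run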
Hi Janne and Dan,
Thanks for sharing your insights!
MJ
On 15-10-2021 at 16:07, Janne Johansson wrote:
On Fri 15 Oct 2021 at 13:38, mj wrote:
On 15-10-2021 at 11:03, Dan van der Ster wrote:
Bluestore will be rocksdb (100% of the time IIUC).
FileStore may be leveldb -- check `ceph daemon
"rocksdb_write_pre_and_post_time": {
Does it make sense?
MJ
though at this moment, we can't find much info on converting from
leveldb to rocksdb on the PVE wiki)
MJ
On 15-10-2021 at 09:30, Dan van der Ster wrote:
For a mon:
# cat /var/lib/ceph/mon/ceph-xxx/kv_backend
rocksdb
For an OSD, look for leveldb/rocksdb messages in the log.
-- dan
On Fr
This is a very basic question, but how do you quickly check whether you
still run leveldb instead of rocksdb?
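A quick check for both daemon types -- a sketch; the mon directory and the
OSD id are examples, and the commands must run on the host carrying the daemon:
cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend     # mon: prints leveldb or rocksdb
ceph daemon osd.0 config get filestore_omap_backend      # FileStore OSD: configured omap backend
ceph daemon osd.0 perf dump | grep rocksdb_write_pre_and_post_time   # present on rocksdb-backed daemons
Note that the config value only shows what is configured; an older FileStore
OSD may still carry a leveldb omap store on disk, hence the suggestion above
to check the OSD logs.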
On 14-10-2021 at 16:55, Konstantin Shalygin wrote:
+1, we converted all leveldb monstores to rocksdb on Luminous
k
Sent from my iPhone
On 14 Oct 2021, at 10:42, Dan van der Ster wrote:
+1 f
Hi,
On 6/4/21 12:57 PM, mhnx wrote:
I wonder, when an OSD comes back after a power loss, all the data
scrubbing and there are 2 other copies.
PLP is important mostly for block storage; Ceph should easily recover
from that situation.
That's why I don't understand why I should pay more for PLP and o
99
Samsung 860 pro (512GB) = 5 Years or 600 TBW - $99
But do these not lack power-loss protection..?
We are running the Samsung PM883, as I was told that these would do much
better as OSDs.
MJ
plete thread on the subject, including many more recommendations
is here:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/5DH57H4VO2772LTXDVD4APMPK3DRZDKD/#5DH57H4VO2772LTXDVD4APMPK3DRZDKD
Best,
MJ
On 5/19/21 2:10 PM, Max Vernimmen wrote:
Hermann,
I think there was a discussi
can
confirm or deny..? (other than: "You need to buy it, and then try it")
Thanks!
MJ
On 21/02/2021 11:37, Martin Verges wrote:
Hello MJ,
Arista has a good documentation available for example at
https://www.arista.com/en/um-eos/eos-multi-chassis-link-aggregation or
https://eos.arist
accepted to ask arista-specific (MLAG) config
questions here on this list...
Have a nice weekend all!
MJ
On 2/15/21 1:41 PM, Sebastian Trojanowski wrote:
3 years ago I bought it on eBay for my home lab for $750 including transport,
duty and additional tax, so it's possible
https://www.ebay.com/
On 2/15/21 1:38 PM, Eneko Lacunza wrote:
Do you really need MLAG (the 2x10G bandwidth)? If not, just use 2
simple switches (Mikrotik for example) and in Proxmox use an
active-passive bond, with the default interface on all nodes pointing to the same switch.
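For reference, such an active-passive bond in /etc/network/interfaces would
look roughly like this -- a sketch; interface names, addresses and the choice
of primary are examples:
auto bond0
iface bond0 inet static
    address 192.168.0.5
    netmask 255.255.255.0
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-primary eno1
    bond-miimon 100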
Since we are now on SSD OSDs only, and our aim
from UTP to SFP+, I guess?
Will check it out.
Thanks again!
MJ
On 2/15/21 12:50 PM, Stefan Kooman wrote:
On 2/15/21 12:16 PM, mj wrote:
As we would like to be able to add more storage hosts, we need to
lose the meshed network setup.
My idea is to add two stacked 10G ethernet switches t
re, and
also performance-wise we're happy with what we currently have.
Last December I wrote to Mikrotik support, asking whether they would support
stacking / LACP any time soon, and their answer was: probably the 2nd half
of 2021.
So, anyone here with interesting insights to share for ceph 10G etherne
tiple copies of the same data, to try and use
the nearest copy?
Curious :-)
MJ
Is there also something we need to change accordingly in ceph.conf?
We simply added to rc.local:
echo cfq > /sys/block/sda/queue/scheduler
echo cfq > /sys/block/sdf/queue/scheduler
Anything else to do, besides changing cfq to noop in the above..?
Thanks for the tip!
MJ
On 2/2
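For reference, the same rc.local lines with noop, plus a udev rule as a more
persistent alternative -- a sketch; device names, the rule path and the match
pattern are examples, and on newer blk-mq kernels the equivalent scheduler is
called 'none':
echo noop > /sys/block/sda/queue/scheduler
echo noop > /sys/block/sdf/queue/scheduler
# /etc/udev/rules.d/60-ssd-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"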
Hi both,
Thanks for both quick replies, and both (of course) 100% spot-on!
With 4k, IOPS is around 18530 :-)
Thank you both, and apologies for the noise!
Best,
MJ
On 19/01/2021 14:57, Marc Roos wrote:
You should test with 4k not 4M.
-Original Message-
From: mj
Sent: 19 January
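For reference, a 4k test of the kind discussed here might look like this with
fio -- a sketch; the device name is an example and running it against a raw
device destroys its data:
fio --name=randwrite-4k --filename=/dev/sdX --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting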
, read cache: enabled,
supports DPO and FUA
[4145961.915996] sd 0:0:16:0: [sdd] Attached SCSI disk
Anyone with an idea what we could be doing wrong? Or are these disks
really unsuitable for OSD use?
MJ
already set the osd_op_queue_cutoff and recovery/backfill settings
to 1.
Thank you both for your answers! We'll continue with the gradual weight
decreases. :-)
MJ
On 1/9/21 12:28 PM, Frank Schilder wrote:
One reason for such observations is swap usage. If you have swap configured,
you s
adual step-by-step decrease?
I would assume the impact to be similar; only the time it takes to reach
HEALTH_OK would be longer.
Thanks,
MJ
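For reference, the gradual variant is simply a series of crush reweights with
a pause until HEALTH_OK in between -- a sketch; the OSD id and step size are
examples:
ceph osd crush reweight osd.21 2.5
# wait for backfill to finish and HEALTH_OK, then repeat with a lower weight
ceph osd crush reweight osd.21 2.0
# ...and so on, down to 0 before removing the OSD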
ally not be any degraded data redundancy, right..?
MJ
r not...) they perform.
We just wanted to ask here: anyone with suggestions on alternative SSDs
we should consider? Or other tips we should take into consideration..?
Thanks,
MJ
    address 192.168.0.5
    netmask 255.255.255.0
and add our cluster IP as a second IP, like:
auto bond0:1
iface bond0:1 inet static
    address 192.168.10.160
    netmask 255.255.255.0
On all nodes, reboot, and everything will work?
Or are there ceph spec
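On the Ceph side, a dedicated replication network is also declared in
ceph.conf -- a sketch; the subnets are examples matching the addresses above,
and the OSDs need a restart before they start using the cluster network:
[global]
    public network  = 192.168.0.0/24
    cluster network = 192.168.10.0/24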
as a VM on our proxmox/ceph cluster.
I see the advantage of having vfs_ceph_snapshots of the samba user-data.
But then again: re-sharing data using samba vfs_ceph adds a layer of
complexity to the setup.
Anyone here running samba with vfs_ceph? Experiences?
MJ
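For reference, a vfs_ceph share definition in smb.conf looks roughly like this
-- a sketch; the share name, path and cephx user are examples:
[userdata]
    path = /userdata
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    kernel share modes = no
    read only = no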
lable) works very well.
What can be done to make Proxmox display the WEAR level?
Best,
MJ
On 06/03/2020 10:53, Marc Roos wrote:
If you are asking, maybe run this?
[global]
ioengine=libaio
invalidate=1
ramp_time=30
iodepth=1
runtime=180
time_based
direct=1
filename=/dev/sdf
[write-4k-seq]
s
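Such a job file can be saved and run with plain `fio <file>`. For qualifying
SSDs for Ceph journal/DB use, a single-threaded O_DSYNC write test is the
variant often quoted on this list -- a sketch; the device name is an example
and the run destroys data on it:
fio --name=sync-write-4k --filename=/dev/sdf --direct=1 --sync=1 \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based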
B/sec) for
long-lasting *continuous* writing? (after filling up a write buffer or such)
But given time to empty that buffer again, it should again write at the
normal, higher speed?
So in applications with enough variation between reading and writing,
they could still perform well enough?
645579
Max latency(s): 0.336118
Min latency(s): 0.0117049
Do let me know what else you'd want me to do.
MJ
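That summary looks like the tail of a `rados bench` run; for reference, a
write plus read pass would be something like this -- a sketch; the pool name
and duration are examples:
rados bench -p testpool 60 write -t 16 --no-cleanup
rados bench -p testpool 60 seq -t 16
rados -p testpool cleanup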
I have just ordered two of them to try. (the 3.47GB ION's)
If you want, next week I could perhaps run some commands on them..?
MJ
On 3/5/20 9:38 PM, Hermann Himmelbauer wrote:
Hi,
Does someone know whether the following disk has decent performance in
a ceph cluster:
Micron 5210 ION 1
osd.21 up 1.0 1.0
22 hdd 3.64000 osd.22 up 1.0 1.0
23 hdd 3.63689 osd.23 up 1.0 1.0
MJ
On 2/12/20 11:23 AM, mj wrote:
Better layout for the disk usage stats:
https://pastebin.com/8V5VDXNt
des.
How can the reported disk stats for node2 be SO different from the other
two nodes, while everything else seems to be running as it should?
Or are we missing something?
Thanks!
MJ
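For reference, comparing what Ceph reports per OSD and per host with what the
OS reports usually narrows this down -- a sketch; the mount paths are examples
and apply to FileStore-style OSD mounts:
ceph osd df tree                  # per-OSD and per-host utilisation as Ceph sees it
ceph df                           # pool-level view
df -h /var/lib/ceph/osd/ceph-*    # what the OS reports for the same OSDs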
More details, different capacities etc:
https://www.seagate.com/nl/nl/support/internal-hard-drives/enterprise-hard-drives/exos-X/
MJ
On 1/16/20 9:51 AM, Konstantin Shalygin wrote:
On 1/15/20 11:58 PM, Paul Emmerich wrote:
we ran some benchmarks with a few samples of Seagate's new HDDs
Hi,
Interesting technology!
It seems they have only one capacity: 14TB? Or are they planning
different sizes as well? Also the linked PDF mentions just this one disk.
And obviously the price would be interesting to know...
MJ
On 1/16/20 9:51 AM, Konstantin Shalygin wrote:
On 1/15/20 11:58