4336ff:::rbd_data.e2d302dd699130.69b3:6aa5 in 6aa5=[6aa5]:{}
And nothing happens; it's still in a failed_repair state.
Fri, 25 Jun 2021 at 00:36, Vladimir Prokofev :
> Hello.
>
> Today we've experienced a complete CEPH cluster outage - total loss of
> power in the who
Hello.
Today we've experienced a complete CEPH cluster outage - total loss of
power in the whole infrastructure.
6 osd nodes and 3 monitors went down at the same time. CEPH 14.2.10
This resulted in unfound objects, which were "reverted" in a hurry with
ceph pg mark_unfound_lost revert
In retrosp
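For reference, a minimal sketch (not from this thread; it assumes Python 3 and the ceph CLI in PATH) of how PGs that still report unfound objects can be located programmatically before resorting to 'ceph pg mark_unfound_lost revert'. The JSON layout of 'ceph pg dump' differs slightly between releases, hence the fallback on the top-level key:

# Sketch: list PGs that still report unfound objects via "ceph pg dump".
import json
import subprocess

def pgs_with_unfound():
    out = subprocess.check_output(["ceph", "pg", "dump", "-f", "json"])
    data = json.loads(out)
    # Newer releases nest the stats under "pg_map"; older ones keep
    # "pg_stats" at the top level, so fall back accordingly.
    pg_stats = data.get("pg_map", data).get("pg_stats", [])
    return [
        (pg["pgid"], pg["state"], pg["stat_sum"]["num_objects_unfound"])
        for pg in pg_stats
        if pg.get("stat_sum", {}).get("num_objects_unfound", 0) > 0
    ]

if __name__ == "__main__":
    for pgid, state, unfound in pgs_with_unfound():
        print(pgid, state, "unfound:", unfound)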
find the code here
> https://github.com/digitalocean/ceph_exporter
> useful. Note that there are multiple branches, which can be confusing.
>
> > On Jun 15, 2021, at 4:21 PM, Vladimir Prokofev wrote:
> >
> > Good day.
> >
> > I'm writing some code for parsin
Good day.
I'm writing some code for parsing output data for monitoring purposes.
The data is that of "ceph status -f json", "ceph df -f json", "ceph osd
perf -f json" and "ceph osd pool stats -f json".
I also need to support all major CEPH releases, from Jewel through
Pacific.
What I've st
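As an illustration of the kind of per-release differences such a parser has to cope with (a sketch, not the actual code being discussed): the overall health field of 'ceph status -f json' moved from health.overall_status in Jewel to health.status in later releases, so it helps to try both keys:

# Sketch: version-tolerant extraction of the overall health status from
# "ceph status -f json" output.
import json
import subprocess

def ceph_json(*args):
    # Run a ceph subcommand and return its parsed JSON output.
    out = subprocess.check_output(["ceph", *args, "-f", "json"])
    return json.loads(out)

def overall_health(status):
    health = status.get("health", {})
    # Luminous and later use "status"; Jewel used "overall_status".
    return health.get("status") or health.get("overall_status")

if __name__ == "__main__":
    print(overall_health(ceph_json("status")))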
There IS an 'rbd mv', and 'rbd cp' also makes an exact copy.
Or am I missing something in the question?
Wed, 17 Feb 2021 at 18:55, Marc :
>
> > -Original Message-
> > From: Marc
> > Sent: 17 February 2021 15:51
> > To: 'ceph-users@ceph.io'
> > Subject: [ceph-users] Re: rbd move betwe
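For reference, the librbd Python bindings expose roughly the same two operations as the 'rbd mv' and 'rbd cp' commands mentioned above. A minimal sketch, with made-up pool and image names:

# Sketch: RBD().rename() corresponds to 'rbd mv' (within a pool),
# Image.copy() corresponds to 'rbd cp'.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    src_ioctx = cluster.open_ioctx("rbd")      # source pool (example name)
    dst_ioctx = cluster.open_ioctx("backup")   # destination pool (example name)

    # 'rbd mv' equivalent: rename the image within its pool.
    rbd.RBD().rename(src_ioctx, "old-name", "new-name")

    # 'rbd cp' equivalent: full copy, here into another pool.
    img = rbd.Image(src_ioctx, "new-name")
    try:
        img.copy(dst_ioctx, "new-name-copy")
    finally:
        img.close()

    src_ioctx.close()
    dst_ioctx.close()
finally:
    cluster.shutdown()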
Hello.
I'm trying to write some Python code for analysing my RBD images' storage
usage; rbd and rados package versions are 14.2.16.
Basically I want the same data that I can acquire from the shell commands
'rbd du <image>' and 'rbd info <image>', but through the Python API.
At the moment I can connect to the cluster,
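A minimal sketch of one way to do it (assuming the 14.x rbd/rados bindings; the pool name "rbd" and image name "test-image" are placeholders): Image.stat() and Image.size() give the 'rbd info'-style metadata, and summing the allocated extents returned by Image.diff_iterate() approximates what 'rbd du' reports:

# Sketch: provisioned size and metadata via stat()/size(), approximate
# "used" size by summing allocated extents from diff_iterate().
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")
    image = rbd.Image(ioctx, "test-image")
    try:
        info = image.stat()          # size, obj_size, num_objs, order, ...
        provisioned = image.size()   # provisioned size in bytes

        used = [0]
        def count_extent(offset, length, exists):
            # exists is True for extents that are actually allocated
            if exists:
                used[0] += length

        # Diff against "nothing" (from_snapshot=None) over the whole image;
        # whole_object=True accounts whole objects, as 'rbd du' does.
        image.diff_iterate(0, provisioned, None, count_extent,
                           whole_object=True)

        print("provisioned:", provisioned, "bytes,",
              info["num_objs"], "objects of", info["obj_size"], "bytes")
        print("used (approx):", used[0], "bytes")
    finally:
        image.close()
    ioctx.close()
finally:
    cluster.shutdown()

If the fast-diff image feature is enabled the iteration is cheap; without it librbd has to scan the objects, so it can take a while on large images.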
Hi.
Just want to note that if you google for ceph python lib examples, it
leads to a 404:
https://www.google.ru/search?hl=ru&q=ceph+python+rbd
https://docs.ceph.com/en/latest/rbd/api/librbdpy/
Some 3rd-party sites and the Chinese version work fine though:
http://docs.ceph.org.cn/rbd/librbdpy/
https://
Just shooting in the dark here, but you may be affected by a similar issue I
had a while back; it was discussed here:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ZOPBOY6XQOYOV6CQMY27XM37OC6DKWZ7/
In short - they've changed the bluefs_buffered_io setting to false in the
recent Nautilus
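A small sketch of how to check and restore that option (it assumes a release with the 'ceph config' subcommands, which Nautilus has; depending on the exact release the OSDs may need a restart for the change to take effect):

# Sketch: read back and, if desired, re-enable bluefs_buffered_io for all
# OSDs via the cluster config database.
import subprocess

def current_value():
    out = subprocess.check_output(
        ["ceph", "config", "get", "osd", "bluefs_buffered_io"])
    return out.decode().strip()

def enable_buffered_io():
    subprocess.check_call(
        ["ceph", "config", "set", "osd", "bluefs_buffered_io", "true"])

if __name__ == "__main__":
    print("bluefs_buffered_io is currently:", current_value())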
Then what difference do seq. and rand. read make on the OSD disk?
> Is it rand. read on OSD disk for both cases?
> Then how to explain the performance difference between seq. and rand.
> read inside VM? (seq. read IOPS is 20x that of rand. read; Ceph is
> with 21 HDDs on 3 nodes, 7 on each)
>
>
Not exactly. You can also tune network/software.
Network - go for lower latency interfaces. If you have 10G go to 25G or
100G. 40G will not do though; afaik they're just 4x10G, so their latency is
the same as 10G.
Software - it's closely tied to your network card queues and processor
cores. In sh
In my case I only have 16GB RAM per node with 5 OSDs on each of them, so I
actually have to tune osd_memory_target=2147483648 because with the default
value of 4GB my osd processes tend to get killed by OOM.
That is what I was looking into before the correct solution. I
disabled osd_memory_target li
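For what it's worth, the arithmetic behind that 2147483648 value, as a sketch (the 4 GiB of OS headroom is just an assumption to illustrate the idea):

# Sketch: how much osd_memory_target fits on a 16 GB node with 5 OSDs.
GIB = 1024 ** 3

node_ram = 16 * GIB
osds_per_node = 5
os_headroom = 4 * GIB            # assumption: leave ~4 GiB for OS and page cache

per_osd = (node_ram - os_headroom) // osds_per_node
print(per_osd)                   # ~2.4 GiB, so 2 GiB (2147483648) is a safe round-down
# The value can then be applied with: ceph config set osd osd_memory_target 2147483648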
Manuel, thank you for your input.
This is actually huge, and the problem is exactly that.
On a side note I will add that I observed lower memory utilisation on OSD
nodes since the update, and high throughput on block.db devices (up to
100+MB/s) that was not there before, so logically that meant t
. Any ideas how safe that
procedure is? I suppose it should be safe since there was no change in the
actual data storage scheme?
Tue, 4 Aug 2020 at 14:33, Vladimir Prokofev :
> > What Kingston SSD model?
>
> === START OF INFORMATION SECTION ===
> Model Family: SandForce Driven
Local Time is: Tue Aug 4 14:31:36 2020 MSK
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Tue, 4 Aug 2020 at 14:17, Eneko Lacunza :
> Hi Vladimir,
>
> What Kingston SSD model?
>
> On 4/8/20 at 12:22, Vladimir Prokofev wrote:
> > Here'
journals too, Intel SSD journals are not that affected, though they too
experience increased load.
Nevertheless, there are now a lot of read IOPS on block.db devices after the
upgrade that were not there before.
I wonder how 600 IOPS can destroy SSD performance that hard.
Tue, 4 Aug 2020 at 12:54, Vl
Good day, cephers!
We've recently upgraded our cluster from 14.2.8 to the 14.2.10 release, also
performing a full system package upgrade (Ubuntu 18.04 LTS).
After that, performance dropped significantly, the main reason being that
journal SSDs now have no merges, huge queues, and increased latency.
Ther
recommended if you want to host up to 3 levels of
> rocksdb in the SSD.
>
> Thanks,
> Orlando
>
> -Original Message-
> From: Igor Fedotov
> Sent: Wednesday, February 5, 2020 7:04 AM
> To: Vladimir Prokofev ; ceph-users@ceph.io
> Subject: [ceph-users] Re: Fwd: Blue
Cluster upgraded from 12.2.12 to 14.2.5. All went smoothly, except for a BlueFS
spillover warning.
We create OSDs with ceph-deploy; the command goes like this:
ceph-deploy osd create --bluestore --data /dev/sdf --block-db /dev/sdb5
--block-wal /dev/sdb6 ceph-osd3
where block-db and block-wal are SSD partitions
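Not from this thread, but a small sketch of how the spillover can be spotted per OSD from the admin socket on the OSD host (the osd ids below are examples): a non-zero slow_used_bytes in the bluefs counters means RocksDB data has spilled onto the main device.

# Sketch: report BlueFS DB usage and spillover from the OSD admin sockets.
import json
import subprocess

def bluefs_counters(osd_id):
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%d" % osd_id, "perf", "dump"])
    return json.loads(out)["bluefs"]

for osd_id in (0, 1, 2):                 # example local OSD ids
    c = bluefs_counters(osd_id)
    spilled = c.get("slow_used_bytes", 0)
    print("osd.%d: db_used=%d slow_used=%d %s" % (
        osd_id, c["db_used_bytes"], spilled,
        "SPILLOVER" if spilled else "ok"))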