Hi Javier,
I can't reproduce this locally, so the server and its BIOS could be a factor.
gather facts grabs data from sysfs (/sys/class/dmi/id), so we could start there.
Can you try issuing a cat against the following entries in the above path? (A quick loop follows the list.)
sys_vendor
product_family
product_name
bios_version
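For example (a minimal sketch; not every entry exists on every platform):

for f in sys_vendor product_family product_name bios_version; do
    echo "== $f"; cat "/sys/class/dmi/id/$f"
done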
https://tracker.ceph.com/issues/53062
Can someone help me understand the scope of the OMAP key bug linked above? I’ve
been using 16.2.6 for three months and I don’t _think_ I’ve seen any related
problems.
I upgraded my Nautilus (then 14.2.21) clusters to Pacific (16.2.4) in mid-June.
One of my
There are certain concerns if the drives within a cluster/pool are very
different in size, but if they’re the same nominal size class, that isn’t an
issue.
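A quick way to sanity-check this on a live cluster: CRUSH weight is derived from raw capacity, so a mixed-size pool simply places proportionally more data on the bigger drives, and

ceph osd df tree

shows each OSD's CRUSH weight next to its size, utilization and PG count.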
>
> Of course you can, it's to be expected
>
>
> k
>
>> On 18 Jan 2022, at 13:55, Flavio Piccioni wrote:
>>
>> we have a 30-node / 180-OSD cluster and we need to add new hosts/OSDs.
> Run status group 1 (all jobs):
>    READ: bw=900KiB/s (921kB/s), 900KiB/s-900KiB/s (921kB/s-921kB/s),
> io=159MiB (167MB), run=180905-180905msec
>
So it is not 200 MB/s but 0.9 MB/s.
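(For scale: 921 kB/s at bs=4k is 921600 B/s / 4096 B = 225 IOPS, so this job is bound by per-IO latency, not by bandwidth.)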
Ceph (obviously) does not, and never will, come near native disk speeds.
https://yourcmc.ru/wiki/Ceph_performance
Hi Frank,
Thanks for your feedback.
What version of Ceph are you running?
Have you tried using the module since Mimic? The telemetry module shouldn't
have any impact on performance: the report is generated only once a day by
default and consumes very little in the way of system resources.
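You can verify both on a running cluster (a sketch; option names as in recent releases):

ceph telemetry status                        # module state and what would be sent
ceph config get mgr mgr/telemetry/interval   # reporting interval in hours (default 24)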
Can you ple
Hello,
My 4 PGs are now active!
David Casier from aevoo.fr has succeeded in mounting the OSD I had
recently purged.
The problem seems to be a bug in the number of retries in the Ceph CRUSH map.
In fact, I thought my PGs were replicated across 3 rooms, but that was not
the case.
I run Ceph 15.2.15.
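For anyone who wants to check this on their own cluster, the retry limit is a CRUSH tunable, and the map can be inspected with the stock tooling (a sketch):

ceph osd getcrushmap -o crushmap.bin        # export the binary CRUSH map
crushtool -d crushmap.bin -o crushmap.txt   # decompile to plain text
grep choose_total_tries crushmap.txt        # the retry tunable (default 50)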
Hello Marc,
here is the profile and the output:
[global]
ioengine=libaio   # asynchronous IO via Linux libaio
invalidate=1      # invalidate the page cache before the run
ramp_time=30      # 30s warm-up before measurements start
iodepth=1         # default queue depth; overridden per job below
runtime=180
time_based
direct=1          # O_DIRECT: bypass the page cache
filename=/dev/sdd
[randwrite-4k-d32-rand]
stonewall
bs=4k
rw=randwrite
iodepth=32
[randread-4k-d32-rand]
stonewall
bs=4k
rw=randread
iodepth=32
[write-4096
Of course you can, it's to be expected
k
> On 18 Jan 2022, at 13:55, Flavio Piccioni wrote:
>
> we have a 30-node / 180-OSD cluster and we need to add new hosts/OSDs.
> Is it possible to add new OSDs to a cluster using slightly different
> (updated) hardware?
> CPUs will increase frequencies by about 300 MHz, and maybe the HDDs will also increase IOPS a bit.
Hi all,
we have a 30-node / 180-OSD cluster and we need to add new hosts/OSDs.
Is it possible to add new OSDs to a cluster using slightly different
(updated) hardware?
CPUs will increase frequencies by about 300 MHz, and maybe the HDDs will
also increase IOPS a bit.
All new resources will increase a
Hi all,
I am pretty sure that this is a kernel issue related to CentOS Stream and
probably the Dell PowerEdge C6420, but I want to let you know about it, just
in case someone upgrades CentOS Stream to the latest kernel
4.18.0-358.el8.x86_64 and finds the same problem.
Yesterday I was i
Hi,
just noticed several dead links to the original Ceph CRUSH paper by Sage Weil.
Dead link no. 1:
https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf returns a
404 Not Found.
Linked from:
https://ceph.com/en/discover/technology/
Link name on that site is: "RADOS: A Scalable, Reliable Storage Service for
Petabyte-scale Storage Clusters".
Hi Frank,
If you have one active MDS, the stray dir objects in the meta pool are named:
600.00000000
601.00000000
...
609.00000000
So you can e.g. `rados listomapvals -p con-fs2-meta1 600.00000000` to
get an idea about the stray files.
Each of those stray dirs holds up to mds_bal_fragment_size_max entries.
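If you only need counts, the omap keys can be tallied directly (a sketch, reusing the pool name above; each omap key corresponds to one stray entry):

for o in 60{0..9}.00000000; do
    echo -n "$o: "
    rados -p con-fs2-meta1 listomapkeys "$o" | wc -l
done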