Hi!

I understand that this question is not quite on-topic for this mailing list, but 
nonetheless, experts who may have encountered this have gathered here.

I have 24 servers, and on each of them, after six months of operation, the 
following began to happen:

[root@S-26-5-1-2 cph]# uname -a
Linux S-26-5-1-2 5.2.11-1.el7.elrepo.x86_64 #1 SMP Thu Aug 29 08:10:52 EDT 2019 
x86_64 x86_64 x86_64 GNU/Linux

[root@S-26-5-1-2 cph]# dd if=/dev/zero of=/dev/sdc bs=1M count=1000 oflag=sync
1048576000 bytes (1.0 GB) copied, 3.76334 s, 279 MB/s

[root@S-26-5-1-2 cph]# dd if=/dev/zero of=/dev/sdd bs=1M count=1000 oflag=sync
1048576000 bytes (1.0 GB) copied, 4.54834 s, 231 MB/s

sdc is an SSD, sdd is an HDD.

You can see that the SSD is somehow slow, and the HDD is suspiciously fast.

A reboot changes nothing.

Only a poweroff/poweron cycle brings the behavior back to normal:

[root@S-26-5-1-2 cph]# dd if=/dev/zero of=/dev/sdc bs=1M count=1000 oflag=sync
1048576000 bytes (1.0 GB) copied, 3.24042 s, 324 MB/s

[root@S-26-5-1-2 cph]# dd if=/dev/zero of=/dev/sdd bs=1M count=1000 oflag=sync
1048576000 bytes (1.0 GB) copied, 13.7709 s, 76.1 MB/s

There is absolutely nothing about this in the system or Ceph logs (these servers 
are used for OSDs).
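
For what it's worth, the next time this happens I intend to compare the drives' 
write-cache state before and after the power cycle (the HDD looking too fast 
under oflag=sync made me suspect caching). Something along these lines, just an 
idea for now, not yet captured on an affected server:

# check whether the drive's volatile write cache is enabled
hdparm -W /dev/sdc
hdparm -W /dev/sdd

# what cache policy the kernel believes the devices have
cat /sys/block/sdc/queue/write_cache
cat /sys/block/sdd/queue/write_cache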

Has anyone encountered similar behavior?

WBR,
    Fyodor.