We ran some quick, simple tests using an unpatched CentOS VM on patched and unpatched hypervisors.

The CPU-bound test (HPL) showed a 2% hit.
The I/O-bound test (fio) showed a 30% hit.
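
A small-block fio job along these lines shows the syscall cost clearly (the parameters here are illustrative, not the exact job file we ran):

     # fio --name=kpti-test --filename=/tmp/fio.dat --size=1G \
           --rw=randread --bs=4k --ioengine=psync --direct=1 \
           --runtime=60 --time_based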

This is before patching the VM, which I expect will have *some* additive effect (we'll rerun the same tests).

It's also before patching the Ceph storage nodes (again, we'll rerun the same tests). I had the same thought about selectively disabling some of the KPTI mitigations via the runtime debugfs switches on OSD nodes, but it will be interesting to see the effect first.

Graham

On 01/05/2018 07:24 AM, Xavier Trilla wrote:
OK, that's good news; being able to disable the patches at runtime is going 
to really help with the performance testing.

ATM we won't patch our OSD machines (we've had several issues in the past with 
XFS and some kernels on machines with plenty of OSDs), so I won't have 
information about how it affects OSD performance. But we will roll out some 
upgrades to our hypervisors over the next few days, and I'll run some tests to 
see if librbd performance is affected.
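
Something like this should expose the per-op latency through librbd (assuming a fio build with rbd support; the pool, image and client names below are placeholders):

     # fio --name=rbd-lat --ioengine=rbd --clientname=admin --pool=rbd \
           --rbdname=testimg --rw=randwrite --bs=4k --iodepth=1 \
           --runtime=60 --time_based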

I'm quite worried about latency. We run a pure SSD cluster, and we've invested 
a lot of time and effort to get latency under 1ms. Losing 30% because of 
this would be really bad news.

I'll post our test results as soon as I have them, but if anybody else has done 
some testing and can provide some information as well, I think it would be 
really useful.

Thanks!
Xavier

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Stijn De 
Weirdt
Sent: Friday, 5 January 2018 13:00
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Linux Meltdown (KPTI) fix and how it affects 
performance?

or do it live: https://access.redhat.com/articles/3311301

     # echo 0 > /sys/kernel/debug/x86/pti_enabled   # KPTI (Meltdown)
     # echo 0 > /sys/kernel/debug/x86/ibpb_enabled  # IBPB (Spectre v2)
     # echo 0 > /sys/kernel/debug/x86/ibrs_enabled  # IBRS (Spectre v2)
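
the same files read back the current state, e.g.

     # cat /sys/kernel/debug/x86/pti_enabled
     1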

stijn

On 01/05/2018 12:54 PM, David wrote:
Hi!

nopti or pti=off in the kernel boot options should disable KPTI.
I haven't tried it yet though, so give it a whirl.
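
On CentOS/RHEL that would look roughly like this (a sketch only; grub paths and commands vary per distro):

     # grep GRUB_CMDLINE_LINUX /etc/default/grub
     GRUB_CMDLINE_LINUX="... nopti"
     # grub2-mkconfig -o /boot/grub2/grub.cfg
     # reboot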

https://en.wikipedia.org/wiki/Kernel_page-table_isolation

Kind Regards,

David Majchrzak


On 5 Jan 2018, at 11:03, Xavier Trilla <xavier.tri...@silicontower.net> wrote:

Hi Nick,

I'm actually wondering about exactly the same thing. Regarding OSDs, I agree: there 
is no reason to apply the security patch to the machines running the OSDs (if 
they are properly isolated in your setup).

But I'm worried about the hypervisors, as I don't know how the Meltdown or Spectre 
patches will affect librbd performance on the hypervisors (AFAIK, only the 
Spectre patch needs to be applied to the host hypervisor; the Meltdown patch 
only needs to be applied to the guest).

Does anybody have any information about how Meltdown or Spectre affect Ceph 
OSDs and clients?

Also, the Meltdown patch seems to be a compile-time option, meaning you 
could easily build a kernel without it.
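
In recent upstream kernels the option is CONFIG_PAGE_TABLE_ISOLATION. Roughly, to check the running kernel and rebuild without it (a sketch, I haven't tried this myself):

     # grep PAGE_TABLE_ISOLATION /boot/config-$(uname -r)
     CONFIG_PAGE_TABLE_ISOLATION=y

and then, in the kernel source tree:

     # scripts/config --disable PAGE_TABLE_ISOLATION
     # make olddefconfig && make -j$(nproc)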

Thanks,
Xavier.

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf
of Nick Fisk
Sent: Thursday, 4 January 2018 17:30
To: 'ceph-users' <ceph-users@lists.ceph.com>
Subject: [ceph-users] Linux Meltdown (KPTI) fix and how it affects performance?

Hi All,

As the KPTI fix largely only affects performance where a large number of 
syscalls are made, which Ceph does a lot of, I was wondering if anybody has had 
a chance to perform any initial tests. I suspect small-write latencies 
will be the worst affected?
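
A crude way to gauge the raw per-syscall overhead, toggling the mitigation between runs (this issues one read plus one write syscall per byte, so it is syscall-bound by construction):

     # time dd if=/dev/zero of=/dev/null bs=1 count=1000000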

Although I'm thinking the backend Ceph OSDs shouldn't really be at risk from 
these vulnerabilities, since they aren't directly user-facing, perhaps they 
could have this workaround disabled?

Nick

--
Graham Allan
Minnesota Supercomputing Institute - g...@umn.edu
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
