On Wed, Dec 06, 2023 at 10:37:33AM -0600, Madhavan T. Venkataraman wrote:
>
>
> On 11/30/23 05:33, Peter Zijlstra wrote:
> > On Wed, Nov 29, 2023 at 03:07:15PM -0600, Madhavan T. Venkataraman wrote:
> >
> >> Kernel Lockdown
> >> ---
>
On Wed, Nov 29, 2023 at 03:07:15PM -0600, Madhavan T. Venkataraman wrote:
> Kernel Lockdown
> ---
>
> But, we must provide at least some security in V2. Otherwise, it is useless.
>
> So, we have implemented what we call a kernel lockdown. At the end of kernel
> boot, Heki establishes
On Mon, Nov 27, 2023 at 10:48:29AM -0600, Madhavan T. Venkataraman wrote:
> Apologies for the late reply. I was on vacation. Please see my response below:
>
> On 11/13/23 02:19, Peter Zijlstra wrote:
> > On Sun, Nov 12, 2023 at 09:23:24PM -0500, Mickaël Salaün wrote:
> >
On Mon, Nov 27, 2023 at 11:05:23AM -0600, Madhavan T. Venkataraman wrote:
> Apologies for the late reply. I was on vacation. Please see my response below:
>
> On 11/13/23 02:54, Peter Zijlstra wrote:
> > On Sun, Nov 12, 2023 at 09:23:25PM -0500, Mickaël Salaün wrote:
> >
On Sun, Nov 12, 2023 at 09:23:25PM -0500, Mickaël Salaün wrote:
> From: Madhavan T. Venkataraman
>
> Implement a hypervisor function, kvm_protect_memory() that calls the
> KVM_HC_PROTECT_MEMORY hypercall to request the KVM hypervisor to
> set specified permissions on a list of guest pages.
>
> U
On Sun, Nov 12, 2023 at 09:23:24PM -0500, Mickaël Salaün wrote:
> From: Madhavan T. Venkataraman
>
> X86 uses a function called __text_poke() to modify executable code. This
> patching function is used by many features such as KProbes and FTrace.
>
> Update the permissions counters for the text
On Wed, May 24, 2023 at 02:39:50PM -0700, Sean Christopherson wrote:
> On Wed, May 24, 2023, Peter Zijlstra wrote:
> > On Wed, May 24, 2023 at 01:16:03PM -0700, Sean Christopherson wrote:
> > > Of course, the only accesses outside of mmu_lock are reads, so on x86 that
> > >
On Wed, May 24, 2023 at 01:16:03PM -0700, Sean Christopherson wrote:
> Atomics aren't memory barriers on all architectures, e.g. see the various
> definitions of smp_mb__after_atomic().
>
> Even if atomic operations did provide barriers, using an atomic would be
> overkill and a net negative.
On Wed, May 24, 2023 at 11:42:15AM +0530, Kautuk Consul wrote:
> My comment was based on the assumption that "all atomic operations are
> implicit memory barriers". If that assumption is true then we won't need
It is not -- also see Documentation/atomic_t.txt.
Specifically atomic_read() doesn't
nces between MFENCE and LOCK prefix, but as
already noted above, those should not have been using smp_mb() in the
first place and should be converted to mb()
So:
Acked-by: Peter Zijlstra (Intel)
> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
> index bfb28ca
On Sun, Dec 20, 2015 at 05:07:19PM +, Andrew Cooper wrote:
>
> Very much +1 for fixing this.
>
> Those names would be fine, but they do add yet another set of options in
> an already-complicated area.
>
> An alternative might be to have the regular smp_{w,r,}mb() not revert
> back to nops if
On Thu, Dec 17, 2015 at 04:34:57PM +0200, Michael S. Tsirkin wrote:
> On Thu, Dec 17, 2015 at 03:02:12PM +0100, Peter Zijlstra wrote:
> > > commit 9e1a27ea42691429e31f158cce6fc61bc79bb2e9
> > > Author: Alexander Duyck
> > > Date:
On Thu, Dec 17, 2015 at 04:33:44PM +0200, Michael S. Tsirkin wrote:
> On Thu, Dec 17, 2015 at 02:57:26PM +0100, Peter Zijlstra wrote:
> >
> > You could of course go fix that instead of mutilating things into
> > sort-of functional state.
>
> Yes, we'd just need
On Thu, Dec 17, 2015 at 03:26:29PM +0200, Michael S. Tsirkin wrote:
> > Note that virtio_mb() is weirdly inconsistent with virtio_[rw]mb() in
> > that they use dma_* ops for weak_barriers, while virtio_mb() uses
> > smp_mb().
>
> It's a hack really. I think I'll clean it up a bit to
> make it more
On Thu, Dec 17, 2015 at 03:16:20PM +0200, Michael S. Tsirkin wrote:
> On Thu, Dec 17, 2015 at 11:52:38AM +0100, Peter Zijlstra wrote:
> > On Thu, Dec 17, 2015 at 12:32:53PM +0200, Michael S. Tsirkin wrote:
> > > +static inline void virtio_store_mb(
On Thu, Dec 17, 2015 at 12:32:53PM +0200, Michael S. Tsirkin wrote:
> Seems to give a speedup on my box but I'm less sure about this one. E.g. is
> xchg faster than mfence on all/most Intel CPUs? Does anyone have an opinion?
Would help if you Cc people who would actually know this :-)
Yes, we've rece
On Thu, Dec 17, 2015 at 12:29:03PM +0200, Michael S. Tsirkin wrote:
> +static inline __virtio16 virtio_load_acquire(bool weak_barriers, __virtio16 *p)
> +{
> +	if (!weak_barriers) {
> +		rmb();
> +		return READ_ONCE(*p);
> +	}
> +#ifdef CONFIG_SMP
> +	return smp
On Thu, Dec 17, 2015 at 12:32:53PM +0200, Michael S. Tsirkin wrote:
> +static inline void virtio_store_mb(bool weak_barriers,
> +				   __virtio16 *p, __virtio16 v)
> +{
> +#ifdef CONFIG_SMP
> +	if (weak_barriers)
> +		smp_store_mb(*p, v);
> +	else
> +#endif
On Thu, Oct 22, 2015 at 04:18:31PM +0200, Andrea Arcangeli wrote:
> The risk of memory corruption is still zero no matter what happens
> here, in the extremely rare case the app will get a SIGBUS or a
That might still upset people; SIGBUS isn't something an app can really
recover from.
> I'm not
On Thu, Oct 22, 2015 at 03:20:15PM +0200, Andrea Arcangeli wrote:
> If schedule spontaneously wakes up a task in TASK_KILLABLE state that
> would be a bug in the scheduler in my view. Luckily there doesn't seem
> to be such a bug, or at least we never experienced it.
Well, there will be a wakeup,
On Thu, May 14, 2015 at 07:31:11PM +0200, Andrea Arcangeli wrote:
> @@ -255,21 +259,23 @@ int handle_userfault(struct vm_area_struct *vma, unsigned long address,
>  	 * through poll/read().
>  	 */
>  	__add_wait_queue(&ctx->fault_wqh, &uwq.wq);
> -	for (;;) {
> -		set
On Wed, 2012-05-23 at 08:23 -0700, Dave Hansen wrote:
> On 05/23/2012 01:48 AM, Peter Zijlstra wrote:
> > On Wed, 2012-05-23 at 16:34 +0800, Liu ping fan wrote:
> >> > so we need to migrate some of vcpus from node-B to node-A, or to
> >> > node-C.
> > This i
On Wed, 2012-05-23 at 17:58 +0800, Liu ping fan wrote:
> > Please go do something else, I'll do this.
>
OK, so that was to say never, as in dynamic cpu:node relations aren't
going to happen. But tip/sched/numa contains the bits needed to make
vnuma work.
On Wed, 2012-05-23 at 16:10 +0800, Liu ping fan wrote:
> the movement of vcpu
> threads among host nodes will break the topology initialized by -numa
> option.
You want to remap vcpu to nodes? Are you bloody insane? cpu:node maps
are assumed static, you cannot make that a dynamic map and pray thi
On Wed, 2012-05-23 at 16:34 +0800, Liu ping fan wrote:
> so we need to migrate some of vcpus from node-B to node-A, or to
> node-C.
This is absolutely broken, you cannot do that.
A guest task might want to be node affine; it looks at the topology, sets
a cpu affinity mask and expects to stay on th
On Wed, 2012-05-23 at 14:32 +0800, Liu Ping Fan wrote:
> From: Liu Ping Fan
>
> The guest's scheduler cannot see the NUMA info on the host, and
> this will result in the following scenario:
> Supposing vcpu-a is on nodeA and vcpu-b on nodeB, when load balancing,
> the tasks' pull and push between these vc
On Sat, 2012-02-04 at 11:08 +0900, Takuya Yoshikawa wrote:
> The latter needs a fundamental change: I heard (from Avi) that we can
> change mmu_lock to mutex_lock if mmu_notifier becomes preemptible.
>
> So I was planning to restart this work when Peter's
> "mm: Preemptibility"
>
On Thu, 2011-12-22 at 09:01 -0200, Marcelo Tosatti wrote:
>
> > No virt is crap, it needs to die, its horrid, and any solution aimed
> > squarely at virt only is shit and not worth considering, that simple.
>
> Removing this phrase from context (feel free to object on that basis
> to the followin
On Wed, 2011-11-23 at 16:03 +0100, Andrea Arcangeli wrote:
> Hi!
>
> On Mon, Nov 21, 2011 at 07:51:21PM -0600, Anthony Liguori wrote:
> > Fundamentally, the entity that should be deciding what memory should be
> > present and where it should be located is the kernel. I'm fundamentally opposed
On Wed, 2011-11-30 at 21:52 +0530, Dipankar Sarma wrote:
>
> Also, if at all topology changes due to migration or host kernel decisions,
> we can make use of something like VPHN (virtual processor home node)
> capability on Power systems to have guest kernel update its topology
> knowledge. You ca
On Mon, 2011-11-21 at 20:03 +0200, Avi Kivity wrote:
>
> Does ms_mbind() require that its vmas in its area be completely
> contained in the region, or does it split vmas on demand? I suggest the
> latter to avoid exposing implementation details.
as implemented (which is still rather incomplete)
On Mon, 2011-11-21 at 21:30 +0530, Bharata B Rao wrote:
>
> In the original post of this mail thread, I proposed a way to export
> guest RAM ranges (Guest Physical Address-GPA) and their corresponding
> host virtual mappings (Host Virtual Address-HVA) from QEMU (via QEMU monitor).
> The idea
On Mon, 2011-11-21 at 20:48 +0530, Bharata B Rao wrote:
> I looked at Peter's recent work in this area.
> (https://lkml.org/lkml/2011/11/17/204)
>
> It introduces two interfaces:
>
> 1. ms_tbind() to bind a thread to a memsched(*) group
> 2. ms_mbind() to bind a memory region to memsched group
>
On Wed, 2011-11-09 at 10:33 -0200, Arnaldo Carvalho de Melo wrote:
>
> Ingo, would that G+ page be useful for that?
>
*groan*
Can we please keep things sane?
On Tue, 2011-11-08 at 13:59 +0100, Ingo Molnar wrote:
>
> > Also the self monitor stuff, perf-tool doesn't use that for obvious
> > reasons.
>
> Indeed, and that's PAPI's strong point.
>
> We could try to utilize it via some clever LD_PRELOAD trickery?
Wouldn't be really meaningful, a perf-tes
On Tue, 2011-11-08 at 13:15 +0100, Ingo Molnar wrote:
>
> The one notable thing that isnt being tested in a natural way is the
> 'group of events' abstraction - which, ironically, has been added on
> the perfmon guys' insistence. No app beyond the PAPI self-test makes
> actual use of it though,
On Tue, 2011-11-08 at 11:22 +0100, Ingo Molnar wrote:
>
> We do even more than that, the perf ABI is fully backwards *and*
> forwards compatible: you can run older perf on newer ABIs and newer
> perf on older ABIs.
The ABI yes, the tool no, the tool very much relies on some newer ABI
parts. Su
On Wed, 2010-12-01 at 14:42 -0500, Rik van Riel wrote:
> On 12/01/2010 02:35 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
>
> >> Even if we equalized the amount of CPU time each VCPU
> >> ends up getting across some time inter
On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
> On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
> >> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
>
> >> The pause loop exiting& directed yield pat
On Wed, 2010-12-01 at 23:30 +0530, Srivatsa Vaddagiri wrote:
> On Wed, Dec 01, 2010 at 06:45:02PM +0100, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
> > >
> > > yield_task_fair(...)
> > > {
> > >
> >
On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
> >> Directed yield and fairness don't mix well either. You can end up
> >> feeding the other tasks mor
On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
>
> yield_task_fair(...)
> {
>
> + ideal_runtime = sched_slice(cfs_rq, curr);
> + delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
> + rem_time_slice = ideal_runtime - delta_exec;
> +
> + curren
On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
> Directed yield and fairness don't mix well either. You can end up
> feeding the other tasks more time than you'll ever get back.
If the directed yield is always to another task in your cgroup then
inter-guest scheduling fairness should be ma
On Wed, 2010-12-01 at 21:42 +0530, Srivatsa Vaddagiri wrote:
> Not if yield() remembers what timeslice was given up and adds that back when
> thread is finally ready to run. Figure below illustrates this idea:
>
>
> A0/4 C0/4 D0/4 A0/4 C0/4 D0/4 A0/4 C0/4 D0/4 A0/4
> p0 ||-L|--
44 matches