On 15/1/21 11:33 pm, Mark Rutland wrote:
> On Thu, Jan 14, 2021 at 04:07:55PM -0600, Madhavan T. Venkataraman wrote:
>> Hi all,
>>
>> My name is Madhavan Venkataraman.
>
> Hi Madhavan,
>
>> Microsoft is very interested in Live Patching support for ARM64.
>> On behalf of Microsoft, I would like to
On 16/3/21 10:41 am, Johannes Weiner wrote:
> Fix a sleep in atomic section problem: wb_writeback() takes a spinlock
> and calls wb_over_bg_thresh() -> mem_cgroup_wb_stats, but the regular
> rstat flushing function called from in there does lockbreaking and may
> sleep. Switch to the atomic variant
On 11/3/21 9:00 am, Hugh Dickins wrote:
> On Thu, 11 Mar 2021, Singh, Balbir wrote:
>> On 9/3/21 7:28 pm, Michal Hocko wrote:
>>> On Tue 09-03-21 09:37:29, Balbir Singh wrote:
>>>> On 4/3/21 6:40 pm, Zhou Guanghui wrote:
>>> [...]
On 9/3/21 7:28 pm, Michal Hocko wrote:
> On Tue 09-03-21 09:37:29, Balbir Singh wrote:
>> On 4/3/21 6:40 pm, Zhou Guanghui wrote:
> [...]
>>> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> /*
>>> - * Because page_memcg(head) is not set on compound tails, set it now.
>>> + * Because page_memcg(head) is no
On 4/3/21 6:40 pm, Zhou Guanghui wrote:
> Rename mem_cgroup_split_huge_fixup to split_page_memcg and explicitly
> pass in page number argument.
>
> In this way, the interface name is more common and can be used by
> potential users. In addition, the complete info (memcg and flag) of
> the memcg nee
On 26/2/21 12:21 am, Muchun Song wrote:
> Every HugeTLB has more than one struct page structure. We __know__ that
> we only use the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct page structures
> to store metadata associated with each HugeTLB.
>
> There are a lot of struct page structures associated wi
On 26/2/21 12:21 am, Muchun Song wrote:
> Hi all,
>
> This patch series will free some vmemmap pages (struct page structures)
> associated with each HugeTLB page when they are preallocated, to save memory.
>
> In order to reduce the difficulty of reviewing the first version of the
> code, from this version we disa
On 26/2/21 12:21 am, Muchun Song wrote:
> Move the common bootmem info registration API into a separate bootmem_info.c.
> We will use {get,put}_page_bootmem() to initialize the page for the
> vmemmap pages, or free the vmemmap pages to the buddy allocator, in a
> later patch. So move them out of CONFIG_MEMORY_HOTP
On Fri, 2021-01-08 at 23:10 +1100, Balbir Singh wrote:
> Implement a mechanism that allows tasks to conditionally flush
> their L1D cache (mitigation mechanism suggested in [2]). The previous
> posts of these patches were sent for inclusion (see [3]) and were not
> included due to the concern for t
On Sat, 2021-01-16 at 11:21 +0800, kernel test robot wrote:
>
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/pti
> head: 767d46ab566dd489733666efe48732d523c8c332
> commit: b6724f118d44606fddde391ba7527526b3cad211 [4/5] prctl: Hook L1D
> flushing in via prctl
> config:
On Fri, 2020-12-04 at 22:07 +0100, Thomas Gleixner wrote:
> On Fri, Nov 27 2020 at 17:59, Balbir Singh wrote:
>
>
On Fri, 2020-12-04 at 22:21 +0100, Thomas Gleixner wrote:
> On Fri, Nov 27 2020 at 17:59, Balbir Singh wrote:
> >
On 18/11/20 10:19 am, Joel Fernandes (Google) wrote:
> From: Peter Zijlstra
>
> pick_next_entity() is passed curr == NULL during core-scheduling. Due to
> this, if the rbtree is empty, the 'left' variable is set to NULL within
> the function. This can cause crashes within the function.
>
> This
On 18/11/20 10:19 am, Joel Fernandes (Google) wrote:
> From: Peter Zijlstra
>
> Because sched_class::pick_next_task() also implies
> sched_class::set_next_task() (and possibly put_prev_task() and
> newidle_balance) it is not state invariant. This makes it unsuitable
> for remote task selection.
>
On 18/11/20 10:19 am, Joel Fernandes (Google) wrote:
> From: Peter Zijlstra
>
> In preparation of playing games with rq->lock, abstract the thing
> using an accessor.
>
Could you clarify what "games" means here? I presume the intention is to redefine the scope of
the lock depending on whether core scheduling is enabled
On 1/10/20 9:49 am, Singh, Balbir wrote:
> On 1/10/20 7:38 am, Thomas Gleixner wrote:
>
>>
>>
>>
>> On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
>>> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
>>>> On Wed, Sep 30 202
On 1/10/20 7:38 am, Thomas Gleixner wrote:
>
>
>
> On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
>> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
>>> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote
On 1/10/20 7:38 am, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
>> On Wed, Sep 30
On 1/10/20 4:00 am, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>> On Wed, Sep 30
On 29/7/20 11:14 pm, Tom Lendacky wrote:
>
>
> On 7/28/20 7:11 PM, Balbir Singh wrote:
>> Use the existing PR_GET/SET_SPECULATION_CTRL API to expose the L1D
>> flush capability. For L1D flushing PR_SPEC_FORCE_DISABLE and
>> PR_SPEC_DISABLE_NOEXEC are not supported.
>>
>> There is also no seccomp
On Tue, 2020-06-02 at 16:28 -0700, Linus Torvalds wrote:
> On Tue, Jun 2, 2020 at 4:01 PM
On Tue, 2020-06-02 at 12:14 -0700, Linus Torvalds wrote:
>
> On Tue, Jun 2, 2020 at 11:29 AM Thomas Gleixner wrote:
> >
> > It's trivial enough to fix. We have a static key already which is
> > telling us whether SMT scheduling is active.
>
> .. but should we do it here, in switch_mm() in the f
On Mon, 2020-06-01 at 19:35 -0700, Linus Torvalds wrote:
>
> On Mon, Jun 1, 2020 at 6:06 PM Balbir Singh wrote:
> >
> > I think apps can do this independently today as in do the flush
> > via software fallback in the app themselves.
>
> Sure, but they can't force the kernel to do crazy things f
On Mon, 2020-05-25 at 21:04 +1000, Stephen Rothwell wrote:
> Hi all,
>
> Today's linux-next merge of the akpm-current tree got a conflict in:
>
> arch/x86/mm/tlb.c
>
> between commit:
>
> 83ce56f712af ("x86/mm: Refactor cond_ibpb() to support other use cases")
>
> from the tip tree and com
> @@ -1057,7 +1063,7 @@ static int xen_translate_vdev(int vdevice, int *minor,
> unsigned int *offset)
> case XEN_SCSI_DISK5_MAJOR:
> case XEN_SCSI_DISK6_MAJOR:
> case XEN_SCSI_DISK7_MAJOR:
> - *offset = (*minor / PARTS_PER_DISK) +
> +
On Tue, 2020-05-19 at 08:39 -0700, Randy Dunlap wrote:
>
> Hi--
>
> Comments below. Sorry about the delay.
>
> On 4/5/20 8:19 PM, Balbir Singh wrote:
> > Add documentation of l1d flushing, explain the need for the
> > feature and how it can be used.
> >
> > Signed-off-by: Balbir Singh
> > ---
On Tue, 2020-04-07 at 11:26 -0700, Kees Cook wrote:
>
>
> On Mon, Apr 06, 2020 at 01:19:45PM +1000, Balbir Singh wrote:
> > Implement a mechanism to selectively flush the L1D cache. The goal is to
> > allow tasks that are paranoid due to the recent snoop assisted data sampling
> > vulnerabilities,
On Tue, 2020-05-19 at 23:26 +, Anchal Agarwal wrote:
> Signed-off--by: Thomas Gleixner
The Signed-off-by line needs to be fixed (hint: you have --)
Balbir Singh
On Wed, 2020-05-13 at 17:27 +0200, Thomas Gleixner wrote:
> Balbir Singh writes:
>
> > Implement a mechanism to se
On Wed, 2020-05-13 at 15:53 +0200, Thomas Gleixner wrote:
>
>
> Balbir Singh writes:
> > +++ b/arch/x86/kernel/l1d_flush.c
> > @@ -0,0 +1,36 @@
>
> Lacks
>
> +// SPDX-License-Identifier: GPL-2.0-only
>
Agreed, it should match the license in arch/x86/kvm/vmx/vmx.c
Thanks,
Balbir
On Wed, 2020-05-13 at 15:35 +0200, Thomas Gleixner wrote:
> Balbir Singh writes:
>
> > Subject: [PATCH v6 1/6] arc
On Wed, 2020-05-13 at 17:04 +0200, Thomas Gleixner wrote:
>
>
> Balbir Singh writes:
> >
> > + if (prev_mm & LAST_USER_MM_L1D_FLUSH)
> > + arch_l1d_flush(0); /* Just flush, don't populate the
> > TLB */
>
> Bah. I fundamentally hate tail comments. They are just disturbing the
>
On Wed, 2020-05-13 at 18:16 +0200, Thomas Gleixner wrote:
> Balbir Singh writes:
>
> This part:
>
> > --- a/include/uapi/linux/prctl.h
> > +++ b/include/uapi/linux/prctl.h
> > @@ -238,4 +238,8 @@ struct prctl_mm_map {
> > #define PR_SET_IO_FLUSHER57
> > #define PR_GET_IO_FLUSHER
On Wed, 2020-05-13 at 15:33 +0200, Thomas Gleixner wrote:
>
>
> Balbir Singh writes:
> > +With an increasing number of vulnerabilities being reported around
> > data
> > +leaks from L1D, a new user space mechanism to flush the L1D cache
> > on
> > +context switch is added to the kernel. This sho
On Mon, 2020-05-04 at 11:39 -0700, Kees Cook wrote:
>
> On Mon, May 04, 2020 at 02:13:42PM +1000, Balbir Singh wrote:
> > Implement a mechanism to selectively flush the L1D cache. The goal
> > is to
> > allow tasks that are paranoid due to the recent snoop assisted data
> > sampling
> > vulnerabil
On Sat, 2020-04-25 at 11:49 +1000, Balbir Singh wrote:
> On Fri, 2020-04-24 at 13:59 -0500, Tom Lendacky wrote:
> >
> > On 4/23/20 9:01 AM, Balbir Singh wrote:
> > > Split out the allocation and free routines to be used in a follow
> > > up set of patches (to reuse for L1D flushing).
> > >
> > >
On Thu, 2019-10-03 at 15:13 -0400, Tyler Ramer wrote:
> Always shutdown the controller when nvme_remove_dead_controller is
> reached.
>
> It's possible for nvme_remove_dead_controller to be called as part of a
> failed reset, when there is a bad NVME_CSTS. The controller won't
> be coming back on
On Fri, 2019-10-04 at 11:36 -0400, Tyler Ramer wrote:
> Here's a failure we had which represents the issue the patch is
> intended to solve:
>
> Aug 26 15:00:56 testhost kernel: nvme nvme4: async event result 00010300
> Aug 26 15:01:27 testhost kernel: nvme nvme4: controller is down; will
> reset:
On 5/21/19 7:54 AM, Jaskaran Khurana wrote:
> Adds in-kernel pkcs7 signature checking for the roothash of
> the dm-verity hash tree.
>
> The verification is to support cases where the roothash is not secured by
> Trusted Boot, UEFI Secureboot or similar technologies.
> One of the use cases for
On 1/23/19 2:09 AM, Torsten Duwe wrote:
> Hi Balbir!
>
Hi, Torsten!
> On Tue, Jan 22, 2019 at 02:39:32PM +1300, Singh, Balbir wrote:
>>
>> On 1/19/19 5:39 AM, Torsten Duwe wrote:
>>> + */
>>> +ftrace_common_return:
>>> + /* restore function
On 1/19/19 5:39 AM, Torsten Duwe wrote:
> Once gcc8 adds 2 NOPs at the beginning of each function, replace the
> first NOP thus generated with a quick LR saver (move it to scratch reg
> x9), so the 2nd replacement insn, the call to ftrace, does not clobber
> the value. Ftrace will then generate
On 7/25/18 1:15 AM, Johannes Weiner wrote:
> Hi Balbir,
>
> On Tue, Jul 24, 2018 at 07:14:02AM +1000, Balbir Singh wrote:
>> Does the mechanism scale? I am a little concerned about how frequently
>> this infrastructure is monitored/read/acted upon.
>
> I expect most users to poll in the freque
On 7/19/18 3:40 AM, Bruce Merry wrote:
> On 18 July 2018 at 17:49, Shakeel Butt wrote:
>> On Wed, Jul 18, 2018 at 8:37 AM Bruce Merry wrote:
>>> That sounds promising. Is there any way to tell how many zombies there
>>> are, and is there any way to deliberately create zombies? If I can
>>> pro
Hello All,
I am not on the list, so please keep me CC'd
along with the list in your replies.
I was going through some code in serial.c and noticed
that there are page allocations/deallocations in
rs_open() and startup() (serial.c). These allocations
could fail. This affects reliability in some min