In preparation for the ring-buffer memory mapping, allocate compound
pages for the ring-buffer sub-buffers to enable us to map them to
user-space with vm_insert_pages().
Signed-off-by: Vincent Donnefort
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 25476ead681b
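For reference, allocating a sub-buffer as a compound page essentially means passing __GFP_COMP to the page allocator. A minimal sketch (the helper name and arguments are illustrative, not the patch itself):

	static void *alloc_subbuf_sketch(int cpu, unsigned int order)
	{
		struct page *page;

		/* __GFP_COMP turns the 2^order pages into a single compound
		 * page, which vm_insert_pages() can later map to user-space. */
		page = alloc_pages_node(cpu_to_node(cpu),
					GFP_KERNEL | __GFP_COMP | __GFP_ZERO,
					order);

		return page ? page_address(page) : NULL;
	}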
during the
first mapping.
Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice enabling functions will in
reality simply memcpy the data instead of swapping subbufs.
CC:
Signed-off-by: Vincent Donnefort
diff --git a/include/linux
TRACE_MMAP_IOCTL_GET_READER. This will update the Meta-page reader ID to
point to the next reader containing unread data.
Mapping will prevent snapshot and buffer size modifications.
CC:
Signed-off-by: Vincent Donnefort
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index
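From user-space, advancing the reader is then a single ioctl() call. An illustrative sketch (only the ioctl name is taken from the series; the wrapper function is hypothetical):

	#include <sys/ioctl.h>
	#include <linux/trace_mmap.h>	/* TRACE_MMAP_IOCTL_GET_READER */

	/* Ask the kernel to move the meta-page reader ID to the next
	 * sub-buffer containing unread data. */
	static int get_next_reader(int fd)
	{
		return ioctl(fd, TRACE_MMAP_IOCTL_GET_READER);
	}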
It is now possible to mmap() a ring-buffer to stream its content. Add
some documentation and a code example.
Signed-off-by: Vincent Donnefort
diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst
index 5092d6c13af5..0b300901fd75 100644
--- a/Documentation/trace/index.rst
On Thu, May 02, 2024 at 03:30:32PM +0200, David Hildenbrand wrote:
> On 30.04.24 13:13, Vincent Donnefort wrote:
> > In preparation for allowing the user-space to map a ring-buffer, add
> > a set of mapping functions:
> >
> >ring_buffer_{map,unmap}()
> >
On Tue, May 07, 2024 at 10:34:02PM -0400, Steven Rostedt wrote:
> On Tue, 30 Apr 2024 12:13:51 +0100
> Vincent Donnefort wrote:
>
> > +#ifdef CONFIG_MMU
> > +static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
> > +
On Fri, May 10, 2024 at 11:15:59AM +0200, David Hildenbrand wrote:
> On 09.05.24 13:05, Vincent Donnefort wrote:
> > On Tue, May 07, 2024 at 10:34:02PM -0400, Steven Rostedt wrote:
> > > On Tue, 30 Apr 2024 12:13:51 +0100
> > > Vincent Donnefort wrote:
[...]
> > > +
> > > +	while (s < nr_subbufs && p < nr_pages) {
> > > +		struct page *page = virt_to_page(cpu_buffer->subbuf_ids[s]);
> > > +		int off = 0;
> > > +
> > > +		for (; off < (1 << (subbuf_order)); off++, page++) {
> > > +			if (p >= nr_pages)
> > > +
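For context, once a loop like the one above has collected the sub-buffer pages into an array, they would typically be handed to vm_insert_pages() in one call. A simplified sketch (error handling and locking omitted; 'nr_pages' appears in the excerpt, the 'pages' array and helper name are assumptions):

	static int insert_collected_pages(struct vm_area_struct *vma,
					  struct page **pages,
					  unsigned long nr_pages)
	{
		unsigned long nr = nr_pages;

		/* vm_insert_pages() maps the whole array into the vma,
		 * starting at vma->vm_start. */
		return vm_insert_pages(vma, vma->vm_start, pages, &nr);
	}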
ring_buffer_meta_header
v1 -> v2:
* Hide data_pages from the userspace struct
* Fix META_PAGE_MAX_PAGES
* Support for order > 0 meta-page
* Add missing page->mapping.
Vincent Donnefort (5):
ring-buffer: Allocate sub-buffers with __GFP_COMP
ring-buffer: Introducing ring-buffer mapping functions
In preparation for the ring-buffer memory mapping, allocate compound
pages for the ring-buffer sub-buffers to enable us to map them to
user-space with vm_insert_pages().
Acked-by: David Hildenbrand
Signed-off-by: Vincent Donnefort
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace
during the
first mapping.
Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice enabling functions will in
reality simply memcpy the data instead of swapping subbufs.
CC:
Signed-off-by: Vincent Donnefort
diff --git a/include/linux
TRACE_MMAP_IOCTL_GET_READER. This will update the Meta-page reader ID to
point to the next reader containing unread data.
Mapping will prevent snapshot and buffer size modifications.
CC:
Signed-off-by: Vincent Donnefort
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index
It is now possible to mmap() a ring-buffer to stream its content. Add
some documentation and a code example.
Signed-off-by: Vincent Donnefort
diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst
index 5092d6c13af5..0b300901fd75 100644
--- a/Documentation/trace/index.rst
	subbuf = next_reader_subbuf(fd, map, &read);
	kbuffer_load_subbuffer(kbuf, data + map->bpage_size * subbuf);
	while (kbuf->curr < read)
		kbuffer_next_event(kbuf, NULL);
	read_page(tep, kbuf);
}
munmap(data, data_len);
following their unique ID, assigned during the
first mapping.
Once mapped, no bpage can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice enabling functions will in
reality simply memcpy the data instead of swapping the buffer pages.
Signed-off-by: Vincent Donnefort
introduced ioctl:
TRACE_MMAP_IOCTL_GET_READER. This will update the Meta-page reader ID to
point to the next reader containing unread data.
Signed-off-by: Vincent Donnefort
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index 9536f0b7c094..e44563cf5ede 100644
--- a/include/uapi/linux/
> 0 and a struct buffer_page (often referred to as "bpage")
already exists. We then have an unnecessary duplicate subbuffer ==
bpage.
Remove all references to sub-buffer and replace them with either bpage
or ring_buffer_page.
Signed-off-by: Vincent Donnefort
---
I forgot this patch whe
	kbuffer_load_subbuffer(kbuf, data + map->subbuf_size * subbuf);
	while (kbuf->curr < read)
		kbuffer_next_event(kbuf, NULL);
	read_subbuf(tep, kbuf);
}

munmap(data, data_len);
munmap(meta, page_size);
close(fd);
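For context, a read loop like the one above assumes a mapping set up roughly as follows (a sketch: the tracefs path and the meta_len/data_len variables are illustrative, only fd, meta, map and data appear in the excerpt):

	fd = open("/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw", O_RDONLY);

	/* The meta-page is mapped first... */
	meta = mmap(NULL, page_size, PROT_READ, MAP_SHARED, fd, 0);
	map = meta;	/* 'map' points at the meta-page structure */

	/* ...followed by the sub-buffers holding the trace data, at the
	 * offset right after the meta-page. */
	data = mmap(NULL, data_len, PROT_READ, MAP_SHARED, fd, meta_len);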
their unique ID, assigned during the
first mapping.
Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice enabling functions will in
reality simply memcpy the data instead of swapping subbufs.
Signed-off-by: Vincent Donnefort
diff
TRACE_MMAP_IOCTL_GET_READER. This will update the Meta-page reader ID to
point to the next reader containing unread data.
Signed-off-by: Vincent Donnefort
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index f950648b0ba9..8c49489c5867 100644
--- a/include/uapi/linux/trace_mmap.h
+
On Tue, Dec 19, 2023 at 03:39:24PM -0500, Steven Rostedt wrote:
> On Tue, 19 Dec 2023 18:45:54 +
> Vincent Donnefort wrote:
>
> > The tracing ring-buffers can be stored on disk or sent to network
> > without any copy via splice. However the later doesn't allow real
On Wed, Dec 20, 2023 at 08:29:32AM -0500, Steven Rostedt wrote:
> On Wed, 20 Dec 2023 13:06:06 +
> Vincent Donnefort wrote:
>
> > > @@ -771,10 +772,20 @@ static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
> > > static void rb_w
sub-buffers.
Also update the ring-buffer map_test to verify that padding.
Signed-off-by: Vincent Donnefort
--
This is based on the mm-unstable branch [1] as it depends on David's work [2]
for allowing the zero-page in vm_insert_page().
[1] https://git.kernel.org/pub/scm/linux/kernel/git
sub-buffers.
Also update the ring-buffer map_test to verify that padding.
Signed-off-by: Vincent Donnefort
--
This is based on the mm-unstable branch [1] as it depends on David's work [2]
for allowing the zero-page in vm_insert_page().
[1] https://git.kernel.org/pub/scm/linux/kernel/git
Improve the ring-buffer meta-page test coverage by checking for the
entire padding region to be 0 instead of just looking at the first 4
bytes.
Signed-off-by: Vincent Donnefort
--
Hi,
I saw you have sent "Align meta-page to sub-buffers for improved TLB usage" to
linux-next, so here's
Handle the case where the meta-page content is bigger than the system
page-size. This prepares the ground for extending features covered by
the meta-page.
Signed-off-by: Vincent Donnefort
diff --git a/tools/testing/selftests/ring-buffer/map_test.c b/tools/testing/selftests/ring-buffer/map_test.c
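In user-space terms, handling a meta-page larger than the system page-size boils down to remapping with the advertised length. A sketch under the assumption that the meta-page exposes its own size (the 'meta_page_size' field name below is hypothetical):

	meta = mmap(NULL, page_size, PROT_READ, MAP_SHARED, fd, 0);

	/* If the meta-page content spans several system pages, remap it
	 * with the size the kernel advertises. */
	if (meta->meta_page_size > page_size) {
		size_t meta_len = meta->meta_page_size;

		munmap(meta, page_size);
		meta = mmap(NULL, meta_len, PROT_READ, MAP_SHARED, fd, 0);
	}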
means sections such as .text and .rodata are scanned by kmemleak.
Refine the scanned areas for modules by limiting it to MOD_TEXT and
MOD_INIT_TEXT mod_mem regions.
CC: Song Liu
CC: Catalin Marinas
Signed-off-by: Vincent Donnefort
diff --git a/kernel/module/debug_kmemleak.c b/kernel/module/d
On Sat, Sep 07, 2024 at 03:12:13PM +0100, Catalin Marinas wrote:
> On Fri, Sep 06, 2024 at 04:38:56PM +0100, Vincent Donnefort wrote:
> > commit ac3b43283923 ("module: replace module_layout with module_memory")
> > introduced a set of memory regions for the module la
On Mon, Sep 09, 2024 at 09:52:57AM +0100, Catalin Marinas wrote:
> On Mon, Sep 09, 2024 at 08:40:34AM +0100, Vincent Donnefort wrote:
> > On Sat, Sep 07, 2024 at 03:12:13PM +0100, Catalin Marinas wrote:
> > > On Fri, Sep 06, 2024 at 04:38:56PM +0100, Vincent Donnefort wro
ata. This means sections such as .text and .rodata are scanned by
kmemleak.
Refine the scanned areas for modules by limiting it to MOD_TEXT and
MOD_INIT_TEXT mod_mem regions.
CC: Song Liu
Reviewed-by: Catalin Marinas
Signed-off-by: Vincent Donnefort
---
v1 -> v2:
- Collect Reviewed-by
On Tue, Apr 20, 2021 at 04:58:00PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 20, 2021 at 04:39:04PM +0200, Peter Zijlstra wrote:
> > On Tue, Apr 20, 2021 at 04:20:56PM +0200, Peter Zijlstra wrote:
> > > On Tue, Apr 20, 2021 at 10:46:33AM +0100, Vincent Donnefort wrote:
From: Vincent Donnefort
This patch-set intends to unify how steps are called throughout hotplug
and hotunplug.
It also improves the "fail" interface, which can now be reset and will
reject states for which a failure can't be recovered.
v2:
- Reject all DEAD steps in the fail interface.
From: Vincent Donnefort
The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are
triggered by the CPUHP_BRINGUP_CPU step. If the latter fails, no atomic
state can be rolled back.
DEAD callbacks too can't fail, disallowing recovery. As a consequence,
during hotunplug, the
From: Vincent Donnefort
Currently, the only way of resetting the fail injection is to trigger a
hotplug, hotunplug or both. This is rather annoying for testing
and, as the default value for this file is -1, it seems pretty natural to
let a user write it.
Signed-off-by: Vincent Donnefort
diff
From: Vincent Donnefort
Factorizing and unifying cpuhp callback range invocations, especially for
the hotunplug path, where two different ways of decrementing were used. The
first one decrements before the callback is called:
cpuhp_thread_fun()
state = st->state;
st->
From: Vincent Donnefort
Being called for each dequeue, util_est reduces the number of its updates
by filtering out those where the EWMA signal differs from the task util_avg
by less than 1%. This is a problem for a sudden util_avg ramp-up. Due to the
decay from a previous high util_avg, EWMA might
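The ~1% filter mentioned above can be pictured with a small helper. This is only an illustration of the check being discussed, not the code from kernel/sched/fair.c (the margin constant is an assumption):

	#define UTIL_EST_MARGIN		(1024 / 100)	/* ~1% of SCHED_CAPACITY_SCALE */

	/* Return true when the EWMA and util_avg are close enough that the
	 * EWMA update would be skipped. */
	static int util_est_within_margin(unsigned long ewma,
					  unsigned long util_avg)
	{
		long diff = (long)ewma - (long)util_avg;

		return (diff < 0 ? -diff : diff) < UTIL_EST_MARGIN;
	}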
so,
despite not appearing in the statistics (the idle driver used here doesn't
report it), we can speculate that we also improve the cluster idle time.
[1] WFI: Wait for interrupt.
Vincent Donnefort (1):
PM / EM: Inefficient OPPs detection
include/linux/ener
te is hence changed from O(n) to O(1). This also
speeds up em_cpu_energy() even if no inefficient OPPs have been found.
Signed-off-by: Vincent Donnefort
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 757fc60..90b9cb0 100644
--- a/include/linux/energy_model.h
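The detection itself can be summarised as: an OPP is inefficient whenever a higher-frequency OPP has a lower or equal cost, since the faster OPP finishes the work sooner for no extra energy. A standalone sketch of that marking pass (illustrative types, not the energy_model.h implementation):

	struct opp_sketch {
		unsigned long frequency;
		unsigned long cost;		/* power * max_freq / freq */
		bool inefficient;
	};

	/* Walk the table (sorted by ascending frequency) from the highest
	 * frequency down, remembering the lowest cost seen so far: any
	 * slower OPP that is not cheaper than that can never be the most
	 * efficient choice. */
	static void mark_inefficient_opps(struct opp_sketch *table, int nr)
	{
		unsigned long min_cost = ULONG_MAX;
		int i;

		for (i = nr - 1; i >= 0; i--) {
			if (table[i].cost >= min_cost)
				table[i].inefficient = true;
			else
				min_cost = table[i].cost;
		}
	}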
From: Vincent Donnefort
Changelog since v1:
- Fix the issue in compute_energy(), as a change in cpu_util_next() would
break the OPP selection estimation.
- Separate patch for lsub_positive usage in cpu_util_next()
Vincent Donnefort (2):
sched/fair: Fix task utilization accountability
From: Vincent Donnefort
find_energy_efficient_cpu() (feec()) computes for each perf_domain (pd) an
energy delta as follows:
feec(task)
    for_each_pd
        base_energy = compute_energy(task, -1, pd)
            -> for_each_cpu(pd)
               -> cpu_util_next(cpu, task, -1)
        energy
From: Vincent Donnefort
The sub_positive local version saves an explicit load-store and is
enough for the cpu_util_next() usage.
Signed-off-by: Vincent Donnefort
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 146ac9fec4b6..1364f8b95214 100644
--- a/kernel/sched/fair.c
+++ b
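For reference, the two helpers being compared look roughly like the following (close to, but not guaranteed to be verbatim, kernel/sched/fair.c):

	/* Clamped subtraction on a shared location: needs an explicit
	 * load-store pair (READ_ONCE/WRITE_ONCE). */
	#define sub_positive(_ptr, _val) do {				\
		typeof(_ptr) ptr = (_ptr);				\
		typeof(*ptr) val = (_val);				\
		typeof(*ptr) res, var = READ_ONCE(*ptr);		\
		res = var - val;					\
		if (res > var)						\
			res = 0;					\
		WRITE_ONCE(*ptr, res);					\
	} while (0)

	/* Local variant: the variable is private to the caller, so the
	 * explicit load-store can be skipped. */
	#define lsub_positive(_ptr, _val) do {				\
		typeof(_ptr) ptr = (_ptr);				\
		*ptr -= min_t(typeof(*ptr), *ptr, _val);		\
	} while (0)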
On Thu, Feb 25, 2021 at 04:26:50PM +0100, Vincent Guittot wrote:
> On Mon, 22 Feb 2021 at 10:24, Vincent Donnefort
> wrote:
> >
> > On Fri, Feb 19, 2021 at 11:48:28AM +0100, Vincent Guittot wrote:
> > > On Tue, 16 Feb 2021 at 17:39, wrote:
On Thu, Feb 25, 2021 at 12:45:06PM +0100, Dietmar Eggemann wrote:
> On 25/02/2021 09:36, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
>
> [...]
>
> > cpu_util_next() estimates the CPU utilization that would happen if the
> > task was placed on dst_
e
on hikey/hikey960.
Signed-off-by: Vincent Donnefort
Reviewed-by: Dietmar Eggemann
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9e4104ae39ae..214e02862994 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3966,24 +3966,27 @@ static inline void util_est_dequeue(struct cfs_r
Hi Peter,
[...]
> > >
> > > How about something like:
> > >
> > > #ifdef CONFIG_64BIT
> > >
> > > #define DEFINE_U64_U32(name)		u64 name
> > > #define u64_u32_load(name)		name
> > > #define u64_u32_store(name, val)	name = val
> > >
> > > #else
> > >
> > > #define DEFINE_U64_U32(name
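The !CONFIG_64BIT side of that suggestion would look roughly like the sketch below: the writer keeps a copy, ordered with smp_wmb(), so a 32-bit reader can detect a torn 64-bit value and retry. This is an illustration, not the exact code from the thread:

	#define DEFINE_U64_U32(name)		u64 name; u64 name##_copy

	#define u64_u32_load(name)					\
	({								\
		u64 ___val, ___copy;					\
		do {							\
			___copy = name##_copy;				\
			smp_rmb();					\
			___val = name;					\
		} while (___val != ___copy);	/* retry on torn read */ \
		___val;							\
	})

	#define u64_u32_store(name, val)				\
	do {								\
		name = (val);						\
		smp_wmb();		/* pairs with smp_rmb() above */ \
		name##_copy = name;					\
	} while (0)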
From: Vincent Donnefort
This patch-set intends mainly to fix HP rollback, which is currently broken,
due to an inconsistent "state" usage and an issue with CPUHP_AP_ONLINE_IDLE.
It also improves the "fail" interface, which can now be reset and will reject
CPUHP_BRINGUP_CPU
From: Vincent Donnefort
Currently, the only way of resetting this file is to actually try to run
a hotplug, hotunplug or both. This is quite annoying for testing and, as
the default value for this file is -1, it seems quite natural to let a
user write it.
Signed-off-by: Vincent Donnefort
From: Vincent Donnefort
The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are
triggered by the CPUHP_BRINGUP_CPU step. If the latter doesn't run, none
of the atomic states can. Hence, rollback is not possible after a hotunplug
CPUHP_BRINGUP_CPU step failure and the "fail"
From: Vincent Donnefort
After the AP brought itself down to CPUHP_TEARDOWN_CPU, the BP will finish
the job. The steps left are as follows:
+--------------------+
| CPUHP_TEARDOWN_CPU | -> If fails state is CPUHP_TEARDOWN_CPU
+--------------------+
| ATOMIC STATES      | ->
From: Vincent Donnefort
Factorizing and unifying cpuhp callback range invocations, especially for
the hotunplug path, where two different ways of decrementing were used. The
first one decrements before the callback is called:
cpuhp_thread_fun()
state = st->state;
st->
Hi Valentin,
On Thu, Dec 10, 2020 at 04:38:30PM +, Valentin Schneider wrote:
> Per-CPU kworkers forcefully migrated away by hotplug via
> workqueue_offline_cpu() can end up spawning more kworkers via
>
> manage_workers() -> maybe_create_worker()
>
> Workers created at this point will be bo
On Fri, Dec 11, 2020 at 01:13:35PM +, Valentin Schneider wrote:
> On 11/12/20 12:51, Valentin Schneider wrote:
> >> In that case maybe we should check for the cpu_active_mask here too ?
> >
> > Looking at it again, I think we might need to.
> >
> > IIUC you can end up with pools bound to a sing
On Mon, Mar 01, 2021 at 06:21:23PM +0100, Peter Zijlstra wrote:
> On Mon, Mar 01, 2021 at 05:34:09PM +0100, Dietmar Eggemann wrote:
> > On 26/02/2021 09:41, Peter Zijlstra wrote:
> > > On Thu, Feb 25, 2021 at 04:58:20PM +, Vincent Donnefort wrote:
> > >
On Thu, Apr 15, 2021 at 01:12:05PM +, Quentin Perret wrote:
> Hi Vincent,
>
> On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> > Some SoCs, such as the sd855 have OPPs within the same performance domain,
> > whose cost is higher than others with a h
On Thu, Apr 15, 2021 at 01:16:35PM +, Quentin Perret wrote:
> On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -10,6 +10,7 @@
> >
> > #incl
On Thu, Apr 15, 2021 at 02:59:54PM +, Quentin Perret wrote:
> On Thursday 15 Apr 2021 at 15:34:53 (+0100), Vincent Donnefort wrote:
> > On Thu, Apr 15, 2021 at 01:16:35PM +, Quentin Perret wrote:
> > > On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wr
On Thu, Apr 15, 2021 at 03:04:34PM +, Quentin Perret wrote:
> On Thursday 15 Apr 2021 at 15:12:08 (+0100), Vincent Donnefort wrote:
> > On Thu, Apr 15, 2021 at 01:12:05PM +, Quentin Perret wrote:
> > > Hi Vincent,
> > >
> > > On Thursday 08 Apr 2021 at
On Thu, Apr 15, 2021 at 03:32:11PM +0100, Valentin Schneider wrote:
> On 15/04/21 10:59, Peter Zijlstra wrote:
> > Can't make sense of what I did.. I've removed that hunk. Patch now looks
> > like this.
> >
>
> Small nit below, but regardless feel free to apply to the whole lot:
> Reviewed-by: Val
On Mon, Apr 19, 2021 at 11:56:30AM +0100, Vincent Donnefort wrote:
> On Thu, Apr 15, 2021 at 03:32:11PM +0100, Valentin Schneider wrote:
> > On 15/04/21 10:59, Peter Zijlstra wrote:
> > > Can't make sense of what I did.. I've removed that hunk. Patch now looks
> >
On Fri, Feb 19, 2021 at 11:48:28AM +0100, Vincent Guittot wrote:
> On Tue, 16 Feb 2021 at 17:39, wrote:
> >
> > From: Vincent Donnefort
> >
> > Being called for each dequeue, util_est reduces the number of its updates
> > by filtering out when the EWMA signal is
On Fri, Feb 19, 2021 at 11:19:05AM +0100, Dietmar Eggemann wrote:
> On 16/02/2021 17:39, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > Being called for each dequeue, util_est reduces the number of its updates
> > by filtering out when the EWMA s
From: Vincent Donnefort
Currently, cpu_util_next() estimates the CPU utilization as follows:
max(cpu_util + task_util,
cpu_util_est + task_util_est)
This is an issue when making a comparison between CPUs, as the task
contribution can be either:
(1) task_util_est, on a mostly idle
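In isolation, the estimate quoted at the top of this message amounts to the following helper (names are illustrative, not the kernel's exact code):

	static unsigned long cpu_util_next_sketch(unsigned long cpu_util,
						  unsigned long cpu_util_est,
						  unsigned long task_util,
						  unsigned long task_util_est)
	{
		/* max() of the PELT view and the util_est view, each with the
		 * task contribution added on top. */
		unsigned long pelt = cpu_util + task_util;
		unsigned long est  = cpu_util_est + task_util_est;

		return pelt > est ? pelt : est;
	}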
Hi Quentin,
On Mon, Feb 22, 2021 at 10:11:03AM +, Quentin Perret wrote:
> Hey Vincent,
>
> On Monday 22 Feb 2021 at 09:54:01 (+), vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > Currently, cpu_util_next() estimates the CPU utilization
On Mon, Feb 22, 2021 at 12:23:04PM +, Quentin Perret wrote:
> On Monday 22 Feb 2021 at 11:36:03 (+), Vincent Donnefort wrote:
> > Here's with real life numbers.
> >
> > The task: util_avg=3 (1) util_est=11 (2)
> >
> > pd0 (CPU-0, CPU-1, CPU-2)
On Mon, Feb 22, 2021 at 03:58:56PM +, Quentin Perret wrote:
> On Monday 22 Feb 2021 at 15:01:51 (+), Vincent Donnefort wrote:
> > You mean that it could lead to a wrong frequency estimation when doing
> > freq = map_util_freq() in em_cpu_energy()?
>
> I'm
On Mon, Feb 22, 2021 at 04:23:42PM +, Quentin Perret wrote:
> On Monday 22 Feb 2021 at 15:58:56 (+), Quentin Perret wrote:
> > But in any case, if we're going to address this, I'm still not sure this
> > patch will be what we want. As per my first comment we need to keep the
> > frequency e
On Wed, Jan 20, 2021 at 01:58:35PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 11, 2021 at 05:10:45PM +, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are
> > triggered by the
On Wed, Jan 20, 2021 at 06:53:33PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 20, 2021 at 06:45:16PM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 11, 2021 at 05:10:46PM +, vincent.donnef...@arm.com wrote:
> > > @@ -475,6 +478,11 @@ cpuhp_set_state(struct cpuhp_cpu_state *st, enum
> > > cpuhp_s
On Thu, Jan 21, 2021 at 03:57:03PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 11, 2021 at 05:10:47PM +, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > After the AP brought itself down to CPUHP_TEARDOWN_CPU, the BP will finish
> > the job. The
From: Vincent Donnefort
Fix the following compilation warning:
drivers/irqchip/irq-armada-370-xp.c:55:23: warning: 'irq_controller_lock'
defined but not used [-Wunused-variable]
Signed-off-by: Vincent Donnefort
diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
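One generic way of silencing such a warning, when the variable is only needed in some configurations, is to guard its definition; an illustrative sketch, not necessarily the exact fix applied here:

	#ifdef CONFIG_SMP
	/* Only referenced by the SMP code paths, so only define it there. */
	static DEFINE_RAW_SPINLOCK(irq_controller_lock);
	#endif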
From: Vincent Donnefort
Fix the following compilation warning:
drivers/irqchip/irq-armada-370-xp.c:55:23: warning: 'irq_controller_lock'
defined but not used [-Wunused-variable]
Signed-off-by: Vincent Donnefort
diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
a
> > bisect and found:
> >
> > $ git bisect good
> > b667cf488aa9476b0ab64acd91f2a96f188cfd21 is the first bad commit
> > commit b667cf488aa9476b0ab64acd91f2a96f188cfd21
> > Author: Vincent Donnefort
> > Date: Fri Feb 7 14:21:05 2014 +0100
> >
> >
From: Vincent Donnefort
This patch fixes a kernel NULL pointer BUG introduced by the following commit:
b667cf488aa9476b0ab64acd91f2a96f188cfd21
gpio: ich: Add support for multiple register addresses.
Signed-off-by: Vincent Donnefort
diff --git a/drivers/gpio/gpio-ich.c b/drivers/gpio/gpio-ich.c
a 49 03 75 00 4c 89 4d c8 e8 ec
> RIP [] ichx_gpio_probe+0x28c/0x3d0 [gpio_ich]
> RSP
> CR2:
>
>
> This is almost certainly caused by the uninitialized regs ptr
> in the ich6_desc struct (i3100_desc struct has the same problem)
> introduced in thi
On Sat, Aug 23, 2014 at 01:24:56PM -0400, Tejun Heo wrote:
> Hello,
>
> On Fri, Aug 22, 2014 at 05:21:30PM -0700, Bryan Wu wrote:
> > On Tue, Aug 19, 2014 at 6:51 PM, Hugh Dickins wrote:
> > > On Tue, 19 Aug 2014, Vincent Donnefort wrote:
> > >
> > >>
This patch introduces a work which takes care of resetting the blink workqueue
and avoids calling the cancel_delayed_work_sync() function, which may sleep,
from an IRQ context.
Signed-off-by: Vincent Donnefort
diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
index 129729d..0971554
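The pattern described above, with a hypothetical driver context rather than the actual led-class structures: the IRQ-context caller only queues a regular work item, and the sleeping cancel_delayed_work_sync() runs later in process context:

	struct blink_ctx {
		struct delayed_work blink_work;
		struct work_struct reset_work;
	};

	static void blink_reset_fn(struct work_struct *ws)
	{
		struct blink_ctx *ctx = container_of(ws, struct blink_ctx,
						     reset_work);

		/* Workqueue context: sleeping is allowed here. */
		cancel_delayed_work_sync(&ctx->blink_work);
	}

	/* Safe to call from an IRQ handler: it only queues the work. */
	static void blink_reset(struct blink_ctx *ctx)
	{
		schedule_work(&ctx->reset_work);
	}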
Hugh,
Here's a patch which should fix your problem. It allows led_blink_set() to be
called from an IRQ handler by adding a work item that takes care of calling
cancel_delayed_work_sync(), which may sleep.
Regards,
Vincent.
Vincent Donnefort (1):
leds: make led_blink_set IRQ safe
drivers/leds/led-cl
From: Vincent Donnefort
device_release() frees the resources before calling the device-specific
release callback, which, in the case of devfreq, stops the governor.
This is a problem as some governors use the device resources, e.g.
simpleondemand, which uses the devfreq
From: Vincent Donnefort
The util_est signals are key elements for EAS task placement and
frequency selection. Having tracepoints to track these signals enables
load-tracking and schedutil testing and/or debugging by a toolkit.
Signed-off-by: Vincent Donnefort
diff --git a/include/trace/events
On Mon, Sep 21, 2020 at 06:36:02PM +0200, Peter Zijlstra wrote:
[...]
> +
> +	[CPUHP_AP_SCHED_WAIT_EMPTY] = {
> +		.name			= "sched:waitempty",
> +		.startup.single		= NULL,
> +		.teardown.single	= sched_cpu_wait_empty,
> +	},
From: Vincent Donnefort
rq->cpu_capacity is a key element in several scheduler parts, such as EAS
task placement and load balancing. Tracking this value enables testing
and/or debugging by a toolkit.
Signed-off-by: Vincent Donnefort
diff --git a/include/linux/sched.h b/include/linux/sche
From: Vincent Donnefort
rq->cpu_capacity is a key element in several scheduler parts, such as EAS
task placement and load balancing. Tracking this value enables testing
and/or debugging by a toolkit.
Signed-off-by: Vincent Donnefort
diff --git a/include/linux/sched.h b/include/linux/sche
From: Vincent Donnefort
Introducing two macro helpers u64_32read() and u64_32read_set_copy() to
factorize the u64 min_vruntime and last_update_time reads on a 32-bit
architecture. Those new helpers encapsulate smp_rmb() and smp_wmb()
synchronization and therefore have a small penalty in
Hi,
On Mon, Jul 27, 2020 at 01:24:54PM +0200, Ingo Molnar wrote:
>
> * vincent.donnef...@arm.com wrote:
>
> > From: Vincent Donnefort
> >
> > Introducing two macro helpers u64_32read() and u64_32read_set_copy() to
> > factorize the u64 vminruntime and l
On Mon, Jul 27, 2020 at 02:38:01PM +0200, pet...@infradead.org wrote:
> On Mon, Jul 27, 2020 at 11:59:24AM +0100, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > Introducing two macro helpers u64_32read() and u64_32read_set_copy() to
> > factor
Hi,
On Tue, Jul 28, 2020 at 02:00:27PM +0200, pet...@infradead.org wrote:
> On Tue, Jul 28, 2020 at 01:13:02PM +0200, pet...@infradead.org wrote:
> > On Mon, Jul 27, 2020 at 04:23:03PM +0100, Vincent Donnefort wrote:
> >
> > > For 32-bit architectures, both min_vruntime
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 2d120f71df4baeb7694f513c86fe6f85940f6f76
Gitweb:
https://git.kernel.org/tip/2d120f71df4baeb7694f513c86fe6f85940f6f76
Author: Vincent Donnefort
AuthorDate: Thu, 25 Feb 2021 08:36:11
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 5e7f238920174248049ff840eff43c94f3a2e67e
Gitweb:
https://git.kernel.org/tip/5e7f238920174248049ff840eff43c94f3a2e67e
Author: Vincent Donnefort
AuthorDate: Tue, 16 Feb 2021 10:35:05
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 9357e217ba642b39ce89f9cd5b5f3e5a21712283
Gitweb:
https://git.kernel.org/tip/9357e217ba642b39ce89f9cd5b5f3e5a21712283
Author: Vincent Donnefort
AuthorDate: Thu, 25 Feb 2021 16:58:20
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 6d06c515e9151dc858e391bd6bebce0b684eec4f
Gitweb:
https://git.kernel.org/tip/6d06c515e9151dc858e391bd6bebce0b684eec4f
Author: Vincent Donnefort
AuthorDate: Tue, 16 Feb 2021 10:35:04
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: b641a8b52c6162172ca31590510569eaadcd5e49
Gitweb:
https://git.kernel.org/tip/b641a8b52c6162172ca31590510569eaadcd5e49
Author: Vincent Donnefort
AuthorDate: Thu, 25 Feb 2021 08:36:12
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 8b89220650146d59e9a8af2e5f12fc582539609e
Gitweb:
https://git.kernel.org/tip/8b89220650146d59e9a8af2e5f12fc582539609e
Author: Vincent Donnefort
AuthorDate: Tue, 16 Feb 2021 10:35:06
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 92e1512c5329c7675b66163d357d00d95107fa03
Gitweb:
https://git.kernel.org/tip/92e1512c5329c7675b66163d357d00d95107fa03
Author: Vincent Donnefort
AuthorDate: Tue, 16 Feb 2021 10:35:05
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: e58422d9789d5c61bb039e40e4e10781d8d43ac3
Gitweb:
https://git.kernel.org/tip/e58422d9789d5c61bb039e40e4e10781d8d43ac3
Author: Vincent Donnefort
AuthorDate: Tue, 16 Feb 2021 10:35:04
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 4537b36ee8a290376174b297d6812cba1375ea79
Gitweb:
https://git.kernel.org/tip/4537b36ee8a290376174b297d6812cba1375ea79
Author: Vincent Donnefort
AuthorDate: Thu, 25 Feb 2021 08:36:12
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 78ca1ab2718a5518171f2e7d0afad0b9752c4453
Gitweb:
https://git.kernel.org/tip/78ca1ab2718a5518171f2e7d0afad0b9752c4453
Author: Vincent Donnefort
AuthorDate: Thu, 25 Feb 2021 16:58:20
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: c91b0dcb6482096e7af4adbf39cfe3296af74a78
Gitweb:
https://git.kernel.org/tip/c91b0dcb6482096e7af4adbf39cfe3296af74a78
Author: Vincent Donnefort
AuthorDate: Tue, 16 Feb 2021 10:35:06
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 8af20c5fe756a9ff556c9b520201b2d158874481
Gitweb:
https://git.kernel.org/tip/8af20c5fe756a9ff556c9b520201b2d158874481
Author: Vincent Donnefort
AuthorDate: Thu, 25 Feb 2021 08:36:11
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 0372e1cf70c28de6babcba38ef97b6ae3400b101
Gitweb:
https://git.kernel.org/tip/0372e1cf70c28de6babcba38ef97b6ae3400b101
Author: Vincent Donnefort
AuthorDate: Thu, 25 Feb 2021 08:36:11
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: b89997aa88f0b07d8a6414c908af75062103b8c9
Gitweb:
https://git.kernel.org/tip/b89997aa88f0b07d8a6414c908af75062103b8c9
Author: Vincent Donnefort
AuthorDate: Thu, 25 Feb 2021 16:58:20
Committer