On Tue, 11 Jun 2024 15:43:59 -0700 Guenter Roeck <li...@roeck-us.net> wrote:
> On 6/11/24 12:28, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)" <rost...@goodmis.org>
> > 
> > In preparation for having the ring buffer mapped to a dedicated location,
> > which will have the same restrictions as user space memory mapped buffers,
> > allow it to use the "mapped" field of the ring_buffer_per_cpu structure
> > without having the user space meta page mapping.
> > 
> > When this starts using the mapped field, it will need to handle adding a
> > user space mapping (and removing it) from a ring buffer that is using a
> > dedicated memory range.
> > 
> > Signed-off-by: Steven Rostedt (Google) <rost...@goodmis.org>
> > ---
> >  kernel/trace/ring_buffer.c | 11 ++++++++---
> >  1 file changed, 8 insertions(+), 3 deletions(-)
> > 
> > diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> > index 28853966aa9a..78beaccf9c8c 100644
> > --- a/kernel/trace/ring_buffer.c
> > +++ b/kernel/trace/ring_buffer.c
> > @@ -5224,6 +5224,9 @@ static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
> >  {
> >  	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
> >  
> > +	if (!meta)
> > +		return;
> > +
> >  	meta->reader.read = cpu_buffer->reader_page->read;
> >  	meta->reader.id = cpu_buffer->reader_page->id;
> >  	meta->reader.lost_events = cpu_buffer->lost_events;
> > @@ -6167,7 +6170,7 @@ rb_get_mapped_buffer(struct trace_buffer *buffer, int cpu)
> >  
> >  	mutex_lock(&cpu_buffer->mapping_lock);
> >  
> > -	if (!cpu_buffer->mapped) {
> > +	if (!cpu_buffer->mapped || !cpu_buffer->meta_page) {
> >  		mutex_unlock(&cpu_buffer->mapping_lock);
> >  		return ERR_PTR(-ENODEV);
> >  	}
> > @@ -6359,12 +6362,13 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
> >  	 */
> >  	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
> >  	rb_setup_ids_meta_page(cpu_buffer, subbuf_ids);
> > +
> 
> Picky again. Is that a leftover from something? I don't see an immediate
> reason for the added newline.

Hmm, I could remove it.

> >  	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
> >  
> >  	err = __rb_map_vma(cpu_buffer, vma);
> >  	if (!err) {
> >  		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
> > -		cpu_buffer->mapped = 1;
> > +		cpu_buffer->mapped++;
> >  		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
> >  	} else {
> >  		kfree(cpu_buffer->subbuf_ids);
> > @@ -6403,7 +6407,8 @@ int ring_buffer_unmap(struct trace_buffer *buffer, int cpu)
> >  	mutex_lock(&buffer->mutex);
> >  	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
> >  
> > -	cpu_buffer->mapped = 0;
> > +	WARN_ON_ONCE(!cpu_buffer->mapped);
> > +	cpu_buffer->mapped--;
> 
> This will wrap to UINT_MAX if it was 0. Is that intentional?

If mapped is non-zero, it limits what it can do. If it enters here as
zero, we are really in an unknown state, so yeah, wrapping will just
keep it limited. Which is a good thing.

Do you want me to add a comment there?

-- Steve

> > 
> >  	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
> > 
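
[Editor's note: to make the wrap-around behavior under discussion concrete,
here is a minimal standalone C sketch. It is not kernel code: the "mapped"
counter is modeled as a plain unsigned int (which is what Guenter's UINT_MAX
remark implies), with all locking and the rest of ring_buffer_per_cpu
omitted.]

#include <limits.h>
#include <stdio.h>

/* Userspace model of the "mapped" counter from ring_buffer_per_cpu.
 * Assumption for this sketch: the field behaves as an unsigned int.
 */
static unsigned int mapped;

int main(void)
{
	/* Balanced map/unmap: the counter returns to zero. */
	mapped++;		/* as in ring_buffer_map()   */
	mapped--;		/* as in ring_buffer_unmap() */
	printf("balanced:   %u\n", mapped);

	/* Unbalanced unmap while already zero: the unsigned decrement
	 * wraps to UINT_MAX, so the counter stays non-zero and the
	 * mapping restrictions remain in effect.
	 */
	mapped--;
	printf("unbalanced: %u (UINT_MAX is %u)\n", mapped, UINT_MAX);
	return 0;
}

Running this prints 0 for the balanced case and 4294967295 (UINT_MAX) for
the unbalanced one, which illustrates Steve's point: after an underflow the
buffer still looks mapped, so it stays in its restricted state rather than
appearing free.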