[RFC 6/6] doc: changes: update Python minimal version

2025-01-29 Thread Mauro Carvalho Chehab
The current minimal version doesn't match what we currently have in the
kernel tree:

$ vermin -v  $(git ls-files *.py)
...
Minimum required versions: 3.10
Incompatible versions: 2

Those are the Python scripts requiring versions higher than the current
minimal (3.5):

!2, 3.10 tools/net/sunrpc/xdrgen/generators/__init__.py
!2, 3.10 tools/net/sunrpc/xdrgen/generators/program.py
!2, 3.10 tools/net/sunrpc/xdrgen/subcmds/source.py
!2, 3.10 tools/net/sunrpc/xdrgen/xdr_ast.py
!2, 3.10 tools/power/cpupower/bindings/python/test_raw_pylibcpupower.py
!2, 3.9  tools/testing/selftests/net/rds/test.py
!2, 3.9  tools/net/ynl/ethtool.py
!2, 3.9  tools/net/ynl/cli.py
!2, 3.9  scripts/checktransupdate.py
!2, 3.8  tools/testing/selftests/tc-testing/plugin-lib/nsPlugin.py
!2, 3.8  tools/testing/selftests/hid/tests/base.py
!2, 3.7  tools/testing/selftests/turbostat/smi_aperf_mperf.py
!2, 3.7  tools/testing/selftests/turbostat/defcolumns.py
!2, 3.7  tools/testing/selftests/turbostat/added_perf_counters.py
!2, 3.7  tools/testing/selftests/hid/tests/conftest.py
!2, 3.7  tools/testing/kunit/qemu_config.py
!2, 3.7  tools/testing/kunit/kunit_tool_test.py
!2, 3.7  tools/testing/kunit/kunit.py
!2, 3.7  tools/testing/kunit/kunit_parser.py
!2, 3.7  tools/testing/kunit/kunit_kernel.py
!2, 3.7  tools/testing/kunit/kunit_json.py
!2, 3.7  tools/testing/kunit/kunit_config.py
!2, 3.7  tools/perf/scripts/python/gecko.py
!2, 3.7  scripts/rust_is_available_test.py
!2, 3.7  scripts/bpf_doc.py
!2, 3.6  tools/writeback/wb_monitor.py
!2, 3.6  tools/workqueue/wq_monitor.py
!2, 3.6  tools/workqueue/wq_dump.py
!2, 3.6  tools/usb/p9_fwd.py
!2, 3.6  tools/tracing/rtla/sample/timerlat_load.py
!2, 3.6  tools/testing/selftests/net/openvswitch/ovs-dpctl.py
!2, 3.6  tools/testing/selftests/net/nl_netdev.py
!2, 3.6  tools/testing/selftests/net/lib/py/ynl.py
!2, 3.6  tools/testing/selftests/net/lib/py/utils.py
!2, 3.6  tools/testing/selftests/net/lib/py/nsim.py
!2, 3.6  tools/testing/selftests/net/lib/py/netns.py
!2, 3.6  tools/testing/selftests/net/lib/py/ksft.py
!2, 3.6  tools/testing/selftests/kselftest/ksft.py
!2, 3.6  tools/testing/selftests/hid/tests/test_tablet.py
!2, 3.6  tools/testing/selftests/hid/tests/test_sony.py
!2, 3.6  tools/testing/selftests/hid/tests/test_multitouch.py
!2, 3.6  tools/testing/selftests/hid/tests/test_mouse.py
!2, 3.6  tools/testing/selftests/hid/tests/base_gamepad.py
!2, 3.6  tools/testing/selftests/hid/tests/base_device.py
!2, 3.6  tools/testing/selftests/drivers/net/stats.py
!2, 3.6  tools/testing/selftests/drivers/net/shaper.py
!2, 3.6  tools/testing/selftests/drivers/net/queues.py
!2, 3.6  tools/testing/selftests/drivers/net/ping.py
!2, 3.6  tools/testing/selftests/drivers/net/lib/py/remote_ssh.py
!2, 3.6  tools/testing/selftests/drivers/net/lib/py/load.py
!2, 3.6  tools/testing/selftests/drivers/net/lib/py/__init__.py
!2, 3.6  tools/testing/selftests/drivers/net/lib/py/env.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/rss_ctx.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/pp_alloc_fail.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/nic_performance.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/nic_link_layer.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/lib/py/linkconfig.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/lib/py/__init__.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/devmem.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/devlink_port_split.py
!2, 3.6  tools/testing/selftests/drivers/net/hw/csum.py
!2, 3.6  tools/testing/selftests/devices/probe/test_discoverable_devices.py
!2, 3.6  tools/testing/selftests/bpf/test_bpftool_synctypes.py
!2, 3.6  tools/testing/selftests/bpf/generate_udp_fragments.py
!2, 3.6  tools/testing/kunit/run_checks.py
!2, 3.6  tools/testing/kunit/kunit_printer.py
!2, 3.6  tools/sched_ext/scx_show_state.py
!2, 3.6  tools/perf/tests/shell/lib/perf_metric_validation.py
!2, 3.6  tools/perf/tests/shell/lib/perf_json_output_lint.py
!2, 3.6  tools/perf/scripts/python/parallel-perf.py
!2, 3.6  tools/perf/scripts/python/flamegraph.py
!2, 3.6  tools/perf/scripts/python/arm-cs-trace-disasm.

[RFC 5/6] docs: changes: update Sphinx minimal version to 3.4.3

2025-01-29 Thread Mauro Carvalho Chehab
Doing that allows us to get rid of all backward-compatible code.

Signed-off-by: Mauro Carvalho Chehab 
---
 Documentation/conf.py | 2 +-
 Documentation/process/changes.rst | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/conf.py b/Documentation/conf.py
index 0c2205d536b3..3dad1f90b098 100644
--- a/Documentation/conf.py
+++ b/Documentation/conf.py
@@ -47,7 +47,7 @@ from load_config import loadConfig
 # -- General configuration 
 
 # If your documentation needs a minimal Sphinx version, state it here.
-needs_sphinx = '2.4.4'
+needs_sphinx = '3.4.3'
 
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
index 82b5e378eebf..012d2b715c2a 100644
--- a/Documentation/process/changes.rst
+++ b/Documentation/process/changes.rst
@@ -58,7 +58,7 @@ mcelog 0.6  mcelog --version
 iptables   1.4.2iptables -V
 openssl & libcrypto1.0.0openssl version
 bc 1.06.95  bc --version
-Sphinx\ [#f1]_ 2.4.4sphinx-build --version
+Sphinx\ [#f1]_ 3.4.3sphinx-build --version
 cpio   any  cpio --version
 GNU tar1.28 tar --version
 gtags (optional)   6.6.5gtags --version
-- 
2.48.1




[RFC 0/6] Raise the bar with regards to Python and Sphinx requirements

2025-01-29 Thread Mauro Carvalho Chehab
This series comes after
https://lore.kernel.org/linux-doc/87a5b96296@trenco.lwn.net/T/#t
and increases the minimal requirements for Sphinx and Python.

Sphinx release dates:

Release 2.4.0 (released Feb 09, 2020)
Release 2.4.4 (released Mar 05, 2020) (current minimal requirement)
Release 3.4.0 (released Dec 20, 2020)
Release 3.4.3 (released Jan 08, 2021)

(https://www.sphinx-doc.org/en/master/changes/index.html)

In terms of Python, we're currently at 3.5:

Python  Release date 
3.5 2015-09-13(current minimal requirement)
3.6 2016-12-23
3.7 2018-06-27
3.8 2019-10-14
3.9 2020-10-05
3.102021-10-04

(according to
https://en.wikipedia.org/w/index.php?title=History_of_Python)

The new minimal requirements are now:
- Sphinx 3.4.3;
- Python 3.9

The new Sphinx minimal requirement allows dropping all backward-compatible code
we have in kernel-doc and in the Sphinx extensions.

The new Python minimal requirement matches the current required level for
all scripts but one (*). The one that doesn't match is at
tools/net/sunrpc/xdrgen.

Those requirements correspond to a roughly 4-year-old toolchain, which sounds
like a reasonable period of time, as Python/Sphinx aren't required for the
kernel build.

Mauro Carvalho Chehab (6):
  scripts/get_abi.py: make it backward-compatible with Python 3.6
  docs: extensions: don't use utf-8 syntax for descriptions
  docs: automarkup: drop legacy support
  scripts/kernel-doc: drop Sphinx version check
  docs: changes: update Sphinx minimal version to 3.4.3
  doc: changes: update Python minimal version

 Documentation/conf.py   |   2 +-
 Documentation/process/changes.rst   |   4 +-
 Documentation/sphinx/automarkup.py  |  32 ++---
 Documentation/sphinx/cdomain.py |   7 +-
 Documentation/sphinx/kernel_abi.py  |   6 +-
 Documentation/sphinx/kernel_feat.py |   4 +-
 Documentation/sphinx/kernel_include.py  |   4 +-
 Documentation/sphinx/kerneldoc.py   |   5 -
 Documentation/sphinx/kfigure.py |  10 +-
 Documentation/sphinx/load_config.py |   2 +-
 Documentation/sphinx/maintainers_include.py |   4 +-
 Documentation/sphinx/rstFlatTable.py|  10 +-
 scripts/get_abi.py  |  16 ++-
 scripts/kernel-doc  | 129 +++-
 14 files changed, 58 insertions(+), 177 deletions(-)

-- 
2.48.1





Re: [PATCH bpf v9 0/5] bpf: fix wrong copied_seq calculation and add tests

2025-01-29 Thread patchwork-bot+netdevbpf
Hello:

This series was applied to bpf/bpf.git (master)
by Martin KaFai Lau :

On Wed, 22 Jan 2025 18:09:12 +0800 you wrote:
> A previous commit described in this topic
> http://lore.kernel.org/bpf/20230523025618.113937-9-john.fastab...@gmail.com
> directly updated 'sk->copied_seq' in the tcp_eat_skb() function when the
> action of a BPF program was SK_REDIRECT. For other actions, like SK_PASS,
> the update logic for 'sk->copied_seq' was moved to
> tcp_bpf_recvmsg_parser() to ensure the accuracy of the 'fionread' feature.
> 
> [...]

Here is the summary with links:
  - [bpf,v9,1/5] strparser: add read_sock callback
https://git.kernel.org/bpf/bpf/c/0532a79efd68
  - [bpf,v9,2/5] bpf: fix wrong copied_seq calculation
https://git.kernel.org/bpf/bpf/c/36b62df5683c
  - [bpf,v9,3/5] bpf: disable non stream socket for strparser
https://git.kernel.org/bpf/bpf/c/5459cce6bf49
  - [bpf,v9,4/5] selftests/bpf: fix invalid flag of recv()
https://git.kernel.org/bpf/bpf/c/a0c11149509a
  - [bpf,v9,5/5] selftests/bpf: add strparser test for bpf
https://git.kernel.org/bpf/bpf/c/6fcfe96e0f6e

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html





Re: [PATCH 5/6] tools: selftests/bpf: test_bpftool_synctypes: escape raw symbols

2025-01-29 Thread Quentin Monnet
2025-01-29 18:39 UTC+0100 ~ Mauro Carvalho Chehab

> Modern Python versions complain about usage of "\" inside normal
> strings, as they should use r-string notation.
> 
> Change the annotations there to avoid such warnings:
> 
> tools/testing/selftests/bpf/test_bpftool_synctypes.py:319: SyntaxWarning: invalid escape sequence '\w'
>   pattern = re.compile('([\w-]+) ?(?:\||}[ }\]"])')
> 
> Signed-off-by: Mauro Carvalho Chehab 

Hi, and thanks! But please note we have a fix for this in the bpf-next
tree already:

https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=c5d2bac978c513e1f22273cba9c55db3778032e5

Thanks,
Quentin



Re: [PATCH v4 0/9] mm: workingset reporting

2025-01-29 Thread SeongJae Park
Hi Yuanchu,

On Wed, 29 Jan 2025 18:02:26 -0800 Yuanchu Xie  wrote:

> On Wed, Dec 11, 2024 at 11:53 AM SeongJae Park  wrote:
> >
> > On Fri, 6 Dec 2024 11:57:55 -0800 Yuanchu Xie  wrote:
> >
> > > Thanks for the response Johannes. Some replies inline.
> > >
> > > On Tue, Nov 26, 2024 at 11:26 PM Johannes Weiner 
> > >  wrote:
> > > >
> > > > On Tue, Nov 26, 2024 at 06:57:19PM -0800, Yuanchu Xie wrote:
> > > > > This patch series provides workingset reporting of user pages in
> > > > > lruvecs, of which coldness can be tracked by accessed bits and fd
> > > > > references. However, the concept of workingset applies generically to
> > > > > all types of memory, which could be kernel slab caches, discardable
> > > > > userspace caches (databases), or CXL.mem. Therefore, data sources 
> > > > > might
> > > > > come from slab shrinkers, device drivers, or the userspace.
> > > > > Another interesting idea might be hugepage workingset, so that we can
> > > > > measure the proportion of hugepages backing cold memory. However, with
> > > > > architectures like arm, there may be too many hugepage sizes leading 
> > > > > to
> > > > > a combinatorial explosion when exporting stats to the userspace.
> > > > > Nonetheless, the kernel should provide a set of workingset interfaces
> > > > > that is generic enough to accommodate the various use cases, and 
> > > > > extensible
> > > > > to potential future use cases.
> > > >
> > > > Doesn't DAMON already provide this information?
> > > >
> > > > CCing SJ.
> > > Thanks for the CC. DAMON was really good at visualizing the memory
> > > access frequencies last time I tried it out!
> >
> > Thank you for this kind acknowledgement, Yuanchu!
> >
> > > For server use cases,
> > > DAMON would benefit from integrations with cgroups.  The key then would 
> > > be a
> > > standard interface for exporting a cgroup's working set to the user.
> >
> > I show two ways to make DAMON supports cgroups for now.  First way is making
> > another DAMON operations set implementation for cgroups.  I shared a rough 
> > idea
> > for this before, probably on kernel summit.  But I haven't had a chance to
> > prioritize this so far.  Please let me know if you need more details.  The
> > second way is extending DAMOS filter to provide more detailed statistics per
> > DAMON-region, and adding another DAMOS action that does nothing but only
> > accounting the detailed statistics.  Using the new DAMOS action, users will 
> > be
> > able to know how much of specific DAMON-found regions are filtered out by 
> > the
> > given filter.  Because we have DAMOS filter type for cgroups, we can know 
> > how
> > much of workingset (or, warm memory) belongs to specific groups.  This can 
> > be
> > applied to not only cgroups, but for any DAMOS filter types that exist 
> > (e.g.,
> > anonymous page, young page).
> >
> > I believe the second way is simpler to implement while providing information
> > that sufficient for most possible use cases.  I was anyway planning to do 
> > this.

I implemented the feature for the second approach I mentioned above.  The
initial version of the feature has recently been merged[1] into the mainline as
part of the 6.14-rc1 MM pull request.  The DAMON user-space tool (damo) has
also been updated for basic support of it.  I forgot to update this thread,
sorry.

> For a container orchestrator like kubernetes, the node agents need to
> be able to gather the working set stats at a per-job level. Some jobs
> can create sub-hierarchies as well, so it's important that we have
> hierarchical stats.

This makes sense to me.  And yes, I believe DAMOS filters for memcg could also
be used for this use case, since we can install and use multiple DAMOS filters
in combination.

The documentation of the feature is not that good yet, and there is plenty of
room for improvement.  You might not be able to get exactly what you want with
the current implementation.  But we will continue improving it, and I believe
we can move faster if efforts are combined.  Of course, I could be wrong, and
whether to use it or not is up to each person :)

Anyway, please feel free to ask me questions or any help about the feature if
you want.

> 
> Do you think it's a good idea to integrate DAMON to provide some
> aggregate stats in a memory controller file? With the DAMOS cgroup
> filter, there can be some kind of interface that a DAMOS action or the
> damo tool could call into. I feel that would be a straightforward and
> integrated way to support cgroups.

DAMON basically exposes its internal information via DAMON sysfs, and DAMON
user-space tool (damo) uses it.  In this case, per-memcg working set could also
be retrieved in the way (directly from DAMON sysfs or indirectly from damo).

But, yes, I think we could make new and optimized ABIs for exposing the
information to user-space in a more efficient way depending on the use case, if
needed.  DAMON modules such as DAMON_RECLAIM and DAMON_LRU_SORT provide their
own ABIs that 

Re: [PATCH v4 0/9] mm: workingset reporting

2025-01-29 Thread Yuanchu Xie
On Wed, Dec 11, 2024 at 11:53 AM SeongJae Park  wrote:
>
> On Fri, 6 Dec 2024 11:57:55 -0800 Yuanchu Xie  wrote:
>
> > Thanks for the response Johannes. Some replies inline.
> >
> > On Tue, Nov 26, 2024 at 11:26 PM Johannes Weiner 
> > wrote:
> > >
> > > On Tue, Nov 26, 2024 at 06:57:19PM -0800, Yuanchu Xie wrote:
> > > > This patch series provides workingset reporting of user pages in
> > > > lruvecs, of which coldness can be tracked by accessed bits and fd
> > > > references. However, the concept of workingset applies generically to
> > > > all types of memory, which could be kernel slab caches, discardable
> > > > userspace caches (databases), or CXL.mem. Therefore, data sources might
> > > > come from slab shrinkers, device drivers, or the userspace.
> > > > Another interesting idea might be hugepage workingset, so that we can
> > > > measure the proportion of hugepages backing cold memory. However, with
> > > > architectures like arm, there may be too many hugepage sizes leading to
> > > > a combinatorial explosion when exporting stats to the userspace.
> > > > Nonetheless, the kernel should provide a set of workingset interfaces
> > > > that is generic enough to accommodate the various use cases, and 
> > > > extensible
> > > > to potential future use cases.
> > >
> > > Doesn't DAMON already provide this information?
> > >
> > > CCing SJ.
> > Thanks for the CC. DAMON was really good at visualizing the memory
> > access frequencies last time I tried it out!
>
> Thank you for this kind acknowledgement, Yuanchu!
>
> > For server use cases,
> > DAMON would benefit from integrations with cgroups.  The key then would be a
> > standard interface for exporting a cgroup's working set to the user.
>
> I show two ways to make DAMON supports cgroups for now.  First way is making
> another DAMON operations set implementation for cgroups.  I shared a rough 
> idea
> for this before, probably on kernel summit.  But I haven't had a chance to
> prioritize this so far.  Please let me know if you need more details.  The
> second way is extending DAMOS filter to provide more detailed statistics per
> DAMON-region, and adding another DAMOS action that does nothing but only
> accounting the detailed statistics.  Using the new DAMOS action, users will be
> able to know how much of specific DAMON-found regions are filtered out by the
> given filter.  Because we have DAMOS filter type for cgroups, we can know how
> much of workingset (or, warm memory) belongs to specific groups.  This can be
> applied to not only cgroups, but for any DAMOS filter types that exist (e.g.,
> anonymous page, young page).
>
> I believe the second way is simpler to implement while providing information
> that sufficient for most possible use cases.  I was anyway planning to do 
> this.
For a container orchestrator like kubernetes, the node agents need to
be able to gather the working set stats at a per-job level. Some jobs
can create sub-hierarchies as well, so it's important that we have
hierarchical stats.

Do you think it's a good idea to integrate DAMON to provide some
aggregate stats in a memory controller file? With the DAMOS cgroup
filter, there can be some kind of interface that a DAMOS action or the
damo tool could call into. I feel that would be a straightforward and
integrated way to support cgroups.

Yuanchu



[PATCH 5/6] tools: selftests/bpf: test_bpftool_synctypes: escape raw symbols

2025-01-29 Thread Mauro Carvalho Chehab
Modern Python versions complain about the use of "\" escape sequences inside
normal strings; such patterns should use r-string notation instead.

Change the annotations there to avoid such warnings:

tools/testing/selftests/bpf/test_bpftool_synctypes.py:319: SyntaxWarning: invalid escape sequence '\w'
  pattern = re.compile('([\w-]+) ?(?:\||}[ }\]"])')

Signed-off-by: Mauro Carvalho Chehab 
---
 .../selftests/bpf/test_bpftool_synctypes.py   | 30 +--
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_bpftool_synctypes.py b/tools/testing/selftests/bpf/test_bpftool_synctypes.py
index 0ed67b6b31dd..81f286991012 100755
--- a/tools/testing/selftests/bpf/test_bpftool_synctypes.py
+++ b/tools/testing/selftests/bpf/test_bpftool_synctypes.py
@@ -66,7 +66,7 @@ class ArrayParser(BlockParser):
 
 def __init__(self, reader, array_name):
 self.array_name = array_name
-self.start_marker = re.compile(f'(static )?const bool {self.array_name}\[.*\] = {{\n')
+self.start_marker = re.compile(fr'(static )?const bool {self.array_name}\[.*\] = {{\n')
 super().__init__(reader)
 
 def search_block(self):
@@ -80,7 +80,7 @@ class ArrayParser(BlockParser):
 Parse a block and return data as a dictionary. Items to extract must be
 on separate lines in the file.
 """
-pattern = re.compile('\[(BPF_\w*)\]\s*= (true|false),?$')
+pattern = re.compile(r'\[(BPF_\w*)\]\s*= (true|false),?$')
 entries = set()
 while True:
 line = self.reader.readline()
@@ -177,9 +177,9 @@ class FileExtractor(object):
 
 @enum_name: name of the enum to parse
 """
-start_marker = re.compile(f'enum {enum_name} {{\n')
-pattern = re.compile('^\s*(BPF_\w+),?(\s+/\*.*\*/)?$')
-end_marker = re.compile('^};')
+start_marker = re.compile(fr'enum {enum_name} {{\n')
+pattern = re.compile(r'^\s*(BPF_\w+),?(\s+/\*.*\*/)?$')
+end_marker = re.compile(r'^};')
 parser = BlockParser(self.reader)
 parser.search_block(start_marker)
 return parser.parse(pattern, end_marker)
@@ -226,8 +226,8 @@ class FileExtractor(object):
 
 @block_name: name of the blog to parse, 'TYPE' in the example
 """
-start_marker = re.compile(f'\*{block_name}\* := {{')
-pattern = re.compile('\*\*([\w/-]+)\*\*')
+start_marker = re.compile(fr'\*{block_name}\* := {{')
+pattern = re.compile(r'\*\*([\w/-]+)\*\*')
 end_marker = re.compile('}\n')
 return self.__get_description_list(start_marker, pattern, end_marker)
 
@@ -245,8 +245,8 @@ class FileExtractor(object):
 
 @block_name: name of the blog to parse, 'TYPE' in the example
 """
-start_marker = re.compile(f'"\s*{block_name} := {{')
-pattern = re.compile('([\w/]+) [|}]')
+start_marker = re.compile(fr'"\s*{block_name} := {{')
+pattern = re.compile(r'([\w/]+) [|}]')
 end_marker = re.compile('}')
 return self.__get_description_list(start_marker, pattern, end_marker)
 
@@ -264,8 +264,8 @@ class FileExtractor(object):
 
 @macro: macro starting the block, 'HELP_SPEC_OPTIONS' in the example
 """
-start_marker = re.compile(f'"\s*{macro}\s*" [|}}]')
-pattern = re.compile('([\w-]+) ?(?:\||}[ }\]])')
+start_marker = re.compile(fr'"\s*{macro}\s*" [|}}]')
+pattern = re.compile(r'([\w-]+) ?(?:\||}[ }\]])')
 end_marker = re.compile('}n')
 return self.__get_description_list(start_marker, pattern, end_marker)
 
@@ -284,7 +284,7 @@ class FileExtractor(object):
 @block_name: name of the blog to parse, 'TYPE' in the example
 """
 start_marker = re.compile(f'local {block_name}=\'')
-pattern = re.compile('(?:.*=\')?([\w/]+)')
+pattern = re.compile(r'(?:.*=\')?([\w/]+)')
 end_marker = re.compile('\'$')
 return self.__get_description_list(start_marker, pattern, end_marker)
 
@@ -316,7 +316,7 @@ class MainHeaderFileExtractor(SourceFileExtractor):
 {'-p', '-d', '--pretty', '--debug', '--json', '-j'}
 """
 start_marker = re.compile(f'"OPTIONS :=')
-pattern = re.compile('([\w-]+) ?(?:\||}[ }\]"])')
+pattern = re.compile(r'([\w-]+) ?(?:\||}[ }\]"])')
 end_marker = re.compile('#define')
 
 parser = InlineListParser(self.reader)
@@ -338,8 +338,8 @@ class ManSubstitutionsExtractor(SourceFileExtractor):
 
 {'-p', '-d', '--pretty', '--debug', '--json', '-j'}
 """
-start_marker = re.compile('\|COMMON_OPTIONS\| replace:: {')
-pattern = re.compile('\*\*([\w/-]+)\*\*')
+start_marker = re.compile(r'\|COMMON_OPTIONS\| replace:: {')
+pattern = re.compile(r'\*\*([\w/-]+)\*\*')
 end_marker = re.compile('}$')
 
 parser = InlineListParser(self.reader)
-- 
2.48.1




[PATCH 0/6] Address some issues related to Python version

2025-01-29 Thread Mauro Carvalho Chehab
This series removes compatibility with Python 2.x from scripts that carry
backward-compatibility logic. The rationale is that, since
commit 627395716cc3 ("docs: document python version used for compilation"),
the minimal Python version has been 3.x. Also, Python 2.x has been EOL since
January 2020.

Patch 1: fixes a script that was compatible only with Python 2.x;
Patches 2-4: remove backward-compat code;
Patches 5-6: solve forward compatibility with modern Python, which warns
 about regex escape sequences in plain strings that should use r-string
 notation.

Mauro Carvalho Chehab (6):
  docs: trace: decode_msr.py: make it compatible with python 3
  tools: perf: exported-sql-viewer: drop support for Python 2
  tools: perf: tools: perf: exported-sql-viewer: drop support for Python
2
  tools: perf: task-analyzer: drop support for Python 2
  tools: selftests/bpf: test_bpftool_synctypes: escape raw symbols
  comedi: convert_csv_to_c.py: use r-string for a regex expression

 Documentation/trace/postprocess/decode_msr.py |  2 +-
 .../ni_routing/tools/convert_csv_to_c.py  |  2 +-
 .../scripts/python/exported-sql-viewer.py |  5 ++--
 tools/perf/scripts/python/task-analyzer.py| 23 --
 tools/perf/tests/shell/lib/attr.py|  6 +---
 .../selftests/bpf/test_bpftool_synctypes.py   | 30 +--
 6 files changed, 25 insertions(+), 43 deletions(-)

-- 
2.48.1