Hi all. I have enabled per-patchseries Anthropic AI code review reports (sent to the test-report mailing list, available on patchwork) in the UNH CI system, so you will start seeing these come in for future patchseries which meet the following conditions:
1. The series applied cleanly to DPDK.
2. The build test succeeded.
3. The dpdk-test fast tests ran, and no more than 3 tests failed.
4. The series does not modify AGENTS.md (protection against malicious prompt modification in CI).

The reports look like this: https://mails.dpdk.org/archives/test-report/2026-March/973793.html

A PASS status means the AI code review was able to run and report without issue. A WARN status means the AI code review could not run because it failed one of the gating conditions explained above.

Aaron Conole came over to UNH yesterday, and we went over some additional features for the automated reviews which the UNH team is now working on adding. These include:

1. When a gating condition fails and we report a WARN with no report, indicate in the email which gating condition prevented the review from running.
2. Instead of emailing one AI review report for the entire series (sent to the last patch in the series on patchwork), parse the AI review into smaller chunks (one per patch) and send an individual AI review email to each patch, containing just the comments for that patch.
3. Allow maintainers to override a WARN AI code review (i.e. force the code review to run on a series which was skipped) by sending an override string to the mailing list as a reply to the respective patch.
4. Migrate to using the upstream AGENTS.md (to be merged into DPDK in the future). Right now we are using our own out-of-tree AGENTS.md (which Aaron Conole worked on) until DPDK has its AGENTS.md merged.
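For anyone curious how the gating decision works, here is a minimal sketch of the logic in Python. This is illustrative only, not the actual UNH CI code: the function name, the dict fields, and the reason strings are all assumptions; only the four conditions and the 3-failure threshold come from the description above.

```python
# Hypothetical sketch of the AI-review gating conditions described above.
# Field names and the function itself are illustrative, not the real CI code.

MAX_FAILED_FAST_TESTS = 3  # threshold from the announcement


def should_run_ai_review(result):
    """Return (ok, reason) given a dict summarizing earlier CI stages.

    ok is True when the AI review may run (PASS path); False means the
    series is skipped and a WARN status is reported instead.
    """
    if not result["applied_cleanly"]:
        return False, "series did not apply cleanly to DPDK"
    if not result["build_ok"]:
        return False, "build test failed"
    if result["failed_fast_tests"] > MAX_FAILED_FAST_TESTS:
        return False, "more than 3 dpdk-test fast tests failed"
    if result["modifies_agents_md"]:
        return False, "series modifies AGENTS.md (possible prompt tampering)"
    return True, "all gating conditions met"


# Example: a series with 5 fast-test failures is gated out (WARN, no report)
ok, reason = should_run_ai_review({
    "applied_cleanly": True,
    "build_ok": True,
    "failed_fast_tests": 5,
    "modifies_agents_md": False,
})
```

Returning a reason string alongside the boolean is also what planned feature 1 above would need: the WARN email can then name the exact condition that blocked the review.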

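Planned feature 2 (splitting the series-level review into per-patch emails) could be sketched roughly as below. The "### Patch N:" delimiter is purely an assumption for illustration; the real report layout and the eventual splitting code may look quite different.

```python
# Hypothetical sketch of splitting one series-level AI review into per-patch
# chunks. The "### Patch N:" heading format is an assumed report layout,
# not the actual format of the UNH CI reports.
import re


def split_review_by_patch(review_text):
    """Return a dict mapping patch number -> review comments for that patch."""
    chunks = {}
    current = None
    for line in review_text.splitlines():
        m = re.match(r"### Patch (\d+):", line)
        if m:
            # Start collecting lines for the next patch in the series
            current = int(m.group(1))
            chunks[current] = []
        elif current is not None:
            chunks[current].append(line)
    return {n: "\n".join(lines).strip() for n, lines in chunks.items()}
```

Each resulting chunk would then be sent as a reply to the corresponding patch on patchwork, rather than attaching the whole review to the last patch in the series.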
