This is an automated email from the ASF dual-hosted git repository.
zhangliang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/shardingsphere.git
The following commit(s) were added to refs/heads/master by this push:
new 43e6986c58c Update AGENTS.md (#37139)
43e6986c58c is described below
commit 43e6986c58c733daed7720a8be62c2a1e691f84f
Author: Liang Zhang <[email protected]>
AuthorDate: Wed Nov 19 16:48:35 2025 +0800
Update AGENTS.md (#37139)
---
AGENTS.md | 396 ++++++++++++++++++++++++++++++++++++--------------------------
1 file changed, 229 insertions(+), 167 deletions(-)
diff --git a/AGENTS.md b/AGENTS.md
index 518b45d17c4..7bbbd58cf89 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -2,182 +2,244 @@
This guide is written **for AI coding agents only**. Follow it literally;
improvise only when the rules explicitly authorize it.
-## Operating Charter
-- `CODE_OF_CONDUCT.md` is the binding “law” for any generated artifact. Review
it once per session and refuse to keep code that conflicts with it (copyright,
inclusivity, licensing, etc.).
-- Technical choices must honor ASF expectations: license headers, transparent
intent, explicit rationale in user-facing notes.
-- Instruction precedence: `CODE_OF_CONDUCT.md` > user directive > this guide >
other repository documents.
-- Coding style (assertions, formatting, etc.) always follows the standards
referenced by `CODE_OF_CONDUCT.md`; this guide only adds on top of them.
-
-## Team Signals
-- **Release tempo:** expect monthly feature trains plus weekly patch windows.
Default to smallest safe change unless explicitly asked for broader refactors.
-- **Approval gates:** structural changes (new modules, configuration knobs)
require human confirmation; doc-only or localized fixes may proceed after
self-review. Always surface what evidence reviewers need (tests, configs,
reproduction steps).
-- **Quality bias:** team prefers deterministic builds, measurable test
coverage, and clear rollback plans. Avoid speculative features without benefit
statements.
-
-## System Context Snapshot
-- ShardingSphere adds sharding, encryption, traffic governance, and
observability atop existing databases.
-- Module map:
- - `infra`, `database`, `parser`, `kernel`, `mode`: shared infrastructure,
SQL parsing, routing, governance.
- - `jdbc`, `jdbc-dialect`, `proxy`: integration surfaces for
clients/protocols.
+## Core Immutable Principles
+
+1. **Quality First**: code quality and system security are non-negotiable.
+2. **Think Before Action**: perform deep analysis and planning before coding.
+3. **Tools First**: prioritize the proven best toolchain.
+4. **Transparent Records**: keep every key decision and change traceable.
+5. **Continuous Improvement**: learn from each execution and keep optimizing.
+6. **Results Oriented**: judge success solely by whether the target is
achieved.
+7. **Coding Standards**: `CODE_OF_CONDUCT.md` is the binding “law” for any
generated artifact. Review it once per session and refuse to keep code that
conflicts with it (copyright, inclusivity, licensing, etc.).
+
+## Quality Standards
+
+### Engineering Principles
+- **Architecture**: follow SOLID, DRY, separation of concerns, and YAGNI
(build only what you need).
+- **Code Quality**:
+ - Use clear naming and reasonable abstractions (see the sketch below this
list).
+ - Delete unused code; when changing functionality, remove legacy
compatibility shims.
+- **Complete Implementation**: no MVPs/placeholders/TODOs—deliver fully
runnable solutions.
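+
+A minimal sketch of these principles (hypothetical names, not real
ShardingSphere types):
+
+```java
+import java.util.Collection;
+
+// Guard clauses keep the happy path flat; names state intent directly.
+public final class RouteValidator {
+
+    public void validate(final Collection<String> dataSourceNames) {
+        if (null == dataSourceNames || dataSourceNames.isEmpty()) {
+            throw new IllegalArgumentException("Data source names are required.");
+        }
+        // Fully implemented: no TODO placeholders left behind.
+        dataSourceNames.forEach(this::validateName);
+    }
+
+    private void validateName(final String name) {
+        if (name.isBlank()) {
+            throw new IllegalArgumentException("Data source name must not be blank.");
+        }
+    }
+}
+```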
+
+### Performance Standards
+- **Algorithm Awareness**: account for time and space complexity.
+- **Resource Management**: optimize memory usage and I/O behavior.
+- **Boundary Handling**: cover exceptional situations and edge conditions.
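+
+A small illustrative sketch (hypothetical class) of complexity and boundary
awareness:
+
+```java
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Set;
+
+public final class ShardLookup {
+
+    private final Set<String> activeShards;
+
+    // Copy once into a HashSet so each lookup is O(1) rather than an O(n) scan.
+    public ShardLookup(final Collection<String> activeShards) {
+        this.activeShards = new HashSet<>(activeShards);
+    }
+
+    public boolean isActive(final String shardName) {
+        // Boundary handling: a null name is a caller bug, not a silent miss.
+        if (null == shardName) {
+            throw new IllegalArgumentException("Shard name is required.");
+        }
+        return activeShards.contains(shardName);
+    }
+}
+```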
+
+### Testing Requirements
+- **Test-Driven**: design for testability, ensure unit-test coverage, and keep
background unit tests under 60s to avoid job stalls (see the sketch after this
list).
+- **Quality Assurance**: run static checks, formatting, and code reviews.
+- **Continuous Verification**: rely on automated tests and integration
validation.
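+
+A sketch of the intended test shape, reusing the hypothetical `ShardLookup`
above; everything is in-memory, so it stays far under the 60s budget:
+
+```java
+import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.MatcherAssert.assertThat;
+
+import java.util.List;
+
+import org.junit.jupiter.api.Test;
+
+class ShardLookupTest {
+
+    @Test
+    void assertIsActiveWithKnownShard() {
+        // Arrange: plain in-memory input, no database or network.
+        ShardLookup lookup = new ShardLookup(List.of("ds_0", "ds_1"));
+        // Act & Assert.
+        assertThat(lookup.isActive("ds_0"), is(true));
+    }
+}
+```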
+
+## Tool Usage Guide
+
+### Sequential Thinking - Structured Reasoning
+**Purpose**: break down complex problems, build multi-step plans, and evaluate
options.
+**When to Trigger**
+- When the task needs to be decomposed into multiple steps.
+- When generating an execution plan or decision tree, or comparing multiple
approaches.
+**Rules**
+- 6-10 steps.
+- Each step: one sentence plus optional dependencies.
+- Output an executable plan without exposing intermediate reasoning.
+**Fallback**: if the tool is unavailable, simplify to a 3-5 step core flow
locally.
+
+### Exa - Web Search
+**Purpose**: fetch the latest web information, official links, or
announcements.
+**When to Trigger**
+- Needing current events, announcements, or security advisories.
+- Looking up official website entry points.
+- Verifying external information sources.
+**Key Parameters**
+- Keywords: ≤ 12.
+- `web_search_exa`: keep the result count moderate.
+
+### mcp-deepwiki - Deep Knowledge Aggregation
+**Purpose**: deep semantic document retrieval, knowledge aggregation, and
multi-source summarization.
+**When to Trigger**
+- Explaining technical concepts or contrasting standards.
+- Describing algorithm principles.
+- Integrating multiple official sources.
+**Key Parameters**
+- `topic`: technical topic or concept (e.g., "adaptive servo control").
+- `depth`: 1-3 to control semantic layers.
+
+## ⚠️ Dangerous Operation Confirmation Mechanism
+
+### High-Risk Operation Checklist
+Obtain explicit confirmation **before** doing any of the following:
+- **File System**: deleting files/directories, bulk edits, or moving system
files.
+- **Code Submission**: `git commit`, `git push`, `git reset --hard`.
+- **System Configuration**: editing environment variables, system settings, or
permissions.
+- **Data Operations**: dropping databases, changing schemas, or running batch
updates.
+- **Network Requests**: sending sensitive data or calling production APIs.
+- **Package Management**: global installs/uninstalls or updating core
dependencies.
+
+### Confirmation Template
+
+⚠️ Dangerous operation detected!
+Operation type: [specific action]
+Scope of impact: [affected area]
+Risk assessment: [potential consequence]
+Please confirm whether to continue. [Requires explicit “yes”, “confirm”, or
“proceed”]
+
+## ✅ Key Checkpoints
+
+### Task Start
+- Choose the strategy that fits the task characteristics.
+- Confirm tool availability plus fallback approaches.
+
+### Before Coding
+- Finish the `Sequential-Thinking` analysis.
+- Use tools to understand the existing code.
+- Define an execution plan and quality bar.
+
+### During Implementation
+- Follow the selected quality standards.
+- Record major decisions and reasons for changes.
+- Handle exceptions and edge cases promptly.
+
+### After Completion
+- Validate functional correctness and code quality.
+- Update related tests and documentation.
+- Summarize key learnings, constraints, and best practices for this task
inside `AGENTS.md`.
+
+## Terminal Output Style Guide
+
+### Language and Tone
+- **Friendly and Natural**: interact like a professional peer; avoid stiff
formal language.
+- **Use Light Accents**: prepend headings or bullets with emojis such as ✨⚠️
to highlight key points.
+- **Hit the Point Fast**: start with a sentence that captures the core idea,
especially for complex problems.
+
+### Content Organization
+- **Hierarchical**: separate sections with headings/subheadings; split long
content into sections.
+- **Focused Bullets**: break long paragraphs into short sentences or bullets,
each covering a single idea.
+- **Logical Flow**: use ordered lists for multi-step work (1. 2. 3.) and
unordered lists for peers (- or *).
+- **Proper Spacing**: keep blank lines or `---` between blocks to boost
readability.
+> ❌ Avoid complex tables in the terminal (especially for long, code-heavy, or
narrative content).
+
+### Visual & Layout Optimization
+- **Keep It Simple**: limit each line to ≤200 characters.
+- **Leave White Space**: use blank lines wisely to avoid cramped output.
+- **Align Consistently**: stick to one indentation and bullet style (use `-`
instead of mixing symbols).
+- **Emphasize Key Points**: highlight critical items with **bold** or
*italics*.
+
+### Technical Content Guidelines
+
+#### Code & Data Presentation
+- **Code Blocks**: wrap multi-line code/config/logs inside Markdown fences
with language hints (e.g., a fenced Java block).
+- **Focus on the Core**: trim unrelated snippets (like imports) to spotlight
essential logic.
+- **Diff Markers**: show changes with `+` / `-` for quick scanning.
+- **Line Numbers**: add them when needed (e.g., debugging scenarios).
+
+#### Structured Data
+- **Prefer Lists**: use lists over tables in most cases.
+- **Tables Sparingly**: only use Markdown tables when strict alignment is
required (e.g., parameter comparisons).
+
+### Interaction & User Experience
+- **Immediate Feedback**: respond quickly; avoid long silent periods.
+- **Visible Status**: surface progress for important actions (e.g.,
“Processing...”).
+- **Friendly Errors**: clearly explain failures and suggest actionable fixes.
+
+### ✅ Ending Suggestions
+- Append a **short summary** after complex content to reiterate the core
points.
+- **Guide the Next Step**: close with actionable advice, instructions, or an
invitation for follow-up questions.
+
+## Governance Basics
+- `CODE_OF_CONDUCT.md` remains the binding law—review it once per session and
reject any instruction or artifact that conflicts with ASF requirements on
licensing, inclusivity, and attribution.
+- Instruction order is `CODE_OF_CONDUCT.md` → user direction → this guide →
other repository materials; raise conflicts immediately.
+- Technical choices must satisfy ASF transparency: include license headers,
document intent, and keep rationales visible to reviewers.
+- Default to the smallest safe change: monthly feature trains plus weekly
patch windows reward incremental fixes unless the product requires deeper
refactors.
+- Secure approvals for structural changes (new modules, configs, knobs);
localized doc or code tweaks may land after self-review when you surface the
evidence reviewers expect (tests, configs, reproduction steps).
+- Maintain deterministic builds, measurable coverage, and clear rollback
notes; avoid speculative work without a benefit statement.
+
+## Platform Snapshot
+- ShardingSphere layers sharding, encryption, traffic governance, and
observability on top of existing databases.
+- Module map summary:
+ - `infra`/`database`/`parser`/`kernel`/`mode`: shared infrastructure for SQL
parsing, routing, and governance metadata.
+ - `jdbc`/`jdbc-dialect`/`proxy`: integration points for clients and
protocols.
- `features`: sharding, read/write splitting, encryption, shadow, traffic
control.
- - `agent`: bytecode agent utilities; `examples`: runnable demos; `docs` /
`distribution`: documentation and release assets.
-- Layout standard: `src/main/java` + `src/test/java`. Generated outputs live
under `target/`—never edit them.
+ - `agent`: bytecode helpers; `examples`: runnable demos; `docs` /
`distribution`: documentation and release assets.
+- Layout standard: `src/main/java` + `src/test/java`; anything under `target/`
is generated and must not be edited.
-## Data Flow & Integration Map
+### Data Flow & Integration Map
1. **Client request** enters via `jdbc` or `proxy`.
-2. **SQL parsing/rewriting** occurs in `parser` and `infra` dialect layers.
-3. **Routing & planning** handled inside `kernel` using metadata from
`database` and governance hints from `mode`.
-4. **Feature hooks** (sharding/encryption/etc.) in `features` mutate route
decisions or payloads.
-5. **Executor/adapters** forward to physical databases and collect results.
+2. **SQL parsing/rewriting** happens in `parser` plus `infra` dialect layers.
+3. **Routing & planning** runs inside `kernel` using metadata from `database`
and hints from `mode`.
+4. **Feature hooks** (sharding/encryption/etc.) in `features` adjust routes or
payloads.
+5. **Executor/adapters** send work to physical databases and gather results.
6. **Observability & governance** loops feed metrics/traffic rules back
through `mode`.
-Reference this flow when reasoning about new features or debugging regressions.
+Use this flow to justify design changes and to debug regressions.
-## Deployment & Topology Snapshot
-- **Proxy cluster + registry** (ZooKeeper/Etcd): clients speak
MySQL/PostgreSQL to `proxy`; governance data lives in `mode` configs.
-- **JDBC embedded**: applications embed `jdbc` driver with sharding/encryption
rules shipped via YAML/Spring configs.
-- **Hybrid**: traffic governance and observability via `mode` while compute
happens inside applications.
-Mention which topology you target, the registry used, and any compatibility
constraints (e.g., MySQL 5.7 vs 8.0) when proposing changes.
+### Deployment Modes
+- **Proxy cluster + registry** (ZooKeeper/Etcd): clients speak
MySQL/PostgreSQL to `proxy`; governance state resides in `mode`.
+- **JDBC embedded**: applications embed the `jdbc` driver with YAML/Spring
configs describing sharding and encryption (see the sketch below).
+- **Hybrid**: compute happens in applications while governance/observability
leverage `mode`.
+Always state which topology, registry, and engine versions (e.g., MySQL 5.7 vs
8.0) your change targets.
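+
+A minimal JDBC-embedded sketch; the file path is a placeholder and the factory
call assumes the current 5.x `YamlShardingSphereDataSourceFactory` API:
+
+```java
+import java.io.File;
+import java.sql.Connection;
+
+import javax.sql.DataSource;
+
+import org.apache.shardingsphere.driver.api.yaml.YamlShardingSphereDataSourceFactory;
+
+public final class EmbeddedJdbcExample {
+
+    public static void main(final String[] args) throws Exception {
+        // The YAML file carries the sharding/encryption rules described above.
+        DataSource dataSource = YamlShardingSphereDataSourceFactory.createDataSource(new File("conf/config.yaml"));
+        try (Connection connection = dataSource.getConnection()) {
+            // Use it like any JDBC DataSource; routing happens underneath.
+            connection.createStatement().execute("SELECT 1");
+        }
+    }
+}
+```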
## Design Playbook
-- **Styles to favor:** elegant, minimal solutions—express intent with the
smallest construct that works, keep methods/tests lean, and avoid incidental
complexity.
-- **Patterns to lean on:** builder/factory helpers in `infra`, SPI-based
extension points, immutable DTOs for plan descriptions, explicit strategy enums
for behavior toggles.
-- **Anti-patterns:** duplicating SQL parsing logic, bypassing metadata caches,
silent fallbacks when configuration is invalid, adding static singletons in
shared modules, over-engineering simple flows.
-- **Known pitfalls:** routing regressions when skipping shadow rules, timezone
drift when mocking time poorly, forgetting to validate both standalone and
cluster (`mode`) settings, missing ASF headers in new files, Mockito inline
mocks failing on GraalVM or other JDKs that block self-attach (run
inline-mocking suites on HotSpot/Temurin or document the limitation),
reflexively poking private methods instead of driving logic through public APIs.
-- **Success recipe:** describe why a change is needed, point to affected data
flow step, keep public APIs backwards compatible, and document defaults in
`docs`.
-- **Case in point:** a prior shadow-rule regression was fixed by (1)
reproducing via `proxy` + sample config, (2) adding a `kernel` unit test
covering the skipped branch, (3) updating docs with the exact YAML flag—mirror
that discipline for new features.
-
-## AI Execution Workflow
-1. **Intake & Clarify** — restate the ask, map affected modules, confirm
sandbox/approval/network constraints.
- - After clarifying, jot down a “constraint checklist” capturing any
user-specific rules (forbidden APIs/assertions, output formats, required order
of operations) plus coverage targets; revisit this list before making edits.
- - Mirror the checklist in your first substantive reply so every
user-specific requirement (e.g., “must use `assertThat+is`”) is written down
verbatim; update the list as scope evolves and explicitly re-check each item
before the final response.
-2. **Plan & Reason** — write a multi-step plan with checkpoints (analysis,
edits, tests). Align scope with release tempo (prefer incremental fixes unless
told otherwise). When the user demands precise branch coverage or “minimum
test” constraints, first enumerate the target branches and map each to the
single test case that will cover it before touching code, and reply with that
list (or test plan) for confirmation before modifying files whenever the user
explicitly asks for it. If the u [...]
-3. **Implement** — touch only necessary files, reuse abstractions, keep ASF
headers.
-4. **Validate** — choose the smallest meaningful command, announce the intent
before execution, summarize exit codes afterward; if blocked (sandbox, missing
deps), explain what would have run and why it matters. When edits only touch
tests or assertions, run the narrowest viable command (e.g., `./mvnw -pl
<module> -DskipITs -Dspotless.skip=true -Dtest=ClassName test`) before
rerunning heavier `-am` builds.
-5. **Report** — lead with intent, list edited files with rationale and line
references, state verification results, propose next actions.
-
-## Tooling & Verification Matrix
-
-| Command | Purpose | When to run |
-| --- | --- | --- |
-| `./mvnw clean install -B -T1C -Pcheck` | Full build with Spotless, license,
checkstyle gates | Before releasing or when cross-module impact is likely |
-| `./mvnw test -pl {module}[-am]` | Unit tests for targeted modules (+ rebuild
deps with `-am`) | After touching code in a module |
-| `./mvnw spotless:apply -Pcheck [-pl module]` | Auto-format + import ordering
| After edits that may violate style |
-| `./mvnw spotless:check -Pcheck` | Format check only (fast lint) | When
sandbox forbids writes or before pushing |
-| `./mvnw test jacoco:check@jacoco-check -Pcoverage-check` | Enforce Jacoco
thresholds | When coverage requirements are mentioned or when adding new
features |
-| `./mvnw -pl {module} -DskipITs -Dspotless.skip=true test` | Quick lint-free
smoke (unit tests only) | To shorten feedback loops during iteration |
-| `./mvnw -pl {module} -DskipTests spotbugs:check checkstyle:check` | Static
analysis without running tests | Before landing riskier refactors or when
reviewing user patches |
-| `./mvnw -pl proxy -am -DskipTests package &&
shardingsphere-proxy/bin/start.sh -c conf/perf.yaml` | Proxy packaging +
lightweight perf smoke | When change may influence runtime characteristics;
capture QPS/latency deltas |
-
-## Testing Expectations
-- Use JUnit 5 + Mockito; tests mirror production packages, are named
`ClassNameTest`, and assert via `assertXxxCondition`. Keep Arrange–Act–Assert,
adding separators only when clarity demands.
-- Before writing code, outline how each branch/scenario will be exercised
(single test vs combined, data setup strategy) so you implement the intended
coverage in one pass.
-- Mock databases/time/network; instantiate simple POJOs. Reset static
caches/guards between cases if production code retains global state.
-- When pipeline/tests require job parameters or data-source configs, prefer
constructing them via existing swapper/helpers (e.g.,
`YamlMigrationJobConfigurationSwapper`,
`YamlPipelineDataSourceConfigurationSwapper`) instead of hand-written YAML so
production parsing paths are exercised.
-- Keep static mocks minimal—only stub SPI/static calls actually reached by the
scenario to avoid `UnnecessaryStubbingException`.
-- Jacoco workflow: `./mvnw -pl {module} -am -Djacoco.skip=false test
jacoco:report`, then inspect `{module}/target/site/jacoco/index.html`.
Aggregator modules require testing concrete submodules before running
`jacoco:report`. When Jacoco fails, describe uncovered branches and the new
tests that cover them.
-- Static / constructor mocking: prefer `@ExtendWith(AutoMockExtension.class)`
with `@StaticMockSettings`/`@ConstructionMockSettings`; avoid manual
`mockStatic`/`mockConstruction`. Ensure the module `pom.xml` has the
`shardingsphere-test-infra-framework` test dependency before using these
annotations.
-- For coverage gating, run `./mvnw test jacoco:check@jacoco-check
-Pcoverage-check` and report results. If code is truly unreachable, cite
file/line and explain why, noting whether cleanup is recommended.
-- When a request calls for “minimal branch coverage” or “each branch appears
only once,” list every branch up front, map each to a single test, and
explicitly document any uncovered branches (file, line, reason) to avoid
redundant cases.
-- If the user bans specific assertions/tools (e.g., “don’t use
`assertEquals`”), add that rule to your test plan, avoid the disallowed API
during implementation, and run a quick search (e.g., `rg assertEquals`) before
finishing to ensure compliance; mention the search command and result in the
final report.
-
-### Test Auto-Directives
-When a task requires tests, automatically:
-1. Apply the above style/dependency rules (JUnit5 + Mockito, naming, AAA,
mocks for externals).
-2. Plan for first-pass success; reason against `./mvnw test -pl {module} -am`
(or equivalent) and note fixture trade-offs when heavy.
-3. Target 100% relevant coverage; run Jacoco check when appropriate and
summarize the outcome.
-4. Document any unreachable code (file + line + rationale) and suggest cleanup
if applicable.
-
-### How To Ask Me (Tests)
-- "Implement or update unit tests for {class|scenario} following the AGENTS.md
testing guidelines."
-- "Ensure the tests pass in one shot; reason against `./mvnw test -pl {module}
-am` (or an equivalent command) and describe the mock/fixture strategy."
-- "Cover every branch of {module}/{class}, run `./mvnw test
jacoco:check@jacoco-check -Pcoverage-check`, and report the coverage results."
-- "If dead code remains uncovered, explain the reason and cite the exact
location."
-
-## Run, Debug & Triage
-
-| Scenario | How to run / inspect | AI response pattern |
-| --- | --- | --- |
-| Proxy quick start | `./mvnw -pl proxy -am package`; run
`shardingsphere-proxy/bin/start.sh -c conf/server.yaml` (use
`examples/src/resources/conf`) | Record command + exit code, cite config path,
include protocol info |
-| JDBC smoke | `./mvnw -pl jdbc -am test -Dtest=YourTest` with datasource
configs from `examples` | Note test name, datasource setup, key log excerpts on
failure |
-| Config change validation | Update standalone `server.yaml` and cluster
`mode/` configs; document defaults under `docs/content` | Explain affected
deployment mode(s), show snippet, list docs touched |
-| Failure triage | Gather `proxy/logs/` + `target/surefire-reports` | Quote
relevant log lines, map to data-flow step, propose next diagnostic |
-| SQL routes wrong/missing shards | Check feature rule configs, metadata
freshness, parser dialect | Provide SQL + config snippet, identify impacted
module (`features`/`kernel`), add/plan targeted tests |
-| Proxy won’t start | Validate configs/mode/ports, reuse example configs |
Share exact log snippet, list configs inspected, suggest fix (without editing
generated files) |
-| Spotless/checkstyle errors | `./mvnw spotless:apply -Pcheck [-pl module]`
(or `spotless:check`) | Mention command result, confirm ASF headers/import
order |
-| Sandbox/network block | Command denied due to sandbox/dependency fetch |
State attempted command + purpose, request approval or alternative plan |
-
-- When touching a single module/class, prefer the narrowest Maven command such
as `./mvnw -pl <module> -am -Dspotless.skip=true -DskipITs -Dtest=TargetTest
test` (or an equivalent) for fast feedback, and cite the exact command in the
report.
-- Avoid running `-Dtest=Pattern` from the repo root unless you are certain the
selected modules contain matching test classes; otherwise Surefire fails fast.
If in doubt, run the module’s full suite (preferred) or add
`-Dsurefire.failIfNoSpecifiedTests=false`.
+- **Preferred styles:** elegant, minimal solutions that keep methods/tests
lean, use guard clauses, and delete dead code immediately.
+- **Patterns to lean on:** builders/factories from `infra`, SPI-driven
extensions, immutable DTOs for plan descriptions, explicit strategy enums (see
the sketch after this list).
+- **Anti-patterns:** duplicating parsing logic, bypassing metadata caches,
silently accepting invalid configs, static singletons in shared modules, or
overbuilt helpers.
+- **Known pitfalls:** routing regressions when shadow rules are skipped,
timezone drift from poor time-mocking, forgetting standalone vs cluster
(`mode`) validation, missing ASF headers, Mockito inline mocks breaking on JDKs
that block self-attach.
+- **Success recipe:** explain why the change exists, cite the affected
data-flow step, keep public APIs backward compatible, and record defaults/knobs
alongside code changes.
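+
+A hypothetical sketch of the preferred shapes (illustrative names, not real
kernel types):
+
+```java
+// Explicit strategy enum instead of boolean flags or string toggles.
+enum RouteStrategy {
+    UNICAST, BROADCAST
+}
+
+// Immutable DTO: final fields, no setters, intent visible at a glance.
+public final class RoutePlan {
+
+    private final String dataSourceName;
+
+    private final RouteStrategy strategy;
+
+    public RoutePlan(final String dataSourceName, final RouteStrategy strategy) {
+        this.dataSourceName = dataSourceName;
+        this.strategy = strategy;
+    }
+
+    public String getDataSourceName() {
+        return dataSourceName;
+    }
+
+    public RouteStrategy getStrategy() {
+        return strategy;
+    }
+}
+```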
+
+## Execution Loop
+1. **Intake & Clarify** – restate the request, map affected modules, confirm
sandbox/network/approval constraints, and capture a constraint checklist
(forbidden APIs, output formats, ordering rules, coverage targets).
+2. **Plan & Reason** – craft a multi-step plan (analysis, edits, tests).
Enumerate branches/tests upfront whenever the user demands minimum coverage;
add rule-specific constraints (e.g., “no `assertEquals`”) to the plan and
re-check them before edits.
+3. **Implement** – touch only the required files, reuse abstractions, preserve
ASF headers, and document major decisions.
+4. **Validate** – run the narrowest meaningful command (e.g., `./mvnw -pl
<module> -am test`, `./mvnw -pl <module> -DskipITs -Dspotless.skip=true
-Dtest=ClassName test`). Announce intent beforehand and summarize exit codes
afterward; when blocked, state the command you intended to run and why it
matters.
+5. **Report** – lead with intent, list edited files plus rationale/line refs,
cite verification commands + results, and propose next steps.
+
+**Self-check before finishing**
+- Confirm instruction precedence and constraint checklist items are satisfied.
+- Ensure edits are minimal, ASF headers intact, Spotless-ready, and any
semantic change has a corresponding test (or explicit rationale).
+- Record exact commands, exit codes, and relevant log snippets.
+- Highlight remaining risks or follow-ups and keep ASCII-only output unless
non-ASCII already existed.
+
+## Tooling & Testing Essentials
+- **Go-to commands:** `./mvnw clean install -B -T1C -Pcheck` (full build),
`./mvnw test -pl <module>[-am]` (scoped unit tests), `./mvnw spotless:apply
-Pcheck [-pl <module>]` (format), `./mvnw -pl <module> -DskipITs
-Dspotless.skip=true -Dtest=ClassName test` (fast verification), `./mvnw test
jacoco:check@jacoco-check -Pcoverage-check` (coverage gate), and `./mvnw -pl
proxy -am -DskipTests package` (proxy packaging/perf smoke).
+- **Testing ground rules:** JUnit 5 + Mockito, `ClassNameTest` naming,
Arrange–Act–Assert structure, mocks for databases/time/network, reset static
caches between cases, and prefer existing swapper/helpers for complex configs
(see the sketch after this list).
+- **Coverage discipline:** run Jacoco (`./mvnw -pl <module> -am
-Djacoco.skip=false test jacoco:report`) when coverage is in question; describe
any uncovered branches with file/line reasoning.
+- **Branch-focused work:** when asked for “minimal branch coverage” or
similar, list every branch upfront, map each to a single test, and document
unreachable code explicitly instead of adding redundant cases.
+- **API bans:** if a user disallows a tool or assertion, add it to your plan,
avoid it during implementation, and cite any verification searches (e.g., `rg
assertEquals`) in the final report.
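+
+A sketch of these ground rules in practice (hypothetical class under test,
inlined here to keep the example self-contained):
+
+```java
+import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+
+import javax.sql.DataSource;
+
+import org.junit.jupiter.api.Test;
+
+class ConnectionProviderTest {
+
+    // Hypothetical production class; real tests mirror the production package.
+    static final class ConnectionProvider {
+
+        private final DataSource dataSource;
+
+        ConnectionProvider(final DataSource dataSource) {
+            this.dataSource = dataSource;
+        }
+
+        Connection borrow() throws SQLException {
+            return dataSource.getConnection();
+        }
+    }
+
+    @Test
+    void assertBorrowReturnsConnectionFromDataSource() throws SQLException {
+        // Arrange: mock the database boundary instead of provisioning one.
+        DataSource dataSource = mock(DataSource.class);
+        Connection expected = mock(Connection.class);
+        when(dataSource.getConnection()).thenReturn(expected);
+        // Act.
+        Connection actual = new ConnectionProvider(dataSource).borrow();
+        // Assert.
+        assertThat(actual, is(expected));
+    }
+}
+```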
+
+## Run & Triage Quick Sheet
+- **Proxy quick start:** `./mvnw -pl proxy -am package` then
`shardingsphere-proxy/bin/start.sh -c conf/server.yaml`; report command, exit
code, config path, and protocol.
+- **JDBC smoke:** `./mvnw -pl jdbc -am test -Dtest=YourTest` with datasource
configs from `examples`; note test name, datasource setup, and failure logs.
+- **Config validation:** update standalone `server.yaml` and cluster `mode/`
configs together; call out defaults and any edits that affect both.
+- **Failure triage:** collect `proxy/logs/` plus `target/surefire-reports`,
quote the relevant log lines, map them to the data-flow step, and propose the
next diagnostic.
+- **Routing mistakes:** check feature-rule configs, metadata freshness, and
parser dialect; include SQL + config snippet plus impacted module (`features`
or `kernel`), and add/plan targeted tests.
+- **Proxy won’t start:** verify configs/mode/ports and reuse known-good
example configs; share the log snippet and files inspected without editing
generated artifacts.
+- **Spotless/checkstyle:** run `./mvnw spotless:apply -Pcheck [-pl <module>]`
(or `spotless:check`) and confirm ASF headers/import ordering.
+- **Sandbox/network block:** if a command is denied, state what you ran, why
it failed, and the approval or alternative plan required.
+- **Single-module tests:** prefer scoped commands over repo-wide runs; avoid
`-Dtest=Pattern` from repo root unless you know the target exists, otherwise
use the module’s suite or `-Dsurefire.failIfNoSpecifiedTests=false`.
## Compatibility, Performance & External Systems
-- **Database/protocol support:** note targeted engines (MySQL 5.7/8.0,
PostgreSQL 13+, openGauss, etc.) and ensure new behavior stays backward
compatible; link to affected dialect files.
-- **Registry & config centers:** `mode` integrates with ZooKeeper, Etcd,
Consul; describe tested registry and highlight compatibility risks when editing
governance logic.
-- **Metrics/observability:** mention if changes touch agent plugins,
Prometheus exporters, or tracing hooks; reference the module (`agent`,
`infra/metrics`, `docs/content/dev-guide/observability`).
-- **Performance guardrails:** when runtime impact is possible, capture
baseline vs new latency/QPS from perf smoke and include CPU/memory observations.
-
-## AI Collaboration Patterns
-
-| Scenario | Include in the response |
-| --- | --- |
-| Change request | Goal, constraints, suspected files/modules, planned
validation command |
-| Code review | Observed issue, impact/risk, suggested fix or follow-up test |
-| Status update | Work done, verification status (commands + exit codes),
remaining risks/TODOs |
-| Failure/blocker | Command attempted, why it failed (sandbox/policy/log),
approval or artifact required next |
-
-- **Anti-patterns:** decline vague orders (e.g., “optimize stuff”) and ask for
module + expected outcome instead of guessing.
-- **Hand-off checklist:** intent, edited files + rationale, executed
commands/results, open risks, related issues/PRs.
-
-## Release & Rollback Rituals
-1. **Pre-change:** restate why current release train needs the change, cite
relevant issue/PR/decision doc, note affected modules and configs.
-2. **Post-implementation:** record tests + perf smokes, confirm
Spotless/Jacoco/static analysis status, update docs/configs, flag
translation/backport tasks if docs span multiple locales.
-3. **Rollback-ready:** describe simple revert steps (e.g., disable knob,
revert YAML, roll back module), list files to touch, and mention how to confirm
rollback success.
-4. **Communication:** include release note snippets or doc anchor updates so
reviewers can copy/paste.
-
-## Collaboration & Escalation
-- Commit messages use `module: intent` (e.g., `kernel: refine route planner`)
and cite why the change exists.
-- Reviews focus on risks first (regressions, coverage gaps, configuration
impact) before polish.
-- Approval request template: “Command → Purpose → Sandbox limitation/side
effects → Expected output”; include why escalation is justified.
-- If repo state or sandbox limits conflict with `CODE_OF_CONDUCT.md`, stop
immediately and request direction—do not attempt workarounds.
-
-## Prompt & Reference Quick Sheet
-
-| Area | Include when prompting/reporting | Key references |
-| --- | --- | --- |
-| Parser / dialect | Target database version, sample SQL, expected AST deltas,
downstream module depending on result | `parser/src/**`,
`docs/content/dev-guide/sql-parser` |
-| Kernel routing & features | Metadata shape (tables, binding,
sharding/encryption rules), knob values, impacted `features` hook |
`kernel/src/**`, `features/**`, configs under `examples` |
-| Proxy runtime & governance | Startup command, `conf/server.yaml`,
registry/mode config, log excerpts, client protocol | `proxy/**`, `mode/**`,
`examples/src/resources/conf`, `docs/content/dev-guide/proxy` |
-| Observability/agent | Metrics/tracing plugin touched, expected signal
format, dashboards affected | `agent/**`, `infra/metrics/**`,
`docs/content/dev-guide/observability` |
-| Docs/config updates | Audience (user/admin), file path, translation needs,
screenshots/assets | `docs/content/**`, `README*.md`, translation directories |
-| Process & releases | Commit/PR intent, release-train notes, maturity level,
rollback strategy | `CONTRIBUTING.md`, `MATURITY.md`, `README.md` |
-| Compliance & conduct | Any risk of license header, third-party code,
inclusive language issues | `CODE_OF_CONDUCT.md`, `LICENSE`, `NOTICE` |
-
-## Mocking Guidelines
-- Prefer Mockito mocks over bespoke test fixtures; avoid introducing new
fixture classes unless a scenario cannot be expressed with mocks.
-- When different rule/attribute types must be distinguished, declare marker
interfaces and mock them with Mockito instead of writing concrete fixture
implementations.
-- If the project already exposes a suitable interface (e.g.,
`ShardingSphereRule`), create distinct Mockito mocks directly
(`mock(MyRule.class)`), and only introduce marker interfaces when no existing
type can represent the scenario.
-- Test method names must reference the production method under test (e.g.,
`assertGetInUsedStorageUnitNameAndRulesMapWhen...`) so intent is unambiguous to
reviewers.
-- Never use reflection to test private helpers; unit tests must target public
APIs. If a private branch is unreachable via public methods, document it (file
+ reason) instead of adding a reflective test.
-- Mock dependencies that need heavy external environments (database, cache,
registry, network) instead of provisioning real instances.
-- When constructing an object requires more than two nested
builders/factories, prefer mocking the object.
-- If an SPI returns an object that is not the subject under test, use
`mockStatic` (or equivalent) to provide canned behavior.
-- When only a few properties of a complex object are used, mock it rather than
assembling the full graph.
-- Do not mock simple objects that can be instantiated directly with `new`.
-- Do not enable Mockito’s `RETURNS_DEEP_STUBS` unless unavoidable chained
interactions make explicit stubs impractical; if you must enable it, mention
the justification in the test description.
-- If a production class hides dependencies behind private finals (no
constructor/setter), you may instantiate it directly and use reflection to
inject mocks, but document why SPI creation is bypassed and still mock the SPI
interactions the code triggers.
-- If a class hard-wires collaborators in its constructor (e.g.,
ConsistencyCheckJob’s executor delegate), use Mockito’s
`Plugins.getMemberAccessor()` to inject mocks and explain why, instead of
adding new constructors or test-only helpers.
-
-## SPI Loader Usage
-- Cache services obtained through SPI loaders (`OrderedSPILoader`,
`TypedSPILoader`, `DatabaseTypedSPILoader`, etc.) at the test-class or suite
level whenever the same type is reused, so repeated lookups do not slow tests
or introduce ordering surprises.
-- When multiple loader invocations require different keys (e.g., different
database types), scope and document each cached instance per key instead of
calling the loader inline inside every test.
-
-## AI Self-Check Checklist (Pre-Submission Must-Do)
-1. Instruction precedence: `CODE_OF_CONDUCT.md` → user request → this guide →
other docs. Are any conflicts unresolved?
-2. Are edited files minimal, include ASF headers, and pass Spotless?
-3. Do all semantic changes have corresponding tests (or a rationale if not)
with commands/exit codes recorded?
-4. Does run/triage information cite real file paths plus log/config snippets?
-5. Does the report list touched files, verification results, known risks, and
recommended next steps?
-6. For new or updated tests, did you inspect the target production code paths,
enumerate the branches being covered, and explain that in your answer?
-7. Have you enforced every user-specific constraint on the checklist (e.g.,
forbidden assertions), including a final search/inspection to confirm
compliance?
-8. Before finishing, did you re-check the latest verification command
succeeded (rerun if needed) so the final state is green, and were the commands
scoped to the smallest necessary modules to avoid redundant reruns?
+- Specify targeted engines and dialect files (MySQL 5.7/8.0, PostgreSQL 13+,
openGauss, etc.) and guarantee backward-compatible behavior.
+- Call out registry/config-center expectations (ZooKeeper, Etcd, Consul) and
note risks when governance logic changes.
+- Mention observability or agent impacts (metrics exporters, tracing hooks)
whenever touched.
+- Capture performance guardrails—baseline vs new latency/QPS, CPU, memory
observations—when runtime paths change.
+
+## Collaboration Patterns & Rituals
+- **Response focus:** for change requests list goal/constraints/suspected
files/validation plan; for code reviews highlight issues + risks + suggested
fixes; for status updates summarize completed work + verification +
risks/TODOs; for failures/blockers share attempted command, reason for failure,
and what you need next.
+- **Anti-patterns:** do not accept vague orders (“optimize stuff”); always
clarify module + expected result.
+- **Hand-off checklist:** intent, edited files, rationale, executed commands
(with exit codes), open risks, related issues/PRs.
+- **Release & rollback:** (1) restate why the release needs the change and
cite affected modules/configs, (2) after implementation capture tests/perf
smokes and document Spotless/Jacoco/static results plus translation/backport
tasks, (3) outline rollback steps (disable knob, revert YAML/module) and how to
confirm success, (4) prep release-note snippets or anchor updates for reviewers.
+- **Escalation etiquette:** commit messages use `module: intent`, reviews
prioritize regression risks, approval requests follow “Command → Purpose →
Sandbox limitation → Expected output,” and halt for guidance whenever sandbox
limits conflict with `CODE_OF_CONDUCT.md`.
+
+## Prompt & Reference Snapshot
+- **Parser / dialect:** include target database version, sample SQL, expected
AST deltas, and downstream modules relying on the result.
+- **Kernel routing & features:** describe metadata shape (tables, binding,
sharding/encryption rules), knob values, and which `features` hook processes
the change.
+- **Proxy runtime & governance:** state startup command, `conf/server.yaml`,
registry/mode config, relevant logs, and client protocol.
+- **Observability / agent:** mention metrics or tracing plugins touched plus
expected signal format and dashboards affected.
+- **Docs / config updates:** specify audience (user/admin), file path,
translation implications, and any screenshots/assets added.
+- **Process & releases:** cite commit/PR intent, release-train context,
maturity level, and rollback plan.
+- **Compliance & conduct:** flag license-header risk, third-party code usage,
and inclusive language considerations.
+
+## Mocking & SPI Guidance
+- Favor Mockito over bespoke fixtures; only add new fixture classes when mocks
cannot express the scenario.
+- Use marker interfaces when distinct rule/attribute types are needed; reuse
SPI types such as `ShardingSphereRule` where possible.
+- Name tests after the production method under test; never probe private
helpers directly—document unreachable branches instead.
+- Mock heavy dependencies (database/cache/registry/network) and prefer mocking
over building deep object graphs; avoid `RETURNS_DEEP_STUBS` unless chained
interactions demand it.
+- When constructors hide collaborators, use `Plugins.getMemberAccessor()` to
inject mocks and document why SPI creation is bypassed (see the sketch below).
+- Cache SPI loader results (`OrderedSPILoader`, `TypedSPILoader`,
`DatabaseTypedSPILoader`, etc.) per key at the test-class level to avoid
redundant lookups.
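+
+A sketch of the hard-wired-collaborator advice, assuming Mockito's
`Plugins.getMemberAccessor()` API (the production class here is hypothetical):
+
+```java
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+
+import org.junit.jupiter.api.Test;
+import org.mockito.internal.configuration.plugins.Plugins;
+
+class CheckJobTest {
+
+    // Hypothetical production class that hard-wires its collaborator.
+    static final class CheckJob {
+
+        private Runnable delegate = () -> { };
+
+        void run() {
+            delegate.run();
+        }
+    }
+
+    @Test
+    void assertRunUsesInjectedDelegate() throws ReflectiveOperationException {
+        CheckJob job = new CheckJob();
+        Runnable delegate = mock(Runnable.class);
+        // Inject via the member accessor instead of adding a test-only
+        // constructor; document why the hard-wired path is bypassed.
+        Plugins.getMemberAccessor().set(CheckJob.class.getDeclaredField("delegate"), job, delegate);
+        job.run();
+        verify(delegate).run();
+    }
+}
+```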
## Brevity & Signal
- Prefer tables/bullets over prose walls; cite file paths (`kernel/src/...`)
directly.