labath added inline comments.

================
Comment at: 
lldb/source/Plugins/Process/Linux/NativeRegisterContextLinux_arm64.cpp:240
+  // Update SVE registers in case there is change in configuration.
+  ConfigureRegisterContext();
+
----------------
omjavaid wrote:
> labath wrote:
> > What's the relationship of this function and the `InvalidateAllRegisters` 
> > function we added (for arm64 benefit) a couple of reviews back? Can we just 
> > have one of them?
> Both InvalidateAllRegisters and ConfigureRegisterContext are interrelated.
> 
> The idea is to maintain a cache of the SVE register configuration, created 
> after a thread stop on the first call to ReadRegister/WriteRegister via 
> ConfigureRegisterContext. This cache is kept until we do a register write or 
> issue a step or resume.
> 
> InvalidateAllRegisters is called before resume; it invalidates all cached 
> register data, including the SVE configuration, thus forcing us to run 
> ConfigureRegisterContext on the next stop.
> 
> There is also a case where we need to reconfigure the register context: when 
> we write the vector granule register, we need to update register sizes and 
> offsets and sync them up between the LLDB client and server. That case is 
> handled by the child revisions of this patch.
Right, it's that interrelatedness that worries me. Back when we were talking 
about `InvalidateAllRegisters`, we (I) decided that it was simpler to call it 
before a thread resumes. I'm wondering whether that was a mistake.

Like, if we called InvalidateAllRegisters when a thread stops, then you could 
use this opportunity to initialize the SVE configuration, right? And that would 
avoid the need to call `ConfigureRegisterContext` on every register read 
operation? Obviously there'd still need to be some reconfiguration/invalidation 
of the SVE context after some register writes, but that would be similar to us 
invalidating the GPR cache when the GPR registers get written.

Basically, it seems to me that would make things more homogeneous.


================
Comment at: 
lldb/source/Plugins/Process/Linux/NativeRegisterContextLinux_arm64.h:22
+#define INCLUDE_LINUX_PTRACE_DEFINITIONS_FOR_SVE_ARM64
+#include "Plugins/Process/Utility/LinuxPTraceDefines_arm64sve.h"
+#endif
----------------
omjavaid wrote:
> omjavaid wrote:
> > labath wrote:
> > > At this point, that header is sufficiently different from the 
> > > asm/ptrace.h implementation (names of all structs and fields are 
> > > different) that I don't think it makes sense to use it as an optional 
> > > replacement. It makes it very likely things will not build in one of the 
> > > modes.
> > > 
> > > This should either use the system version (and then only build when the 
> > > system has the appropriate headers), or we should use the in-house 
> > > version unconditionally (which may require renaming/namespacing the rest 
> > > of the file so that it does not conflict with the real `asm/ptrace.h`).
> > Ack.
> asm/ptrace.h also describes legacy structs for fpsimd access, hardware 
> breakpoint structs, and other ptrace-related definitions. Newer versions of 
> ptrace.h only append the SVE-specific structs and macros. LLDB's in-house 
> SVE header only includes SVE-specific register definitions. I don't see any 
> conflict, since I have also added a guard against multiple definitions in 
> LinuxPTraceDefines_arm64sve.h; it protects AArch64/Linux builds where 
> ThreadElfCore also includes sigcontext.h and ptrace.h for accessing 
> elf-core related struct definitions.
I'm not saying we shouldn't include asm/ptrace.h at all. It obviously contains 
a lot of useful stuff, and we wouldn't want to duplicate all of that.

What I am saying is that for the stuff we have already duplicated, we should 
just use the in-house copy unconditionally. Doing otherwise needlessly forks 
the build matrix and creates opportunities for things to break in one mode or 
the other. We're already committed to ensuring things work with the in-house 
definitions, so why bother ensuring that things work both with them _and_ 
with the system ones?


================
Comment at: 
lldb/test/API/commands/register/register/aarch64_sve_registers/rw_access_static_config/main.c:2-4
+  asm volatile("ptrue p0.s\n\t");
+  asm volatile("fcpy  z0.s, p0/m, #5.00000000\n\t");
+  return 0; // Set a break point here.
----------------
omjavaid wrote:
> labath wrote:
> > In our newer register tests we use the pattern where the debugger sets the 
> > register values, which are then read back through the inferior (and vice 
> > versa). I think this is a good thing, as it avoids the possibility of 
> > making symmetrical bugs in the read/write code, which would then appear to 
> > modify the register successfully without actually changing the value 
> > observed by the inferior process.
> In this test we are also doing the same; please have a look at 
> TestSVERegisters.py:140-142.
> 
> The code in main.c is there to make sure we enable SVE mode. AArch64 SVE 
> mode is enabled by the Linux kernel on the first execution of SVE code.
I have a feeling we're misunderstanding each other. I am talking about the 
tests in `test/Shell/Register` (e.g. `x86-64-xmm16-read.test`, 
`x86-64-xmm16-write.test`). This test definitely doesn't do that. The code in 
140-142 is just reading back what was written by lines 135-137, not values set 
by the inferior.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D79699/new/

https://reviews.llvm.org/D79699



_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits
