On 2012-12-07 11:24, Peter Maydell wrote:
> On 7 December 2012 01:25, liguang <lig.f...@cn.fujitsu.com> wrote:
>> Signed-off-by: liguang <lig.f...@cn.fujitsu.com>
>> ---
>>  target-i386/cpu.h | 15 +++++++++++++--
>>  1 files changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/target-i386/cpu.h b/target-i386/cpu.h
>> index 29245d1..3646128 100644
>> --- a/target-i386/cpu.h
>> +++ b/target-i386/cpu.h
>> @@ -996,9 +996,20 @@ int cpu_x86_handle_mmu_fault(CPUX86State *env, target_ulong addr,
>>  #define cpu_handle_mmu_fault cpu_x86_handle_mmu_fault
>>  void cpu_x86_set_a20(CPUX86State *env, int a20_state);
>>
>> -static inline int hw_breakpoint_enabled(unsigned long dr7, int index)
>> +static inline bool hw_local_breakpoint_enabled(unsigned long dr7, int index)
>>  {
>> -    return (dr7 >> (index * 2)) & 3;
>> +    return !(((dr7 >> (index * 2)) ^ 1) & 3);
>
> This is pretty confusing and I'm pretty sure the function is
> misnamed too. If we're checking "is local breakpoint enabled"
> then we only want to check one of the two enable bits, not both.
>
Yes, and I already asked for proper constants to be defined that allow
checking the local vs. global enable bits. They have to be used here
instead of all the magic "& 3" or "^ 1" stuff.

BTW, there is no need for "converting" ("!!") the result of
(value & mask) to bool, the compiler will do this already.

Jan

>
>> +}
>> +
>> +static inline bool hw_global_breakpoint_enabled(unsigned long dr7, int index)
>> +{
>> +    return !!((dr7 >> (index * 2)) & 2);
>> +}
>> +
>> +static inline bool hw_breakpoint_enabled(unsigned long dr7, int index)
>> +{
>> +    return (hw_global_breakpoint_enabled(dr7, index) ||
>> +            hw_local_breakpoint_enabled(dr7, index));
>>  }
>
> -- PMM
>

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux
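To make the suggestion above concrete, here is a minimal sketch of what such
constants and the rewritten helpers could look like. The macro names
(DR7_LOCAL_BP_MASK, DR7_GLOBAL_BP_MASK) and their exact form are hypothetical
illustrations of the idea, not the definitions actually present in QEMU's
target-i386/cpu.h:

#include <stdbool.h>

/* Hypothetical sketch: in DR7, each hardware breakpoint 'index' has two
 * enable bits -- the local enable (L0..L3) at bit (index * 2) and the
 * global enable (G0..G3) at bit (index * 2 + 1). */
#define DR7_LOCAL_BP_MASK(index)   (1UL << ((index) * 2))
#define DR7_GLOBAL_BP_MASK(index)  (1UL << ((index) * 2 + 1))

static inline bool hw_local_breakpoint_enabled(unsigned long dr7, int index)
{
    /* Implicit conversion to bool; no "!!" needed. */
    return dr7 & DR7_LOCAL_BP_MASK(index);
}

static inline bool hw_global_breakpoint_enabled(unsigned long dr7, int index)
{
    return dr7 & DR7_GLOBAL_BP_MASK(index);
}

static inline bool hw_breakpoint_enabled(unsigned long dr7, int index)
{
    return hw_local_breakpoint_enabled(dr7, index) ||
           hw_global_breakpoint_enabled(dr7, index);
}

Because the helpers return bool, the result of the "&" expression is
converted to true/false by the compiler, which is the point Jan makes about
the redundant "!!".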