Hi James,

On 10/05/2018 11:20 AM, James Morse wrote:
> Hi Babu,
>
> On 24/09/18 20:19, Moger, Babu wrote:
>> Enables QOS feature on AMD.
>> Following QoS sub-features are supported in AMD if the underlying
>> hardware supports it.
>> - L3 Cache allocation enforcement
>> - L3 Cache occupancy monitoring
>> - L3 Code-Data Prioritization support
>> - Memory Bandwidth Enforcement(Allocation)
>>
>> There are differences in the way some of the features are implemented.
>> Separate those functions and add those as vendor specific functions.
>> The major difference is in MBA feature.
>> - AMD uses CPUID leaf 0x80000020 to initialize the MBA features.
>> - AMD uses direct bandwidth value instead of delay based on bandwidth
>>   values.
>> - MSR register base addresses are different for MBA.
>
>> - Also AMD allows non-contiguous L3 cache bit masks.
>
> Nice!
>
> This is visible to user-space, the 'Cache Bit Masks (CBM)' section of
> Documentation/x86/intel_rdt_ui.txt currently says 'X86 hardware requires ... a
> contiguous block'.
>
> Does user-space need to know it can do this in advance, or is it a
> try-it-and-see?

It is try-it-and-see.

> Arm's MPAM stuff can do this too, but I'm against having the ABI vary between
> architectures. If this is going to be discoverable, I'd like it to work on
> Arm too.

It is not discoverable at this point; it is mostly predefined. Yes, it will be
a bit of a challenge to handle these differences. We may have to come up with
some kind of flag (or something similar) to make it look the same on the ABI
side.

>
> Thanks,
>
> James
>
>> Adds following functions to take care of the differences.
>> rdt_get_mem_config_amd : MBA initialization function
>> parse_bw_amd : Bandwidth parsing
>> mba_wrmsr_amd : Writes bandwidth value
>> cbm_validate_amd : L3 cache bitmask validation
>
>> diff --git a/arch/x86/kernel/cpu/rdt_ctrlmondata.c b/arch/x86/kernel/cpu/rdt_ctrlmondata.c
>> index 5a282b6c4bd7..1e4631f88696 100644
>> --- a/arch/x86/kernel/cpu/rdt_ctrlmondata.c
>> +++ b/arch/x86/kernel/cpu/rdt_ctrlmondata.c
>> @@ -123,6 +169,41 @@ bool cbm_validate(char *buf, u32 *data, struct rdt_resource *r)
>>  	return true;
>>  }
>>
>> +/*
>> + * Check whether a cache bit mask is valid. AMD allows
>> + * non-contiguous masks.
>> + */
>> +bool cbm_validate_amd(char *buf, u32 *data, struct rdt_resource *r)
>> +{
>> +	unsigned long first_bit, zero_bit, val;
>> +	unsigned int cbm_len = r->cache.cbm_len;
>> +	int ret;
>> +
>> +	ret = kstrtoul(buf, 16, &val);
>> +	if (ret) {
>> +		rdt_last_cmd_printf("non-hex character in mask %s\n", buf);
>> +		return false;
>> +	}
>> +
>> +	if (val == 0 || val > r->default_ctrl) {
>> +		rdt_last_cmd_puts("mask out of range\n");
>> +		return false;
>> +	}
>> +
>> +	first_bit = find_first_bit(&val, cbm_len);
>> +	zero_bit = find_next_zero_bit(&val, cbm_len, first_bit);
>> +
>> +	if ((zero_bit - first_bit) < r->cache.min_cbm_bits) {
>> +		rdt_last_cmd_printf("Need at least %d bits in mask\n",
>> +				    r->cache.min_cbm_bits);
>> +		return false;
>> +	}
>> +
>> +	*data = val;
>> +	return true;
>> +}
>> +
>>  struct rdt_cbm_parse_data {
>>  	struct rdtgroup *rdtgrp;
>>  	char *buf;
>>
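
As a concrete example of the try-it-and-see approach above: user space can just
attempt to write a non-contiguous mask and look at the result. Below is a rough,
untested sketch; it assumes resctrl is mounted at /sys/fs/resctrl and that a
mask such as 0xf0f fits within the cbm_mask advertised under info/L3.

/* Probe for non-contiguous CBM support by trying to use one. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/fs/resctrl/schemata";
	const char *line = "L3:0=f0f\n";	/* non-contiguous mask, cache domain 0 */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (write(fd, line, strlen(line)) < 0)
		perror("write");	/* rejected: contiguous masks required */
	else
		printf("non-contiguous CBM accepted\n");

	close(fd);
	return 0;
}

If the write is rejected, the reason also shows up in info/last_cmd_status, so
the probe does not have to guess why it failed.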