anakryiko added inline comments.

================
Comment at: clang/lib/CodeGen/CGBuiltin.cpp:9321-9322
+  if (IsError)
+    return IsBitField ? EmitLValue(Arg).getBitFieldPointer()
+                      : EmitLValue(Arg).getPointer();
+
----------------
Move this under the `if (!getDebugInfo)` check above and ditch `IsError` completely?


================
Comment at: llvm/lib/Target/BPF/BPFAbstractMemberAccess.cpp:235
       report_fatal_error("Missing metadata for llvm.preserve.array.access.index intrinsic");
-    AccessIndex = cast<ConstantInt>(Call->getArgOperand(2))
-                      ->getZExtValue();
+    AccessIndex = getConstant(Call->getArgOperand(2));
     return true;
----------------
`getConstant()` returns a u64, but `AccessIndex` is a u32; is that a concern?


================
Comment at: llvm/lib/Target/BPF/BPFAbstractMemberAccess.cpp:254
+  }
+  if (GV->getName().startswith("llvm.bpf.get.field.info")) {
+    Kind = BPFGetFieldInfoAI;
----------------
Just curious: why did you decide not to follow the llvm.preserve.xxx naming pattern here? E.g., llvm.preserve.field.info would read just fine, I guess?


================
Comment at: llvm/lib/Target/BPF/BPFAbstractMemberAccess.cpp:533
+  auto *MemberTy = cast<DIDerivedType>(CTy->getElements()[AccessIndex]);
+  assert(MemberTy->isBitField());
+  uint64_t OffsetInBits = MemberTy->getOffsetInBits();
----------------
Is it possible to allow capturing the same bitfield information for non-bitfield integers? E.g., if my current vmlinux.h has some field as a plain integer, but I know that some older kernel (or maybe a future one?) has this field as a bitfield, I won't have to do anything nasty just to handle this generically. The fewer artificial restrictions we have, the better, IMO.


================
Comment at: llvm/lib/Target/BPF/BPFAbstractMemberAccess.cpp:536
+  uint64_t SizeInBits = MemberTy->getSizeInBits();
+  assert(SizeInBits <= 63);
+  switch (InfoKind) {
----------------
I think we should be generic here: allow up to 128 bits and let libbpf handle its limitations separately (e.g., if we can't support >64 bits initially, libbpf will reject the program; but if we ever extend the support, at least we won't have to go back to Clang and fix it up).


================
Comment at: llvm/lib/Target/BPF/BPFAbstractMemberAccess.cpp:541-548
+  case BPFCoreSharedInfo::BTF_FIELD_BF_BYTE_OFFSET_64BIT_ALIGN:
+    PatchImm = (OffsetInBits & ~0x3f) >> 3;
+    break;
+  case BPFCoreSharedInfo::BTF_FIELD_BF_LSHIFT_64BIT_ALIGN:
+    PatchImm = OffsetInBits & 0x3f;
+    break;
+  case BPFCoreSharedInfo::BTF_FIELD_BF_RSHIFT_64BIT_ALIGN:
----------------
I'll need to do some drawing and math to check how this is going to be handled on big-endian and little-endian; I'll get back to you on that later.
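
To make the two formulas above concrete, here is a minimal standalone sketch; the 69-bit offset is made up purely for illustration, the RSHIFT case is cut off in the snippet so it isn't reproduced, and this only shows the raw arithmetic without settling the big-endian/little-endian question:

```
#include <cstdint>
#include <cstdio>

int main() {
  // Made-up member offset for illustration: a bitfield starting 69 bits into
  // its struct, i.e. 5 bits into the second aligned 8-byte word.
  uint64_t OffsetInBits = 69;

  // BTF_FIELD_BF_BYTE_OFFSET_64BIT_ALIGN: (69 & ~0x3f) >> 3 == 64 >> 3 == 8,
  // the byte offset of the aligned 8-byte word the bitfield lives in.
  uint64_t ByteOff = (OffsetInBits & ~UINT64_C(0x3f)) >> 3;

  // BTF_FIELD_BF_LSHIFT_64BIT_ALIGN: 69 & 0x3f == 5, the low 6 bits of the
  // offset, i.e. how far into that 64-bit word the field starts as DWARF
  // counts bits.
  uint64_t LShift = OffsetInBits & 0x3f;

  std::printf("byte_offset=%llu lshift=%llu\n",
              (unsigned long long)ByteOff, (unsigned long long)LShift);
  return 0;
}
```

Running it prints byte_offset=8 lshift=5.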

================
Comment at: llvm/lib/Target/BPF/BPFAbstractMemberAccess.cpp:600
+
+    PatchImm += FirstIndex * Ty->getSizeInBits() >> 3;
     break;
----------------
nit: I can't figure out off the top of my head what takes precedence, shift or multiplication; maybe add parens to make it obvious?


================
Comment at: llvm/lib/Target/BPF/BTFDebug.cpp:1163
+    // Emit "mov ri, <imm>" for patched immediate.
+    int64_t Imm = PatchImms[GVar->getName().str()];
     OutMI.setOpcode(BPF::MOV_ri);
----------------
why int64_t?


================
Comment at: llvm/lib/Target/BPF/BTFDebug.h:227
 /// Represent one offset relocation.
 struct BTFOffsetReloc {
   const MCSymbol *Label; ///< MCSymbol identifying insn for the reloc
----------------
nit: this is not a BTFOffsetReloc anymore; it's more like a BTFFieldReloc.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D67980/new/

https://reviews.llvm.org/D67980