Hahnfeld added inline comments.

================
Comment at: clang/include/clang/Interpreter/Interpreter.h:43-44
 public:
   static llvm::Expected<std::unique_ptr<CompilerInstance>>
   create(std::vector<const char *> &ClangArgv);
+  static llvm::Expected<std::unique_ptr<CompilerInstance>>
----------------
If I understand the change correctly, the "old" `create` function is no longer 
sufficient on its own to create a fully working `CompilerInstance` - should we 
make it `private`? (and unify the two `Builder` classes into one?)

Alternatively, we could keep the current function working and move the bulk of 
the work to a `createInternal` function. What do you think?
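Roughly something like this, as an illustration only (the `createInternal` name 
and the exact split are just a sketch):

```cpp
class IncrementalCompilerBuilder {
public:
  // Keeps the current behavior: callers still get a fully working instance.
  static llvm::Expected<std::unique_ptr<CompilerInstance>>
  create(std::vector<const char *> &ClangArgv) {
    return createInternal(ClangArgv);
  }

  // The CUDA variant would also call createInternal() and then add the
  // device-specific setup on top.

private:
  static llvm::Expected<std::unique_ptr<CompilerInstance>>
  createInternal(std::vector<const char *> &ClangArgv);
};
```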


================
Comment at: clang/lib/CodeGen/CodeGenAction.cpp:296
       }
+      LinkModules.clear();
       return false; // success
----------------
This looks like a change that has implications beyond support for CUDA. Would 
it make sense to break this out into a separate review? I.e., does this change 
anything in the current setup?


================
Comment at: clang/lib/Interpreter/IncrementalParser.cpp:125
 
+static CodeGenerator *getCodeGen(FrontendAction *Act);
+
----------------
Is it possible to move the `static` function here instead of forward-declaring 
it? Or does it depend on other functions in this file?


================
Comment at: clang/lib/Interpreter/IncrementalParser.cpp:143-156
+  // An initial PTU is needed as CUDA includes some headers automatically
+  auto PTU = ParseOrWrapTopLevelDecl();
+  if (auto E = PTU.takeError()) {
+    consumeError(std::move(E)); // FIXME
+    return;                     // PTU.takeError();
+  }
+
----------------
Should this be done as part of the "generic" `IncrementalParser` constructor, 
or only in the CUDA case by something else? Maybe `Interpreter::createWithCUDA`?
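If it stays in the constructor, it could at least be limited to the CUDA case. 
Rough sketch, with `CI` standing in for whatever gives access to the 
`LangOptions` here:

```cpp
  // Only CUDA needs the eager parse (it includes wrapper headers
  // automatically); the generic path stays as before.
  if (CI.getLangOpts().CUDA) {
    auto PTU = ParseOrWrapTopLevelDecl();
    if (auto E = PTU.takeError()) {
      consumeError(std::move(E)); // FIXME as in the patch
      return;
    }
  }
```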


================
Comment at: clang/lib/Interpreter/Interpreter.cpp:270-274
+  if (DeviceParser) {
+    auto DevicePTU = DeviceParser->Parse(Code);
+    if (auto E = DevicePTU.takeError())
+      return E;
+  }
----------------
Just wondering about the order here: Do we have to parse the device side first? 
Does it make a difference for diagnostics? Maybe you can add a comment about 
the choice here...
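For example, if the host-side compilation consumes the device output 
(embedding the fatbinary would be my guess), something along these lines:

```cpp
  if (DeviceParser) {
    // Parse the device side first so that its generated code is available
    // when the host-side PTU below is compiled.
    auto DevicePTU = DeviceParser->Parse(Code);
    if (auto E = DevicePTU.takeError())
      return E;
  }
```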


================
Comment at: clang/lib/Interpreter/Offload.cpp:1
+//===-------------- Offload.cpp - CUDA Offloading ---------------*- C++ -*-===//
+//
----------------
v.g.vassilev wrote:
> How about `DeviceOffload.cpp`?
Or `IncrementalCUDADeviceParser.cpp` for the moment - not sure what other 
classes will be added in the future, or whether they should live in the same TU.


================
Comment at: clang/lib/Interpreter/Offload.cpp:90-91
+  PTXCode += '\0';
+  while (PTXCode.size() % 8)
+    PTXCode += '\0';
+  return PTXCode.str();
----------------
Is padding to 8 bytes a requirement for PTX? Maybe add a comment...
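If the alignment is really required by the fatbinary layout rather than by PTX 
itself - that's only a guess - something like this would make the intent clear:

```cpp
  // The fatbinary expects the PTX to be NUL-terminated and padded to a
  // multiple of 8 bytes.
  PTXCode += '\0';
  while (PTXCode.size() % 8)
    PTXCode += '\0';
  return PTXCode.str();
```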


================
Comment at: clang/lib/Interpreter/Offload.h:20
+
+class DeviceCodeInlinerAction;
+
----------------
unused


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D146389/new/

https://reviews.llvm.org/D146389
