linliu-code commented on code in PR #13498:
URL: https://github.com/apache/hudi/pull/13498#discussion_r2190903215
##########
hudi-client/hudi-spark-client/src/main/scala/org/apache/hudi/BaseSparkInternalRowReaderContext.java:
##########
@@ -110,6 +112,14 @@ public HoodieRecord<InternalRow> constructHoodieRecord(BufferedRecord<InternalRo
     return new HoodieSparkRecord(hoodieKey, row, HoodieInternalRowUtils.getCachedSchema(schema), false);
   }

+  @Override
+  public InternalRow constructEngineRecord(Schema schema, List<Object> values) {
+    if (schema.getFields().size() != values.size()) {
+      throw new IllegalArgumentException("Schema field count and values size must match.");
+    }
+    return new GenericInternalRow(values.toArray());
+  }
Review Comment:
Thanks, @cshuo, please share with me offline.
Meanwhile, I think `toBinary` is unavoidable if we use binary records: when we
extract the fields via `getValue`, the binary values are automatically
converted to Java objects, and once we have all the fields we reconstruct the
record. I do believe we could extract the binary values directly, but I am not
sure how safe that would be. CC: @danny0405 , @cshuo
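To make the shape of the discussion concrete, here is a minimal pure-Java sketch of the extract-then-reconstruct pattern behind `constructEngineRecord`: pull every field value out of a source record, validate the count against the schema, then rebuild the row. The Avro `Schema` and Spark `GenericInternalRow` types are replaced with a plain field-name list and an `Object[]`, since this is only an illustration of the check in the diff, not the actual Hudi API.

```java
import java.util.Arrays;
import java.util.List;

public class Main {
  // Sketch of constructEngineRecord: the schema is stood in for by a list of
  // field names, and the engine row by a plain Object[]. The size check
  // mirrors the guard added in the PR diff.
  static Object[] constructEngineRecord(List<String> schemaFields, List<Object> values) {
    if (schemaFields.size() != values.size()) {
      throw new IllegalArgumentException("Schema field count and values size must match.");
    }
    // Reconstruct the record from the extracted (already-converted) field values.
    return values.toArray();
  }

  public static void main(String[] args) {
    Object[] row = constructEngineRecord(
        Arrays.asList("id", "name"),
        Arrays.asList(1, "alice"));
    System.out.println(row.length);
  }
}
```

The point the comment makes is that by the time the values reach this method they have already been converted to Java objects by `getValue`, so a `toBinary` step is needed again if the record must go back to a binary representation.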
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]