cshuo commented on code in PR #13498:
URL: https://github.com/apache/hudi/pull/13498#discussion_r2191379335
##########
hudi-client/hudi-spark-client/src/main/scala/org/apache/hudi/BaseSparkInternalRowReaderContext.java:
##########
@@ -110,6 +112,23 @@ public HoodieRecord<InternalRow> constructHoodieRecord(BufferedRecord<InternalRo
     return new HoodieSparkRecord(hoodieKey, row, HoodieInternalRowUtils.getCachedSchema(schema), false);
}
+ @Override
+  public InternalRow constructEngineRecord(Schema schema,
+                                           Map<Integer, Object> updateValues,
+                                           BufferedRecord<InternalRow> baseRecord) {
+ List<Schema.Field> fields = schema.getFields();
+ Object[] values = new Object[fields.size()];
+ for (Schema.Field field : fields) {
+ int pos = field.pos();
+ if (updateValues.containsKey(pos)) {
+ values[pos] = updateValues.get(pos);
+ } else {
+ values[pos] = getValue(baseRecord.getRecord(), schema, field.name());
+ }
+ }
+ return toBinaryRow(schema, new GenericInternalRow(values));
Review Comment:
Yes, there is no need to call `toBinaryRow` here, since `toBinaryRow` will
be called right before the buffered records are put into the record buffer:
https://github.com/apache/hudi/blob/f80fc25fe906686694557b5d8d2f68f9b5c9215e/hudi-common/src/main/java/org/apache/hudi/common/table/read/KeyBasedFileGroupRecordBuffer.java#L96
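For illustration only, the field-level merge that `constructEngineRecord` performs (minus the final `toBinaryRow` conversion) boils down to a positional overlay: positions present in `updateValues` win, all other positions are copied from the base record. The sketch below uses plain `Object[]` as a stand-in for Spark's `InternalRow`, and the `overlay` helper and sample field layout are hypothetical, not Hudi code:

```java
import java.util.HashMap;
import java.util.Map;

public class OverlayExample {
  // Merge update values into a copy of the base record by field position.
  // Positions present in updateValues take precedence; the rest come from baseRecord.
  static Object[] overlay(Object[] baseRecord, Map<Integer, Object> updateValues) {
    Object[] merged = new Object[baseRecord.length];
    for (int pos = 0; pos < baseRecord.length; pos++) {
      merged[pos] = updateValues.containsKey(pos) ? updateValues.get(pos) : baseRecord[pos];
    }
    return merged;
  }

  public static void main(String[] args) {
    Object[] base = {"key1", "alice", 30};
    Map<Integer, Object> updates = new HashMap<>();
    updates.put(2, 31); // update only the third field
    Object[] merged = overlay(base, updates);
    System.out.println(merged[0] + "," + merged[1] + "," + merged[2]); // prints key1,alice,31
  }
}
```

Leaving the result as a plain (non-binary) row here is safe precisely because the record buffer performs the binary conversion once, at insertion time.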
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]