the-other-tim-brown commented on code in PR #13623:
URL: https://github.com/apache/hudi/pull/13623#discussion_r2232019216


##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/KeepValuesPartialMergingUtils.java:
##########
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.common.table.read;
+
+import org.apache.avro.Schema;
+import org.apache.hudi.common.engine.HoodieReaderContext;
+import org.apache.hudi.common.util.VisibleForTesting;
+import org.apache.hudi.common.util.collection.Pair;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Class to assist with merging two versions of the record that may contain partial updates using
+ * {@link org.apache.hudi.common.table.PartialUpdateMode#KEEP_VALUES} mode.
+ */
+public class KeepValuesPartialMergingUtils<T> {
+  static KeepValuesPartialMergingUtils INSTANCE = new KeepValuesPartialMergingUtils();
+  private static final Map<Schema, Map<String, Integer>>
+      FIELD_NAME_TO_ID_MAPPING_CACHE = new ConcurrentHashMap<>();
+  private static final Map<Pair<Pair<Schema, Schema>, Schema>, Schema>
+      MERGED_SCHEMA_CACHE = new ConcurrentHashMap<>();
+
+  /**
+   * Merges records which can contain partial updates.
+   *
+   * @param older         Older record of type {@code BufferedRecord<T>}.
+   * @param oldSchema     Schema of the older record.
+   * @param newer         Newer record of type {@code BufferedRecord<T>}.
+   * @param newSchema     Schema of the newer record.
+   * @param readerSchema  Reader schema containing all the fields to read. This is used to maintain
+   * @param readerContext ReaderContext instance.
+   * @return The merged record and schema.
+   */
+  Pair<BufferedRecord<T>, Schema> mergePartialRecords(BufferedRecord<T> older,
+                                                             Schema oldSchema,
+                                                             BufferedRecord<T> newer,
+                                                             Schema newSchema,
+                                                             Schema readerSchema,
+                                                             HoodieReaderContext<T> readerContext) {
+    // The merged schema contains fields that only appear in either older and/or newer record.
+    Schema mergedSchema =
+        getCachedMergedSchema(oldSchema, newSchema, readerSchema);
+    boolean isNewerPartial = isPartial(newSchema, mergedSchema);
+    if (!isNewerPartial) {
+      return Pair.of(newer, newSchema);
+    }
+    Set<String> fieldNamesInNewRecord =
+        getCachedFieldNameToIdMapping(newSchema).keySet();
+    // Collect field values.
+    List<Object> values = new ArrayList<>();
+    List<Schema.Field> mergedSchemaFields = mergedSchema.getFields();
+    for (Schema.Field mergedSchemaField : mergedSchemaFields) {
+      String fieldName = mergedSchemaField.name();
+      if (fieldNamesInNewRecord.contains(fieldName)) { // field present in newer record.

Review Comment:
   At this point in the code, if the merge mode is event time, are the older and newer records already in the correct order, or does "older" here just mean it came from the previous set of data?



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/PartialUpdateHandler.java:
##########
@@ -41,17 +42,19 @@
  * {@link BufferedRecordMergerFactory.CommitTimeBufferedRecordPartialUpdateMerger} and
  * {@link BufferedRecordMergerFactory.EventTimeBufferedRecordPartialUpdateMerger}.
  */
-public class PartialUpdateStrategy<T> {
+public class PartialUpdateHandler<T> {

Review Comment:
   I'm not seeing unit tests for this; let's make sure there are some unit test cases here.
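For context, the core KEEP_VALUES contract a unit test should pin down can be sketched with plain maps standing in for `BufferedRecord` and the schemas (the class and method names below are illustrative, not Hudi's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the KEEP_VALUES behavior worth asserting in a unit test:
// fields present in the newer (partial) record win, and fields it omits
// fall back to the older record's values.
class KeepValuesContractSketch {
  static Map<String, Object> keepValuesMerge(Map<String, Object> older,
                                             Map<String, Object> newer) {
    Map<String, Object> merged = new HashMap<>(older); // start from old values
    merged.putAll(newer);                              // newer fields override
    return merged;
  }
}
```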



##########
hudi-hadoop-mr/src/main/java/org/apache/hudi/hadoop/HiveHoodieReaderContext.java:
##########
@@ -346,4 +365,80 @@ public float getProgress() throws IOException {
     }
     throw new IllegalStateException("getProgress() should not be called before a record reader has been initialized");
   }
+
+  Schema resolveUnion(Schema schema) {

Review Comment:
   `AvroSchemaUtils.resolveNullableSchema` provides similar functionality
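For reference, that style of resolution boils down to picking the single non-null branch of a `["null", T]` union. A stand-alone sketch, with type-name strings standing in for Avro `Schema` objects (illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of what resolving a nullable schema means, with union
// branches modeled as type-name strings instead of Avro Schema objects:
// a nullable union like ["null", "string"] resolves to its non-null branch.
class NullableUnionSketch {
  static String resolveNullable(List<String> unionBranches) {
    List<String> nonNull = new ArrayList<>();
    for (String branch : unionBranches) {
      if (!"null".equals(branch)) {
        nonNull.add(branch);
      }
    }
    if (nonNull.size() != 1) {
      throw new IllegalArgumentException("Not a nullable union: " + unionBranches);
    }
    return nonNull.get(0);
  }
}
```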



##########
hudi-common/src/test/java/org/apache/hudi/common/table/read/TestKeepValuesPartialMergingUtils.java:
##########
@@ -0,0 +1,334 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.common.table.read;
+
+import org.apache.hudi.common.engine.HoodieReaderContext;
+import org.apache.hudi.common.util.collection.Pair;
+
+import org.apache.avro.Schema;
+import org.apache.avro.generic.GenericData;
+import org.apache.avro.generic.GenericRecord;
+import org.apache.avro.generic.IndexedRecord;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import static org.junit.jupiter.api.Assertions.assertSame;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+class TestKeepValuesPartialMergingUtils {
+  private KeepValuesPartialMergingUtils<IndexedRecord> keepValuesPartialMergingUtils;
+  private HoodieReaderContext<IndexedRecord> mockReaderContext;
+  private Schema fullSchema;
+  private Schema partialSchema;
+  private Schema readerSchema;
+
+  @BeforeEach
+  void setUp() {
+    keepValuesPartialMergingUtils = new KeepValuesPartialMergingUtils<>();
+    mockReaderContext = mock(HoodieReaderContext.class);
+
+    // Create test schemas
+    fullSchema = Schema.createRecord("TestRecord", "Test record", "test", false);
+    fullSchema.setFields(Arrays.asList(
+        new Schema.Field("id", Schema.create(Schema.Type.STRING), "ID field", null),
+        new Schema.Field("name", Schema.create(Schema.Type.STRING), "Name field", null),
+        new Schema.Field("age", Schema.create(Schema.Type.INT), "Age field", null),
+        new Schema.Field("city", Schema.create(Schema.Type.STRING), "City field", null)
+    ));
+
+    partialSchema = Schema.createRecord("TestRecord", "Test record", "test", false);
+    partialSchema.setFields(Arrays.asList(
+        new Schema.Field("id", Schema.create(Schema.Type.STRING), "ID field", null),
+        new Schema.Field("name", Schema.create(Schema.Type.STRING), "Name field", null)
+    ));
+
+    readerSchema = Schema.createRecord("TestRecord", "Test record", "test", false);

Review Comment:
   `readerSchema` is the same as `fullSchema`. I think you should add another schema that is different, so that the case where the new, old, and reader schemas are all different is covered.



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/PartialUpdateHandler.java:
##########
@@ -63,44 +66,65 @@ BufferedRecord<T> partialMerge(BufferedRecord<T> newRecord,
                                  BufferedRecord<T> oldRecord,
                                  Schema newSchema,
                                  Schema oldSchema,
+                                 Schema readerSchema,
                                  boolean keepOldMetadataColumns) {
     // Note that: When either newRecord or oldRecord is a delete record,
     //            skip partial update since delete records do not have meaningful columns.
-    if (partialUpdateMode == PartialUpdateMode.NONE
-        || null == oldRecord
+    if (null == oldRecord
         || newRecord.isDelete()
         || oldRecord.isDelete()) {
       return newRecord;
     }
 
     switch (partialUpdateMode) {
       case KEEP_VALUES:
-      case FILL_DEFAULTS:
-        return newRecord;
+        return reconcileBasedOnKeepValues(newRecord, oldRecord, newSchema, oldSchema, readerSchema);
       case IGNORE_DEFAULTS:
         return reconcileDefaultValues(
-            newRecord, oldRecord, newSchema, oldSchema, keepOldMetadataColumns);
-      case IGNORE_MARKERS:
+            newRecord, oldRecord, newSchema, oldSchema, keepOldMetadataColumns, false);
+      case IGNORE_DEFAULTS_NULLS:
+        return reconcileDefaultValues(newRecord, oldRecord, newSchema, oldSchema, keepOldMetadataColumns, true);
+      case FILL_UNAVAILABLE:
         return reconcileMarkerValues(

Review Comment:
   This method fetches a config per record for the marker value; let's pull the relevant configs out in the constructor to decrease the merging overhead.
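Sketching the suggestion (the property key and class names below are hypothetical, not Hudi's actual ones):

```java
import java.util.Map;

// Hypothetical sketch of hoisting the marker-value config lookup into the
// constructor so the per-record merge path avoids repeated config reads.
// The property key and class names are illustrative, not Hudi's real API.
class MarkerAwareMerger {
  private final String markerValue; // resolved once at construction

  MarkerAwareMerger(Map<String, String> props) {
    this.markerValue = props.getOrDefault("partial.update.marker.value", "__MARKER__");
  }

  // Hot path: a plain equality check per field, no config access.
  boolean isUnavailable(Object fieldValue) {
    return markerValue.equals(String.valueOf(fieldValue));
  }
}
```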



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/KeepValuesPartialMergingUtils.java:
##########
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.common.table.read;
+
+import org.apache.avro.Schema;
+import org.apache.hudi.common.engine.HoodieReaderContext;
+import org.apache.hudi.common.util.VisibleForTesting;
+import org.apache.hudi.common.util.collection.Pair;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Class to assist with merging two versions of the record that may contain partial updates using
+ * {@link org.apache.hudi.common.table.PartialUpdateMode#KEEP_VALUES} mode.
+ */
+public class KeepValuesPartialMergingUtils<T> {
+  static KeepValuesPartialMergingUtils INSTANCE = new KeepValuesPartialMergingUtils();
+  private static final Map<Schema, Map<String, Integer>>
+      FIELD_NAME_TO_ID_MAPPING_CACHE = new ConcurrentHashMap<>();
+  private static final Map<Pair<Pair<Schema, Schema>, Schema>, Schema>
+      MERGED_SCHEMA_CACHE = new ConcurrentHashMap<>();
+
+  /**
+   * Merges records which can contain partial updates.
+   *
+   * @param older         Older record of type {@code BufferedRecord<T>}.
+   * @param oldSchema     Schema of the older record.
+   * @param newer         Newer record of type {@code BufferedRecord<T>}.
+   * @param newSchema     Schema of the newer record.
+   * @param readerSchema  Reader schema containing all the fields to read. This is used to maintain
+   * @param readerContext ReaderContext instance.
+   * @return The merged record and schema.
+   */
+  Pair<BufferedRecord<T>, Schema> mergePartialRecords(BufferedRecord<T> older,
+                                                             Schema oldSchema,
+                                                             BufferedRecord<T> newer,
+                                                             Schema newSchema,
+                                                             Schema readerSchema,
+                                                             HoodieReaderContext<T> readerContext) {
+    // The merged schema contains fields that only appear in either older and/or newer record.
+    Schema mergedSchema =
+        getCachedMergedSchema(oldSchema, newSchema, readerSchema);
+    boolean isNewerPartial = isPartial(newSchema, mergedSchema);
+    if (!isNewerPartial) {
+      return Pair.of(newer, newSchema);
+    }
+    Set<String> fieldNamesInNewRecord =
+        getCachedFieldNameToIdMapping(newSchema).keySet();
+    // Collect field values.
+    List<Object> values = new ArrayList<>();
+    List<Schema.Field> mergedSchemaFields = mergedSchema.getFields();
+    for (Schema.Field mergedSchemaField : mergedSchemaFields) {
+      String fieldName = mergedSchemaField.name();
+      if (fieldNamesInNewRecord.contains(fieldName)) { // field present in newer record.
+        values.add(readerContext.getValue(newer.getRecord(), newSchema, fieldName));
+      } else { // if not present in newer record, pick from old record.
+        values.add(readerContext.getValue(older.getRecord(), oldSchema, fieldName));
+      }
+    }
+    // Build merged record.
+    T engineRecord = readerContext.createEngineRecord(mergedSchema, values);
+    BufferedRecord<T> mergedRecord = new BufferedRecord<>(
+        newer.getRecordKey(),
+        newer.getOrderingValue(),
+        engineRecord,
+        readerContext.encodeAvroSchema(mergedSchema),
+        newer.isDelete());
+    return Pair.of(mergedRecord, mergedSchema);
+  }
+
+  /**
+   * @param avroSchema Avro schema.
+   * @return The field name to ID mapping.
+   */
+  static Map<String, Integer> getCachedFieldNameToIdMapping(Schema avroSchema) {
+    return FIELD_NAME_TO_ID_MAPPING_CACHE.computeIfAbsent(avroSchema, schema -> {
+      Map<String, Integer> schemaFieldIdMapping = new HashMap<>();
+      int fieldId = 0;
+      for (Schema.Field field : schema.getFields()) {
+        schemaFieldIdMapping.put(field.name(), fieldId);
+        fieldId++;
+      }
+      return schemaFieldIdMapping;
+    });
+  }
+
+  /**
+   * Merges the two schemas so the merged schema contains all the fields from the two schemas,
+   * with the same ordering of fields based on the provided reader schema.
+   *
+   * @param oldSchema    Old schema.
+   * @param newSchema    New schema.
+   * @param readerSchema Reader schema containing all the fields to read.
+   * @return             The merged Avro schema.
+   */
+  static Schema getCachedMergedSchema(Schema oldSchema,
+                                             Schema newSchema,
+                                             Schema readerSchema) {
+    return MERGED_SCHEMA_CACHE.computeIfAbsent(
+        Pair.of(Pair.of(oldSchema, newSchema), readerSchema), schemaPair -> {
+          Schema schema1 = schemaPair.getLeft().getLeft();
+          Schema schema2 = schemaPair.getLeft().getRight();
+          Schema refSchema = schemaPair.getRight();
+          Set<String> schema1Keys =
+              getCachedFieldNameToIdMapping(schema1).keySet();
+          Set<String> schema2Keys =
+              getCachedFieldNameToIdMapping(schema2).keySet();
+          List<Schema.Field> mergedFieldList = new ArrayList<>();
+          for (int i = 0; i < refSchema.getFields().size(); i++) {

Review Comment:
   The `refSchema` is the `newSchema`; does this mean we are only looking for fields in the new schema?



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/KeepValuesPartialMergingUtils.java:
##########
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.common.table.read;
+
+import org.apache.avro.Schema;
+import org.apache.hudi.common.engine.HoodieReaderContext;
+import org.apache.hudi.common.util.VisibleForTesting;
+import org.apache.hudi.common.util.collection.Pair;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Class to assist with merging two versions of the record that may contain partial updates using
+ * {@link org.apache.hudi.common.table.PartialUpdateMode#KEEP_VALUES} mode.
+ */
+public class KeepValuesPartialMergingUtils<T> {
+  static KeepValuesPartialMergingUtils INSTANCE = new KeepValuesPartialMergingUtils();
+  private static final Map<Schema, Map<String, Integer>>
+      FIELD_NAME_TO_ID_MAPPING_CACHE = new ConcurrentHashMap<>();
+  private static final Map<Pair<Pair<Schema, Schema>, Schema>, Schema>
+      MERGED_SCHEMA_CACHE = new ConcurrentHashMap<>();
+
+  /**
+   * Merges records which can contain partial updates.
+   *
+   * @param older         Older record of type {@code BufferedRecord<T>}.
+   * @param oldSchema     Schema of the older record.
+   * @param newer         Newer record of type {@code BufferedRecord<T>}.
+   * @param newSchema     Schema of the newer record.
+   * @param readerSchema  Reader schema containing all the fields to read. This is used to maintain
+   *                      the ordering of the fields of the merged record.
+   * @param readerContext ReaderContext instance.
+   * @return The merged record and schema.
+   */
+  Pair<BufferedRecord<T>, Schema> mergePartialRecords(BufferedRecord<T> older,
+                                                             Schema oldSchema,
+                                                             BufferedRecord<T> newer,
+                                                             Schema newSchema,
+                                                             Schema readerSchema,
+                                                             HoodieReaderContext<T> readerContext) {
+    // The merged schema contains fields that only appear in either older and/or newer record.

Review Comment:
   nit: You can also describe this as the union of the fields
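In other words, the merged field set is the union of the two schemas' fields, ordered by the reader schema; roughly (with field names standing in for full `Schema.Field` objects):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of the merged-schema construction: take the union of the old and new
// schemas' field names, keeping the reader schema's field ordering. Field
// names stand in for full Schema.Field objects for brevity.
class FieldUnionSketch {
  static List<String> mergedFieldOrder(List<String> readerFields,
                                       Set<String> oldFields,
                                       Set<String> newFields) {
    List<String> merged = new ArrayList<>();
    for (String field : readerFields) {
      if (oldFields.contains(field) || newFields.contains(field)) {
        merged.add(field);
      }
    }
    return merged;
  }
}
```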



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/KeepValuesPartialMergingUtils.java:
##########
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.common.table.read;
+
+import org.apache.avro.Schema;
+import org.apache.hudi.common.engine.HoodieReaderContext;
+import org.apache.hudi.common.util.VisibleForTesting;
+import org.apache.hudi.common.util.collection.Pair;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Class to assist with merging two versions of the record that may contain partial updates using
+ * {@link org.apache.hudi.common.table.PartialUpdateMode#KEEP_VALUES} mode.
+ */
+public class KeepValuesPartialMergingUtils<T> {
+  static KeepValuesPartialMergingUtils INSTANCE = new KeepValuesPartialMergingUtils();
+  private static final Map<Schema, Map<String, Integer>>
+      FIELD_NAME_TO_ID_MAPPING_CACHE = new ConcurrentHashMap<>();
+  private static final Map<Pair<Pair<Schema, Schema>, Schema>, Schema>
+      MERGED_SCHEMA_CACHE = new ConcurrentHashMap<>();
+
+  /**
+   * Merges records which can contain partial updates.
+   *
+   * @param older         Older record of type {@code BufferedRecord<T>}.
+   * @param oldSchema     Schema of the older record.

Review Comment:
   Can you note that this schema may not contain all the fields, since operations may write only the values that change?



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/KeyBasedFileGroupRecordBuffer.java:
##########
@@ -72,6 +72,9 @@ public void processDataBlock(HoodieDataBlock dataBlock, Option<KeySpec> keySpecO
       // When a data block contains partial updates, subsequent record merging must always use
       // partial merging.
       enablePartialMerging = true;
+      if (partialUpdateMode.isEmpty()) {
+        this.partialUpdateMode = Option.of(PartialUpdateMode.KEEP_VALUES);

Review Comment:
   Synced with Siva; this is because of the flag above. This block of code is hit by a MERGE INTO command that only writes out partial updates to the log files.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
