satish created HUDI-1533:
----------------------------
Summary: SerializableSchema doesn't work for some schemas
Key: HUDI-1533
URL: https://issues.apache.org/jira/browse/HUDI-1533
Project: Apache Hudi
Issue Type: Sub-task
Reporter: satish

java.util.concurrent.CompletionException: org.apache.spark.SparkException: Task not serializable
    at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
    at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
    at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:416)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:406)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:163)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
    at org.apache.spark.rdd.RDD$$anonfun$keyBy$1.apply(RDD.scala:1578)
    at org.apache.spark.rdd.RDD$$anonfun$keyBy$1.apply(RDD.scala:1577)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
    at org.apache.spark.rdd.RDD.keyBy(RDD.scala:1577)
    at org.apache.spark.rdd.RDD$$anonfun$sortBy$1.apply(RDD.scala:644)
    at org.apache.spark.rdd.RDD$$anonfun$sortBy$1.apply(RDD.scala:646)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
    at org.apache.spark.rdd.RDD.sortBy(RDD.scala:643)
    at org.apache.spark.api.java.JavaRDD.sortBy(JavaRDD.scala:206)
    at org.apache.hudi.execution.bulkinsert.RDDCustomColumnsSortPartitioner.repartitionRecords(RDDCustomColumnsSortPartitioner.java:53)
    at org.apache.hudi.execution.bulkinsert.RDDCustomColumnsSortPartitioner.repartitionRecords(RDDCustomColumnsSortPartitioner.java:37)
    at org.apache.hudi.table.action.commit.SparkBulkInsertHelper.bulkInsert(SparkBulkInsertHelper.java:103)
    at org.apache.hudi.client.clustering.run.strategy.SparkSortAndSizeExecutionStrategy.performClustering(SparkSortAndSizeExecutionStrategy.java:74)
    at org.apache.hudi.client.clustering.run.strategy.SparkSortAndSizeExecutionStrategy.performClustering(SparkSortAndSizeExecutionStrategy.java:50)
    at org.apache.hudi.table.action.cluster.SparkExecuteClusteringCommitActionExecutor.lambda$runClusteringForGroupAsync$3(SparkExecuteClusteringCommitActionExecutor.java:121)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
    ... 5 more
Caused by: java.io.UTFDataFormatException
    at java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:2164)
    at java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:2007)
    at java.io.ObjectOutputStream.writeUTF(ObjectOutputStream.java:869)
    at org.apache.hudi.common.config.SerializableSchema.writeObjectTo(SerializableSchema.java:65)
    at org.apache.hudi.common.config.SerializableSchema.writeObject(SerializableSchema.java:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1128)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:413)
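
The root cause is the bottom of the trace: SerializableSchema.writeObjectTo pushes the whole schema string through ObjectOutputStream.writeUTF, and writeUTF uses a 2-byte length prefix, so it throws java.io.UTFDataFormatException for any string whose modified-UTF-8 encoding exceeds 65535 bytes. A minimal, Hudi-independent sketch that reproduces the limit:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class WriteUtfLimit {
  public static void main(String[] args) throws IOException {
    // 70,000 ASCII chars encode to 70,000 bytes of modified UTF-8,
    // past the 65,535-byte maximum representable by writeUTF's
    // 2-byte length prefix.
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 70_000; i++) {
      sb.append('a');
    }
    try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
      out.writeUTF(sb.toString()); // throws java.io.UTFDataFormatException
    }
  }
}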
This happened with a pretty large schema: schema.toString().length = 172673. Some schemas are exercised in the existing tests, but the largest schema string covered there is under 2,000 characters:
https://github.com/apache/hudi/blob/master/hudi-common/src/test/java/org/apache/hudi/common/util/TestSerializableSchema.java#L42
We need to find a good alternative for large schemas; one possible direction is sketched below.
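
A minimal sketch of one alternative, not the actual SerializableSchema fix (helper names here are illustrative only): serialize the string as a length-prefixed UTF-8 byte array, since writeInt plus a raw byte array has no 65535-byte ceiling.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.charset.StandardCharsets;

class LargeStringSerde {
  // Write the string as raw UTF-8 bytes behind a 4-byte length prefix,
  // instead of writeUTF's 2-byte prefix.
  static void writeLargeString(ObjectOutputStream out, String s) throws IOException {
    byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
    out.writeInt(bytes.length);
    out.write(bytes);
  }

  // Read the length prefix, then the exact number of bytes, and decode.
  static String readLargeString(ObjectInputStream in) throws IOException {
    byte[] bytes = new byte[in.readInt()];
    in.readFully(bytes);
    return new String(bytes, StandardCharsets.UTF_8);
  }
}

A variant that stays closer to the current code would split the string into chunks under 64KB and write a chunk count followed by each piece via writeUTF; either approach avoids the UTFDataFormatException for schemas of this size.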