ericm-db commented on code in PR #49277:
URL: https://github.com/apache/spark/pull/49277#discussion_r1915953518


##########
sql/core/src/main/scala/org/apache/spark/sql/avro/SchemaConverters.scala:
##########
@@ -374,67 +374,29 @@ object SchemaConverters extends Logging {
     }
   }
 
-  private def getDefaultValue(dataType: DataType): Any = {
-    def createNestedDefault(st: StructType): java.util.HashMap[String, Any] = {
-      val defaultMap = new java.util.HashMap[String, Any]()
-      st.fields.foreach { field =>
-        field.dataType match {
-          case nested: StructType =>
-            // For nested structs, recursively create the default structure
-            defaultMap.put(field.name, createNestedDefault(nested))
-          case _ =>
-            // For leaf fields, use null
-            defaultMap.put(field.name, null)
-        }
+  private def createDefaultStruct(st: StructType): java.util.HashMap[String, Any] = {

Review Comment:
   Actually, I'm not sure that making a whole nested struct null really makes sense for schema evolution; going through and setting each nested field to have a null default seems better to me.
   Say we have added a deeply nested field to a struct. Setting the enclosing struct to null doesn't make sense to me; I think only that new field should default to null.
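   For illustration, a minimal self-contained sketch of the per-field approach described above. It mirrors the removed `createNestedDefault` helper from the diff; the wrapping object name is hypothetical:
   ```scala
   import org.apache.spark.sql.types.StructType

   object NestedDefaults {
     // Recursively build a default value map: nested structs get their own
     // default maps, and only leaf fields default to null. This keeps the
     // enclosing struct non-null when a deeply nested field is added.
     def createNestedDefault(st: StructType): java.util.HashMap[String, Any] = {
       val defaultMap = new java.util.HashMap[String, Any]()
       st.fields.foreach { field =>
         field.dataType match {
           case nested: StructType =>
             // Recurse so a newly added deeply nested field defaults to null
             // without nulling out the struct that contains it.
             defaultMap.put(field.name, createNestedDefault(nested))
           case _ =>
             // Leaf fields default to null.
             defaultMap.put(field.name, null)
         }
       }
       defaultMap
     }
   }
   ```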



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
