ccciudatu commented on code in PR #49:
URL: https://github.com/apache/datafusion-ray/pull/49#discussion_r1856555348


##########
src/shuffle/codec.rs:
##########
@@ -102,18 +99,23 @@ impl PhysicalExtensionCodec for ShuffleCodec {
             };
             PlanType::ShuffleReader(reader)
         } else if let Some(writer) = 
node.as_any().downcast_ref::<ShuffleWriterExec>() {
-            let plan = 
PhysicalPlanNode::try_from_physical_plan(writer.plan.clone(), self)?;
             let partitioning =
                 
encode_partitioning_scheme(writer.properties().output_partitioning())?;
             let writer = ShuffleWriterExecNode {
                 stage_id: writer.stage_id as u32,
-                plan: Some(plan),
+                // No need to redundantly serialize the child plan, as input 
plan(s) are recursively
+                // serialized by PhysicalPlanNode and will be available as 
`inputs` in `try_decode`.
+                // TODO: remove this field from the proto definition?
+                plan: None,

Review Comment:
   @austin362667 is this assessment legit, or is there a reason I missed for serialising the child plan as part of the `ShuffleWriterExecNode` proto instead of relying on the built-in recursive input plan serialisation?
   
   As a side note, this change is needed so that we don't assume the `ShuffleCodec` (i.e. `self` in the `try_from_physical_plan(writer.plan.clone(), self)` call) is the only extension codec needed to serialise the whole plan, but instead allow additional extension codecs to be registered without requiring this particular codec to be aware of them.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

