[ 
https://issues.apache.org/jira/browse/FLINK-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15950319#comment-15950319
 ] 

Luke Hutchison commented on FLINK-6115:
---------------------------------------

[~greghogan] You're assuming that data with {{null}} values in it is "bad 
data", that getting a {{null}} value is always going to be unexpected, or 
that it will trigger {{NullPointerException}}s downstream. Far from it: there 
is a huge range of valid and intentional uses for {{null}} values. Many 
programmers won't be able to use {{Optional}} for years, because they are 
stuck on Java 7 for reasons beyond their control. But even with {{Optional}} 
available, many Java 8 programmers still use {{null}} for a wide range of 
purposes, and will until the language dies a very slow death. (For example, 
it will always be impossible to tell, without an extra call, whether 
{{Map#get(key)}} returned {{null}} because there was no value for that key or 
because {{null}} was mapped to that key -- but programmers know and expect 
this, for better or for worse. See the sketch below.)
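
A minimal plain-Java illustration of that {{Map#get(key)}} ambiguity (nothing 
Flink-specific; the class name is just for the example):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class MapGetAmbiguity {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", null);  // key "a" is present, explicitly mapped to null
                             // key "b" is never inserted at all

        // Both calls return null, so the return value alone cannot
        // distinguish "mapped to null" from "not present":
        System.out.println(map.get("a"));  // null
        System.out.println(map.get("b"));  // null

        // The extra call needed to tell the two cases apart:
        System.out.println(map.containsKey("a"));  // true
        System.out.println(map.containsKey("b"));  // false
    }
}
{code}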

The most ironic thing about this whole conversation, though, is that the 
claim that {{null}} support was omitted for performance reasons is followed 
by the recommendation to use {{Optional}}:

(1) both the performance overhead _and_ the memory overhead of wrapping types 
in {{Optional}} are _significantly higher_ than using a bitfield in the wire 
format of a serialized tuple to mark null fields (see the sketch after this 
list);

(2) the overhead of actually _using_ a serialized tuple for any of the things 
you ever need to serialize one for (i.e., writing it to persistent storage 
and/or sending it over the wire) is orders of magnitude greater than the 
serialization and deserialization process itself, which makes the impact of 
adding a bitfield vanishingly small. Did you actually benchmark the impact of 
serializing {{null}} before concluding it would be too inefficient, and are 
those numbers available? Hard numbers, particularly from a head-to-head 
comparison of tuples without {{null}}s, {{Record}}s with {{null}}s, and 
tuples wrapped in {{Optional}}, would be an important factor in the community 
decision process that you linked.
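
To make (1) concrete, here is a minimal sketch of what a null-marking 
bitfield in the wire format could look like. This is plain {{java.io}}, not 
Flink's actual {{TupleSerializer}}; the class and method names are 
hypothetical, and it handles only a two-field tuple of strings:

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch only: marks null fields with one bit each in a leading byte,
// then writes the non-null fields as usual.
public class NullBitmaskSketch {

    static byte[] serialize(String f0, String f1) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);

        // Bit i set means "field i is null".
        int nullMask = (f0 == null ? 1 : 0) | (f1 == null ? 2 : 0);
        out.writeByte(nullMask);

        if (f0 != null) out.writeUTF(f0);
        if (f1 != null) out.writeUTF(f1);
        return bytes.toByteArray();
    }

    static String[] deserialize(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int nullMask = in.readByte();
        String f0 = (nullMask & 1) != 0 ? null : in.readUTF();
        String f1 = (nullMask & 2) != 0 ? null : in.readUTF();
        return new String[] { f0, f1 };
    }

    public static void main(String[] args) throws IOException {
        String[] fields = deserialize(serialize("hello", null));
        System.out.println(fields[0] + ", " + fields[1]);  // hello, null
    }
}
{code}

The per-record cost is one extra byte here (or one bit per field for wider 
tuples), whereas wrapping every field in {{Optional}} adds an object header 
and an extra reference per field before serialization even starts.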

> Need more helpful error message when trying to serialize a tuple with a null 
> field
> ----------------------------------------------------------------------------------
>
>                 Key: FLINK-6115
>                 URL: https://issues.apache.org/jira/browse/FLINK-6115
>             Project: Flink
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.0
>            Reporter: Luke Hutchison
>
> When Flink tries to serialize a tuple with a null field, you get the 
> following, which has no information about where in the program the problem 
> occurred (all the stack trace lines are in Flink, not in user code).
> {noformat}
> Exception in thread "main" 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>       at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:900)
>       at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
>       at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
>       at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>       at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>       at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>       at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>       at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>       at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>       at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>       at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.IllegalArgumentException: The record must not be null.
>       at 
> org.apache.flink.api.common.typeutils.base.array.StringArraySerializer.serialize(StringArraySerializer.java:73)
>       at 
> org.apache.flink.api.common.typeutils.base.array.StringArraySerializer.serialize(StringArraySerializer.java:33)
>       at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializer.serialize(TupleSerializer.java:124)
>       at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializer.serialize(TupleSerializer.java:30)
>       at 
> org.apache.flink.runtime.plugable.SerializationDelegate.write(SerializationDelegate.java:56)
>       at 
> org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializer.addRecord(SpanningRecordSerializer.java:77)
>       at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:113)
>       at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:88)
>       at 
> org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
>       at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>       at 
> org.apache.flink.runtime.operators.chaining.ChainedMapDriver.collect(ChainedMapDriver.java:79)
>       at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>       at 
> org.apache.flink.api.java.operators.translation.PlanFilterOperator$FlatMapFilter.flatMap(PlanFilterOperator.java:51)
>       at 
> org.apache.flink.runtime.operators.FlatMapDriver.run(FlatMapDriver.java:108)
>       at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:490)
>       at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:355)
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655)
>       at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The only thing I can tell from this is that it happened somewhere in a 
> flatMap (but I have dozens of them in my code). Surely there's a way to pull 
> out the source file name and line number from the program DAG node when 
> errors like this occur?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
