Hey Cody, I would have responded to the mailing list but it looks like this
thread got aged off. I have a problem where one of my topics dumps more
data than my Spark job can keep up with, so we limit the input rate with
maxRatePerPartition. Eventually, when the data is aged off, I get the
OffsetOutOfRangeException from Kafka, as we would expect. While we work
towards more efficient processing of that topic, or get more resources, I'd
like to be able to log the error and keep the application running instead
of failing. Is there a place where I can catch that error before it gets to
org.apache.spark.util.Utils$.logUncaughtExceptions? Maybe somewhere in
DirectKafkaInputDStream::compute?
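
In case it helps frame the question, here's the kind of workaround I've
been sketching on the driver side (untested; SkipAgedOffBatches, process,
and the per-record placeholder are all made up for illustration, and I'm
assuming the executor-side OffsetOutOfRangeException surfaces as the
SparkException that the output action rethrows on the driver):

import org.apache.log4j.Logger
import org.apache.spark.SparkException
import org.apache.spark.streaming.dstream.DStream

object SkipAgedOffBatches {
  private val log = Logger.getLogger(getClass)

  def process(stream: DStream[(String, String)]): Unit = {
    stream.foreachRDD { rdd =>
      try {
        rdd.foreach { case (_, value) =>
          // placeholder for our real per-record work
          ()
        }
      } catch {
        // Assumption: the task-level OffsetOutOfRangeException shows up
        // in the message of the SparkException thrown by the action.
        case e: SparkException if Option(e.getMessage)
            .exists(_.contains("OffsetOutOfRangeException")) =>
          log.warn("Kafka aged off data we hadn't processed yet; skipping batch", e)
      }
    }
  }
}

That only skips the failed batch after the fact, though, which is why I'm
wondering whether there's a cleaner hook in compute() itself.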

https://mail-archives.apache.org/mod_mbox/spark-user/201512.mbox/%3CCAKWX9VUoNd4ATGF+0TkNJ+9=b8r2nr9pt7sbgr-bv4nnttk...@mail.gmail.com%3E
-- 
Dan ✆
