You raised a good point. Fortunately, there should be a simple way to fix
this.
The Kafka Sink Function should implement the "Checkpointed" interface. It
will get a call to the "snapshotState()" method whenever a checkpoint
happens. Inside that call, it should then sync on the callbacks, and only
return once all pending records have been acknowledged.
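A minimal sketch of that pattern, where a single-thread executor stands in
for the Kafka producer's asynchronous acknowledgement callbacks (an
illustration only, not the actual FlinkKafkaProducer code):

    import java.util.concurrent.{Executors, ExecutorService}
    import java.util.concurrent.atomic.AtomicLong

    import org.apache.flink.configuration.Configuration
    import org.apache.flink.streaming.api.checkpoint.Checkpointed
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

    // A sink that blocks each checkpoint until every in-flight record
    // has been acknowledged by its callback.
    class AckWaitingSink[T] extends RichSinkFunction[T] with Checkpointed[java.lang.Long] {

      @transient private var pending: AtomicLong = _
      @transient private var fakeTransport: ExecutorService = _

      override def open(parameters: Configuration): Unit = {
        pending = new AtomicLong(0L)
        fakeTransport = Executors.newSingleThreadExecutor()
      }

      override def invoke(value: T): Unit = {
        pending.incrementAndGet()
        // Stand-in for producer.send(record, callback): the callback
        // fires asynchronously once the record is acknowledged.
        fakeTransport.submit(new Runnable {
          override def run(): Unit = pending.synchronized {
            pending.decrementAndGet()
            pending.notifyAll()
          }
        })
      }

      // Called at every checkpoint: sync on the callbacks and return
      // only once nothing is in flight any more.
      override def snapshotState(checkpointId: Long, checkpointTimestamp: Long): java.lang.Long =
        pending.synchronized {
          while (pending.get() > 0) pending.wait()
          checkpointId
        }

      override def restoreState(state: java.lang.Long): Unit = {
        // Nothing to restore here; the state is just the checkpoint id.
      }

      override def close(): Unit = if (fakeTransport != null) fakeTransport.shutdown()
    }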
Hi David!
You are using the JDBC format that was written for the batch API in the
streaming API.
While that should actually work, it is a somewhat new and less tested
combination. Let's double-check that the call to open() is properly forwarded.
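In the meantime, here is a minimal sketch of how the wiring is supposed to
look. The driver, URL, query, and column types are placeholder values
following the website example, not your actual setup:

    import org.apache.flink.api.common.typeinfo.BasicTypeInfo
    import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
    import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}
    import org.apache.flink.api.java.typeutils.TupleTypeInfo
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

    object JdbcStreamRead {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        // Build the format only; the runtime calls open() on each parallel
        // instance once the job starts, so no lifecycle calls belong here.
        val format = JDBCInputFormat.buildJDBCInputFormat()
          .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
          .setDBUrl("jdbc:derby:memory:ebookshop")
          .setQuery("SELECT id, title FROM books")
          .finish()
          .asInstanceOf[JDBCInputFormat[JTuple2[java.lang.Integer, String]]]

        // Tell the streaming API what tuple type the format produces.
        val rowType = new TupleTypeInfo[JTuple2[java.lang.Integer, String]](
          BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)

        env.createInput(format, rowType).print()
        env.execute("JDBC input format in a streaming job")
      }
    }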
On Sun, Jun 5, 2016 at 12:47 PM, David Olsen wrote:
Switching to use org.apache.flink.api.java.ExecutionEnvironment, my code
can successfully read data from the database through JDBCInputFormat. But I
need stream mode (and now it seems that DataSet and DataStream are not
interchangeable). Are there any additional functions required to be
executed before job submission?
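For reference, a compact sketch of the batch variant that works for me,
again with placeholder connection values:

    import org.apache.flink.api.common.typeinfo.BasicTypeInfo
    import org.apache.flink.api.java.ExecutionEnvironment
    import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
    import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}
    import org.apache.flink.api.java.typeutils.TupleTypeInfo

    object JdbcBatchRead {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment
        val format = JDBCInputFormat.buildJDBCInputFormat()
          .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
          .setDBUrl("jdbc:derby:memory:ebookshop")
          .setQuery("SELECT id, title FROM books")
          .finish()
          .asInstanceOf[JDBCInputFormat[JTuple2[java.lang.Integer, String]]]
        val rowType = new TupleTypeInfo[JTuple2[java.lang.Integer, String]](
          BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)
        env.createInput(format, rowType).print() // print() also runs the job in batch
      }
    }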
I removed the open() call when constructing the jdbc input format, but I
still obtain the "couldn't access resultSet" error.
Caused by: java.io.IOException: Couldn't access resultSet
        at org.apache.flink.api.java.io.jdbc.JDBCInputFormat.nextRecord(JDBCInputFormat.java:179)
        at org.apache.flink.api.java.io.jd...
You are not supposed to call open() yourself.
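Flink drives the full InputFormat lifecycle for you. If you want to check
the format in isolation, you can mimic those calls in a small standalone
sketch (normally you never write this yourself; connection values are
placeholders):

    import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
    import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}
    import org.apache.flink.configuration.Configuration

    // Mimics the runtime's lifecycle: configure -> createInputSplits ->
    // open(split) -> nextRecord until reachedEnd -> close. nextRecord can
    // only work after open() has created the resultSet.
    object JdbcFormatSmokeTest {
      def main(args: Array[String]): Unit = {
        val format = JDBCInputFormat.buildJDBCInputFormat()
          .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
          .setDBUrl("jdbc:derby:memory:ebookshop")
          .setQuery("SELECT id, title FROM books")
          .finish()
          .asInstanceOf[JDBCInputFormat[JTuple2[java.lang.Integer, String]]]

        format.configure(new Configuration())
        val split = format.createInputSplits(1)(0)
        format.open(split) // the step that creates the resultSet
        val reuse = new JTuple2[java.lang.Integer, String]()
        while (!format.reachedEnd()) {
          println(format.nextRecord(reuse))
        }
        format.close()
      }
    }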
On 05.06.2016 11:05, David Olsen wrote:
Following the sample on the flink website[1] to test jdbc, I encountered an
error "Couldn't access resultSet". It looks like nextRecord is called
before the open() function. However, I've called open() when I construct
the jdbc input format. Any functions I should call before job submission?
def jdbc()=
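The snippet is cut off above; a hypothetical sketch of how such a helper
might continue, with placeholder values in place of the real connection
settings, and with the manual open() call that the replies above advise
against:

    import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
    import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}

    def jdbc() = {
      val format = JDBCInputFormat.buildJDBCInputFormat()
        .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
        .setDBUrl("jdbc:derby:memory:ebookshop")
        .setQuery("SELECT id, title FROM books")
        .finish()
        .asInstanceOf[JDBCInputFormat[JTuple2[java.lang.Integer, String]]]
      format.open(null) // manual open() during construction -- the problematic call
      format
    }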