OK, I think I found it: it's the batch interval setting. From what I can see, if we want a "realtime" stream to the database we have to set it to 1; otherwise the sink will wait until the batch interval count is reached.
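To illustrate what I mean, here is a minimal stand-alone sketch of how a batch-interval sink behaves (this is not Flink code, just a hypothetical buffer with the same flush rule: rows are only written out once the buffer reaches batchSize, so a setting of 1 flushes every row immediately):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a batch-interval sink: rows accumulate in a
// buffer and are flushed only when the buffer reaches batchSize.
class BatchingSink {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    private int flushedCount = 0;

    BatchingSink(int batchSize) { this.batchSize = batchSize; }

    void write(String row) {
        buffer.add(row);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    private void flush() {
        flushedCount += buffer.size(); // stand-in for JDBC executeBatch()
        buffer.clear();
    }

    int flushed()  { return flushedCount; }
    int buffered() { return buffer.size(); }
}

public class Main {
    public static void main(String[] args) {
        BatchingSink sink = new BatchingSink(5000);
        for (int i = 0; i < 5001; i++) sink.write("row-" + i);
        // 5000 rows reach the database; the 5001st sits in the buffer
        System.out.println(sink.flushed() + " flushed, " + sink.buffered() + " buffered");
    }
}
```

With batchSize 1 the same loop would flush all 5001 rows as they arrive, which is the "realtime" behaviour described above.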
The batch interval mechanism doesn't seem correct, though. If the default size is 5000 and you need to insert 5001, will you never get that last record?

On Tue, 15 Oct 2019 at 15:54, John Smith <java.dev....@gmail.com> wrote:

> Hi, using 1.8.0
>
> I have the following job: https://pastebin.com/ibZUE8Qx
>
> So the job does the following steps...
> 1- Consume from Kafka and return JsonObject
> 2- Map JsonObject to MyPojo
> 3- Convert the stream to a table
> 4- Insert the table into the JDBC sink table
> 5- Print the table.
>
> - The job seems to work with no errors and I can see the row printed to
> the console, but I see nothing in my database.
> - If I put an invalid host for the database and restart the job, I get a
> connection SQLException error. So at least we know that works.
> - If I make a typo in the INSERT INTO statement, like INSERTS INTO
> non_existing_table, there are no exceptions thrown, the print happens, and
> the stream continues to work.
> - If I drop the table from the database, same thing: no exceptions
> thrown, the print happens, the stream continues to work.
>
> So am I missing something?
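For reference, in Flink 1.8 the batch size is set on the JDBC sink's builder. This is only a configuration sketch, not a complete job: the driver class, URL, query, and column types below are placeholders, and the class/method names are from the flink-jdbc module's JDBCAppendTableSink as I understand it:

```java
// Configuration sketch (Flink 1.8 flink-jdbc); connection details are placeholders.
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.io.jdbc.JDBCAppendTableSink;

JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
    .setDrivername("org.postgresql.Driver")            // placeholder driver
    .setDBUrl("jdbc:postgresql://db-host:5432/mydb")   // placeholder URL
    .setQuery("INSERT INTO my_table (id, payload) VALUES (?, ?)")
    .setParameterTypes(Types.INT, Types.STRING)
    .setBatchSize(1)  // flush every row instead of waiting for the default batch
    .build();
```

With setBatchSize(1) each row is written as it arrives, at the cost of one database round trip per record.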