Hello,
I'm using FlinkKafkaConsumer to read messages from a Kafka topic with
JSONKeyValueDeserializationSchema. When the message is JSON-formatted,
everything works fine, but it throws a NullPointerException when processing a
message that is not JSON-formatted. I try to catch the exception but cannot do
t
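A common workaround (a sketch, not from this thread) is to deserialize the raw bytes yourself and return null for messages that are not valid JSON; per the Kafka connector documentation, returning null lets the consumer skip the corrupted record. A minimal value-only sketch, assuming the Flink 1.8+ Kafka connector and Jackson; the class name is illustrative:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Parses the record value as JSON; malformed messages yield null and are skipped.
public class TolerantJsonDeserializationSchema implements KafkaDeserializationSchema<ObjectNode> {

    private transient ObjectMapper mapper;

    @Override
    public ObjectNode deserialize(ConsumerRecord<byte[], byte[]> record) {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        try {
            return (ObjectNode) mapper.readTree(record.value());
        } catch (Exception e) {
            // Not valid JSON: drop the record instead of failing the job.
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(ObjectNode nextElement) {
        return false;
    }

    @Override
    public TypeInformation<ObjectNode> getProducedType() {
        return TypeInformation.of(ObjectNode.class);
    }
}

It can then be passed to the consumer in place of JSONKeyValueDeserializationSchema, e.g. new FlinkKafkaConsumer<>("my-topic", new TolerantJsonDeserializationSchema(), properties).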
Hi all,
Thanks for your participation.
In this thread, we got one +1 each for option 1 and option 3. In the
original thread [1], we got two +1s for option 1, one +1 for option 2, and five
+1s and one -1 for option 3.
To summarize,
Option 1 (port side output to flatMap and deprecate spli
Hi All,
Occasionally I run into a failed-checkpoint error where 2 or 3 consecutive
checkpoints fail after running for a minute, and then it recovers. This
causes a delay in processing the incoming data, since a huge amount of
data is buffered during the failed checkpoints. I don't see any erro
Hi Flavio:
It looks like you use java.util.Date in your POJO. Currently the Flink Table API
does not support BasicTypeInfo.DATE_TYPE_INFO because of limitations in some of
the type checks in the code.
Can you use java.sql.Date instead?
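A minimal sketch of the suggested change (class and field names are illustrative):

import java.sql.Date;

// POJO intended for the Table API: java.sql.Date instead of java.util.Date.
public class Order {

    public long id;
    public Date orderDate; // java.sql.Date maps to the SQL DATE type

    // Flink POJOs need a public no-argument constructor.
    public Order() {}
}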
Best, JingsongLee
Hello Flink community,
I believe the question below has already been asked, but since I couldn't find
an answer on the internet, I'd love to reach out to the community for help.
We basically want to find out the best configuration for Flink running on
Kubernetes to achieve the best performanc
Hi Mans,
Flink does not coordinate the consumption at all (neither within the same
application nor across applications). Each FlinkKinesisConsumer will keep track
of its consumption (position in each shard) in order to provide exactly-once
guarantees.
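For reference, a minimal sketch of such a consumer (stream name and region are illustrative; credentials configuration is omitted and assumed to come from the default provider chain):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class KinesisExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties consumerConfig = new Properties();
        consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1");

        // Each consumer instance tracks its own per-shard position; instances are not coordinated.
        DataStream<String> events = env.addSource(
                new FlinkKinesisConsumer<>("my-stream", new SimpleStringSchema(), consumerConfig));

        events.print();
        env.execute("kinesis example");
    }
}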
Cheers,
Konstantin
On Mon, Jul 1, 2019 at 12:30 PM M
Hi Wang,
you guessed correctly: the events are not replayed from Kafka, but are part
of the state of the AsyncWaitOperator, and the requests are resubmitted by
the AsyncWaitOperator in its open() method.
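For context, a minimal sketch of the async I/O pattern in question (the lookup is a made-up stand-in for a real asynchronous client):

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncLookupExample {

    // Issues one asynchronous request per element; in-flight requests are part of the
    // AsyncWaitOperator's checkpointed state and are re-issued on recovery.
    public static class LookupFunction extends RichAsyncFunction<String, String> {
        @Override
        public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
            CompletableFuture
                    .supplyAsync(() -> "value-for-" + key) // hypothetical non-blocking lookup
                    .thenAccept(value -> resultFuture.complete(Collections.singletonList(value)));
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = env.fromElements("a", "b", "c");

        DataStream<String> enriched = AsyncDataStream.unorderedWait(
                input, new LookupFunction(), 5, TimeUnit.SECONDS, 100);

        enriched.print();
        env.execute("async lookup example");
    }
}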
Cheers,
Konstantin
On Mon, Jul 1, 2019 at 9:39 PM wang xuchen wrote:
> Hi Flink experts,
>
>
Hi Juan,
I just replied to your other question, but I think I better understand where you
are coming from now.
Are you aware of per-partition watermarking [1]? You don't need to manage
this map yourself. BUT: this does not solve the problem that this Map is
not stored in Managed State. Watermarks are
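A minimal sketch of per-partition watermarking as referenced in [1], assuming a Kafka source and Flink 1.8-era APIs (topic, bound, and timestamp parsing are illustrative):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class PerPartitionWatermarksExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "example");

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);

        // Assigning the assigner on the consumer itself makes the source track a
        // watermark per Kafka partition and emit the minimum across partitions.
        consumer.assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<String>(Time.seconds(10)) {
                    @Override
                    public long extractTimestamp(String element) {
                        // Hypothetical: the event time is the first comma-separated field.
                        return Long.parseLong(element.split(",")[0]);
                    }
                });

        env.addSource(consumer).print();
        env.execute("per-partition watermarks");
    }
}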
Hi Soheil,
I don't think it is a bug; the Row class is pretty tightly linked to the
Table API. DataSet#writeAsCsv has always only worked with Tuple classes. You
can use DataSet#writeAsText to write arbitrary DataSets to a file (it will use
the toString() method).
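A minimal sketch of the writeAsText approach (the POJO and output path are illustrative):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class WriteAsTextExample {

    // Simple POJO; its toString() defines the line written to the file.
    public static class Person {
        public int id;
        public String name;

        public Person() {}

        public Person(int id, String name) {
            this.id = id;
            this.name = name;
        }

        @Override
        public String toString() {
            return id + "," + name;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Person> people = env.fromElements(new Person(1, "alice"), new Person(2, "bob"));

        // writeAsText accepts arbitrary element types; each element is written via toString().
        people.writeAsText("/tmp/people.txt");

        env.execute("write DataSet as text");
    }
}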
Cheers,
Konstantin
On Sat, Jul 6, 2019 at
Hi John,
in case of a failure (e.g. in the SQL sink) the Flink job will be restarted
from the last checkpoint. This means the offsets of all Kafka partitions
will be reset to that point in the stream, along with the state of all
operators. To enable checkpointing you need to call
StreamExecutionEnvironm
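A minimal sketch of enabling checkpointing on the StreamExecutionEnvironment (interval, mode, and tuning values are illustrative):

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointingSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 60 seconds with exactly-once guarantees.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        // Optional tuning: leave room for progress between checkpoints and bound their duration.
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(30_000);
        env.getCheckpointConfig().setCheckpointTimeout(10 * 60_000);

        // Placeholder pipeline; real sources, transformations, and sinks go here.
        env.fromElements(1, 2, 3).print();

        env.execute("checkpointed job");
    }
}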
Hi Juan,
can you elaborate a bit on why you want to put the WatermarkAssigner itself
into state? It is generally unusual to store a UDF in Managed State.
Cheers,
Konstantin
On Fri, Jul 5, 2019 at 5:07 PM Juan Gentile wrote:
> Hello,
>
>
>
> We are currently facing an issue where we need to
Hi Shivam
Does this reproduce each time? Could you please share the full stack trace you
get with this exception? Moreover, the task manager log for that value state
would also be very welcome.
Best
Yun Tang
From: Shivam Dubey
Sent: Sunday, July 7, 2019 17:35
To: user@flin
I am using the Flink queryable state client to read a dummy ValueState I created.
The code is quite simple: I just create a stream from a Kafka topic and key it as
inputStream.keyBy(value -> 0L).map(new MapFunction()).print()
Within the map function I create a value state, and declare it
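For reference, a minimal sketch of declaring a queryable value state and reading it with the QueryableStateClient (state names, host, port, and the job ID argument are illustrative; the cluster must have queryable state enabled):

import java.util.concurrent.CompletableFuture;
import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.queryablestate.client.QueryableStateClient;

public class QueryDummyState {

    // Job side (e.g. in a RichMapFunction#open), make the state queryable:
    //
    //   ValueStateDescriptor<Long> descriptor =
    //           new ValueStateDescriptor<>("dummy-state", Types.LONG);
    //   descriptor.setQueryable("dummy-query-name");
    //   ValueState<Long> state = getRuntimeContext().getState(descriptor);

    public static void main(String[] args) throws Exception {
        // Client side: connect to the queryable state proxy.
        QueryableStateClient client = new QueryableStateClient("localhost", 9069);

        CompletableFuture<ValueState<Long>> future = client.getKvState(
                JobID.fromHexString(args[0]),                         // ID of the running job
                "dummy-query-name",                                    // name set via setQueryable(...)
                0L,                                                    // key used in keyBy(value -> 0L)
                Types.LONG,                                            // key type
                new ValueStateDescriptor<>("dummy-state", Types.LONG));

        System.out.println("current value: " + future.get().value());

        client.shutdownAndWait();
    }
}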
Dear community,
this week's community digest with news on Flink 1.9.0 and Flink 1.8.1, our
Travis setup, Flink on PyPI, and a couple of new initiatives around the
DataStream API.
As always, please feel free to add additional updates and news to this
thread!
Flink Development
===
* [
Hi Eduardo,
Flink 1.9 will add a new State Processor API [1], which you can use to
create Savepoints from scratch with a batch job.
Cheers,
Konstantin
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/libs/state_processor_api.html#writing-new-savepoints
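For illustration, a minimal sketch of bootstrapping keyed state with the State Processor API, loosely following [1] (Flink 1.9 is assumed; state names, uid, keys, and paths are illustrative):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

public class BootstrapSavepointExample {

    // Writes one value-state entry per input element.
    public static class BalanceBootstrapper extends KeyedStateBootstrapFunction<Long, Long> {
        private transient ValueState<Long> balance;

        @Override
        public void open(Configuration parameters) {
            balance = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("balance", Long.class));
        }

        @Override
        public void processElement(Long value, Context ctx) throws Exception {
            balance.update(value);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Long> balances = env.fromElements(100L, 200L, 300L);

        BootstrapTransformation<Long> transformation = OperatorTransformation
                .bootstrapWith(balances)
                .keyBy(value -> value % 10)
                .transform(new BalanceBootstrapper());

        Savepoint
                .create(new MemoryStateBackend(), 128)
                .withOperator("my-operator-uid", transformation)
                .write("file:///tmp/new-savepoint");

        env.execute("bootstrap savepoint");
    }
}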
On Thu, Jul 4, 2019 at 12:3