Hi Sivaprasanna
I think state schema evolution can work for incremental checkpoints, and
I tried it with a simple POJO schema; it also works. Maybe you need to check
the schema: from the exception stack, the schemas before and after are
incompatible.
Best,
Congxian
Sivaprasanna wrote on Fri, Jul 24, 2020 at 12… AM:
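For reference, a minimal sketch of the kind of POJO change Flink's schema
evolution supports (class and field names here are hypothetical): adding or
removing fields is compatible, while renaming a field or changing its type
is not, and fails on restore.

// Version 1, as originally checkpointed. Public fields plus a
// public no-arg constructor make this a Flink POJO.
public class UserState {
    public String userId;
    public long count;
    public UserState() {}
}

// Version 2, shown under a different name only so the sketch compiles;
// in practice you keep the same class and just add the field. The new
// field reads as null when restoring from old state. Renaming "count"
// or changing its type would instead be an incompatible change and
// surface as a StateMigrationException.
public class UserStateV2 {
    public String userId;
    public long count;
    public String lastEvent; // newly added field: compatible
    public UserStateV2() {}
}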
Hi Flink Team,
*Flink Streaming:* I have a DataStream<String> from a Kafka consumer:

DataStream<String> stream = env
    .addSource(new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), properties));

https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html
I have to sink this st…
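The question is cut off above; assuming the goal is writing the stream back
out, here is a minimal sketch of one common sink, Kafka, where the topic
name is a placeholder and the producer properties are assumed to exist:

stream.addSink(new FlinkKafkaProducer<>(
    "output-topic",           // hypothetical target topic
    new SimpleStringSchema(), // serializes the String payload
    properties));             // assumed producer properties
env.execute("kafka-sink-example");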
There are no start-cluster.bat and flink.bat in the bin directory.
So how can I start Flink on Windows?
Thanks,
Lei
wangl...@geekplus.com.cn
Hi Wojtek,
In Flink 1.11, the methods register_table_source and register_table_sink of
ConnectTableDescriptor have been removed. You need to use
create_temporary_table instead of these two methods. Besides, it seems that
the version of your PyFlink is 1.10, but the corresponding Flink is 1.11.
Best,
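A minimal sketch of the replacement call, shown here with the Java Table API
(PyFlink mirrors it as create_temporary_table); the connector, format, and
schema details are placeholders:

tableEnv.connect(
        new Kafka()
            .version("universal")
            .topic("topic")
            .property("bootstrap.servers", "localhost:9092"))
    .withFormat(new Json())
    .withSchema(new Schema().field("f0", DataTypes.STRING()))
    .createTemporaryTable("my_table"); // replaces register_table_source/_sink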
Hello everyone,
We recently upgraded Flink from 1.9.1 to 1.11.0 and found one strange behavior:
when we stop a job with a savepoint, we get the following timeout error.
I checked the Flink web console; the savepoint is created in S3 in 1 second. The
job is fairly simple, so 1 second for savepoint generation is e…
Thanks Gordon,
So, 10 (ThreadPoolSize) * 80 sub-tasks = 800 threads go into a
queue (unbounded by default). This then goes through KPL MaxConnections (24
by default) to KDS.
This suggests I need to decrease sub-tasks, or call setQueueLimit(800) and
increase MaxConnections to 256 (the maximum allowed).
Checkpointing…
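For context, a minimal sketch of where these knobs live; the region, stream
name, and values are placeholders, and ThreadPoolSize/MaxConnections are KPL
properties passed through the producer config:

Properties producerConfig = new Properties();
producerConfig.put("aws.region", "us-east-1"); // placeholder region
producerConfig.put("ThreadPoolSize", "10");    // KPL threads per subtask
producerConfig.put("MaxConnections", "256");   // cap on concurrent connections to KDS

FlinkKinesisProducer<String> producer =
    new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
producer.setDefaultStream("my-stream"); // placeholder stream name
producer.setDefaultPartition("0");
producer.setQueueLimit(800);            // bounds the otherwise-unbounded queue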
Thanks Niels for a great talk. You have covered two of my pain areas, slim
and broken streams, since I am dealing with device data from on-prem data
centers. The first option of generating fabricated watermark events is
fine; however, as mentioned in your talk, how are you handling forwarding it
to…
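Not the approach from the talk, but for comparison: since Flink 1.11 there is
a built-in way to keep watermarks advancing past idle ("slim") sources. A
minimal sketch, where the Event type, timestamp field, and bounds are
placeholders:

WatermarkStrategy<Event> strategy = WatermarkStrategy
    .<Event>forBoundedOutOfOrderness(Duration.ofSeconds(10)) // placeholder bound
    .withTimestampAssigner((event, ts) -> event.timestamp)   // placeholder field
    .withIdleness(Duration.ofMinutes(1)); // declare a partition idle after 1 min

stream.assignTimestampsAndWatermarks(strategy);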
Hi,
I have come across an issue related to GET /job/:jobId endpoint from monitoring
REST API in Flink 1.9.0. A few seconds after successfully starting a job and
confirming its status as RUNNING, that endpoint would return 404 (Not Found).
Interestingly, querying immediately again (literally a m…
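Not a fix, but a minimal, self-contained sketch for reproducing the behavior
by polling the endpoint; the host, port, and job ID below are placeholders:

import java.net.HttpURLConnection;
import java.net.URL;

public class JobStatusProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder JobManager address and job ID.
        URL url = new URL("http://localhost:8081/jobs/00000000000000000000000000000000");
        for (int attempt = 0; attempt < 10; attempt++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            System.out.println("attempt " + attempt + " -> HTTP " + conn.getResponseCode());
            conn.disconnect();
            Thread.sleep(1000); // shows whether the 404s are intermittent
        }
    }
}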
Hi David,
Thanks for the response. I'm actually specifying --allowNonRestoredState
when I submit the job to the YARN session, but it still fails with the same
error:
StateMigrationException: The new state serializer cannot be incompatible.
Maybe we cannot resume from an incremental checkpoint with s…
Thanks Yang!
It worked as expected after I made the changes you suggested!
Avijit
On Wed, Jul 22, 2020 at 11:05 PM Yang Wang wrote:
> Hi Avijit,
>
> I think you need to create a network via "docker network create
> flink-network".
> And then use "docker run ... --name=jobmanager --network f
I believe this should work, with a couple of caveats:
- You can't do this with unaligned checkpoints
- If you have dropped some state, you must specify --allowNonRestoredState
when you restart the job
David
On Wed, Jul 22, 2020 at 4:06 PM Sivaprasanna
wrote:
> Hi,
>
> We are trying out state s
Hi,
as a follow-up to https://issues.apache.org/jira/browse/FLINK-18478 I now
face a ClassCastException.
The reproducible example is available at
https://gist.github.com/geoHeil/5a5a4ae0ca2a8049617afa91acf40f89
I do not understand (yet) why such a simple example of reading Avro from a
Schema Re…
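Hard to say without the full stack trace, but for comparison, a minimal
sketch of reading GenericRecords from Kafka with the Confluent registry
deserializer; the schema, topic, registry URL, and kafkaProps are
placeholders, and whether this sidesteps the generated-type issue from
FLINK-18478 is a separate question:

String schemaJson =
    "{\"type\":\"record\",\"name\":\"Event\",\"fields\":"
  + "[{\"name\":\"id\",\"type\":\"string\"}]}";
Schema schema = new Schema.Parser().parse(schemaJson);

FlinkKafkaConsumer<GenericRecord> consumer = new FlinkKafkaConsumer<>(
    "topic",
    ConfluentRegistryAvroDeserializationSchema.forGeneric(
        schema, "http://schema-registry:8081"), // placeholder registry URL
    kafkaProps);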
Hi All,
We are working on migrating existing pipelines from Flink 1.10 to Flink 1.11.
We are using the Blink planner and have unified pipelines which can be used in
stream and batch mode.
Stream pipelines work as expected, but batch ones fail on Flink 1.11 if they
have any table aggregation transf…
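For reference, a minimal sketch of the unified setup being described, using
the 1.11 Blink planner in batch mode; switching the one builder call gives
the streaming variant:

EnvironmentSettings settings = EnvironmentSettings.newInstance()
    .useBlinkPlanner()
    .inBatchMode() // or inStreamingMode() for the stream pipelines
    .build();
TableEnvironment tEnv = TableEnvironment.create(settings);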
Thank you for your answer.
I have replaced that .jar with the Kafka "universal" version; the links to
the other versions are dead.
After the attempt of deploying:
bin/flink run -py
/home/wojtek/PycharmProjects/k8s_kafka_demo/kafka2flink.py --jarfile
/home/wojtek/Downloads/flink-sql-connector-kafka_2.11
Yes, I am able to process matched patterns.
Let's suppose I have an order fulfillment process.
I want to know how many fulfillments have not met the SLA, and further how late
they are, and track them until they are fulfilled.
From what I tried with samples, once the pattern times out, it is discarded
a…
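If this refers to Flink CEP, timed-out partial matches are not necessarily
lost: they can be routed to a side output. A minimal sketch, where the Order
type and the PatternStream are assumed to exist:

OutputTag<String> timedOutTag = new OutputTag<String>("sla-missed") {};

SingleOutputStreamOperator<String> fulfilled = patternStream.select(
    timedOutTag,
    (PatternTimeoutFunction<Order, String>) (match, timeoutTs) ->
        "SLA missed at " + timeoutTs,  // handles the timed-out partial match
    (PatternSelectFunction<Order, String>) match ->
        "fulfilled");                  // handles the completed match

// Timed-out (SLA-missed) matches come out here instead of being dropped:
DataStream<String> slaMissed = fulfilled.getSideOutput(timedOutTag);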
Hi,
We use the so-called "control stream" pattern to deliver settings to the Flink
job using Apache Kafka topics. The settings are in fact an unlimited stream
of events originating from the master DBMS, which acts as a single point of
truth concerning the rules list.
It may seem odd, since Flink gua…
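A common way to implement such a control stream in Flink is the broadcast
state pattern; a minimal sketch, with hypothetical stream names and String
stand-ins for the real settings and event types:

MapStateDescriptor<String, String> rulesDescriptor =
    new MapStateDescriptor<>("rules", Types.STRING, Types.STRING);

BroadcastStream<String> controlStream = settingsSource.broadcast(rulesDescriptor);

dataStream.connect(controlStream)
    .process(new BroadcastProcessFunction<String, String, String>() {
        @Override
        public void processElement(String value, ReadOnlyContext ctx, Collector<String> out)
                throws Exception {
            // Read the current rule and apply it to the event.
            String rule = ctx.getBroadcastState(rulesDescriptor).get("default");
            out.collect(rule == null ? value : rule + ":" + value);
        }

        @Override
        public void processBroadcastElement(String rule, Context ctx, Collector<String> out)
                throws Exception {
            // Update the broadcast state as settings arrive from Kafka.
            ctx.getBroadcastState(rulesDescriptor).put("default", rule);
        }
    });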
Hi Wojtek,
you need to use the fat jar 'flink-sql-connector-kafka_2.11-1.11.0.jar',
which you can download from the docs [1]
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/kafka.html
Best,
Xingbo
Wojciech Korczyński wrote on Thu, Jul 23, 2020 at 4:57 PM:
> Hello,
>
> I am tr
Hello,
I am trying to deploy a Python job with Kafka connector:
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.dataset import ExecutionEnvironment
from pyflink.table import TableConfig, DataTypes, BatchTableEnvironment, StreamTableEnvironment
from pyflink.table.descriptors …
ThreadPoolSize is per KPL instance, so yes that is per subtask.
As I previously mentioned, the maximum concurrent requests going to KDS
would be capped by MaxConnections.
On Thu, Jul 23, 2020 at 6:25 AM Vijay Balakrishnan
wrote:
> Hi Gordon,
> Thx for your reply.
> FlinkKinesisProducer default i
Thank you Yang, I checked and "yarn.application-attempts" is already set to 10.
Here is the exception part from the job manager log. The full log file is too
big, and I also redacted it to remove some company-specific info.
Any suggestion on this exception would be appreciated!
2020-07-15 20:04:52,265 INFO
org.…