our jobs and noticed that checkpoint offsets are not committed to
Kafka for source connectors.
To simplify the issue, I created simple reproducer projects:
https://github.com/simpleusr/flink_problem_1.5.5
https://github.com/simpleusr/flink_problem_1.12.0
It seems that there are major changes in
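For context, here is a minimal sketch (not the actual reproducer code; topic, group id, and checkpoint interval are placeholders) of the setup under which I would expect the FlinkKafkaConsumer to commit offsets back to Kafka as checkpoints complete; the KafkaSource added in 1.12 handles offset committing through its own mechanism.

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class OffsetCommitCheck {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // offsets should be committed as each checkpoint completes

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group"); // the group the committed offsets belong to

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
        consumer.setCommitOffsetsOnCheckpoints(true); // the default, stated explicitly here

        env.addSource(consumer).print();
        env.execute("kafka-offset-commit-check");
    }
}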
Many thanks, Fabian.
I will start to investigate ProcessFunction.
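For anyone following along, here is a rough sketch of the ProcessFunction idea (HoldUntilOneAm, the Event type, the key, and the UTC time zone are my own placeholders, not working code): events whose event time falls between 00:00:00 and 01:00:00 are buffered in keyed state and released by an event-time timer at 01:00, while everything else passes through unchanged.

import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneOffset;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class HoldUntilOneAm extends KeyedProcessFunction<String, Event, Event> {

    private transient ListState<Event> buffered;

    @Override
    public void open(Configuration parameters) {
        buffered = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffered", Event.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<Event> out) throws Exception {
        long ts = ctx.timestamp(); // event time attached to the record
        LocalTime timeOfDay = Instant.ofEpochMilli(ts).atZone(ZoneOffset.UTC).toLocalTime();
        if (timeOfDay.isBefore(LocalTime.of(1, 0))) {
            // 00:00-01:00: hold the event and make sure a timer fires at 01:00
            buffered.add(event);
            long oneAm = Instant.ofEpochMilli(ts).atZone(ZoneOffset.UTC).toLocalDate()
                    .atTime(1, 0).toInstant(ZoneOffset.UTC).toEpochMilli();
            ctx.timerService().registerEventTimeTimer(oneAm);
        } else {
            out.collect(event); // outside the window: pass through
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out) throws Exception {
        for (Event e : buffered.get()) {
            out.collect(e);
        }
        buffered.clear();
    }
}

It would be applied as events.keyBy(...).process(new HoldUntilOneAm()), assuming timestamps and watermarks are already assigned and there is a sensible key to partition on.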
Regards
Hi,
My ultimate requirement is to stop processing certain events between
00:00:00 and 01:00:00 each day (the time is in HH:mm:ss format).
I am a Flink newbie, and I thought the only option to delay elements is to collect
them in a window between 00:00:00 and 01:00:00 each day.
TumblingEventTime
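To make the window idea above concrete, a rough sketch (events, Event, and getKey are placeholders, and timestamps/watermarks are assumed to be assigned already): a one-hour tumbling event-time window only emits its contents when the window closes, so the 00:00-01:00 events come out at 01:00, but it also holds back every other hour's events until the end of that hour, which is why it may not be the right fit.

DataStream<Event> delayed = events
        .keyBy(Event::getKey)
        .window(TumblingEventTimeWindows.of(Time.hours(1)))
        .process(new ProcessWindowFunction<Event, Event, String, TimeWindow>() {
            @Override
            public void process(String key, Context context, Iterable<Event> elements, Collector<Event> out) {
                // fires only when the hourly window closes
                for (Event e : elements) {
                    out.collect(e);
                }
            }
        });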
Hi Selveraj,
This did not help either.
Thanks
Hi Gary,
By "job logs" I mean all the loggers under a subpackage of com.mycompany.xyz
.
We are using ./bin/flink run command for job execution thats why I modified
log4j-cli.properties. Modification of log4j.properties also did not help...
Regards
Hi Chesnay,
Below is the content of my log4j-cli.properties file. I expect my job logs
(packaged under com.mycompany.xyz) to be written to the file2 appender. However,
no file with the XYZ prefix is generated. I restarted the cluster and canceled and
resubmitted several times, but none of that helped.
/
log4j.root
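A minimal sketch of the kind of setup I mean (the file2 appender name and XYZ prefix are as described above; paths, levels, and patterns are placeholders rather than the actual file):

log4j.rootLogger=INFO, file

log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %-60c %x - %m%n

# job loggers should go to a separate file with the XYZ prefix
log4j.logger.com.mycompany.xyz=INFO, file2
log4j.appender.file2=org.apache.log4j.FileAppender
log4j.appender.file2.file=/path/to/logs/XYZ.log
log4j.appender.file2.append=false
log4j.appender.file2.layout=org.apache.log4j.PatternLayout
log4j.appender.file2.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %-60c %x - %m%n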
Hi Chesnay,
Many thanks..
I know this seems a silly question, but I am trying to figure out the optimal setup
for our Flink jobs.
We are using a standalone cluster with 5 jobs. Each job has 3 async operators
with Executors with thread counts of 20, 20, and 100. The source is Kafka, and
Cassandra and REST sinks exist.
Currently we are usin
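For context, each async operator is wired roughly like this (RestLookup, Event, Enriched, and callRestService are made-up names, not our actual classes); every such Executor adds its pool threads to the single task manager JVM, once per parallel operator instance.

public class RestLookup extends RichAsyncFunction<Event, Enriched> {

    private transient ExecutorService pool;

    @Override
    public void open(Configuration parameters) {
        pool = Executors.newFixedThreadPool(20); // one of the 20/20/100 pools, per operator instance
    }

    @Override
    public void asyncInvoke(Event input, ResultFuture<Enriched> resultFuture) {
        CompletableFuture
                .supplyAsync(() -> callRestService(input), pool) // blocking call runs on our own pool
                .thenAccept(result -> resultFuture.complete(Collections.singleton(result)));
    }

    @Override
    public void close() {
        pool.shutdown();
    }

    private Enriched callRestService(Event input) {
        // stand-in for the actual blocking client call
        return new Enriched();
    }
}

// wiring, with at most 100 requests in flight per parallel instance:
DataStream<Enriched> enriched = AsyncDataStream.unorderedWait(
        events, new RestLookup(), 30, TimeUnit.SECONDS, 100);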
We are using a standalone cluster and submitting jobs through the command line
client.
As stated in
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/logging.html
we are editing log4j-cli.properties, but this does not have any effect.
Has anybody seen this before?
Regards
Flink Standalone cluster - dumps
We are using a standalone cluster and submitting jobs through the command line
client.
As far as I understand, the job is executed in the task manager. Does a single task
manager represent a single JVM? If so, the dump shows threads from all jobs
bound to that task manager.
Two questions
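For reference, the dumps in question are taken the standard JDK way against the task manager process (the exact main class shown by jps depends on the Flink version):

jps -l                                    # find the task manager PID
jstack <taskmanager-pid> > taskmanager-threads.txt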
I am a Flink newbie trying to apply windowing. My source is Kafka, and my
model does not contain event-time info, so I am trying to use Kafka
timestamps with the assignTimestampsAndWatermarks() method.
I implemented two timestamp assigners as below.
public class TimestampAssigner1 implements
Assign
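A minimal sketch of the kind of assigner I mean (KafkaTimestampAssigner, the Event type, and the 5-second bound are my own placeholders, not the two classes referenced above), assuming the pre-1.11 AssignerWithPeriodicWatermarks API and a Kafka 0.10+ consumer that already attaches the record timestamp:

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

public class KafkaTimestampAssigner implements AssignerWithPeriodicWatermarks<Event> {

    private long maxTimestamp = Long.MIN_VALUE;

    @Override
    public long extractTimestamp(Event element, long previousElementTimestamp) {
        // for the Kafka 0.10+ consumer, previousElementTimestamp is the timestamp
        // already attached from the Kafka record
        maxTimestamp = Math.max(maxTimestamp, previousElementTimestamp);
        return previousElementTimestamp;
    }

    @Override
    public Watermark getCurrentWatermark() {
        // allow 5 seconds of out-of-orderness (an arbitrary placeholder)
        return maxTimestamp == Long.MIN_VALUE
                ? new Watermark(Long.MIN_VALUE)
                : new Watermark(maxTimestamp - 5000);
    }
}

It would be applied with stream.assignTimestampsAndWatermarks(new KafkaTimestampAssigner()).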