Hi Averell,
You need to understand that Flink recovers state, not records.
Sometimes your record itself is the state, but sometimes an intermediate
result computed from your records is the state.
It depends on your business logic and your operators.
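To illustrate with a minimal sketch of my own (assuming a stream of Integers
that has been keyed beforehand; all names are mine): the running sum below is
the operator's state, so a checkpoint snapshots and restores the sum rather
than replaying your records.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;

// Must run on a keyed stream; the sum (an intermediate result) is the state.
public class RunningSum extends RichMapFunction<Integer, Integer> {

    private transient ValueState<Integer> sum;

    @Override
    public void open(Configuration parameters) {
        sum = getRuntimeContext().getState(
                new ValueStateDescriptor<>("sum", Integer.class));
    }

    @Override
    public Integer map(Integer value) throws Exception {
        Integer current = sum.value(); // null for the first record of a key
        int next = (current == null ? 0 : current) + value;
        sum.update(next); // this is what a checkpoint snapshots and a recovery restores
        return next;
    }
}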
Thanks, vino.
Aver
Hi Joe,
Did you try the word_count example from the flink codebase?[1]
I tried this example recently, and it works fine for me.
An example from the official documentation is not guaranteed to work, due to
maintenance issues.
cc @Chesnay
[1]:
https://github.com/apache/flink/blob/master/flink-libraries/
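For what it's worth, running the bundled example from a binary distribution
looks like this (input/output paths are placeholders of mine):

bin/flink run examples/streaming/WordCount.jar --input /path/to/input.txt --output /path/to/result.txt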
Hi Averell,
As Vino said, checkpoints store the state of all operators of an
application.
The state of a monitoring source function is the position in the currently
read split and all splits that have been received and are currently pending.
In case of a recovery, the splits are recovered and the
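For intuition, this is roughly how a source checkpoints its read position (a
simplified sketch of mine that stores a single offset as operator state; the
real monitoring source additionally stores the pending splits):

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class OffsetSource implements SourceFunction<Long>, CheckpointedFunction {

    private volatile boolean running = true;
    private long offset;
    private transient ListState<Long> offsetState;

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        while (running) {
            // emit and advance under the checkpoint lock so the offset is consistent
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(offset++);
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        offsetState.clear();
        offsetState.add(offset);
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        offsetState = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("offset", Long.class));
        for (Long restored : offsetState.get()) {
            offset = restored; // resume from the checkpointed position
        }
    }
}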
I agree, please open a JIRA.
On 08.08.2018 05:11, vino yang wrote:
Hi Dylan,
I roughly looked at your job program and the DAG of the job. It seems
that the optimizer chose the wrong optimization execution plan.
cc Till.
Thanks, vino.
Dylan Adams <dylan.ad...@gmail.com>
on Wednesday, August 8, 2018
What have you tried so far to increase performance? (Did you try
different combinations of -yn and -ys?)
Can you provide us with your application? What source/sink are you using?
On 08.08.2018 07:59, Ravi Bhushan Ratnakar wrote:
Hi Everybody,
Currently I am working on a project where I need t
I'll take a look, but it sounds like the source is the issue?
On 08.08.2018 09:34, vino yang wrote:
Hi Joe,
Did you try the word_count example from the flink codebase?[1]
I tried this example recently, and it works fine for me.
An example from the official documentation is not guaranteed to work due
Hmm, I was able to run it with 1.5.2 at least. Let's see what 1.6 says...
On 08.08.2018 10:27, Chesnay Schepler wrote:
I'll take a look, but it sounds like the source is the issue?
On 08.08.2018 09:34, vino yang wrote:
Hi Joe,
Did you try the word_count example from the flink codebase?[1]
Re
Hi Dylan,
Yes, that's a bug.
As you can see from the plan, the partitioning step is pushed past the
Filter.
This is possible because the optimizer knows that a Filter function cannot
modify the data (it only removes records).
A workaround should be to implement the filter as a FlatMapFunction, as sketched below. A
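A sketch of that workaround (the record type and predicate are placeholders of
mine; the point is that the optimizer cannot see inside a FlatMapFunction and
therefore will not push the partitioning past it):

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.util.Collector;

public class FilterAsFlatMap<T> implements FlatMapFunction<T, T> {

    @Override
    public void flatMap(T value, Collector<T> out) {
        if (passes(value)) { // hypothetical predicate, replace with your filter condition
            out.collect(value);
        }
    }

    private boolean passes(T value) {
        return true; // placeholder
    }
}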
Do you want to read the data once or monitor a directory and process new
files as they appear?
Reading from S3 with Flink's current MonitoringFileSource implementation is
not working reliably due to S3's eventually consistent list operation (see
FLINK-9940 [1]).
Reading a directory also has some iss
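For reference, the two modes look roughly like this in the DataStream API (the
input format and path are placeholders of mine):

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
TextInputFormat format = new TextInputFormat(new Path("s3://bucket/dir"));

// read the directory's current contents once (the interval is ignored here)
DataStream<String> once =
    env.readFile(format, "s3://bucket/dir", FileProcessingMode.PROCESS_ONCE, 10_000L);

// monitor the directory and process new files as they appear (checked every 10s)
DataStream<String> monitored =
    env.readFile(format, "s3://bucket/dir", FileProcessingMode.PROCESS_CONTINUOUSLY, 10_000L);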
I've created FLINK-10100 [1] to track the problem and suggest a solution
and workaround.
Best, Fabian
[1] https://issues.apache.org/jira/browse/FLINK-10100
2018-08-08 10:39 GMT+02:00 Fabian Hueske :
> Hi Dylan,
>
> Yes, that's a bug.
> As you can see from the plan, the partitioning step is push
I cannot reproduce the problem in 1.6-rc4 and 1.7-SNAPSHOT either :/
On 08.08.2018 10:33, Chesnay Schepler wrote:
Hmm, I was able to run it with 1.5.2 at least. Let's see what 1.6 says...
On 08.08.2018 10:27, Chesnay Schepler wrote:
I'll take a look, but it sounds like the source is the issue?
The code or the execution plan (ExecutionEnvironment.getExecutionPlan()) of
the job would be interesting.
2018-08-08 10:26 GMT+02:00 Chesnay Schepler :
> What have you tried so far to increase performance? (Did you try different
> combinations of -yn and -ys?)
>
> Can you provide us with your app
Hi Alexis,
First of all, I think you can leverage the partitioning and sorting properties
of the data returned by the database using SplitDataProperties.
However, please be aware that SplitDataProperties are a rather experimental
feature.
If used without query parameters, the JDBCInputFormat generate
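A sketch of how the properties could be declared (my assumption of the
intended usage; the partitioning and ordering are unchecked promises to the
optimizer, so they must actually hold for the data):

import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.operators.DataSource;
import org.apache.flink.types.Row;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
JDBCInputFormat format = ...; // your configured JDBCInputFormat

// declare that splits arrive partitioned on field 0 and sorted on field 1
DataSource<Row> rows = env.createInput(format);
rows.getSplitDataProperties()
    .splitsPartitionedBy(0)
    .splitsOrderedBy(new int[]{1}, new Order[]{Order.ASCENDING});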
Hi everybody,
The Flink community maintains a directory of organizations and projects
that use Apache Flink [1].
Please reply to this thread if you'd like to add an entry to this list.
Thanks,
Fabian
[1] https://cwiki.apache.org/confluence/display/FLINK/Powered+by+Flink
Hi Fabian,
Thanks for the clarification. I have a few remarks, but let me provide more
concrete information. You can find the query I'm using, the JDBCInputFormat
creation, and the execution plan in this github gist:
https://gist.github.com/asardaes/8331a117210d4e08139c66c86e8c952d
I cannot call
Hi Fabian,
We at Limeroad are using Flink for multiple use-cases ranging from ETL
jobs, ClickStream data processing, real-time dashboard to CEP. Could you
list us on given directory?
Website: https://www.limeroad.com
--
Thanks,
Amit
--
Sent from: http://apache-flink-user-mailing-list-archiv
Hi Florian,
Thanks for following up. Does it consistently work for you if -ytm and -yjm
are set to 2 GB?
Can you enable DEBUG level logging, submit with 1GB of memory again, and
send
all TaskManager logs in addition? The output of yarn logs -applicationId
should suffice.
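For example (with the application id as shown in the YARN UI or by
yarn application -list):

yarn logs -applicationId <application-id> > all-containers.log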
The Flink version that
I was trying to cancel a job with savepoint, but the CLI command failed
with "akka.pattern.AskTimeoutException: Ask timed out".
The stack trace reveals that the ask timeout is 10 seconds:
Caused by: akka.pattern.AskTimeoutException: Ask timed out on
[Actor[akka://flink/user/jobmanager_0#106635280]] a
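Presumably the relevant timeouts could be raised in flink-conf.yaml, though I
have not verified that they apply to the savepoint path:

akka.ask.timeout: 1 min
akka.client.timeout: 5 min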
Thanks Amit!
I've added Limeroad to the list with your description.
Best, Fabian
2018-08-08 14:12 GMT+02:00 amit.jain :
> Hi Fabian,
>
> We at Limeroad are using Flink for multiple use-cases ranging from ETL
> jobs, ClickStream data processing, real-time dashboard to CEP. Could you
> list us on
Hello -
It does not appear that Flink supports a charset encoding of "UTF-16". In
particular, it doesn't appear that Flink consumes the Byte Order Mark (BOM)
to establish whether a UTF-16 file is UTF-16LE or UTF-16BE. Are there any
plans to enhance Flink to handle UTF-16 with BOM?
Thank you,
Davi
Hello,
I'm looking at the following page of the documentation
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html
particularly at this piece of code:
val stream: DataStream[(String, Int)] = ...
val counts: DataStream[(String, Int)] = stream
  .keyBy(_._1)
  .mapWithState((in: (String, Int), count: Option[Int]) =>
    count match {
      case Some(c) => ((in._1, c), Some(c + in._2))
      case None => ((in._1, 0), Some(in._2))
    })
Hi Juan,
The state will be purged if you return None instead of a Some.
However, this only happens when the function is called for a specific key,
i.e., state won't be automatically removed after some time.
If this is your use case, you have to implement a ProcessFunction and use
timers to manually remove it.
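A minimal sketch of that pattern (my own example, not from the docs; it
assumes a keyed stream of (String, Int) tuples with event-time timestamps):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class CountWithTimeout
        extends ProcessFunction<Tuple2<String, Integer>, Tuple2<String, Integer>> {

    private transient ValueState<Integer> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Integer.class));
    }

    @Override
    public void processElement(Tuple2<String, Integer> value, Context ctx,
                               Collector<Tuple2<String, Integer>> out) throws Exception {
        Integer current = count.value();
        count.update(current == null ? value.f1 : current + value.f1);
        // schedule a cleanup timer one hour after this element's timestamp
        // (a real implementation would track and re-register the latest timer)
        ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60 * 60 * 1000L);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx,
                        Collector<Tuple2<String, Integer>> out) {
        count.clear(); // manually remove the state once the key has been idle
    }
}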
Hi everyone,
Thanks for your help. I discovered that the WordCount example runs when the
lib directory is empty - something I had in there was causing it to break
(perhaps a version conflict?). I haven't yet figured out what the culprit
was, but I'll post an update if I do.
Thanks again,
Joe
On
I'm wondering what the best practice is for using secrets in a Flink program;
I can't find any info in the docs or posted anywhere else.
I need to store an access token for one of my APIs, which Flink uses to dump
results into, and right now I'm passing it through as a configuration
parameter
I'm working on getting a flink job into production. As part of the production
requirement, I need telemetry/metrics insight into my flink job. I have
followed instructions in
https://ci.apache.org/projects/flink/flink-docs-release-1.5/monitoring/metrics.html
- Added the flink graphite jar to tas
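(For context, the documented Graphite reporter configuration in
flink-conf.yaml looks like this; host and port are placeholders:)

metrics.reporters: grph
metrics.reporter.grph.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.grph.host: <graphite-host>
metrics.reporter.grph.port: 2003
metrics.reporter.grph.protocol: TCP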
Hi Juho,
This problem does exist. As a temporary workaround, I suggest you separate
the operation into two steps (see the commands sketched below):
1) trigger the savepoint separately;
2) execute the cancel command;
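Using the standard CLI commands, that would be:

bin/flink savepoint <jobId> [targetDirectory]
bin/flink cancel <jobId>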
Hi Till, Chesnay:
Our internal environment and multiple users on the mailing list have
encountered similar problems.
In
I read the doc about ProcessWindowFunction,
but the code in the Flink demo is incorrect:
public class MyProcessWindowFunction extends
ProcessWindowFunction<Tuple<String, Long>, String, String, TimeWindow>
{
Tuple cannot take two type parameters.
I tried to find a demo in which ProcessWindowFunction is used in a window word count
dem
Hi yuanjun,
There are very few examples of ProcessWindowFunction, but there are some test
implementations in Flink's source code that you can use as a reference. [1]
A corrected sketch also follows below.
[1]:
https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/WindowTranslati
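As a sketch, the corrected demo would use Tuple2 instead of the raw Tuple
(this is my own minimal version, not the official fix):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class MyProcessWindowFunction
        extends ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {

    @Override
    public void process(String key, Context context,
                        Iterable<Tuple2<String, Long>> input, Collector<String> out) {
        // count the elements of the window for this key
        long count = 0;
        for (Tuple2<String, Long> ignored : input) {
            count++;
        }
        out.collect("Window: " + context.window() + " count: " + count);
    }
}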
Hi Matt,
Flink is currently enhancing its security; for example, data transmission can
now be configured to use SSL [1].
However, some problems involving configuration and web UI display do exist,
and they are still displayed in plain text.
I think a temporary way to do this is to keep your s
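(As a sketch, the SSL settings referenced above live in flink-conf.yaml;
paths and passwords are placeholders:)

security.ssl.enabled: true
security.ssl.keystore: /path/to/keystore.jks
security.ssl.keystore-password: <keystore-password>
security.ssl.key-password: <key-password>
security.ssl.truststore: /path/to/truststore.jks
security.ssl.truststore-password: <truststore-password>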