Hi Alex,
I’m not very familiar with JsonLinesInputFormat; is that your own
implementation? You could look into the `createInputSplits()` method, which should
do the listing work, and rewrite it to do the listing concurrently.
> On Dec 13, 2018, at 11:56 PM, Alex Vinnik wrote:
>
> Qi,
>
> Thanks fo
Hello!
I am working on setting up a new Flink cluster that stores its checkpoints
in an HDFS cluster deployed to the same Kubernetes cluster.
I am running into problems with the dependencies required to use the
hdfs:// storage location.
The exception I am getting is
Caused by: org.apache.flink
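For context, the checkpoint location in question would typically be set up along these lines; the namenode address and path are placeholders, and hdfs:// URIs additionally need Hadoop filesystem support (e.g. a bundled Hadoop jar or HADOOP_CLASSPATH) on the Flink classpath:

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Checkpoint every 60 seconds to the HDFS cluster running alongside Flink.
env.enableCheckpointing(60_000L);
env.setStateBackend(new FsStateBackend("hdfs://namenode:8020/flink/checkpoints")); // placeholder URI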
Hi Gordon,
My use-case was slightly different.
1. Started a Kinesis connector source, with TRIM_HORIZON as the startup
position.
2. Only a few Records were written to the Kinesis stream
3. The FlinkKinesisConsumer reads the records from Kinesis stream. Then
after a period of time of not reading
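For reference, a consumer with that startup position is typically created roughly like this; the stream name and region are placeholders, and credentials configuration is omitted:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

import java.util.Properties;

Properties consumerConfig = new Properties();
consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1"); // placeholder region
// Start reading from the earliest available record in each shard.
consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "TRIM_HORIZON");

FlinkKinesisConsumer<String> consumer = new FlinkKinesisConsumer<>(
        "my-stream",                 // placeholder stream name
        new SimpleStringSchema(),
        consumerConfig);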
Qi,
Thanks for the references! How do I enable concurrent S3 file listing? Here is
the code.
// Consume the JSON files
Configuration configuration = new Configuration(GlobalConfiguration.loadConfiguration());
configuration.setBoolean(JsonLinesInputFormat.ENUMERATE_NESTED_FILES_FLAG, true);
JsonLinesIn
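As a side note, if JsonLinesInputFormat extends Flink's FileInputFormat (an assumption on my part), nested enumeration can also be enabled directly on the format instance; the constructor and path below are placeholders:

JsonLinesInputFormat format = new JsonLinesInputFormat(new Path("s3://my-bucket/json/")); // placeholder
format.setNestedFileEnumeration(true); // same effect as the ENUMERATE_NESTED_FILES_FLAG setting above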
The directory is automatically created when Flink is started; maybe it
was deleted by some cleanup process?
In any case we can make a small adjustment to the code to create all
required directories when they don't exist.
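Something along these lines, purely as an illustration of the kind of adjustment meant (the 'webTmpDir' variable is a stand-in for the configured temporary directory):

import java.io.File;
import java.io.IOException;

// Illustrative sketch: make sure the upload directory exists before it is used.
File uploadDir = new File(webTmpDir, "flink-web-upload"); // 'webTmpDir' is a stand-in
if (!uploadDir.exists() && !uploadDir.mkdirs()) {
    throw new IOException("Could not create the upload directory " + uploadDir);
}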
On 13.12.2018 14:46, Chang Liu wrote:
Dear All,
I did a workaround and
Dear All,
I applied a workaround and job submission is working now. I manually created the
directory flink-web-upload under the directory
/tmp/flink-web-ec768ff6-1db1-4afa-885f-b2828bc31127.
But I don’t think this is the proper solution. Flink should be able to create
such a directory automatically
Hi Gordon,
We are using Flink 1.6.1 with the Elasticsearch connector
flink-connector-elasticsearch6_2.11.
The stack trace is attached.
Below are the max open file descriptor limit of the Task Manager
process and the open connections to the Elasticsearch
cluster
Regards
Bhaskar
# lsof -p 620
Hi,
In that case, are you sure that your Flink version corresponds to the version
of the flink-hadoop-compatibility jar? It seems that you are using Flink 1.7
for the jar and your cluster needs to run that version as well. IIRC, this
particular class was introduced with 1.7, so using a differen
Dear all,
I am trying to submit a job but it got stuck here:
...
2018-12-13 10:43:11,476 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m
2018-12-13 10:43:11,476 INFO  org.apache.flink.configuration.GlobalConfigur
Hi,
Thanks for analyzing the problem. If it turns out that there is a problem with
the termination of the Kafka sources, could you please open an issue for that
with your results?
Best,
Stefan
> On 11. Dec 2018, at 19:04, PedroMrChaves wrote:
>
> Hello Stefan,
>
> Thank you for the reply.
>
Hi!
Thanks for reporting this.
This looks like an overlooked corner case that the Kinesis connector doesn’t
handle properly.
First, let me clarify the case and how it can be reproduced. Please let me know
if the following is correct:
1. You started a Kinesis connector source, with TRIM_HORIZON
Hi,
Besides the information that Chesnay requested, could you also provide a stack
trace of the exception that caused the job to terminate in the first place?
The Elasticsearch sink does indeed close the internally used Elasticsearch
client, which should in turn properly release all resources [
Specifically which connector are you using, and which Flink version?
On 12.12.2018 13:31, Vijay Bhaskar wrote:
Hi
We are using the Flink Elasticsearch sink, which streams at a rate of 1000
events/sec, as described in
https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.ht
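For reference, the sink from that documentation page is wired up roughly as follows; this is only a sketch for the elasticsearch6 connector, with placeholder host, index, and type names:

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink;

import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

List<HttpHost> httpHosts = new ArrayList<>();
httpHosts.add(new HttpHost("es-host", 9200, "http")); // placeholder host

ElasticsearchSink.Builder<String> esSinkBuilder = new ElasticsearchSink.Builder<>(
        httpHosts,
        new ElasticsearchSinkFunction<String>() {
            @Override
            public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
                Map<String, String> json = new HashMap<>();
                json.put("data", element);
                indexer.add(Requests.indexRequest()
                        .index("my-index")   // placeholder index
                        .type("my-type")     // placeholder type (still required on ES 6.x)
                        .source(json));
            }
        });

// Small bulk sizes are fine for testing; tune the flush settings for ~1000 events/sec.
esSinkBuilder.setBulkFlushMaxActions(100);

stream.addSink(esSinkBuilder.build()); // 'stream' is a DataStream<String> defined elsewhere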
Hi Rakesh,
So the problem is that you want your Flink job to monitor the
'/data/ingestion/ingestion-raw-product' path for new files and process
them when they appear, right?
Can you try env.readFile but with watchType =
FileProcessingMode.PROCESS_CONTINUOUSLY?
You can see an example in how
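A minimal sketch of that call, with a placeholder input format and scan interval:

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

String path = "/data/ingestion/ingestion-raw-product";
TextInputFormat format = new TextInputFormat(new Path(path)); // placeholder format

// Re-scan the directory every 10 seconds and pick up files as they appear.
DataStream<String> lines = env.readFile(
        format,
        path,
        FileProcessingMode.PROCESS_CONTINUOUSLY,
        10_000L);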
Hello once again,
If you want to use the CEP library you can, for example, key by device and then apply a
pattern:
Pattern.begin("connect").where(...)
    .followedBy("established").where(new IterativeCondition<Event>() {
        @Override
        public boolean filter(Event value, Context ctx) throws Exception {
            re
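Once the pattern is completed, it is applied to the keyed stream roughly like this; the 'events' stream, the Event/Alert types and the key selector are placeholders:

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.streaming.api.datastream.DataStream;

import java.util.List;
import java.util.Map;

PatternStream<Event> patternStream = CEP.pattern(
        events.keyBy(Event::getDeviceId), // key by device, as suggested above
        pattern);                         // the Pattern built above

DataStream<Alert> alerts = patternStream.select(
        new PatternSelectFunction<Event, Alert>() {
            @Override
            public Alert select(Map<String, List<Event>> match) {
                // Each match holds the "connect" and "established" events for one device.
                return new Alert(match.get("connect").get(0),
                                 match.get("established").get(0));
            }
        });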
Hi Florin,
I concur with Dian. If you have any other questions, please do not
hesitate to ask.
Best,
Dawid
On 13/12/2018 03:37, fudian.fd wrote:
> Hi Florin,
>
> Are you using processing time or event time? The JIRA FLINK-7384
> allows emitting timed-out patterns without having to wait for the n
Hello, Dian!
Thank you very much for your explanations. In my case, the CEP patterns are
based on event time. Also, as you said, I have many devices and also
many ports on those devices. Therefore, I'm using keyed streams.
So I would like to know which device was disconnected on which port.
Is that