Thank you for the quick reply. We will wait for 1.9.2 then. I assume you don't
have any estimate of when that might happen?
Hello,
recently we upgraded our environment from 1.6.4 to 1.9.1. We started to notice
behaviour similar to what we saw in 1.6.2: more containers are allocated on
YARN than the job needs. I think this was fixed by
https://issues.apache.org/jira/browse/FLINK-10848, but the problem still seems
to exist.
We don't set it anywhere, so I guess it is the default of 16. Do you think
that is too much?
Anyone?
Hello,
after switching from 1.4.2 to 1.5.2 we started to have problems with the JM
container.
Our use case is as follows:
- we get a request from a user
- we run a DataProcessing job
- once it is finished we store the details in a DB
We have ~1000 jobs per day. After the version update our container is dying
after ~1-2 days.
Yes - it seems that the main method returns successfully, but for some reason
that exception is still thrown.
For now we have applied a workaround that catches the exception and simply
skips it (later on our statusUpdater reads the job statuses from the Flink
dashboard).
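For reference, a minimal fragment sketch of that kind of workaround, assuming
the exception in question is the ProgramMissingJobException behind "The
program didn't contain a Flink job." (submitProgramToCluster is the method
from this thread; the LOG field is a placeholder):

try {
    submitProgramToCluster(packagedProgram);
} catch (ProgramMissingJobException e) {
    // The job itself ran to completion, so only log the exception and continue;
    // the statusUpdater later reads the real status from the Flink dashboard.
    LOG.warn("Ignoring ProgramMissingJobException after an apparently successful run", e);
}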
Yes - we are submitting the jobs one by one.
How can we change that so it works for our needs?
We are running the same job all the time, and that error happens from time to
time.
Here is the job submission code:
private JobSubmissionResult submitProgramToCluster(PackagedProgram packagedProgram)
        throws JobSubmitterException, ProgramMissingJobException, ProgramInvocationException
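The body of that method is not shown above; for context, here is a minimal,
self-contained sketch of what such a submission method can look like against
the Flink 1.4 ClusterClient API. The wrapper class, field names and the fixed
parallelism are assumptions for illustration, not the poster's actual code:

import org.apache.flink.api.common.JobSubmissionResult;
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.client.program.ProgramInvocationException;
import org.apache.flink.client.program.ProgramMissingJobException;

public class JobSubmitter {

    private final ClusterClient clusterClient; // e.g. the YarnClusterClient of the YARN session
    private final int parallelism;             // assumed default parallelism for submitted jobs

    public JobSubmitter(ClusterClient clusterClient, int parallelism) {
        this.clusterClient = clusterClient;
        this.parallelism = parallelism;
    }

    // Invokes the program's main method and submits the resulting job to the cluster.
    public JobSubmissionResult submitProgramToCluster(PackagedProgram packagedProgram)
            throws ProgramMissingJobException, ProgramInvocationException {
        return clusterClient.run(packagedProgram, parallelism);
    }
}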
No.
execute() was called and all calculations succeeded - the job was shown on the
dashboard with status FINISHED.
After execute() our own log messages confirmed that everything succeeded.
Hello,
We are currently running jobs on Flink 1.4.2. Our use case is as follows:
- the service gets a request from a customer
- we submit the job to Flink using YarnClusterClient
Sometimes we have up to 6 jobs running at the same time.
From time to time we get the error below:
The program didn't contain a Flink job.
or
Thank you, I must have missed that option somehow :)
Hello,
we have been playing around with Flink 1.5 - so far so good.
The only thing we are missing is the web history setup.
In Flink 1.4 and earlier we used the *web.history* config option to keep 100
jobs. With Flink 1.5 we can see that the history is limited to 1 hour only. Is
it possible to somehow extend/configure this?
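In case the 1-hour limit comes from the dispatcher's completed-job store that
backs the job overview in 1.5 (its expiration defaults to 3600 seconds), one
way to extend it is in flink-conf.yaml; a sketch, with 24 hours picked purely
as an example value:

# keep finished jobs visible for 24 hours instead of the default 3600 seconds
jobstore.expiration-time: 86400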
Hello,
I still have a problem after upgrading from Flink 1.3.1 to 1.4.2.
Our scenario looks like this:
we have a container running on top of YARN. The machine that starts it has
Flink installed and also loads some classpath libraries (e.g. Hadoop) into the
container.
There is a separate REST service that gets
Thanks a lot. It seems to work.
What is the default classloader order now? To keep it working in the new
version, how should I inject the Hadoop dependencies so that they are read
properly?
The class that is missing (HadoopInputFormat) comes from the
hadoop-compatibility library. I have upgraded it to version 1
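For what it's worth, since Flink 1.4 the default resolve order is child-first
(classes in the user jar win over the parent classpath). If the Hadoop and
hadoop-compatibility classes are supposed to come from the classpath the
container is started with, one knob to experiment with in flink-conf.yaml is
switching back to the pre-1.4 behaviour; a sketch, not necessarily the fix
this thread ended up with:

# resolve classes from the Flink/Hadoop classpath before the user jar
# (the default since 1.4 is child-first)
classloader.resolve-order: parent-first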
We were jumping from version 1.3.1 (where everything worked fine).
Hello,
We recently upgraded Flink to version 1.4.2. Now our jobs that rely on
Parquet/Avro files located on HDFS have stopped working.
I get this exception:
Caused by: org.apache.flink.runtime.client.JobExecutionException: Cannot
initialize task 'CHAIN DataSource (READING_RECORDS) -> Map
(MAPPING_RECO
Hi guys,
We were trying to use the UI's "Submit new job" functionality (and later the
REST endpoints for it).
There were a few problems we found:
1. When we ran a job that had additional code after the environment execution
(or after any sink), that code was not executed - see the sketch below. E.g.
our job was calculating some data, writing it
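To illustrate what item 1 refers to, a minimal DataSet sketch of "code after
execution"; the output path and the final call are made up for illustration:

import org.apache.flink.api.java.ExecutionEnvironment;

public class AfterExecuteExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(1, 2, 3).writeAsText("file:///tmp/after-execute-example");
        env.execute("after-execute example");

        // Everything below this point is the "additional code after env execution":
        // it runs when the jar is started with bin/flink run, but in the scenario
        // described above it was never reached when the same jar was submitted
        // through the web UI.
        System.out.println("storing job details to the DB (placeholder)");
    }
}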
Bumping this issue, as I have a similar problem now.
We are running Flink on YARN and trying to submit a job via the Java API using
YarnClusterClient (the run method with a PackagedProgram). The job starts to
execute (we can see it on the dashboard) but fails with this error:
Caused by: java.lang.RuntimeException: Coul
Hello,
I will describe my use case briefly, in steps, for easier understanding:
1) currently my job loads data from Parquet files using HadoopInputFormat
together with AvroParquetInputFormat, with the current approach:
AvroParquetInputFormat inputFormat = new AvroParquetInputFormat();
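For anyone finding this later, the usual way to wire AvroParquetInputFormat
into the DataSet API is to wrap it in the HadoopInputFormat from
flink-hadoop-compatibility; a sketch assuming parquet-avro's
org.apache.parquet.avro package and GenericRecord values (the HDFS path is a
placeholder):

import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.parquet.avro.AvroParquetInputFormat;

public class ParquetAvroReadExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // AvroParquetInputFormat is a mapreduce InputFormat<Void, T>; Flink's
        // HadoopInputFormat wraps it so createInput() can consume it.
        Job job = Job.getInstance();
        HadoopInputFormat<Void, GenericRecord> hadoopInput = new HadoopInputFormat<>(
                new AvroParquetInputFormat<GenericRecord>(), Void.class, GenericRecord.class, job);
        FileInputFormat.addInputPath(job, new Path("hdfs:///data/records")); // placeholder path

        DataSet<Tuple2<Void, GenericRecord>> records = env.createInput(hadoopInput);
        records.first(10).print(); // print() triggers execution for this sketch
    }
}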