Re: Flink 1.9.1 allocating more containers than needed

2019-12-06 Thread eSKa
Thank you for the quick reply. We will wait for 1.9.2 then. I assume you don't have any estimate of when that might happen?

Flink 1.9.1 allocating more containers than needed

2019-12-06 Thread eSKa
Hello, recently we upgraded our environment from 1.6.4 to 1.9.1. We started to notice behaviour similar to what we had seen in 1.6.2, namely Flink allocating more containers on YARN than the job needs. I think it was fixed by https://issues.apache.org/jira/browse/FLINK-10848, but the problem still exists …

Re: JobManager container is running beyond physical memory limits

2018-09-25 Thread eSKa
We don't set it anywhere, so I guess it's the default of 16. Do you think that's too much?

Re: JobManager container is running beyond physical memory limits

2018-09-24 Thread eSKa
Anyone?

JobManager container is running beyond physical memory limits

2018-09-10 Thread eSKa
Hello, after switching from 1.4.2 to 1.5.2 we started to have problems with the JM container. Our use case is as follows:
- we get a request from a user
- we run a DataProcessing job
- once it finishes, we store the details in a DB
We have ~1000 jobs per day. After the version update our container dies after ~1-2 days …
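When YARN kills the JobManager container for exceeding its physical memory limit, the usual levers in this version range are the container size and the off-heap cutoff. A minimal flink-conf.yaml sketch for Flink 1.5, with illustrative values that are not from this thread:

    # total memory for the JobManager container (Flink 1.5 still uses the .mb key)
    jobmanager.heap.mb: 2048
    # fraction of the container reserved for off-heap/native memory on YARN
    containerized.heap-cutoff-ratio: 0.25
    # but never reserve less than this many MB
    containerized.heap-cutoff-min: 600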

Re: The program didn't contain a Flink job

2018-07-03 Thread eSKa
Yes - it seems that the main method returns success, but for some reason the exception is still thrown. For now we applied a workaround to catch the exception and just skip it (later on, our statusUpdater reads the statuses from the Flink dashboard).

Re: The program didn't contain a Flink job

2018-07-03 Thread eSKa
Yes - we are submitting jobs one by one. How can we change that to make it work for our needs?

Re: The program didn't contain a Flink job

2018-07-03 Thread eSKa
We are running the same job all the time, and that error happens from time to time. Here is the job submission code:

    private JobSubmissionResult submitProgramToCluster(PackagedProgram packagedProgram)
            throws JobSubmitterException, ProgramMissingJobException, ProgramInvocationException …
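The rest of the method is cut off in the archive. For context, a minimal sketch of what such a submission method might look like against the Flink 1.4 client API; the clusterClient and parallelism fields are assumptions, not taken from the original post:

    import org.apache.flink.api.common.JobSubmissionResult;
    import org.apache.flink.client.program.ClusterClient;
    import org.apache.flink.client.program.PackagedProgram;
    import org.apache.flink.client.program.ProgramInvocationException;
    import org.apache.flink.client.program.ProgramMissingJobException;

    class JobSubmitter {
        // assumed fields - e.g. a YarnClusterClient created elsewhere
        private final ClusterClient clusterClient;
        private final int parallelism;

        JobSubmitter(ClusterClient clusterClient, int parallelism) {
            this.clusterClient = clusterClient;
            this.parallelism = parallelism;
        }

        JobSubmissionResult submitProgramToCluster(PackagedProgram packagedProgram)
                throws ProgramMissingJobException, ProgramInvocationException {
            // run() extracts the JobGraph from the packaged program and submits it;
            // it throws ProgramMissingJobException when main() returns without
            // defining a job - the error discussed in this thread
            return clusterClient.run(packagedProgram, parallelism);
        }
    }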

Re: The program didn't contain a Flink job

2018-07-02 Thread eSKa
No, execute() was called and all calculations succeeded - there was a job on the dashboard with status FINISHED. After execute() our logs claimed that everything had succeeded.

The program didn't contain a Flink job

2018-07-02 Thread eSKa
Hello, we are currently running jobs on Flink 1.4.2. Our use case is as follows:
- the service gets a request from a customer
- we submit the job to Flink using YarnClusterClient
Sometimes we have up to 6 jobs running at the same time. From time to time we get the error below:

The program didn't contain a Flink job. or …
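For reference, this message (a ProgramMissingJobException in the client) is normally raised when the program's main() returns without ever triggering an execution. A minimal illustration of the usual cause - a hypothetical job, not the one from this thread:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class NoJobExample {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            // a transformation is defined, but nothing triggers execution
            env.fromElements(1, 2, 3)
               .map(new MapFunction<Integer, Integer>() {
                   @Override
                   public Integer map(Integer value) {
                       return value * 2;
                   }
               });
            // missing: env.execute(), print() or collect() - submitting this
            // program yields "The program didn't contain a Flink job."
        }
    }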

Re: Web history limit in flink 1.5

2018-07-02 Thread eSKa
Thank you, I must have missed that option somehow :)

Web history limit in flink 1.5

2018-06-28 Thread eSKa
Hello, we have been playing around with Flink 1.5 - so far so good. The only thing we are missing is the web history setup. In Flink 1.4 and before we used the *web.history* config to keep 100 jobs. With Flink 1.5 we see that the history is limited to 1 hour only. Is it possible to somehow extend/configure it?
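In Flink 1.5 the retention of finished jobs in the web UI is governed by the Dispatcher's job store rather than web.history. A hedged flink-conf.yaml sketch with illustrative values:

    # keep finished jobs for 24 hours instead of the default 3600 seconds
    jobstore.expiration-time: 86400
    # cap on the in-memory cache of archived execution graphs (bytes, default 50 MB)
    jobstore.cache-size: 52428800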

Classloading issues after changing to 1.4

2018-04-13 Thread eSKa
Hello, I still have a problem after upgrading from Flink 1.3.1 to 1.4.2. Our scenario looks like this: we have a container running on top of YARN. The machine that starts it has Flink installed and also loads some classpath libraries (e.g. Hadoop) into the container. There is a separate REST service that gets …

Re: Deserializing the InputFormat (org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat@50bb10fd) failed: unread block data after upgrade to 1.4.2

2018-03-16 Thread eSKa
Thanks a lot, it seems to work. What is the default classloader order now? To keep it working in the new version, how should I inject the Hadoop dependencies so that they are read properly? The class that is missing (HadoopInputFormat) is from the hadoop-compatibility library. I have upgraded it to version 1…
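Since Flink 1.4 the default resolve order for user code loaded from the job jar is child-first. The fix referred to above is presumably the flink-conf.yaml switch that restores the pre-1.4 behaviour:

    # resolve classes from the Flink/Hadoop classpath first, then from the
    # user jar (the default since 1.4 is child-first)
    classloader.resolve-order: parent-first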

Re: Deserializing the InputFormat (org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat@50bb10fd) failed: unread block data after upgrade to 1.4.2

2018-03-15 Thread eSKa
We were jumping from version 1.3.1 (where everything worked fine).

Deserializing the InputFormat (org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat@50bb10fd) failed: unread block data after upgrade to 1.4.2

2018-03-15 Thread eSKa
Hello, we have recently upgraded Flink to version 1.4.2, and now our jobs that rely on Parquet/Avro files located on HDFS have stopped working. I get this exception:

Caused by: org.apache.flink.runtime.client.JobExecutionException: Cannot initialize task 'CHAIN DataSource (READING_RECORDS) -> Map (MAPPING_RECO…

Submiting jobs via UI/Rest API

2018-03-09 Thread eSKa
Hi guys, we were trying to use the UI's "Submit new job" functionality (and later the REST endpoints for that). There were a few problems we found: 1. When we ran a job that had additional code after the env execution (or any sink), that code was not executed. E.g. our job was calculating some data, writing it…
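For context on point 1: web/REST submission in this version extracts the job plan by running main() in a special environment that aborts at the first execute() call, so code after it never runs. A minimal illustration with a hypothetical job and placeholder output path:

    import org.apache.flink.api.java.ExecutionEnvironment;

    public class AfterExecuteExample {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            env.fromElements("a", "b", "c").writeAsText("hdfs:///tmp/out");
            env.execute("write-job");
            // reached when submitted via the CLI, but never reached when the
            // plan-extraction environment of the web UI stops main() at execute()
            System.out.println("post-processing after the job");
        }
    }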

Re: hadoopcompatibility not in dist

2017-10-17 Thread eSKa
Bumping up this issue, as I have a similar problem now. We are running Flink on YARN and trying to submit a job via the Java API using YarnClusterClient (the run method with a PackagedProgram). The job starts to execute (we can see it on the dashboard) but fails with the error:

Caused by: java.lang.RuntimeException: Coul…
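As the thread title says, the hadoop-compatibility classes are not bundled in flink-dist, so they must either be packaged into the job jar or dropped into Flink's lib/ directory. A hedged pom.xml snippet; the Scala suffix is an assumption matching this era:

    <!-- provides org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
         not part of flink-dist, so ship it with the job or place it in lib/ -->
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-hadoop-compatibility_2.11</artifactId>
      <version>${flink.version}</version>
    </dependency>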

Additional data read inside dataset transformations

2017-09-06 Thread eSKa
Hello, I will briefly describe my use case in steps for easier understanding: 1) currently my job loads data from Parquet files using HadoopInputFormat together with AvroParquetInputFormat, with the current approach:

    AvroParquetInputFormat inputFormat = new AvroParquetInputFormat(); …
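The rest of the snippet is truncated in the archive. For reference, a minimal sketch of the typical wiring of AvroParquetInputFormat through Flink's HadoopInputFormat; the input path and class names are placeholders, not from the original post:

    import org.apache.avro.generic.GenericRecord;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.parquet.avro.AvroParquetInputFormat;

    public class ParquetReadSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            Job job = Job.getInstance();
            // hypothetical input path
            AvroParquetInputFormat.addInputPath(job, new Path("hdfs:///data/input"));

            AvroParquetInputFormat<GenericRecord> parquetFormat = new AvroParquetInputFormat<>();
            // wrap the mapreduce input format; Parquet emits (null key, record value)
            HadoopInputFormat<Void, GenericRecord> hadoopFormat =
                    new HadoopInputFormat<>(parquetFormat, Void.class, GenericRecord.class, job);

            DataSet<Tuple2<Void, GenericRecord>> records = env.createInput(hadoopFormat);
            System.out.println(records.count());
        }
    }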