Re: Failing to build Flink 1.9 using Scala 2.12

2022-12-28 Thread Milind Vaidya
Hi David, Thanks for this useful information. This unblocked me, and the local build now succeeds, which was not the case earlier. I have a few more questions though. - Are there any requirements for the Maven and/or plugin versions? - Another error says Failure to find org.apache.kafka:k

Re: Failing to build Flink 1.9 using Scala 2.12

2022-12-24 Thread David Anderson
Flink only officially supports Scala 2.12 up to 2.12.7 -- you are running into the binary compatibility check, intended to keep you from unknowingly running into problems. You can disable japicmp, and everything will hopefully work: mvn clean install -DskipTests -Djapicmp.skip -Dscala-2.12 -Dscala
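For reference, the command is cut off in the archive; a plausible full invocation, based on the Flink 1.9 build instructions (the Scala patch version here is an assumption), is:

```
# skip tests and the japicmp binary-compatibility check, build for Scala 2.12
mvn clean install -DskipTests -Djapicmp.skip \
    -Dscala-2.12 -Dscala.version=2.12.7 -Dscala.binary.version=2.12
```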

Failing to build Flink 1.9 using Scala 2.12

2022-12-23 Thread Milind Vaidya
Hi, First of all, I do understand that I am using a very old version, but as of now the team cannot help it. We need to move to Scala 2.12 first, and then we will move forward towards the latest version of Flink. I have added the following to the main pom.xml: 2.11.12, 2.11. Under Scala-2.11 remov

Re: Failing to compile Flink 1.9 with Scala 2.12

2022-08-30 Thread Martijn Visser
ilind Vaidya" > *To: *"Weihua Hu" > *Cc: *"User" > *Sent: *Friday, August 19, 2022, 1:26:45 AM > *Subject: *Re: Failing to compile Flink 1.9 with Scala 2.12 > > Hi Weihua, > > Thanks for the update. I do understand that, but right now it is not > possibl

Re: Failing to compile Flink 1.9 with Scala 2.12

2022-08-18 Thread yuxia
At least for Flink 1.15, it's recommended to use Maven 3.2.5. So I guess maybe you can try using a lower version of Maven. Best regards, Yuxia From: "Milind Vaidya" To: "Weihua Hu" Cc: "User" Sent: Friday, August 19, 2022, 1:26:45 AM Subject: Re: Failing to com

Re: Failing to compile Flink 1.9 with Scala 2.12

2022-08-18 Thread Milind Vaidya
Hi Weihua, Thanks for the update. I do understand that, but right now it is not possible to update immediately to 1.15, so I wanted to know what the way out is. - Milind On Thu, Aug 18, 2022 at 7:19 AM Weihua Hu wrote: > Hi > Flink 1.9 has not been updated since 2020-04-24; it's recomme

Re: Failing to compile Flink 1.9 with Scala 2.12

2022-08-18 Thread Weihua Hu
Hi, Flink 1.9 has not been updated since 2020-04-24; it's recommended to use the latest stable version, 1.15.1. Best, Weihua On Thu, Aug 18, 2022 at 5:36 AM Milind Vaidya wrote: > Hi > > Trying to compile and build Flink jars based on Scala 2.12. > > Settings : > Java

Failing to compile Flink 1.9 with Scala 2.12

2022-08-17 Thread Milind Vaidya
Hi, Trying to compile and build Flink jars based on Scala 2.12. Settings: Java 8, Maven 3.6.3 / 3.8.6. Many online posts suggest using Java 8, which is already in place. Building using Jenkins. Any clues as to how to get rid of the error? The pom uses net.alchim31.maven scala-maven-plugin 3.3.2 with -nobootcp -Xs

Re: idleTimeMsPerSecond on Flink 1.9?

2021-03-10 Thread Till Rohrmann
Hi Lakshmi, as you have said, the StreamTask code base has evolved quite a bit between Flink 1.9 and Flink 1.12. With the mailbox model it now works quite differently. Moreover, the community no longer actively maintains versions < 1.11. Hence, if possible, I would recommend upgrading to

idleTimeMsPerSecond on Flink 1.9?

2021-03-08 Thread Lakshmi Gururaja Rao
Hi, I'm trying to understand the implementation of idleTimeMsPerSecond. Specifically, what I'm trying to do is adapt this metric to be used with Flink 1.9 (for a fork). I tried an approach similar to this PR <https://github.com/apache/flink/pull/11564/files> and measuring the t

Re: Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-14 Thread bat man
Hello Arvid, Thanks, I'll check my config, use the correct reporter, and test it out. Thanks, Hemant On Fri, 14 Aug 2020 at 6:57 PM, Arvid Heise wrote: > Hi Hemant, > > according to the influx section of the 1.9 metric documentation [1], you > should use the reporter without a factory. The fa

Re: Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-14 Thread Arvid Heise
Hi Hemant, according to the influx section of the 1.9 metric documentation [1], you should use the reporter without a factory. The factory was added later.
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: localhost
metrics.reporter.
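A sketch of the complete 1.9-style reporter block in flink-conf.yaml, for reference (host, port, and database values are placeholders):

```
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: localhost
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink
```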

Re: Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-12 Thread bat man
Has anyone integrated Flink metrics with external systems on AWS EMR? Can you share whether this is a configuration issue or an EMR-specific issue? Thanks, Hemant On Wed, Aug 12, 2020 at 9:55 PM bat man wrote: > An update: in the yarn logs I could see the below - > > Classpath: > lib/fl

Re: Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-12 Thread bat man
An update: in the yarn logs I could see the below -
Classpath: lib/flink-metrics-influxdb-1.9.0.jar:lib/flink-shaded-hadoop-2-uber-2.8.5-amzn-5-7.0.jar:lib/flink-table-blink_2.11-1.9.0.jar:lib/flink-table_2.11-1.9.0.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.15.jar:log4j.properties:plugins/inf

Pushing metrics to Influx from Flink 1.9 on AWS EMR(5.28)

2020-08-12 Thread bat man
Hello Experts, I am running Flink 1.9.0 on AWS EMR (emr-5.28.1). I want to push metrics to InfluxDB. I followed the documentation [1], added the configuration to /usr/lib/flink/conf/flink-conf.yaml, and copied the jar to the /usr/lib/flink/lib folder on the master node. However, I also understand that t

Re: Handle idle kafka source in Flink 1.9

2020-08-05 Thread bat man
Hello Arvid, Thanks for the suggestion/reference, and my apologies for the late reply. With this I am able to process the data even with some topics not having regular data. Obviously, late data is being handled via a side-output, and there is a process for it. One challenge is to handle the back-fill as wh

Re: Any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11

2020-08-03 Thread Avijit Saha
ations to the image? This exception should only be thrown if there is already a file with the same path, and I don't think Flink would do that. On 03/08/2020 21:43, Avijit Saha wrote: > Hello, > Has there been any change in behavior related to the "web.upl

Re: Any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11

2020-08-03 Thread Chesnay Schepler
n any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11? I have a failure case where, when building an image using "flink:1.11.0-scala_2.12" in the Dockerfile, the job manager job submissions fail with the following Exception but the same flow

Any change in behavior related to the "web.upload.dir" behavior between Flink 1.9 and 1.11

2020-08-03 Thread Avijit Saha
Hello, Has there been any change related to the "web.upload.dir" behavior between Flink 1.9 and 1.11? I have a failure case where, when building an image using "flink:1.11.0-scala_2.12" in the Dockerfile, the job manager job submissions fail with the following Excepti

Re: Between Flink 1.9 and 1.11 - any behavior change for web.upload.dir

2020-08-03 Thread Avijit Saha
Hello, Has there been any change related to the "web.upload.dir" behavior between Flink 1.9 and 1.11? I have a failure case where, when building an image using "flink:1.11.0-scala_2.12" in the Dockerfile, the job manager job submissions fail with the following Excepti

Re: Handle idle kafka source in Flink 1.9

2020-07-30 Thread Arvid Heise
Hi Hemant, sorry for the late reply. You can just create your own watermark assigner and either copy the assigner from Flink 1.11 or take the one that we use in our trainings [1]. [1] https://github.com/ververica/flink-training-troubleshooting/blob/master/src/main/java/com/ververica/flinktrainin
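A minimal sketch of such an assigner for Flink 1.9 (an illustration, not the exact class from the training repo): while events flow, the watermark trails the maximum event time; once no event has arrived for idleTimeoutMs, it advances with the wall clock so downstream windows keep firing.

```java
import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

public abstract class IdleAwareWatermarkAssigner<T> implements AssignerWithPeriodicWatermarks<T> {

    private final long maxOutOfOrdernessMs;
    private final long idleTimeoutMs;
    private long maxEventTime = Long.MIN_VALUE;
    private long lastEventSeenAt = System.currentTimeMillis();

    protected IdleAwareWatermarkAssigner(long maxOutOfOrdernessMs, long idleTimeoutMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
        this.idleTimeoutMs = idleTimeoutMs;
    }

    /** Supplied by the concrete subclass: extract the event time from a record. */
    protected abstract long eventTime(T element);

    @Override
    public long extractTimestamp(T element, long previousElementTimestamp) {
        long ts = eventTime(element);
        maxEventTime = Math.max(maxEventTime, ts);
        lastEventSeenAt = System.currentTimeMillis();
        return ts;
    }

    @Override
    public Watermark getCurrentWatermark() {
        long now = System.currentTimeMillis();
        if (now - lastEventSeenAt > idleTimeoutMs) {
            // no events for a while: treat the source as idle and let the
            // watermark follow processing time
            return new Watermark(now - maxOutOfOrdernessMs);
        }
        return new Watermark(maxEventTime == Long.MIN_VALUE
                ? Long.MIN_VALUE : maxEventTime - maxOutOfOrdernessMs);
    }
}
```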

Re: Handle idle kafka source in Flink 1.9

2020-07-23 Thread bat man
Thanks Niels for a great talk. You have covered two of my pain areas - slim and broken streams - since I am dealing with device data from on-prem data centers. The first option of generating fabricated watermark events is fine; however, as mentioned in your talk, how are you handling forwarding it to

Re: Handle idle kafka source in Flink 1.9

2020-07-22 Thread Niels Basjes
Have a look at this presentation I gave a few weeks ago. https://youtu.be/bQmz7JOmE_4 Niels Basjes On Wed, 22 Jul 2020, 08:51 bat man, wrote: > Hi Team, > > Can someone share their experiences handling this. > > Thanks. > > On Tue, Jul 21, 2020 at 11:30 AM bat man wrote: > >> Hello, >> >> I ha

Re: Handle idle kafka source in Flink 1.9

2020-07-21 Thread bat man
Hi Team, Can someone share their experiences handling this? Thanks. On Tue, Jul 21, 2020 at 11:30 AM bat man wrote: > Hello, > > I have a pipeline which consumes data from a Kafka source. Since the > partitions are partitioned by device_id, in case a group of devices is down > some partitions

Handle idle kafka source in Flink 1.9

2020-07-20 Thread bat man
Hello, I have a pipeline which consumes data from a Kafka source. Since the partitions are partitioned by device_id, in case a group of devices is down some partitions will not get a normal flow of data. I understand from the documentation here [1] that in Flink 1.11 one can declare the source idle - Watermar
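For reference, the Flink 1.11 API the post refers to looks roughly like this (a sketch; the durations, the Event type, and the kafkaSource variable are illustrative):

```java
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

WatermarkStrategy<Event> strategy =
    WatermarkStrategy.<Event>forBoundedOutOfOrderness(Duration.ofSeconds(10))
        // partitions that see no data for 1 minute are marked idle and
        // stop holding back the overall watermark
        .withIdleness(Duration.ofMinutes(1))
        .withTimestampAssigner((event, ts) -> event.getEventTime());

kafkaSource.assignTimestampsAndWatermarks(strategy); // kafkaSource: a FlinkKafkaConsumer
```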

Re: flink 1.9 conflict jackson version

2020-04-07 Thread Fanbin Bu
On 12/17/2019 08:10, Fanbin Bu wrote: >> Hi, >> After I upgraded to Flink 1.9, I got the following error message on EMR; it >> wor

Re: flink 1.9 conflict jackson version

2020-04-06 Thread aj
On 12/17/2019 08:10, Fanbin Bu wrote: > Hi, > After I upgraded to Flink 1.9, I got

Re: Hadoop user jar for flink 1.9 plus

2020-03-20 Thread Vishal Santoshi
.8.x in production and were planning to go to flink 1.9 or above. We have always used the hadoop uber jar from https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop2-uber but it seems they go up to 1.8.3 and their distribution ends in 2019.

Re: Hadoop user jar for flink 1.9 plus

2020-03-17 Thread Chesnay Schepler
You can download flink-shaded-hadoop from the downloads page: https://flink.apache.org/downloads.html#additional-components On 17/03/2020 15:56, Vishal Santoshi wrote: We have been on flink 1.8.x on production and were planning to go to flink 1.9 or above. We have always used hadoop uber jar
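A sketch of the resulting setup (the version numbers are examples; pick the Hadoop/flink-shaded combination you need from the downloads page):

```
# place the shaded Hadoop uber jar into the lib/ folder of the Flink distribution
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-7.0/flink-shaded-hadoop-2-uber-2.8.3-7.0.jar
mv flink-shaded-hadoop-2-uber-2.8.3-7.0.jar flink-1.9.0/lib/
```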

Hadoop user jar for flink 1.9 plus

2020-03-17 Thread Vishal Santoshi
We have been on flink 1.8.x in production and were planning to go to flink 1.9 or above. We have always used the hadoop uber jar from https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop2-uber but it seems they go up to 1.8.3 and their distribution ends in 2019. How or where do we

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-27 Thread godfrey he
Sorry, 1.10 does not support "CREATE VIEW" in raw SQL either. The workaround is: - Using TableEnvironment.createTemporaryView... - Or using "create view" and "drop view" in the sql-client.

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-27 Thread kant kodali
FLIP-71 will be finished in 1.11 soon. Best, Jingsong Lee On Sun, Jan 19, 2020 at 4:10 PM kant kodali wrote:

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-27 Thread kant kodali
finished in 1.11 soon. Best, Jingsong Lee On Sun, Jan 19, 2020 at 4:10 PM kant kodali wrote: > I tried the following.

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-26 Thread godfrey he
ableEnv.sqlUpdate("CREATE VIEW my_view AS SELECT * FROM sample1 FULL OUTER JOIN sample2 on sample1.f0=sample2.f0"); Table result = bsTableEnv.sqlQuery("select * from my_view");

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-25 Thread kant kodali
sult = bsTableEnv.sqlQuery("select * from my_view"); It looks like https://cwiki.apache.org/confluence/display/FLINK/FLIP-71+-+E2E+View+support+in+FLINK+SQL views are not supported. Can I expect them to be supporte

[checkpoint] Flink 1.9, checkpoint is declined for exception with message 'Pending record count must be zero at this point'

2020-02-20 Thread tao wang
Hi all, can someone help me? Thanks. The full exception is as follows:
2020-02-21 08:32:15,738 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Discarding checkpoint 941 of job 0e16cf38a0bff313544e1f31d078f75b. org.apache.flink.runtime.checkpoint.CheckpointException: C

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread Jingsong Li
It looks like https://cwiki.apache.org/confluence/display/FLINK/FLIP-71+-+E2E+View+support+in+FLINK+SQL views are not supported. Can I expect them to be supported in Flink 1.10? Currently, with Spark SQL when the query g

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread Jark Wu
Currently, with Spark SQL when the query gets big I break it down into views, and this is one of the most important features my application relies on. Is there any workaround for this at the moment? Thanks! On Sat, Jan 18, 2020 at 6:24 PM kant kodali <mailto:kanth...@gmail.com> wrote: > Hi All, > Does Flink 1.9 support create or replace views syntax in raw SQL, like Spark streaming does? > Thanks! -- Best, Jingsong Lee

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread kant kodali
1+-+E2E+View+support+in+FLINK+SQL views are not supported. Can I expect them to be supported in Flink 1.10? Currently, with Spark SQL when the query gets big I break it down into views and this is one of the most important features my application relies

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread Jingsong Li
hen the query gets big I break it down into views and this is one of the most important features my application relies on. Is there any workaround for this at the moment? Thanks! On Sat, Jan 18, 2020 at 6:24 PM kant kodali wrote: > Hi All,

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-19 Thread kant kodali
moment? Thanks! On Sat, Jan 18, 2020 at 6:24 PM kant kodali wrote: > Hi All, > Does Flink 1.9 support create or replace views syntax in raw SQL, like Spark streaming does? > Thanks!

Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-18 Thread kant kodali
Hi All, Does Flink 1.9 support create or replace views syntax in raw SQL, like Spark streaming does? Thanks!

Re: are blink changes merged into flink 1.9?

2020-01-12 Thread Benchao Li
- Blink Planner - Old Planner (Legacy Planner). You can try out the blink planner by [2]. Hope this helps. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#main-differences-

Re: are blink changes merged into flink 1.9?

2020-01-12 Thread Benchao Li
You can try out the blink planner by [2]. Hope this helps. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#main-differences-between-the-two-planners [2] https://ci.apache.org/projects/flink/flink-docs-rele

Re: are blink changes merged into flink 1.9?

2020-01-12 Thread kant kodali
ble/common.html#main-differences-between-the-two-planners [2] https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#create-a-tableenvironment kant kodali wrote on Sunday, January 12, 2020 at 7:48 AM: > Hi All, > Are blink changes merged into fl

Re: are blink changes merged into flink 1.9?

2020-01-11 Thread Benchao Li
table/common.html#main-differences-between-the-two-planners [2] https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#create-a-tableenvironment kant kodali wrote on Sunday, January 12, 2020 at 7:48 AM: > Hi All, > Are blink changes merged into flink 1.9? It looks like there are a lot

are blink changes merged into flink 1.9?

2020-01-11 Thread kant kodali
Hi All, Are blink changes merged into flink 1.9? It looks like there are a lot of features and optimizations in Blink, and if they aren't merged into flink 1.9 I am not sure which one to use. Is there any plan towards merging it? Thanks!

Re: Migrate custom partitioner from Flink 1.7 to Flink 1.9

2020-01-08 Thread Arvid Heise
Hi Salva, I already answered on SO [1], but I'll replicate it here: With Flink 1.9, you cannot dynamically broadcast to all channels anymore. Your StreamPartitioner has to statically specify if it's a broadcast with isBroadcast. Then, selectChannel is never invoked. Do you have a sp
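A minimal sketch of the 1.9 contract being described (the MyEvent type and key logic are illustrative; method names follow the 1.9 StreamPartitioner/ChannelSelector interfaces mentioned above):

```java
import org.apache.flink.runtime.plugable.SerializationDelegate;
import org.apache.flink.streaming.runtime.partitioner.StreamPartitioner;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

public class MyStaticPartitioner extends StreamPartitioner<MyEvent> {

    @Override
    public int selectChannel(SerializationDelegate<StreamRecord<MyEvent>> record) {
        // selectChannels(int[]) from 1.7 is gone; exactly one channel is returned.
        // numberOfChannels is inherited from StreamPartitioner.
        MyEvent event = record.getInstance().getValue();
        return Math.floorMod(event.getKey().hashCode(), numberOfChannels);
    }

    @Override
    public boolean isBroadcast() {
        // broadcasting is now a static property of the partitioner
        return false;
    }

    @Override
    public StreamPartitioner<MyEvent> copy() {
        return this;
    }
}
```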

Re: Migrate custom partitioner from Flink 1.7 to Flink 1.9

2020-01-03 Thread Salva Alcántara
ation(), new MyDynamicPartitioner()) ) ``` The problem when migrating to Flink 1.9 is that MyDynamicPartitioner cannot handle broadcasted elements as explained in the question description. So, based on your reply, I guess I could do something like this: ``` resultSingleChannel = ne

Re: Migrate custom partitioner from Flink 1.7 to Flink 1.9

2020-01-03 Thread Chesnay Schepler
adcast().union(singleChannel) // apply operations on result On 26/12/2019 08:20, Salva Alcántara wrote: I am trying to migrate a custom dynamic partitioner from Flink 1.7 to Flink 1.9. The original partitioner implemented the `selectChannels` method within the `StreamPartitioner` interface lik
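A sketch of the suggested split-and-union workaround (the Event type, its broadcast flag, and the key-based partitioner are assumptions for illustration):

```java
import org.apache.flink.api.common.functions.Partitioner;
import org.apache.flink.streaming.api.datastream.DataStream;

// hypothetical key-based partitioner for the records that go to one channel
Partitioner<String> byKey =
        (key, numPartitions) -> Math.floorMod(key.hashCode(), numPartitions);

DataStream<Event> toAll = input.filter(e -> e.isBroadcastEvent());
DataStream<Event> toOne = input.filter(e -> !e.isBroadcastEvent());

DataStream<Event> result = toAll.broadcast()
        .union(toOne.partitionCustom(byKey, e -> e.getKey()));
// apply operations on result
```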

Migrate custom partitioner from Flink 1.7 to Flink 1.9

2019-12-25 Thread Salva Alcántara
I am trying to migrate a custom dynamic partitioner from Flink 1.7 to Flink 1.9. The original partitioner implemented the `selectChannels` method within the `StreamPartitioner` interface like this: ```java // Original: working for Flink 1.7 //@Override public int[] selectChannels

Re: Flink 1.9 SQL Kafka Connector,Json format,how to deal with not json message?

2019-12-25 Thread Jark Wu
Hi LakeShen, I'm sorry, there is no such configuration for the json format currently. I think it makes sense to add such a configuration, like 'format.ignore-parse-errors' in the csv format. I created FLINK-15396 [1] to track this. Best, Jark [1]: https://issues.apache.org/jira/browse/FLINK-15396 On Thu, 26
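For reference, this is roughly what the existing csv switch mentioned by Jark looks like in 1.9 DDL (a sketch; connector properties are shown in the 1.9 key/value style with placeholder values):

```sql
CREATE TABLE csv_src (
  id varchar,
  a varchar
) WITH (
  'connector.type' = 'kafka',
  'connector.version' = 'universal',
  'connector.topic' = 'my_topic',
  'connector.properties.0.key' = 'bootstrap.servers',
  'connector.properties.0.value' = 'localhost:9092',
  'update-mode' = 'append',
  'format.type' = 'csv',
  'format.ignore-parse-errors' = 'true'
);
```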

Flink 1.9 SQL Kafka Connector,Json format,how to deal with not json message?

2019-12-25 Thread LakeShen
Hi community, when I write the flink ddl sql like this:
CREATE TABLE kafka_src (
  id varchar,
  a varchar,
  b TIMESTAMP,
  c TIMESTAMP
) with (
  ...
  'format.type' = 'json',
  'format.property-version' = '1',
  'format.derive-schema' = 'true',
  'update-mode' = 'append'
);
If the me

flink 1.9 conflict jackson version

2019-12-16 Thread Fanbin Bu
Hi, After I upgraded to Flink 1.9, I got the following error message on EMR; it works locally in IntelliJ. I'm explicitly declaring the dependency as implementation 'com.fasterxml.jackson.module:jackson-module-scala_2.11:2.10.1' and I have implementation group: 'com.amazonaws

Re: Flink 1.9 Sql Rowtime Error

2019-11-01 Thread OpenInx
Hi Polarisary, I checked the flink codebase and your stacktraces; it seems you need to format the timestamp as "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'". The code is here: https://github.com/apache/flink/blob/38e4e2b8f9bc63a793a2bddef5a578e3f80b7376/flink-formats/flink-json/src/main/java/org/apache/flink/forma
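So the rowtime values in the incoming JSON would need to be shaped like this (the field name is a hypothetical placeholder):

```
{"rowtime_col": "2019-11-01T12:30:00.000Z"}
```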

Flink 1.9 Sql Rowtime Error

2019-11-01 Thread Polarisary
Hi All: I have defined a kafka connector Descriptor and registered the table:
tEnv.connect(new Kafka()
  .version("universal")
  .topic(tableName)
  .startFromEarliest()
  .property("zookeeper.connect", "xxx")
  .property("bootstrap.servers", "xxx")
  .property("group.id"

Re: Flink 1.9 measuring time taken by each operator in DataStream API

2019-10-25 Thread Fabian Hueske
Hi Komal, Measuring latency is always a challenge. The problem here is that your functions are chained, meaning that the result of a function is directly passed on to the next function and only when the last function emits the result, the first function is called with a new record. This makes meas
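One common way to expose per-operator times in the dashboard (a sketch, not from Fabian's reply; breaking chains has a performance cost, so do this only while measuring):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// disable chaining globally so every operator runs as its own task
// and reports its own metrics
env.disableOperatorChaining();

DataStream<String> stream = env.fromElements("a", "b", "c");
// or break the chain only around the operator of interest
stream.map(String::toUpperCase).disableChaining().name("my-map");
```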

Flink 1.9 measuring time taken by each operator in DataStream API

2019-10-24 Thread Komal Mariam
Hello, I have a few questions regarding flink's dashboard and monitoring tools. I have a fixed number of records that I process through the DataStream API on my standalone cluster and want to know how long it takes to process them. My questions are: 1) How can I see the time taken in milli

Re: flink 1.9

2019-10-18 Thread Gezim Sejdiu
Hi Chesnay, I see. Many thanks for your prompt reply. Will make use of the flink-shaded-hadoop-uber jar when deploying Flink using Docker, starting from Flink v1.8.0. Best regards, On Fri, Oct 18, 2019 at 1:30 PM Chesnay Schepler wrote: > We will not release a Flink version bundling Hadoop. > > The v

Re: flink 1.9

2019-10-18 Thread Chesnay Schepler
We will not release a Flink version bundling Hadoop. The versioning for flink-shaded-hadoop-uber is entirely decoupled from the Flink version. You can just use the flink-shaded-hadoop-uber jar linked on the downloads page with any Flink version. On 18/10/2019 13:25, GezimSejdiu wrote: Hi Flink com

Re: flink 1.9

2019-10-18 Thread GezimSejdiu
Hi Flink community, I'm aware of the split done for the binary sources of Flink starting from version 1.8.0, i.e. there are no hadoop-shaded binaries available on the apache dist. archive: https://archive.apache.org/dist/flink/flink-1.8.0/. Are there any plans to move the hadoop pre-built binaries t

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-14 Thread Timothy Victor
Thanks for the insight Roman, and also for the GC tips. There are 2 reasons why I wanted to see this memory released. First as a way to just confirm my understanding of Flink memory segment handling. Second is that I run a single standalone cluster that runs both streaming and batch jobs, and t

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-14 Thread Roman Grebennikov
Forced GC does not mean that the JVM will even try to release the freed memory back to the operating system. This highly depends on the JVM and garbage collector used for your Flink setup, but most probably it's jvm8 with the ParallelGC collector. ParallelGC is known to be not that aggressive o

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-12 Thread Timothy Victor
This part about the GC not cleaning up after the job finishes makes sense. However, I observed that even after I run a "jcmd GC.run" on the task manager process ID the memory is still not released. This is what concerns me. Tim On Sat, Oct 12, 2019, 2:53 AM Xintong Song wrote: > Generally y

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-11 Thread Timothy Victor
Thanks Xintong! In my case both of those parameters are set to false (default). I think I am sort of following what's happening here. I have one TM with heap size set to 1GB. When the cluster is started the TM doesn't use that 1GB (no allocations). Once the first batch job is submitted I can

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-10 Thread Xintong Song
I think it depends on your configuration. - Are you using on-heap/off-heap managed memory? (configured by 'taskmanager.memory.off-heap', false by default) - Is managed memory pre-allocated? (configured by 'taskmanager.memory.preallocate', false by default) If managed memory is pre-alloca
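The two flink-conf.yaml switches under discussion, shown with the defaults stated above:

```
taskmanager.memory.off-heap: false     # managed memory lives on the JVM heap
taskmanager.memory.preallocate: false  # managed memory is allocated lazily
```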

Re: Batch Job in a Flink 1.9 Standalone Cluster

2019-10-10 Thread Yang Wang
Hi Tim, Do you mean the user heap memory used by the tasks of finished jobs is not freed up? If that were the case, the memory usage of the taskmanager would increase as more and more jobs finish. However, this does not happen; the memory will be freed up by jvm gc. BTW, flink has its own memory managem

Batch Job in a Flink 1.9 Standalone Cluster

2019-10-10 Thread Timothy Victor
After a batch job finishes in a flink standalone cluster, I notice that the memory isn't freed up. I understand Flink uses its own memory manager and just allocates a large tenured byte array that is not GC'ed. But does the memory used in this byte array get released when the batch job is done

Re: flink 1.9

2019-10-09 Thread Vishal Santoshi
Thanks a lot. On Wed, Oct 9, 2019, 8:55 AM Chesnay Schepler wrote: > Java 11 support will be part of Flink 1.10 (FLINK-10725). You can take the > current master and compile&run it on Java 11. > > We have not investigated later Java versions yet. > On 09/10/2019 14:14, Vishal Santoshi wrote: > >

Re: flink 1.9

2019-10-09 Thread Chesnay Schepler
Java 11 support will be part of Flink 1.10 (FLINK-10725). You can take the current master and compile & run it on Java 11. We have not investigated later Java versions yet. On 09/10/2019 14:14, Vishal Santoshi wrote: Thank you. A related question: has flink been tested with jdk11 or above? O

Re: flink 1.9

2019-10-09 Thread Vishal Santoshi
Thank you. A related question: has flink been tested with jdk11 or above? On Tue, Oct 8, 2019, 5:18 PM Steven Nelson wrote: > > https://flink.apache.org/downloads.html#apache-flink-190 > > > Sent from my iPhone > > On Oct 8, 2019, at 3:47 PM, Vishal Santoshi > wrote: > > where do I get the c

Re: flink 1.9

2019-10-08 Thread Steven Nelson
https://flink.apache.org/downloads.html#apache-flink-190 Sent from my iPhone > On Oct 8, 2019, at 3:47 PM, Vishal Santoshi wrote: > > where do I get the corresponding jar for 1.9 ? > > flink-shaded-hadoop2-uber-2.7.5-1.8.0.jar > > Thanks..

flink 1.9

2019-10-08 Thread Vishal Santoshi
Where do I get the corresponding jar for 1.9? flink-shaded-hadoop2-uber-2.7.5-1.8.0.jar Thanks.

Re: Flink 1.9, MapR secure cluster, high availability

2019-09-19 Thread Stephan Ewen
Hi! Not sure what is happening here. - I cannot understand why MapR FS should use Flink's relocated ZK dependency. - It might be that it doesn't, and all the logging we see probably comes from Flink's HA services. Maybe the MapR stuff uses a different logging framework and the logs do not

Re: Flink 1.9, MapR secure cluster, high availability

2019-09-16 Thread Maxim Parkachov
Hi Stephan, sorry for the late answer, I didn't have access to the cluster. Here are the log and stacktrace. Hope this helps, Maxim. - 2019-09-16 18:00:31,804 INFO org.apache.fli

Re: [flink-1.9] how to read local json file through Flink SQL

2019-09-08 Thread Anyang Hu
Hi Wesley, This is not the way I want; I want to read local json data in Flink SQL by defining a DDL. Best regards, Anyang Wesley Peng wrote on Sunday, September 8, 2019 at 6:14 PM: > On 2019/9/8 5:40 PM, Anyang Hu wrote: > > In flink1.9, is there a way to read a local json file in Flink SQL like > > the reading of csv f

Re: [flink-1.9] how to read local json file through Flink SQL

2019-09-08 Thread Wesley Peng
On 2019/9/8 5:40 PM, Anyang Hu wrote: In flink1.9, is there a way to read a local json file in Flink SQL like the reading of a csv file? hi, might this thread help you? http://mail-archives.apache.org/mod_mbox/flink-dev/201604.mbox/%3cCAK+0a_o5=c1_p3sylrhtznqbhplexpb7jg_oq-sptre2neo...@mail.gmail.

[flink-1.9] how to read local json file through Flink SQL

2019-09-08 Thread Anyang Hu
Hi guys, In flink1.9, is there a way to read a local json file in Flink SQL like the reading of a csv file? Now we can read a local csv file like the following; replacing 'csv' with 'json' does not work:
create table source (
  first varchar,
  id int
) with (
  'connector.type' = 'filesystem',
  'connector.p
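For reference, a completed version of the csv example above (a sketch; the path is a placeholder and 'format.derive-schema' assumes the 1.9 descriptor properties):

```sql
create table source (
  first varchar,
  id int
) with (
  'connector.type' = 'filesystem',
  'connector.path' = 'file:///tmp/input.csv',
  'format.type' = 'csv',
  'format.derive-schema' = 'true'
);
```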

Re: Flink 1.9, MapR secure cluster, high availability

2019-08-30 Thread Stephan Ewen
Could you share the stack trace where the failure occurs, so we can see why the Flink ZK is used during MapR FS access? /CC Till and Tison - just FYI On Fri, Aug 30, 2019 at 9:40 AM Maxim Parkachov wrote: > Hi Stephan, > > With previous versions, I tried around 1.7, I always had to compile MapR

Re: Flink 1.9, MapR secure cluster, high availability

2019-08-30 Thread Maxim Parkachov
Hi Stephan, With previous versions (I tried around 1.7) I always had to compile MapR hadoop to get it working. With 1.9 I took the hadoop-less Flink, which worked with MapR FS until I switched on HA, so it is hard to say whether this is a regression or not. The error happens when Flink tries to initialize B

Re: Flink 1.9, MapR secure cluster, high availability

2019-08-29 Thread Stephan Ewen
Hi Maxim! The change of the MapR dependency should not have an impact on that. Do you know if the same thing worked in prior Flink versions? Is that a regression in 1.9? The exception that you report, is that from Flink's HA services trying to connect to ZK, or from the MapR FS client trying to c

Flink 1.9, MapR secure cluster, high availability

2019-08-27 Thread Maxim Parkachov
Hi everyone, I'm testing release 1.9 on a MapR secure cluster. I took the flink binaries from the download page and am trying to start a Yarn session cluster. All MapR-specific libraries and configs are added according to the documentation. When I start yarn-session without high availability, it uses zookeeper from

Re: Flink 1.9 build failed

2019-08-26 Thread Eliza
Hi, on 2019/8/27 11:35, Simon Su wrote: Could not resolve dependencies for project org.apache.flink:flink-s3-fs-hadoop:jar:1.9-SNAPSHOT: Could not find artifact org.apache.flink:flink-fs-hadoop-shaded:jar:tests:1.9-SNAPSHOT in maven-ali (http://maven.aliyun.com/nexus/content/groups/public/) A

Flink 1.9 build failed

2019-08-26 Thread Simon Su
Hi all, I'm trying to build the flink 1.9 release branch, and it raises an error like: Could not resolve dependencies for project org.apache.flink:flink-s3-fs-hadoop:jar:1.9-SNAPSHOT: Could not find artifact org.apache.flink:flink-fs-hadoop-shaded:jar:tests:1.9-SNAPSHOT in maven-ali (http

Re: Support priority of the Flink YARN application in Flink 1.9

2019-08-02 Thread tian boxiu
lines [1]. To propose a feature, you should open a Jira issue [2] and start a discussion there. Please note that the feature freeze for the Flink 1.9 release happened a few weeks ago. The community is currently working on fixing blockers and testing the release an

Re: Support priority of the Flink YARN application in Flink 1.9

2019-08-02 Thread Fabian Hueske
Hi Boxiu, This sounds like a good feature. Please have a look at our contribution guidelines [1]. To propose a feature, you should open a Jira issue [2] and start a discussion there. Please note that the feature freeze for the Flink 1.9 release happened a few weeks ago. The community is

Re: Support priority of the Flink YARN application in Flink 1.9

2019-07-31 Thread Xintong Song
To: user@flink.apache.org Title: Support priority of the Flink YARN application in Flink 1.9 Hello everyone, Many of our batch jobs have changed the execution engine from spark to flink. Flink is deployed on the yarn cluster. In some scenarios, high-priori

Support priority of the Flink YARN application in Flink 1.9

2019-07-31 Thread tian boxiu
To: user@flink.apache.org Title: Support priority of the Flink YARN application in Flink 1.9 Hello everyone, Many of our batch jobs have changed the execution engine from spark to flink. Flink is deployed on the yarn cluster. In some scenarios, high-priority core tasks need to be submitted

Re: Queryable state support in Flink 1.9

2019-04-16 Thread Boris Lublinsky
Thanks, that's it. Boris Lublinsky FDP Architect boris.lublin...@lightbend.com https://www.lightbend.com/ > On Apr 16, 2019, at 8:31 AM, Guowei Ma wrote: > > AbstractQueryableStateTestBase

Re: Queryable state support in Flink 1.9

2019-04-16 Thread Guowei Ma
ryablestate.client.proxy.KvStateClientProxyImpl - Started Queryable State Proxy Server @ /127.0.0. Best, Guowei Boris Lublinsky wrote on Monday, April 15, 2019 at 4:02 AM: >> I was testing with Flink 1.9. Here is how I set up the mini

Re: Queryable state support in Flink 1.9

2019-04-16 Thread Boris Lublinsky
1436 [main] INFO org.apache.flink.queryablestate.client.proxy.KvStateClientProxyImpl - Started Queryable State Proxy Server @ /127.0.0. Best, Guowei Boris Lublinsky <mailto:boris.lublin...@lightbend.com> wrote on Monday, April 15, 2019 at 4:02 AM

Re: Queryable state support in Flink 1.9

2019-04-14 Thread Guowei Ma
- Started Queryable State Proxy Server @ /127.0.0. Best, Guowei Boris Lublinsky wrote on Monday, April 15, 2019 at 4:02 AM: > I was testing with Flink 1.9. Here is how I set up the mini cluster: > int port = 6124; > int parallelism = 2; > Configuration config = new Configuration(); >

Queryable state support in Flink 1.9

2019-04-14 Thread Boris Lublinsky
I was testing with Flink 1.9. Here is how I set up the mini cluster:
int port = 6124;
int parallelism = 2;
Configuration config = new Configuration();
config.setInteger(JobManagerOptions.PORT, port);
config.setString
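The snippet is cut off; a sketch of how such a mini-cluster setup typically continues in 1.9 (an assumption, not the poster's actual code):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.JobManagerOptions;
import org.apache.flink.configuration.QueryableStateOptions;
import org.apache.flink.runtime.minicluster.MiniCluster;
import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;

public class QueryableStateMiniClusterSketch {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        config.setInteger(JobManagerOptions.PORT, 6124);
        // enable the queryable state proxy inside the mini cluster
        config.setBoolean(QueryableStateOptions.ENABLE_QUERYABLE_STATE_PROXY_SERVER, true);

        MiniCluster cluster = new MiniCluster(
            new MiniClusterConfiguration.Builder()
                .setConfiguration(config)
                .setNumTaskManagers(1)
                .setNumSlotsPerTaskManager(2)
                .build());
        cluster.start();
    }
}
```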