Check the connection status of the new TaskManager instances. If the connection to the
JobManager is normal, check the available slots, and check whether connections are
created when the function initializes.
> On Nov 11, 2021, at 18:01, Xiangyu Su wrote:
>
> Thank you Jake!
> Enable debug level logging have to ask sys
Set the root log level to DEBUG and check the JobManager logs; you will find it there.
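For example, in Flink's conf/log4j.properties (a minimal sketch; only the root level line changes, the rest of the default configuration stays as-is):
```
# conf/log4j.properties (Log4j 2 properties format): raise the root logger to DEBUG
rootLogger.level = DEBUG
```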
> On Nov 11, 2021, at 17:02, Xiangyu Su wrote:
>
> Hello Everyone,
>
> We are facing an issue on Flink 1.14.0.
> Every time the job gets restarted, some tasks/slots get stuck in the
> INITIALIZING state and will never
Hi,
You can use it like this:
```scala
val calciteParser = new CalciteParser(SqlUtil.getSqlParserConfig(tableEnv.getConfig))
sqlArr.foreach(item => {
  println(item)
  val itemNode = calciteParser.parse(item)
  itemNode match {
    case sqlSet: SqlSet =>
      // handle SET statements here
    case _ =>
  }
})
```
Hi, you can do it like this:
```scala
val statementSet = tableEnv.createStatementSet()
val insertSqlBuffer = ListBuffer.empty[String]
val calciteParser = new CalciteParser(SqlUtil.getSqlParserConfig(tableEnv.getConfig))
sqlArr.foreach(item => {
  println(item)
  val itemNode = calciteParser.parse(item)
  // dispatch on the parsed node type here (INSERT statements go into insertSqlBuffer)
})
```
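A possible continuation, sketched under the same assumed names (insertSqlBuffer collects the INSERT statements parsed above):
```scala
// add the collected INSERT statements to the StatementSet and submit them as one job
insertSqlBuffer.foreach(sql => statementSet.addInsertSql(sql))
statementSet.execute()
```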
Hi
When I submit a job in yarn-cluster mode, the Flink job finishes but the YARN application does not exit.
What should I do?
Submit command:
/opt/app/flink/flink-1.14.0/bin/flink run -m yarn-cluster
./flink-sql-client.jar --file dev.sql
Hi igyu,
You can submit the job with arguments like this:
```
-m yarn-cluster \
-yqu root.realtime \
-ynm "test" \
-yjm 2g \
-ytm 2g \
-n \
-d \
-ys 1 \
-yD security.kerberos.login.principal=xxx...@x.com \
-yD security.kerberos.login.keytab=/tmp/xx.keytab \
...
```
> On May 27, 2021,
Hi vtygoss,
You can check out the official demo [1]:
```scala
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

val settings = EnvironmentSettings
  .newInstance()
  //.inStreamingMode()
  .inBatchMode()
  .build()
val tEnv = TableEnvironment.create(settings)
```
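For example (a small sketch, not from the original mail; the values and view name are placeholders), you can then run a query on the batch environment:
```scala
// a hypothetical inline table, only to show the call pattern
val data = tEnv.fromValues(1, 2, 3)
tEnv.createTemporaryView("src", data)
tEnv.executeSql("SELECT * FROM src").print()
```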
Regards,
Hi zheng,
It seems the data is large; resizing the akka framesize will not work.
You can increase the parallelism.
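For example (a sketch, assuming a DataStream job with a StreamExecutionEnvironment named env; the value 120 is only a placeholder):
```scala
// raise the default parallelism for the whole job
env.setParallelism(120)
// or raise it only on the heavy operator by calling .setParallelism(...) on that operator
```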
Jake.
> On Sep 15, 2020, at 5:58 PM, zheng faaron wrote:
>
> Hi Zhu,
>
> It's just a mistake in the mail. It seems increasing akka.framesize does not work
Hi fanchao
Yes. I suggest that.
Jake
> On Aug 25, 2020, at 11:20 AM, 范超 wrote:
>
> Thanks Jake. But I just want to implement the output-tag function in my
> flatMap function, not in the process function. I checked the parameters of
> flatMap; there is no 'context', so I
Hi fanchao
Use side output, see [1].
[1]https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/side_output.html
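A sketch of the side-output pattern from [1] (assuming mainStream is a DataStream[String]; the tag name and filter condition are placeholders):
```scala
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector

// hypothetical tag; side outputs need the Context of a ProcessFunction
val rejectedTag = OutputTag[String]("rejected")

val processed = mainStream.process(new ProcessFunction[String, String] {
  override def processElement(value: String,
                              ctx: ProcessFunction[String, String]#Context,
                              out: Collector[String]): Unit = {
    if (value.startsWith("bad")) ctx.output(rejectedTag, value) // route to side output
    else out.collect(value)                                     // normal output
  }
})

val rejected: DataStream[String] = processed.getSideOutput(rejectedTag)
```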
Jake
> On Aug 25, 2020, at 10:54 AM, 范超 wrote:
>
> Hi,
Hi Mason
Can you use a JVM CPU performance analysis tool?
For example JProfiler or https://github.com/alibaba/arthas
You can probably find the reason for the high CPU load.
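For example (a sketch), attach arthas to the TaskManager JVM and list the busiest threads:
```
# download and attach arthas (pick the TaskManager JVM when prompted)
curl -O https://arthas.aliyun.com/arthas-boot.jar
java -jar arthas-boot.jar
# inside the arthas console: show the top 5 busiest threads with their stack traces
thread -n 5
```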
Jake
> On Aug 6, 2020, at 12:25 PM, Chen, Mason wrote:
>
> Thanks Pete
Hi 魏子涵,
The code IDEA decompiles does not match the Java source code; you can download the
Java sources in IDEA.
/Volumes/work/maven_repository/org/apache/flink/flink-runtime_2.11/1.10.1/flink-runtime_2.11-1.10.1-sources.jar!/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImpl.java
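If the sources jar is not in your local repository yet, one way (a sketch) is to fetch the -sources jars with Maven and let IDEA attach them:
```
# download the -sources jars for all dependencies into the local Maven repository
mvn dependency:sources
```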
Jake
> On
Hi Suraj Puvvada,
Yes, Flink backpressure relies on the task buffers: the processing task reports its
remaining buffer size back upstream, and the source slows down.
https://www.ververica.com/blog/how-flink-handles-backpressure
Jake
https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/yarn_setup.html
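As a sketch (the jar name, paths, and arguments are placeholders), program arguments simply go after the jar path, and -d starts the job in detached mode:
```
./bin/flink run -m yarn-cluster -d \
  ./my-job.jar --input hdfs:///path/in --output hdfs:///path/out
```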
Jake
> On Jul 27, 2020, at 6:19 PM, 范超 wrote:
>
> Hi, Flink community
>
> I'm a starter with Flink and don't know how to pass parameters to my jar
> file, where I want to start the job in detached mode on the YARN cluster.
Hi Mu Kong,
Yes, you need to check your Kafka cluster server logs, network traffic, disk
latency, and CPU load.
Jake
> On Jul 22, 2020, at 7:34 PM, Till Rohrmann wrote:
>
> Hi Mu Kong,
>
> I think Jake was asking for the logs of your Kafka cluster and not the Flink
> TM logs.
We need some Flink Kafka consumer logs and Kafka server logs!
> On Jul 20, 2020, at 5:45 PM, Mu Kong wrote:
>
> Hi, community
>
> I have a flink application consuming from a kafka topic with 60 partitions.
> The parallelism of the source is set to 60, same with the topic partition
> number.
> The