Can we make a judgment based on the fields in the status?
--
Best,
Hjw
CURRENT ROW FROM product
--
Best,
Hjw
At 2023-04-04 12:23:26, "Shammon FY" wrote:
Hi Hjw,
To rescale data for the dim join, I think you can use `partition by` in SQL
before the `dim join`, which will redistribute data by a specific column. In
addition, you can add a cache for the dim table (a sketch follows below).
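For illustration, here is a minimal sketch of a lookup join with the JDBC
connector's lookup cache enabled; every table name, field, topic, and
connection setting below is an assumption, not something from this thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class LookupJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Fact stream from Kafka; topic and servers are placeholders.
        tEnv.executeSql(
            "CREATE TABLE orders ("
                + " order_id STRING,"
                + " product_id STRING,"
                + " proc_time AS PROCTIME()"
                + ") WITH ("
                + " 'connector' = 'kafka',"
                + " 'topic' = 'orders',"
                + " 'properties.bootstrap.servers' = 'kafka:9092',"
                + " 'scan.startup.mode' = 'latest-offset',"
                + " 'format' = 'json')");

        // Dim table over JDBC with a lookup cache to cut database hits.
        tEnv.executeSql(
            "CREATE TABLE product ("
                + " product_id STRING,"
                + " name STRING"
                + ") WITH ("
                + " 'connector' = 'jdbc',"
                + " 'url' = 'jdbc:mysql://db:3306/shop',"
                + " 'table-name' = 'product',"
                + " 'lookup.cache.max-rows' = '10000',"
                + " 'lookup.cache.ttl' = '10min')");

        // Lookup (dim) join on the processing-time attribute.
        tEnv.executeSql(
            "SELECT o.order_id, p.name"
                + " FROM orders AS o"
                + " JOIN product FOR SYSTEM_TIME AS OF o.proc_time AS p"
                + " ON o.product_id = p.product_id")
            .print();
    }
}

On Flink 1.15 the lookup cache is configured per connector; for JDBC the
relevant options are `lookup.cache.max-rows` and `lookup.cache.ttl`.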
, so only one
subtask in the Lookup join operator can receive data. I want to set the
relationship between the Kafka source and the Lookup join operator to rebalance
so that all subtasks in the Lookup join operator can receive data.
Env:
Flink version: 1.15.1
--
Best,
Hjw
- name: TZ
  value: Asia/Shanghai
jobManager:
  resource:
    memory: "2048m"
    cpu: 1
taskManager:
  resource:
    memory: "2048m"
    cpu: 1
job:
  jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
  parallelism: 2
  upgradeMode: stateless
--
Best,
Hjw
Mode: Flink Kubernetes Operator 1.2.0 (Application Mode)
--
Best,
Hjw
Env:
Flink version: 1.14.4
--
Best,
Hjw
Maven repository: http://maven.aliyun.com/nexus/content/groups/public/
(enabled: false, false; updatePolicy: never)
--
Best,
Hjw
or in the production environment?
--
Best,
Hjw
Hi,
The yarn.classpath.include-user-jar parameter is shown as the
yarn.per-job-cluster.include-user-jar parameter in Flink 1.14.
I have tried DISABLED, FIRST, LAST, and ORDER, but the error still happened.
Best,
Hjw
version: flink 1.14.0
Best,
Hjw
Thanks, everyone. I will increase the parallelism to solve the
problem. Besides, I am looking forward to support for remote state.
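For what it's worth, a minimal sketch of raising the parallelism (the value 4
and the job body are illustrative assumptions):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // Raise the job-wide default parallelism.
        env.setParallelism(4);
        env.fromElements("a", "b", "c")
            .map(value -> value.toUpperCase())
            .returns(String.class)
            .setParallelism(4) // parallelism can also be set per operator
            .print();
        env.execute("parallelism-sketch");
    }
}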
Best,
Hjw
Hi Alexander,
When a Flink job is deployed on native K8s, the TaskManager is a Pod. The data
directory size of a single container is limited in our company. Are there any
ideas to deal with this?
Best,
Hjw
should take some measures to
deal with it (mount a volume for the local data directories that store the
RocksDB database, etc.).
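For example, assuming a volume is already mounted into the TaskManager pod at
/data/rocksdb (a placeholder path), the RocksDB working directories could be
pointed at it like this:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbLocalDirSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Use RocksDB and keep its working files on the mounted volume.
        conf.setString("state.backend", "rocksdb");
        conf.setString("state.backend.rocksdb.localdir", "/data/rocksdb");
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.fromElements(1, 2, 3).print(); // stand-in for the real job
        env.execute("rocksdb-localdir-sketch");
    }
}

With the Kubernetes operator, the same keys can instead go under
spec.flinkConfiguration in the FlinkDeployment, and the volume itself is
declared in the pod template.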
thx.
Best,
Hjw
Hi Matthias,
I have solved this problem as you said. The link to the PR is [1]. Thank you.
[1] https://github.com/apache/flink/pull/20671
Best,
Hjw
repo. My fork repo status: This
branch is 4 commits ahead, 11 commits behind apache:release-1.15.
When I rebase the branch from upstream and push to my fork repo, the 11
commits behind apache:release-1.15 also appear in my PR's changed files.
How can I handle this?
I committed a PR to the Flink GitHub repo.
An error happened in the CI build:
[error]The job running on agent Azure Pipelines 6 ran longer than the maximum
time of 310 minutes. For more information, see
https://go.microsoft.com/fwlink/?linkid=2077134
How can I solve this problem?
How can I trigger the CI build again?
When I run mvn clean install, it will run the Flink test cases.
However, I get an error:
[ERROR] Failures:
[ERROR]
KubernetesClusterDescriptorTest.testDeployApplicationClusterWithNonLocalSchema:155
Previous method call should have failed but it returned:
org.apache.flink.kubernetes.KubernetesCluste
I tried to run mvn clean install on the Flink 1.15 parent, but it fails.
An error happened while compiling flink-clients.
Error Log:
Failed to execute goal
org.apache.maven.plugins:maven-assembly-plugin:2.4:single
(create-test-dependency) on project flink-clients: Error reading assemblies:
Error locating assembly d
enters restarting.
It is successful to use the savepoint command alone.
Error log:
13:33:42.857 [Kafka Fetcher for Source: nlp-kafka-source -> nlp-clean
(1/1)#0] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer
clientId=consumer-hjw-3, groupId=hjw] Kafka consumer has been closed
The Flink 1.14 documentation says the Flink 1.14 Kafka connector is only
backwards compatible with broker versions 0.10.0 or later.
Unfortunately, I have to use Flink 1.14 to consume from Kafka 0.9. How can
I do it? Thanks.
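For reference, this is the unified KafkaSource the Flink 1.14 documentation is
describing (broker address, topic, and group id below are placeholders); it
assumes brokers 0.10.0 or newer, so it cannot simply be pointed at a 0.9
cluster:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // Flink 1.14 Kafka connector; requires broker version >= 0.10.0.
        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("kafka:9092")
            .setTopics("events")
            .setGroupId("hjw")
            .setStartingOffsets(OffsetsInitializer.earliest())
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
            .print();
        env.execute("kafka-source-sketch");
    }
}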