Thanks Yang for your help.
On Thu, Feb 4, 2021, 8:28 AM Yang Wang wrote:
> Yes, if you are using the CLI(e.g. flink run/list/cancel -t yarn-session
> ...) for the job management,
> it will eventually call the RestClusterClient, which could retrieve the
> leader JobManager address from ZK.
>
> Pl
Oh, I found the solution. I simply need to not use the TRACE log level for
Flink.
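For anyone searching later, a minimal sketch of what that looks like in the
log4j2-style log4j.properties shipped with recent Flink distributions (the
logger name below is an assumption; adjust it to whichever packages had been
set to TRACE):

logger.flink.name = org.apache.flink
logger.flink.level = INFO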
On Wed, Feb 3, 2021 at 7:07 PM Marco Villalobos
wrote:
>
> Please advise me. I don't know what I am doing wrong.
>
> After I added the blink table planner to my dependency management:
>
> dependency
> "org.apache.fl
Please advise me. I don't know what I am doing wrong.
After I added the blink table planner to my dependency management:
dependency "org.apache.flink:flink-table-planner-blink_${scalaVersion}:${flinkVersion}"
and added it as a dependency:
implementation "org.apache.flink:flink-table-planner-
Yes, if you are using the CLI(e.g. flink run/list/cancel -t yarn-session
...) for the job management,
it will eventually call the RestClusterClient, which could retrieve the
leader JobManager address from ZK.
Please ensure that you have specified the HA related config options in CLI
via -D or set
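As an illustration, a sketch of spelling those options out on the command line
(the quorum address and cluster-id are placeholders; -Dyarn.application.id may
also be needed to target the right session):

./bin/flink list -t yarn-session \
    -Dhigh-availability=zookeeper \
    -Dhigh-availability.zookeeper.quorum=zk-nlb.example.com:2181 \
    -Dhigh-availability.cluster-id=/my-session-cluster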
Hi Wang Li,
The application mode was introduced in release 1.11 and has replaced the old
StandaloneJobClusterEntrypoint.
By default, if you enable HA, you will get a ZERO_JOB_ID; otherwise it will
be a random UUID.
For standalone application mode, you could use the "./bin/standalone-job.
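For illustration, a sketch of fixing the job id in standalone application
mode, assuming your version's entrypoint accepts a --job-id argument (the main
class below is made up):

./bin/standalone-job.sh start \
    --job-classname com.example.MyJob \
    --job-id 00000000000000000000000000000001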
Hi Robert,
After checking the JarRunHandler implementation, I think your requirement
could be achieved with the following steps.
1. Use an init container to download the user jars, or directly bake the jars
into the image, under the path /path/of/flink-jars/flink-web-upload
2. Set the Flink configuration option "we
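For reference, a sketch of the corresponding flink-conf.yaml entry, assuming
the truncated option above is web.upload.dir (the REST endpoint looks for jars
in a flink-web-upload subdirectory beneath it, which matches the path in step 1):

web.upload.dir: /path/of/flink-jars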
Hi team,
We're running Flink jobs in application mode. Pre Flink 1.7, the job id by
default is ``. However, in Flink 1.11, we found the job id is random. Is there
a way to set the job id, or can we only retrieve it ourselves each time? Thanks.
- Li
I have a Kubernetes cluster with Flink running in Session Mode.
Is there a way to drop the jar file into a folder and/or add it to the
Docker image?
--
Robert Cullen
240-475-4490
Hi Till,
thanks for the hint. I checked it and found a version conflict with flink-parquet.
With this version it is running:
<dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-avro</artifactId>
    <version>1.10.0</version>
</dependency>
But how can I avoid this in the future? I had to add parquet-avro, because
without it there were some errors. Do I have t
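One way to spot such conflicts before they bite is to inspect the dependency
tree, e.g. with Maven:

mvn dependency:tree -Dincludes=org.apache.parquet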
Hey guys,
I'm pretty new to Flink, and I hope I can get some help with getting data out
of a Flink cluster.
I've setup the cluster by following the steps in
https://github.com/ververica/sql-training and now I wanted to retrieve the
data from the Rides table in a Scala program, using the TableAPI. Th
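For what it's worth, a minimal sketch of reading a registered table and
printing it via the bridge API (shown in Java for brevity; it assumes a table
named Rides is already registered in the TableEnvironment, as in the
sql-training setup):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class ReadRides {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // assumes "Rides" was registered beforehand (catalog, CREATE TABLE, ...)
        Table rides = tEnv.sqlQuery("SELECT * FROM Rides");

        // append-only conversion; use toRetractStream if the query produces updates
        tEnv.toAppendStream(rides, Row.class).print();

        env.execute("read-rides");
    }
}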
Hi Timo,
The problem with this is that I would still have to determine the keys
manually, which is not really feasible in my case. Is there any internal API that
might be of use to extract this information?
On Wed, Feb 3, 2021 at 5:19 PM Timo Walther wrote:
> Hi Yuval,
>
> we changed this behavior a
Please make sure the client and server versions are in sync.
On 2/3/2021 4:12 PM, sidhant gupta wrote:
I am getting following error while running the below command with the
attached conf/flink-conf.yaml:
bin/flink run -c firstflinkpackage.someJob ../somejob.jar arg1 arg2 arg3
2021-02-03 15:04
Hi Yuval,
we changed this behavior a bit to be more SQL compliant. Currently,
sinks must be explicitly defined with a PRIMARY KEY constraint. We had
discussions about implicit sinks, but nothing on the roadmap yet. The
`CREATE TEMPORARY TABLE LIKE` clause should make it easy to extend the
ori
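For example, a sketch of what that can look like (table and column names are
made up; by default the LIKE clause copies the schema and options of the
original table):

CREATE TEMPORARY TABLE my_sink_with_pk (
    PRIMARY KEY (user_id) NOT ENFORCED
) LIKE my_sink;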
Hi Yuval,
yes this is rather a bug. If we support VARCHAR here we should also
support CHAR. Feel free to open an issue.
Regards,
Timo
On 03.02.21 11:46, Yuval Itzchakov wrote:
I can understand that in some sense it's nonsensical to MAX on a CHAR,
since Blink will only determine a CHAR when t
I am getting following error while running the below command with the
attached conf/flink-conf.yaml:
bin/flink run -c firstflinkpackage.someJob ../somejob.jar arg1 arg2 arg3
2021-02-03 15:04:24,113 INFO org.apache.flink.runtime.dispatcher.
StandaloneDispatcher [] - Received JobGraph submission
9
Hi Gordon,
Thank you very much for the detailed response.
I considered using the state processor API; however, our enrichment
requirements make the state processor API a bit inconvenient.
1. If an element from the stream matches a record in the database then it can
remain in the cache a v
Thanks everyone for the responses.
I tried out the jemalloc suggestion from FLINK-19125 using a patched 1.11.3
image and so far it appears to be working well. I see it's included in 1.12.1
and Docker images are available, so I'll look at upgrading too.
Best regards,
Randal.
Thanks for reaching out to the Flink community. I will respond on the JIRA
ticket.
Cheers,
Till
On Wed, Feb 3, 2021 at 1:59 PM simpleusr wrote:
> Hi
>
> I am trying to upgrade from 1.5.5 to 1.12 and checkpointing mechanism seems
> to be broken in our kafka connector sourced datastream jobs.
>
>
Is it possible to use the Flink CLI instead of the Flink client to connect to
ZooKeeper through a network load balancer and retrieve the leader JobManager
address?
On Wed, Feb 3, 2021, 12:42 PM Yang Wang wrote:
> I think the Flink client could make a connection with ZooKeeper via the
> network load balancer
From these snippets it is hard to tell what's going wrong. Could you maybe
give us a minimal example with which to reproduce the problem?
Alternatively, have you read through Flink's serializer documentation [1]?
Have you tried to use a simple POJO instead of inheriting from a HashMap?
The stack
Hi,
I'm reworking an existing UpsertStreamTableSink into the new
DynamicTableSink API. In the previous API, one would get the unique keys
for upsert queries via the `setKeyFields` method, which would calculate
them based on the grouping keys during the translation phase.
Searching around, I saw th
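In case it helps, a sketch of how the declared PRIMARY KEY can be read under
the new stack, against the 1.12 factory APIs (the sink class is made up and
the other required factory methods are omitted):

import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.factories.DynamicTableSinkFactory;

public class MyUpsertSinkFactory implements DynamicTableSinkFactory {
    @Override
    public DynamicTableSink createDynamicTableSink(Context context) {
        TableSchema schema = context.getCatalogTable().getSchema();
        // getPrimaryKey() returns Optional<UniqueConstraint>
        String[] keyFields = schema.getPrimaryKey()
                .map(pk -> pk.getColumns().toArray(new String[0]))
                .orElse(new String[0]);
        return new MyUpsertSink(keyFields);   // MyUpsertSink is hypothetical
    }
    // factoryIdentifier(), requiredOptions(), optionalOptions() omitted
}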
Hi
I am trying to upgrade from 1.5.5 to 1.12, and the checkpointing mechanism seems
to be broken in our Kafka-connector-sourced DataStream jobs.
Since there is a significant version gap and there are many backwards-incompatible
/ deprecated changes in the Flink runtime between versions, I had
to modify o
Some facts are possibly related, since another job does not encounter these
exceptions.
The problem job uses a class which contains a field of class MapRecord, and
MapRecord is defined to extend HashMap so as to accept variable JSON data.
Class MapRecord:
@NoArgsConstructor
@Slf4j
public cla
I can understand that in some sense it's nonsensical to MAX on a CHAR,
since Blink will only determine a CHAR when there's a constant in the SQL,
but I was surprised that it didn't work with just an identity
implementation.
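To illustrate the asymmetry being discussed (table and column names made up):

SELECT MAX(name)  FROM t;   -- name is VARCHAR: supported
SELECT MAX('abc') FROM t;   -- the literal is typed as CHAR(3): rejected by the Blink planner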
On Wed, Feb 3, 2021 at 12:33 PM Till Rohrmann wrote:
> Thanks for reachi
Actually the exception is different every time I stop the job.
Such as:
(1) com.esotericsoftware.kryo.KryoException: Unable to find class: g^XT
The stack is as I gave above.
(2) java.lang.IndexOutOfBoundsException: Index: 46, Size: 17
2021-02-03 18:37:24
java.lang.IndexOutOfBoundsException: Index: 4
Hi Jan,
it looks to me as if you might have conflicting parquet-avro dependencies on
your classpath. Could you make sure that you don't have different versions
of the library on your classpath?
Cheers,
Till
On Tue, Feb 2, 2021 at 3:29 PM Jan Oelschlegel <
oelschle...@integration-factory.de> wrote:
Thanks for reaching out to the Flink community, Yuval. I am pulling in Timo
and Jark who might be able to answer this question. From what I can tell,
it looks a bit like an oversight because VARCHAR is also supported.
Cheers,
Till
On Tue, Feb 2, 2021 at 6:12 PM Yuval Itzchakov wrote:
> Hi,
> I'm
Hi,
could you show us the job you are trying to resume? Is it a SQL job or a
DataStream job, for example?
From the stack trace, it looks as if the class g^XT is not on the class
path.
Cheers,
Till
On Wed, Feb 3, 2021 at 10:30 AM 赵一旦 wrote:
> I have a job, the checkpoint and savepoint all rig
I have a job whose checkpoints and savepoints all work fine.
But if I stop the job using 'stop -p', then after the savepoint is generated
the job fails. Here is the log:
2021-02-03 16:53:55,179 WARN org.apache.flink.runtime.taskmanager.Task
[] - ual_ft_uid_subid_SidIncludeFilter ->
Sure.
https://github.com/apache/flink/tree/master/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint
https://github.com/apache/flink/tree/master/flink-runtime/src/main/java/org/apache/flink/runtime/zookeeper
https://github.com/apache/flink/tree/master/flink-runtime/src/main/java/org
Hi Randal,
Please consider using jemalloc instead of glibc as the default memory allocator
[1] to avoid memory fragmentation. As far as I know, at least two groups of
users, who run Flink on YARN and K8s respectively, have reported a similar
problem where memory keeps growing after restarts [2].
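For older images, a sketch of what such a patched image can look like (the
package name and library path are Debian-specific assumptions; as noted
elsewhere in this digest, the official images from 1.12.1 on already include
jemalloc):

FROM flink:1.11.3-scala_2.12
RUN apt-get update && \
    apt-get install -y --no-install-recommends libjemalloc2 && \
    rm -rf /var/lib/apt/lists/*
# preload jemalloc so it replaces glibc malloc for the Flink processes
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2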