Hello,
I am running into a very basic problem while working with the Table API. I wish
to create a TableEnvironment connected to a remote environment that uses the
Blink planner in batch mode. Examples and documentation I have come across
so far recommend the following pattern to create such an environment.
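For reference, the commonly documented pattern looks roughly like this (a
sketch against the Flink 1.10 Table API, not the poster's exact code):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

EnvironmentSettings settings = EnvironmentSettings.newInstance()
    .useBlinkPlanner()   // select the Blink planner
    .inBatchMode()       // batch execution mode
    .build();
TableEnvironment tableEnv = TableEnvironment.create(settings);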
Hi Prasanna,
The job failed because it could not acquire enough slots to run its tasks.
Did you launch any task manager?
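(If not: in a standalone setup, an additional task manager can usually be
started with bin/taskmanager.sh start from the Flink distribution.)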
The JobNotFound exception is thrown because someone (possibly the Flink UI)
sends a query for a job that does not exist in the Flink cluster.
From the log you attached, the job id of your
Dear Tom,
This is likely a Scala version issue, you can check the following aspects:
1. Scala version of your development environment;
2. Whether there are multiple versions of Scala in your development environment;
3. Supported Scala versions for each dependency in the pom.xml file.
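For example, Flink's Scala-dependent artifacts carry a Scala suffix, and all
of them must agree: mixing flink-streaming-scala_2.11 with
flink-streaming-scala_2.12 (or with a 2.12 scala-library) in one pom.xml
typically surfaces as NoSuchMethodError or NoClassDefFoundError at runtime.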
Hi David,
I guess you could also "just" put an nginx in front of Flink's REST API that
controls access.
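For example (just a sketch of the idea, not a tested setup): an nginx reverse
proxy in front of the REST port (8081 by default) with auth_basic enabled,
plus location rules for paths such as /jobs, would give coarse-grained access
control without touching Flink itself.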
On Tue, Jun 2, 2020 at 7:05 PM Khachatryan Roman <
khachatryan.ro...@gmail.com> wrote:
> Hi David,
>
> One option is Ververica Platform which has a notion of Namespaces:
> https://docs.ververica.com/administration/namespaces.html
Hi,
I am running Flink locally on my machine with the following configuration.
# The RPC port where the JobManager is reachable.
jobmanager.rpc.port: 6123
# The heap size for the JobManager JVM
jobmanager.heap.size: 1024m
# The heap size for the TaskManager JVM
taskmanager.heap.size: 1024m
Dear Tom,
This is likely a Scala version issue.
Can you post your pom.xml?
Regards,
Roman
On Tue, Jun 2, 2020 at 6:34 PM Tom Burgert wrote:
> Dear all,
>
> I am trying to set up Flink and after hours I still fail to make a simple
> program run even though I follow every recommended step in the tutorials.
Hi David,
One option is Ververica Platform which has a notion of Namespaces:
https://docs.ververica.com/administration/namespaces.html
I guess Konstantin can tell you more about it.
Disclaimer: I work for a company that develops this product.
Regards,
Roman
On Tue, Jun 2, 2020 at 5:37 PM David
Dear all,
I am trying to set up Flink and after hours I still fail to make a simple
program run even though I follow every recommended step in the tutorials.
My operating system is OSX (therefore everything was installed via brew) and I
am using Maven as a build tool. I used the quick start sc
Hi Lec Ssmi,
It depends on the state backend you use.
The heap state backend needs either some state access to happen or some
records to be processed in the operator [1].
The RocksDB backend requires that the state size grows big enough (roughly
speaking) to trigger the compaction process that cleans up the expired state.
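For reference, a minimal sketch of a TTL configuration covering both cleanup
strategies (assuming Flink's StateTtlConfig API; the state name is
illustrative):

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.hours(1))            // entries expire 1h after the last write
    .cleanupFullSnapshot()                // heap backend: drop expired entries on full snapshots
    .cleanupInRocksdbCompactFilter(1000)  // RocksDB: drop expired entries during compaction
    .build();

ValueStateDescriptor<String> descriptor =
    new ValueStateDescriptor<>("my-state", String.class);
descriptor.enableTimeToLive(ttlConfig);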
Hi, not sure if this was discussed (from a brief search I couldn't find
anything), but I would like to know if there is an application that uses the
Flink REST API to provide some kind of user management, like allowing a
certain user to log in and manage some jobs running in Flink, limit the
parallelization
As we know, Flink can set a TTL config for state, so that
the state is automatically cleared after being idle for a period of
time.
But if there is no new record coming in after setting the TTL config,
will the state be automatically cleared after a certain time? Or does it
re
Robert,
Thank you once again! We are currently doing the “short” Thread.sleep()
approach. Seems to be working fine.
Cheers
Kumar
From: Robert Metzger
Date: Tuesday, June 2, 2020 at 2:40 AM
To: Senthil Kumar
Cc: "user@flink.apache.org"
Subject: Re: Age old stop vs cancel debate
Hi Kumar,
this is more a Java question than a Flink question now :)
I have a Flink job running on version 1.8.2 with a parallelism of 12. I took
a savepoint of the application on disk and it is approx 70GB. Now when
I run the application from this particular savepoint, checkpoints keep
failing and the app cannot restart. I am getting the following error:
K
Hi,
I have an event router registry, as shown below. By reading this as input I
need to create a job which redirects the messages to the correct sink as per
the condition.
{
  "eventRouterRegistry": [
    { "eventType": "billing", "outputTopic": "billing" },
    { "eventType": "cost", "outputTopic": "cost" }
Hi,
Can you check if there are any failures on the task manager mentioned in the
error message (ip-10-210-5-104.ap-south-1.compute.internal/10.210.5.104:42317)?
Regards,
Roman
On Tue, Jun 2, 2020 at 10:18 AM ApoorvK
wrote:
> I have a Flink job running on version 1.8.2 with a parallelism of 12. I took
> a savepoint of the application on disk and it is approx 70GB. Now when
Hi White,
Did you try to increase rest.client.max-content-length [1]?
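For example, in flink-conf.yaml (if I remember correctly, the default is
104857600 bytes, i.e. roughly 100 MB):

rest.client.max-content-length: 209715200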
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html#advanced-options-for-the-rest-endpoint-and-client
Regards,
Roman
On Mon, Jun 1, 2020 at 8:01 AM snack white wrote:
> Hi,
> When I usi
Hi Kumar,
this is more a Java question than a Flink question now :) If it is easily
possible from your code, then I would regularly check the isRunning flag
(by having short Thread.sleep()s) to have proper cancellation behavior.
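A minimal sketch of that pattern in a custom source (names are illustrative):

import org.apache.flink.streaming.api.functions.source.SourceFunction;

class PollingSource implements SourceFunction<String> {
    private volatile boolean isRunning = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (isRunning) {
            // ... fetch and emit a small batch of records via ctx.collect(...) ...
            Thread.sleep(50); // short sleep so cancel() is observed promptly
        }
    }

    @Override
    public void cancel() {
        isRunning = false; // run() exits on the next loop check
    }
}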
If this makes your code very complicated, then you could work with
I think "auto.create.topics.enable" is enabled by default [1]?
Best,
Jark
[1]: https://kafka.apache.org/documentation/#auto.create.topics.enable
On Mon, 1 Jun 2020 at 19:55, Leonard Xu wrote:
> I think @brat is right, I didn't know the Kafka property
> 'auto.create.topics.enable', you can pa
I'm not 100% sure about this answer, that's why I'm CCing Aljoscha to
correct me if needed:
Partitioners are not regular operators (like a map or window), thus they
are not included in the regular Task lifecycle methods (open() / map()
etc. / close()) with the proper error handling, task cancellation
1) It downloads all archives and stores them on disk; the only thing
stored in memory is the job ID of the archive. There is no hard upper
limit; it is mostly constrained by disk space / memory. I say mostly
because I'm not sure how well the WebUI handles 100k jobs being loaded
into the overview