Hi,
For the initial DB fetch and state bootstrapping:
That's exactly what the State Processor API is for. Have you looked at it
already?
It currently does support bootstrapping broadcast state [1], so that should
be good news for you.
As a side note (and I may be missing something): is broadcast state...
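
For illustration, a minimal sketch of bootstrapping broadcast state with the State Processor API could look like the following. It assumes the Flink 1.10, DataSet-based flavor of the API; the descriptor name, key/value types, operator uid and savepoint path are placeholders, and the fromElements source stands in for the initial DB fetch.

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.BroadcastStateBootstrapFunction;

public class BroadcastStateBootstrap {

    // Must match the descriptor used by the streaming job that will restore this savepoint.
    static final MapStateDescriptor<String, String> DESCRIPTOR =
            new MapStateDescriptor<>("bootstrapped-config", String.class, String.class);

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the initial DB fetch (e.g. rows read with a JDBC input format).
        DataSet<Tuple2<String, String>> rows =
                env.fromElements(Tuple2.of("key-1", "value-1"), Tuple2.of("key-2", "value-2"));

        BootstrapTransformation<Tuple2<String, String>> bootstrap =
                OperatorTransformation
                        .bootstrapWith(rows)
                        .transform(new BroadcastStateBootstrapFunction<Tuple2<String, String>>() {
                            @Override
                            public void processElement(Tuple2<String, String> row, Context ctx) throws Exception {
                                ctx.getBroadcastState(DESCRIPTOR).put(row.f0, row.f1);
                            }
                        });

        Savepoint
                .create(new MemoryStateBackend(), 128)              // max parallelism of the target job
                .withOperator("broadcast-operator-uid", bootstrap)  // uid of the broadcast operator
                .write("file:///tmp/bootstrap-savepoint");

        env.execute("bootstrap broadcast state");
    }
}

The streaming job is then started from the written savepoint with a broadcast operator whose uid and MapStateDescriptor match the ones used above.
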
I second Yangze's suggestion. You need to get the JobManager log first; then
it will be easier to find the root cause. I know that it is not convenient
for users to access the log via kubectl, and we already have a ticket for this [1].
Usually, the reason that the Flink ResourceManager could not allocate...
Hi Satyam,
I also ran into the same issue when integrating Flink with Zeppelin. Here's
what I did.
https://github.com/apache/zeppelin/blob/master/flink/src/main/java/org/apache/zeppelin/flink/TableEnvFactory.java#L226
If you are interested in Flink on Zeppelin, you can refer to the following
blogs and...
Amend: for release 1.10.1, please refer to this guide [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/native_kubernetes.html#log-files
Best,
Yangze Guo
On Thu, Jun 4, 2020 at 9:52 AM Yangze Guo wrote:
>
> Hi, Kevin,
>
> Regarding logs, you could follow this
Hi, Kevin,
Regarding logs, you could follow this guide [1].
BTW, you could execute "kubectl get pod" to list the current pods. If
there is something like "flink-taskmanager-1-1", you could execute
"kubectl describe pod flink-taskmanager-1-1" to see its status.
[1]
https://ci.apache.org/pro
Hi,
We are using 1.10.1 with native K8s, and while the service appears to be
created and I can submit a job and see it via the Web UI, the TaskManager
pods are never created, so the jobs never start.
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
Could not allocate the required slot with...
Thanks, Jark & Godfrey.
The workaround was successful.
I have created the following ticket to track the issue -
https://issues.apache.org/jira/browse/FLINK-18095
Regards,
Satyam
On Wed, Jun 3, 2020 at 3:26 AM Jark Wu wrote:
> Hi Satyam,
>
> In the long term, TableEnvironment is the entry point...
Hi,
My current application makes use of a DynamoDB database to map a key to a
value. As each record enters the system, the async I/O operator calls this DB and
requests a value for the key; if that value doesn't exist, a new value is generated
and inserted. I have managed to do all of this in one update...
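
A rough sketch of this get-or-create pattern with Flink's async I/O is below; DynamoDbLookupClient and getOrCreate are hypothetical stand-ins for the actual DynamoDB calls, not real AWS SDK methods.

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class KeyToValueAsyncLookup extends RichAsyncFunction<String, String> {

    /** Hypothetical stand-in for the real DynamoDB access code. */
    static class DynamoDbLookupClient {
        String getOrCreate(String key) {
            // A real implementation would issue the "get, else generate and insert"
            // request against DynamoDB here, ideally as a single conditional update.
            return "value-for-" + key;
        }
    }

    private transient DynamoDbLookupClient client;

    @Override
    public void open(Configuration parameters) {
        client = new DynamoDbLookupClient();   // created on the TaskManager, never serialized
    }

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        CompletableFuture
                .supplyAsync(() -> client.getOrCreate(key))
                .thenAccept(value -> resultFuture.complete(Collections.singleton(value)));
    }

    @Override
    public void timeout(String key, ResultFuture<String> resultFuture) {
        resultFuture.complete(Collections.emptyList());
    }

    // Wiring it into a pipeline:
    public static DataStream<String> enrich(DataStream<String> keys) {
        return AsyncDataStream.unorderedWait(keys, new KeyToValueAsyncLookup(), 5, TimeUnit.SECONDS, 100);
    }
}
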
Piotr and Alexander,
I have fixed the programming error in the filter method and it is working now.
Thanks for the detailed help from both of you. I am able to add the sinks
based on the JSON and create the DAG.
Thanks,
Prasanna.
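
For readers following the thread, a sketch of what wiring sinks from a JSON description might look like; SinkConfig, Event and the print sink are hypothetical placeholders, not Prasanna's actual code.

import java.io.Serializable;
import java.util.List;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;

public class DynamicSinkWiring {

    /** Hypothetical event type flowing through the pipeline. */
    public static class Event implements Serializable {
        public String type;
        public String payload;
    }

    /** Hypothetical per-sink entry parsed from the JSON configuration. */
    public static class SinkConfig implements Serializable {
        public String eventType;   // events of this type are routed to the sink
        public String sinkName;    // operator name for the sink
    }

    /** Adds one filtered sink per entry of the parsed JSON configuration. */
    public static void wireSinks(DataStream<Event> events, List<SinkConfig> sinkConfigs) {
        for (SinkConfig config : sinkConfigs) {
            final String eventType = config.eventType;   // capture only a String so the lambda stays serializable
            events
                    .filter(event -> eventType.equals(event.type))
                    .addSink(new PrintSinkFunction<>())  // placeholder; a real job would pick a sink per config
                    .name(config.sinkName);
        }
    }
}
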
On Wed, Jun 3, 2020 at 4:51 PM Piotr Nowojski wrote:
> Hi Prasanna,
>
>
I am afraid that it does not work in the scenario where I am trying to deploy
the QEP. Let me explain.
I have 4 machines with 8 cores each. I want to set the parallelism for
all operators to 16. My QEP is:
source(1)->map(16)->flatmap(16)-keyBy->reduce(16)
So I would like to have 8 maps and 8 flatmaps...
Hi, Felipe
sorry for the late reply.
You can try setting taskmanager.numberOfTaskSlots to 1 and using different
slotSharingGroup values to make sure tasks are not placed in the same TM.
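
For illustration, the suggestion could translate into something like the sketch below; the operators, group names and parallelism values are placeholders, and taskmanager.numberOfTaskSlots: 1 is assumed to be set in flink-conf.yaml.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> source = env.fromElements("a", "b", "c");  // placeholder source, parallelism 1

        DataStream<String> mapped = source
                .map(value -> value.toUpperCase())
                .returns(String.class)
                .setParallelism(16)
                .slotSharingGroup("map-group");       // map subtasks get slots from their own group

        mapped
                .map(value -> value.toLowerCase())    // stands in for the flatMap/reduce stages
                .returns(String.class)
                .setParallelism(16)
                .slotSharingGroup("other-group")      // different group, so it does not share slots with "map-group"
                .print();

        // With taskmanager.numberOfTaskSlots: 1, each slot lives on its own TaskManager,
        // so subtasks from different groups cannot be co-located on the same TM.
        env.execute("slot sharing group sketch");
    }
}
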
Best
Weihua Hu
> On May 29, 2020, at 17:07, Felipe Gutierrez wrote:
>
> Using slotSharingGroup I can do some placement. However, I am using...
If you use IntelliJ IDEA to compile and run directly, you need to set the Scala
version of your project in IntelliJ.
At 2020-06-03 18:58:36, "Tom Burgert" wrote:
Thanks for the reply and the ideas. Does the Scala version in e.g. IntelliJ have
an impact on the program running, when...
Hi Prasanna,
1.
> The object probably contains or references non serializable fields.
That should speak for itself: Flink was not able to distribute your code to the
worker nodes.
You have used a lambda function that turned out to be non-serializable. You
should unit test your code and in the...
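
To make the point concrete, one common shape of this problem and its usual fix is sketched below; GrpcClient is just a hypothetical non-serializable dependency.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

/** Hypothetical non-serializable dependency (e.g. a network client). */
class GrpcClient {
    String lookup(String key) {
        return "value-for-" + key;
    }
}

public class SerializableMapper extends RichMapFunction<String, String> {

    // Problematic version (do NOT do this): a lambda or anonymous function that captures
    // a GrpcClient field from the enclosing class drags the whole enclosing object into
    // its closure, and job submission fails with "The object probably contains or
    // references non serializable fields."

    // Fix: keep the field transient and create it on the worker in open().
    private transient GrpcClient client;

    @Override
    public void open(Configuration parameters) {
        client = new GrpcClient();   // created on the TaskManager, never serialized
    }

    @Override
    public String map(String key) {
        return client.lookup(key);
    }
}

The function is then used as stream.map(new SerializableMapper()), and nothing non-serializable ends up in the serialized closure.
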
Dear Roman,
this is my pom.xml file, which is the one from the template project on the
official Flink website. Only the main class has been changed.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.a
Hi Satyam,
In the long term, TableEnvironment is the entry point for pure Table/SQL
users, so it should have all the capabilities of StreamExecutionEnvironment.
I think remote execution is a reasonable feature; could you create a JIRA
issue for this?
As a workaround, you can construct a `StreamTableEnvironment`...
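
For the record, the workaround presumably looks roughly like the sketch below, assuming Flink 1.10 with the blink streaming planner; the host, port, jar path and query are placeholders.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class RemoteTableJob {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port and jar path; points at an already running remote cluster.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host", 8081, "/path/to/job.jar");

        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();

        // StreamTableEnvironment accepts an explicit StreamExecutionEnvironment,
        // which is what makes remote execution possible here.
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        DataStream<String> words = env.fromElements("hello", "flink");
        tEnv.createTemporaryView("words", words, "word");

        Table counts = tEnv.sqlQuery("SELECT word, COUNT(*) AS cnt FROM words GROUP BY word");
        tEnv.toRetractStream(counts, Row.class).print();

        env.execute("remote table job");
    }
}
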
Thanks for the reply, Godfrey.
I would also love to learn the reasoning behind that limitation.
For more context, I am building a Java application that receives some user
input via a gRPC service. The user's input is translated into SQL that
may be executed in streaming or batch mode based on...
Hi Satyam,
For blink batch mode, only TableEnvironment can be used, and TableEnvironment
does not take a StreamExecutionEnvironment as an argument; instead, a
StreamExecutionEnvironment instance is created internally.
Back to your requirement: you can build your table program as a user jar
and submit the job...
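
A minimal sketch of this approach, i.e. a pure-TableEnvironment blink batch program packaged as a user jar, is below; the DDL, connector properties and job name are placeholders, Flink 1.10 API assumed.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class BlinkBatchJob {
    public static void main(String[] args) throws Exception {
        // Blink batch mode: no StreamExecutionEnvironment is passed in;
        // the execution environment is created internally by TableEnvironment.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Placeholder DDL and DML; replace the connector properties with real ones.
        tEnv.sqlUpdate(
                "CREATE TABLE src (id INT, name STRING) " +
                "WITH ('connector.type' = 'filesystem', 'connector.path' = '/tmp/src.csv', 'format.type' = 'csv')");
        tEnv.sqlUpdate(
                "CREATE TABLE snk (id INT, name STRING) " +
                "WITH ('connector.type' = 'filesystem', 'connector.path' = '/tmp/snk.csv', 'format.type' = 'csv')");
        tEnv.sqlUpdate("INSERT INTO snk SELECT id, name FROM src");

        tEnv.execute("blink batch job");
    }
}

Packaged as a user jar, this is then submitted to the cluster in the usual way (Flink CLI or REST API).
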