Please find the attached logs.
The Kubernetes cluster is an AWS EKS cluster, managed by our infra team.
I created a service account "flink" for it; it has permission to create,
list, and delete pods, along with some other resource types, in the
"team-anti-cheat" namespace.
The below command was used …
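As a rough sketch, RBAC along these lines could grant such a service
account those permissions (the role and binding names here are
hypothetical, and "configmaps" merely stands in for the other resource
types mentioned):

  # create the service account in the team namespace
  kubectl create serviceaccount flink -n team-anti-cheat
  # grant create/list/delete on pods and, as an example, configmaps
  kubectl create role flink-role -n team-anti-cheat \
    --verb=create,list,delete --resource=pods,configmaps
  # bind the role to the service account
  kubectl create rolebinding flink-role-binding -n team-anti-cheat \
    --role=flink-role --serviceaccount=team-anti-cheat:flink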
Hi Kevin,
Thanks for sharing the information. I will dig into it and create a ticket
if necessary.
Best,
Yang
Bohinski, Kevin wrote on Thu, Oct 29, 2020 at 2:35 AM:
> Hi Yang,
>
>
>
> Thanks again for all the help!
>
>
>
> We are still seeing this with 1.11.2 and ZK.
>
> Looks like others are seeing this as well …
I second Matthias's suggestion.
If you are using standalone Flink on K8s, then you need some external
tooling (e.g., a K8s operator [1][2]) to help with lifecycle management.
There is also the native Kubernetes integration, where all the K8s
resources are cleaned up automatically when the Flink job …
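For illustration, a session cluster on the native integration can be
started roughly like this (the cluster id and namespace are placeholders);
deleting that deployment then tears down the pods, services, and
configmaps Flink created:

  ./bin/kubernetes-session.sh \
    -Dkubernetes.cluster-id=my-flink-cluster \
    -Dkubernetes.namespace=my-namespace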
Could you share the JobManager logs so that we can check whether it
received the registration from the TaskManager?
In a non-HA Flink cluster, the TaskManager uses the service to talk to the
JobManager.
Currently, Flink creates a headless service for the JobManager. You could
use `kubectl get svc` to find …
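For example (the namespace is a placeholder):

  # the headless JobManager service is what TaskManagers resolve by name
  kubectl get svc -n my-namespace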
Hi Diwakar Jha,
From the logs you have provided, everything seems to be working as
expected. The JobManager and TaskManager Java processes have been started
with the correct dynamic options, especially for logging.
Could you share the content of $FLINK_HOME/conf/log4j.properties? I think
there's something …
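For comparison, a sketch along the lines of the Flink 1.11 default
log4j.properties (Log4j 2 properties syntax); the key point is that the
file appender writes to ${sys:log.file}, which those dynamic options set
per process:

  rootLogger.level = INFO
  rootLogger.appenderRef.main.ref = MainAppender

  appender.main.name = MainAppender
  appender.main.type = File
  appender.main.append = false
  appender.main.fileName = ${sys:log.file}
  appender.main.layout.type = PatternLayout
  appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n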
Maybe this is related to this issue?
https://issues.apache.org/jira/browse/FLINK-17683
On Fri, Oct 30, 2020 at 8:43 PM Rex Fenley wrote:
> Correction, I'm using Scala case classes not strictly Java POJOs just to
> be clear.
>
> On Fri, Oct 30, 2020 at 7:56 PM Rex Fenley wrote:
>
>> Hello,
>>
>>
Hello,
I'm trying to filter the rows of a table by whether or not a value exists
in an array column.
Simple example:
table.where("apple".in($"fruits"))
In this example, each row has a "fruits" array column containing one or
more fruit strings, which may or may not include "apple".
However …
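One possible workaround is a scalar UDF that does the membership check; a
minimal Scala sketch, assuming Flink 1.11 (the function name and its
registration are made up, not a built-in):

  import org.apache.flink.table.api._
  import org.apache.flink.table.functions.ScalarFunction

  // hypothetical UDF: true when the array column contains the needle
  class ArrayContains extends ScalarFunction {
    def eval(haystack: Array[String], needle: String): Boolean =
      haystack != null && haystack.contains(needle)
  }

  // usage sketch, assuming a tableEnv is in scope:
  // tableEnv.createTemporarySystemFunction("arrayContains", new ArrayContains)
  // table.where(call("arrayContains", $"fruits", "apple"))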
Hi, community:
We use Flink 1.9.1, with both the SQL and DataStream APIs, to support
multiple sinks in our production environments.
For example, tableEnv.sqlUpdate("INSERT INTO dest1 SELECT ...") and
table.toAppendStream[Row].addSink(new RichSinkFunction[Row]
{...}).name("dest2"), then env.execute() to submit the …
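Spelled out, a minimal sketch of that two-sink pattern (the table names,
schema, and trivial sink body are assumed; env and tableEnv are the usual
stream and table environments):

  import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
  import org.apache.flink.types.Row

  // SQL sink
  tableEnv.sqlUpdate("INSERT INTO dest1 SELECT id, name FROM src")

  // DataStream sink in the same pipeline
  tableEnv.sqlQuery("SELECT id, name FROM src")
    .toAppendStream[Row]
    .addSink(new RichSinkFunction[Row] {
      override def invoke(value: Row): Unit = println(value)
    })
    .name("dest2")

  env.execute("multi-sink-job")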
Hi Yuval,
Yes, the new table source does not support the new Source API in Flink
1.11. The integration is introduced in Flink master (1.12):
https://cwiki.apache.org/confluence/display/FLINK/FLIP-146%3A+Improve+new+TableSource+and+TableSink+interfaces
Best,
Jingsong
On Sun, Nov 1, 2020 at 10:54 …
Hi,
I'm running Flink 1.11 on EMR 6.1.0. I can see my job is running fine, but
I'm not seeing any TaskManager/JobManager logs.
I see the below error in stdout:
18:29:19.834 [flink-akka.actor.default-dispatcher-28] ERROR
org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerLogFileHandler
- Failed …
Hi Yuta,
There are indeed a few important differences between Spark and Flink.
However, please also note that the different APIs behave differently on
both systems, so it would be good if you could clarify what you are doing,
so that I can go into more detail.
As a starting point, you can always check the a…
Hi,
I've implemented a new custom source using the new DataSource API (
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/sources.html)
and I want to connect it to the new DynamicTableSource API.
However, in ScanTableSource, the getScanRuntimeProvider method returns a
ScanRuntimeProvider …
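For what it's worth, a sketch of the 1.11-era alternative: instead of the
new Source, getScanRuntimeProvider can wrap a legacy SourceFunction
(mySourceFunction below is an assumed SourceFunction[RowData]):

  import org.apache.flink.table.connector.source.{ScanTableSource, SourceFunctionProvider}

  override def getScanRuntimeProvider(
      ctx: ScanTableSource.ScanContext): ScanTableSource.ScanRuntimeProvider =
    // wraps the legacy function until SourceProvider arrives with FLIP-146 (1.12)
    SourceFunctionProvider.of(mySourceFunction, false) // false = unbounded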
Hi,
While trying to compile an application with a dependency on
flink-table-planner-blink_2.12-1.11.2, I receive the following error
message during compilation:
scalac: While parsing annotations in /Library/Caches/Coursier/v1/https/
repo1.maven.org/maven2/org/apache/flink/flink-table-planner-blin…