I want to convert a DataStream to a Table. The type of the DataStream is a
POJO, which contains an enum field.
1. The enum field becomes RAW('classname', '...') in the table. When I execute
`SELECT * FROM t_test` and print the result, it throws an EOFException.
2. If I declare the field as STRING in the schema, it throws
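A common workaround for the RAW enum mapping (a sketch, not necessarily the only fix; the `Status` enum and `Row` wrapper below are hypothetical stand-ins for the actual POJO) is to convert the enum field to its `String` name before handing the stream to the Table API, so the planner sees STRING instead of RAW:

```java
public class EnumMapping {
    // Hypothetical enum standing in for the POJO's actual enum field.
    public enum Status { ACTIVE, INACTIVE }

    // Same POJO shape, but with the enum replaced by its String name,
    // so the Table API planner maps the field to STRING instead of RAW.
    public static class Row {
        public String status;
        public Row(Status s) { this.status = s.name(); }
    }

    public static void main(String[] args) {
        Row r = new Row(Status.ACTIVE);
        System.out.println(r.status);                 // ACTIVE
        // Converting back when reading results out of the table:
        System.out.println(Status.valueOf(r.status)); // ACTIVE
    }
}
```

In the actual job this conversion would live in a map step applied to the DataStream before the stream-to-table conversion; reading the value back is a plain `Status.valueOf` call.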
Hi, Muazim
Flink is an incremental computing framework. In streaming mode it considers
data to be unbounded, so every piece of data that arrives triggers the
computation logic, because it doesn't know when the data will end.
Based on your description, I understand that you may have a full data
com
Hi Shammon,
Thank you for your prompt reply. Also, I'm interested to know if there is an
available feature for integrating Apache Flink with Apache Ranger. If so,
could you kindly share the relevant documentation with me?
Thanks & Regards,
Arjun
Hi, Kenan
I think you may be able to get help from the Kafka community. IMO, it is just
an info-level log; does it have any real impact?
Best,
Ron
Kenan Kılıçtepe wrote on Wed, Aug 2, 2023 at 06:04:
> I got a lot of these disconnection error logs. Why? My flink and kafka
> clusters are running in Google Cloud and I
Hi, longfeng.
I think you should rebuild your connector according to the new API. The
return type of the method `DynamicTableFactory$Context.getCatalogTable()`
is already changed from `CatalogTable` to `ResolvedCatalogTable`[1].
Best,
Hang
[1] https://issues.apache.org/jira/browse/FLINK-21913
lo
Hi arjun,
As @Mate mentioned, the discussion of FLIP-314 has been completed and a
vote will be initiated soon. We would like to introduce the interfaces for
lineage in the next release of Flink after 1.18.
Best,
Shammon FY
On Tue, Aug 1, 2023 at 11:07 PM Mate Czagany wrote:
> Hi,
>
> Unfortuna
Hello Community,
I have an operator pipeline like the one below; is it OK if the "source" task
opens threads using a Java thread pool and parallelizes the work?
This is needed for accepting multiple client socket connections in "single
custom source server socket function".
Single Custom source server s
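Running a thread pool inside a custom source is generally workable, as long as record emission stays properly synchronized with checkpointing. Below is a plain-JDK sketch of the accept-then-dispatch pattern (no Flink APIs; the class and method names are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledSocketServer {
    // Accepts one connection, hands the read work to a thread pool,
    // and returns the line that was received.
    public static String runOnce() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            int port = server.getLocalPort();

            // Stand-in client: connects and sends one line.
            pool.submit(() -> {
                try (Socket c = new Socket("localhost", port);
                     PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                    out.println("hello");
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });

            // "Source" side: accept, then process the connection on the pool.
            Socket conn = server.accept();
            return pool.submit(() -> {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    return in.readLine();
                }
            }).get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnce()); // hello
    }
}
```

In a real SourceFunction, the records produced by such worker threads must be emitted under the checkpoint lock (or via the new Source API), not printed directly.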
Hi Kamal,
For the three points
> 3. What is the difference between high no. of task managers vs high no.
of task slots (with low no. of task managers)?
I think this is mainly limited by the JVM's efficiency in managing memory.
When we use a Flink Session cluster as an OLAP engine, we found that when t
Thanks Shammon.
The purpose of opening the server socket in the Split Enumerator was that
there is only one instance per source, and so only one server socket (port
binding can happen only once). The accepted socket connections
(serverSocket.accept()) would then act as splits, which will be further
processed by
Hi Kamal,
It confuses me a little: what's the purpose of opening a server socket in the
SplitEnumerator? Currently there will be only one SplitEnumerator instance in
the JobManager for each source, not for each source subtask. If there's only
one source in your job, no matter how much parallelism this
Flink 1.13.3
The custom connector uses the Flink Kafka connector code with a little
refactoring, and it could be loaded in Flink 1.12 when using
StreamTableEnvironment.
Now Flink is upgraded to 1.13.3, and the custom connector dependencies were
also upgraded to 1.13.3, but it fails to load:
java.lang.NoSuchMet
I got a lot of these disconnection error logs. Why? My Flink and Kafka
clusters are running in Google Cloud and I don't think there is a network
issue. Also, I get this error even though my workload is very low.
2023-08-01 21:54:00,003 INFO org.apache.kafka.clients.NetworkClient
[] - [Prod
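If the disconnect messages turn out to be harmless noise, one option (a sketch, assuming Flink's default log4j2 properties setup) is to raise the log level for the Kafka client class that emits them:

```properties
# log4j.properties (sketch; the logger name assumes the message comes from
# org.apache.kafka.clients.NetworkClient, as in the log line above)
logger.kafkanetwork.name = org.apache.kafka.clients.NetworkClient
logger.kafkanetwork.level = WARN
```

This only hides the messages; it is worth ruling out misconfigured connection/idle timeouts first.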
Hi,
we are currently migrating some of our jobs into hexagonal architecture and
I have seen that we can use spring as dependency injection framework, see:
https://getindata.com/blog/writing-flink-jobs-using-spring-dependency-injection-framework/
Has anybody analyzed different JVM DI frameworks e.
Hi,
Unfortunately the Atlas hook you've read about is only available in the
Cloudera Flink solution and has not been made open-source.
In the future FLIP-314[1] might offer a simple solution to implement the
Atlas integration.
Best Regards,
Mate
[1]
https://cwiki.apache.org/confluence/display/F
I am looking to integrate Apache Atlas with Apache Flink to capture Job
lineage. I found some references around it from Cloudera (CDP) and they are
using the Atlas-Flink hook, but I am not able to find any documentation or
implementation.
I had gone through the JIRA link as mentioned below.But in thi
Hi,
I've tinkered around a bit more and found that the problem is actually with
Native mode vs Standalone mode. In standalone mode, the pod definition
doesn't get a resource request for nvidia/gpu, whereas in Native mode
it does. I'll open another question since this isn't related to autos
Hi Team,
I am new to Flink. I have a use case where I have a DataStream of Doubles and
I am trying to get the total sum of the whole DataStream.
I have used ReduceFunction and AggregateFunction.
Case 1: With the ReduceFunction the output is a DataStream of rolling sums.
To get the final sum I have to tra
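The rolling-sum behaviour of a ReduceFunction can be illustrated without Flink at all (a plain-Java sketch; `rollingSum` is a made-up helper that mimics what the reduce emits downstream for each element):

```java
import java.util.ArrayList;
import java.util.List;

public class SumDemo {
    // Mimics a ReduceFunction: the running aggregate is emitted downstream
    // for every incoming element, giving a rolling sum.
    public static List<Double> rollingSum(List<Double> input) {
        List<Double> emitted = new ArrayList<>();
        double acc = 0.0;
        for (double d : input) {
            acc += d;           // reduce(acc, d) -> acc + d
            emitted.add(acc);   // each intermediate result is an output record
        }
        return emitted;
    }

    public static void main(String[] args) {
        List<Double> emitted = rollingSum(List.of(1.0, 2.0, 3.5));
        System.out.println(emitted);                         // [1.0, 3.0, 6.5]
        // The "total" is only the last emitted value -- which you can only
        // pick out once the input is known to be finished (bounded input,
        // or a window firing).
        System.out.println(emitted.get(emitted.size() - 1)); // 6.5
    }
}
```

In an unbounded stream there is no "final" element, so to get a single total you need a window (or batch execution mode) so that an aggregate's final result fires exactly once.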
The autoscaler only works for FlinkDeployments in Native mode. You should
turn off the reactive scheduler mode as well because that's something
completely different.
After that you can check the autoscaler logs for more info.
Gyula
On Tue, Aug 1, 2023 at 10:33 AM Raihan Sunny via user
wrote:
>
Hi,
I have a workload that depends on the GPU, and I have only 1 GPU card. As per
the documentation, I have added the necessary configuration and can run the
GPU workload in standalone REACTIVE mode with as many TaskManager instances
as required.
I have set the number of task slots to 1 so that a rai
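For reference, a hedged sketch of the GPU-related configuration (the key names follow Flink's external-resource GPU plugin documentation; the amount and the Kubernetes config key may need adjusting for your cluster):

```yaml
# flink-conf.yaml (sketch)
external-resources: gpu
external-resource.gpu.driver-factory.class: org.apache.flink.externalresource.gpu.GPUDriverFactory
external-resource.gpu.amount: 1
# In native Kubernetes mode this becomes a pod resource request,
# which matches the Native-vs-Standalone behaviour discussed in this thread:
external-resource.gpu.kubernetes.config-key: nvidia.com/gpu
```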
Hi, Kamal
> How many task managers a job manager can handle? Is there any upper limit
also?
There is no hard limit on how many TMs a JM can handle; based on my past
experience, it can handle over 1000 TMs, even more.
> How to decide no. of task managers, is there any way?
I don't think the
The autoscaler scales jobs based on incoming data and processing throughput.
It's completely different from reactive mode: if the throughput/processing
rate doesn't change, it will not scale up even if you have more resources
available.
Also, in native mode you cannot add pods to the cluster; Fli
up
On Tue, 18 Jul 2023 at 09:27, Brendan Cortez
wrote:
> Hi all!
>
> I'm trying to use the jacoco-maven-plugin and run unit-tests for Flink
> Table API, but they fail with an error (see file error_log_flink_17.txt for
> full error stacktrace in attachment):
> java.lang.IllegalArgumentException: