Let's assume I have the following class:

public class TestFlatMap extends RichFlatMapFunction {
    private Connection connection;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        // Open Connection
Thanks for your time in helping me here.
So as long as the parallelism of my Kafka source and sink operators is 1,
all the subsequent operators (multiple filters to create multiple streams,
and then individual CEP and Process operators per stream) will be executed
in the same task slot?
I cannot
Hi,
We noticed that we couldn't parallelize our Flink Docker containers, and
this looks like an issue that others have experienced. In our environment we
were not setting any hostname in the Flink configuration. This worked for
the single node, but it looks like the taskmanagers would have the
exception
Hi Timo,
Thanks for your reply. I do notice that the document says "A Table is always
bound to a specific TableEnvironment. It is not possible to combine tables of
different TableEnvironments in the same query, e.g., to join or union them."
Does that mean there is no way I can make operations,
Hi,
I was reading that I should avoid using dynamic classloading and so copy the
job's jar into the /lib directory (RE: below)
1. How can I confirm that the jar was copied over? I only see the following
below:
2017-11-20 15:36:52,724 INFO org.apache.flink.yarn.Utils
Hey Timo,
thanks for your warm welcome and for creating a ticket to fix this!
My scenario is the following:
I receive different JSON entities from an AMQP queue. I have a source to
collect the events, after that I parse them into the different internal case
classes and split the stream via the spl
Hi Mans,
For understanding the difference between FIRE and FIRE_AND_PURGE it's helpful
to look at the cases where it really makes a difference. In my opinion this
only makes a difference when you have event-time windowing and when you have
multiple firings for the same window (i.e. multiple firi
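The difference can be simulated in plain Java: FIRE emits the window contents and keeps them, while FIRE_AND_PURGE emits them and clears the buffer. A minimal sketch of that behavior (WindowBuffer and TriggerDemo are stand-ins for illustration, not Flink's actual API):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a window's element buffer; not Flink's real window state.
class WindowBuffer {
    private final List<Integer> elements = new ArrayList<>();

    void add(int e) { elements.add(e); }

    // FIRE: emit a snapshot of the contents, keep them for the next firing.
    List<Integer> fire() { return new ArrayList<>(elements); }

    // FIRE_AND_PURGE: emit the contents, then clear the buffer.
    List<Integer> fireAndPurge() {
        List<Integer> out = new ArrayList<>(elements);
        elements.clear();
        return out;
    }
}

public class TriggerDemo {
    public static void main(String[] args) {
        WindowBuffer w = new WindowBuffer();
        w.add(1); w.add(2);
        System.out.println(w.fire());          // contents kept after firing
        w.add(3);
        System.out.println(w.fire());          // old contents plus all changes
        System.out.println(w.fireAndPurge());  // emitted, then cleared
        w.add(4);
        System.out.println(w.fire());          // a "new window" without previous elements
    }
}
```

With a single firing per window the two results are identical; only repeated firings on the same window expose the difference, which is why it matters mainly for event-time windows with early/late triggers.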
Hi Wangsan,
yes, the Hive integration is limited so far. However, we provide an
external catalog feature [0] that allows you to implement custom logic
to retrieve Hive tables. I think it is not possible to do all your
operations in Flink's SQL API right now. For now, I think you need to
combin
Hi Nishu,
did you compile Flink from sources as recommended here [1]?
Regards,
Federico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#vendor-specific-versions
2017-11-20 13:53 GMT+01:00 Nishu :
> Hi,
>
> I am trying to start flink session(v1.3.2) on yarn(
Hi,
instead of using the RequestHedgingRMFailoverProxyProvider you could try
to use the org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.
You can configure this in the YARN configuration via
yarn.client.failover-proxy-provider.
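A minimal sketch of that setting in yarn-site.xml (property name as given above; the value is Hadoop's ConfiguredRMFailoverProxyProvider):

```xml
<!-- yarn-site.xml: use the statically configured RM failover proxy provider -->
<property>
  <name>yarn.client.failover-proxy-provider</name>
  <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
</property>
```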
Kind regards,
Thomas
On 11/20/2017 01:53 PM, Nishu wrote:
Hi,
I am trying to start a Flink session (v1.3.2) on YARN (v2.7) on an HDInsight
cluster. But it throws the following error:
Error while deploying YARN cluster: Couldn't deploy Yarn cluster
java.lang.RuntimeException: Couldn't deploy Yarn cluster
at
org.apache.flink.yarn.AbstractYarnClusterDe
Hi all,
I am currently learning the Table API and SQL in Flink. I noticed that Flink does
not support Hive tables as a table source, and even a JDBC table source is not
provided. There are cases where we need to join a stream table with static Hive or
other database tables to get more specific attributes
Hi,
>
> "In the first case, it is a new window without the previous elements, in the
> second case the window reflects the old contents plus all changes since the
> last trigger."
>
> I am assuming the first case is FIRE and the second case is FIRE_AND_PURGE - I
> was thinking that in the first c
We have a Flink job containing almost 20 process functions (map, flatMap,
process, filter, etc.). The state dependencies among those process functions are
very complex:
* Shared states are several key-value maps.
* Different functions share different states.
* Functions may query and
Hi,
thanks for writing to the mailing list. I could reproduce your error
and opened an issue for it
(https://issues.apache.org/jira/browse/FLINK-8107). UNNEST currently
only supports unnesting and joining an array of the same relation.
However, joining of two relations will be supported soon