Hi,
The RichProcessFunction variant gives access to the functionality provided by
the RichFunction interface (e.g. allows overriding the `open` function
lifecycle method, access to the runtime context, etc.).
Whether or not to use it depends on your use case. You should be able to just
continue
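To illustrate the lifecycle described above, here is a simplified plain-Java sketch of the pattern. Note this is not the actual Flink API; `RuntimeContext` and the class names here are stand-ins to show the idea of an `open` hook that runs before any elements are processed:

```java
// Simplified illustration of the RichFunction lifecycle pattern:
// open() is invoked once before processing and receives a runtime context.
// NOT the real Flink API -- a hypothetical, self-contained sketch.
import java.util.ArrayList;
import java.util.List;

public class LifecycleSketch {
    // Stand-in for Flink's RuntimeContext (hypothetical, simplified).
    static class RuntimeContext {
        int getIndexOfThisSubtask() { return 0; }
    }

    // Stand-in for a rich function: open() is the lifecycle hook.
    static abstract class RichProcessSketch<I, O> {
        protected RuntimeContext ctx;
        void open(RuntimeContext ctx) { this.ctx = ctx; }
        abstract O process(I value);
    }

    public static void main(String[] args) {
        RichProcessSketch<Integer, Integer> fn =
            new RichProcessSketch<Integer, Integer>() {
                @Override
                Integer process(Integer value) { return value * 2; }
            };
        fn.open(new RuntimeContext()); // the framework calls open() first
        List<Integer> out = new ArrayList<>();
        for (int v : new int[]{1, 2, 3}) out.add(fn.process(v));
        System.out.println(out); // [2, 4, 6]
    }
}
```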
Hi,
Thanks for reporting this.
This is a known issue to be fixed in the upcoming 1.3.1:
https://issues.apache.org/jira/browse/FLINK-6844.
Regards,
Gordon
On 6 June 2017 at 1:47:53 AM, rhashmi (rizhas...@hotmail.com) wrote:
After upgrading I started getting this exception; is this a bug?
2017
I find that it never calls the spout's fail or ack method.
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Storm-topology-running-in-flink-tp13495.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
Hello,
I'm having issues editing the default Flink memory settings. I'm attempting to
run a Flink task on a cluster at scale. The log shows my edited config settings
having been read into the program, but they're having no effect. Here's the
trace:
17/06/05 23:45:41 INFO flink.FlinkRunner: Start
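For reference, the heap-size settings in that era's flink-conf.yaml looked roughly like the sketch below (key names assume the 1.x-era configuration and should be verified against the docs for your version; the values are placeholders):

```yaml
# flink-conf.yaml -- heap sizes in megabytes (assumed 1.x-era key names;
# verify against the configuration docs for your exact version)
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 4096
# Note: when deploying on YARN, the -jm/-tm flags passed to
# yarn-session.sh override these file-based values.
```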
After upgrading I started getting this exception; is this a bug?
2017-06-05 23:45:03,423 INFO
org.apache.flink.runtime.executiongraph.ExecutionGraph - UTCStream ->
Sink: UTC sink (2/12) (f78ef7f7368d27f414ebb9d0db7a26c8) switched from
RUNNING to FAILED.
java.lang.Exception: Could not perfor
with 1.3 what should we use for processFunction?
org.apache.flink.streaming.api.functions.RichProcessFunction
org.apache.flink.streaming.api.functions.ProcessFunction.{OnTimerContext,
Context}
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com
Hello,
I was just reading about this, because I am developing my final degree
project on the performance of Spark and Flink.
I've developed a machine learning algorithm, and I want to trigger the
execution in Flink.
When I run it with my code it takes around 5 minutes (all this time just in
the collect
Hi Gordon,
The yarn session gets created when I try to run the following command:
yarn-session.sh -n 4 -s 2 -jm 1024 -tm 3000 -d --ship deploy-keys/
However when I try to access the Job Manager UI, it gives me exception as :
javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorExc
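For context, enabling SSL in that era of Flink involved settings along these lines in flink-conf.yaml. This is a sketch assuming the 1.3-era `security.ssl.*` key names (verify against the security docs for your version); paths and passwords are placeholders:

```yaml
# flink-conf.yaml -- SSL sketch (assumed 1.3-era key names; placeholders)
security.ssl.enabled: true
security.ssl.keystore: /path/to/deploy-keys/keystore.jks
security.ssl.keystore-password: changeit
security.ssl.key-password: changeit
security.ssl.truststore: /path/to/deploy-keys/truststore.jks
security.ssl.truststore-password: changeit
```

An SSLHandshakeException from the browser or client usually means the certificate is not trusted or its hostname/SAN does not match the address being accessed.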
Hi Gordon,
Thank you for your response.
I have done the necessary configuration by adding all the node IPs from the
Resource Manager; is this correct?
Also, I will check whether a wildcard works, since all our hostnames begin
with the same pattern.
For example: SAN=dns:ip-192-168.* should work, right?
F
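One caveat worth checking here: TLS hostname verifiers generally honor a certificate wildcard only as the complete leftmost DNS label (per RFC 6125), e.g. `*.example.com`, so a trailing pattern like `ip-192-168.*` may not match. A simplified plain-Java sketch of that leftmost-label rule (an illustration, not the actual JDK verifier):

```java
// Sketch of the leftmost-label wildcard rule (RFC 6125) that most TLS
// hostname verifiers apply. Simplified illustration, not the JDK verifier.
public class WildcardSketch {
    static boolean matches(String pattern, String hostname) {
        if (!pattern.startsWith("*.")) return pattern.equalsIgnoreCase(hostname);
        String suffix = pattern.substring(1); // e.g. ".internal"
        int firstDot = hostname.indexOf('.');
        // the wildcard must cover exactly one leftmost label
        return firstDot > 0 && hostname.substring(firstDot).equalsIgnoreCase(suffix);
    }

    public static void main(String[] args) {
        System.out.println(matches("*.internal", "ip-192-168-0-1.internal")); // true
        System.out.println(matches("*.internal", "a.b.internal"));            // false
        // a trailing wildcard is generally not honored:
        System.out.println(matches("ip-192-168.*", "ip-192-168.foo"));        // false
    }
}
```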
Hello everyone,
I am a complete newcomer to streaming engines and big data platforms, but I am
considering using Flink for my master's thesis, and before starting to use it I
am trying to do some kind of evaluation of the system. In particular I am
interested in observing how the system reacts in
Hi Vinay!
1. Will the existing functionality provided by Amazon to configure
in-transit data encryption work for Flink as well? This is explained here:
http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-encryption-enable-security-configuration.html
http://docs.aws.amazon.com/emr/latest/Release
Thank you Till.
Gordon can you please help.
Regards,
Vinay Patil
On Fri, Jun 2, 2017 at 9:10 PM, Till Rohrmann [via Apache Flink User
Mailing List archive.] wrote:
> Hi Vinay,
>
> I've pulled my colleague Gordon into the conversation who can probably
> tell you more about Flink's security feat
Hi everybody,
in my job I have a groupReduce operator with parallelism 4, and one of the
sub-tasks takes a huge amount of time compared to the others.
My guess is that the objects assigned to that slot have much more data to
reduce (and thus are somehow computationally heavy within the groupReduce
operator
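One quick way to test that skew hypothesis is to count elements per grouping key before blaming the reduce logic itself. A plain-Java sketch with hypothetical sample data (in Flink itself the same check could be done by mapping to (key, 1) and summing per key):

```java
// Check for key skew: if the largest key group is far bigger than the
// average, one parallel sub-task will receive most of the work.
// Hypothetical sample data; plain Java, not a Flink job.
import java.util.*;
import java.util.stream.*;

public class SkewCheck {
    public static void main(String[] args) {
        List<String> keys = Arrays.asList("a","a","a","a","a","a","b","c","c","d");
        Map<String, Long> counts = keys.stream()
            .collect(Collectors.groupingBy(k -> k, Collectors.counting()));
        long max = Collections.max(counts.values());
        double avg = (double) keys.size() / counts.size();
        System.out.println(counts);                   // per-key group sizes
        System.out.println("max=" + max + " avg=" + avg);
    }
}
```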
Hi Chesnay, thank you very much for your help.
If I name the datastreams, e.g.
map.filter(condition1).name(...).filter(condition2).name(...),
are the “Records sent” counters correct?
thanks,
best regards
Luis.
Hi Robert,
I have few more questions to clarify.
1) Why do you say that printing the values to standard out would display
duplicates even if exactly-once works? What is the reason for this? Could
you brief me on this?
2) I observed duplicates (by writing to a file) starting from the
FlinkKafkaCon