Hi,
I think there is no difference between JobVertex(A) and JobVertex(B).
Because JobVertex(C) is not shown in the right graph, it may mislead
you. There should be another intermediate result partition between JobVertex(B)
and JobVertex(C) for each parallel subtask, and that is the same case
Hi Aniket!
Thanks for also looking into the problem!
I think checking `getAuthenticationMethod` on the UGI subject is the way to go.
At the moment I don’t think there’s a better “proper” solution for this.
As explained in the JIRA, we simply should not be checking for Kerberos
credentials for al
Using Flink 1.2.
Hi all, I have a question about Batch processing and Sinks. I have a Flink job
that parses a User Request log file that contains performance data per request.
It accumulates metric data values into 15 minute time windows. Each log line
is mapped to multiple records, so that e
So, I was able to circumvent this issue. This is in no way a permanent
solution, but I thought I should let you (and anybody who encounters this
problem in future) know some of my observations.
What I found out was that:
1. In MapR's version of Hadoop, they do the authentication inside
initialize(
Thanks Suneel. Exactly what I was looking for.
On Mon, Mar 13, 2017 at 10:31 AM, Suneel Marthi wrote:
> For an example implementation using Flink, check out
> https://github.com/bigpetstore/bigpetstore-flink/blob/master/src/main/java/org/apache/bigtop/bigpetstore/flink/java/FlinkStreamingRec
Hi Chet,
the following things may cause the error you mentioned:
* the job ID of the query must match the ID of the running job
* the job is not running anymore
* the queryableStateName does not match the string given to
setQueryable("query-name")
* the queried key does not exist (note that you ne
For an example implementation using Flink, check out
https://github.com/bigpetstore/bigpetstore-flink/blob/master/src/main/java/org/apache/bigtop/bigpetstore/flink/java/FlinkStreamingRecommender.java
On Mon, Mar 13, 2017 at 1:29 PM, Suneel Marthi wrote:
> A simple way is to populate a Priority Q
A simple way is to populate a priority queue of max size 'k' and implement
a comparator on your records. That would ensure that you always have the top k
records at any instant in time.
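The priority-queue approach above can be sketched in plain Java (the `ProductCount` record and the sample sales data are hypothetical, just for illustration). The trick is to use a *min*-heap ordered by count: its head is always the smallest of the current top k, so it gets evicted whenever a larger record arrives.

```java
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical record type: a product name plus its sales count.
record ProductCount(String product, long count) {}

public class TopK {
    // Keep the k largest elements seen so far in a min-heap: the head is
    // always the smallest of the current top k, so it is dropped when a
    // bigger element arrives.
    static PriorityQueue<ProductCount> topK(List<ProductCount> records, int k) {
        PriorityQueue<ProductCount> heap =
            new PriorityQueue<>(k, Comparator.comparingLong(ProductCount::count));
        for (ProductCount r : records) {
            heap.offer(r);
            if (heap.size() > k) {
                heap.poll(); // evict the current minimum
            }
        }
        return heap;
    }

    public static void main(String[] args) {
        List<ProductCount> sales = List.of(
            new ProductCount("a", 5), new ProductCount("b", 12),
            new ProductCount("c", 7), new ProductCount("d", 3));
        PriorityQueue<ProductCount> top2 = topK(sales, 2);
        // top2 now holds "c" (7) and "b" (12)
        System.out.println(top2.size());
    }
}
```

Inside a Flink window function you would apply the same logic per window, emitting the heap's contents when the window fires.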
On Mon, Mar 13, 2017 at 1:25 PM, Meghashyam Sandeep V <
vr1meghash...@gmail.com> wrote:
> Hi All,
>
> I'm trying to u
Hi All,
I'm trying to use Flink for a use case where I would want to see my top
selling products in time windows in near real time (windows of size 1-2
mins is fine). I guess this is the most common use case for streaming
APIs in e-commerce. I see that I can iterate over records in a windowed
s
Zhijiang is right, it is not possible to tell this from these logs.
The Yarn logs probably hold the cause for this.
On Mon, Feb 20, 2017 at 9:21 AM, lining jing wrote:
> I have seen the log, did not find any information. Just get some
> information about the machine running this node. Disk less 10%
I think we need to get away from the dynamic class loading as much as
possible. It breaks far too soon and easily causes class leaks.
I would be in favor of understanding how to fix this on the Flink side,
i.e., either:
- Having flags for disabling it optionally
- Having an option of "user cod
What Robert said is correct. However, that behaviour depends on the
Trigger. You can write your own Trigger that behaves differently when
late data arrives, that is, you could write a trigger that never fires
for late data. In that case, you can also simply set the allowed
lateness to zero, however
For context, this is from the Flink doc:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/internals/job_scheduling.html
I think Ufuk (cc-ed) could know more about this.
On Mon, Mar 13, 2017, at 05:42, 윤형덕 wrote:
> Hi All,
>
> figure1
> https://ci.apache.org/projects/flin
Hi Sam,
could you please also send the code of the timestamp/watermark assigner?
This could also affect things.
Best,
Aljoscha
On Thu, Mar 9, 2017, at 19:58, Sam Huang wrote:
> Hi Aljoscha,
>
> Here's the code:
> private static class DataFilterFunImpl extends
> RichCoFlatMapFunction {
I think the change reduces the chances of running into classloading issues
in case there's a bug in Flink (i.e., it is using the wrong classloader).
I've filed a JIRA for the problem:
https://issues.apache.org/jira/browse/FLINK-6031
On Fri, Feb 24, 2017 at 9:29 PM, Gyula Fóra wrote:
> Hi,
> I am wondering whethe
Any guidance on troubleshooting the error "No KvStateLocation found for KvState instance with name X" when trying to make a queryable state call in Flink 1.2.0? I do know the server is receiving the call made from the remote client. (query.server.enable = true on the server in flink-conf.yaml,
Hi Chet,
Nico or Ufuk (in CC) should be able to help you.
Thanks,
Fabian
2017-03-13 11:23 GMT+01:00 Chet Masterson :
> Any guidance on troubleshooting error
>
> Error: No KvStateLocation found for KvState instance with name X
>
> when trying to make a queryable state call in Flink 1.2.0?
Hi Ismaël,
as far as I know, there is no official policy about this.
Looking at the release history, we have so far only covered the last
official minor release with bugfix releases which would be the 1.2.x line
at the moment.
With growing user base and production deployments, it makes sense to de
Hello
I would like to know what the official policy on version maintenance for
Flink is: is only the latest stable release maintained, or are earlier
versions supported too?
Looking at the Wikipedia page on Flink, it mentions that Flink 1.0 is still
maintained; is this the case? I have not found this
Hi guys,
I'm retrying to send some app-related custom metrics from Flink to
Datadog via StatsD.
I followed
https://ci.apache.org/projects/flink/flink-docs-release-1.2/monitoring/metrics.html
to set up flink-conf.yaml and test code like this
// flink-conf.yaml
metrics.reporters: stsd
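The config fragment is cut off here; for reference, a typical StatsD reporter setup in flink-conf.yaml for Flink 1.2 looks roughly like the following. The reporter name `stsd` follows the fragment above, and the host/port values are placeholders, not taken from the original message:

```yaml
# flink-conf.yaml (sketch; host/port are placeholders)
metrics.reporters: stsd
metrics.reporter.stsd.class: org.apache.flink.metrics.statsd.StatsDReporter
metrics.reporter.stsd.host: localhost
metrics.reporter.stsd.port: 8125
```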