Hi Shailesh,
I am afraid it is going to be hard to help you, as this branch differs
significantly from the 1.4.2 release (I've run a diff between your branch and
tag/release-1.4.2). Moreover, the code in the branch you've provided
still does not correspond to the lines in the exception you posted
previously.
Hi,
I recently tried to upgrade a Flink job from 1.3.2 to 1.6.0. It deploys
successfully as usual, but logs the following exception shortly after
starting:
Caused by: org.apache.avro.AvroRuntimeException:
avro.shaded.com.google.common.util.concurrent.UncheckedExecutionException:
org.apache.avro.Av
Hi,
One of my Flink on YARN jobs got into a weird situation after a fail-fast
restart. The restart was triggered by loss of a TaskManager, but when the job
was recovering, one of its subtasks (1/24) couldn’t be deployed, and finally
failed with NoResourceAvailableException.
Looking into the lo
Gentle reminder on this question.
From: V N, Suchithra (Nokia - IN/Bangalore)
Sent: Monday, September 24, 2018 3:56 PM
To: user@flink.apache.org
Subject: Information required regarding SSL algorithms for Flink 1.5.x
Hello,
We have a query regarding SSL algorithms available for Flink versions. Fr
So there are two issues here:
1) Your reporter configuration is wrong: configuration values for a
specific reporter are prefixed with "metrics.reporter", not
"metrics.reporters" (note the "s"). See below for a correct config.
metrics.reporters: varuy
metrics.reporter.varuy.host: tracing115
metrics
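Spelled out in full, the corrected section of flink-conf.yaml might look like the following sketch. Only the reporter name "varuy" and the host come from this thread; the reporter class and port are assumptions for illustration (a Graphite reporter is assumed here, purely as an example):

```yaml
# Sketch of a corrected reporter config: the reporter is named in the
# plural "metrics.reporters" list, but its own settings use the
# singular "metrics.reporter.<name>." prefix.
metrics.reporters: varuy
metrics.reporter.varuy.class: org.apache.flink.metrics.graphite.GraphiteReporter  # assumed reporter class
metrics.reporter.varuy.host: tracing115
metrics.reporter.varuy.port: 2003  # assumed port
```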
Please see
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/config.html#security-ssl-algorithms
for the SSL algorithms that are available by default for 1.5.
On 27.09.2018 13:24, V N, Suchithra (Nokia - IN/Bangalore) wrote:
Gentle reminder on this question.
*From:*V N, Suchith
Hi Averell,
From what I understand for your use case, it is possible to do what you want
with Flink.
If you are implementing a function, then you have access to the metric system
through
the runtime context (see [1] for more information).
Some things to take into consideration:
1) Metrics are
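Registering a metric through the runtime context, as described above, can be sketched roughly as follows; the function and the counter name are made up for illustration:

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Sketch: a rich function registering a user-defined counter
// through the runtime context's metric group.
public class CountingMapper extends RichMapFunction<String, String> {

    private transient Counter eventsProcessed;

    @Override
    public void open(Configuration parameters) {
        // The metric system is reachable via the runtime context.
        eventsProcessed = getRuntimeContext()
                .getMetricGroup()
                .counter("eventsProcessed"); // assumed metric name
    }

    @Override
    public String map(String value) {
        eventsProcessed.inc();
        return value;
    }
}
```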
Hi Vishal,
Currently there is no way to share (user-defined) resources between tasks on
the same TM.
So I suppose that a singleton is the best way to go for now.
Cheers,
Kostas
> On Sep 27, 2018, at 3:43 AM, Hequn Cheng wrote:
>
> Hi vishal,
>
> Yes, we can define a static connection to reus
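The singleton approach suggested above can be sketched in plain Java; the resource type here is a made-up placeholder for a real connection or pool. The holder idiom gives thread-safe lazy initialization, which matters because all subtasks in one TaskManager share the JVM:

```java
// Sketch: a per-JVM shared resource using the initialization-on-demand
// holder idiom, so all tasks running in the same TaskManager (and the
// same classloader) see a single instance.
public class SharedResource {

    // Placeholder for the real resource (e.g. a connection pool).
    public static final class Resource {
        public boolean isOpen() { return true; }
    }

    private static final class Holder {
        // Initialized lazily and thread-safely by the JVM on first access.
        static final Resource INSTANCE = new Resource();
    }

    public static Resource get() {
        return Holder.INSTANCE;
    }
}
```

Note that "per JVM" here is really "per classloader", which ties into the class-loading question discussed elsewhere in this thread.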
Hi Paul,
I am also cc’ing Till and Gary, who may be able to help; to give them more
information, it would help if you told us which Flink version you are using.
Cheers,
Kostas
> On Sep 27, 2018, at 1:24 PM, Paul Lam wrote:
>
> Hi,
>
> One of my Flink on YARN jobs got into a weird situatio
Hi Kostas,
Sorry, I forgot that. I'm using Flink 1.5.3.
Best,
Paul Lam
Kostas Kloudas wrote on Thu, Sep 27, 2018 at 8:22 PM:
> Hi Paul,
>
> I am also cc’ing Till and Gary who may be able to help, but to give them
> more information,
> it would help if you told us which Flink version you are using.
>
> Cheers
Hi,
I want a top-n result for each hopping window, but an error is thrown
when I add the ORDER BY clause or the LIMIT clause, so how do I
implement such a case?
Thanks a lot.
SELECT
trackId AS id, track_title AS description, count(*) AS cnt
FROM
play
WHERE
appN
Hi Kostas,
Yes, I want them as metrics, as they are purely for monitoring purposes.
There's no need for fault tolerance.
If I use side-output, for example for metric no. 1, I would need a
tumbling AllWindowFunction, which, as I understand it, would introduce some
delay to both the normal processing
Hi Averell,
> On Sep 27, 2018, at 3:09 PM, Averell wrote:
>
> Hi Kostas,
>
> Yes, I want them as metrics, as they are purely for monitoring purpose.
> There's no need of fault tolerance.
>
> If I use side-output, for example for that metric no.1, I would need a
> tumbling AllWindowFunction, wh
Makes sense. An additional query: how does Flink handle class loading? Is
there a separate class loader per job? In essence, if I have a static
member in a class in a job, it would be highly inappropriate for that
static member to be available to another job.
On Thu, Sep 27, 2018, 8:15 AM Kostas Kl
Hi Henry,
Currently, ORDER BY is supported in both streaming and batch, while LIMIT is
only supported in batch. Another thing to note is that, for ORDER BY, the
result of streaming queries must be primarily sorted on an ascending time
attribute [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-master/
Hi again,
In the end we managed to get data into Kinesalite using the
FlinkKinesisProducer, but to do so we had to use a different configuration,
such as ignoring the 'aws.endpoint' setting and going for the ones that the
Kinesis configuration expects. So, to our FlinkKinesisProducer we pass
co
Hey guys,
I'm seeing a weird error happening here: We have our JobManager configured
in HA mode, but with a single JobManager in the cluster (the second one was
on another machine that started showing flaky network, so we removed it).
Everything is running in Standalone mode.
Sometimes, the jobs ar
Yun,
> Then I would share some experience about tuning RocksDB performance. Since
> you did not cache index and filter blocks in the block cache, there is no
> worry about competition between data blocks and index & filter blocks [1].
> And to improve the read performance, you should increase your block cach
Hi,
We are new to Flink and would like to get some ideas on achieving parallelism
and maintaining state in Flink for a few interesting problems.
We receive files that contain multiple types of events. We need to process them
in the following way:
1. Multiple event sources send event files to o
Hi Julio,
If the single JobManager is lost temporarily and reconnects later, it can
be re-granted leadership. And if you use Flink on YARN, the YARN RM
(depending on configuration) will start a new ApplicationMaster to act as
a take-over JobManager.
Best,
tison.
Julio Biason wrote on Fri, Sep 28, 2018 at 3:
Hi Julio,
Which version of Flink are you using? If it is 1.5+, then you can try to
increase the heartbeat timeout by configuring it [1].
In addition, a possible cause is that the load on the TM is too heavy; for
example, a full GC causing JVM stalls, or deadlocks and other issues, may
cause heartbeat timeouts
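The heartbeat settings referred to in [1] live in flink-conf.yaml; a sketch follows. The concrete values are illustrative, not recommendations:

```yaml
# Sketch: heartbeat settings available since Flink 1.5.
heartbeat.interval: 10000   # ms between heartbeat requests (default 10000)
heartbeat.timeout: 180000   # ms without a heartbeat before a peer is
                            # considered dead (default 50000), raised here
                            # as an example to ride out long GC pauses
```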
Hi Hequn,
If LIMIT n is not supported in streaming, how do I solve the top-n problem
in a streaming scenario?
Best
Henry
> On Sep 28, 2018, at 12:03 AM, Hequn Cheng wrote:
>
> Hi Henry,
>
> Currently, Order By is supported in Streaming&Batch while Limit is only
> supported in Batch. Another thing to be not
Hi Julio,
If you are using an HA mode that depends on other services like ZooKeeper, you
can also check whether that service is OK when the JM loses leadership.
In our experience, a network partition between the JM and ZK, or a ZK object
exceeding the max size limit, can also lead to loss of JM leadership.
vino yang wrote on 2
Aah got it.
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/debugging_classloading.html
On Thu, Sep 27, 2018 at 11:04 AM Vishal Santoshi
wrote:
> Makes sense. An additional query.. How does flink handle class loading.
> Is there a separate class loader per job ? In essence i
Hi everyone,
Today I got this error again in another job, and I found some logs indicating
it’s probably related to the delegation token.
```
2018-09-28 09:49:09,668 INFO org.apache.flink.yarn.YarnResourceManager
- Closing TaskExecutor connection
container_1536852167599_12
Hi,
I saw "static/dynamic lookups in flink streaming" being discussed in the
Apache Flink User Mailing List archive, and then I saw this FLIP:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-17+Side+Inputs+for+DataStream+API.
I know we haven't made much progress on this topic. I still wanted to
Hi.
We have created our own database source that polls the data at a configured
interval. We then use a co-process function: it takes two inputs, one from our
database and one from our data stream. It requires that you keyBy the
attributes you use for lookup in your map function.
To delay your
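The pattern described above (a polling database source joined with the main stream via keyed co-processing) could be sketched like this; the class, state name, and string types are invented for illustration, and both streams are assumed to be keyed on the lookup attribute before being connected:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
import org.apache.flink.util.Collector;

// Sketch: enrich events with the latest reference row for the same key.
// Keyed state is only available when both connected streams are keyed.
public class EnrichmentFunction
        extends CoProcessFunction<String, String, String> {

    private transient ValueState<String> reference;

    @Override
    public void open(Configuration parameters) {
        reference = getRuntimeContext().getState(
                new ValueStateDescriptor<>("reference", String.class));
    }

    // Element from the main data stream: enrich it if reference data exists.
    @Override
    public void processElement1(String event, Context ctx,
                                Collector<String> out) throws Exception {
        String ref = reference.value();
        out.collect(ref == null ? event : event + " | " + ref);
    }

    // Element from the polling database source: update the keyed state.
    @Override
    public void processElement2(String row, Context ctx,
                                Collector<String> out) throws Exception {
        reference.update(row);
    }
}
```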
Hi Dawid,
Thanks for your time on this. The diff should have pointed out only the top
3 commits, but since it did not, it is possible I did not rebase my branch
against 1.4.2 correctly. I'll check this out and get back to you if I hit
the same issue again.
Thanks again,
Shailesh
On Thu, Sep 27,
Hi,
As we are aware, currently we cannot use a RichAggregateFunction in the
aggregate() method on a windowed stream. So, to access state in your
custom AggregateFunction, you can implement it using a ProcessFunction.
This issue is faced by many developers.
So, someone must have implemented or tried to
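One way to read the suggestion above: keep the incremental aggregation in a plain (non-rich) AggregateFunction, and put any logic that needs runtime-context access into a ProcessWindowFunction combined with it. A rough sketch, with all names invented for illustration:

```java
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Sketch: incremental counting without a RichAggregateFunction.
public class CountAggregate implements AggregateFunction<String, Long, Long> {
    @Override public Long createAccumulator() { return 0L; }
    @Override public Long add(String value, Long acc) { return acc + 1; }
    @Override public Long getResult(Long acc) { return acc; }
    @Override public Long merge(Long a, Long b) { return a + b; }
}

// The ProcessWindowFunction is "rich" (it has runtime-context access);
// combine them via, e.g.:
//   stream.keyBy(...).window(...)
//         .aggregate(new CountAggregate(), new EmitWithKey());
class EmitWithKey
        extends ProcessWindowFunction<Long, String, String, TimeWindow> {
    @Override
    public void process(String key, Context ctx, Iterable<Long> counts,
                        Collector<String> out) {
        // Receives the pre-aggregated count, one element per window.
        out.collect(key + "=" + counts.iterator().next());
    }
}
```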