I think I got this to work, although with a "nasty" workaround.
While debugging I found that the configuration for this testHarness operator was missing
two entries:
"edgesInOrder"
"typeSerializer_in_1"
I added conditional breakpoints to the InstantiationUtil.readObjectFromConfig
method for those two keys and I ran
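For reference, the workaround itself is roughly this (a sketch against Flink 1.9 internals; I'm assuming the harness exposes its StreamConfig via getStreamConfig() and that the two setters below back those two config keys):

import java.util.Collections;
import org.apache.flink.api.common.typeutils.base.IntSerializer;
import org.apache.flink.streaming.api.graph.StreamConfig;

// populate the two missing entries before opening the harness
StreamConfig streamConfig = testHarness.getStreamConfig();
// backs the "typeSerializer_in_1" key (use the serializer of your input type)
streamConfig.setTypeSerializerIn1(IntSerializer.INSTANCE);
// backs the "edgesInOrder" key; an empty list is enough for a single-operator harness
streamConfig.setOutEdgesInOrder(Collections.emptyList());
testHarness.open();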
Hi all,
Just wondering what is the status at this point?
On Thu, Sep 19, 2019 at 4:38 PM Hequn Cheng wrote:
> Hi,
>
> Fabian is totally right. Big thanks for the detailed answers and nice
> examples above.
>
> As for the PR, very sorry about the delay. It is mainly because of the
> merge of blin
Hi
From my experience, you can first check the jobmanager.log and find out
whether the checkpoint expired or was declined by some task. If it expired,
you can follow the advice seeksst gave above (maybe enabling debug logging can
help you here); if it was declined, then you can go to the taskmanager.log to
f
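If the checkpoints are expiring, one knob worth trying is the checkpoint timeout; a minimal sketch (the interval and timeout values are just examples):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// trigger a checkpoint every 60 s
env.enableCheckpointing(60_000L);
// allow each checkpoint up to 10 minutes before it is considered expired
env.getCheckpointConfig().setCheckpointTimeout(600_000L);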
Hi
As far as I know, the latency-tracking feature is meant for debugging; you
can use it to debug, and disable it when running the job in production.
From my side, using $current_processing - $event_time is okay, but
keep this in mind: the event time may not be the time the record was ingested in
Fl
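If it helps, here is a minimal sketch of that idea (the class and metric names are mine, not any Flink built-in): a ProcessFunction that exposes $current_processing - $event_time as a gauge:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class EventTimeLagFunction<T> extends ProcessFunction<T, T> {
    private transient volatile long lastLagMillis;

    @Override
    public void open(Configuration parameters) {
        getRuntimeContext().getMetricGroup()
            .gauge("eventTimeLag", (Gauge<Long>) () -> lastLagMillis);
    }

    @Override
    public void processElement(T value, Context ctx, Collector<T> out) {
        // ctx.timestamp() is null if no event-time timestamp was assigned upstream
        if (ctx.timestamp() != null) {
            lastLagMillis = ctx.timerService().currentProcessingTime() - ctx.timestamp();
        }
        out.collect(value);
    }
}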
Hi,
I am looking for end-to-end latency monitoring of a Flink job. Based on my
study, I have two options:
1. Flink provides a latency tracking feature. However, the documentation
says it cannot show the actual latency of the business logic, as the markers
bypass all operators (a sketch of enabling it follows the link).
https://ci.apache.org/projects/flink
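For option 1, enabling it looks roughly like this (a sketch; the 1000 ms interval is arbitrary):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// emit a latency marker from each source every 1000 ms; the markers bypass
// user functions, so this measures transport latency, not business logic
env.getConfig().setLatencyTrackingInterval(1000L);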
I've debugged it a little bit and I found that it fails in the
InstantiationUtil.readObjectFromConfig method when we execute
byte[] bytes = config.getBytes(key, (byte[])null); This returns null.
The key that it is looking for is "edgesInOrder". In the config map, there
are only two entries though.
For
Hi,
I'm trying to test my RichAsyncFunction implementation with
OneInputStreamOperatorTestHarness based on [1]. I'm using Flink 1.9.2
My test setup is:
this.processFunction = new MyRichAsyncFunction();
this.testHarness = new OneInputStreamOperatorTestHarness<>(
new AsyncWaitOperator<>(
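For completeness, roughly the full setup (a sketch from memory, assuming the Flink 1.9 AsyncWaitOperator constructor; the timeout, capacity, and output mode values are placeholders):

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.operators.async.AsyncWaitOperator;
import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;

this.processFunction = new MyRichAsyncFunction();
this.testHarness = new OneInputStreamOperatorTestHarness<>(
    new AsyncWaitOperator<>(
        processFunction,
        2000L,                              // async call timeout in ms
        10,                                 // max in-flight requests
        AsyncDataStream.OutputMode.ORDERED));
this.testHarness.open();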
Hi Jark,
Many thanks for creating the issue on Jira and nice summarization :-)
Best,
Dongwon
On Sat, Mar 28, 2020 at 12:37 AM Jark Wu wrote:
> Hi Dongwon,
>
> I saw many requirements on this and I'm a big +1 for this.
> I created https://issues.apache.org/jira/browse/FLINK-16833 to track this
> e
Hi Dongwon,
I saw many requirements on this and I'm a big +1 for this.
I created https://issues.apache.org/jira/browse/FLINK-16833 to track this
effort. Hope this can be done before the 1.11 release.
Best,
Jark
On Fri, 27 Mar 2020 at 22:22, Dongwon Kim wrote:
> Hi, I tried flink-jdbc [1] to read dat
Could you also check in the jobmanager logs whether the Flink akka endpoint is
also bound to and listening at the hostname "prod-bigd-dn11"? Otherwise, all
the packets from the taskmanager will be discarded.
Best,
Yang
On Fri, Mar 27, 2020 at 3:35 PM Vitaliy Semochkin wrote:
> Hello Zhu,
>
> The host can be resolved and th
Hi, I tried flink-jdbc [1] to read data from Druid because Druid implements
Calcite Avatica [2], but the connection string, jdbc:avatica:remote:url=
http://BROKER:8082/druid/v2/sql/avatica/, is not supported by any of the
JDBCDialects [3].
I implemented a custom JDBCDialect [4], a custom StreamTableSourceFa
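The custom dialect boils down to something like this (a simplified sketch against the flink-jdbc 1.10 JDBCDialect interface; double-check the Avatica driver class name for your setup):

import java.util.Optional;
import org.apache.flink.api.java.io.jdbc.dialect.JDBCDialect;

public class AvaticaDialect implements JDBCDialect {

    @Override
    public boolean canHandle(String url) {
        // accept the Druid/Avatica connection string prefix
        return url.startsWith("jdbc:avatica:remote:");
    }

    @Override
    public Optional<String> defaultDriverName() {
        // assumption: the Avatica remote (thin) JDBC driver
        return Optional.of("org.apache.calcite.avatica.remote.Driver");
    }

    @Override
    public String quoteIdentifier(String identifier) {
        return "\"" + identifier + "\"";
    }
}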
Hi KristoffSC,
the short answer is: you probably have differently configured loggers. They
log in a different format or at a different level.
The longer answer: all source connectors currently use the legacy source
thread. That will only change with FLIP-27 [1] being widely adopted. It was
originally planned to
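To illustrate: everything in a classic SourceFunction's run() executes on that thread, e.g. this trivial sketch:

import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class CountingSource implements SourceFunction<Long> {
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long i = 0;
        while (running) {
            // this loop is what runs in the "Legacy Source Thread"
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(i++);
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}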
Hi all,
When I run Flink from the IDE I can see this prefix in the logs:
"Legacy Source Thread"
When running the same job as a JobCluster on Docker, this prefix is not present.
What does this prefix mean?
Btw, I'm using [1] as ActiveMQ connector.
Thanks.
[1]
https://github.com/apache/bahir-flink/tree/master/flink-c
Hi, there.
In release-1.10, the memory setup of task managers has changed a lot.
I would like to share here a third-party tool to simulate and get
the calculated result of Flink's memory configuration.
Although there is already a detailed setup guide [1] and migration
guide [2] officially, the
I want to do a somewhat different, hacky PoC:
* I will write a sink that caches the results in "JVM global" memory. Then
I will write a source that reads this cache (see the sketch below).
* I will launch one job that reads from the Kafka source, shuffles the data to
the desired partitioning, and then sinks to that cache.
* Then
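A minimal sketch of the cache piece (names are mine; this only works when the sink and source tasks share one JVM, e.g. a MiniCluster or a single TaskManager):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class JvmGlobalCache {
    // static field => one instance per JVM, shared across all tasks in it
    public static final BlockingQueue<String> QUEUE = new LinkedBlockingQueue<>();

    public static class CacheSink implements SinkFunction<String> {
        @Override
        public void invoke(String value, Context context) {
            QUEUE.offer(value);
        }
    }

    public static class CacheSource implements SourceFunction<String> {
        private volatile boolean running = true;

        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            while (running) {
                // poll with a timeout so cancel() is observed promptly
                String value = QUEUE.poll(1, TimeUnit.SECONDS);
                if (value != null) {
                    ctx.collect(value);
                }
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    }
}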
Hi Apoorv,
Sorry for the late reply; I have been quite busy with backlog items the past
few days.
On Fri, Mar 20, 2020 at 4:37 PM Apoorv Upadhyay <
apoorv.upadh...@razorpay.com> wrote:
> Thanks Gordon for the suggestion,
>
> I am going by this repo :
> https://github.com/mrooding/flink-avro-state-seri
Hello Zhu,
The host can be resolved and there are no firewalls in the cluster, so all
ports are opened.
Regards,
Vitaliy
On Fri, Mar 27, 2020 at 8:32 AM Zhu Zhu wrote:
> Hi Vitaliy,
>
> >> *Cannot serve slot request, no ResourceManager connected*
> This is not a problem, just that the JM need
If you are using both the Hadoop S3 and Presto S3 filesystems, you should
use s3p:// and s3a:// to distinguish between the two.
Presto is recommended for checkpointing because the Hadoop implementation
has very high latency when creating files, and because it hits request rate
limits very quickly.
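For example (a sketch; the bucket and paths are placeholders), checkpoints can go through the Presto implementation while regular file output goes through Hadoop's:

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// checkpoints via flink-s3-fs-presto
env.setStateBackend(new FsStateBackend("s3p://my-bucket/checkpoints"));
// file sinks such as StreamingFileSink need the Hadoop filesystem, e.g.
// StreamingFileSink.forRowFormat(new Path("s3a://my-bucket/output"), ...)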