Hi.
1) The URL pattern example:
file:///root/flink-test/lib/dependency.jar
2) I’m trying to simulate the same issue on a separate flink installation
with a sample job so that I can share the logs. (However, so far I've been
unable to reproduce it, though on our production setup it can b
Hi Hynek,
In execution, matrices.first(6).print() is different from
matrices.print(): it adds a reduce operator to the job which only collects
the first 6 records from the source.
So if your InputFormat can generate more than 6 (which can be
unexpected though), and the trailing da
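A minimal sketch of the difference (plain DataSet API; the elements here only stand in for whatever records your InputFormat produces):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class FirstVsPrint {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // stand-in for the DataSet produced by the custom InputFormat
        DataSet<Integer> matrices = env.fromElements(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

        // prints every record the source produces
        matrices.print();

        // adds an extra (reduce-style) operator that keeps only the first 6 records
        matrices.first(6).print();
    }
}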
Hi!
Voting on RC2 for Apache Flink 1.9.0 has started:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Apache-Flink-Release-1-9-0-release-candidate-2-td31542.html
Please check this out if you want to verify your applications against this
new Flink release.
Best,
Gordon
--
bq. Yes, we recompiled Flink with rocksdb to have JNI, to enable the
write_buffer_manager after we read that Jira.
I see, then which approach are you using to limit the RocksDB memory? Setting
the write buffer and block cache sizes separately, or with the "cost memory used
in memtable into block cache" [1] fe
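For reference, the "set them separately" option can be expressed through Flink's RocksDB OptionsFactory (the pre-1.10 interface); the sizes below are placeholders, not recommendations:

import org.apache.flink.contrib.streaming.state.OptionsFactory;

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class BoundedRocksDbOptions implements OptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        // DB-level options are left untouched in this sketch
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        // cap the memtables (write buffers) per column family
        currentOptions
                .setWriteBufferSize(64 * 1024 * 1024)     // 64 MB per memtable (placeholder)
                .setMaxWriteBufferNumber(3);

        // cap the block cache used for reads
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(256 * 1024 * 1024); // 256 MB (placeholder)
        return currentOptions.setTableFormatConfig(tableConfig);
    }
}

It would be registered on the backend with something like rocksDbBackend.setOptions(new BoundedRocksDbOptions()).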
Hi,
I want to connect my Flink streaming job to Hive.
At the moment, what is the best way to connect to Hive?
Some features seem to be in development.
Some really cool features have been described here:
https://fr.slideshare.net/BowenLi9/integrating-flink-with-hive-xuefu-zhang-and-bowen-li-seatt
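If it helps, the 1.9 release being voted on elsewhere in this digest ships a (beta) HiveCatalog; a rough sketch of wiring it up, with the catalog name, database, conf directory, and Hive version as placeholders:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveCatalogSketch {
    public static void main(String[] args) {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        TableEnvironment tableEnv = TableEnvironment.create(settings);

        // placeholders: adjust to your own metastore setup
        HiveCatalog hive = new HiveCatalog(
                "myhive",          // catalog name
                "default",         // default database
                "/opt/hive-conf",  // directory containing hive-site.xml
                "2.3.4");          // Hive version

        tableEnv.registerCatalog("myhive", hive);
        tableEnv.useCatalog("myhive");

        // Hive tables are now visible to SQL queries in this catalog
        tableEnv.sqlQuery("SELECT * FROM some_hive_table");
    }
}

This needs the flink-connector-hive and Hive client dependencies on the classpath, and the integration in 1.9 is still labeled beta.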
Hi,
First I was trying to mimic the implementation of the IntervalJoinOperator
which uses a KeyedStream > connect > keyBy > transform. It was a little
confusing to me because I was not sure if I had to use a window
transformation after transform() or not.
Then I found this answer (https://stac
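For what it's worth, a window is not required after the connect/keyBy step; here is a sketch of the same low-level pattern with a CoProcessFunction, with both inputs modeled as (key, timestamp) tuples purely for illustration:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
import org.apache.flink.util.Collector;

public class ManualTwoStreamJoin {

    public static DataStream<String> join(DataStream<Tuple2<String, Long>> left,
                                          DataStream<Tuple2<String, Long>> right) {
        KeySelector<Tuple2<String, Long>, String> byKey =
                new KeySelector<Tuple2<String, Long>, String>() {
                    @Override
                    public String getKey(Tuple2<String, Long> value) {
                        return value.f0;
                    }
                };

        return left
                .connect(right)
                .keyBy(byKey, byKey)
                .process(new CoProcessFunction<Tuple2<String, Long>, Tuple2<String, Long>, String>() {

                    private transient ValueState<Long> leftSeen;

                    @Override
                    public void open(Configuration parameters) {
                        leftSeen = getRuntimeContext().getState(
                                new ValueStateDescriptor<>("left-seen", Types.LONG));
                    }

                    @Override
                    public void processElement1(Tuple2<String, Long> value, Context ctx,
                                                Collector<String> out) throws Exception {
                        // buffer the left side; event-time timers via ctx.timerService()
                        // can bound how long it is kept, as IntervalJoinOperator does internally
                        leftSeen.update(value.f1);
                    }

                    @Override
                    public void processElement2(Tuple2<String, Long> value, Context ctx,
                                                Collector<String> out) throws Exception {
                        Long ts = leftSeen.value();
                        if (ts != null) {
                            out.collect(value.f0 + " matched, left timestamp=" + ts);
                        }
                    }
                });
    }
}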
Hi Yu,
Yes, we recompiled Flink with rocksdb to have JNI, to enable the
write_buffer_manager after we read that Jira.
One quick question: I noticed that our disk usage (SSD) for RocksDB always
stays around 2% (or 2.2 GB), which was not the case before we enabled the
RocksDB state backend. So wondering
Hi,
I'm trying to implement a custom FileInputFormat (to read the MNIST
Dataset).
The creation of the Flink DataSet (DataSet matrices) seems to be OK,
but when I try to print it using either
matrices.print();
or
matrices.collect();
it finishes with exit code -17.
(Before, I compiled using Java 11 and
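In case it helps narrow things down, this is roughly the shape a custom FileInputFormat takes (a sketch only; the MNIST-specific decoding and error handling are simplified, and the 16-byte header skip assumes the IDX image file):

import java.io.IOException;

import org.apache.flink.api.common.io.FileInputFormat;
import org.apache.flink.core.fs.FileInputSplit;

public class MnistImageInputFormat extends FileInputFormat<double[]> {

    private boolean end;

    public MnistImageInputFormat() {
        // IDX files carry one global header, so don't split them across parallel readers
        unsplittable = true;
    }

    @Override
    public void open(FileInputSplit split) throws IOException {
        super.open(split);   // positions the inherited 'stream' at the split
        stream.skip(16);     // skip the IDX image header (magic, count, rows, cols)
        end = false;
    }

    @Override
    public boolean reachedEnd() {
        return end;
    }

    @Override
    public double[] nextRecord(double[] reuse) throws IOException {
        byte[] pixels = new byte[28 * 28];
        int read = stream.read(pixels);
        if (read < pixels.length) {   // simplified: a full implementation would loop until filled
            end = true;
            return null;
        }
        double[] image = new double[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            image[i] = (pixels[i] & 0xFF) / 255.0;
        }
        return image;
    }
}

As an aside, Flink releases of that era were built and tested against Java 8, so ruling out Java 11 may be worth a try.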
Great, thank you!
On Thu., Aug 8, 2019 at 02:15, Jacky Du wrote:
> Thanks Fabian, I created a Jira ticket with a code sample.
>
>
> https://issues.apache.org/jira/projects/FLINK/issues/FLINK-13603?filter=allopenissues
>
> I think if the root cause I found is correct, fixing this issue could
Hi guys,
I have the opposite issue :-) I would like to unit test negative behavior
- that the event-time timer is not fired when no further event arrives
(which would advance the watermarks).
But due to the StreamSource firing a Long.MAX_VALUE watermark after the enclosed
finite FromElementsFunction's run method d
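A possible way around that final Long.MAX_VALUE watermark is to drive the operator with the test harness from the flink-streaming-java test-jar instead of running a StreamSource, so the watermark only advances as far as the test says. A sketch (the timer function here is made up for illustration):

import static org.junit.Assert.assertTrue;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.streaming.api.operators.KeyedProcessOperator;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
import org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness;
import org.apache.flink.util.Collector;
import org.junit.Test;

public class NoTimerFiredTest {

    // illustrative function: registers an event-time timer at t=100
    private static class LateTimerFunction extends KeyedProcessFunction<String, String, String> {
        @Override
        public void processElement(String value, Context ctx, Collector<String> out) {
            ctx.timerService().registerEventTimeTimer(100L);
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
            out.collect("timer fired at " + timestamp);
        }
    }

    @Test
    public void timerDoesNotFireWithoutWatermarkProgress() throws Exception {
        KeyedOneInputStreamOperatorTestHarness<String, String, String> harness =
                new KeyedOneInputStreamOperatorTestHarness<>(
                        new KeyedProcessOperator<>(new LateTimerFunction()),
                        value -> value,
                        Types.STRING);
        harness.open();

        // one element, then advance the watermark only slightly;
        // no Long.MAX_VALUE watermark is emitted automatically here
        harness.processElement(new StreamRecord<>("a", 1L));
        harness.processWatermark(new Watermark(5L));

        // the timer registered at t=100 must not have fired
        assertTrue(harness.extractOutputStreamRecords().isEmpty());
        harness.close();
    }
}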
Hi Biao,
Thanks for your reply, I will give it a try (1.8+)!
Best,
Victor
From: Biao Liu
Date: Friday, August 9, 2019 at 5:45 PM
To: Victor Wong
Cc: "user@flink.apache.org"
Subject: Re: **RegistrationTimeoutException** after TaskExecutor successfully
registered at resource manager
Hi Victor
Hi Victor,
There have been several relevant issues reported [1] [2] [3]. I guess you
have encountered the same problem.
This issue has been fixed in 1.8 [4]. Could you try it on a later version
(1.8+)?
1. https://issues.apache.org/jira/browse/FLINK-11137
2. https://issues.apache.org/jira/browse/
Hi pengchengling,
Does this issue happen before you submit any job to the cluster or
after some jobs are terminated?
If it's the latter case, did you wait for a while to see if the
unavailable slots became available again?
Thanks,
Zhu Zhu
pengcheng...@bonc.com.cn wrote on Fri, Aug 9, 2019 at 4:55 PM:
>
Hi,
Could you attach the exception stack trace or the relevant logs?
Best,
tison.
pengcheng...@bonc.com.cn wrote on Fri, Aug 9, 2019 at 4:55 PM:
> Hi,
>
> Why are some slots unavailable?
>
> My cluster mode is standalone, and the high-availability mode is ZooKeeper.
> task.cancellation.timeout: 0
> some slots ar
Hi,
Why are some slots unavailable?
My cluster mode is standalone, and the high-availability mode is ZooKeeper.
task.cancellation.timeout: 0
Some slots are not available when the job is not running.
pengcheng...@bonc.com.cn
Hi Zhenghua,
> The Blink planner supports lazy translation for multiple SQLs, and the common
> nodes will be reused in a single job.
>
This is very helpful, thanks for your clarification.
> The only thing you need to note here is that the unified TableEnvironmentImpl does
> not support conversions between Tab
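To make the "multiple SQLs in a single job" point concrete, a small sketch with the 1.9 unified TableEnvironment and the Blink planner (the source table and sink names are placeholders assumed to be registered elsewhere):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MultiSqlSingleJob {
    public static void main(String[] args) throws Exception {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // both statements are only translated when execute() is called, so the
        // planner can reuse the common scan of source_table within one job
        tEnv.sqlUpdate("INSERT INTO sink_a SELECT id, COUNT(*) FROM source_table GROUP BY id");
        tEnv.sqlUpdate("INSERT INTO sink_b SELECT id, MAX(ts) FROM source_table GROUP BY id");

        tEnv.execute("two-sinks-one-job");
    }
}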
Hi,
I’m using Flink version 1.7.1, and I encountered this exception which was a
little weird from my point of view: the TaskManager successfully registered at
the resource manager, but after 5 minutes (the default value of the
taskmanager.registration.timeout config) it threw a RegistrationTi
Hi Cam,
Which Flink version are you using?
Actually I don't think any existing Flink release can make use of the
write buffer manager natively through configuration alone; it requires some
development effort, such as manually building Flink with a
higher version of RocksDB to have the JN
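For context, the RocksDB-side API that those JNI bindings need to expose looks roughly like the sketch below (sizes are placeholders); wiring it into Flink's state backend options is the part that, as noted above, currently requires the custom build:

import org.rocksdb.Cache;
import org.rocksdb.DBOptions;
import org.rocksdb.LRUCache;
import org.rocksdb.WriteBufferManager;

public class WriteBufferManagerSketch {

    public static DBOptions boundedDbOptions() {
        // one shared cache; memtable allocations are charged against it
        Cache blockCache = new LRUCache(256 * 1024 * 1024);            // 256 MB (placeholder)
        WriteBufferManager writeBufferManager =
                new WriteBufferManager(128 * 1024 * 1024, blockCache); // 128 MB of write buffers (placeholder)

        return new DBOptions().setWriteBufferManager(writeBufferManager);
    }
}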