Thank you, Ana and Yang!
On Tue, Nov 19, 2019, 9:29 PM Yang Wang wrote:
> Hi Pankaj,
>
> First, you need to prepare a hadoop environment separately, including hdfs
> and Yarn. If you are familiar with hadoop, you could download the binary[1]
> and start the cluster on your nodes manually. Otherwise, some tools may help
> you to deploy a hadoop cluster, e.g. ambari[2] or cloudera manager.
We are using flink 1.6.2. For the filesystem state backend, we want to monitor
the state size in memory. Once the state size grows too large, we want to be
notified so that we can take measures such as rescaling the job; otherwise the
job may fail because of memory pressure.
We have tried to get the memory usage for the jvm
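One way to watch heap usage from the outside is Flink's REST metrics endpoint.
A minimal sketch, assuming the default port 8081 (the host and taskmanager id
below are placeholders):

    # poll per-taskmanager JVM heap metrics; with the filesystem backend,
    # heap usage is a rough proxy for state size
    curl "http://<jobmanager-host>:8081/taskmanagers/<tm-id>/metrics?get=Status.JVM.Memory.Heap.Used,Status.JVM.Memory.Heap.Max"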
Hi Yang Wang,
Thanks for your reply, I may have set up an HA cluster successfully. The reason
I couldn't set it up before may be a bug related to s3 in flink; after changing
to hdfs, I can run it successfully.
But after about one day of running, the job-manager crashes and can't
recover automatically, I must a
Thanks for your participation!
@Yang: Great to hear. I'd like to know whether or not a remote flink jar
path conflicts with FLINK-13938. IIRC FLINK-13938 auto-excludes the local flink
jar from shipping, which possibly does not work for the remote one.
@Thomas: It is inspiring that a URL becomes the unified rep
Hi Gwenhael,
I'm afraid that we cannot set different cut-offs for the jobmanager and the
taskmanager. You could set the jvm args manually to work around this.
For example, 'env.java.opts.jobmanager=-Xms3072m -Xmx3072m'.
In most jvm implementations, the rightmost -Xms/-Xmx will take effect, so I
think it should work.
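A minimal flink-conf.yaml sketch of that workaround (the heap sizes here are
assumptions; pick values that fit your containers):

    # pin the jobmanager heap explicitly, independent of the shared cut-off
    env.java.opts.jobmanager: -Xms3072m -Xmx3072m
    # taskmanagers can keep a smaller heap, leaving room for rocksdb off-heap
    env.java.opts.taskmanager: -Xms2048m -Xmx2048m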
Hi Pankaj,
First, you need to prepare a hadoop environment separately, including hdfs
and Yarn. If you are familiar with hadoop, you could download the binary[1]
and start the cluster on your nodes manually. Otherwise, some tools may help
you to deploy a hadoop cluster, e.g. ambari[2] or cloudera manager.
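For a single-node test setup, the manual route typically looks roughly like
this (the hadoop version and paths are assumptions):

    # download and unpack the hadoop binary, then bring up hdfs and yarn
    tar xzf hadoop-2.8.5.tar.gz && cd hadoop-2.8.5
    bin/hdfs namenode -format      # one-time hdfs initialization
    sbin/start-dfs.sh              # starts the namenode and datanode(s)
    sbin/start-yarn.sh             # starts the resourcemanager and nodemanager(s)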
Hi,
I was able to run Flink on YARN by installing YARN and Flink separately.
Thank you.
Ana
On Wed, Nov 20, 2019 at 10:42 AM Pankaj Chand wrote:
> Hello,
>
> I want to run Flink on YARN upon a cluster of nodes. From the
> documentation, I was not able to fully understand how to go about it. Some
> of the archived answers are a bit old and had pending JIRA issues, so I
> thought I would ask.
Hello,
I want to run Flink on YARN upon a cluster of nodes. From the
documentation, I was not able to fully understand how to go about it. Some
of the archived answers are a bit old and had pending JIRA issues, so I
thought I would ask.
Am I first supposed to install YARN separately, and then download Flink?
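For what it's worth, once a YARN cluster is up (installed separately, as Ana's
reply above confirms), running Flink on it usually looks roughly like this;
the memory sizes and example jar below are assumptions:

    # point Flink at the hadoop/yarn configuration
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    # start a long-running Flink session on YARN...
    ./bin/yarn-session.sh -jm 1024m -tm 4096m
    # ...or submit a single job as its own YARN application
    ./bin/flink run -m yarn-cluster ./examples/streaming/WordCount.jar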
Hi,
We're in the middle of testing the upgrade of our data processing flows from
Flink 1.6.4 to 1.9.1. We're seeing that flows which were running just fine on
1.6.4 now fail on 1.9.1 with the same application resources and input data
size. It seems that there have been some changes around how t
Great work, glad to see this finally happening!
On Tue, Nov 19, 2019 at 6:26 AM Robert Metzger wrote:
> Thanks.
>
> I added a ticket for this nice idea:
> https://github.com/ververica/flink-ecosystem/issues/84
>
> On Tue, Nov 19, 2019 at 11:29 AM orips wrote:
>
>> This is great.
>>
>> Can we have an RSS feed for this?
Would that be a feature specific to Yarn? (and maybe standalone sessions)
For containerized setups, an init container seems like a nice way to solve
this. It is also more flexible when it comes to supporting authentication
mechanisms for the target storage system, etc.
On Tue, Nov 19, 2019 at 5:29 PM
I have implemented this feature in our env, using a docker 'Init Container' to fetch the jar file from a URL. It seems a good idea.
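A minimal sketch of that pattern as a Kubernetes pod-spec fragment (the image,
URL, and mount path are assumptions):

    # an init container downloads the user jar into a shared volume
    # before the Flink container starts
    initContainers:
      - name: fetch-job-jar
        image: busybox
        command: ["wget", "-O", "/opt/flink/usrlib/job.jar",
                  "https://example.com/job.jar"]
        volumeMounts:
          - name: usrlib
            mountPath: /opt/flink/usrlib
    volumes:
      - name: usrlib
        emptyDir: {}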
Hello,
In a setup where we allocate most of the memory to rocksdb (off-heap) we have
an important cutoff.
Our issue is that the same cutoff applies to both task and job managers: the
heap size of the job manager then becomes too low.
Is there a way to apply different cutoffs to job and task managers?
Thanks.
I added a ticket for this nice idea:
https://github.com/ververica/flink-ecosystem/issues/84
On Tue, Nov 19, 2019 at 11:29 AM orips wrote:
> This is great.
>
> Can we have an RSS feed for this?
>
> Thanks
>
>
>
Hi,
I asked the same question on StackOverflow but received no response so
far, so I thought I could also post it here. The original question can
be found at: https://stackoverflow.com/q/58922246/477168 Feel free to
(also) reply there.
For convenience, the original text is below.
---
So the co
This is great.
Can we have an RSS feed for this?
Thanks
Thanks for your message.
It would be great if you could provide code to reproduce the issue (it does
not have to be your exact code, a simplified example is also fine).
Maybe your program is not directly causing the issue, but it seems that the
code generator is producing something we cannot compile.
Hi Amran,
thanks a lot for your message.
I think this is a valid feature request. I've created a JIRA ticket to
track it: https://issues.apache.org/jira/browse/FLINK-14856 (this does not
mean this gets addressed immediately. However, there are currently quite
some improvements to the configuration
Hi Robert,
Just added it under the "Tools" category[1].
[1]: https://flink-packages.org/packages/kylin-flink-cube-engine
Best,
Vino
On Tue, Nov 19, 2019 at 4:33 PM Robert Metzger wrote:
> Thanks.
> You can add Kylin whenever you think it is ready.
>
> On Tue, Nov 19, 2019 at 9:07 AM vino yang wrote:
>
> >
Hi,
I need help with handling errors from the elasticsearch sink, as below:
2019-11-19 08:09:09,043 ERROR
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase -
Failed Elasticsearch item request:
[flink-index-deduplicated/nHWQM0XMSTatRri7zw_s_Q][[flink-index-deduplicated][1
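If the goal is to keep the job running when individual index requests fail, a
custom failure handler on the sink is one option. A minimal sketch against the
flink-connector-elasticsearch API, where esSinkBuilder is assumed to be your
ElasticsearchSink.Builder; the retry/drop policy below is an assumption,
adjust it to your needs:

    import org.apache.flink.streaming.connectors.elasticsearch.ActionRequestFailureHandler;
    import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
    import org.apache.flink.util.ExceptionUtils;
    import org.elasticsearch.action.ActionRequest;
    import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;

    // decide per-failure whether to retry, drop, or fail the sink
    esSinkBuilder.setFailureHandler(new ActionRequestFailureHandler() {
        @Override
        public void onFailure(ActionRequest action, Throwable failure,
                              int restStatusCode, RequestIndexer indexer) throws Throwable {
            if (ExceptionUtils.findThrowable(failure,
                    EsRejectedExecutionException.class).isPresent()) {
                indexer.add(action);   // queue was full: re-enqueue and retry
            } else if (restStatusCode == 400) {
                // malformed request: drop it instead of failing the whole job
            } else {
                throw failure;         // anything else: fail the sink
            }
        }
    });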
Thanks.
You can add Kylin whenever you think it is ready.
On Tue, Nov 19, 2019 at 9:07 AM vino yang wrote:
> Thanks Robert. Great job! The web site looks great.
>
> In the future, we can also add my Kylin Flink cube engine[1] to the
> ecosystem projects list.
>
> [1]: https://github.com/apache/k
Thanks Robert. Great job! The web site looks great.
In the future, we can also add my Kylin Flink cube engine[1] to the
ecosystem projects list.
[1]: https://github.com/apache/kylin/tree/engine-flink
Best,
Vino
On Tue, Nov 19, 2019 at 12:09 AM Oytun Tez wrote:
> Congratulations! This is exciting.
>
>