>> Hi, Martin:
>> I think a better solution would be to set the number of cores of each
>> container equal to that of a physical server, if this Mesos cluster is
>> dedicated to your Flink cluster.
>>
>> On Mon, Sep 17, 2018 at 5:28 AM Martin Eden
>>
d use them to
> control the task placement.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#mesos-constraints-hard-hostattribute
>
> Cheers,
> Till
>
> On Fri, Sep 14, 2018 at 3:08 PM Martin Eden
> wrote:
>
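For illustration, a flink-conf.yaml fragment using the hard host-attribute constraint from [1] might look like this; per the linked page the value is a comma-separated list of key:value pairs, and the attribute name/value here are placeholders for whatever your Mesos agents advertise:

```yaml
# Only accept offers from agents whose "rack" attribute equals "rack-1"
# ("rack" / "rack-1" are example values, not defaults).
mesos.constraints.hard.hostattribute: rack:rack-1
```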
Thanks Vino!
On Fri, Sep 14, 2018 at 3:37 AM vino yang wrote:
> Hi Martin,
>
> Till has done most of the work on Flink's Mesos integration. Pinging Till for you.
>
> Thanks, vino.
>
> Martin Eden wrote on Wed, Sep 12, 2018 at 11:21 PM:
>
Hi all,
We're using Flink 1.3.2 with DCOS / Mesos.
We have a 3 node cluster and are running the Flink DCOS package (Flink
Mesos framework) configured with 3 Task Managers.
Our goal is to run each of them on separate hosts for better load balancing
but it seems the task managers end up running on
rievalService member in Flink
>>> 1.5.
>>>
>>>
>>>
>>> I wonder if I can use RestClusterClient@v1.5 on my client side to
>>> retrieve the leader JM of a Flink v1.4 cluster.
>>>
>>>
>>>
>>> Thanks
>>>
Hi,
This is actually very relevant to us as well.
We want to deploy Flink 1.3.2 on a 3 node DCOS cluster. In the case of
Mesos/DCOS, Flink HA runs only one JobManager, which gets restarted on
another node by Marathon in case of failure and reloads its state from
ZooKeeper.
Yuan I am guessing you
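For reference, the ZooKeeper-backed HA setup described above is driven by a handful of flink-conf.yaml keys; a sketch with placeholder quorum hosts and paths, to be adapted to your DCOS deployment:

```yaml
# Placeholder hosts/paths; adjust to your ZooKeeper and storage layout.
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.zookeeper.path.root: /flink
high-availability.storageDir: hdfs:///flink/ha/
```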
Hi,
I'm reviving this thread because I am probably hitting a similar issue with
loading a native library. However I am not able to resolve it with the
suggestions here.
I am using Flink 1.3.2 and the Jep library to call Cpython from a
RichFlatMapFunction with a parallelism of 10. I am instantiati
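One workaround that comes up for native-library clashes is to make sure the load happens at most once per classloader (and to ship the jar that does the loading in Flink's lib/ directory, so the parent classloader owns it instead of the per-job one). A minimal once-only guard in plain Java, with the actual Jep/System.loadLibrary call left as an assumption:

```java
// Sketch: ensure a native library is loaded at most once per classloader,
// even when many parallel operator instances call open() concurrently.
public final class NativeLoader {
    private static volatile boolean loaded = false;

    private NativeLoader() {}

    public static synchronized void ensureLoaded(Runnable loadOnce) {
        if (!loaded) {
            loadOnce.run();   // e.g. () -> System.loadLibrary("jep")
            loaded = true;
        }
    }
}
```

Each RichFlatMapFunction#open() would then call NativeLoader.ensureLoaded(...) instead of loading the library directly.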
The above applies to Mesos/DCOS as well. So it would be very useful if
someone could also share insights into automatic job deployment in that setup.
Thanks.
M
On Fri, Dec 22, 2017 at 6:56 PM, Maximilian Bode <
maximilian.b...@tngtech.com> wrote:
> Hi everyone,
>
> We are beginning to run Flink on K8s
Hi,
Our team is looking at replacing Redis with Flink's own queryable state
mechanism. However our clients are using python.
1. Is there a Python integration with the Flink queryable state mechanism?
I cannot seem to find one.
2. If not, is it on the roadmap?
3. Our current solution is
Hi,
Not merged in yet, but here is an example PR that mocks metrics and
checks that they are properly updated:
https://github.com/apache/flink/pull/4725
On Fri, Oct 13, 2017 at 1:49 PM, Aljoscha Krettek
wrote:
> I think we could add this functionality to the (operator) test harnesses.
> I.e. a
Hi all,
Just a quick one.
I have a task that looks like this (as printed in the logs):
17-09-29 0703510695 INFO TaskManager.info: Received task Co-Flat Map ->
Process -> (Sink: sink1, Sink: sink2, Sink: sink3) (2/2)
After looking a bit at the code of the streaming task I suppose the sink
operat
Any follow-up on this? Jira? PR?
On Wed, Sep 13, 2017 at 11:30 AM, Tzu-Li (Gordon) Tai
wrote:
> Hi Aitozi,
>
> Yes, I think we haven't really pinpointed the actual cause of the
> problem, but if you have a fix for it and can provide a PR, we can
> definitely look at it! That would be helpf
any overhead to the checkpoint.
>
> Best,
> Stefan
>
> > Am 08.09.2017 um 10:20 schrieb Martin Eden :
> >
> > Hi all,
> >
> > I have a Flink 1.3.1 job with a source that implements
> CheckpointingFunction.
> >
> > As I understand it, the notifyC
Hi all,
I have a Flink 1.3.1 job with a source that implements
CheckpointingFunction.
As I understand it, the notifyCheckpointComplete callback is called when
all the downstream operators in the DAG successfully finished their
checkpoints.
Since I am doing some work in this method, I would like
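For anyone following along, the usual shape of "do work only after the checkpoint is confirmed" is to queue the work during normal processing and drain the queue in the completion callback. Stripped of Flink's CheckpointListener interface, a plain-Java sketch:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: defer side effects until a checkpoint is confirmed complete.
// In a Flink job this logic would live inside an operator implementing
// CheckpointListener; here it is plain Java for illustration.
public class DeferredWork {
    private final Queue<Runnable> pending = new ArrayDeque<>();

    // Called during normal processing / while snapshotting state.
    public void defer(Runnable sideEffect) {
        pending.add(sideEffect);
    }

    // Called from notifyCheckpointComplete(long checkpointId).
    public void onCheckpointComplete() {
        Runnable r;
        while ((r = pending.poll()) != null) {
            r.run();
        }
    }
}
```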
sily.
> Otherwise, you would have to route all data and all lambdas to the same
> node to guarantee that no lambda loses its arguments' state.
>
> Best,
> Tony Wei
>
> 2017-09-07 14:31 GMT+08:00 Martin Eden :
>
>> Hi Tony,
>>
>> Yes exactly I
o go. Can this assumption be acceptable in your case?
> What do you think?
>
> Best,
> Tony Wei
>
> 2017-09-06 22:41 GMT+08:00 Martin Eden :
>
>> Hi Aljoscha, Tony,
>>
>> We actually do not need all the keys to be on all nodes where lambdas
>> are.
Do we now re-route all previous
>> lambdas and inputs to different machines? They all have to go to the same
>> machine, but which one? I'm currently thinking that there would need to be
>> some component that does the routing, but this has to be global, so it's
>
ght.
> But the first step also needs to be fulfilled by SideInput. I'm not sure
> how to achieve this in the current release.
>
> Best,
> Tony Wei
>
> Martin Eden wrote on Thu, Aug 31, 2017 at 11:32 PM:
>
>> Hi Aljoscha, Tony,
>>
>> Aljoscha:
>> Yes it's t
What do you think? Could this solution solve your problem?
>
> Best,
> Tony Wei
>
> 2017-08-31 20:43 GMT+08:00 Martin Eden :
>
>> Thanks for your reply Tony,
>>
>> Yes we are in the latter case, where the functions/lambdas come in the
>> control stream. Thi
function in the runtime.
>
> Best,
> Tony Wei
>
> 2017-08-31 18:33 GMT+08:00 Martin Eden :
>
>> Thanks for your reply Tony.
>>
>> So there are actually 2 problems to solve:
>>
>> 1. All control stream msgs need to be broadcasted to all tasks.
>>
>
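As a toy illustration of requirement 1 above, independent of Flink's actual partitioning APIs: a router that fans control messages out to every parallel subtask while sending data messages to exactly one subtask chosen by key hash might look like this (all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Toy routing sketch: control messages are broadcast to all subtasks,
// data messages go to one subtask selected by key hash.
public class Router {
    private final int parallelism;

    public Router(int parallelism) {
        this.parallelism = parallelism;
    }

    public List<Integer> targets(boolean isControl, String key) {
        List<Integer> out = new ArrayList<>();
        if (isControl) {
            for (int i = 0; i < parallelism; i++) {
                out.add(i); // broadcast to every subtask
            }
        } else {
            // keyed routing: stable subtask per key
            out.add(Math.floorMod(key.hashCode(), parallelism));
        }
        return out;
    }
}
```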
t and can be assigned to the
> other stream, then the data in SideInput stream will be broadcasted.
>
> So far, I have no idea if there is any solution to solve this without
> SideInput.
>
> Best,
> Tony Wei
>
> 2017-08-31 16:10 GMT+08:00 Martin Eden :
>
>> Hi all,
Hi all,
I am trying to implement the following using Flink:
I have 2 input message streams:
1. Data Stream:
KEY  VALUE  TIME
.
.
.
C    V6     6
B    V6     6
A    V5     5
A    V4     4
C    V3     3
A    V3     3
B    V3     3
B    V2     2
A    V1
actual time-series databases that specialize in these things.
>
>
> On 28.08.2017 19:08, Martin Eden wrote:
>
Hi all,
Just 3 quick questions both related to Flink metrics, especially around
sinks:
1. In the Flink UI, sources always show 0 input records/bytes and sinks
always show 0 output records/bytes. Why is that?
2. What is the best practice for instrumenting off-the-shelf Flink sinks?
Cu
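On question 2, one low-tech option (a sketch, not an official Flink pattern) is to wrap the sink in a thin decorator that counts records before delegating. In a real job the wrapper would extend the sink's class or a RichSinkFunction and register a Counter with the metric group; here the idea in plain Java:

```java
// Sketch: count records passing through a sink-like consumer
// before delegating to the wrapped implementation.
public class CountingSink<T> implements java.util.function.Consumer<T> {
    private long count = 0;
    private final java.util.function.Consumer<T> delegate;

    public CountingSink(java.util.function.Consumer<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void accept(T record) {
        count++;               // in Flink: counter.inc()
        delegate.accept(record);
    }

    public long getCount() {
        return count;
    }
}
```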
Hey Desheng,
Some options that come to mind:
- Caveman style: stop and restart the job with the new config.
- Poll scenario: you could build your own thread that periodically loads
from the DB into a per-worker accessible cache.
- Push scenario: have a config stream (based off of some queue) which you
co
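To make the poll scenario concrete, a per-worker cache refreshed by a background thread could look like the sketch below (plain Java; the Supplier stands in for the real DB query):

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch of the "poll scenario": a per-worker config cache refreshed
// on a background thread; readers always see the latest snapshot.
public class ConfigCache implements AutoCloseable {
    private volatile Map<String, String> snapshot;
    private final ScheduledExecutorService exec =
            Executors.newSingleThreadScheduledExecutor();

    public ConfigCache(Supplier<Map<String, String>> loader, long periodMs) {
        snapshot = loader.get(); // initial synchronous load
        exec.scheduleAtFixedRate(
                () -> snapshot = loader.get(),
                periodMs, periodMs, TimeUnit.MILLISECONDS);
    }

    public String get(String key) {
        return snapshot.get(key);
    }

    @Override
    public void close() {
        exec.shutdownNow();
    }
}
```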
Hey Vishnu,
For those of us on the list who are not very familiar with Flink and Avro,
can you give a pointer to the docs you are referring to and explain how you
intend to use it? Just so we gain understanding as well.
Thanks,
Martin
On Tue, Jul 18, 2017 at 9:12 PM, Vishnu Viswanath <
vishnu.viswanat..
it will
> come out soon (~1 month) and it will include a lot of new features and
> bug-fixes. Until then, it may change a bit, but the APIs
> that you will be using, will not change.
>
> So why not go straight for the more future-proof way?
>
> Thanks,
> Kostas
>
Hi Robert,
Any updates on the below for the community?
Thanks,
M
On Tue, Apr 25, 2017 at 8:50 AM, Robert Schmidtke
wrote:
> Hi Ufuk, thanks for coming back to me on this.
>
> The records are 100 bytes in size, the benchmark being TeraSort, so that
> should not be an issue. I have played around
Hi everyone,
Are there any examples of how to implement a reliable (zero data loss)
Flink source reading from a system that is not replay-able but supports
acknowledging messages?
Or any pointers on how one can achieve this, and how Flink can help?
I imagine it should involve a write ahead log bu
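One pattern for exactly this: remember which message IDs went into each checkpoint, and acknowledge them to the source system only when that checkpoint completes, so that after a failure the un-acked messages are redelivered. A plain-Java sketch of the bookkeeping (the Consumer stands in for the real ack call; in Flink this would hang off CheckpointedFunction plus CheckpointListener):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch: ack messages upstream only once the checkpoint containing
// them has completed; a failure before that replays un-acked messages.
public class AckOnCheckpoint<ID> {
    private final List<ID> current = new ArrayList<>();
    private final Map<Long, List<ID>> perCheckpoint = new LinkedHashMap<>();
    private final Consumer<List<ID>> acker; // talks to the source system

    public AckOnCheckpoint(Consumer<List<ID>> acker) {
        this.acker = acker;
    }

    // Called for every message emitted downstream.
    public void delivered(ID id) {
        current.add(id);
    }

    // Called from snapshotState(checkpointId, ...).
    public void snapshot(long checkpointId) {
        perCheckpoint.put(checkpointId, new ArrayList<>(current));
        current.clear();
    }

    // Called from notifyCheckpointComplete(checkpointId).
    public void complete(long checkpointId) {
        List<ID> ids = perCheckpoint.remove(checkpointId);
        if (ids != null) {
            acker.accept(ids);
        }
    }
}
```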