Hello, Theodore
Could you please move the development directions and their prioritized
positions from *## Executive summary* to the Google Doc?
Could you please also create a table in the Google Doc representing
the selected directions and the people who would like to drive or
participate in
Thank you, Theodore.
In short, I vote for:
1) Online learning
2) Low-latency prediction serving -> Offline learning with the batch API
In detail:
1) If streaming is Flink's strong side, let's use it and try to support
some online learning or lightweight in-memory learning algorithms. Try
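To make the idea concrete, here is a minimal sketch of the kind of lightweight in-memory online learner meant here: logistic regression updated with one SGD step per incoming record. This is plain illustrative Python, not Flink API code, and all names are hypothetical.

```python
import math

class OnlineLogReg:
    """Lightweight in-memory online learner: logistic regression via SGD,
    updated one record at a time, as a streaming job could do per event."""
    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def partial_fit(self, x, y):
        # one SGD step on a single (x, y) pair; y in {0, 1}
        err = self.predict_proba(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlineLogReg(dim=2)
# toy "stream": label is 1 when the first feature dominates
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0)] * 200
for x, y in stream:
    model.partial_fit(x, y)
```

In a streaming setting, `partial_fit` would be the per-element update inside the job, with the model state kept in operator memory.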
> that document? We need action.
>
> Looking forward to working on this (whatever that might be) ;) Also are there
> any data supporting one direction or the other from a customer perspective?
> It would help to make more informed decisions.
>
> On Thu, Feb 23, 2017 at 2:23 PM, Katheri
ble?
>
> On 2017-02-23 12:34, Katherin Eri wrote:
>
> > I'm not sure that this is feasible, doing all at the same time could mean
> > doing nothing
> > I'm just afraid, that words: we will work on streaming not on batching,
> > we have no committer
I'm not sure that this is feasible; doing everything at the same time could
mean doing nothing.
I'm just afraid that the words "we will work on streaming, not on batching;
we have no committer's time for this" mean that yes, we started work on
FLINK-1730, but nobody will commit this work in the end, as it a
Till, thank you for your response.
But I need several points to clarify:
1) Yes, batch and batch ML is a field full of alternatives, but in my
opinion that doesn't mean we should ignore the problem of not developing
the batch part of Flink. You know: Apache Beam and Apache Mahout both
feel th
Hello guys,
Maybe we will be able to focus our forces on some E2E scenario or
showcase for Flink as an ML-supporting engine as well, and in such a way
actualize the roadmap?
This means: we can take some real-life/production problem, like fraud
detection in some area, and try to solve this problem f
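As a toy illustration of such a fraud-detection show case (a hypothetical sketch, not a proposed design): flag a transaction when it far exceeds the account's recent average, the kind of per-event rule a streaming job could evaluate. All names and thresholds here are made up.

```python
from collections import defaultdict, deque

def flag_fraud(transactions, window=3, factor=3.0):
    """Toy streaming-style fraud rule (illustrative only): flag a
    transaction if it exceeds `factor` times the average of the
    account's last `window` amounts."""
    history = defaultdict(lambda: deque(maxlen=window))
    flagged = []
    for account, amount in transactions:
        past = history[account]
        if past and amount > factor * (sum(past) / len(past)):
            flagged.append((account, amount))
        past.append(amount)
    return flagged

txns = [("a", 10), ("a", 12), ("a", 11), ("a", 100), ("a", 12)]
print(flag_fraud(txns))  # the 100 spike stands out against the ~11 average
```

A real E2E scenario would replace this rule with a learned model, but the per-event scoring shape stays the same.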
ot assigned to anyone, we would like to take this
ticket on (my colleagues could try to implement it).
Further discussion of the topics related to FLINK-1730 I would like to
handle in the appropriate ticket.
Fri, Feb 10, 2017 at 19:57, Katherin Eri:
> I have created the ticket to discuss GPU relat
I have created the ticket to discuss GPU-related questions further:
https://issues.apache.org/jira/browse/FLINK-5782
Fri, Feb 10, 2017 at 18:16, Katherin Eri:
> Thank you, Trevor!
>
> You have shared very valuable points; I will consider them.
>
> So I think, I should create fi
> > https://github.com/apache/mahout/tree/master/viennacl
> >
> > Best,
> > tg
> >
> >
> > Trevor Grant
> > Data Scientist
> > https://github.com/rawkintrevo
> > http://stackexchange.com/users/3002022/rawkintrevo
> > http://trevorgr
) I have no idea about the GPU implementation. The SystemML mailing list
> will probably help you out there.
>
> Best regards,
> Felix
>
> 2017-02-08 14:33 GMT+01:00 Katherin Eri :
>
> > Thank you Felix, for your point, it is quite interesting.
> >
> > I will take a l
moment I am trying to tackle the broadcast
> issue. But caching is still a problem for us.
>
> Best regards,
> Felix
>
> 2017-02-07 16:22 GMT+01:00 Katherin Eri :
>
> > Thank you, Till.
> >
> > 1) Regarding ND4J, I didn’t know about such a pity and critical
>
should be feasible to run DL4J on Flink given that it
also runs on Spark. Have you already looked at it closer?
[1] https://issues.apache.org/jira/browse/FLINK-5131
Cheers,
Till
On Tue, Feb 7, 2017 at 11:47 AM, Katherin Eri
wrote:
> Thank you Theodore, for your reply.
>
> 1)Reg
e engineering burden would be too much
> otherwise.
>
> Regards,
> Theodore
>
> On Mon, Feb 6, 2017 at 11:26 AM, Katherin Eri
> wrote:
>
> > Hello, guys.
> >
> > Theodore, last week I started the review of the PR:
> > https://github.com/apache/flin
ependently *from
integration to DL4J*.
Could you please share your opinion regarding my questions and points?
What do you think about them?
Mon, Feb 6, 2017 at 12:51, Katherin Eri:
> Sorry, guys I need to finish this letter first.
> Full version of it will come shortly.
>
>
Sorry, guys, I need to finish this letter first.
Full version of it will come shortly.
Mon, Feb 6, 2017 at 12:49, Katherin Eri:
> Hello, guys.
> Theodore, last week I started the review of the PR:
> https://github.com/apache/flink/pull/2735 related to *word2Vec for Flink*.
>
>
Hello, guys.
Theodore, last week I started the review of the PR:
https://github.com/apache/flink/pull/2735 related to *word2Vec for Flink*.
During this review I asked myself: why do we need to implement such a
popular algorithm like *word2vec* one more time, when there is already
availab
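For context on what any word2vec implementation has to provide, here is a minimal sketch of skip-gram training-pair generation, the input to the word2vec objective. This is illustrative Python, not the PR's code.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs, the raw input to
    word2vec's skip-gram objective: every word paired with each
    neighbor within `window` positions."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        pairs.extend((center, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs

print(skipgram_pairs(["flink", "streams", "data"], window=1))
# → [('flink', 'streams'), ('streams', 'flink'),
#    ('streams', 'data'), ('data', 'streams')]
```

The interesting engineering work in a distributed setting is not this step but sharing and updating the embedding vectors across workers.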