Hello,
they are collected and are accessible under the Metrics tab.
Currently we can't display them directly since the backing job
data structure only sees tasks (aka chains) and
not the individual operators. There are some ongoing efforts to change
that though, so we may be able
to improve t
I would like to know if there is anyone who has used an IMDG (in-memory data
grid) with Flink. Trying to see whether Apache Ignite, Apache Geode or
Hazelcast might be a good choice.
Thanks,
Nuno
Hi Konstantin,
What you are explaining here makes sense from a technical perspective!
However, functionality wise, I still think it makes sense for the platform
to collect and report, somehow, those metrics per job and task out of the
box.
Thanks,
Mohammad
---
Hi to all,
I'm building a recommendation system for my application.
I have a set of logs (that contain the user info, the hour, the button
that was clicked etc...) that arrive to my Flink job via Kafka, then I save
every log in HDFS (Hadoop), but now I have a problem: I want to apply ML
to (all) my
In this case they are proxied through YARN; you can check the list of
running applications and click on the Flink app master UI link. Then
you have the host and port for the REST calls. Does this work?
On Fri, Mar 31, 2017 at 1:51 AM, Mohammad Kargar wrote:
> How can I access the REST APIs for m
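Not part of the original thread, but the shape of the proxied URL described above can be sketched as follows. The host, port, and application id are placeholders, not values from the thread:

```scala
// Sketch: YARN's web proxy exposes the Flink app master (and thus the REST
// API) under /proxy/<applicationId>/ on the ResourceManager's web port.
object FlinkRestViaYarn {
  def proxyUrl(rmHost: String, rmPort: Int, appId: String, restPath: String): String =
    s"http://$rmHost:$rmPort/proxy/$appId$restPath"

  def main(args: Array[String]): Unit = {
    val url = proxyUrl("rm.example.com", 8088, "application_1490912345678_0001", "/jobs")
    println(url)
    // Fetching the JSON would then be e.g.:
    // val json = scala.io.Source.fromURL(url).mkString
  }
}
```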
Sorry, I just realized our previous conversation on this question was done via
private email and not to user@flink.apache.org
Forwarding the previous content of the conversation back to the mailing list:
On March 30, 2017 at 4:15:46 PM, rimin...@sina.cn (rimin...@sina.cn) wrote:
the job can run
I'm not too familiar with what's happening here, but maybe Klou (cc'd) can help?
On Thu, Mar 30, 2017 at 6:50 PM, Andrea Spina
wrote:
> Dear Flink community,
>
> I started to use Async Functions in Scala, Flink 1.2.0, in order to retrieve
> enriching information from MariaDB database. In order to
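The question is cut off above; as a framework-free sketch of the enrichment pattern it describes, here plain Scala Futures stand in for asyncInvoke, and an invented record type plus an in-memory map stand in for the MariaDB lookup:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical record types for illustration; a real AsyncFunction would
// issue a non-blocking query against MariaDB inside asyncInvoke instead.
case class Click(userId: Int, button: String)
case class EnrichedClick(userId: Int, button: String, userName: String)

object AsyncEnrichSketch {
  private val fakeDb = Map(1 -> "alice", 2 -> "bob")

  // The core of the async lookup: in Flink 1.2 this body would live in
  // AsyncFunction.asyncInvoke and emit via the provided collector.
  def enrich(c: Click): Future[EnrichedClick] =
    Future(EnrichedClick(c.userId, c.button, fakeDb.getOrElse(c.userId, "unknown")))
}
```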
What is the error message/stack trace you get here?
On Thu, Mar 30, 2017 at 9:33 AM, wrote:
> hi,all,
> I run a job; it is:
> -
> val data = env.readTextFile("hdfs:///")//DataSet[(String,Array[String])]
> val dataVec =
> computeDataVect
Hi,
Yes it does – thanks a lot
Knowing that this is the order
time = 2, onTimer(2) -> access state with key t=2-1, get A, B
time = 2, processElement(C) -> put C in state keyed to t=2, registerProcTimer(3)
is useful!
Dr. Radu Tudoran
Senior Research Engineer - Big Data Expert
IT R&D Division
Hi Kostas,
thank you very much for the fast response. This is what I thought, but now
I have clarity :) Thanks a lot.
Best,
Nico
2017-03-31 12:23 GMT+02:00 Kostas Kloudas :
> Hi Nico,
>
> No, you can have as many parallel tasks doing async IO operations as you
> want.
> What the documentation s
Hi Dominik,
I see, thanks for explaining the diagram.
This is expected because the 1 minute window in your case is aligned with the
beginning of every minute.
For example, if the first element comes at 12:10:45, then the element
will be put in the window of 12:10:00 to 12:10:59.
Theref
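This alignment can be made concrete with a small sketch that mirrors the logic of Flink's TimeWindow.getWindowStartWithOffset, assuming a zero offset:

```scala
// A tumbling window's start is the timestamp rounded down to a multiple of
// the window size, which is why windows align with wall-clock minute
// boundaries rather than with the first element's arrival time.
object WindowAlign {
  def windowStart(timestampMs: Long, windowSizeMs: Long): Long =
    timestampMs - (timestampMs % windowSizeMs)
}
```

With a 60 000 ms window, an element whose timestamp is 10 min 45 s into the epoch falls into the window starting at exactly 10 min.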
Hi Nico,
No, you can have as many parallel tasks doing async IO operations as you want.
What the documentation says is that in each one of these tasks, there is one
thread handling
the requests.
Hope this helps,
Kostas
> On Mar 31, 2017, at 12:17 PM, Nico wrote:
>
> Hi,
>
> I have a short
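Kostas' point can be illustrated without Flink at all. In this sketch, a single thread issues the calls one after another (just as asyncInvoke is called sequentially per record), yet the simulated lookups overlap, so total time is roughly one lookup, not the sum of all of them; the thread pool and sleep durations are invented for the demo:

```scala
import java.util.concurrent.Executors
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

object SequentialCallsConcurrentRequests {
  def run(): Long = {
    val pool = Executors.newFixedThreadPool(8)
    implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)
    val t0 = System.nanoTime()
    // The calls below are issued sequentially by this single thread...
    val futures = (1 to 8).map(_ => Future { Thread.sleep(100) })
    // ...but all eight simulated lookups are in flight at the same time,
    // so wall-clock time is ~100 ms rather than ~800 ms.
    Await.result(Future.sequence(futures), 5.seconds)
    pool.shutdown()
    (System.nanoTime() - t0) / 1000000 // elapsed milliseconds
  }
}
```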
Hi,
I have a short question regarding the AsyncFunction of Flink 1.2.0. In the
documentation it says:
"[...] that the AsyncFunction is not called in a multi-threaded fashion.
There exists only one instance of the AsyncFunction and it is called
sequentially for each record in the respective partit
Hi Yassine,
I forgot to say thank you for pointing to the post. It was really useful.
Best,
Nico :)
2017-03-15 20:19 GMT+01:00 Yassine MARZOUGUI :
> Hi Nico,
>
> You might check Fabian's answer on a similar question I posted previousely
> on the mailing list, it can be helpful :
> http://apache-
Hi Radu,
timers are fired in order of their time stamps.
Multiple timers on the same time are deduplicated.
if you have the following logic:
time = 1, processElement(A) -> put A in state keyed to t=1,
registerProcTimer(2)
time = 1, processElement(B) -> put B in state keyed to t=1,
registerProcTi
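A toy model (not Flink code) of the two guarantees described here, namely that timers fire in timestamp order and that duplicate registrations for the same timestamp are deduplicated:

```scala
import scala.collection.mutable

// Registering timestamps into a sorted set captures both properties at once:
// the set removes duplicates, and the sort fixes the firing order.
object TimerModel {
  def fire(registered: Seq[Long]): Seq[Long] = {
    val timers = mutable.SortedSet[Long]()
    registered.foreach(timers += _)
    timers.toSeq // the order in which onTimer would be invoked, once per time
  }
}
```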
yep I meant 1.2 million per second :)
On Fri, Mar 31, 2017 at 11:19 AM, Ted Yu wrote:
> The 1,2million seems to be European notation.
>
> You meant 1.2 million, right ?
>
> On Mar 31, 2017, at 1:19 AM, Kamil Dziublinski <
> kamil.dziublin...@gmail.com> wrote:
>
> Hi,
>
> Thanks for the tip man. I tr
Hi,
Thanks Fabian. But is there also a fixed order that is imposed in their
execution?
I am asking this because it is not enough just to have them executed
atomically. If once you have the processElement() being called and then
onTimer(), and in the next call you have them vice versa, it wou
The 1,2million seems to be European notation.
You meant 1.2 million, right ?
> On Mar 31, 2017, at 1:19 AM, Kamil Dziublinski
> wrote:
>
> Hi,
>
> Thanks for the tip man. I tried playing with this.
> Was changing fetch.message.max.bytes (I still have 0.8 kafka) and also
> socket.receive.buf
Hi Radu,
the processElement() and onTimer() calls are synchronized by a lock, i.e.,
they won't be called at the same time.
Best, Fabian
2017-03-31 9:34 GMT+02:00 Radu Tudoran :
> Hi,
>
>
>
> I would like to use a processFunction to accumulate elements. Therefore in
> the processElement function
Hi,
Thanks for the tip man. I tried playing with this.
Was changing fetch.message.max.bytes (I still have 0.8 kafka) and
also socket.receive.buffer.bytes. With some optimal settings I was able to
get to 1,2 million reads per second. So 50% increase.
But that unfortunately does not increase when I
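For reference, a sketch of how the two settings mentioned above would be passed to the Kafka 0.8 consumer; the values and connection strings are illustrative placeholders, not tuning advice:

```scala
import java.util.Properties

object KafkaTuning {
  def consumerProps(): Properties = {
    val props = new Properties()
    props.setProperty("zookeeper.connect", "zk:2181")            // placeholder
    props.setProperty("group.id", "my-group")                    // placeholder
    props.setProperty("fetch.message.max.bytes", "2097152")      // 2 MiB per fetch
    props.setProperty("socket.receive.buffer.bytes", "1048576")  // 1 MiB socket buffer
    props
    // These would then be handed to the 0.8 connector, e.g.:
    // new FlinkKafkaConsumer08[String]("topic", new SimpleStringSchema(), props)
  }
}
```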
Hi,
I would like to use a processFunction to accumulate elements. Therefore in the
processElement function I will accumulate this element into a state. However, I
would like to emit the output only 1ms later. Therefore I would register a
timer to trigger one second later and read the state and