Hi!
I think this is a solid fix. Adding the classloader that loads Flink's
classes as the parent is a good idea.
Do you want to open a pull request with that?
Greetings,
Stephan
On Thu, Jan 14, 2016 at 2:26 AM, Prez Cannady
wrote:
> Simply passing FlinkUserCodeClassLoader.class.getClassLoader to
Hi,
using .keyBy(0) on a Scala DataStream[Tuple2] where Tuple2 is a Scala Tuple
should work. Look, for example, at the SocketTextStreamWordCount example in
Flink.
Cheers,
Aljoscha
> On 13 Jan 2016, at 18:25, Tzu-Li (Gordon) Tai wrote:
>
> Hi Saiph,
>
> In Flink, the key for keyBy() can be pro
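To illustrate the semantics (plain Java, not the Flink API; names here are illustrative): keyBy(0) logically partitions a stream of 2-tuples by the first field, so all records sharing that field end up in the same group. A minimal sketch of that grouping idea:

```java
import java.util.*;
import java.util.stream.*;

// Conceptual sketch only, not Flink code: group 2-tuples by their
// first field, the way keyBy(0) logically partitions a stream.
public class KeyByZeroSketch {
    public static Map<String, List<Integer>> keyByFirstField(
            List<Map.Entry<String, Integer>> records) {
        return records.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> words = List.of(
                Map.entry("flink", 1), Map.entry("streams", 1), Map.entry("flink", 1));
        // All ("flink", _) records land in the same group.
        System.out.println(keyByFirstField(words));
    }
}
```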
Hi!
That is a super interesting idea. If I understand you correctly, you are
suggesting to try and reconcile the TaskManagers and the JobManager before
restarting the job. That would mean that in case of a master failure, the
jobs may simply continue to run. That would be a nice enhancement, but
Hi,
I'm working on a streaming application using Flink.
Several steps in the processing are stateful (I use custom windows and
stateful operators).
Now if a worker fails during a normal run, the checkpointing system will be
used to recover.
But what if the entire application is stopped (del
Hello,
You are probably looking for this feature:
https://issues.apache.org/jira/browse/FLINK-2976
Best,
Gábor
2016-01-14 11:05 GMT+01:00 Niels Basjes :
> Hi,
>
> I'm working on a streaming application using Flink.
> Several steps in the processing are state-full (I use custom Windows and
> s
Hi,
I can submit the topology without any problems. Your code is fine.
If your program "exits silently" I would actually assume that you
submitted the topology successfully. Can you see the topology in
JobManager WebFrontend? If not, do you see any errors in the log files?
-Matthias
On 01/14/2
Hey Kovas
sorry for the long delay.
> On 10 Jan 2016, at 06:20, kovas boguta wrote:
> 1) How can I prevent ResultPartitions from being released?
>
> In interactive use, RPs should not necessarily be released when there are no
> pending tasks to consume them.
Max Michels did some work along t
Dear Matthias,
Thank you for the quick reply. It failed again; however, I was able to
access its WebFrontend and it gave me some logs. I wanted to share the
logs right away before digging into them.
19:48:18,011 INFO org.apache.flink.runtime.jobmanager.JobManager
- Submitting job 6f5228
Hey Niels,
as Gabor wrote, this feature has been merged to the master branch recently.
The docs are online here:
https://ci.apache.org/projects/flink/flink-docs-master/apis/savepoints.html
Feel free to report back your experience with it if you give it a try.
– Ufuk
> On 14 Jan 2016, at 11:09
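For reference, a minimal sketch of the savepoint workflow from the CLI (job ID and paths are placeholders; check the linked docs for the exact flags in your version):

```shell
# Trigger a savepoint for a running job; the command returns the savepoint path
bin/flink savepoint <jobID>

# Later, resume a (possibly updated) job from that savepoint
bin/flink run -s <savepointPath> your-job.jar
```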
Dear Matthias,
Thank you very much again. I changed the value of
taskmanager.numberOfTaskSlots from 1 to 128 in
flink-0.10.1/conf/flink-conf.yaml, stopped the local cluster, and
started it again. Now it works well. (It is still
running and I can check it clearly on the WebFrontend
Hi,
I'm trying to figure out what graph the execution plan represents when you
call env.getExecutionPlan on the StreamExecutionEnvironment. From my
understanding the StreamGraph is what you call an APIGraph, which will be
used to create the JobGraph.
So is the ExecutionPlan a full representatio
Hi,
the log shows:
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
> Resources available to scheduler: Number of instances=1, total number of
> slots=1, available slots=0
You need to increase your task slots in conf/flink-conf.yaml. Look for
parameter "taskmanager
Just saw your email after my answer...
Have a look here about task slots.
https://ci.apache.org/projects/flink/flink-docs-release-0.10/setup/config.html#configuring-taskmanager-processing-slots
Also have a look here (starting from 18:30):
https://www.youtube.com/watch?v=UEkjRN8jRx4
-Matthias
O
Yes, that is exactly the type of solution I was looking for.
I'll dive into this.
Thanks guys!
Niels
On Thu, Jan 14, 2016 at 11:55 AM, Ufuk Celebi wrote:
> Hey Niels,
>
> as Gabor wrote, this feature has been merged to the master branch recently.
>
> The docs are online here:
> https://ci.apac
Hi,
Thanks Aljoscha, the libraries solved the problem. It worked perfectly!
On 12 January 2016 at 14:03, Aljoscha Krettek
wrote:
> Hi,
> could you please try adding the lucene-core-4.10.4.jar file to your lib
> folder of Flink. (
> https://repo1.maven.org/maven2/org/apache/lucene/lucene-core/4
Hey Alex,
Flink currently has 3 abstractions with a Graph suffix in place for
streaming jobs:
* StreamGraph: Used for representing the logical plan of a streaming job
that is under construction in the API. This one is the only
streaming-specific one in this list.
* JobGraph: Used for representi
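As a rough mental model of the last translation step (plain Java with illustrative names, not Flink's actual classes): when the ExecutionGraph is built, each JobGraph vertex with parallelism p is expanded into p parallel execution vertices.

```java
import java.util.*;

// Illustrative sketch, not Flink code: expand each job vertex into
// parallelism-many execution vertices, as the ExecutionGraph does.
public class GraphExpansionSketch {
    record JobVertex(String name, int parallelism) {}

    static List<String> expandToExecutionVertices(List<JobVertex> jobGraph) {
        List<String> executionVertices = new ArrayList<>();
        for (JobVertex v : jobGraph) {
            for (int i = 0; i < v.parallelism(); i++) {
                // One execution vertex per parallel subtask of the job vertex.
                executionVertices.add(v.name() + "-" + i);
            }
        }
        return executionVertices;
    }

    public static void main(String[] args) {
        List<JobVertex> jobGraph = List.of(
                new JobVertex("source", 2), new JobVertex("map", 3));
        System.out.println(expandToExecutionVertices(jobGraph).size()); // prints 5
    }
}
```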
Hi
Is there a way to map a JSON representation back to an executable flink
job? If there is no such API, what is the best starting point to implement
such a feature?
Best
Christian
2016-01-14 15:18 GMT+01:00 Márton Balassi :
> Hey Alex,
>
> Flink has 3 abstractions having a Graph suffix in pl
Hi Márton,
Thanks for your answer. But now I'm even more confused as it somehow
conflicts with the documentation. ;)
According to the wiki and the stratosphere paper the JobGraph will be
submitted to the JobManager. And the JobManager will then translate it into
the ExecutionGraph.
> In order to
Sure thing. Opened pull request #1506 against master. Also submitted pull
request #1507 against release-0.9.1.rc1 seeing as we’re still pinned to 0.9.1
for our project. Not sure if you guys are interested in hot fixes to previous
releases, but there you have it.
Prez Cannady
p: 617 500 33
@Christian: I don't think that is possible.
There are quite a few things missing in the JSON including:
- User function objects (Flink ships objects not class names)
- Function configuration objects
- Data types
Best, Fabian
2016-01-14 16:02 GMT+01:00 lofifnc :
> Hi Márton,
>
> Thanks for your
On Thu, Jan 14, 2016 at 5:52 AM, Ufuk Celebi wrote:
> Hey Kovas
>
> sorry for the long delay.
>
It was worth the wait! Thanks for the detailed response.
> Ideally, I could force certain ResultPartitions to only be manually
> releasable, so I can consume them over and over.
>
> How would you lik
On Thu, Jan 14, 2016 at 4:00 PM, kovas boguta
wrote:
>
> For a "real" solution, the REPL needs seem related to the WebUI, which I
> haven't studied yet. One would want a fairly detailed view into the running
> execution graph, possibly but not necessarily as an HTTP api.
>
> I haven't studied how
I would also find a 0.10.2 release useful :)
On Wed, Jan 13, 2016 at 1:24 AM, Welly Tambunan wrote:
> Hi Robert,
>
> We are on deadline for demo stage right now before production for
> management so it would be great to have 0.10.2 for stable version within
> this week if possible ?
>
> Cheers
>
Hi,
I have a scala project depending on flink scala_2.11 and am seeing a
compilation error when using sbt.
I'm using flink 1.0-SNAPSHOT and my build was working yesterday. I was
wondering if maybe a recent change to flink could be the cause?
Usually we see flink resolving the scala _2.11 counter
Hi Stephan,
Thank you for the detailed explanation. As you said, my opinion is to keep
tasks running during JobManager failover, even though sending status
updates failed.
For the first reason you mentioned, if I understand correctly, the key issue is
status getting out of sync between the TaskManager and