Hi,
Is there a way I can get the job ID of my application programmatically, or
through an API, in order to trigger a savepoint?
Rahul Raj
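In case it helps, here is a sketch of how I'd do it from the CLI (the job ID
and savepoint directory below are placeholders):

  # list running jobs together with their job IDs
  bin/flink list -r

  # trigger a savepoint for one of them
  bin/flink savepoint <jobID> hdfs:///flink/savepoints

Programmatically, env.execute() returns a JobExecutionResult whose getJobID()
gives the same ID, though in attached mode that only returns after the job
finishes.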
Update:
Following other discussions, I even tried to reduce memory.fraction to 10%,
without success.
How can I set G1 as the garbage collector?
The key is env.java.opts, but what should the value be?
On Fri, Sep 15, 2017 at 10:02 AM, Eron Wright wrote:
> Aljoscha, would it be correct to characterize your idea as a 'pull' source
> rather than the current 'push'? It would be interesting to look at the
> existing connectors to see how hard it would be to reverse their
> orientation. e.g. the source might require a buffer pool.
Hi, sorry for reviving this old conversation.
I have exactly the same problem; can you provide more details about your
solution?
Did you use another garbage collector, such as G1? How can I set it?
I've seen in the configuration guidelines that I have to set the option
env.java.opts, but I don't know which value to use.
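For what it's worth, a minimal sketch of what I'd try in flink-conf.yaml
(assuming the standard JVM flag for enabling G1 is what you want to pass):

  # flink-conf.yaml -- JVM options applied to JobManager and TaskManager JVMs
  env.java.opts: -XX:+UseG1GC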
I think the short-term approach is to place an nginx proxy in front, in
combination with some form of isolation of the underlying endpoint. That
addresses the authentication piece but not fine-grained authorization. Be
aware that the Flink JM is not multi-user due to lack of isolation among
jobs.
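As a rough illustration of that short-term approach, an nginx config along
these lines (host, port, certificate and htpasswd paths are all placeholders;
as said above, this only gives coarse authentication, not per-user
authorization):

  server {
      listen 443 ssl;
      ssl_certificate     /etc/nginx/flink.crt;
      ssl_certificate_key /etc/nginx/flink.key;

      location / {
          # HTTP basic auth in front of the JobManager web endpoint
          auth_basic           "Flink";
          auth_basic_user_file /etc/nginx/htpasswd;
          proxy_pass           http://127.0.0.1:8081;
      }
  }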
Eventually I'll have a class named Element which holds an array of Parameter
objects.
Do I need a TypeInfo, comparator, factory, and serializer for both of them?
Thanks
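If it is any help: assuming Element and Parameter can be modeled as case
classes, a minimal Scala sketch where the Scala API derives the
TypeInformation, so no hand-written comparator/factory/serializer is needed
(the field shapes below are made up):

  object Types {
    import org.apache.flink.api.common.typeinfo.TypeInformation
    import org.apache.flink.api.scala._ // brings the createTypeInformation macro into scope

    // hypothetical shapes for the two classes in question
    case class Parameter(name: String, value: Double)
    case class Element(parameters: Array[Parameter])

    // derived at compile time for case classes
    implicit val elementTypeInfo: TypeInformation[Element] =
      createTypeInformation[Element]
  }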
Aljoscha, would it be correct to characterize your idea as a 'pull' source
rather than the current 'push'? It would be interesting to look at the
existing connectors to see how hard it would be to reverse their
orientation. e.g. the source might require a buffer pool.
On Fri, Sep 15, 2017 at 9:
Late response, but a common reason for disappearing TaskManagers is termination
by the Linux out-of-memory killer; the usual recommendation is to decrease the
memory allotted to Flink.
> On Sep 5, 2017, at 9:09 AM, ShB wrote:
>
> Hi,
>
> I'm running a Flink batch job that reads almost 1 TB of data from
Great, thanks!
The fact that it's actually written in the documentation is really
misleading.
Thank you very much for your response.
Federico D'Ambrosio
On 15 Sep 2017 at 13:26, "Gábor Gévay" wrote:
> Hi Federico,
>
> Sorry, nested field expressions are not supported in these methods at
> the moment. I have created a JIRA issue for this:
> https://issues.apache.org/jira/browse/FLINK-7629
Also relevant for this discussion: several people (including me) have by now
floated the idea of reworking the source interface, taking the responsibility
for stopping/canceling/continuing away from the specific source implementation
and instead giving that power to the system. Currently each so
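Purely to illustrate the "pull" idea (none of these names exist in Flink; this
is only a sketch):

  // the system asks the source for records and decides itself when to
  // stop, cancel, or continue -- the source just answers poll calls
  trait PullSource[T] {
    /** the next record, or None if nothing is available right now */
    def pollNext(): Option[T]

    /** called by the system once it has decided to stop or cancel */
    def close(): Unit
  }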
I also tried placing the only JobManager on the first node and reconfiguring
the cluster with just two TaskManagers. That way I immediately get a
NoResourceAvailable error.
I investigated the semantics of the CPU percentage in top, and I have to
correct my earlier sentence:
when I start the program it peaks at 160% (the max is 200%), but after a
second it drops to 4%.
Hi Aljoscha,
Thanks for your reply. It would be great to have that feature. I will create a
JIRA issue for it and try to solve it.
Best Regards,
Tony Wei
2017-09-15 20:51 GMT+08:00 Aljoscha Krettek :
> Hi,
>
> I think calling getPath() on the URL returned from getResource() loses
> some of the information that is required to resolve the file in the jar.
Hi Navneeth,
If you increase the timeout, does everything work OK?
I suppose from your config that you are running in standalone mode, right?
Any other information about the job (e.g. code and/or the size of the state
being fetched) and the cluster setup that could help us pin down the problem
would be appreciated.
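(If by "the timeout" we mean the Akka ask timeout, the knob would be the
following in flink-conf.yaml; the value is just an example:)

  # default is 10 s; raising it can mask slow state fetches rather than fix them
  akka.ask.timeout: 60 s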
Sorry, I was discussing this with Stephan before posting it here.
Basically, the main wrapper holds an array of a custom object, and because its
size can change throughout the stream and users can customize their sources
dynamically, it is difficult to create a generic POJO or use a tuple for
this purpose.
Hi,
1. Question: When you throw an exception within your user code, Flink will
cancel the execution of all tasks and schedule them again (if you've
configured a restart strategy).
2. Question: You'll need to configure the MiniCluster in HA mode. I believe
that should be possible by passing
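For the first question, a minimal sketch of configuring such a restart
strategy (the attempt count and delay are arbitrary):

  import java.util.concurrent.TimeUnit

  import org.apache.flink.api.common.restartstrategy.RestartStrategies
  import org.apache.flink.api.common.time.Time
  import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

  object RestartSketch {
    def main(args: Array[String]): Unit = {
      val env = StreamExecutionEnvironment.getExecutionEnvironment
      // on a task failure, cancel all tasks and reschedule them,
      // up to 3 attempts, waiting 10 seconds between attempts
      env.setRestartStrategy(
        RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)))
      // ... build and execute the job as usual ...
    }
  }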
I'm using Nabble, and it seems that it has removed the code between the raw tags.
Here it is again:
import com.typesafe.scalalogging.LazyLogging
import org.apache.flink.api.common.functions.ReduceFunction
import org.apache.flink.api.common.state.{ReducingStateDescriptor, ValueStateDescriptor}
import org.a
Hi Nuno,
> Because of this, we have a legacy structure that I showed before.
Could you include more information about this legacy structure you mentioned,
here in this mail thread? I couldn't find any other reference to it. That
could be helpful for understanding your use case.
Sure, but how does the Trigger actually work?
> On 15. Sep 2017, at 12:20, gerardg wrote:
>
> Sure:
>
> The application is configured to use processing time.
>
> Thanks,
>
> Gerard
Hi,
I think calling getPath() on the URL returned from getResource() loses some of
the information that is required to resolve the file in the jar. The solution
should be to allow passing a "File" to ParameterTool.fromPropertiesFile() or to
allow passing an InputStream to ParameterTool.fromPropertiesFile().
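Until such an overload exists, a possible workaround is to copy the resource
out of the jar and hand ParameterTool a real path (a sketch; the resource name
is a placeholder):

  import java.nio.file.{Files, StandardCopyOption}
  import org.apache.flink.api.java.utils.ParameterTool

  // getResourceAsStream works inside a jar where a plain file path does not
  val in = getClass.getResourceAsStream("/job.properties")
  val tmp = Files.createTempFile("job", ".properties")
  Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING)
  val params = ParameterTool.fromPropertiesFile(tmp.toString)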
I think it might be that the computation is too CPU-heavy, which makes the
TaskManager unresponsive to JobManager messages, so the JobManager thinks
that the TaskManager is lost.
@Till, do you have another idea about what could be going on?
> On 15. Sep 2017, at 13:52, AndreaKinn wrote:
The JobManager log is probably more interesting:
2017-09-15 12:47:45,420 WARN  org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-09-15 12:47:45,650 INFO  org.apache.flink.runt
Hi Federico,
Sorry, nested field expressions are not supported in these methods at
the moment. I have created a JIRA issue for this:
https://issues.apache.org/jira/browse/FLINK-7629
I think this should be easy to fix, as all the infrastructure for
supporting this is already in place. I'll try to d
This is the log:
2017-09-15 12:47:49,143 WARN  org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-09-15 12:47:49,257 INFO  org.apache.flink.runtime.taskmanager.TaskManager - ---
Hi,
First of all: great #FF17, I really enjoyed it.
After attending some of the dataArtisans folks' talks, I realized that
serialization should be optimized if there is no way to use supported
objects.
In my case, users can configure their source in our application online, which
gives them the freedom to dy
Hi Fabian,
Thank you for your description.
This is my understanding:
1. At the exact time the execute() method is called, Flink creates the
JobGraph, submits it to the JobManager, and deploys tasks to the TaskManagers,
but DOES NOT execute each operator.
2. Operators are executed when they are needed.
3. Sources (kafka-co
Sure:
The application is configured to use processing time.
Thanks,
Gerard
Hi,
Can you check in the TaskManager logs whether there is any message that
indicates why the TaskManager was lost? Also, there might be information in
your machine logs, i.e. "dmesg" or /var/log/messages or some such.
Best,
Aljoscha
> On 14. Sep 2017, at 22:28, AndreaKinn wrote:
>
> P.S.: I
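For example, on most Linux boxes something like this shows whether the
kernel's OOM killer was involved:

  dmesg | grep -i -E 'killed process|out of memory'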
Hi,
Could you maybe show the code of your trigger?
Best,
Aljoscha
> On 15. Sep 2017, at 11:39, gerardg wrote:
>
> Hi,
>
> I have the following operator:
>
> mainStream
> .coGroup(coStream)
> .where(_.uuid).equalTo(_.uuid)
> .window(GlobalWindows.create())
> .trigger(triggerWhenAllReceived)
Hi,
I have the following operator:
mainStream
.coGroup(coStream)
.where(_.uuid).equalTo(_.uuid)
.window(GlobalWindows.create())
.trigger(triggerWhenAllReceived)
.apply(mergeElements)
TL;DR: It seems that the checkpointed state of the operator keeps growing
forever ev
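In case it is useful for the discussion: with GlobalWindows the window
contents are only dropped when the trigger purges, so a trigger that only ever
FIREs will grow state forever. A rough sketch of a counting trigger that
purges (the element type and the expected count are assumptions, not the
actual code from this thread):

  import org.apache.flink.api.common.functions.ReduceFunction
  import org.apache.flink.api.common.state.ReducingStateDescriptor
  import org.apache.flink.streaming.api.windowing.triggers.{Trigger, TriggerResult}
  import org.apache.flink.streaming.api.windowing.windows.GlobalWindow

  // fires and PURGES once `expected` elements have arrived for the key,
  // so the window contents do not accumulate forever
  class CountAndPurgeTrigger[T](expected: Long) extends Trigger[T, GlobalWindow] {

    private val countDesc = new ReducingStateDescriptor[java.lang.Long](
      "seen-count", new SumLongs, classOf[java.lang.Long])

    override def onElement(element: T, timestamp: Long, window: GlobalWindow,
                           ctx: Trigger.TriggerContext): TriggerResult = {
      val count = ctx.getPartitionedState(countDesc)
      count.add(1L)
      if (count.get() >= expected) {
        count.clear()
        TriggerResult.FIRE_AND_PURGE // FIRE alone would keep the elements around
      } else {
        TriggerResult.CONTINUE
      }
    }

    override def onProcessingTime(time: Long, window: GlobalWindow,
                                  ctx: Trigger.TriggerContext): TriggerResult =
      TriggerResult.CONTINUE

    override def onEventTime(time: Long, window: GlobalWindow,
                             ctx: Trigger.TriggerContext): TriggerResult =
      TriggerResult.CONTINUE

    override def clear(window: GlobalWindow, ctx: Trigger.TriggerContext): Unit =
      ctx.getPartitionedState(countDesc).clear()
  }

  class SumLongs extends ReduceFunction[java.lang.Long] {
    override def reduce(a: java.lang.Long, b: java.lang.Long): java.lang.Long = a + b
  }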
Hi Yuta,
when the execute() method is called, a so-called JobGraph is constructed from
all the operators that have been added before by calling map(), keyBy(), and
so on.
The JobGraph is then submitted to the JobManager, which is the master process
in Flink. Based on the JobGraph, the master deploys
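A tiny sketch of that lifecycle in the Scala API (nothing runs until
execute()):

  import org.apache.flink.streaming.api.scala._

  object LazyExecutionSketch {
    def main(args: Array[String]): Unit = {
      val env = StreamExecutionEnvironment.getExecutionEnvironment

      // these calls only build the dataflow graph; no data is processed yet
      env.fromElements("a", "b", "a")
        .map(w => (w, 1))
        .keyBy(0)
        .sum(1)
        .print()

      // only now is the JobGraph built, submitted to the JobManager,
      // and deployed as tasks to the TaskManagers
      env.execute("lazy execution sketch")
    }
  }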