Hi Chandrika,
I cannot reproduce the problem with the code you provided. I created a
public Git repo with your code from the previous post and suggest the following:
1. Clone the repo: git clone
https://github.com/kukushal/ignite-userlist-joblisteners.git
2. Try to reproduce the problem with
Hello Alexey,
Could you please guide me on the above post? Thanks.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hello Alexey,
on a single node it works fine up to job 8; after that there is no further
execution, as shown in the console logs below. The jobs that get executed are 12,
1, 11, 3, 2, 13, 9, 8. After that, jobs 7, 4, 5, 6, and 10 are never
executed.
executed the job
Hi Chandrika,
I can run your task on one node without problems (see the output below) and I
really do not see what might cause a deadlock in your code. You said "*with one
node it was always failing causing a deadlock*" - what do you mean by
"failing"? Do you see an exception? Can you reproduce the problem with v
Hello Alexey,
I too could make my code work on three nodes earlier, but with one node it
was always failing, causing a deadlock. Please let me know how to go about
it, since the issue is with one node.
Thanks,
Chandrika
Hi, your jobs should not cause any deadlocks since you have no
synchronization inside execute(). I ran your job on 3 nodes on the same
machine without problems - the task completed in about 9 seconds, which matches
the random delay inside execute(). I only had to replace executeAsync() with
execute(). The problem
Hello Alexey,
the sample code is as given below:
@ComputeTaskSessionFullSupport
public class SplitExampleJgraphWithComplexDAGIgniteCachesample
        extends ComputeTaskSplitAdapter , Integer> {
    // Auto-injected task session.
    @TaskSessionResource
    private ComputeTaskSession
Hi Chandrika,
Is it possible for you to share your Ignite task implementation? Or are you
just running the above example I created? It looks like you have some deadlock,
and it is hard to guess without seeing the code.
Hello Alexey,
Thanks for the valuable information. I have tried executing a list of
dependent tasks as a DAG using session.setAttribute("COMPLETE", true), and
it works fine as long as there are three nodes, since there are 3 or
fewer parallel tasks to execute.
But when I run the same code o
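The dependency idea behind the "COMPLETE" attribute can be sketched in plain Java, independent of Ignite (all names below are mine, not from the thread): a job in a DAG becomes runnable only once all of its predecessors have completed.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal DAG-scheduling sketch (not Ignite code): a job becomes runnable
// only when all of its dependencies have completed, which is the role the
// "COMPLETE" session attribute plays in the thread above.
public final class DagOrder {
    private DagOrder() {}

    /**
     * Returns one valid execution order. Every job must appear as a key in
     * deps (use an empty list for jobs with no dependencies); throws if the
     * graph contains a cycle.
     */
    public static List<String> executionOrder(Map<String, List<String>> deps) {
        Map<String, Integer> pending = new HashMap<>();      // unmet dep counts
        Map<String, List<String>> dependents = new HashMap<>();
        for (Map.Entry<String, List<String>> e : deps.entrySet()) {
            pending.put(e.getKey(), e.getValue().size());
            for (String d : e.getValue())
                dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(e.getKey());
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : pending.entrySet())
            if (e.getValue() == 0)
                ready.add(e.getKey());
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String job = ready.poll();            // "run" the job
            order.add(job);
            for (String next : dependents.getOrDefault(job, List.of()))
                if (pending.merge(next, -1, Integer::sum) == 0)
                    ready.add(next);              // all deps now complete
        }
        if (order.size() != deps.size())
            throw new IllegalStateException("cycle in DAG");
        return order;
    }
}
```

With deps = {A: [], B: [A], C: [A]}, both B and C become runnable once A completes; three nodes can run them in parallel, while a single node has to run them one after another.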
Hi Chandrika - sorry for the delay - did not have time to review this list.
Once you start using custom types in your compute jobs, the server nodes
will have to know your custom types. You have two options to let the server
nodes know your types:
1. Manual deployment: copy all JARs containing
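The second option is cut off in the archive; the usual alternative in Ignite to copying JARs manually is peer class loading, which ships compute task/job classes to server nodes on demand. A config sketch (Spring XML; the fragment below is an assumption, not taken from the thread):

```xml
<!-- Sketch: enable Ignite peer class loading so server nodes receive
     compute task/job classes automatically (assumed config, not from
     the thread). -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="peerClassLoadingEnabled" value="true"/>
</bean>
```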
Hello Alexey,
Thanks a lot for the input, it was very useful. There are two more things I am
stuck at:
1. When I run the above in a cluster environment (with more than one node),
with the value in session.setAttribute(key, value) being an
object, I am unable to proceed further as one
Hi Chandrika,
You would need to make an assumption about your jobs' durations to display
remaining time or percentage. For example, the task below predicts the
remaining duration assuming the remaining jobs will have a duration equal to
the average of the completed ones:
@ComputeTaskSessionFullSupport
publi
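The task code in that message is cut off by the archive; the arithmetic it describes can be sketched in plain Java like this (class and method names are mine, not from Alexey's task):

```java
// Sketch of the estimation arithmetic only (no Ignite dependencies):
// assume each pending job will take the average duration of the jobs
// completed so far.
public final class ProgressEstimator {
    private ProgressEstimator() {}

    /** Estimated remaining millis, or -1 if no job has completed yet. */
    public static long estimateRemainingMillis(long[] completedDurationsMillis,
                                               int totalJobs) {
        int done = completedDurationsMillis.length;
        if (done == 0 || totalJobs <= done)
            return done == 0 ? -1 : 0;
        long sum = 0;
        for (long d : completedDurationsMillis)
            sum += d;
        long avg = sum / done;                  // average completed duration
        return avg * (totalJobs - done);        // pending jobs x average
    }

    /** Percentage of jobs completed, rounded down. */
    public static int percentComplete(int completedJobs, int totalJobs) {
        return totalJobs == 0 ? 100 : completedJobs * 100 / totalJobs;
    }

    public static void main(String[] args) {
        // 8 of 13 jobs done, averaging 1000 ms each.
        long[] done = {900, 1100, 1000, 950, 1050, 1000, 1000, 1000};
        System.out.println(estimateRemainingMillis(done, 13)); // 5000
        System.out.println(percentComplete(8, 13));            // 61
    }
}
```

In a real task these durations would be reported by the jobs themselves, for example via session attributes as described earlier in the thread.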
Hello Alexey,
Thanks a lot for the information, it was pretty useful to us. We also wanted
to know whether we could get the percentage of the job siblings or of the task
completed, as in 70% of it has finished execution and 30% more is left to
finish, or the duration of time taken for exec
Hi Chandrika,
I would use ComputeTaskSession and ComputeTaskSessionAttributeListener to
achieve that:
- Inject ComputeTaskSession into your task and/or jobs like
this: @TaskSessionResource private ComputeTaskSession taskSes;
You also need to annotate your task with @ComputeTaskSessionFullSupport.
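The real listener API needs a running Ignite node, so here is a minimal pure-Java stand-in for the pattern being described (all names below are mine): jobs publish session attributes, and a listener registered on the session reacts to each one, which is what ComputeTaskSessionAttributeListener provides in Ignite.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Simplified stand-in for Ignite's task session attributes (not Ignite code):
// setAttribute() stores a value and notifies every registered listener,
// mirroring what ComputeTaskSessionAttributeListener exposes in Ignite.
public class SessionAttributeDemo {
    static final class SessionStub {
        private final Map<Object, Object> attrs = new HashMap<>();
        private final List<BiConsumer<Object, Object>> listeners = new ArrayList<>();

        void addAttributeListener(BiConsumer<Object, Object> l) {
            listeners.add(l);
        }

        void setAttribute(Object key, Object val) {
            attrs.put(key, val);
            for (BiConsumer<Object, Object> l : listeners)
                l.accept(key, val);       // notify on every attribute set
        }
    }

    /** Two jobs mark themselves complete; the listener records each one. */
    public static List<String> completedJobs() {
        SessionStub ses = new SessionStub();
        List<String> completed = new ArrayList<>();

        // The task registers a listener to track per-job progress.
        ses.addAttributeListener((key, val) -> {
            if (Boolean.TRUE.equals(val))
                completed.add(String.valueOf(key));
        });

        // Each job signals completion, as in session.setAttribute("COMPLETE", true).
        ses.setAttribute("job-1", true);
        ses.setAttribute("job-2", true);
        return completed;
    }
}
```

In real Ignite code the stub would be replaced by the injected ComputeTaskSession, and the lambda by a ComputeTaskSessionAttributeListener registered on it.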