Re: Data Corruption in SFTP in Parallel Multicast branches

2015-05-12 Thread lakshmi.prashant
Hi, If direct is changed to SEDA, the multiple branches still fail: Error processing exchange. Exchange[Message: [Body is instance of org.apache.camel.StreamCache]]. Caused by: [org.quartz.JobExecutionException - org.apache.camel.CamelExchangeException: Parallel processing failed for number 0.

Re: Data Corruption in SFTP in Parallel Multicast branches

2015-04-20 Thread lakshmi.prashant
Hi, We have another example of data corruption with Parallel Multicast. This issue is present even in camel-core 2.14.2. If we change to serial multicast, the issue disappears. I have a route with a camel producer (to uri) that writes data to the exchange, and this data is of type StreamCache

RE: Data Corruption in SFTP in Parallel Multicast branches

2015-01-15 Thread lakshmi.prashant
Hi Stephan, The body of the main exchange should be copied to the branch exchanges, as intended (Option 2 suggested by you). But I am not sure whether it will lead to performance / memory issues if there are many branches with huge data in the body of the main route. Thanks, Lakshmi -- Vi
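
A minimal sketch of Option 2 in Java DSL (the class name here is hypothetical): an onPrepare processor that reads the StreamCache body into a byte[] so each parallel branch gets its own in-memory copy instead of sharing one stream.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Attach with .multicast().parallelProcessing().onPrepare(new CopyBodyToBytes())...
public class CopyBodyToBytes implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // getBody(byte[].class) fully reads the cached stream and converts it,
        // so the branches no longer share a single stream and its read position.
        byte[] data = exchange.getIn().getBody(byte[].class);
        exchange.getIn().setBody(data);
    }
}

Each branch then carries its own copy of the payload, which is exactly the memory trade-off raised above when there are many branches and large bodies.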

Data Corruption in SFTP in Parallel Multicast branches

2015-01-14 Thread lakshmi.prashant
Hi, • If there are SFTP receivers at the end of the branch(es) of a Parallel Multicast, the payload gets corrupted in one or more branches. o Any one branch / some SFTP receivers may not receive the full data in the respective SFTP file(s). While validating it with a simple exam
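
For reference, a stripped-down sketch of the setup being described, with hypothetical SFTP hosts and directories: a parallel multicast whose branches each end in an SFTP producer.

import org.apache.camel.builder.RouteBuilder;

public class ParallelSftpMulticastRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:start")
            .multicast().parallelProcessing()
                // each branch delivers the same payload to a different SFTP receiver
                .to("direct:receiver1", "direct:receiver2")
            .end();

        from("direct:receiver1").to("sftp://user@host1/outbox?password=secret");
        from("direct:receiver2").to("sftp://user@host2/outbox?password=secret");
    }
}

The copy-the-body option discussed in the reply above avoids the parallel branches sharing one cached stream.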

How to use dynamic properties / bean references with http conduit used with camel CXF?

2014-12-17 Thread lakshmi.prashant
Hi, Is there a way to specify dynamic values (using a bean reference) for the httpConduit properties of the CXF endpoint (camel CXF)? We need to look up the values of proxyHost and proxyPort in a bean at runtime and then dynamically set them as the proxyHost and port for the HTTP connection. I had
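
One possible shape for this, as a sketch that only uses the CXF conduit API; how the org.apache.cxf.endpoint.Client is obtained (e.g. from a camel-cxf endpoint configurer bean) is left open and is an assumption of this example.

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

public class DynamicProxyConfigurer {

    // proxyHost / proxyPort are the values looked up from the bean at runtime
    public void applyProxy(Client client, String proxyHost, int proxyPort) {
        HTTPConduit conduit = (HTTPConduit) client.getConduit();
        HTTPClientPolicy policy = conduit.getClient();
        if (policy == null) {
            policy = new HTTPClientPolicy();
            conduit.setClient(policy);
        }
        policy.setProxyServer(proxyHost);
        policy.setProxyServerPort(proxyPort);
    }
}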

Re: Quartz job data deletion in clustered quartz2

2014-11-23 Thread lakshmi.prashant
Hi Willem, The scheduler is not created externally, but on deployment of the camel blueprint bundles. However, the scheduler jobs are not deleted on un-deployment of the camel bundles in clustered mode, and this is what needs to be handled - while taking care of durable jobs and job recovery. Thanks, La
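
A rough sketch of the explicit cleanup being asked for here, using the Quartz 2.x API with hypothetical job and group names (it deliberately ignores the durable-job / recovery caveat).

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

// Invoked from our own un-deployment listener for the bundle that owns the route.
public class QuartzJobCleaner {

    private final Scheduler scheduler;

    public QuartzJobCleaner(Scheduler scheduler) {
        this.scheduler = scheduler;
    }

    public void removeJob(String jobName, String jobGroup) throws SchedulerException {
        JobKey key = new JobKey(jobName, jobGroup);
        if (scheduler.checkExists(key)) {
            // deleteJob removes the job, its triggers and the persisted job data
            // from the clustered JDBC store.
            scheduler.deleteJob(key);
        }
    }
}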

Re: Quartz job data deletion in clustered quartz2

2014-11-21 Thread lakshmi.prashant
Hi Willem, We are listening to the un-deployment event ourselves. 1. Actually, if the job is deleted from any UI (that is used to schedule jobs) - that UI will have to take care to remove the job data from the scheduler. 2. But, in the camel quartz scenarios, the jobs are created at the star

Re: Quartz job data deletion in clustered quartz2

2014-11-09 Thread lakshmi.prashant
Hi Claus, There is a mis-communication - we do not need a special classloader helper, I think. The issue was that on un-deployment of one camel blueprint bundle (with a camel quartz2 route), the quartz job data is not deleted from the DB - if it is clustered quartz. Unfortunately, we do

Re: Quartz job data deletion in clustered quartz2

2014-11-06 Thread lakshmi.prashant
Hi, That does not help. If we have a shared scheduler instance (by exposing the StdSchedulerFactory as an OSGi service) used by the different camel quartz components / routes, we face the following issue: After one camel quartz route is un-deployed & removed, the scheduler instance starts m

Re: Quartz job data deletion in clustered quartz2

2014-10-29 Thread lakshmi.prashant
Hi Claus, Thanks a lot. Adding managementNamePattern="#name#" to the camelContext in the blueprint XML seems to do the trick. This resolved the two issues: both the re-deployment of the same bundle and the load-balancing issue when the other VMs acquire the trigger and look up the camel context. We still have 1
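
For a context configured in Java, the equivalent of that blueprint attribute would look roughly like the sketch below, assuming the ManagementNameStrategy of Camel 2.10+ exposes setNamePattern as DefaultManagementNameStrategy does.

import org.apache.camel.CamelContext;

public class ManagementNameConfig {

    // "#name#" makes the JMX management name equal to the context id itself,
    // without the OSGi bundle-id prefix (e.g. "572-"), so CamelJob can find the
    // context by the same name on every node and after re-deployment.
    public static void useContextIdAsManagementName(CamelContext context) {
        context.getManagementNameStrategy().setNamePattern("#name#");
    }
}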

Re: Quartz job data deletion in clustered quartz2

2014-10-27 Thread lakshmi.prashant
We are setting the camel context id in the blueprint XML (e.g. Quartz2_Mig_Test1) and have deployed it to the OSGi environment. Then we get misfires when other VMs in the cluster try to do load balancing of the trigger: No CamelContext could be found with name: 572-Quartz2_Mig_Test1. Why is the osgi bundle i

Re: Quartz job data deletion in clustered quartz2

2014-10-22 Thread lakshmi.prashant
Hi Willem, Quartz2_Mig_Test1 is the CamelContext id that we set in our blueprint XML configuration. I had attached the beans.xml in my earlier message for reference. Camel calculates the name for the camel context by calling getName() of DefaultManagementNameStrategy in line no. 76 of

Re: Quartz job data deletion in clustered quartz2

2014-10-19 Thread lakshmi.prashant
Hi, We get many misfires while quartz is working in clustered mode. This happens when the trigger is acquired / executed on another VM than the one that inserted the job data: we get an error when the CamelJob in that VM gets executed for a trigger. The camel job tries to locate the camel c

Quartz job data deletion in clustered quartz2

2014-10-13 Thread lakshmi.prashant
Hi, While using camel quartz2 in clustered mode, the job data is not deleted when we un-deploy the bundles. Because of this, when we try to re-deploy the bundles or stop & start the cluster, we encounter errors: a) After the camel blueprint bundle is un-deployed, we get the error:

Re: Stream Cache / spool file deletion before aggregation in Multicast, involving huge data

2014-09-07 Thread lakshmi.prashant
Hi Claus, The problem can happen with multicast even if there is no aggregation strategy. If stream caching happens in the last branch exchange of the multicast (and the spool file is cleaned up on completion of that branch), then if the next step (processor) after the multicast tries to read the data from the i

Stream Cache / spool file deletion before aggregation in Multicast, involving huge data

2014-09-05 Thread lakshmi.prashant
Hi, Mybeans.xml Issue: Whenever data is spooled to file via CachedOutputStream in any camel component in a multicast branch, that data becomes unreadable in a) the Aggregation Strategy of the Multicast b) after the multicast, in cas
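
One workaround sketch (hypothetical class name), assuming the aggregation strategy still runs before the branch's on-completion cleanup removes the spool file: read the cached stream into memory inside the strategy, so the aggregated result no longer points at a spool file that may be deleted later.

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class MaterializingAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // Force the StreamCache into a byte[] while the spool file still exists.
        byte[] body = newExchange.getIn().getBody(byte[].class);
        newExchange.getIn().setBody(body);

        if (oldExchange == null) {
            return newExchange;
        }
        // Simple example: concatenate the branch results.
        byte[] previous = oldExchange.getIn().getBody(byte[].class);
        byte[] combined = new byte[previous.length + body.length];
        System.arraycopy(previous, 0, combined, 0, previous.length);
        System.arraycopy(body, 0, combined, previous.length, body.length);
        oldExchange.getIn().setBody(combined);
        return oldExchange;
    }
}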

Re: StreamCache FileNotFound issues with bigger data in multicast routes

2014-09-05 Thread lakshmi.prashant
Hi, Kindly help us. Whenever data is spooled to file via CachedOutputStream in any camel component in a multicast branch, that data becomes unreadable in a) the Aggregation Strategy of the Multicast b) after the multicast, in case there is no aggregation strategy. We are getting: a) FileNotFound issues

Re: Multicast - Pass through all messages, aggregate from different branches

2014-06-03 Thread lakshmi.prashant
Hi, If we want a pass-through of all the messages from the different branches, is it right to use the GroupedExchangeAggregationStrategy as the aggregation strategy for the multicast and use a splitter thereafter? If so, should a custom bean be used as the splitter, or is it possible for the ca
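
A sketch of that combination in Java DSL (hypothetical endpoints), assuming Camel 2.13+ where GroupedExchangeAggregationStrategy stores the grouped exchanges as a List<Exchange> on the message body; in earlier versions the list is kept in the Exchange.GROUPED_EXCHANGE property instead.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy;

public class PassThroughAllRepliesRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("direct:start")
            .multicast(new GroupedExchangeAggregationStrategy())
                .to("direct:branchA", "direct:branchB")
            .end()
            // The body is now a List<Exchange> with one entry per branch reply.
            .split(body())
                // Each split message has one grouped Exchange as its body; unwrap it
                // so downstream steps see the original branch payload.
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        Exchange reply = exchange.getIn().getBody(Exchange.class);
                        Object payload = reply.hasOut() ? reply.getOut().getBody() : reply.getIn().getBody();
                        exchange.getIn().setBody(payload);
                    }
                })
                .to("direct:perMessage")
            .end();
    }
}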

Multicast - Pass through all messages, aggregate from different branches

2014-05-30 Thread lakshmi.prashant
Hi, The multicast always waits for the last message from the multicast branches and only uses the last reply as the outgoing message at the end of the multicast. At the end of the multicast: a) Is it possible to pass through

Re: Null Pointer exception with camel quartz simple trigger (fireNow)

2014-05-30 Thread lakshmi.prashant
Hi Claus, Thanks for your prompt reply. 1) What is surprising is that there is no exception trace and the messages stay in the processing state forever. We are also not able to stop / undeploy the camel bundles. I also see that QuartzEndpoint.java, line no. 123, logs an exception similar to what we

Re: Null Pointer exception with camel quartz simple trigger (fireNow)

2014-05-29 Thread lakshmi.prashant
Hi, Thanks for your reply - there is no stack trace, and we are not able to debug the issue as it doesn't reach any of our components in the route. I will look into this closely and follow your suggestion on the delay. Thanks, Lakshmi -- View this message in context: http://camel.46542

Null Pointer exception with camel quartz simple trigger (fireNow)

2014-05-29 Thread lakshmi.prashant
Hi, We are using the camel 2.12.3 distribution with quartz 1.8.6. We are repeatedly facing NullPointer issues with the quartz endpoint, and the route fails right at the beginning, in the quartz endpoint. After that, the message in the route doesn't complete at all. We have faced this issue mainly
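
For context, a sketch of the kind of simple-trigger endpoint in question, with hypothetical names and intervals; the startDelayedSeconds call reflects the delay suggestion mentioned in the reply above and is an assumption about the camel-quartz component options.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.quartz.QuartzComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class FireNowQuartzExample {

    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();

        // Assumption: delay scheduler start a few seconds so the route is fully
        // started before the first (fireNow) trigger fires.
        QuartzComponent quartz = context.getComponent("quartz", QuartzComponent.class);
        quartz.setStartDelayedSeconds(5);

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                // SimpleTrigger that fires immediately and then every 60 seconds, forever.
                from("quartz://myGroup/myTimer?fireNow=true"
                        + "&trigger.repeatInterval=60000&trigger.repeatCount=-1")
                    .to("log:fired");
            }
        });

        context.start();
        Thread.sleep(120000);
        context.stop();
    }
}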

Re: Camel quartz misfires and route not getting run / triggered after exception:ObjectAlreadyExistsException

2013-11-13 Thread lakshmi.prashant
Hi, Can you kindly explain why the route starts failing after running for some time? Why is doAddJob() getting called after a few runs of a schedule have already run, as reported in the exception? The race condition can happen only at the start of the route, while scheduling the quart

Re: Camel quartz misfires and route not getting run / triggered after exception:ObjectAlreadyExistsException

2013-10-15 Thread lakshmi.prashant
Hi, Sorry for the delay in response. I did not get notified in my mailbox and hence had not followed your reply to the mail thread. Please find below my answers and the blueprint XML used in the cluster with 7 VMs. Thanks, Lakshmi - Which exact Quartz 1.x version do you make use of? :

Camel quartz misfires and route not getting run / triggered after exception:ObjectAlreadyExistsException

2013-09-22 Thread lakshmi.prashant
Hi, I am running camel-quartz (2.10.4) & quartz has been set up in clustered mode. The clocks in the cluster are synchronized. I have set up a trigger to run every minute, via the camel-quartz endpoint in my route. a) It works fine if quartz is not set up in clustered mode (uses RAMJobStore).
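
The route shape being described, as a sketch with hypothetical group/timer names: a camel-quartz endpoint with a cron trigger firing once a minute (stateful=true is an assumption here, often used with a persistent JDBC job store so the job data survives between runs).

import org.apache.camel.builder.RouteBuilder;

public class EveryMinuteQuartzRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // "0 0/1 * * * ?" fires at the start of every minute; '+' encodes the
        // spaces of the cron expression in the endpoint URI.
        from("quartz://clusterGroup/everyMinute?cron=0+0/1+*+*+*+?&stateful=true")
            .to("log:triggered");
    }
}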

Re: Missing datasource exception while referring to OSGI datasource for clustering quartz with camel-quartz

2013-09-01 Thread lakshmi.prashant
Hi, I have already referred to the datasource (OSGi service) in the beans.xml: a) I had already tried referring to the datasource using the reference id in the quartz properties: dataSource b) Also, I had earlier tried to refer to the da
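
One way to hand the OSGi-provided DataSource to Quartz without JNDI, sketched with the hypothetical data source name osgiDS: register a ConnectionProvider under the same name that org.quartz.jobStore.dataSource refers to, before the scheduler starts.

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

import org.quartz.utils.ConnectionProvider;
import org.quartz.utils.DBConnectionManager;

public class OsgiDataSourceConnectionProvider implements ConnectionProvider {

    private final DataSource dataSource;

    public OsgiDataSourceConnectionProvider(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Call once, before the scheduler is created; "osgiDS" must match
    // org.quartz.jobStore.dataSource in the quartz properties.
    public static void register(DataSource dataSource) {
        DBConnectionManager.getInstance()
            .addConnectionProvider("osgiDS", new OsgiDataSourceConnectionProvider(dataSource));
    }

    public Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }

    public void shutdown() throws SQLException {
        // The OSGi bundle that published the DataSource manages its lifecycle.
    }

    // Required by the Quartz 2.x ConnectionProvider interface; harmless extra method on 1.x.
    public void initialize() throws SQLException {
    }
}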

Missing datasource exception while referring to OSGI datasource for clustering quartz with camel-quartz

2013-08-31 Thread lakshmi.prashant
Hi, I am trying to set up quartz in clustered mode to work with camel-quartz in my camel route, i.e. we have deployed the bundle with the camel route (having a camel-quartz endpoint) on all the cluster nodes. But I would like to have my camel route triggered only on one of the nodes in the cluster
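
A sketch of the clustered JDBC job store configuration this is about, built programmatically (instance name, table prefix and data source name are assumptions to adapt): with isClustered=true and a shared database, each trigger is acquired and fired by only one node of the cluster.

import java.util.Properties;

import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerFactory {

    public static Scheduler createClusteredScheduler() throws Exception {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "CamelClusteredScheduler");
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        props.setProperty("org.quartz.threadPool.threadCount", "5");
        // Persistent, clustered job store shared by all nodes.
        props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.tablePrefix", "QRTZ_");
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        props.setProperty("org.quartz.jobStore.clusterCheckinInterval", "20000");
        // Matches the ConnectionProvider registered in the previous sketch.
        props.setProperty("org.quartz.jobStore.dataSource", "osgiDS");

        return new StdSchedulerFactory(props).getScheduler();
    }
}

The camel-quartz component can typically be pointed at the same keys through a quartz.properties file instead of building them in code.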