Hi all,

I have opened a Jira issue <https://issues.apache.org/jira/browse/MATH-484> for some tricky problems with event detection and adaptive step size control in ordinary differential equations.
The problem is that this code has been around for a few years (it was in the former Mantissa library) and numerous fixes have been added. It has become a nightmare to maintain as is. The last bug fixes were difficult ones, with corner cases (an event exactly at integration start or stop, exactly simultaneous events, very short steps, unstable switching functions ...). There is a long list of hard-to-understand conditional statements for special cases. Now I have encountered a case that would need a new conditional statement which is almost the exact opposite of one I put in a few months ago: the two bug fixes are mutually incompatible! It took me quite a long time to finally see what had been obvious from the beginning: the root cause of many problems is the entanglement of step size control and events handling. Since we truncate steps when an event occurs, we completely defeat what step size control attempts to do. Worse, when an event at a step endpoint creates a zero-size step, adaptive step size control cannot handle it, and artificially introducing zero-sized steps is clearly awkward anyway.

Once this enlightening conclusion had been drawn (two days ago, after months of going around in circles), finding a solution was surprisingly easy and fast. Here is the proposed solution; I would like to know if anyone has comments on it.

The ODE solving process is an iterative one which computes one step after the other. Fixed step integrators directly compute a few reference points starting from the state vector at the end of the previous step and compute the state vector at the end of the current step in a straightforward way, without any conditionals or loops. Adaptive step size integrators have an internal loop at each step, choosing the best step size according to an error estimated at the end of the step.

I propose to keep the integration process as simple as this. There should be no events or continuous output involved here. State vector propagation deals only with the state vector itself and is NOT modified by events. Once a step has been completely computed and accepted, it will not be changed. On this accepted step, the integrator-dependent polynomial models are built and a step interpolator valid for this step is set up. Once the interpolator has been set up, events are considered throughout the step. If one or more events are identified, the step will be split into virtual substeps, but WITHOUT recomputing the global step. There will be no step truncation. The virtual substeps will be provided to the user step handlers just as if they were original global steps. Virtual substeps may be zero-sized, but the global step will not be. Since there is no state vector re-evaluation, event roots will be computed only once instead of twice as now. Once all events and substeps have been handled, the next iteration will begin with state vector evaluation.

So the main loop will have four consecutive phases that will always run in the same order: state computation / interpolator setting / events detection / substeps handling, then again state computation / interpolator setting / events detection / substeps handling, and so on.
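To make the intent more concrete, here is a very rough sketch of that loop. This is only a sketch for discussion: the names ProposedMainLoop, computeStep, setUpInterpolator, findEventsIn and handleSubstep are placeholders I made up for this message, not existing Commons Math classes or methods, and it assumes forward integration with all error handling omitted.

  // sketch only: placeholder names, not existing Commons Math API
  abstract class ProposedMainLoop {

    // stand-in for org.apache.commons.math.ode.sampling.StepInterpolator
    interface StepInterpolator {}

    void integrate(double t0, double t1) {
      double t = t0;
      while (t < t1) {

        // phase 1: state computation; the step size is chosen freely by the
        // integrator (fixed or adaptive) and is never truncated by events
        double tEnd = computeStep(t, t1);

        // phase 2: build the integrator-dependent polynomial model and set
        // up the interpolator valid over [t, tEnd]
        StepInterpolator interpolator = setUpInterpolator(t, tEnd);

        // phase 3: locate event roots inside the accepted step using the
        // interpolator only, without re-evaluating the state vector
        double[] eventTimes = findEventsIn(interpolator, t, tEnd);

        // phase 4: split the accepted step into virtual substeps at the
        // event times and hand them to the user step handlers as if they
        // were regular steps (a substep may be zero-sized, the global step
        // is not)
        double previous = t;
        for (double eventTime : eventTimes) {
          handleSubstep(interpolator, previous, eventTime);
          previous = eventTime;
        }
        handleSubstep(interpolator, previous, tEnd);

        t = tEnd;
      }
    }

    abstract double computeStep(double tStart, double tMax);
    abstract StepInterpolator setUpInterpolator(double tStart, double tEnd);
    abstract double[] findEventsIn(StepInterpolator interpolator, double tStart, double tEnd);
    abstract void handleSubstep(StepInterpolator interpolator, double tStart, double tEnd);
  }

The important point is that phases 3 and 4 only read the interpolator; nothing they do feeds back into phase 1.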
The differences with the existing implementation are the following:

- much simpler code
- more robust code (a few smelly special cases will be removed, unfortunately not all of them)
- the events handling part of the loop will be shared among all integrators (up to now, there have been 4 implementations: one for fixed step integrators, one for embedded Runge-Kutta, one for Gragg-Bulirsch-Stoer and one for multistep integrators)
- no user code change at all
- no interface change
- new methods added in internal (but public) classes
- faster computation (the integrator will use large steps when possible, without arbitrary truncation, and event roots are computed only once)
- reproducibility of integration results regardless of whether events have been used or not (for now, adding events slightly changes results because it changes step sizes)
- tests will be updated (number of iterations changed, achieved global error changed ...)
- interpolator size increased, which means an increased file size if ContinuousOutputModel objects are serialized
- in very rare cases, slightly degraded accuracy

The last point is worth explaining further. Events will now occur at the end of substeps instead of at the end of integrator steps. From the user side, there is (almost) no way to tell the difference. From the interpolator side, the approximation is better at step endpoints than at step interior points, so the approximation at substep endpoints, which are global step interior points, is not as good as before. In most cases this has no influence on global error, since state evaluation becomes independent from events. The only case where there will be an influence is when an event handler asks for a state reset (see the sketch in the PS below): the state will then be reset from a non-optimal approximation. The approximation will still be very good (it is within the interpolator's global order specifications). As this case is very rare (events resetting the state are not common) and as there are many advantages, I consider the change an improvement.

If nobody complains, I will implement the change in both 2.2 and 3.0 in the next few days.

Luc
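PS: to make the state reset case concrete, here is roughly what such a handler looks like on the user side, written against the current 2.2 events interface (signatures quoted from memory, please check against the javadoc; the bouncing-ball example and the 0.9 restitution coefficient are made up for illustration only):

  import org.apache.commons.math.ode.events.EventHandler;

  /** Contrived example: reverse the vertical velocity when the ball hits the ground. */
  public class BounceHandler implements EventHandler {

    // made-up damping coefficient for the illustration
    private static final double RESTITUTION = 0.9;

    // switching function: its zero crossing marks ground contact
    public double g(double t, double[] y) {
      return y[0]; // y[0] = height, y[1] = vertical velocity
    }

    // ask the integrator to reset the state at the event time
    public int eventOccurred(double t, double[] y, boolean increasing) {
      return RESET_STATE;
    }

    // with the proposed change, the y passed here comes from the interpolator
    // at a substep endpoint (a global step interior point), which is where the
    // slightly degraded, but still within-order, approximation shows up
    public void resetState(double t, double[] y) {
      y[1] = -RESTITUTION * y[1];
    }
  }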