OK, I thought we could make cinder-volume aware of SIGTERM and ensure it terminates only after completing or cleaning up all existing operations. If that's not possible, then SIGHUP is probably the only solution. :(
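For what it's worth, the "SIGTERM with cleanup" idea could look roughly like this. This is only a minimal sketch, not actual cinder code: `GracefulService`, `start_operation`, and `finish_operation` are hypothetical names, standing in for however the service tracks in-flight work.

```python
import signal
import threading

class GracefulService:
    """Sketch: refuse new work after SIGTERM, drain in-flight operations."""

    def __init__(self):
        self._shutting_down = threading.Event()
        self._inflight = 0
        self._lock = threading.Condition()
        # Install the handler; must run in the main thread.
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame):
        # Stop accepting new work; operations already started may finish.
        self._shutting_down.set()

    def start_operation(self):
        if self._shutting_down.is_set():
            raise RuntimeError("service is shutting down")
        with self._lock:
            self._inflight += 1

    def finish_operation(self):
        with self._lock:
            self._inflight -= 1
            self._lock.notify_all()

    def wait_for_drain(self, timeout=None):
        # Block until every in-flight operation has completed.
        with self._lock:
            self._lock.wait_for(lambda: self._inflight == 0, timeout=timeout)
```

The main loop would call wait_for_drain() after the flag is set and only then exit, so no operation is cut off mid-way.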
On Mon, Dec 16, 2013 at 10:25 AM, Joshua Harlow <harlo...@yahoo-inc.com> wrote:

> It depends on the "corruption" that u are willing to tolerate. Sigterm
> means the process just terminates; what if said process was 3/4 through
> some operation (create_volume for example)??
>
> Personally I am willing to tolerate zero corruption; reliability and
> consistency are foundational things for me. Others may be more tolerant
> though, seems worth further discussion IMHO.
>
> Sent from my really tiny device...
>
> On Dec 15, 2013, at 8:39 PM, "iKhan" <ik.ibadk...@gmail.com> wrote:
>
> How about sending SIGTERM to the child processes and then restarting them?
> I know this is the hard way of achieving the objective and the SIGHUP
> approach would handle it more gracefully. As you mentioned, that is a major
> change; tentatively, can we use SIGTERM to achieve the objective?
>
> On Mon, Dec 16, 2013 at 9:50 AM, Joshua Harlow <harlo...@yahoo-inc.com> wrote:
>
>> In your proposal, does it mean that the child process will be restarted
>> (that means kill -9 or sigint??)? If so, without taskflow to help (or
>> other solution), that means operations in progress will be
>> corrupted/lost. That seems bad...
>>
>> A SIGHUP approach could be handled more gracefully (but it does require
>> some changes in the underlying codebase to do this "refresh").
>>
>> Sent from my really tiny device...
>>
>> On Dec 15, 2013, at 3:11 AM, "iKhan" <ik.ibadk...@gmail.com> wrote:
>>
>> I don't know if this is being planned in Icehouse; if not, probably
>> proposing an approach will help. We have seen the cinder-volume service
>> initialization part. Similarly, if we can get our hands on the child
>> processes that are running under the cinder-volume service, terminate
>> those processes, and restart them along with the newly added backends,
>> it might help us achieve the target.
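The terminate-and-respawn idea being debated above could be sketched as below. This is a toy illustration, not cinder code: each "backend child" is just a sleeping Python process, and `spawn`/`restart_children` are hypothetical helpers. It also shows the downside Joshua raises: whatever a child was doing when SIGTERM arrives is simply lost.

```python
import signal
import subprocess
import sys

def spawn(backend_name):
    # Stand-in for a per-backend cinder-volume child process; here each
    # child is just a python process that sleeps until signalled.
    return subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

def restart_children(children, backends):
    """Stop each child with SIGTERM, then respawn one per configured backend.

    Any operation a child was mid-way through is lost -- this is the
    corruption risk discussed in this thread.
    """
    for child in children:
        child.send_signal(signal.SIGTERM)
        child.wait()
    return {name: spawn(name) for name in backends}
```

After a new backend is added to the conf file, restart_children would be called with the updated backend list, giving dynamic reconfiguration at the cost of interrupting in-flight work.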
>>
>> On Sun, Dec 15, 2013 at 12:49 PM, Joshua Harlow <harlo...@yahoo-inc.com> wrote:
>>
>>> I don't currently know of a one-size-fits-all solution here. There was
>>> talk at the summit of having the cinder app respond to a SIGHUP signal
>>> and attempting to reload config on this signal. Dynamic reloading is
>>> tricky business (basically u need to unravel anything holding references
>>> to the old config values/affected by the old config values).
>>>
>>> I would start with a simple trial of this if u want to do it; part of
>>> the issue will likely be oslo.config (can that library understand
>>> dynamic reloading?) and then the cinder drivers themselves (perhaps u
>>> need to create a registry of drivers that can dynamically reload on
>>> config reloads?). Start out with something simple, isolate the reloading
>>> as much as u can to a single area (something like the mentioned registry
>>> of objects that can be reloaded when a SIGHUP arrives) and see how it
>>> goes.
>>>
>>> It does seem like a nice feature if u can get it right :-)
>>>
>>> Sent from my really tiny device...
>>>
>>> On Dec 13, 2013, at 8:57 PM, "iKhan" <ik.ibadk...@gmail.com> wrote:
>>>
>>> Hi All,
>>>
>>> At present a cinder driver can only be configured by adding entries to
>>> the conf file. Once these driver-related entries are modified or added
>>> in the conf file, we need to restart the cinder-volume service to
>>> validate the conf entries and create a child process that runs in the
>>> background.
>>>
>>> I am thinking of a way to re-initialize or dynamically configure a
>>> cinder driver, so that I can accept the configuration from the user on
>>> the fly and perform operations. I think the solution lies somewhere
>>> around "oslo.config.cfg", but I am still unclear about how
>>> re-initializing can be achieved.
>>>
>>> Let me know if anyone here is aware of any approach to re-initialize or
>>> dynamically configure a driver.
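The "registry of objects that reload on SIGHUP" idea suggested above could start as small as this. A hedged sketch only: `ReloadRegistry` and `FakeDriver` are made-up names, and a real driver's reload() would re-read its oslo.config values and re-establish whatever backend state depends on them.

```python
import signal

class ReloadRegistry:
    """Sketch: objects register a reload() callback; SIGHUP triggers them all."""

    def __init__(self):
        self._reloadables = []
        # Install the handler; must run in the main thread (POSIX only).
        signal.signal(signal.SIGHUP, self._on_sighup)

    def register(self, obj):
        # obj must expose a reload() method that re-reads its config.
        self._reloadables.append(obj)

    def _on_sighup(self, signum, frame):
        for obj in self._reloadables:
            obj.reload()

class FakeDriver:
    """Placeholder driver that just counts how many reloads it has seen."""

    def __init__(self):
        self.reloaded = 0

    def reload(self):
        # A real driver would re-read oslo.config values here and rebuild
        # any connections or cached state derived from them.
        self.reloaded += 1
```

Keeping all reload logic behind this single registry is what isolates the "unravel references to old config values" problem to one area, as suggested above.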
>>> --
>>> Thanks,
>>> IK

--
Thanks,
Ibad Khan
9686594607
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev