Hi Rune!

On 04.07.2016 at 22:49, Rune 'TheFlyingcorpse' Darrud wrote:
> I'm blown away by what I'm able to achieve with very little effort
> using Icinga Director.
Glad to hear that :-)

> In an hour I wrote a script to pull out the wanted information from a
> REST API (Cisco Prime if you wanted to know) and store it in a table
> for use by Icinga Director.

Nice! This could be a candidate for a nice add-on module. Did you know
that Import Sources are hookable in the Director? It's pretty easy; have
a look at the AWS module for a similar example:

https://github.com/Thomas-Gelf/icingaweb2-module-aws

> What I am a bit stuck on is how to handle parents/relations to other
> hosts/services in a flexible manner, as it appears now I should have
> this already from "somewhere" other than Director and let it apply the
> information to Icinga.

Dependencies are not there yet, please see here for related feature/pull
requests:

https://dev.icinga.org/issues/11332
https://github.com/Icinga/icingaweb2-module-director/pull/14

It's no problem at all to keep apply rules for your dependencies in your
local config. Still, it's preferable to have those deployed with the
Director as well, even if it doesn't "understand" them yet. That's what
the "fileshipper" module has been made for:

https://github.com/Thomas-Gelf/icingaweb2-module-fileshipper

> Are there DO's or DON'Ts in regards to the black magic one uses
> Director for?

Black magic: DO IT :D

Seriously, just use it. Experimental features and DON'Ts are either
hidden pretty well or not even pushed to the public Git repository.

> There might be cases where a job has run but failed, emptying a table,
> which could be disastrous to the import/sync mechanism.

Will not happen. Import is an all-or-nothing operation, and every import
run stores its new data in a single transaction. Any exception during
the import rolls back the whole operation.

> Similarly if it stopped partway through, it could also spell disaster
> for whatever didn't sync after it stopped.

In your case you opted for a custom DB placed in between Cisco Prime and
the Director. If that table is empty or missing a bunch of rows, the
Director wouldn't know.
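Your own fill script can apply the same all-or-nothing principle at its
end of the pipe. A minimal sketch in Python with SQLite (the table and
column names are illustrative, not anything the Director mandates): it
refuses to run when the fetch came back empty, and wraps the refresh in
a single transaction so a failure mid-import leaves the old rows intact.

```python
import sqlite3

def refresh_hosts(conn, fetched_rows):
    """Replace the intermediary 'hosts' table atomically.

    Refuses to touch the table when the fetch returned nothing, and
    wraps DELETE + INSERT in one transaction, so any error rolls the
    whole refresh back and the previous rows survive.
    """
    if not fetched_rows:
        raise RuntimeError("refusing to import: source returned no hosts")
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("DELETE FROM hosts")
        conn.executemany(
            "INSERT INTO hosts (name, address) VALUES (?, ?)",
            fetched_rows,
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hosts (name TEXT, address TEXT)")
refresh_hosts(conn, [("web1", "10.0.0.1"), ("web2", "10.0.0.2")])

# An empty fetch (e.g. a failed REST call) leaves the old data intact:
try:
    refresh_hosts(conn, [])
except RuntimeError:
    pass
count = conn.execute("SELECT COUNT(*) FROM hosts").fetchone()[0]
```

The same pattern works with any database your script targets, as long
as the engine gives you real transactions.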
You told it that this is your source of truth, and it will blindly trust
it. Import would be triggered, and the Director would also happily
accept an empty import run. Sync would then later on purge all related
hosts. There is no "please do not run if there are fewer than X objects"
feature, and I'm not sure whether that would be a good idea. It's the
responsibility of the Import Source to fail when something bad happens.

> I guess this is all down to how you write your scripts to feed
> tables/databases with information for Director to pull, maybe some
> guidelines/principles on this should be present so your monitoring
> doesn't blow up while you sleep or work through the surprise downtime
> when Icinga2 is suddenly very empty.

Well, the preferred way is not to write custom scripts filling
intermediary tables, but doing so is perfectly legal of course. You just
have to take care that your script works in an atomic way and that it
does nothing in case data is missing, wrong or erroneous.

The preferred way remains a custom Icinga Web 2 module providing an
Import Source. All you need to do is provide at least a fetchData and a
listColumns method in a class extending the ImportSourceHook, and let
fetchData throw an Exception in case anything goes wrong.

> Some failsafe in Director would be a nice approach to this, unless you
> decide never to deploy automatically, just import, sync and see what
> changed before deploy. I would like to see some of the sources
> auto-deployed while yet others are manual before being pushed to
> Icinga2.

That's perfectly possible: Job definitions allow you to trigger actions
or to just check for changes.

In case something went wrong and you deployed an empty/half-baked
configuration, the first steps to recover should be:

* Go to your jobs, stop them.
  Either by stopping the service or by disabling them in the web
  front-end.
* Go to your deployment history, pick the latest correct one, click on
  "Config" and "Re-deploy now".

Your monitoring is now fine again and you can relax, grab a coffee and
try to figure out why there was an empty export from your CMDB. Fix it,
re-enable your jobs.

Sure, in case Cisco Prime ships an empty host list on Friday evening
without throwing an error, you might still deploy an empty config and
not realize that you have been left with no active monitoring. You can
still decide to run "Sync Cisco Prime hosts" only during business hours;
time periods can be configured for every Job.

You could also create two distinct sync rules looking exactly the same,
but with only one of them having the "Purge" flag enabled. Then run
"Sync with purge" during office hours and "Sync without purge" at night
and on weekends.

And if this is still not enough, there is still our bug tracker:

https://dev.icinga.org/projects/icingaweb2-module-director/issues

Let me know your thoughts and dump your ideas into a feature request.

Cheers,
Thomas

NB: In case you are interested in publishing a Cisco Prime add-on and
need some help with it, please let me know!

--
Thomas Gelf
Principal Consultant

NETWAYS GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg
Tel: +49 911 92885-0 | Fax: +49 911 92885-77
CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB18461
http://www.netways.de | thomas.g...@netways.de

** OSBConf 2016 - September - osbconf.org **
** OSMC 2016 - November - netways.de/osmc **

_______________________________________________
icinga-users mailing list
icinga-users@lists.icinga.org
https://lists.icinga.org/mailman/listinfo/icinga-users