Hi Hauke,

On Mon, Dec 05, 2016 at 01:54:20PM +0100, Hauke Mehrtens wrote:
> On 2016-12-05 11:57, Daniel Golle wrote:
> > Hi Felix,
> >
> > On Thu, Dec 01, 2016 at 04:51:30PM +0100, Felix Fietkau wrote:
> > > On 2016-12-01 16:38, Daniel Golle wrote:
> > > > Hi Felix,
> > > >
> > > > On Thu, Dec 01, 2016 at 04:12:38PM +0100, Felix Fietkau wrote:
> > > >> On 2016-12-01 16:05, Daniel Golle wrote:
> > > >> > I was following your posts and do believe there is quite some
> > > >> > overlap. It would thus be feasible to generalize the common parts
> > > >> > (ubus call proxy, ubus service proxy, ubus remote monitor) by
> > > >> > agreeing on a shared interface the actual implementations shall
> > > >> > use. That way, people can choose whether they want WebSockets,
> > > >> > TR-069 or a suitable P2P framework depending on their specific
> > > >> > needs. Has anything of your current approach at IOPSYS been made
> > > >> > available publicly (e.g. on GitHub)?
> > > >> >
> > > >> > From what I can tell there is also some overlap with Felix'
> > > >> > proposed System Configuration Abstraction Layer, just that my
> > > >> > envisioned use exceeds system configuration, as it includes
> > > >> > sensors, events and actors rather than just access to a
> > > >> > configuration model.
> > > >> If it makes sense, I'd be open to extending my abstraction layer
> > > >> to make it suitable for your use case as well.
> > > >> Feel free to propose changes to it if you like.
> > > >
> > > > Having a first deeper look at scal, I believe that access to sensors
> > > > and actors could be implemented inside scal similar to the existing
> > > > shell and system backends. That would be nice, as then scal would
> > > > make things available on ubus and provide the ACL mechanics.
> > > Nice. Maybe we can reinterpret the acronym as "System Communication
> > > Abstraction Layer". I'd be fine with renaming it to something else as
> > > well, I just didn't find a better name for it yet.
> > >
> > > I think a good approach would be to add a dlopen plugin API to the
> > > json plugin itself, so you can use json files to parameterize access
> > > to sensors and other devices.
> >
> > To me the question remains whether access to devices should happen
> > directly inside those scal-json-plugins or whether it'd be better to
> > expose a service on ubus ("urthings") handling them (which was my
> > original plan) and then have scal access them via ubus. The latter
> > would come with the advantage that other local services (think:
> > collectd) could also access them via that urthings service instead of
> > having to go through scal.
>
> I would like to have an API which can easily be used locally, not just
> from remote; I haven't found the time to look into scal.
> Something like a LuCI web UI to switch your lamps on and off, and also
> some API so that others can easily integrate their own applications
> running on the device which manage and control the things. Probably
> people will also run applications to connect the devices to existing
> clouds with existing interfaces.
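
To make that a bit more tangible, here is a minimal, purely illustrative
sketch of what such a locally usable thing-service could look like on top
of libubus. The object name "urthings.lamp", its methods and the state
handling are invented for this example and not taken from any existing
code:

/*
 * Purely illustrative sketch of a local "things" service on ubus.
 * The object name "urthings.lamp" and its methods are invented.
 */
#include <stdbool.h>
#include <libubox/blobmsg.h>
#include <libubox/uloop.h>
#include <libubus.h>

static bool lamp_on;

enum { SET_STATE, __SET_MAX };

static const struct blobmsg_policy set_policy[__SET_MAX] = {
	[SET_STATE] = { .name = "state", .type = BLOBMSG_TYPE_BOOL },
};

/* "set" -- switch the (imaginary) lamp on or off */
static int lamp_set(struct ubus_context *ctx, struct ubus_object *obj,
		    struct ubus_request_data *req, const char *method,
		    struct blob_attr *msg)
{
	struct blob_attr *tb[__SET_MAX];

	blobmsg_parse(set_policy, __SET_MAX, tb, blob_data(msg), blob_len(msg));
	if (!tb[SET_STATE])
		return UBUS_STATUS_INVALID_ARGUMENT;

	lamp_on = blobmsg_get_bool(tb[SET_STATE]);
	/* here the real backend (GPIO, driver, bus, ...) would be poked */
	return UBUS_STATUS_OK;
}

/* "status" -- report the current state */
static int lamp_status(struct ubus_context *ctx, struct ubus_object *obj,
		       struct ubus_request_data *req, const char *method,
		       struct blob_attr *msg)
{
	struct blob_buf b = {};

	blob_buf_init(&b, 0);
	blobmsg_add_u8(&b, "state", lamp_on);
	ubus_send_reply(ctx, req, b.head);
	blob_buf_free(&b);
	return UBUS_STATUS_OK;
}

static const struct ubus_method lamp_methods[] = {
	UBUS_METHOD("set", lamp_set, set_policy),
	UBUS_METHOD_NOARG("status", lamp_status),
};

static struct ubus_object_type lamp_type =
	UBUS_OBJECT_TYPE("urthings.lamp", lamp_methods);

static struct ubus_object lamp_object = {
	.name = "urthings.lamp",
	.type = &lamp_type,
	.methods = lamp_methods,
	.n_methods = ARRAY_SIZE(lamp_methods),
};

int main(void)
{
	struct ubus_context *ctx;

	uloop_init();
	ctx = ubus_connect(NULL);
	if (!ctx)
		return 1;

	ubus_add_uloop(ctx);
	ubus_add_object(ctx, &lamp_object);
	uloop_run();

	ubus_free(ctx);
	uloop_done();
	return 0;
}

Any local consumer -- a LuCI page going through rpcd, collectd, or
someone's own application -- could then drive it with something like
ubus call urthings.lamp set '{"state": true}'.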
I agree. Having a simple service on ubus that allows exposing sensors
and actors is still the best first step, which everything else can
later build upon. Using and, if necessary, extending SCAL to allow
remote access to specific objects exposed by that service shouldn't be
too hard either.

> > I'd be glad to hear more opinions, because this obviously has to be
> > decided early in the design of the IoT integration approach. Find
> > me on IRC (dangole@freenode) in the next couple of hours, maybe we
> > can collect some ideas about the edges to be cut before the meeting
> > at 3pm CET.
> >
> > > Event handling could also be scripted through .json files using
> > > json_script.
> >
> > I thought of it like that, similar to how procd's hotplug
> > json_scripts look. In addition I thought about adding ubus input and
> > output support to collectd (so events can be triggered depending on
> > conditions defined over polled sensors), but maybe something much
> > simpler and more tightly designed for that specific job -- polling
> > sensors, (maybe) caching & monitoring/triggering events -- could
> > also be sufficient. However, I reckon this can be decided at a later
> > stage, and as I'm already quite familiar with collectd, I'd just go
> > ahead and create a ubus plugin there, and whoever is unhappy with
> > that may suggest whatever else could be better.
>
> I think we should focus on Phase 1 first to get this stuff properly
> into a generic API in LEDE and then use some generic interfaces to
> expose it to remote hosts.

The above is more about whether polling/monitoring/caching/aggregating
should be taken care of by a to-be-created service, or whether
implementing a collectd input and output plugin for ubus would be
agreeable for everyone...?

Cheers

Daniel

> > Cheers
> >
> > Daniel
> >
> > > > I'll have a deeper look and play with it to see whether that can
> > > > work.
> > > >
> > > > Ideally, data collection (think: interface counters and such
> > > > things which need to be polled) and triggering events (think:
> > > > link status updates) should also be made accessible.
> > > >
> > > > A local database which exceeds UCI state as suggested by Luka
> > > > could also be very useful, e.g. for renewable energy or other
> > > > applications where loss of connectivity should never imply loss
> > > > of collected data.
> > > Makes sense.
> > >
> > > - Felix
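
To illustrate the collectd question above, a rough sketch of what the
polling side could look like, whether it ends up inside a collectd ubus
input plugin or in a small dedicated poller. Again purely hypothetical:
the object name "urthings.sensor" and its "read" method are invented
for this example:

/*
 * Purely illustrative sketch: poll a (hypothetical) sensor object on
 * ubus every few seconds, the way a collectd ubus input plugin or a
 * small dedicated poller might do it. "urthings.sensor" and its "read"
 * method are invented names.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libubox/blobmsg_json.h>
#include <libubox/uloop.h>
#include <libubus.h>

#define POLL_INTERVAL_MS 10000

static struct ubus_context *ctx;
static struct blob_buf req;
static uint32_t sensor_id;

/* receives the reply to "read"; a real poller would cache the values
 * and/or match them against configured thresholds to trigger events */
static void read_cb(struct ubus_request *r, int type, struct blob_attr *msg)
{
	char *json;

	if (!msg)
		return;

	json = blobmsg_format_json(msg, true);
	printf("sensor reading: %s\n", json);
	free(json);
}

/* timer callback: invoke the sensor object and re-arm the timer */
static void poll_cb(struct uloop_timeout *t)
{
	blob_buf_init(&req, 0);
	ubus_invoke(ctx, sensor_id, "read", req.head, read_cb, NULL, 3000);
	uloop_timeout_set(t, POLL_INTERVAL_MS);
}

static struct uloop_timeout poll_timer = { .cb = poll_cb };

int main(void)
{
	uloop_init();
	ctx = ubus_connect(NULL);
	if (!ctx || ubus_lookup_id(ctx, "urthings.sensor", &sensor_id))
		return 1;

	ubus_add_uloop(ctx);
	uloop_timeout_set(&poll_timer, POLL_INTERVAL_MS);
	uloop_run();

	ubus_free(ctx);
	uloop_done();
	return 0;
}

The caching, thresholding and event emission hinted at in the comments
is exactly the part where the to-be-created-service and the
collectd-plugin approaches would differ.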