Alan DeKok <al...@deployingradius.com> wrote:
    > Networks are generally organized by configuration, not by state.
    > i.e. the "state" of the network, such as it is, is buried inside a
    > random grab-bag collection of configuration files and running data
    > structures, on multiple systems, in multiple formats. There is no
    > way to say "move to state X", or even to query what state the
    > network is currently in.
That's an interesting and rather profound observation, I think.

90% of debugging (of both programs and networks) is about getting the
right set of observations. A difficulty is that it's so hard to capture
the state. (There were LISP debuggers which could step forward *AND*
backwards...) The fact that ptrX is NULL at the point where it is
dereferenced is not interesting or useful information: the point where
it should first have become non-NULL is the interesting point.

In debugging network configurations, one tries to get as many
observation points as possible. A key challenge in many systems (Linux,
*BSD, and all hardware) is that packets disappear between observation
points, often without clearly leaving a trace. I've often had to debug
Linux iptables filters by observing which counters increment and which
don't. (The replacement nftables' killer app is perhaps that the
counters are available in JSON, so one should be able to automate this
much more easily; a minimal sketch follows after my signature.)

What does all of this mean for ANIMA? We've had SNMP support in all
sorts of things for ages. I'm still amazed at how often it's not used.
So many interesting dotcoms were going to do amazing things with it
back in 1999/2000, then went bust, and we never saw those things again.

We need basic data collection in a free-to-install module which feeds a
common database, and then we need AI/heuristic systems that people can
send their data to for "help". Those things might cost $$$, but if you
have a problem, then maybe you are willing to pay at the time of
disaster. And it's never the people at well-managed companies who have
the problems.

This is where I think the ACP wins: if it permits collection of data
from all sorts of systems simply and consistently (particularly when
connectivity is poor).

Is there some way to support the analysis of the data in another
fashion? Ad-driven? I doubt that would work. While some IoT stuff can
be driven by targeted marketing (like the person who gets diaper and
baby-formula ads after buying a pregnancy test), I don't know how that
would work for ISPs and enterprises with problems. Maybe.

Maybe it will become some kind of welfare situation, recognizing that
many network anomalies are precursors to attacks? If so, assuming that
we can even figure out what the network state is, do we have any chance
of anonymizing the data?

--
Michael Richardson <mcr+i...@sandelman.ca>   . o O ( IPv6 IoT consulting )
           Sandelman Software Works Inc, Ottawa and Worldwide
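PS: the nftables counter automation I alluded to above. This is a
minimal sketch under my own assumptions, not a tested tool: it shells
out to "nft -j list ruleset", takes two snapshots, and reports which
rule counters moved in between. The JSON key names ("nftables", "rule",
"expr", "counter", "handle") are what I believe recent nft versions
emit; check against yours.

#!/usr/bin/env python3
# Sketch: diff nftables rule counters between two snapshots, to see
# which rules the traffic actually hit. Assumes the JSON shape
# produced by "nft -j list ruleset" (key names are my assumption).
import json
import subprocess
import time

def counter_snapshot():
    """Return {(family, table, chain, handle): packets} for every
    rule that carries an anonymous counter statement."""
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True)
    snap = {}
    for item in json.loads(out.stdout).get("nftables", []):
        rule = item.get("rule")
        if not rule:
            continue
        for expr in rule.get("expr", []):
            ctr = expr.get("counter")
            if isinstance(ctr, dict):   # anonymous counter statement
                key = (rule["family"], rule["table"],
                       rule["chain"], rule["handle"])
                snap[key] = ctr["packets"]
    return snap

before = counter_snapshot()
time.sleep(10)    # reproduce the problem traffic in this window
after = counter_snapshot()

for key, pkts in sorted(after.items()):
    delta = pkts - before.get(key, 0)
    if delta:
        print("%s %s %s handle %s: +%d packets" % (*key, delta))

Run it once while the problem traffic is flowing: any rule whose
counter didn't move is a rule your packets never reached, which is
exactly the "where did it disappear between observation points"
question above.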