I've had issues like this in a previous life (I was a founder of
Tideway Systems and we put a lot of effort into finding info from the
runtime environment and reasoning about what the returned info meant).
IMO, you've got to be clear about what underlying information model
puppet / facter supports. In particular, if you simply say that the
facts are the data reported by the underlying tools, then you've got
zero abstraction in the model and it's an exercise for the user to
handle the differences between platforms. Alternatively, you can
define a canonical ontology and specify how the different tools map
onto that ontology. Even with such an ontology, you probably need to
include platform-specific types in the data model.
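To make the second option concrete, here's a minimal sketch (all names
and field mappings are hypothetical, not facter's actual API or
output): each platform's tool-reported fields are mapped onto a small
canonical ontology, with unit normalisation where needed, and anything
without a canonical slot is kept as a namespaced platform-specific
fact rather than dropped.

```python
# Hypothetical sketch: map raw output from platform tools onto a
# canonical fact ontology, keeping platform-specific leftovers.

# Canonical keys every platform must map onto (hypothetical ontology).
CANONICAL_KEYS = {"os_name", "os_version", "memory_total_bytes"}

# Per-platform mapping: tool-reported field -> canonical key.
PLATFORM_MAPPINGS = {
    "linux": {"NAME": "os_name", "VERSION_ID": "os_version",
              "MemTotal_kB": "memory_total_bytes"},
    "windows": {"Caption": "os_name", "Version": "os_version",
                "TotalPhysicalMemory": "memory_total_bytes"},
}

# Unit conversions needed to normalise values into the canonical model.
CONVERTERS = {
    ("linux", "MemTotal_kB"): lambda v: int(v) * 1024,
    ("windows", "TotalPhysicalMemory"): int,
}

def normalise(platform, raw):
    """Return (canonical_facts, platform_specific_facts)."""
    mapping = PLATFORM_MAPPINGS[platform]
    canonical, extra = {}, {}
    for field, value in raw.items():
        if field in mapping:
            convert = CONVERTERS.get((platform, field), lambda v: v)
            canonical[mapping[field]] = convert(value)
        else:
            # No canonical slot: keep it, namespaced by platform,
            # instead of silently losing information.
            extra[f"{platform}::{field}"] = value
    return canonical, extra

canonical, extra = normalise("linux", {
    "NAME": "Ubuntu", "VERSION_ID": "22.04",
    "MemTotal_kB": "16384", "SELinux": "enforcing",
})
```

The point of the namespaced leftovers is that the ontology can stay
small and stable while still giving users a way to reach
platform-specific data when they knowingly want it.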

It is usually useful to separate the data returned by interrogation
from how facter has interpreted it, not least for debugging and
provenance purposes.
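A sketch of that separation (again, hypothetical names, not facter's
actual data structures): each fact record carries the verbatim tool
output and its source alongside the interpreted value, so a surprising
value can be traced back to what the machine actually reported.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    name: str       # canonical fact name
    value: object   # the tool's interpretation of the raw data
    raw: str        # verbatim output from the underlying tool
    source: str     # which tool or file was interrogated

def interpret_memtotal(raw_line):
    """Interpret a /proc/meminfo MemTotal line into bytes."""
    # raw_line looks like: "MemTotal:       16384 kB"
    kb = int(raw_line.split()[1])
    return Fact(name="memory_total_bytes", value=kb * 1024,
                raw=raw_line, source="/proc/meminfo")

fact = interpret_memtotal("MemTotal:       16384 kB")
```

When the interpreted value looks wrong, `raw` and `source` tell you
immediately whether the bug is in the interrogation or in the
interpretation.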

FWIW, I'm also a big fan of encouraging best practice in the use of
the tools, so in this instance the teaching/documentation would show
how to avoid the naming pitfalls introduced by differences in
standards, and how to remediate an environment that has fallen into
such a trap. Otherwise, the tools get bogged down in handling nasty
inconsistencies that are impossible to cope with cleanly in code,
because they depend on implicit or explicit customer organisational
policies; the tool gets blamed for any shortfalls, while the
organisation keeps digging itself deeper into the trap.
