Let's talk a bit about my world here. We created a product to link medical equipment using a variety of protocols: some devices talk HL7 or DICOM, others use proprietary means to expose data from various places.
From the start we chose the richest data representation; some of it came from a few medical standards, and another part is our own creation to carry things that fall outside the scope of the business field (non-medical data). We did not define specific records. Instead we factored out the protocol definition (the contract) from the representation(s) and defined it using persistent data structures and Clojure protocols. The profile is augmented often: new fields are added to keep up with new devices, new standards, and so on. We never remove anything from protocol definitions, and we always aim to generate fully populated messages from the data available at a specific endpoint. It's all data (more than 10k lines for the profile, written in our custom DSL) and less than 500 lines of code to define and handle it. (Simplified sketches of these pieces follow below.)

We have three more or less equivalent message representations: one is string based (HL7-enabled devices accept this representation), another is for serialization, and the last is a "full" representation with each data component populated with all its attributes from the profile (name, relative position, ...). All of these representations let you switch from one to the other. We even have a pretty printer so we can decipher messages when investigating bugs. We also have a DSL to create messages from code, to promote brevity and spare us from having to precisely order fields when writing code.

None of the above uses defrecord or deftype. We use Clojure's persistent structures and protocols to implement how to handle these various formats. Roughly, we do this in less than 1000 lines split across 4 namespaces. This code never refers to a specific piece of data by its name. It's all data driven; the contract is data in our internal protocol profile definition. Validation, searching the metadata of the profile, and so on are done within the profile definition. The namespaces supporting the different representations all rely on the profile definition; they never refer to specific data items.

Some devices do not support all the fields, or use older versions of the protocols compared to what we implemented internally. We have a rule engine, with rules attached to specific device profiles, that alters the common format message so we can spit out a valid message the device can understand. We implemented a Clojure protocol that lets us manipulate a message using generic fns to strip items, swap items from one spot to another, and so on. These rules are applied on top of a generic message just before sending to, or just after receiving from, the device. We do not have to recode messages for each device type. This is the only place, aside from message generation, that will ever refer to specific data items.

What about maintenance? Why remove a data item from the profile? If ever necessary, we can flag it as unusable and the rest will carry on. Any attempt to refer to it will trigger an exception. Searching the code base for references to it is no more difficult than finding getters, and we know it's a required change: either the item disappeared or it changed. We can remove it sometime in the future; there is no need to rush. It also eases the maintenance of the device-specific rules: as long as they do not refer to the disabled data item, they keep working.
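To give a feel for the shape of this, here is a tiny, hypothetical sketch of the profile-as-data idea, including the "flag a field as unusable" part. The real profile is far richer and lives in our own DSL; every name below is made up:

  (def profile
    {:msg/adt-a01
     {:fields [{:id :patient/id   :pos 1 :name "Patient ID"   :status :active}
               {:id :patient/name :pos 2 :name "Patient Name" :status :active}
               {:id :patient/ssn  :pos 3 :name "SSN"          :status :disabled}]}})

  (defn field-meta
    "Look a field up in the profile; throw if unknown or flagged unusable."
    [profile msg-type field-id]
    (let [f (some #(when (= (:id %) field-id) %)
                  (get-in profile [msg-type :fields]))]
      (cond
        (nil? f)                  (throw (ex-info "Unknown field"  {:field field-id}))
        (= :disabled (:status f)) (throw (ex-info "Disabled field" {:field field-id}))
        :else f)))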
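The representation code never names fields; it walks the profile. A stripped-down sketch of the string (wire) rendering, under the same made-up names:

  (require '[clojure.string :as str])

  (defn ->wire
    "Render a message map as a delimited string, ordering components by
    their :pos in the profile. Disabled fields are simply skipped; no
    field is ever referred to by name here."
    [profile msg-type msg]
    (->> (get-in profile [msg-type :fields])
         (filter #(= :active (:status %)))
         (sort-by :pos)
         (map #(str (get msg (:id %) "")))
         (str/join "|")))

  ;; (->wire profile :msg/adt-a01 {:patient/id "12345" :patient/name "Doe^John"})
  ;; => "12345|Doe^John"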
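The message-building DSL can stay thin because the profile supplies ordering and validation. Something in this spirit (again hypothetical, reusing field-meta from the first sketch):

  (defn message
    "Build a message from key/value pairs given in any order; each key is
    validated against the profile, so an unknown or disabled item throws."
    [profile msg-type & kvs]
    (let [m (apply hash-map kvs)]
      (doseq [k (keys m)]
        (field-meta profile msg-type k))   ; see the first sketch
      (with-meta m {:msg/type msg-type})))

  ;; Field order at the call site does not matter:
  ;; (message profile :msg/adt-a01 :patient/name "Doe^John" :patient/id "12345")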
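And the device-specific rules are just generic message transformations composed around send/receive. A minimal sketch, assuming a map-shaped message and made-up field ids:

  (defn move-field
    "Generic rule: move a value from one field id to another, e.g. for a
    device that expects the data in an older spot."
    [from to]
    (fn [msg]
      (if-let [v (get msg from)]
        (-> msg (dissoc from) (assoc to v))
        msg)))

  (def legacy-monitor-rules
    [#(dissoc % :patient/ssn)                   ; device predates this field
     (move-field :patient/id :patient/alt-id)])

  (defn apply-rules
    "Apply a device's rules to the generic message just before sending
    (or just after receiving)."
    [rules msg]
    (reduce (fn [m rule] (rule m)) msg rules))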
We get a uniform way of dealing with messages, the ability to choose the most efficient representation when needed, uniform naming, and a firewall preventing us from specifying invalid data items, all without writing a single internal line of code that refers to specific data elements. It's mostly done with data definitions... the key is using proper data abstractions. I agree that if none exist you have to create them, but that is where the secret sauce is. Refrain from writing tons of code if your data abstractions are not well defined; experiment with them and tune them beforehand.

Before this era, we used to have classes wrapping data items and so forth. It was a pain in the ass to navigate and modify this stuff each time we needed to enrich our data model, with all these wrappers around us, notwithstanding dependency management. Now, the contract is the data. End of story.

Luc P.

> On Monday, June 18, 2012 8:01:47 AM UTC-6, tbc++ wrote:
>
> > > Isn't that just creating an API? Everywhere the old model exists you
> > > need to call a function to create the desired data structure, and this
> > > adds another layer of complexity that needs to be maintained. Not all
> > > conversions are straightforward; some may require additional context,
> > > introducing the need for deferred computation.
> >
> > Can you provide a concrete example (with code perhaps)? I guess I'm not
> > seeing where the problems arise. In this situation I normally either
> > 1) create a new "fetch" function to create a new map of data in the new
> > format, 2) provide duplicate data in the map that's computed differently,
> > or 3) update the old code to the new map.
> >
> > Here at work we have a 20 year old application that is almost 100%
> > backwards compatible (the server can talk to much older clients). And it
> > does so by using pure data. The server has set up contracts saying "this
> > is what Project data looks like, this is what Company data looks like",
> > and those contracts never change. The 3D software Blender (blender.org)
> > operates on this same principle. After almost a decade, their latest
> > software can open 1.0 .blend files, and vice versa. All these
> > applications work off of unchanging data contracts.
> >
> > So I'm interested in seeing concrete examples of how your application
> > differs.
>
> It's not the external contracts that are changing but the internal
> contracts between namespaces... as I add features I add new requirements,
> clarify my understanding, and decide to replace them with new ones. I wish
> I could say "just don't change them", but the internal contracts are
> "implementation details"; no one uses them but me. If I decide to change
> them, stuff breaks, and that would be true in OO too. But in OO I know how
> to go about these refactorings incrementally until the old abstractions
> fall away and become dead code; in Clojure, new abstractions mean lots of
> breaking changes, and nothing works until everything works.
>
> I wouldn't mind sitting down with someone in the Denver area and getting
> some feedback on my code. Eventually I would like to open source the code,
> but I don't expect that to happen anytime in the near future.
>
> > Timothy
--
Softaddicts<lprefonta...@softaddicts.ca>
sent by ibisMail from my ipad!