So if I understood you correctly, this tool is meant as a universal logger and retriever of lab DAQ data. Your scientists seem to be satisfied with a SubVI that returns all data written at some earlier time, independent of what equipment was used and how it was composed into a complete system.
This SubVI's data is then, later, used to individually analyse or process those data, right?

This is partly true... the sub-VI that reads the files actually returns a variant. This variant is a cluster of the same type as the rows in this particular file. To use the data in LabVIEW, you must explicitly define a cluster of the same type to convert the variant to LabVIEW data. And my original question was aimed at simplifying the task of creating this cluster.
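Since LabVIEW block diagrams can't be shown inline here, a rough text analogue of that Variant To Data step, sketched in Python, may make the idea concrete. The header string and field names below are invented for illustration; the point is that the row type is known only from the file's header, and a matching typed record must be built to get at the data by name.

```python
from collections import namedtuple

# Hypothetical header line describing the row type, as it might
# appear in one of these files (names are made up for illustration).
header = "timestamp:float, voltage:float, channel:int"

# Build a concrete record type from the header -- the text analogue of
# defining a cluster to match the file and wiring up Variant To Data.
fields = [f.split(":")[0].strip() for f in header.split(",")]
Row = namedtuple("Row", fields)

raw_row = (1234.5, 0.0072, 3)   # the "variant": shape known only via the header
row = Row(*raw_row)             # typed record, fields addressable by name
print(row.voltage)              # -> 0.0072
```

In LabVIEW the analogous cluster typedef has to exist on the diagram at edit time, which is exactly why creating it by hand is the tedious part.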



If this is true, you should at least also include some time stamps and logs of what VI wrote a given file, what instruments in what modes have been used, and so on. There may be a huge difference between, say, a 4.5-digit DMM and a 6.5-digit one, although both will provide an array of Dbl into your cluster.

Yes, this should be included... in the same header that defines the data type of the rows, there is also information about what equipment was used, which scientist took the data, any comments that seemed important at the time the data was acquired and so on. As far as time stamping... we usually time stamp each of the rows... our lab is controlled by a PLC and a supervisory computer archives all of our tags to a database. Therefore, with a time stamp it is possible to retrieve all of the known conditions in the lab... (magnetic fields, pressures, temperatures, etc)



My original question was aimed at _how_ you would process that newly created cluster when you do not know anything about it at development time. But in your case the key seems to be to make those data available again for some kind of interactive analysis using LV.

This is half of it... in some cases, the data acquisition was "ad hoc" so we would not have any existing tools to handle this specific type of data... (we do however have a tool that will let you view any of these files, regardless of what is in them.) And in this ad hoc state, the scientists would be doing interactive analysis on the data. This situation is very common during the initial phases of many of our experiments, when we do not yet know exactly which instruments we will need, or what sort of data will ultimately appear in our papers.


The other half is when we have predefined row types... this is more common in mature experiments... and in this case we could write reusable code to manipulate/analyze these files. In fact, we usually turn such things into finished applications.

The "magic" is in the file format, which is flexible enough to allow for these ad hoc things, but always writes the data in the same way to the file (which is how we can have a universal viewer). And since the file includes all the metadata required to make sense of the data we never have to wonder what is in the file, or how to parse it.
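The self-describing layout described above can be sketched in a few lines of Python. This is only an analogue of the idea, not the actual format: a metadata header (equipment, operator, comments, row format) is written first, followed by uniform fixed-size rows, so a single generic reader works on any such file. All field names and the length-prefix scheme are assumptions for illustration.

```python
import json
import struct

def write_file(path, meta, row_fmt, rows):
    """Write a hypothetical self-describing data file:
    a length-prefixed JSON header, then packed binary rows."""
    header = dict(meta, row_fmt=row_fmt)
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(blob)))   # header length prefix
        f.write(blob)                           # metadata + row type
        for r in rows:
            f.write(struct.pack(row_fmt, *r))   # every row, same layout

def read_file(path):
    """Generic reader: needs no prior knowledge of the row type,
    because the header says how to parse the rows."""
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("<I", f.read(4))
        header = json.loads(f.read(hlen))
        size = struct.calcsize(header["row_fmt"])
        rows = []
        while chunk := f.read(size):
            rows.append(struct.unpack(header["row_fmt"], chunk))
    return header, rows
```

Because the reader consults only the header, it plays the role of the "universal viewer": it can open a file written by an ad hoc experiment just as easily as one from a mature, predefined row type.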

George




George Gatling Applied Technology Division, SFA Inc. Space Physics Simulation Chamber US Naval Research Laboratory 202-404-5405 (phone) 202-767-3553 (fax)

If trees could scream, would we be so cavalier about cutting them down?
We might, if they screamed all the time, for no good reason. --Jack Handy





