On 1/30/06, Chris Knipe <[EMAIL PROTECTED]> wrote:

> Thanks for the suggestion. This has been recommended to me by someone off
> the list as well (or something relatively close to it), and unfortunately it
> is not going to be very efficient. It's going to kill the system as far as
> disk IO is concerned. I'm talking about 200+ variables here, about half of
> which will change approximately every 10ms (some even less). Doing 100-odd
> disk writes/reads every 10ms, plus more than likely searching through open
> files for a specific variable, and closing it in time so that it can be
> written again... I don't think it will be feasible.
>
> I'll need to do this in memory, I'm afraid... :(
Nothing about this suggestion precludes that. One way to handle this would be for you to make a "data server" that does, as you say, keep your 200 variables in memory. (It could, of course, write those variables to a file when it shuts down, and read them from the file when it starts up again. It could even be clever enough to periodically save the data while it's running.) Every other program communicates with the data server to work with the data, probably via a module. The data server provides quick access to the data, ensures that data updates can be made atomically when needed, and perhaps even puts restrictions on the data to ensure data integrity.

Because there will be some overhead associated with opening a connection to the data server, client programs will probably prefer to open a single connection that stays open for the lifetime of the application.

If you make your data server especially full-featured, you could call it a database. But I know you don't want to use a database, because that would be too slow for your needs, so feel free to call it something else. :-)

Cheers!

--Tom Phoenix
Stonehenge Perl Training

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>
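The data-server idea above could be sketched roughly like this in Perl. The hash holds the variables in memory, and clients speak a tiny line-based protocol; the command names (GET/SET) and variable names are hypothetical, invented here for illustration. A real server would read these lines from a socket (e.g. IO::Socket::UNIX or IO::Socket::INET) held open for the client's lifetime, as described above.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The 200-odd variables live here, in memory.
my %data;

# Core request handler for a hypothetical line protocol:
#   "GET name"        -> current value (empty string if unset)
#   "SET name value"  -> stores the value, replies "OK"
sub handle_request {
    my ($line) = @_;
    chomp $line;
    if ($line =~ /^GET (\S+)$/) {
        return defined $data{$1} ? $data{$1} : '';
    }
    elsif ($line =~ /^SET (\S+) (.*)$/) {
        # A single hash assignment, so no client ever reads
        # a half-written value.
        $data{$1} = $2;
        return 'OK';
    }
    return 'ERR';
}

# In the real server this loop would read from an accepted socket
# connection instead of a literal list of requests:
print handle_request($_), "\n" for ("SET rpm 3000\n", "GET rpm\n");
# prints OK, then 3000
```

Since the hash updates happen in one process, the server also gets atomicity for free; clients never race each other on the file system the way they would with 100-odd disk writes every 10ms.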