John McKown wrote:
On Mon, 2 Feb 2004, Wiggins d Anconia wrote:


Sounds pretty good to me. One concern, do the sub record types always
have the same number of fields? Using your array to unpack into may
turn into a maintenance nightmare with respect to indexing into it to
get values if the record formats are significantly different, etc.


Actually, that was only an example. I really hope to have the result returned more like:

if ($subrec eq '0100') {
($name, $address, $city) = unpack $template{$subrec}, $_;
} elsif ($subrec eq '0101') {
($some1, $some2) = unpack $template{$subrec}, $_;
}


and so on for each defined $subrec.
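A minimal sketch of that dispatch, using `eq` for the string comparison (a plain `=` would be an assignment and always true). The templates and field widths here are invented for illustration, not taken from the real RACF unload format:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical fixed-width templates keyed by subrecord type;
# the field widths are invented for illustration.
my %template = (
    '0100' => 'A4 A8 A10 A10',   # subrec, name, address, city
    '0101' => 'A4 A5 A5',        # subrec, some1, some2
);

my @sample = (
    '0100JOHN    123 MAIN  ANYTOWN',
    '0101AAAAABBBBB',
);

my ($name, $address, $city, $some1, $some2);
for (@sample) {
    my $subrec = substr $_, 0, 4;
    if ($subrec eq '0100') {
        (undef, $name, $address, $city) = unpack $template{$subrec}, $_;
        print "0100: $name / $address / $city\n";
    } elsif ($subrec eq '0101') {
        (undef, $some1, $some2) = unpack $template{$subrec}, $_;
        print "0101: $some1 / $some2\n";
    }
}
```

The `A` template codes strip trailing blanks, which is usually what you want with padded fixed-width fields.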


That works, though you have to repeat your unpack over and over (not a big deal). Using the slices you only need it once and don't have to check the subrec type, though you will again when you use the values... unless you push each record onto an array in a hash where the key is the subtype, and then just loop over each of the different types. That might make the code more modular, granted the data structure would be more complicated (and unordered at that point).
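A sketch of that hash-of-arrays grouping, again with invented templates and sample data; note that each subtype's records end up together and the original interleaved order is lost:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented templates for illustration.
my %template = (
    '0100' => 'A4 A8 A10',
    '0101' => 'A4 A5 A5',
);

my @sample = (
    '0100JOHN    123 MAIN',
    '0101AAAAABBBBB',
    '0100JANE    456 OAK',
);

# Group unpacked records by subtype; order across subtypes is lost.
my %by_type;
for my $line (@sample) {
    my $subrec = substr $line, 0, 4;
    next unless exists $template{$subrec};
    my (undef, @fields) = unpack $template{$subrec}, $line;
    push @{ $by_type{$subrec} }, \@fields;
}

# Later, one loop per subtype instead of per-record type checks.
for my $subrec (sort keys %by_type) {
    print "$subrec: @$_\n" for @{ $by_type{$subrec} };
}
```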



Second concern, are you processing the records completely within the
loop or needing to parse them all before doing anything with them?  In
the latter case you may need to store them to an array based on type
rather than directly to a 'values' temporary array, etc.


I will be processing the records one at a time and putting them in "persistent storage" for retrieval later by a reporting program. I have not yet determined what sort of "persistent storage" I want. Perhaps DBM, perhaps PostgreSQL, perhaps MySQL, <whatever>.

I may end up not even doing this since PostgreSQL, at least, has a way to load records from a "flat file". I just like to leave my options open. And I'm looking at Perl solutions right now mainly because I'm trying to learn Perl.


MySQL can load flat files as well, though I don't know about formatted files like you describe.
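One common bridge is to unpack the fixed-width records in Perl and emit a tab-delimited file, which MySQL's LOAD DATA INFILE (or PostgreSQL's COPY) can ingest directly. A minimal sketch, with an invented template:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented fixed-width template for illustration.
my $template = 'A4 A8 A10';

my @sample = (
    '0100JOHN    123 MAIN',
    '0100JANE    456 OAK',
);

# Emit tab-delimited rows that LOAD DATA INFILE / COPY can ingest.
my @rows;
for my $line (@sample) {
    my (undef, @fields) = unpack $template, $line;
    push @rows, join "\t", @fields;
}
print "$_\n" for @rows;
```

Written to a file, those rows could then be loaded with something like `LOAD DATA LOCAL INFILE 'racf.tsv' INTO TABLE racf_data;` (file and table names hypothetical).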


<off-topic>
Also, if I find a "nice" Perl solution, I may implement it "in production" on our mainframe (IBM zSeries) at work. The actual data being parsed is a RACF (security system) database unload. If I can ftp that data from z/OS to our Linux/390 system and do all my reporting there, I can save z/OS CPU utilization. That's because Linux/390 on our zSeries runs on a separate processor from the z/OS work. The z/OS work cannot use this processor due to licensing restrictions. So, any work that I can "offload" from z/OS is a net gain because the IFL (Linux processor) is basically idle right now. I would then use Perl to create reports which would then be ftp'ed back to the z/OS system. This gets me "brownie points" by offloading z/OS processing. We are critically short of z/OS processor power and the next upgrade would cost 1.5 million dollars in software "upgrade" fees.



Yikes, I understood just enough of that to know that I am running for the hills :-)... Though I will say that it should be doable, and I assume you have checked out Net::FTP...
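For what it's worth, a Net::FTP sketch of the fetch step (Net::FTP ships with the core libnet distribution). The host, credentials, and dataset name below are placeholders; quoting the dataset name is how z/OS FTP servers are told it is fully qualified:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;   # part of the core libnet distribution

# Fetch a dataset from z/OS; host, credentials, and dataset name
# are placeholders, not real systems.
sub fetch_unload {
    my ($host, $user, $pass, $dataset, $local) = @_;
    my $ftp = Net::FTP->new($host)
        or die "Cannot connect to $host: $@";
    $ftp->login($user, $pass)
        or die 'Cannot login: ', $ftp->message;
    $ftp->ascii;    # ASCII-mode transfer gets EBCDIC-to-ASCII translation
    $ftp->get($dataset, $local)
        or die 'Get failed: ', $ftp->message;
    $ftp->quit;
    return $local;
}

# Only attempt a real transfer when explicitly enabled.
fetch_unload('zos.example.com', 'userid', 'password',
             "'RACF.UNLOAD.DATA'", 'racf_unload.txt')
    if $ENV{RUN_FTP_FETCH};
```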


If this works for the database unload, I can use a similar system for RACF reports run against the "reformatted audit logs". Again, getting "brownie points" for offloading work.

This is why I'm considering a Perl-only solution. I have Perl on our SuSE Linux/390 system. I do not have any SQL database and am not really good enough to try to port something like PostgreSQL or MySQL.


If you decide against using a "real" database, you might consider some of the CSV text file modules. There is even a DBD::CSV that lets you use "real" SQL through the DBI, so if you do port to a database in the future you won't have to change the code, though it is not speedy by any means. There is also XML, but that is all I will say for now :-)....
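A sketch of the DBD::CSV route; DBD::CSV is a CPAN module (not core Perl), and the table and column names here are invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# DBD::CSV is a CPAN module (not core); bail out quietly if absent.
BEGIN {
    eval { require DBD::CSV; 1 }
        or do { warn "DBD::CSV not installed, skipping\n"; exit 0 };
}
use DBI;

# Table and column names are invented for illustration.
my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
    f_dir      => '.',   # directory that holds the CSV "tables"
    RaiseError => 1,
});

$dbh->do('CREATE TABLE racf_users (userid CHAR(8), name CHAR(20))');
my $ins = $dbh->prepare('INSERT INTO racf_users VALUES (?, ?)');
$ins->execute('JMCKOWN', 'John McKown');

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM racf_users');
print "rows: $count\n";

$dbh->do('DROP TABLE racf_users');
$dbh->disconnect;
```

Because the SQL goes through the DBI, swapping the connect string for `dbi:Pg:` or `dbi:mysql:` later leaves most of the code untouched.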


</off-topic>

For the first concern you may consider using a hash slice with the keys
being associated with the subtype stored in the original hash where you
retrieve the record format from.
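In code, that hash-slice suggestion might look like this; the templates and field-name lists are invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented templates and field-name lists, keyed by subrecord type.
my %template = (
    '0100' => 'A4 A8 A10 A10',
    '0101' => 'A4 A5 A5',
);
my %fields = (
    '0100' => [qw(subrec name address city)],
    '0101' => [qw(subrec some1 some2)],
);

my @sample = (
    '0100JOHN    123 MAIN  ANYTOWN',
    '0101AAAAABBBBB',
);

my @records;
for my $line (@sample) {
    my $subrec = substr $line, 0, 4;
    next unless exists $template{$subrec};
    # Hash slice: one unpack assigns every field to a named key.
    my %rec;
    @rec{ @{ $fields{$subrec} } } = unpack $template{$subrec}, $line;
    push @records, \%rec;
}

print "$records[0]{name} lives in $records[0]{city}\n";
```

The payoff is that later code says `$rec->{name}` instead of remembering which array index holds what, which sidesteps the indexing maintenance worry above.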



Good idea. I'll keep it in mind.

thanks much!


Good luck,


http://danconia.org

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>



