http://www.computerworlduk.com/infrastructure/lzlabs-promises-end-mainframe-migration-woes-with-software-defined-approach-3645686/
seems enthralled with LzLabs, but as far as I can see the article doesn't
really shed any light on how the product actually works.
Consider statements like:
*Yet, while considered robust and reliable for certain uses, mainframes are
costly to maintain and difficult to support, particularly due to the
imminent retirement of those with knowledge of a system’s inner workings.*
OK, we can debate this (and have), but then:
*Cresswell described the migration process: “When an application is moved
from the mainframe into our environment we don't recompile it or anything
like that. We literally take the binary code that comes off the mainframe
environment,” Cresswell explained.*
How does this help with the maintenance issue? Do you keep a real z for a
dev platform?
Next graf says:
*“At the time we put it into the container we replace all the APIs with
contemporary ones that reference our software defined mainframe container.”*
Um, right. So a statement like
   L     R3,540          Get TCB address
(a raw load of PSATOLD out of the PSA -- not an API call anything could
intercept) is going to get replaced? Or just replicated/emulated? Or are
they going to emulate all of the data structures in z/OS?
Or is this all a shell game, and it's really just Herc in the cloud?
I'm not opposed to someone doing something to shake things up. But the lack
of detail from Lz is starting to smell like PSI (Platform Solutions) redux.
--
zMan -- "I've got a mainframe and I'm not afraid to use it"
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN