It has long been a hallmark of our profession to over-achieve. We prepared for Y2K so well that predicted chaos was averted. Even entertaining chaos was rare. 😉
I was personally convinced that we would be 'OK', but I made one particular concession to the doomsday crowd. I collected a few hundred dollars' worth of $5 bills on the grounds that ATMs might be the only source of cash. Didn't want to have to pay $20 each for every little thing. I eventually spent them all at leisure...

J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
[email protected]

-----Original Message-----
From: IBM Mainframe Discussion List <[email protected]> On Behalf Of Vernooij, Kees (ITOP NM) - KLM
Sent: Friday, January 3, 2020 6:43 AM
To: [email protected]
Subject: (External):Re: it was 20 years ago today

Correct. And then yesterday I saw, on a local news site, an article with a suggestive title along the lines of: a lot of fuss, but hardly any problems really occurred. The article itself has a little more nuance, but it is still nice food for headline hunters (useful if your business model consists of getting as many people as possible to generate advertisement hits).

Kees.

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf Of Jeremy Nicoll
Sent: 03 January 2020 15:24
To: [email protected]
Subject: Re: it was 20 years ago today

On Fri, 3 Jan 2020, at 00:49, Joel C. Ewing wrote:
> The problem for us was not "how" to fix a single instance of the
> problem, but finding "where" to fix an unknown number of instances of
> the problem in 1000's of lines of in-house code and in associated data sets.

I think how complex this was depended a lot on how old a site's applications were. It also depended on how long the 'tail' of archived data was.

Suppose you identified all the instances of a single date data column in a single current file that might be read and rewritten by umpteen programs.
If you changed all the programs, you'd also need either to change the data in all the archived files as well, or make the programs able to decide whether they were running in pre-change or post-change mode. But you'd also need to change all the other programs that read those old archive files. And that might have knock-on effects on other files that they manipulated. While doing this, you also had to make sure that - if your change failed - you could back it out and create/recreate all the related files.

It's rapidly obvious that you can't change all old files in sync with a series of program changes and still allow all your old, or not-yet-changed, programs to process the old files (because you changed those). That's why the date-window approach got used. The as-yet-unchanged programs could continue to read all the archived data, while the updated programs could handle the old-format data more intelligently.

If I remember correctly, one of the changes that happened (to customer Ts&Cs) was that we no longer supported customers aged > 100. I have no idea what they were meant to do with their money.

Where I worked (a UK bank), Y2K was a big deal. A colossal amount of work was done in the two or so years beforehand, and on the night the computer centre was heavily staffed - not quite as many of us as on a normal day, but nevertheless lots. We had by then run simulated workloads back and forth across the date change many times, as well as checking how end-of-financial-year / tax-year / calendar-year processing could be expected to go (at e.g. end of March 2000, 5/6 April 2000, 31 Dec 2000). And beyond that, programmes doing 2-year, 5-year etc. historical reports, and those doing forecasting that might not run until some months later, also needed to be thought to be OK.
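[For readers who haven't met it: the date-window approach mentioned above can be sketched in a few lines. This is a minimal illustration in Python rather than the COBOL or PL/I a site would actually have used, and the pivot year of 50 is an arbitrary assumption for the example - each site chose its own cut-off.]

```python
def expand_year(yy: int, pivot: int = 50) -> int:
    """Expand a 2-digit year using a fixed date window.

    Two-digit years at or above the pivot are taken to lie in
    1900-1999; those below it are taken to lie in 2000-2099.
    The pivot of 50 here is illustrative only.
    """
    if not 0 <= yy <= 99:
        raise ValueError("expected a 2-digit year")
    return 1900 + yy if yy >= pivot else 2000 + yy

# Old-format records keep their 2-digit years; updated programs
# interpret them through the window instead of assuming 19xx.
print(expand_year(99))  # 1999
print(expand_year(5))   # 2005
```

The point, as described above, is that the archived data never has to change: unconverted programs still read the old 2-digit fields as before, while converted programs apply the window to place each year on the right side of 2000.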
---------------------------------------------------------------------- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [email protected] with the message: INFO IBM-MAIN
