XMITIP Mailing List now on groups.io
For those of you who use Lionel Dyck's XMITIP REXX program to create email from z/OS, the mailing list for it is now on groups.io. Details at https://groups.io/g/xmitip

Regards,

Mark Regan, K8MTR
CTO1 USNR-Retired, 1969-1991
Nationwide Insurance, Retired, 1986-2017
Facebook: https://www.facebook.com/mark.t.regan
LinkedIn: https://www.linkedin.com/in/mark-t-regan

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Rocket's Git and GitHub Enterprise
I don't agree. z/OS isn't the center of the universe anymore. Most sites run a stack such as Atlassian Jira, Bitbucket, etc., which is used by both z/OS and distributed development teams. The integrations are awesome. Being able to trace a bug ticket to changed lines of code is gold dust. If you don't want to host Bitbucket, you can run it in Atlassian's cloud for as little as $100 a year, depending on users.

I understand that there are political issues between mainframe and distributed folks at a lot of sites, but that's just BS which should be solved by strong leadership at the board level. Everybody needs to be tugging on the same rope!

I've successfully deployed Gitbucket on z/OS, but I don't see the point when we have Bitbucket. We also run Jenkins, Ansible, Artifactory, etc. I work for Rocket, and you would be surprised by how many products that you use every day are now resident in the z/OS UNIX file system and source-controlled by Git.

We do code reviews in Bitbucket, and when we merge into master, Jenkins kicks in to run regression tests, scan code for vulnerabilities, build ESCROW artifacts, etc. If anything fails, the merge is rejected. This is DevOps. It's not just some buzzword or fad; it's a useful methodology for project lifecycles. It's automation of what used to take a resource to run manually.

On 6/08/2021 10:51 pm, kekronbekron wrote:
> So z/OS datasets are still the source of truth, and just a copy is being made into GitHub for visibility from the outside. I'm thinking of implementations that work the other way: running a Git server on Z**, hooking it to the GitHub UI / web service, and using GitHub Actions or other release mechanisms to roll out directly into live Z datasets. I mean live as in the way we normally do it on Z: just hooking GH into the usual current procedures/jobs/REXX on Z.
>
> **Noticed that GitHub Enterprise Server, the offering where you run the GitHub Enterprise servers yourself in 'your' cloud, or on-prem on VMware or OpenStack (lol) KVM... can't actually run on Z. That is, can it even run in Linux on Z, seeing that currently there's only an OpenStack KVM flavour? Z can run KVM instead of z/OS, but who's going to set up KVM just for this?
>
> - KB
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, August 6th, 2021 at 7:04 PM, Pew, Curtis G wrote:
>
>> On Aug 5, 2021, at 11:32 PM, kekronbekron 02dee3fcae33-dmarc-requ...@listserv.ua.edu wrote:
>>> I periodically copy over the current libraries and push the changes to GitHub.
>>> Do you mean push to GitHub and then build/deploy/copy over the current (PARMLIB) libraries using some build workflow?
>> I have a script in my repository that runs commands like "rm sys1.parmlib/*; cp "//'sys1.parmlib'" sys1.parmlib" (where "sys1.parmlib" is a directory in the repository). After running it I commit the changes and then push to GitHub. It's not perfect, but I can get some idea of when a change was made or find an older version of a member that isn't working right.
>>> Why is it not perfect? What would you want to work better?
>> "Perfect" would be if git could manage the actual PDS(E)s, but that seems like a lot to ask for.
>>
>> Pew, Curtis G
>> curtis@austin.utexas.edu
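Pew's PARMLIB-to-git flow above can be sketched as a small z/OS UNIX shell function. This is only an illustration of the steps he describes, not his actual script; the dataset name, commit message, and branch name are assumptions, and the `cp "//'dsn'"` form is the z/OS UNIX cp syntax for copying PDS(E) members out to individual files.

```shell
# Sketch of the snapshot-and-push flow, wrapped in a function so the
# steps can be dry-run on a non-z/OS box by passing "echo" as the runner.
sync_parmlib() {
    runner="$1"                      # "echo" to dry-run, "" to execute
    dsn="sys1.parmlib"               # repo directory mirroring the PDS(E)
    $runner rm -f "$dsn"/*           # drop stale copies of the members
    $runner cp "//'$dsn'" "$dsn"     # PDS(E) members -> one file each
    $runner git add -A "$dsn"        # stage adds, changes and deletions
    $runner git commit -m "snapshot of $dsn"
    $runner git push origin main
}
```

A dry run (`sync_parmlib echo`) just prints the commands, which is handy for reviewing the flow away from z/OS. Using `git add -A` rather than plain `git add` means members deleted from the PDS(E) also show up as deletions in the commit.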
COBOL V5+
We are in the process of migrating from COBOL V4.2 to V6.2. We are using most if not all of the options that relate to testing (e.g. PC, RULES, NC, SSR, etc.) when compiling for test environments. Additionally, we have NOTEST(DWARF) set in both the test and production compile options.

Programmers noticed in CICS test regions that with CEDF ON, when you hit the PROCEDURE DIVISION USING statement, a CICS GETMAIN is executed for every 01 level in the LINKAGE SECTION. At first I assumed this was due to the PARMCHECK option. The manuals say that PARMCHECK adds a string of hex values at the end of COBOL WORKING-STORAGE. I assumed it also did the same for each 01 level in the LINKAGE SECTION, which logically made sense: the compiler would acquire another piece of storage to copy the 01 level into and append the string of hex values, so that when the called program exited, COBOL could test the trailing storage for the string of hex values to determine whether the program had stepped on that storage, before doing a CICS FREEMAIN. Well, you know what happens when you ASSUME.

We went through the list of testing-related compile options, removing them one at a time, compiling, doing a CICS NEWCOPY, and retesting in the CICS region. With no success, I finally tried removing NOTEST(DWARF). Eureka: no more CICS GETMAINs for each of the LINKAGE SECTION 01 levels. Not what I was expecting. None of the documentation suggests that NOTEST(DWARF) would affect runtime; it should only come into play when the program ABENDs.

Has anybody else noticed this behavior? It increases the time for each transaction a great deal. Maybe we can get Mr. COBOL, Tom Ross, to shed some light on this?
Re: Connecting to IMS DB in SpringBoot
Hi, IBM folks,

We are from the Ford group. We were advised by IBM to follow the instructions from GitHub - imsdev/ims-java-springboot to set up a connection to IMS transactions, but we have encountered some issues.

We are not able to generate the Java class from the COBOL copybook using the J2C CICS/IMS Data Binding wizard as required. We got the error:

com.ibm.adapter.framework.BaseException
Reason: IWAA0654S: Missing template file in the plugin directory. null

We are working with the IBM consulting group trying to find a solution, but I wanted to post this here in case someone can help. Please help! (Also, this is my first time using the listserv, and I really hope this mail reaches you experts.) Thanks.
Re: Rocket's Git and GitHub Enterprise
Hi David,

Interesting post. I don't agree either; I think it is more of a multiverse. I think phenomena like SolarWinds are a good reason to host your own and not rely on the admin quality of another vendor's cloud hosting. (AWS has to employ some of the best cloud admins in the world, yet they unwittingly helped facilitate SolarWinds.) I rather doubt AWS hosts their clouds on Z.

I may have just been fortunate as far as ChangeMan procedures and the software in use, but tracking bugs back to changed lines of code has rarely been an issue.

Your DevOps implementation sounds quite interesting and worth looking into.

Mike

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of David Crayford
Sent: Sunday, August 8, 2021 7:36 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Rocket's Git and GitHub Enterprise