Personally, I only recommend doing this if you're a heavy consumer of the REST APIs and want them to be highly available. Otherwise, let one instance service the plex; z/OSMF is enough of a resource hog as it is. I'm going off memory, but this is how I did it at my old shop, purely for HA REST APIs:
IZUSVR1: This was our "engineering" instance, with all of the plug-ins enabled. Home directory: /global/zosmf

IZUSVRHA: This was launched on every LPAR with only the z/OSMF nucleus (purely for the REST APIs). Home directory: /$SYSNAME/var/zosmf

For "1" and "HA" we had separate parmlib members (IZUPRM1 and IZUPRMHA), which was also reflected in IEASYS. We had our HA servers set to SERVER=STANDALONE and IZUSVR1 set to AUTOSTART. To be honest, AUTOSTART is kind of pointless since your automation utility is going to manage the startup/shutdown anyway, but I think setting your server to AUTOSTART gets you some additional functionality.

A little tip: if you're moving your z/OSMF instance around from LPAR to LPAR with the same user directory (default /global/zosmf), it's not a bad idea to change the owning system of that filesystem with a chmount -D. I've run into performance issues when the owning LPAR of the filesystem is not the one running the started task, specifically with the network configurator. A good way to do this is to make the first step of your IZUSVR proc a BPXBATCH step that performs the chmount -D; that way, whenever the server launches, the first thing it does is change ownership of the filesystem.

On Fri, Dec 2, 2022 at 12:04 PM Dave Jousma <[email protected]> wrote:

> On Fri, 2 Dec 2022 10:07:33 -0600, Carmen Vitullo <[email protected]> wrote:
>
> > We do share zfs's, but maybe my choice of SERVER options and the
> > autostart group was flawed.
> >
> > I really don't have a need to start multiple servers; for testing, I
> > take the lazy route: shut down the prod server for a while, and start
> > the new instance on my test LPAR when I need to test.
> >
> > Carmen
> >
> > On 12/2/2022 9:42 AM, Michael Babcock wrote:
> > > I could never get a single instance with multiple LPARs connecting
> > > to the same server because we do not share our ZFS datasets,
> > > specifically the one mounted at /global.
> > >
> > > Hopefully I was just doing something incorrectly.
> > > I use 1 server per LPAR and each has its own autostart group.
> > >
> > > On Fri, Dec 2, 2022 at 7:46 AM Carmen Vitullo <[email protected]> wrote:
> > >
> > > > I've only tried this once, on my test LPAR when I'm installing a
> > > > new OS release or maint, without much success; I've followed the
> > > > guide but I must be missing something.
> > > >
> > > > In the started task there are 2 startup options:
> > > >
> > > > SERVER=STANDALONE <- I use to test
> > > >
> > > > and
> > > >
> > > > SERVER=AUTOSTART <- I use for prod
> > > >
> > > > There are some IZUPRMxx changes; I can provide some examples of
> > > > what I've used for my second LPAR.
> > > >
> > > > Carmen
> > > >
> > > > On 12/1/2022 5:49 PM, Steely.Mark wrote:
> > > > > We have z/OSMF active on one of our LPARs. Now we would like to
> > > > > add another LPAR.
> > > > > I know there were instructions on how to perform this; I am
> > > > > unable to find the instructions.
> > > > > If you have done this and would provide the documentation it
> > > > > would be appreciated.
> > > > > We are z/OS V2.4.
> > > > >
> > > > > Thank You
>
> We have a shared filesystem. In IEASYSxx we specify IZU=NS, and in
> IZUPRMNS I have:
>
> AUTOSTART(CONNECT)
> AUTOSTART_GROUP('IZU&ENV.')
>
> Then we use automation to start/stop z/OSMF and move it around with
> DVIPA as needed for system maintenance periods. We run one instance
> per sysplex.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
----------------------------------------------------------------------
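P.S. Since the question was how to actually set this up, here is roughly what the IEASYS/parmlib side of the two-member scheme could look like. This is a hedged sketch from memory, not lifted from a working system: the member suffixes match what's described above, but the autostart group name and the HA user directory are illustrative, so verify every statement against the IZUPRMxx documentation for your release.

```text
/* IEASYSxx on the engineering LPAR selects the full member:  IZU=(1)  */
/* IEASYSxx on the other LPARs selects the nucleus member:    IZU=(HA) */

/* IZUPRM1 -- full-function instance, all plug-ins enabled             */
USER_DIR('/global/zosmf')
AUTOSTART(LOCAL)
AUTOSTART_GROUP('IZUFULL')        /* illustrative group name           */

/* IZUPRMHA -- nucleus-only instance, one per LPAR                     */
USER_DIR('/&SYSNAME./var/zosmf')  /* per-system user directory         */
```

Per the thread, the HA instances were started with SERVER=STANDALONE, so they don't participate in an autostart group; only the full instance uses AUTOSTART.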
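P.P.S. The BPXBATCH tip could be sketched as a first step in the IZUSVR proc along these lines. Caveats: the step and DD names here are placeholders, and my recollection is that the chmount option that moves filesystem ownership is lowercase -d followed by the target system name, so double-check the z/OS UNIX command reference before copying anything:

```jcl
//*-------------------------------------------------------------------*
//* First step: move ownership of the z/OSMF filesystem to the        *
//* system that is about to run the server.                           *
//*-------------------------------------------------------------------*
//CHMOUNT  EXEC PGM=BPXBATCH,
//         PARM='SH chmount -d &SYSNAME. /global/zosmf'
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
//* ...the normal IZUSVR server step follows here...
```

That way the ownership change happens automatically on every start, on whichever LPAR the task comes up.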
