Hi List,
Can anyone please point me to documentation on how to use the USAGE= parameter
in the RESOURCE PARTITION statement.
I couldn't find anything, and I want to know how to write a statement
that includes LPARs in multiple CSSes.
Thanks.
- Vignesh
Mainframe Infrastructure
MARKSANDSPENCER
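For reference, the multi-CSS form of the RESOURCE statement I'm familiar with looks like the sketch below (LPAR names and MIF IDs are made up; check the IOCP User's Guide for the USAGE= specifics):

```
RESOURCE PARTITION=((CSS(0),(LPAR01,1),(LPAR02,2)),            *
               (CSS(1),(LPAR03,1),(LPAR04,2)))
```

Each CSS gets its own sublist of (partition-name,MIF-ID) pairs, so a single RESOURCE statement can cover LPARs spread across channel subsystems.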
On Sun, 24 Feb 2019 19:02:53 -0800, Ed Jaffe
wrote:
>Is everyone experiencing this? Or is it local to my region?
>
>https://www14.software.ibm.com/support/customercare/psearch/search?domain=gapar
>
>It either fails with "SearchRequest. Incorrect XML format of dBlue
>response." or just returns zero
Want to correct my question...
I've followed both prerequisites (setting the HCDPROF variable and setting
stand-alone IOCP to No on the Build IOCP input screen), but I still don't see
the $HCD$ statements in the extracted IOCP.
Help!
- Vignesh
Mainframe Infrastructure
-Original Message
I recently ran into an issue where LE and COBOL needed to be updated
concurrently.
Due to a timing problem in the rollout, this caused a major kerfuffle.
I would be very careful mixing and matching between LE and Cobol.
Otherwise, the buildmcs route should work.
HTH,
-Original Message-
Fro
Sigh, the same old refrain. The "new tools" are neither as available, reliable,
nor functional as those they replace.
I get the same problem you do; however, an APAR search through:
https://www.ibm.com/support/home/
works fine,
HTH,
-Original Message-
From: IBM Mainframe Discussion List
Assuming you have all maintenance applied to ENT COB 4.2, there should be no
problem doing a BUILDMCS and receiving/applying into your z/OS V2.3
environment. There is no "LE V4.2". LE follows the z/OS level. I did
exactly that, but put it into its own target/dlib zones and target/dist
libra
In my case I believe I'm good: I have all up-to-date maintenance for z/OS (LE)
and Enterprise COBOL 4.2. Having queried the COBOL developers, I feel better
knowing that by the time we are on z/OS 2.3 and Enterprise COBOL 6.2, all COBOL
will be compiled with the 6.2 compiler, tested, and running in production.
Carmen Vitullo
- Origi
Thanks to everyone for the input. As Chuck said this is a program that you can
use to copy one file to another. During the copy process you can do some
filtering to determine which records get copied. Sounds like we will have to
live with the fact we don't have source code.
Thanks..
Paul Fe
On 2/25/2019 3:52 AM, Roger Lowe wrote:
Using your referenced URL works fine in the "land down under" - albeit on a
Monday evening.
So far, you're the only one reporting good results.
Did you actually try to search for something? The results were as expected?
--
Phoenix Software Internation
SIS works fine for me via this link:
https://www-03.ibm.com/ibmlink/sis/sis.wss?lc=en&cc=US
_
Dave Jousma
Mainframe Engineering, Assistant Vice President
david.jou...@53.com
1830 East Paris, Grand Rapids, MI 49546 MD RSCB2H
p 616.6
On 2/25/2019 6:11 AM, Allan Staller wrote:
Sigh, the same old refrain. The "new tools" are neither as available, reliable,
nor functional as those they replace.
Yes. VERY frustrating!
I get the same problem you do; however, an APAR search through:
https://www.ibm.com/support/home/
works fine,
Any hardware ninjas in here today?
– Vignesh
Mainframe Infrastructure
On 25-Feb-2019, at 18:50, Sankaranarayanan, Vignesh
wrote:
Want to correct my question...
I've followed both prerequisites (setting the HCDPROF variable and setting
stand-alone IOCP to No in the build IOCP input screen), b
Can anyone share their experiences running z/OS 2.3 on a z14 or z14 ZR1 in an
LPAR with less than the required 8GB of memory? Other than having to respond
to the warning message during IPL, have any negative effects been experienced?
Background:
Excerpt From:
https://www.ibm.com/support/knowle
IMHO it is not worth testing time.
Yes, the system itself can work with less than 8GB (see z/VM or zPDT),
but the system with subsystems started may not perform well.
In my shop, 16GB is the minimum amount of memory assigned to an LPAR. It can be
doubled, tripled, etc.
--
Radoslaw Skorupka
Lodz, Po
I asked a former co-worker who is still with the insurance company. He
found some old JCL examples but no source code. He seemed to think it came
from Policy Management Systems, which also wrote the policy administration
software we used at the time. I think they were acquired by CSC along the
l
On Mon, 25 Feb 2019 at 10:35, Ed Jaffe wrote:
>
> Did you actually try to search for something? The results were as expected?
>
Failing with "SearchRequest. Incorrect XML format of dBlue response." here
in Toronto at noon EST on Monday 25 Feb.
I submitted "Feedback" using the button on the righ
On Mon, 25 Feb 2019 12:10:42 -0500, Chuck Kreiter wrote:
>I asked a former co-worker who is still with the insurance company. He
>found some old JCL examples but no source code. He seemed to think it came
>from Policy Management Systems, which also wrote the policy administration
>software we us
Or what language it's written in? We had a homegrown PL/I program called
DSCOPY. We might have the source, but we no longer have a PL/I compiler.
> -Original Message-
> From: IBM Mainframe Discussion List On
> Behalf Of Tom Marchant
> Sent: Monday, February 25, 2019 9:47 AM
> To: IBM-MAIN@LISTSERV.UA.E
Hello,
I'm trying to understand the CPU and ECPU times displayed on SDSF and the
relation to zIIP processing time.
For example, here is a CICS region running a Java web service.
CPU-Time  ECPU-Time  GCP-Time  zIIP-Time  zICP-Time  zIIP-NTime
  164.42     166.28     90.89      30.21       3.42           7
To answer several questions.
Nothing in the program modules showed any copyright information.
Programs had been written in assembler. All four programs had been last
assembled with ASSEMBLER H V2R1 in 1987.
The business unit that uses the programs has no idea where the programs came
from and
zIIP time is not reported as part of CPU time.
Chris Blaicher
Technical Architect
Syncsort, Inc.
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf
Of Brian Chapman
Sent: Monday, February 25, 2019 1:53 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: CPU t
According to SDSF help:
ECPU% - CPU usage consumed within the address space (RMF)
Carmen Vitullo
- Original Message -
From: "Brian Chapman"
To: IBM-MAIN@LISTSERV.UA.EDU
Sent: Monday, February 25, 2019 12:52:35 PM
Subject: CPU time and zIIP
Hello,
I'm trying to understand the CPU and
Maybe I am incorrect, but I thought when you ordered a z14 it could come with
triple the amount of memory for the same cost of memory you were currently
paying.
For example, if you were using 8G on a non-z14, then when you order a z14 it
could come with 24G but at the same price as the 8G memor
When we ordered our ZR1 we still had to specify the amount of memory we
wanted. We did order substantially more central storage. I don’t know
the costs involved so can’t directly answer your question.
On Mon, Feb 25, 2019 at 1:24 PM Lizette Koehler
wrote:
> Maybe I am incorrect, but I thought
We ordered our ZR1 with 96G of memory since that was recommended based on
our planned usage and growth.
Not sure how much was the memory component but small in terms of the other
components.
On Tue, Feb 26, 2019, 8:32 AM Michael Babcock wrote:
> When we ordered our ZR1 we still had to specify t
I certainly heard that offer several times. I'm not a customer; I heard it as
an IBM "partner."
Charles
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf
Of Lizette Koehler
Sent: Monday, February 25, 2019 11:24 AM
To: IBM-MAIN@LISTSERV.UA
I'm sure this is the kind of thing Sort should be able to do easily, but I
really don't know where to start.
I have System Trace data as formatted by IPCS with the GMT timestamp
option. Trace lines are well documented, but have a few quirks. These are
132 byte records, with a timestamp toward the
Not quite. The minimum amount of memory on a z14 ZR1 is 64GB (256GB on the
larger z14).
At first, 64GB sounds like "plenty." But if you are supporting a number of
LPARs with small memory allocations, you can use up the 64GB pretty quickly.
-Original Message-
From: IBM Mainframe
All Z systems have a certain amount of minimum memory for customer use in the
base system configuration. In addition to this 'customer' memory there is
memory for the HSA - again included in the base configuration. If required,
customers can purchase extra memory up to the maximum limit for the
Replying to the original question. There may well be a better way to manage DR
than what we do, but we have not been pitched one in 20+ years. We own both
PROD and DR data centers and all contents. The DR site runs XRC ('Global
Mirroring for Z', a truly execrable name). Its only job in life is t
I have inherited a system where nobody bothered to clean-up after
themselves. If I do a DITTO VTOC of all the volumes, I can sometimes
find 5 or 6 copies of the same dataset, some of which are SYSx.*.
I would like to first rename all the uncataloged versions so I can
eventually delete them.
Tony,
If the timestamp starts on the first record, then it is quite easy to
propagate that on to the following records. You can choose how many records
to propagate the values to using the RECORDS=n keyword.
Here is a link to an earlier topic on how to PUSH the contents using
WHEN=GROUP:
htt
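For what it's worth, a minimal sketch of the WHEN=GROUP/PUSH approach. The record layout is an assumption (timestamp taken as 15 bytes at column 100, pushed to column 133); adjust the positions to the actual formatted-trace layout:

```
//COPYSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.TRACE.DATA,DISP=SHR
//SORTOUT  DD DSN=MY.TRACE.OUT,DISP=(NEW,CATLG),
//            SPACE=(CYL,(10,10)),RECFM=FB,LRECL=147
//SYSIN    DD *
* START A GROUP AT EACH RECORD THAT CARRIES A TIMESTAMP (ASSUMED
* NONBLANK IN COLUMN 100) AND PUSH THAT TIMESTAMP (ASSUMED 15
* BYTES AT COLUMN 100) ONTO THE FOLLOWING RECORDS AT COLUMN 133.
  OPTION COPY
  INREC IFTHEN=(WHEN=GROUP,
          BEGIN=(100,1,CH,NE,C' '),
          PUSH=(133:100,15))
/*
```

Once every record carries the timestamp in a fixed column, later SORT or INCLUDE steps can select on it directly.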
We do something similar, except because our locations are only 80 km apart we
can run Metro Mirror.
To do DR we flash to tertiary volumes using a small Z LPAR but looking at
changing that to using CSM with practice volumes as per this video from the
Washington Systems Center.
https://www.youtube.co
ADRDSSU if you have it.
Dump and delete the uncataloged datasets.
Later you can RESTORE with a new HLQ if desired.
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=A
//DASD1    DD  VOL=SER=MYVOL1,UNIT=SYSDA,DISP=OLD
//DASD2    DD  VOL=SER=MYVOL2,UNIT=SYSDA,DISP=OLD
//TAPE     DD  UNIT=
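And a hedged sketch of the later restore-with-rename step (the dump dataset name and the new HLQ are placeholders):

```
//RESTORE  EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//TAPE     DD  DSN=MY.DUMP.FILE,DISP=OLD
//SYSIN    DD  *
  RESTORE DATASET(INCLUDE(**)) -
          INDDNAME(TAPE) -
          RENAMEU((SYS1.**,NEWHLQ.**)) -
          CATALOG
/*
```

RENAMEU keeps the dumped copies intact on the dump dataset, so you can restore under the new HLQ, verify, and only then delete the originals.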
Tony,
Please consider that SYS?.** datasets might be on volumes that represent
the previous running system volumes.
Do the volumes look like previous 'sysres' volumes?
My 2 cents, take some time to investigate the date and contents, might
be you don't want to trash them yet.
On the other h
Be very careful running DFDSS to delete uncataloged datasets. Make certain you
understand the complete catalog structure. I would not be surprised to see
that "uncataloged" SYS1.* datasets are actually active and in use on other
running systems.
-Original Message-
From: IBM Mainframe