See printed pages 63-64 of https://www.redbooks.ibm.com/redbooks/pdfs/sg245444.pdf: FICON Express 16Gb channels will negotiate down to 8 or 4 Gbps, and FICON Express 8Gb channels will negotiate down to 4 or 2 Gbps.
You will need to upgrade the DASD FICON cards or downgrade the mainframe FICON Express cards. I don't know if there is a speed-adjusting director.

On Tue, Feb 20, 2018 at 4:41 AM, Tommy Tsui <[email protected]> wrote:
> Hi Ron,
> What happens if our FICON card is 16Gb and the FCP connection is 2Gb? I
> tried the simulation on a monoplex LPAR and the result is fine. Now we
> suspect GRS or some other system parm is increasing the disconnect time.
>
> Ron Hawkins <[email protected]> wrote on Thursday, 15
> February 2018:
>
>> Tommy,
>>
>> This should not be a surprise. The name "Synchronous Remote Copy" implies
>> the overhead that you are seeing, namely the time for the synchronous
>> write to the remote site.
>>
>> PPRC will more than double the response time of random writes because the
>> host write to cache has the additional time of controller latency, round
>> trip delay, and block transfer before the write is complete. On IBM and
>> HDS (not sure about EMC) the impact is greater for single blocks, as
>> chained sequential writes have some overlap between the host write and
>> the synchronous write.
>>
>> Some things to check:
>>
>> 1) Buffer credits on ISLs between the sites. If there are no ISLs, then
>> settings on the storage host ports to cater for 30km of B2B credits.
>> 2) Channel speed step-down - if your FICON channels are 8Gb and the FCP
>> connections are 2Gb, then PPRC writes will take up to four times longer
>> to transfer. It depends on the block size.
>> 3) Unbalanced ISLs - ISLs do not automatically rebalance after one drops.
>> The more concurrent I/O there is on an ISL, the longer the transfer time
>> for each PPRC write. There may be one or more ISLs that are not being
>> used while others are overloaded.
>> 4) Switch board connections not optimal - talk to your switch vendor.
>> 5) Host adapter port connections not optimal - talk to your storage
>> vendor.
>> 6) Sysplex tuning may identify I/O that can convert from disk to Sysplex
>> caching.
>> Not my expertise, but I'm sure there are some Redbooks.
>>
>> There is good information on PPRC activity in the RMF Type 78 records.
>> You may want to do some analysis of these to see how transfer rates and
>> PPRC write response time correlate with your DASD disconnect time.
>>
>> Final comment: do you really need synchronous remote copy? If your
>> company requires zero data loss, then you don't get this from
>> synchronous replication alone. You must use the Critical=Yes option,
>> which has its own set of risks and challenges. If you are not using GDPS
>> and HyperSwap for hot failover, then synchronous is not much better than
>> asynchronous. Rolling disasters, transaction rollback, and options that
>> turn off in-flight data set recovery can all see synchronous recovery
>> end up with the same RPO as asynchronous.
>>
>> Ron
>>
>> -----Original Message-----
>> From: IBM Mainframe Discussion List [mailto:[email protected]] On
>> Behalf Of Tommy Tsui
>> Sent: Thursday, February 15, 2018 12:41 AM
>> To: [email protected]
>> Subject: Re: [IBM-MAIN] DASD problem
>>
>> Hi,
>> The distance is around 30km. Do you know of any settings in a sysplex
>> environment, such as GRS and the JES2 checkpoint, that we need to be
>> aware of? The DASD is connected via a SAN switch to the DR site with a
>> 2Gbps interface. We checked with the vendor and they didn't find any
>> problem on the SAN switch or DASD, so I suspect the system settings.
>>
>> Alan (GMAIL) Watthey <[email protected]> wrote on Thursday, 15
>> February 2018:
>>
>> > Tommy,
>> >
>> > This sounds like the PPRC links might be a bit slow or there are not
>> > enough of them.
>> >
>> > What do you have? Direct DASD to DASD, or via a single SAN switch, or
>> > even cascaded? What settings (Gbps) are all the interfaces running at
>> > (you can ask the switch for the switch, and RMF for the DASD)?
>> >
>> > What type of fibre are they? LX or SX? What kind of length are they?
>> >
>> > Any queueing?
>> >
>> > There are so many variables that can affect the latency.
>> > Are there any of the above that you can improve on?
>> >
>> > I can't remember what IBM recommends, but 80% sounds a little high to
>> > me. They are only used for writes (not reads).
>> >
>> > Regards,
>> > Alan Watthey
>> >
>> > -----Original Message-----
>> > From: Tommy Tsui [mailto:[email protected]]
>> > Sent: 15 February 2018 12:15 am
>> > Subject: DASD problem
>> >
>> > Hi all,
>> >
>> > Our shop found that most job elapsed times are prolonged with PPRC
>> > synchronization versus running without PPRC; jobs are almost 4 times
>> > faster without PPRC synchronization. Are there any parameters we need
>> > to tune on the z/OS or disk subsystem side? We found the % disk util
>> > in the RMF report is over 80. Any help will be appreciated. Many
>> > thanks.
>> >
>> > ----------------------------------------------------------------------
>> > For IBM-MAIN subscribe / signoff / archive access instructions, send
>> > email to [email protected] with the message: INFO IBM-MAIN

--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?
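[Editor's note] Ron's first two checklist items (B2B credits for 30km and channel speed step-down) lend themselves to back-of-the-envelope arithmetic. The sketch below is a rough model only, using assumed nominal figures that are not from this thread: ~5 microseconds per km of one-way propagation in fibre, a 2112-byte FC frame payload per buffer credit, and a 27 KB block as an example size.

```python
# Rough Fibre Channel link arithmetic -- illustrative only, not vendor guidance.

FIBRE_US_PER_KM = 5.0   # assumed one-way propagation delay in fibre, us per km
FRAME_PAYLOAD = 2112    # max FC frame payload in bytes (one buffer credit)

def min_b2b_credits(distance_km, gbps):
    """Credits needed so the sender never stalls waiting for R_RDY."""
    round_trip_us = 2 * distance_km * FIBRE_US_PER_KM
    frame_time_us = FRAME_PAYLOAD * 8 / (gbps * 1000)  # us to serialize a frame
    return round_trip_us / frame_time_us

def transfer_time_us(block_bytes, gbps):
    """Serialization time for one block, ignoring protocol overhead."""
    return block_bytes * 8 / (gbps * 1000)

print(f"30 km @ 8Gb needs ~{min_b2b_credits(30, 8):.0f} B2B credits")
print(f"27 KB block @ 8Gb: {transfer_time_us(27648, 8):.0f} us")
print(f"27 KB block @ 2Gb: {transfer_time_us(27648, 2):.0f} us")
```

Under these assumptions the 2Gb link takes exactly four times as long to serialize the same block as the 8Gb link, which is the "up to four times longer" step-down penalty Ron describes; the real ratio depends on block size and protocol overhead.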
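[Editor's note] Ron's breakdown of the synchronous write penalty (controller latency + round-trip delay + block transfer) can also be sketched numerically. Every figure below is an assumption chosen for illustration of a 30 km, 2Gb FCP configuration (100 us of secondary-controller overhead, a 200 us local cache-hit write, a 27 KB block), not a measurement from any real subsystem.

```python
# Illustrative model of the synchronous PPRC write penalty.
# All constants are assumptions for a hypothetical 30 km, 2 Gb FCP setup.

DISTANCE_KM = 30
FIBRE_US_PER_KM = 5.0    # assumed one-way propagation delay in fibre
FCP_GBPS = 2
CTRL_LATENCY_US = 100    # assumed secondary-controller overhead

def sync_write_overhead_us(block_bytes):
    """Extra time a synchronous remote copy adds to one write."""
    round_trip = 2 * DISTANCE_KM * FIBRE_US_PER_KM       # propagation delay
    transfer = block_bytes * 8 / (FCP_GBPS * 1000)       # serialize the block
    return round_trip + transfer + CTRL_LATENCY_US

local_write_us = 200                          # assumed local cache-hit write
overhead = sync_write_overhead_us(27648)      # ~27 KB block
total = local_write_us + overhead
print(f"local write ~{local_write_us} us, with PPRC ~{total:.0f} us "
      f"({total / local_write_us:.1f}x)")
```

Even with these modest assumed numbers, the modeled write takes several times longer with PPRC than without, in the same ballpark as the "almost 4 times" elapsed-time difference reported at the start of the thread; the extra time shows up in RMF as DASD disconnect time.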
