Mike, it depends on the size of the arrays and the type of SSDs being used. Enterprise-level SSDs are rated for mixed read/write workloads at 3 DWPD (drive writes per day) or more and should last at least 5 years. We have seen nothing but sporadic failures in both HDDs and SSDs, and certainly no mass failures within a few weeks of the first failure. The real issues are to make sure you have enough hot spares that are auto-inserted into the array, to use RAID 6 so the array can tolerate multiple drive failures, and to have diagnostics that report a drive failure quickly. DO NOT IGNORE the report of a failure; replace the drive promptly.
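To see why a 3 DWPD rating comfortably supports a 5-year service life, here is a rough back-of-the-envelope endurance calculation. This is only a sketch; the 3.84 TB capacity and the 2 TB/day workload are made-up example numbers, not figures from this thread:

```python
# Rough SSD endurance estimate from a DWPD rating.
# DWPD (drive writes per day) is rated over the warranty period,
# typically 5 years.

def endurance_tbw(capacity_tb, dwpd, warranty_years=5):
    """Total write budget over the warranty period, in TB written (TBW)."""
    return capacity_tb * dwpd * 365 * warranty_years

def expected_life_years(capacity_tb, dwpd, tb_written_per_day, warranty_years=5):
    """Years until the rated write budget is exhausted at a given workload."""
    return endurance_tbw(capacity_tb, dwpd, warranty_years) / (tb_written_per_day * 365)

# Example: a 3.84 TB enterprise SSD rated at 3 DWPD
tbw = endurance_tbw(3.84, 3)              # ~21,024 TB write budget
# If the array actually writes 2 TB/day to this drive:
life = expected_life_years(3.84, 3, 2.0)  # ~28.8 years
print(f"TBW budget: {tbw:,.0f} TB, expected life: {life:.1f} years")
```

Most arrays write far less than a full drive per day, which is why 3 DWPD drives rarely wear out inside the warranty window. The same arithmetic also shows why drives from one batch, fed near-identical writes by the RAID controller, approach their budget together, as Mike notes below.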
Side note: if you want to be proactive and replace drives preventatively, remember that with RAID 5 you can only do one drive at a time, and with RAID 6 two drives. You need to have insertable hot spares or you lose data, and you can't do the next set of drives until the new ones are rebuilt. Bottom line: it's not efficient to do this.

Ken

Kenneth A. Bloom
CEO
Avenir Technologies Inc d/b/a Visara International
203-984-2235
bl...@visara.com
www.visara.com

> On Jul 7, 2020, at 6:25 PM, Mike Schwab <mike.a.sch...@gmail.com> wrote:
>
> RAID with SSD is very susceptible to failure. The SSDs from the same
> batch will die at about the same number of writes. So be sure to
> check the number of bad blocks and do proactive replacement. Maybe
> space the replacements a week apart so the next set will give you some time.
>
>> On Tue, Jul 7, 2020 at 2:52 PM R.S. <r.skoru...@bremultibank.com.pl> wrote:
>>
>> Disclaimer: I HAVE NEVER SAID THAT.
>> RAID is fallible. Everything is fallible.
>> I used RAID rhetorically, just as an example of "pretty good".
>> And even then I urged making backups.
>>
>> A few words about RAID:
>> RAID is more reliable than a single disk, assuming the same reliability
>> of the disks used in the RAID.
>>
>> RAID is more reliable when it has spare drives inside. No waiting for
>> a CE. Less time needed to start the rebuild process.
>>
>> RAID 6 is more reliable than RAID 5. Reasons: data on RAID 6 will
>> survive the failure of 2 drives within a group. The second reason is
>> rebuild time. The more capacious the disk, the more time is needed to
>> rebuild, and during that time there is no protection.
>>
>> Remote copy (PPRC, XRC, SRDF, HRC) is yet another level of protection.
>> A disk failure will not be replicated. ;-)
>>
>> Side notes:
>> Sometimes a disk failure is not just an isolated case. Sometimes it is
>> a symptom of an epidemic. Of what kind? For example: the disks came
>> from the same lot, which is bad. An earthquake, or just some accident
>> in the server room (someone hit the cabinet by accident ...and didn't
>> report it).
>> Or a microcode
>> problem (search for "HP SSD" - a horror story).
>> Conclusion: when you observe a disk failure, pay attention. It may be
>> an isolated case or the FIRST failure you observe.
>>
>> Poor quality of the array. This is mostly a problem of entry-level
>> home devices, but it does happen. Some guy bought a "cheap" RAID box
>> and three disks, and ...one day the array failed. No data accessible;
>> a similar RAID box does not recognize anything on the disks. Everything
>> looks new and working, but there is no data. Of course that person did
>> not make any backup, for the obvious reason: he had RAID. This is a
>> real story. After that I observed more cases like this one. Obviously
>> no support was available. Just warranty, but "c'mon guy, the lights
>> are blinking, you can format the disks and use them - no failure".
>>
>> --
>> Radoslaw Skorupka
>> Lodz, Poland
>>
>> On 07.07.2020 at 15:18, Jackson, Rob wrote:
>>> Fun little note on RAID: it is fallible. The last Sunday of October
>>> 2016 I got a call bright and early because our VTS (TS7740) had shut
>>> down. It turned out we had a "cache" HDD failure at around 4 AM, and
>>> then a second one failed at around 7 AM, before the first one had been
>>> rebuilt on a spare. RAID 5 could not accommodate it. Because of IBM
>>> politics, we had no tape until Monday at 16:00. I am ashamed to say
>>> that I sort of took tape for granted. It was astonishing how much of
>>> our processing depended on it.
>>>
>>> R.S. is spot on: make backups. Because of the trauma from this one
>>> event, we now have a three-way VTS grid, synchronous-mirrored SANs,
>>> and two mainframes on the floor.
>>>
>>> First Horizon Bank
>>> Mainframe Technical Support
>>>
>>> -----Original Message-----
>>> From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> On Behalf Of R.S.
>>> Sent: Tuesday, July 7, 2020 4:36 AM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: Storage & tape question
>>>
>>> [External Email.
>>> Exercise caution when clicking links or opening attachments.]
>>>
>>> Yes, it is possible to have a VTS without real tapes on the back end.
>>> Some vendors offer only "tapeless tapes", with no option to connect a
>>> real tape library.
>>> However, from the OS point of view there is a difference between disk
>>> (DASD) and tape (offline storage).
>>> The price difference is also worth considering, but here I mean the
>>> logic. Even the biggest, cheapest, really huge DASD will not protect
>>> you from human and application (and other) errors. But backups will.
>>> That's why we do backups. We aren't afraid of disk failure, because
>>> we have RAID, spare modules, and possibly remote copy. However, we
>>> still do backups.
>>> If you insist on DASD, you may (theoretically) connect another DASD
>>> box dedicated to backups only, and even (logically) disconnect it
>>> between backup sessions. However, that is IMHO a worse version of a VTS.
>>>
>>> Note: I do not discuss here things like price (initial, per terabyte),
>>> compression, throughput, scalability, RAID, etc.
>>>
>>> --
>>> Radoslaw Skorupka
>>> Lodz, Poland
>>>
>>> On 06.07.2020 at 16:46, kekronbekron wrote:
>>>> Hmm... do a lot of shops use actual cart-based tapes ... TS77xx with
>>>> TS4x00?
>>>> Don't know if EMC DLm has a cart back-end option.
>>>>
>>>> If it's a VTL with a disk back end, is that any different from having
>>>> it all on DASD?
>>>>
>>>> - KB
>>>>
>>>> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>>>> On Monday, July 6, 2020 4:25 PM, R.S. <r.skoru...@bremultibank.com.pl> wrote:
>>>>
>>>>> I forgot something obvious to me: NEVER USE TAPES FOR APPLICATION DATA.
>>>>> No jobs should write or read tapes.
>>>>> Nothing except backup, restore, and (optionally) ML2, managed by
>>>>> HSM or FDR. Some exceptions for archive copies are worth considering.
>>>>> Note: you may have a 15-year-old backup on a shiny new tape. Migration
>>>>> from older tape is no nightmare at all. It is simple.
>>>>>
>>>>> ---------------------------------------------------------------------
>>>>>
>>>>> Radoslaw Skorupka
>>>>> Lodz, Poland
>>>>>
>>>>> On 06.07.2020 at 12:49, R.S. wrote:
>>>>>
>>>>>> On 05.07.2020 at 14:12, kekronbekron wrote:
>>>>>>
>>>>>>> Hello List,
>>>>>>> Just wondering ... assuming there's a primary storage product out
>>>>>>> there that can store how-many-ever hoo-haa-bytes, and is a good
>>>>>>> product in general, it should make sense to begin eliminating all
>>>>>>> tape (3490/3590) use, right?
>>>>>>> First, ML1 & ML2 in HSM, then HSM itself, then rebuild jobs to
>>>>>>> write to disk, or do SMS/ACS updates to make it all disk reads/writes.
>>>>>>> Looking at the current storage solutions out there, this is
>>>>>>> possible, right?
>>>>>>> What would be the drawbacks (assume that primary storage is super
>>>>>>> cost-efficient, so there's no need to archive anything)?
>>>>>> A few remarks:
>>>>>> Even the cheapest possible DASD will not replace backup and other
>>>>>> things (archive copies, etc.). I did replace 3490E tapes with
>>>>>> really cheap second-hand DASD boxes approx. 20 years ago. Been
>>>>>> there, done that. It wasn't a very fine solution, but it was cheap
>>>>>> and it worked. AFAIR HSM does not like DASD as the output for some
>>>>>> activities; I can't remember the details.
>>>>>> Someone wrote about tapes moved to a DR shelter. That's very
>>>>>> old-fashioned. I would strongly prefer to have remote copy, which
>>>>>> means two DASD boxes and connectivity between them.
>>>>>> There are products for tape emulation on CKD disk. They are
>>>>>> definitely not cheap. They also consume MSU.
>>>>>> Tapes, even virtual tapes, are OFFLINE media from the MVS point of
>>>>>> view. Offline media are good for some oooops! mistakes.
>>>>>> Last, but not least: your assumption is far from reality. DASD is
>>>>>> still more expensive than tape, and the greater the capacity, the
>>>>>> bigger the difference.
>>>>>> Tape (the real thing) is cheap when talking about carts, and it
>>>>>> scales very well. However, a tape realm with its "first cart" is
>>>>>> extremely expensive, because drives are expensive, controllers are
>>>>>> expensive, and ATLs are expensive.
>>>>>> The real decision depends strongly on your capacity, your predicted
>>>>>> growth, your needs, and your budget.
>>>>> ==
>>>
>>
>> ======================================================================
>>
>> If you are not the addressee of this message:
>>
>> - let us know by replying to this e-mail (thank you!),
>> - delete this message permanently (including all the copies which you
>> have printed out or saved).
>> This message may contain legally protected information, which may be
>> used exclusively by the addressee. Please be reminded that anyone who
>> disseminates (copies, distributes) this message or takes any similar
>> action violates the law and may be penalised.
>>
>> mBank S.A. with its registered office in Warsaw, ul. Senatorska 18,
>> 00-950 Warszawa, www.mBank.pl, e-mail: kont...@mbank.pl. District
>> Court for the Capital City of Warsaw, 12th Commercial Division of the
>> National Court Register, KRS 0000025237, NIP: 526-021-50-88. Fully
>> paid-up share capital amounting to PLN 169,401,468 as at 1 January 2020.
>>
>> ----------------------------------------------------------------------
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
> --
> Mike A Schwab, Springfield IL USA
> Where do Forest Rangers go to get away from it all?