Michael wrote:
> On Thursday 14 November 2024 17:00:07 GMT Dale wrote:
>> Michael wrote:
>>> On Wednesday 13 November 2024 23:10:10 GMT Dale wrote:
>>>> Howdy,
>>>>
>>>> One of my PVs is about 83% full.  Time to add more space, soon anyway.
>>>> I try not to go past 90%.  Anyway, I was looking at hard drives and
>>>> noticed something new.  I think I saw one a while back but didn't look
>>>> into it at the time.  I'm looking at 18TB drives right now.  Some new
>>>> Seagate drives have dual actuators.  Basically, they have two sets of
>>>> heads.  In theory, if circumstances are right, they can read data twice
>>>> as fast.  Of course, most of the time that won't be the case, but it
>>>> can happen often enough to get data a little faster.  Even a 25% or
>>>> 30% increase gives Seagate something to brag about.  Another sales
>>>> tool.
>>>>
>>>> Some heavy data users wouldn't mind either.
>>>>
>>>> My question is this.  Given they cost about $20 more, from what I've
>>>> found anyway, is it worth it?  Is there a downside to this new set of
>>>> heads being added?  I'm thinking a higher failure rate, more risk to
>>>> data or something like that.  I think this is a fairly new thing, last
>>>> couple of years or so maybe.  We all know how some new things don't
>>>> work out.
>>>>
>>>> Just looking for thoughts and opinions, facts if someone has some.
>>>> Failure rate compared to single actuator drives, if there is such
>>>> data.  My searches didn't help me find anything useful.
>>>>
>>>> Thanks.
>>>>
>>>> Dale
>>>>
>>>> :-)  :-)
>>>
>>> I don't know much about these drives beyond what the OEM claims.  From
>>> what I have read, I can surmise the following hypotheses:
>>>
>>> These drives draw more power from your PSU and, although they are
>>> filled with helium to mitigate the higher power/heat, they will require
>>> better cooling at the margin than a conventional drive.
>>>
>>> Your system will use dev-libs/libaio to read the whole disk as a single
>>> SATA drive (a SAS port will read it as two separate LUNs).  The first
>>> 50% of LBAs will be accessed by the first head and the last 50% by the
>>> other head.  So far, so good.
>>>
>>> Theoretically, I suspect this creates a higher probability of failure.
>>> In the hypothetical scenario of a large sequential write where both
>>> heads are writing data of a single file, both heads must succeed in
>>> their write operation.  The cumulative probability of success of head A
>>> and head B is P(A∩B) = P(A) × P(B), assuming the heads fail
>>> independently.  As an example, if the probability of a successful write
>>> by each head is 80%, the cumulative probability of both heads
>>> succeeding is only 64%:
>>>
>>> 0.8 * 0.8 = 0.64
>>>
>>> As long as I didn't make any glaring errors, this simplistic thought
>>> experiment assumes all else being equal with a conventional single-head
>>> drive, but it never is.  The reliability of a conventional,
>>> non-helium-filled drive may be lower to start with.  Seagate claim
>>> their Exos 2 reliability is comparable to other enterprise-grade hard
>>> drives, but I don't have any real-world experience to share here.  I
>>> expect by the time enough reliability statistics are available, the
>>> OEMs will have moved on to different drive technologies.
>>>
>>> When considering buying this drive, you could look at the market
>>> segment needs and use cases Seagate/WD could have tried to address by
>>> developing and marketing this technology.  These drives are for cloud
>>> storage implementations, where higher IOPS, data density and speed of
>>> read/write are desired, while everything is RAID'ed and backed up.  The
>>> trade-off is power usage and heat.
>>>
>>> Personally, I tend to buy n-1 versions of storage solutions, for the
>>> following reasons:
>>>
>>> 1. Price per GB is cheaper.
>>> 2. Any bad news and rumours about novel failing technologies or
>>> unsuitable implementations (e.g. unmarked SMRs being used in NAS) tend
>>> to spread far and wide over time.
>>> 3. High-volume sellers start offering discounts for older models.
>>>
>>> However, I don't have a need to store the amount of data you do.  Most
>>> of my drives stay empty.  Here's a 4TB spinning disk with 3 OS and 9
>>> partitions:
>>>
>>> ~ # gdisk -l /dev/sda | grep TiB
>>> Disk /dev/sda: 7814037168 sectors, 3.6 TiB
>>> Total free space is 6986885052 sectors (3.3 TiB)
>>>
>>> HTH
>>
>> Sounds like my system may not even be able to handle one of these.  I'm
>> not sure my SATA ports support that stuff.
>
> I think your PC would handle these fine.
>
>> It sounds like this is not something I really need anyway.
>
> Well, this is more to the point.  ;-)
>
>> After all, I'm already spanning my data over three drives.  I'm sure
>> some data is coming from each drive.  No way to really know for sure,
>> but it makes sense.
>>
>> Do you have a link or something to a place that explains what the parts
>> of the Seagate model number mean?  I know ST is for Seagate.  The size
>> is next.  After that, everything I find is old and outdated.  I looked
>> on the Seagate website too but had no luck.  I figure someone made one,
>> somewhere.  A link would be fine.
>
> This document is from 2011; I don't know if they have changed their
> nomenclature since then.
>
> https://www.seagate.com/files/staticfiles/docs/pdf/marketing/st-model-number-cheat-sheet-sc504-1-1102us.pdf
>
>> Thanks.
>>
>> Dale
>>
>> :-)  :-)
>
> The only Seagate 7200RPM disk I have started playing up a month ago.  I
> now have to replace it.  :-(
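[Editor's note: Michael's two-head arithmetic above can be sanity-checked with a one-liner.  The 0.8 per-head success rate is his illustrative figure, not a drive spec, and independence of the two actuators is an assumption of the thought experiment:]

```shell
# Toy check of the independent-heads model: with a per-head write success
# probability of 0.8 (illustrative figure only), the chance that BOTH
# actuators succeed on the same large write is P(A)*P(B).
awk 'BEGIN { p = 0.8; printf "%.2f\n", p * p }'
# prints 0.64
```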
Yea, I found that one too.  I see drives with letters that are not listed
under Segment.  They got new stuff, or changed letters to trick folks.  I
emailed the company I usually buy drives from, they do server stuff, but I
haven't heard back yet.  Could be there isn't one for new drives.  Could be
they're there to make it look like they mean something but don't, again, to
trick folks.

I've had a Seagate, a Maxtor from way back and a Western Digital go bad.
This is one reason I don't knock any drive maker.  Any of them can produce
a bad drive.  What matters is whether they stand behind it and make it good
or not.

It's one thing that kinda gets on my nerves about SMR.  It seems, sounds
like, they tried to hide it from people to make money.  Thing is, as some
learned, they don't do well in a RAID and some other situations.  Heck,
they do OK reading, but they can get real slow when writing a lot of data.
Then you have to wait until the drive gets done redoing things so that the
write is complete.  I still have that SMR drive for a backup.  It completes
the backup pretty quick, if it isn't much data, but after it is done, it
does that bumpy thing for a lot longer than the copy process does.  I wish
I'd never bought that thing.  The one good thing, I can unmount it and
unhook the SATA cable while it finishes.  All it needs is power.  Still
annoying tho.

Think I'll try for an 18TB drive with one actuator.

Oh, some info on my data storage.  This doesn't include backup drives.
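[Editor's note: for the "add more space" step itself, here is a minimal LVM sketch.  The device name /dev/sdh, the GPT/single-partition layout, the datavg/datalv names and an ext4 filesystem are all assumptions for illustration, not verified details of this system:]

```shell
# Sketch only -- adjust device and VG/LV names before running anything.
parted -s /dev/sdh mklabel gpt                  # new blank 18TB drive (assumed)
parted -s -a optimal /dev/sdh mkpart primary 0% 100%
pvcreate /dev/sdh1                              # initialise the partition as a PV
vgextend datavg /dev/sdh1                       # add it to the existing VG
lvextend -l +100%FREE /dev/datavg/datalv        # grow the LV into the new space
resize2fs /dev/datavg/datalv                    # grow the filesystem (ext4 assumed)
```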
root@Gentoo-1 / # dfc
FILESYSTEM                 (=) USED      FREE (-)    %USED    USED AVAILABLE   TOTAL MOUNTED ON
/dev/root                  [===-----------------]   11.4%   24.6G    348.0G  392.7G /
devtmpfs                   [--------------------]    0.0%    0.0B     10.0M   10.0M /dev
tmpfs                      [=-------------------]    0.0%    1.7M     25.1G   25.1G /run
efivarfs                   [=========-----------]   43.2%   50.3K     72.7K  128.0K +ys/firmware/efi/efivars
shm                        [=-------------------]    0.0%  136.0K     62.9G   62.9G /dev/shm
/dev/nvme0n1p2             [==------------------]    6.4%  137.5M      9.2G    9.8G /boot
/dev/nvme0n1p4             [=====---------------]   20.9%   18.8G    139.3G  176.1G /var
+v/mapper/home2-home--lv   [=====---------------]   21.5%    1.4T      5.7T    7.2T /home
/dev/nvme0n1p1             [=-------------------]    0.0%  152.0K      2.0G    2.0G /efi
+ev/mapper/datavg-datalv   [=================---]   83.3%   34.6T      6.9T   41.5T /home/dale/Desktop/Data
tmpfs                      [=-------------------]    0.0%    4.0K     70.0G   70.0G /var/tmp/portage
tmpfs                      [=-------------------]    0.0%   44.0K     12.6G   12.6G /run/user/1000
/dev/mapper/crypt          [===============-----]   73.9%   34.8T     12.3T   47.1T /home/dale/Desktop/Crypt
/dev/mapper/6tb-main       [=============-------]   61.2%    3.3T      2.1T    5.4T /mnt/6tb-main
SUM:                       [===============-----]   72.7%   74.2T     27.6T  102.0T

root@Gentoo-1 / # pvs -O vg_name
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sde1  datavg   lvm2 a--  12.73t    0
  /dev/sdc1  datavg   lvm2 a--  14.55t    0
  /dev/sdb1  datavg   lvm2 a--  14.55t    0
  /dev/sda1  home2    lvm2 a--  <7.28t    0
  /dev/sdf1  vg.crypt lvm2 a--  16.37t    0
  /dev/sdd1  vg.crypt lvm2 a--  14.55t    0
  /dev/sdg1  vg.crypt lvm2 a--  16.37t    0
root@Gentoo-1 / #

That looks better in a Konsole.  Oooo.  I'm over 100TBs now.  O_O

Dale

:-)  :-)
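[Editor's note: the "don't go past 90%" rule of thumb from the start of the thread can be expressed as a tiny check.  The 83.3% figure is the Data LV's usage from the dfc output above; the 90 threshold is Dale's own rule:]

```shell
# Compare a filesystem's %used against the 90% rule of thumb.
# used=83.3 is the datavg-datalv figure from the dfc output above.
awk -v used=83.3 -v limit=90 'BEGIN {
  printf "%s\n", (used >= limit ? "time to add space" : "ok for now")
}'
# prints "ok for now"
```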