Solaris and/or ZFS are badly confused about drive IDs. The "c5t0d0"
names are very far removed from the real world, and possibly they've
gotten screwed up somehow. Is devfsadm supposed to fix those, or does
it only remove stale entries?
Reason I believe it's confused:
zpool status shows mirror-0 o
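For what it's worth, a sketch of how I'd double-check what one of those c5t0d0-style names actually points to (the device name here is just the one from above):

  # See where the /dev/dsk symlink points in the /devices tree
  ls -l /dev/dsk/c5t0d0s0

  # List the disks the OS sees, with their c#t#d# names and inquiry data
  echo | format

  # Per-device vendor, model, serial number and error counters
  iostat -En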
> For the data sheet I referenced, all the drive sizes have the same sustained
> data rate at the OD (outer diameter), 125 MB/s. Eric posted an explanation for this, which
> seems entirely believable: The data rate is not being limited by the density
> of magnetic material on the platter or the rotational speed, but by th
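A quick sanity check on that 125 MB/s figure, assuming a 7200 RPM drive (the data sheet may say otherwise): 7200 RPM is 120 revolutions per second, so 125 MB/s / 120 rev/s is roughly 1 MB coming off an outer track per revolution, regardless of total drive capacity. That's consistent with all the capacities being capped by something common to the whole family rather than by how much data fits on a platter.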
Roy Sigurd Karlsbakk wrote:
Nope. Most HDDs today have a single read channel, and they select
which head uses that channel at any point in time. They cannot use
multiple heads at the same time, because the heads do not travel the
same path on their respective surfaces at the same time. There's no
And devfsadm doesn't create them. Am I looking at the wrong program, or
what?
--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
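On the devfsadm question above, a sketch of the invocations I'd expect to matter, per the flags on the man page (run as root):

  # (Re)create the /dev links for disk devices
  devfsadm -c disk

  # Clean up dangling /dev links, verbosely
  devfsadm -Cv

-c disk only creates links for the disk class; -C is the cleanup pass that removes links pointing at devices that no longer exist.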
> Nope. Most HDDs today have a single read channel, and they select
> which head uses that channel at any point in time. They cannot use
> multiple heads at the same time, because the heads do not travel the
> same path on their respective surfaces at the same time. There's no
> real vertical align
Hi
I keep getting these messages on this one box. There are issues with at least
one of the drives in it, but since there are some 80 drives in the box, that's not
really a problem. I just want to know, if anyone knows, what this kernel message
means. Anyone?
Feb 5 19:35:57 prv-backup scsi: [ID 3658
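For narrowing down which of the 80 drives the kernel is complaining about, a sketch with standard Solaris tools:

  # Soft/hard/transport error counters plus vendor, model and serial per disk
  iostat -En

  # FMA error telemetry; the ereports usually include the device path
  fmdump -eV | less

  # Ask ZFS whether any pool or device is unhealthy
  zpool status -x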
> One characteristic people often overlook is: When you get a disk with
> higher capacity (say, 2T versus 600G) then you get more empty space
> and hence typically lower fragmentation in the drive. Also, the
> platter density is typically higher, so if the two drives have equal
> RPMs, typically t
On Feb 5, 2011, at 2:43 PM, David Dyer-Bennet wrote:
> Is there a clever way to figure out which drive is which? And if I have to
> fall back on removing a drive I think is right, and seeing if that's true,
> what admin actions will I have to perform to get the pool back to safety?
> (I've g
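On the "what admin actions" part, a sketch of the offline-then-pull sequence I'd use, assuming the mirror is otherwise healthy (pool and device names are placeholders):

  # Take the suspect side of the mirror offline before pulling it
  zpool offline tank c5t0d0

  # ...pull the drive and check its label/serial...

  # If it turns out to be the wrong drive, put it back and re-enable it
  zpool online tank c5t0d0

  # Watch the resilver finish before trying the next drive
  zpool status tank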
I've got a small home fileserver, Chenowith case with 8 hot-swap bays.
Of course, at this level, I don't have cute little lights next to each
drive that the OS knows about and can control to indicate things to me.
The configuration I think I have is three mirror pairs. I've got
motherboard SA
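One way to map bays to devices without OS-controlled indicator lights, for what it's worth (the device name is a placeholder): keep one disk busy and watch which bay's activity LED stays lit, or match serial numbers against the drive labels.

  # Keep one specific disk busy; its bay's activity LED should stay lit
  dd if=/dev/rdsk/c5t0d0s0 of=/dev/null bs=1024k

  # Or list vendor/model/serial for each disk and compare with the labels
  iostat -En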
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> Bottom line, it's maybe $50 in parts, plus a $100k VLSI Engineer to do
> the design.
Well, only if there's a high volume. If you're only going to sell 10,000 of
these device
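Quick arithmetic on those numbers: spreading the $100k design cost over 10,000 units adds $100,000 / 10,000 = $10 per unit on top of the $50 in parts; at 1,000 units it becomes $100 per unit, which changes the economics quite a bit.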
Hi all,
I'm trying to achieve the same effect as UFS directio on ZFS, and here
is what I did:
1. Set the ZFS primarycache to metadata, secondarycache to none, and
recordsize to 8K (to match the unit size of writes)
2. Run my test program (code below) with different options and measure
the runnin
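In case it helps anyone reproduce this, step 1 maps to zfs commands along these lines (the dataset name is just an example):

  # Cache only metadata in ARC, nothing in L2ARC, 8K records
  zfs set primarycache=metadata tank/dio
  zfs set secondarycache=none tank/dio
  zfs set recordsize=8k tank/dio

  # Verify the properties took effect
  zfs get primarycache,secondarycache,recordsize tank/dio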
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>
> So, the bottom line is that Solaris 11 Express cannot use TRIM with an SSD? Is
> that the conclusion? So, it might not be a good idea to use an SSD?
Even without TRIM, SSDs are s
On 2/5/2011 5:44 AM, Orvar Korvar wrote:
So... Sun's SSD used for ZIL and L2ARC does not use TRIM, so how big a problem
is the lack of TRIM in ZFS really? It shouldn't stop anyone from running without TRIM?
I didn't really understand the answer to this question. Because Sun's SSD does
not use TRIM - a
So... Sun's SSD used for ZIL and L2ARC does not use TRIM, so how big a problem
is the lack of TRIM in ZFS really? It shouldn't stop anyone from running without TRIM?
I didn't really understand the answer to this question. Because Sun's SSD does
not use TRIM - and it is not considered a hindrance? A home user
Orvar Korvar wrote:
> Ok, I read a bit more on TRIM. It seems that without TRIM, there will be more
> unnecessary reads and writes on the SSD, the result being that writes can
> take a long time.
>
> A) So, how big of a problem is it? Sun has long sold SSDs (for L2ARC and
> ZIL), and they do
Ok, I read a bit more on TRIM. It seems that without TRIM, there will be more
unnecessary reads and writes on the SSD, the result being that writes can take
a long time.
A) So, how big of a problem is it? Sun has long sold SSDs (for L2ARC and
ZIL), and they don't use TRIM? So, is TRIM not a bi
If you use drives of varying size, ZFS will use the capacity of the smallest drive.
Say you have 1TB + 2TB + 2TB; then ZFS creates a raid as if all the drives were
1TB. The result will be a 3 x 1TB raid.
One ZFS raid consists of vdevs, that is, groups of drives. A vdev can be
configured as raidz1 (raid-5) or ra
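As a concrete illustration of the sizing rule above (device names are made up; assume c1t0d0 is the 1TB drive and the other two are 2TB):

  # raidz1 vdev built from one 1TB and two 2TB drives
  zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0

  # Shows roughly 3 x 1TB of raw space (about 2TB usable after parity);
  # the extra space on the 2TB drives sits unused until the 1TB drive
  # is replaced with a bigger one
  zpool list tank
  zfs list tank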