Hi folks,
So the expansion unit for the 2500 series is the 2501.
The back-end drive channels are SAS.
Currently it is not "supported" to connect a 2501 directly to a SAS HBA.
SATA drives are in the pipe, but will not be released until the RAID firmware
for the 2500 series officially supports them.
I am pretty sure the T3/6120/6320 firmware does not support the
SYNCHRONIZE_CACHE command...
Off the top of my head, I do not know if that triggers any change in behavior
on the Solaris side...
The firmware does support the use of the FUA bit...which would potentially lead
to similar flushing
Hi Robert,
It should work. We have not had the time or resources to test it (we are
busy qualifying the 2530 (SAS array) with an upcoming MPxIO-enabled MPT
driver and SATA drive support).
I do not know if MPxIO will claim raw drives or not; typically there
are vendor-specific modules that pr
Ok...got a break from the 25xx release...
Trying to catch up so...sorry for the late response...
The 6120 firmware does not support the Cache Sync command at all...
You could try using a smaller blocksize setting on the array to attempt to
reduce the number of read/modify/writes that you will incur.
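Just to make that concrete, here is a toy sketch (plain Python, my own numbers, nothing from the array firmware), assuming writes start on a stripe boundary: with 4 data drives, a 128k write only covers whole stripes once the segment size is 32k or smaller; anything larger leaves a partial stripe behind and therefore a read/modify/write.

# Toy model of full-stripe vs. partial-stripe writes on a RAID-5 set.
# Assumptions (mine, not the array's): writes start on a stripe boundary,
# and "data_drives" excludes the parity drive.

def partial_fraction(io_size, segment_size, data_drives):
    """Fraction of an I/O that falls into a partial stripe and would
    need a read/modify/write, given the assumptions above."""
    stripe = segment_size * data_drives     # data held by one full stripe
    leftover = io_size % stripe             # bytes that do not fill a stripe
    return leftover / io_size

# A 128k write against a 4+1 RAID-5 set with different segment sizes:
for seg_kb in (128, 64, 32):
    frac = partial_fraction(io_size=128 * 1024,
                            segment_size=seg_kb * 1024,
                            data_drives=4)
    print(f"segment={seg_kb}k -> partial-stripe fraction {frac:.2f}")

# Prints 1.00 for the 128k and 64k segments and 0.00 for 32k, i.e. only at
# the smaller segment size does the 128k write cover whole stripes and skip
# the read/modify/write.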
In case you're still interested, I did do a firmware build based on 3.2.7 that:
1) Allows 14 volumes (aka RAID groups) to be defined per tray (I had to limit
the tray count to 2 to avoid gobs of restructuring...)
BTW, this requires you to wipe the disk labels and you lose everything...but I
figure
Much of the complexity in hardware RAID is in the fault detection, isolation,
and management. The fun part is trying to architect a fault-tolerant system
when the suppliers of the components cannot come close to enumerating most of
the possible failure modes.
What happens when a drive's perfo
Actually, the point is that there are situations in which the
typical software stack will make the wrong decision because it has no
concept of the underlying hardware and no fault management structure
that factors in more than just a single failed I/O at a time...
Hardware RAID control
The segment size is the amount of contiguous space that each drive contributes
to a single stripe.
So if you have a 5-drive RAID-5 set @ 128k segment size, a single stripe =
(5-1)*128k = 512k
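To spell that arithmetic out in code form (just a throwaway sketch, nothing array-specific):

# Data capacity of one full stripe: (drives - parity drives) * segment size.
def stripe_width_kb(drives, segment_kb, parity_drives=1):
    """RAID-5 keeps one segment per stripe for parity, so only
    (drives - 1) segments hold data."""
    return (drives - parity_drives) * segment_kb

print(stripe_width_kb(drives=5, segment_kb=128))   # 512, i.e. (5-1)*128k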
BTW, did you tweak the cache sync handling on the array?
-Joel
Bob,
Here is how you can tell the array to ignore cache sync commands and the force
unit access bits...(Sorry if it wraps..)
On a Solaris CAM install, the 'service' command is in "/opt/SUNWsefms/bin"
To read the current settings:
service -d arrayname -c read -q nvsram region=0xf2 host=0x00
sav
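The snippet above gets cut off right around the save step, so here is only a rough sketch of capturing the read output to a file before touching anything (my own wrapper, not part of CAM; the array name 'array1' and the output path are placeholders):

# Run the NVSRAM read shown above and keep a copy of the output so the
# original bytes can be referred back to later.  Only the exact command
# from the mail is used; everything else here is a placeholder.
import subprocess

cmd = ["/opt/SUNWsefms/bin/service", "-d", "array1", "-c", "read",
       "-q", "nvsram", "region=0xf2", "host=0x00"]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)

with open("/var/tmp/array1-nvsram-f2-before.txt", "w") as f:
    f.write(result.stdout)
print(result.stdout)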
It is the same for the 2530, and I am fairly certain it is also valid
for the 6130, 6140, & 6540.
-Joel
On Feb 18, 2008, at 3:51 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello Joel,
>
> Saturday, February 16, 2008, 4:09:11 PM, you wrote:
>
> JM> Bob,
>
> JM> Here is how you can tell th