I've been looking at using consumer 2.5" drives as well; I think the ones I've
settled on are the Hitachi 7K500 500 GB. These are 7200 RPM; I'm concerned the
5400 RPM models might be a little too slow. The main reason for Hitachi
was that its performance seems to be among the top 2 or 3 in the lapt
Use create-lu to give the clone a different GUID:
sbdadm create-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab
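Assuming a standard COMSTAR setup, you can then confirm the GUID and add a view for it (the view/host-group details below are just a guess at your config):
# list logical units and note the GUID assigned to the new LU
sbdadm list-lu
# expose the LU to initiators; add -h/-t if you use host/target groups
stmfadm add-view <guid-of-new-lu>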
--
Dave
On 2/8/10 10:34 AM, Scott Meilicke wrote:
Thanks Dan.
When I try the clone then import:
pfexec zfs clone
data01/san/gallardo/g...@zfs-auto-snap:monthly-2009-12-01-00
cess works for mounting cloned volumes under
linux with b130. I don't have any windows clients to test with.
--
Dave
On 2/8/10 11:23 AM, Scott Meilicke wrote:
Sure, but that will put me back into the original situation.
-Scott
Try:
zfs list -r -t snapshot zp1
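If I understand the error correctly, without -r the -t snapshot filter is applied to zp1 itself (a filesystem), hence the "operation not applicable" message; -r recurses into its children and lists their snapshots. A quick sketch, with optional output columns added:
# errors: zp1 itself is a filesystem, not a snapshot
zfs list -t snapshot zp1
# lists every snapshot in the pool
zfs list -r -t snapshot -o name,used,creation zp1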
--
Dave
On 2/21/10 5:23 PM, David Dyer-Bennet wrote:
I thought this was simple. Turns out not to be.
bash-3.2$ zfs list -t snapshot zp1
cannot open 'zp1': operation not applicable to datasets of this type
Fails equally on all the variants of pool
Can you provide some specifics to see how bad the writes are?
I just query for the percentage in use via snmp (net-snmp)
In my snmpd.conf I have:
extend .1.3.6.1.4.1.2021.60 drive15 /usr/gnu/bin/sh /opt/utils/zpools.ksh rpool
space
and the zpools.ksh is:
#!/bin/ksh
# report the capacity (% used) of the pool named in $1
export PATH=/usr/bin:/usr/sbin:/sbin
export LD_LIBRARY_PATH=/usr/lib
zpool list -H -o capacity "$1"
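If it helps, you should be able to sanity-check the extend entry with something like this (community string and hostname are placeholders; net-snmp publishes extend results under the OID given in snmpd.conf):
# walk the extend subtree configured above and look for the capacity output
snmpwalk -v2c -c public storagehost .1.3.6.1.4.1.2021.60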
I have a 14-drive pool arranged as 2x 7-drive raidz2 vdevs, with L2ARC and slog
devices attached.
A port went bad on one of my controllers (both are AOC-SAT2-MV8s), so I need to
replace it (I have no spare ports on either card). My spare controller is an LSI
1068-based 8-port card.
My plan is to remove the
handle a
drive failure or disconnect. :(
I don't think there's a bug filed for it. That would probably be the
first step to getting this resolved (might also post to storage-discuss).
--
Dave
Ross wrote:
> Has anybody here got any thoughts on how to resolve this problem:
> http://www.o
ch fixes some pretty important issues for thumper.
> Strongly suggest applying this patch to thumpers going forward.
> u6 will have the fixes by default.
>
I'm assuming the fixes listed in these patches are already committed in
OpenSolaris (b94 or greater)?
--
Dave
e good work, Tim. There are more users of your work out there
than you might think :)
--
Dave
Try something like this:
zfs set sharenfs=options mypool/mydata
where options is:
sharenfs="[EMAIL PROTECTED]/24:@10.9.9.5/32,[EMAIL PROTECTED]/24:@10.9.9.5/32"
--
Dave
Michael Stalnaker wrote:
> All;
>
> I’m sure I’m missing something basic here. I need to do the follo
think about this architecture? Could the gateway be a bottleneck?
> Do you have any other ideas or recommendations?
>
I have a setup similar to this. The most important thing I can recommend
is to create a mirrored zpool from the iSCSI disks.
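As a rough sketch of that layout (device names are made up; each c*t*d* would be an iSCSI LUN from a different gateway/target):
# mirror LUNs from different targets so one gateway failing doesn't take out the pool
zpool create tank mirror c4t600A0B8000123401d0 c5t600A0B8000567802d0
# grow capacity by adding more mirrored pairs
zpool add tank mirror c4t600A0B8000123403d0 c5t600A0B8000567804d0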
-Dave
shows up
then you can probably import it, but beware that there may be
incompatibilities and bugs in either the solaris or mac zfs code that
may cause you to lose your data.
--
Dave
LEES, Cooper wrote:
> M,
>
> Just taking a stab at it.
>
> Yes. This should work - well mou
Upgrading to b105 seems to improve zfs send/recv quite a bit. See this
thread:
http://www.opensolaris.org/jive/message.jspa?messageID=330988
--
Dave
Kok Fong Lau wrote:
> I have been using ZFS send and receive for a while and I noticed that when I
> try to do a send on a zfs file sys
D. Eckert wrote:
(...)
You don't move a pool with 'zfs umount'; that only unmounts a single ZFS
filesystem within a pool, and the pool stays active. 'zpool export'
releases the pool from the OS, then 'zpool import' picks it up on the other machine.
(...)
with all respect: I never read such a non logic
You can also import pools by their unique ID instead of by name. If the
pool is not imported, 'zpool import' with no arguments should list the
pool IDs. If the pool is imported, 'zpool get guid <poolname>' will list
the pool ID.
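A hedged example of the sequence (pool name and numeric ID are made up):
# with the pool exported, a bare import lists candidate pools and their numeric IDs
zpool import
# import by numeric ID, optionally giving it a new name to avoid a clash
zpool import 6931203936101214001 tank2
# for an already-imported pool, read the GUID directly
zpool get guid tank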
Beware that if the zpools have the same mountpoints set within any of
thei
Brent wrote:
Does anyone know if this card will work in a standard pci express slot?
Yes. I have an AOC-USAS-L8i working in a regular PCI-E slot in my Tyan
2927 motherboard.
The AOC-SAT2-MV8 also works in a regular PCI slot (although it is PCI-X
card).
Dave wrote:
Brent wrote:
Does anyone know if this card will work in a standard pci express slot?
Yes. I have an AOC-USAS-L8i working in a regular PCI-E slot in my Tyan
2927 motherboard.
The AOC-SAT2-MV8 also works in a regular PCI slot (although it is PCI-X
card).
Please let the list
Will Murnane wrote:
On Thu, Feb 12, 2009 at 20:05, Tim wrote:
Are you selectively ignoring responses to this thread or something? Dave
has already stated he *HAS IT WORKING TODAY*.
No, I saw that post. However, I saw one unequivocal "it doesn't work"
earlier (even if I c
, but I have nowhere near the ability to do so.
--
Dave
Henrik Johansson wrote:
I tried to export the zpool also, and I got this, the strange part is
that it sometimes still thinks that the ubuntu-01-dsk01 dataset exists:
# zpool export zpool01
cannot open 'zpool01/xvm/dsk/ubuntu-01-dsk01': dataset does not exist
cannot unmount '/zpool01/dump': De
Frank Cusack wrote:
When you try to backup the '/' part of the root pool, it will get
mounted on the altroot itself, which is of course already occupied.
At that point, the receive will fail.
So far as I can tell, mounting the received filesystem is the last
step in the process. So I guess mayb
nsistent! What I want is to disable this ZFS
behaviour and force it to wait until my cluster software decides about the
active server.
Use the cachefile=none option whenever you import the pool on either server:
zpool import -o cachefile=none xpool
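If the pool is already imported on a node, you can also set it as a property so it never lands in /etc/zfs/zpool.cache and won't be auto-imported at boot (pool name from the example above):
zpool set cachefile=none xpool
zpool get cachefile xpool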
--
Dave
ld not be the responsibility of ZFS. If you want to make
sure your data is not corrupted over the wire, use IPSec. If you want to
prevent corruption in RAM, use ECC sticks, etc.
--
Dave
Gary Mills wrote:
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote:
Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
"gm" == Gary Mills writes:
gm> I suppose my RFE for two-level ZFS should be included,
It's a simply a consequence o
en snapshots. If this could be done at
the ZFS level instead of the application level it would be very cool.
--
Dave
C-USAS-L8i.cfm
This is the low profile card that will fit in a 2U:
http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
They both work in normal PCI-E slots on my Tyan 2927 mobos.
Finding good non-Sun hardware that works very well under OpenSolaris is
frustrating to say the least.
Carson Gaspar wrote:
Tim wrote (although it wasn't his error originally):
Unless you want to have a different response for each of the repair
methods, I'd just drop that part:
status: One or more devices has experienced an error. The error has been
automatically corrected by zfs.
s in a txg group waiting to be committed to the main pool
vdevs - you will never know if you lost any data or not.
I think this thread is the latest discussion about slogs and their behavior:
https://opensolaris.org/jive/thread.jspa?threadID=102392&tstart=0
--
Dave
Eric Schrock wrote:
On May 19, 2009, at 12:57 PM, Dave wrote:
If you don't have mirrored slogs and the slog fails, you may lose any
data that was in a txg group waiting to be committed to the main pool
vdevs - you will never know if you lost any data or not.
None of the above is co
slog.
-- richard
I can't test this myself at the moment, but the reporter of Bug ID
6733267 says even one failed slog from a pair of mirrored slogs will
prevent an exported zpool from being imported. Has anyone tested this
recently?
--
Dave
Haudy Kazemi wrote:
I think a better question would be: what kind of tests would be most
promising for turning some subclass of these lost pools reported on
the mailing list into an actionable bug?
my first bet would be writing tools that test for ignored sync cache
commands leading to lost
Anyone (Ross?) creating ZFS pools over iSCSI connections will want to
pay attention to snv_121 which fixes the 3 minute hang after iSCSI disk
problems:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=649
Yay!
Can anyone from Sun comment on the status/priority of bug ID 6761786?
Seems like this would be a very high priority bug, but it hasn't been
updated since Oct 2008.
Has anyone else with thousands of volume snapshots experienced the hours
long import process?
--
shots. According to the link/bug
report above it will take roughly 5.5 hours to import my pool (even when
the pool is operating perfectly fine and is not degraded or faulted).
This is obviously unacceptable to anyone in an HA environment. Hopefully
someone close to the issue can clarify.
--
Dave
ation
that it has been fixed in OpenSolaris as well. I can't tell by the info
on the bugs DB - it seems like it hasn't been fixed in OpenSolaris. If
it has, then the status should reflect it as Fixed/Closed in the bug
database...
--
Dave
Trevor Pretty wrote:
Dave
Yep that's
Richard Elling wrote:
On Aug 28, 2009, at 12:15 AM, Dave wrote:
Thanks, Trevor. I understand the RFE/CR distinction. What I don't
understand is how this is not a bug that should be fixed in all
solaris versions.
In a former life, I worked at Sun to identify things like this that
a
s as the
zfs send stream. It does not verify the ZFS format/integrity of the
stream - the only way to do that is to zfs recv the stream into ZFS.
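To illustrate (file and dataset names are hypothetical): checksumming the stream file only proves the copy is byte-identical; receiving it is what exercises the ZFS-level checks.
# verifies the file copy only, not the ZFS contents
digest -a sha1 backup.zfs
# actually validates the stream by replaying it into a scratch dataset
zfs receive -u scratch/verify < backup.zfs && zfs destroy -r scratch/verify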
--
Dave
Thanks for the reply, but this seems to be a bit different.
A couple of things I failed to mention:
1) this is a secondary pool, not the root pool.
2) the snapshots are trimmed to keep only about 80.
The system boots and runs fine. It's just an issue for this secondary pool
and filesystem.
Hello all,
I have a situation where zpool status shows no known data errors, but all
processes on a specific filesystem are hung. This has happened twice before
since we installed OpenSolaris 2009.06 snv_111b. For instance, there are two
file systems in this pool; 'zfs get all' on one fil
> The case has been identified and I've just received
> an IDR,which I will
> test next week. I've been told the issue is fixed in
> update 8, but I'm
> not sure if there is an nv fix target.
>
Anyone know if there is an OpenSolaris fix for this issue, and when?
These seem to be related.
htt
s. Within the electronic
discovery and records and information management space, data deduplication and
policy-based aging are the foremost topics of the day, but this is at the file
level; block-level deduplication would lend no benefit there regardless.
-=dave
equivalently low to the
checksum collision probability.
-=dave
on my Tyan MB with the MCP55
chipset. I bought Supermicro AOC-SAT2-MV8s and moved all my disks to
them. Haven't had a problem since.
http://de.opensolaris.org/jive/thread.jspa?messageID=204736
--
Dave
On 05/03/2008 01:44 PM, Simon Breden wrote:
> @Max: I've not tried this wi
o be stopped for
a disk failure?
--
Dave
On 05/08/2008 11:29 AM, Luke Scharf wrote:
> Dave wrote:
>> On 05/08/2008 08:11 AM, Ross wrote:
>>
>>> It may be an obvious point, but are you aware that snapshots need to
>>> be stopped any time a disk fails? It's something to consider if
>From pages 29,83,86,90 and 284 of the 10/09 Solaris ZFS Administration
guide, it sounds like a disk designated as a hot spare will:
1. Automatically take the place of a bad drive when needed
2. The spare will automatically be detached back to the spare
pool when a new device is inserted and bro
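For context, a rough sketch of wiring up a spare shared between two pools, per the guide (pool and device names are hypothetical):
# add the same disk as a spare to both pools
zpool add tank1 spare c5t9d0
zpool add tank2 spare c5t9d0
# with autoreplace on, a new disk in the failed slot is rebuilt automatically
# and the spare returns to the spare pool
zpool set autoreplace=on tank1
zpool set autoreplace=on tank2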
> Hi Dave,
>
> I'm unclear about the autoreplace behavior with one
> spare that is
> connected to two pools. I don't see how it could work
> if the autoreplace
> property is enabled on both pools, which formats and
> replaces a spare
Because I already partit
ear to offer greater
drive densities, but a quick Google search shows that they've overpromised
and underdelivered on Solaris support in the past. Is anybody currently
using those cards on OpenSolaris?
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media h
d if they fudge compatibility
information on one product....)
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
Hi all,
I'm planning a new build based on a SuperMicro chassis with 16 bays. I am
looking to use up to 4 of the bays for SSD devices.
After reading many posts about SSDs I believe I have a _basic_ understanding of
a reasonable approach to utilizing SSDs for ZIL and L2ARC.
Namely:
ZIL: Intel
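Whichever devices end up being chosen, attaching them is the same operation; a hedged sketch (device names made up):
# mirrored slog (ZIL) on two small SSDs
zpool add tank log mirror c7t0d0 c7t1d0
# L2ARC needs no redundancy; losing a cache device only costs cache hits
zpool add tank cache c7t2d0 c7t3d0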
> > From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Edward Ned
> Harvey
> >
> > > From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-
> > > boun...@opensolaris.org] On Behalf Of D
Ok, so originally I presented the X-25E as a "reasonable" approach. After
reading the follow-ups, I'm second guessing my statement.
Any decent alternatives at a reasonable price?
>
> On 18 apr 2010, at 00.52, Dave Vrona wrote:
>
> > Ok, so originally I presented the X-25E as a
> "reasonable" approach. After reading the follow-ups,
> I'm second guessing my statement.
> >
> > Any decent alternatives at a reasonable price?
The Acard device mentioned in this thread looks interesting:
http://opensolaris.org/jive/thread.jspa?messageID=401719
Or, DDRDrive X1 ? Would the X1 need to be mirrored?
> IMHO, whether a dedicated log device needs redundancy
> (mirrored), should
> be determined by the dynamics of each end-user
> environment (zpool version,
> goals/priorities, and budget).
>
Well, I populate a chassis with dual HBAs because my _perception_ is they tend
to fail more than other ca
Ethereal); works great for me. It does require X11
on your machine.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
On 4/25/10 6:11 PM, "Rich Teer" wrote:
> I tried going to that URL, but got a 404 error... :-( What's the correct
> one, please?
<http://code.google.com/p/maczfs/>
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media
ories
(lots of small writes/reads), how much benefit will I see from the SAS
interface?
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
being enough to make the hardware ID it as bad.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
Is there a good resource on doing
something like that with an OpenSolaris storage server? I could see that as
a project I might want to attempt.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
Data loss may be unavoidable, but that's why we keep backups. It's the
invisible data loss that makes life suboptimal.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
, described in the
administration guide:
http://wikis.sun.com/display/FishWorks/Documentation
-- Dave
--
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
I trimmed, and then got complained at by a mailing list user that the context
of what I was replying to was missing. Can't win :P
ithout my
having to send multiple emails to multiple addresses-- may yet push me back
to my default CentOS platform, but to the extent that Oracle is even in the
running it's because of ZFS.)
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
AID
10.
*You can't really compare ZFS to conventional RAID implementations, but if
you look at it from 50,000 feet and squint you get the similarities.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
> Ok guys, can we please kill this thread about commodity versus enterprise
> hardware?
+1
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
Looks like the bug affects through snv_137. Patches are available from the
usual location-- <https://pkg.sun.com/opensolaris/support> for OpenSolaris.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmed
ut I
don't expect any different outcome.
Any other ideas?
Is it possible that snapshots were renamed on the sending pool during
the send operation?
-- Dave
--
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
David Dyer-Bennet wrote:
On Tue, August 10, 2010 13:23, Dave Pacheco wrote:
David Dyer-Bennet wrote:
My full backup still doesn't complete. However, instead of hanging the
entire disk subsystem as it did on 111b, it now issues error messages.
Errors at the end.
[...]
cannot re
ill be a lot more interest in the BTRFS project,
much of it from the same folks who have experience producing
enterprise-grade ZFS. Speaking for myself, if Solaris 11 doesn't include
COMSTAR I'm going to have to take a serious look at another alternative for
our show storage towers
--
Thanks for taking the time to write this - very useful info :)
0.0  191.0    0.0 1816.2  0.0  0.1  0.0  0.5   0   6 c9t0d0
0.0  191.0    0.0 1816.2  0.0  0.1  0.0  0.5   0   6 c9t1d0
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
nd possibly grab some dinner while I'm about it. I'll report back to the
list with any progress or lack thereof.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
> on my motherboard, i can make the onboard sata ports show up as IDE or SATA,
> you may look into that. It would probably be something like AHCI mode.
Yeah, I changed the motherboard setting from "enhanced" to AHCI and now
those ports show up as SATA.
--
Dave Pooser,
.06 where iSCSI (and apparently FC) forced all writes to be
synchronous -- thanks to Richard for that pointer.
Five hours from tearing my hair out to toasting a success-- this list is a
great resource!
--
Dave Pooser, ACSA
Manager of Information Service
LSI 3018 PCIe SATA controllers (latest IT firmware)
8x 2TB Hitachi 7200RPM SATA drives (2 connected to each LSI and 2 to
motherboard SATA ports)
2x 60GB Imation M-class SSD (boot mirror)
Qlogic 2440 PCIe Fibre Channel HBA
--
Dave Pooser, ACSA
Manager of Information Services
Alford Me
at's most important is sequential write speed.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
dual disks and ZFS can handle redundancy and
recovery.
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11,
will zpool let me create a new pool with ashift=12 out of the box or will
I need to play around with a patched zpool binary (or the iSCSI loopback)?
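Whatever the answer turns out to be, the result can at least be checked after creating the pool; a hedged check (pool name made up):
# zdb prints the cached pool config, including the ashift of each top-level vdev
zdb -C tank | grep ashift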
--
Dave Pooser
Manager of Information Services
Alford Media http
t, but I'd like to get rid of that error.
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
paring zfs list -t snapshot and looking at
the 5.34 ref for the snapshot vs zfs list on the new system and looking at
space used.)
Is this a problem? Should I be panicking yet?
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
export another
volume on a second zpool and then let the Mac copy from one zvol to the
other-- this is starting to feel like voodoo here.)
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
les from one zvol to the other. (Leaning toward
option 3 because the files are mostly largish graphics files and the like.)
Thanks for the help!
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
play with in the Jan-Feb timeframe, but as of
now I have no knowledge of that subject.
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
y understanding is that the combination of SATA drives
and SAS expanders is a large economy-sized bucket of pain.
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
0
c11t6d0 ONLINE 0 0 0
c11t7d0 ONLINE 0 0 0
c11t8d0 ONLINE 0 0 0
errors: 1 data errors, use '-v' for a list
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
: Read and write I/Os cannot be serviced.
Action : Make sure the affected devices are connected, then run
'zpool clear'.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
rors: 9
Vendor: ATA Product: Hitachi HDS72202 Revision: A20N Serial No:
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
--
Dave Pooser, ACSA
Manager of Information Services
Alford Medi
't catch any errors last scrub), all on the same controller-- well, that seems
much less likely than the idea that I just have a bad controller that needs
replacing.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
files are on a zfs server the same files fail to
> play.
>
> Is it a local phenomena or a common problem?
We don't have that problem, and we have roughly 25TB of QuickTime files on
an OpenSolaris box shared over CIFS to mostly Mac clients.
--
Dave Pooser, ACSA
Manager of
): ^C
Any suggestions?
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
86,340e@7/pci1000,3020@0/iport@8/disk@w5000cca222e046b7,0
9. c21t5000CCA222E0533Fd0
/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@20/disk@w5000cca222e0533f,0
So now I'm more baffled than I started. Any other suggestions will be
gratefully accepted...
--
Dave Pooser, ACSA
Manager
On 2/27/11 5:15 AM, "James C. McPherson" wrote:
>On 27/02/11 05:24 PM, Dave Pooser wrote:
>>On 2/26/11 7:43 PM, "Bill Sommerfeld" wrote:
>>
>>>On your system, c12 is the mpxio virtual controller; any disk which is
>>>potentially multipath-a
. IMHO this looks more
>like a design flaw in the driver code
Especially since the SAS3081 cards work as expected. I guess I'll start
looking for some more of the 3Gb SAS controllers and chalk the 9211s up as
a failed bit.
--
Dave Pooser, ACSA
Manager of Information Services
"a failed bit".
I'd like to revise and extend my remarks and replace that with "a
suboptimal choice for this project." In fact, if I can't make this work my
backup plan is to take some of my storage towers that have only one HBA,
put the 9211s in them and gra
here's a 340b@4 and a 340d@6 if I add more drives and try 'format'
again?)
>>I'd like to revise and extend my remarks and replace that with "a
>>suboptimal choice for this project."
>Not knowing your other requirements for the project, I'll settle
or all your help-- not only can I fully, unequivocally retract my
"failed bit" crack, but I just ordered two more of these cards for my next
project! :^)
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com <http://www.alfordmedia.com/>
,340c@5 even if other
controllers are active?
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com