used or
as extra swap ?
Yes. This is what I do at home, and what we do on the onnv
gate machines - we've got swap in rpool and a separate,
dedicated, swap pool.
Would this have any performance implications ?
Negative performance implications? none that I know of.
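For anyone wanting to try the same layout, here is a minimal sketch (the pool, volume and disk names are only examples): create a pool on the spare disk, carve a zvol out of it, and add that zvol as swap.

# zpool create swappool c2t1d0
# zfs create -V 4G swappool/swap
# swap -a /dev/zvol/dsk/swappool/swap
# swap -l

Add the zvol to /etc/vfstab as a swap entry if you want it to persist across reboots.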
James C. McPherson
--
S
ntroduced until build
snv_118.
So you could either wait until 2010.$spring comes out,
or start using the /dev repo instead.
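Switching to the /dev repository is roughly a publisher change followed by an image update (assuming the default opensolaris.org publisher and the repo URL of the time):

# pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
# pkg image-update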
hth,
James C. McPherson
--
Senior Software Engineer, Solaris
Sun Microsystems
http://www.jmcp.homeunix.com/blog
On 8/03/10 01:42 AM, Tim Cook wrote:
On Sun, Mar 7, 2010 at 3:12 AM, James C. McPherson <j...@opensolaris.org> wrote:
On 7/03/10 12:28 PM, norm.tallant wrote:
I'm about to try it! My LSI SAS 9211-8i should arrive Monday or
Tuesday. I bought the
user    0m0.458s
sys     0m5.260s
James C. McPherson
--
Senior Software Engineer, Solaris
Sun Microsystems
http://www.jmcp.homeunix.com/blog
sata
Memory 16 GB
Processor 1GHz, 6 core
Solaris 10 8/07 s10s_u4wos_12b SPARC
Since you are seeing this on a Solaris 10 update
release, you should log a call with your support
provider to get this investigated.
James C. McPherson
--
Senior Software Engineer, Solaris
Sun Microsystems
,010700)(pciclass,0107)
driver name:mr_sas
This should be using the mpt_sas driver, not the mr_sas driver.
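A quick way to confirm which driver a node has actually bound to, and whether the card's PCI id is aliased to the wrong driver (the grep patterns here are just illustrative):

# prtconf -D | grep -i sas
# egrep 'mpt_sas|mr_sas' /etc/driver_aliases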
James C. McPherson
--
Senior Software Engineer, Solaris
Sun Microsystems
http://www.jmcp.homeunix.com/blog
NTERPRISEY"
over and over again :-)
I don't know of any other specific difference between "Enterprise
SATA" and "SAS" drives.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
.
Note that not all of those will be applicable for ZFS.
You should read the ZFS Best Practices Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
and the ZFS Config Guide too
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide
James C. McPherson
n't worry, I've re-opened it.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
what of a work in progress in
> regards to being 'production ready'.
What metric are you using for "production ready" ?
Are there features missing which you expect to see
in the driver, or is it just "oh noes, I haven't
seen enough big customers with it" ?
ld on this?
I believe you are talking through your hat.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
bunch of FUD cast on a driver that
I worked on (mpt_sas), and I'm still trying to find out from
you and others what you think is a problem with it.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
' +
'scsiclass,00.vATA.pST3320620AS' + 'scsa,00.bmpt' + 'scsiclass,00' +
'scsiclass'
name='lun' type=int items=1
value=
name='target' type=int items=1
value=0003
[extra output elided]
On 2/06/10 11:39 AM, Fred Liu wrote:
Thanks.
No.
If you must disable MPxIO, then you do so after installation,
using the stmsboot command.
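In its simplest form that's a single command followed by the reboot it asks for (a sketch only, read the manpage for your release first):

# stmsboot -d

(-e turns it back on, and -L shows the mappings between the non-MPxIO and MPxIO device names.)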
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
see any plain old targets, or
that no disk devices of any sort show up in your host when
you are installing?
What is your actual problem, and why do you think that
turning off MPxIO will solve it?
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/b
connected    configured   unknown
Client Device: /dev/dsk/c5t50014EE1007EE473d0s0 (sd39)
unavailable disk-path
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::6,0
No need to use luxadm.
James C. McPherson
--
Senior Software
as the old device.
It really shouldn't be a problem for you.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
myself.
Could we all please STOP RESPONDING to this thread?
It's not about ZFS at all.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
ripts are
considerably faster running when I don't have to traverse
whole directory trees (ala ufs).
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
On 21/06/10 10:38 PM, Edward Ned Harvey wrote:
From: James C. McPherson [mailto:j...@opensolaris.org]
On the build systems that I maintain inside the firewall,
we mandate one filesystem per user, which is a very great
boon for system administration.
What's the reasoning behi
On 22/06/10 01:05 AM, Fredrich Maney wrote:
On Mon, Jun 21, 2010 at 8:59 AM, James C. McPherson
wrote:
[...]
So when I'm
trying to figure out who I need to yell at because they're
using more than our acceptable limit (30Gb), I have to run
"du -s /builds/[zyx]". And that
On 3/07/10 12:25 PM, Richard Elling wrote:
On Jul 2, 2010, at 6:48 PM, Tim Cook wrote:
Given that the most basic of functionality was broken in Nexenta, and not
Opensolaris, and I couldn't get a single response, I have a hard time
recommending ANYONE go to Nexenta. It's great they're employi
closedbins are you using,
which crypto bits are you using, and what changeset is your own workspace
synced with?
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
have configured for them within the pool should
stick around over an export/import operation. If they
don't, I would be very, very surprised.
[note: everybody was a noob at some point]
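An easy way to convince yourself, using quota as the example property and a hypothetical pool name:

# zfs get -r quota tank > /tmp/before
# zpool export tank
# zpool import tank
# zfs get -r quota tank > /tmp/after
# diff /tmp/before /tmp/after

No output from diff means the settings survived the round trip.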
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
NLINE 0 0 0
c5t50024E90037AF38Cd0s0 ONLINE 0 0 0
errors: No known data errors
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
Trevor Pretty wrote:
I think James said there were audio problems and that's why it took so
long to get published.
Well, one of the reasons. The other, more major, reason is that there's
been a heckuvalot of video generated lately that we want to get up on
slx.sun.com etc, and we don't have a
of these adapters? Thought I'd ask before forking out
for a SATA DVD drive - just hate to put perfectly good drives
out for recycling.
It might work. It certainly wouldn't hurt to try.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
h
Adam Cheal wrote:
Cindy: How can I view the bug report you referenced? Standard methods
show me that the bug number is valid (6694909) but no content or notes. We are
having similar messages appear with snv_118 with a busy LSI controller,
especially during scrubbing, and I'd be interested to see what
Adam Cheal wrote:
James: We are running Phase 16 on our LSISAS3801E's, and have also tried
the recently released Phase 17 but it didn't help. All firmware NVRAM
settings are default. Basically, when we put the disks behind this
controller under load (e.g. scrubbing, recursive ls on large ZFS
file
the incoming queues all day yesterday for the
bug, but missed seeing it, not sure why.
I've now moved the bug to the appropriate category so it will
get attention from the right people.
Thanks,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blo
Dennis Clarke wrote:
I just went through a BFU update to snv_127 on a V880 :
neptune console login: root
Password:
Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console
Last login: Mon Nov 2 16:40:36 on console
Sun Microsystems Inc. SunOS 5.11 snv_127 Nov. 02, 2009
SunOS Internal Develo
sun.com or
pkg.opensolaris.org by early December.
cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
Roman Naumenko wrote:
Interesting stuff.
By the way, is there a place to watch the latest news like this on zfs/opensolaris?
rss maybe?
You could subscribe to onnv-not...@opensolaris.org...
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com
/view/Community+Group+on/flag-days
The flag days page has not been updated since the switch
to XWiki, it's on my todo list but I don't have an ETA
for when it'll be done.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/
Roman Naumenko wrote:
James C. McPherson wrote, On 09-11-09 04:40 PM:
Roman Naumenko wrote:
Interesting stuff.
By the way, is there a place to watch the latest news like this on
zfs/opensolaris?
rss maybe?
You could subscribe to onnv-not...@opensolaris.org...
James C. McPherson
l/zfs-discuss/2009-November/033672.html
On Mon Nov 9 14:26:54 PST 2009, James C. McPherson wrote:
The flag days page has not been updated since the switch
to XWiki, it's on my todo list but I don't have an ETA
for when it'll be done.
Perhaps anyone interested in seeing the flags d
solaris.org. If you don't, we
don't know that there might be a problem outside of the ones
that we identify.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
Travis Tabbal wrote:
On Wed, Nov 11, 2009 at 10:25 PM, James C. McPherson
<j...@opensolaris.org> wrote:
The first step towards "acknowledging" that there is a problem
is you logging a bug in bugs.opensolaris.org
<http://bugs.opensolaris.org>. If y
and run the VMs on another box?
Hi Travis,
your bug showed up - it's 6900767. Since bugs.opensolaris.org
isn't a "live" system, you won't be able to see it at
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6900767
until tomorrow.
cheers,
James C. Mc
ch zpool) that would help us sort through and find
any commonalities and hopefully a fix.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
ch tells you that this is direct-access device (device type 0),
has a non-standard revision id field (2SS00_01) of which we take
the last 4 bytes as the actual revision field, and the vendor and
product ids. The devid information helps here too.
James C. McPherson
--
Senior Kernel Software
Thankyou to all who've provided data about this. I've updated
the bugs mentioned earlier and I believe we can now make progress
on diagnosis.
The new synopsis (should show up on b.o.o tomorrow) is as follows:
6894775 mpt's msi support is suboptimal with xVM
James C. McPh
numerate as many disks as
it is able to.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
added this to the public comments field of the CR, and removed
the reference to xVM from the synopsis - hopefully the mail gateway
will send your copy reasonably soon :-)
Best regards,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp
ut.
I think that's sufficient to go on for the moment, thankyou.
cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
uite long right now.
The reboot command should have automatically run bootadm update-archive
for you. I have this habit of running it by hand whenever I change a
driver or /etc/system, to make sure that I have an up-to-date boot archive
from that point in time onwards.
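That habit amounts to nothing more than the following; the grep is just an illustrative way of checking that the bits you care about made it into the archive:

# bootadm update-archive
# bootadm list-archive | grep mpt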
James C. McPherson
--
Senior
Tru Huynh wrote:
On Sat, Nov 21, 2009 at 07:08:20PM +1000, James C. McPherson wrote:
If you and everybody else who is seeing this problem could provide
details about your configuration (output from cfgadm -lva, raidctl
-l, prtconf -v, what your zpool configs are, and the firmware rev
of each
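Something along these lines gathers most of that in one pass (the output file names are just a suggestion):

# cfgadm -lva > /tmp/cfgadm-lva.out
# raidctl -l > /tmp/raidctl-l.out
# prtconf -v > /tmp/prtconf-v.out
# zpool status -v > /tmp/zpool-status.out
# iostat -En > /tmp/iostat-En.out        (reports per-device firmware revision, among other things)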
root cause here, we're just trying to nail down specifics of what
seems to be a likely cause.
thankyou in advance,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.ho
h two LSI SAS3081E-R PCI-E 8 port SAS controllers, with
8 drives each.
Are these disks internal to your server's chassis, or external in
a jbod? If in a jbod, which one? Also, which cables are you using?
thankyou,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsy
Chad Cantwell wrote:
Hi,
Replied to your previous general query already, but in summary, they are in the
server chassis. It's a Chenbro 16 hotswap bay case. It has 4 mini backplanes
that each connect via an SFF-8087 cable (1m) to my LSI cards (2 cables / 8
drives
per card).
Hi Chad,
thanks
init the config space
(that's the pci bus config space), then you've got about
1/2 the nails in the coffin hammered in. Then the failure
to restart the IOC (io controller unit) == the rest of
the lid hammered down.
best regards,
James C. McPherson
--
Senior Kernel Software Engine
Tru Huynh wrote:
follow up, another crash today.
On Mon, Nov 30, 2009 at 11:35:07AM +0100, Tru Huynh wrote:
1) OS
SunOS xargos.bis.pasteur.fr 5.10 Generic_141445-09 i86pc i386 i86pc
You should be logging a support call for this issue.
James C. McPherson
--
Senior Kernel Software Engineer
Dennis Clarke wrote:
FYI,
OpenSolaris b128a is available for download or image-update from the
dev repository. Enjoy.
I thought that dedupe has been out for weeks now ?
The source has, yes. But what Richard was referring to was the
respun build now available via IPS.
cheers,
James C
om Murayama-san) drivers.
As another comment in this thread has mentioned, a full scrub
can be a serious test of your hardware depending on how much
data you've got to walk over. If you can keep the hardware
variables to a minimum then clarity will be more achievable.
thankyou,
James C. McPher
proving so is, however, another thing
entirely.
Could you send the output from prtconf -v for your host please,
so that we can have a look at the vital information for the
enclosure services and SMP nodes that the SAS Expander presents?
thankyou,
James C. McPherson
--
Senior Kernel Software
.com/~jmcp/WhatIsAGuid.pdf
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
supported ?
Anything else I should know before buying one of these cards ?
These cards work very well with OpenSolaris, and attach using
the mpt(7d) driver - supports hotplugging and MPxIO too.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp
-USAS2-L8i
2. LSI SAS 9211-8i
No, the 2nd generation non-RAID LSI SAS controllers make
use of the mpt_sas(7d).
Second generation RAID LSI SAS controllers use mr_sas(7d).
Code for both of these drivers is Open and you can find
it on src.opensolaris.org.
James C. McPherson
--
Senior Kernel Software
find these drivers and chips to be
up to the task.
If you do come across problems, please bring it up in storage-discuss
or zfs-discuss, and if necessary file a bug on bugs.opensolaris.org
solaris/driver/mpt-sas, and solaris/driver/mr_sas are the two subcats
that you'll need in that case
ntial part of the whole
picture that is the SS7000 appliance series, as well as the
J4x00 series.
Personally, I'm quite happy with the LSISAS3081E that I have
installed in my system, with the attached 320Gb consumer-grade
SATA2 disks.
James C. McPherson
--
Senior Kernel Software Eng
n/view/Community+Group+on/2009052003
and the update to the pluggable fwflash spec is
http://arc.opensolaris.org/caselog/PSARC/2009/163/
original pluggable fwflash case
http://arc.opensolaris.org/caselog/PSARC/2008/151
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystem
stop assuming that all this only costs a few pennies.
It doesn't.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
sical path.
If the devid does not wander, then there is no need to look at the
physical path to open the device, hence there is no problem for ZFS.
Assuming that Areca's fix does in fact resolve this wandering problem,
then there is no problem elsewhere.
====
On 3/02/10 01:31 AM, Tonmaus wrote:
Hi James,
am I right to understand that in a nutshell the problem is that if
page 80/83 information is present but corrupt/inaccurate/forged (name
it as you want), zfs will not get down to the GUID?
Hi Tonmaus,
If page83 information is present, ZFS wi
ash
which was fixed in snv_122.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
_status=0x804b,
scsi_state=0xc
Feb 17 04:47:57 thecratewall scsi: [ID 365881 kern.info]
/p...@0,0/pci15ad,7...@15/pci1000,3...@0 (mpt_sas0):
Feb 17 04:47:57 thecratewall Log info 0x31110630 received for target 33.
Feb 17 04:47:57 thecratewall scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x
nt or utter
anything authoritative. You will just have to wait for the
official word to be announced - as will we all.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
The true root cause was traced to a chip that was supplied
to the manufacturer by a third party.
Personally, I'd start looking at the cables first - in my
experience they seem to incur more physical stress through the
connect/disconnect operations than HBAs.
James C. McP
, you
can match that up with info from iostat -En and/or prtconf -v.
hth,
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
conf, you need to utter
# /usr/sbin/stmsboot -u
in order for those changes to be correctly propagated.
You can (and should) read about this in the stmsboot(1m) manpage,
and there's more information available in my blog post
http://blogs.oracle.com/jmcp/entry/on_stmsboot_1m
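As a concrete sketch (which .conf file you touch depends on the HBA driver in question; mpt_sas.conf is only an example): add a line such as

mpxio-disable="yes";

to the driver's .conf file, then run

# /usr/sbin/stmsboot -u

and reboot when it prompts you.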
James C. McP
On 1/02/12 12:40 PM, Ragnar Sundblad wrote:
...
I still don't really get what stmsboot -u actually does (and if - and if
so how much - this differs between x86 and sparc).
Would it be impolite to ask you to elaborate on this a little?
Not at all. Here goes.
/usr/sbin/stmsboot -u arms the mpxi
e to see location information in format, and
using the diskinfo too.
Otherwise, if you're running S11, you could try using
/usr/lib/fm/fmd/fmti - a tool which blinks LEDs at you
and prompts for label confirmation.
James C. McPherson
--
Oracle
http://www.jmcp.home
On 12/06/12 06:40 AM, David Combs wrote:
Actual newsgroup for zfs-discuss?
Actually, no. Where's the value in having a newsgroup
as well as a mailing list?
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
d info... and we use devid's in
preference to physical paths.
James C. McPherson
--
Oracle
Systems / Solaris / Core
http://www.jmcpdotcom.com/blog
On 19/10/12 09:27 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of James C. McPherson
As far as I'm aware, having an rpool on multipathed devices is fine.
Even a yea
at to determine that
MPxIO isn't working.
James C. McPherson
--
Oracle
Systems / Solaris / Core
http://www.jmcpdotcom.com/blog
oting
should be enough for him to make significant progress.
James C. McPherson
--
Oracle
Systems / Solaris / Core
http://www.jmcpdotcom.com/blog
On 17/02/13 08:48 AM, Sašo Kiselkov wrote:
On 02/16/2013 10:47 PM, James C. McPherson wrote:
...
Whether that message winds up being something you need
to talk with Oracle about is entirely different.
He got a kernel panic on a completely legitimate operation (booting with
one half of the
1g_zfs-raidz
performance regression x86
which was fixed in snv_135.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
rather messy inside the kernel.
Do you have the panic stack trace we can look at, and/or a
crash dump?
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
On 8/10/10 03:28 PM, Anand Bhakthavatsala wrote:
...
--
*From:* James C. McPherson
*To:* Ramesh Babu
On 7/10/10 03:46 PM, Ramesh Babu wrote:
> I am trying to create a ZPool using a single Veritas volume. The host is go
runs under OSOL build134 or solaris10?
I can.
This card should attach using the mpt_sas(7d) driver.
This is *different* to the mpt(7d) driver.
PSARC 2008/443 Driver for LSI MPT2.0 compliant SAS controller
went into build 118.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
On 18/11/10 03:05 PM, Fred Liu wrote:
Yeah, no driver issue.
BTW, any new storage-controller-related drivers introduced in snv151a?
LSI seems the only one who works very closely with Oracle/Sun.
You would have to have a look at what's in the repo,
I'm not allowed to tell you :|
when you run it as root:
# /usr/lib/fm/fmd/fmtopo -V
If this doesn't work for you, then you'll have to resort to the
tried and tested use of dd to /dev/null for each disk, and see
which lights blink.
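The dd fallback is just something like this, run per disk while you watch the activity LEDs (the device name here is made up; use s2 or p0 if s0 doesn't exist on the disk):

# dd if=/dev/rdsk/c5t3d0s0 of=/dev/null bs=1024k count=1000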
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
also
supports parallel SCSI and has different firmware to what you have
in your 9211 card. The 9211 card is also 2nd generation SAS, not 1st
generation like the 3081.
Personally, having worked on the mpt_sas(7d) project, I'm disappointed
that you believe the card and its driver are &
le that since Linux and Windows are fairly closely tied
to the PC architecture, perhaps they do some bios calls to
try to figure out "correct order" mappings.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
On 28/02/11 02:08 AM, Dave Pooser wrote:
On 2/27/11 5:15 AM, "James C. McPherson" wrote:
On 27/02/11 05:24 PM, Dave Pooser wrote:
On 2/26/11 7:43 PM, "Bill Sommerfeld" wrote:
On your system, c12 is the mpxio virtual controller; any disk which is
potentially mult
On 28/02/11 12:46 PM, Dave Pooser wrote:
On 2/27/11 4:07 PM, "James C. McPherson" wrote:
...
PHY   iport@
 0       1
 1       2
 2       4
 3       8
 4      10
 5      20
 6      40
 7      80
OK, bear with me for a moment because I'm feeling extra dense this evening.
The PHY tells me which port
On 28/02/11 02:51 PM, Dave Pooser wrote:
On 2/27/11 10:06 PM, "James C. McPherson" wrote:
...
2nd controller
c16t5000CCA222DDD7BAd0
/pci@0,0/pci8086,340c@5/pci1000,3020@0/iport@2/disk@w5000cca222ddd7ba,0
3rd controller
c14t5000CCA222DF8FBEd0
/pci@0,0/pci8086,340e@7/pci1000,3020
On 1/03/11 03:00 AM, Dave Pooser wrote:
On 2/27/11 11:13 PM, "James C. McPherson" wrote:
/pci@0,0/pci8086,340c@5/pci1000,3020@0
and
/pci@0,0/pci8086,340e@7/pci1000,3020@0
which are in different slots on your motherboard and connected to
different PCI Express Root Ports - which s
icensed
driver and create problems for myself or my employer.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
which tracks the inclusion in a Solaris 10 Update.
I'd also like to know where you're getting your information from
on this topic.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
On 14/03/11 11:26 PM, Edward Ned Harvey wrote:
From: James C. McPherson [mailto:j...@opensolaris.org]
Sent: Monday, March 14, 2011 9:20 AM
Just for clarity:
The in-kernel CIFS service is indeed available in solaris 10.
Are you really, really sure about that? Please point the RFE number
which
Hi Bob,
thanks for the quick response. Comments inline below
Bob Friesenhahn wrote:
> On Thu, 17 Jul 2008, James C. McPherson wrote:
>> ...
>>> MPXIO is quite ugly and rough around the edges (at least compared
>>> with ZFS) but it works.
>>
>> Just curio
Bob Friesenhahn wrote:
> On Thu, 17 Jul 2008, James C. McPherson wrote:
>> I'm fairly sure that the long device names aspect won't change.
>>
>> I don't understand what you mean by "Odd requirement to update /etc/vfstab"
>> - when we turn on
on't see it yet; is
> there a better way than "text search for things I put in the bug" that
> I can check on the status of this bug?
when it does show up on bugs.opensolaris.org (in perhaps 24 hours
time), it'll be
6727026 -t flag for 'zfs destroy'
James C.
> # zpool create data mirror c4t1d0s0 c5t1d0s0
> cannot open '/dev/dsk/c4t1d0s0': I/O error
> # zpool create data mirror c5t1d0s0 c4t1d0s0
> cannot open '/dev/dsk/c5t1d0s0': I/O error
Do you have fdisk partitions and/or vtocs on those disks?
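Quick ways to check (the second is x86-only):

# prtvtoc /dev/rdsk/c4t1d0s0
# fdisk -W - /dev/rdsk/c4t1d0p0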
If you want to give the who