disks?
And what device driver is the controller using?
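(One way to check the driver binding, if I remember the option right,
is something like:
# prtconf -D | grep -i sata
...which should show the driver name against the controller node.)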
Thanks
Nigel Smith
--
ts are not off by one,
Unfortunately, there is little indication of any progress being made.
Maybe some other 'zfs-discuss' readers could try zdb on their pools,
if using a recent dev build, and see if they get a similar problem...
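(For anyone willing to try, even a plain run like:
# zdb yourpool
...where 'yourpool' is just a placeholder for your pool name, should be
enough to show whether zdb completes cleanly or falls over.)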
Thanks
Nigel Smith
# mdb core
Loading modules: [ libume
Hello Carsten
Have you examined the core dump file with mdb ::stack
to see if this gives a clue to what happened?
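(Roughly along these lines, assuming the core file is in the current
directory:
# mdb core
> ::status
> ::stack
...then ::quit to exit.)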
Regards
Nigel
--
The iSCSI COMSTAR Port Provider is not installed by default.
What release of OpenSolaris are you running?
If pre snv_133 then:
$ pfexec pkg install SUNWiscsit
For snv_133, I think it will be:
$ pfexec pkg install network/iscsi/target
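(After the install, I think you also need to enable the SMF service,
something like:
$ pfexec svcadm enable -r svc:/network/iscsi/target:default
...but check 'svcs | grep -i iscsi' for the exact FMRI on your build.)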
Regards
Nigel Smith
--
Hi Robert
Have a look at these links:
http://delicious.com/nwsmith/opensolaris-nas
Regards
Nigel Smith
--
Another thing you could check, which has been reported to
cause a problem, is whether the network or disk drivers share an interrupt
with a slow device, like say a USB device. So try:
# echo ::interrupts -d | mdb -k
... and look for multiple driver names on an INT#.
Regards
Nigel Smith
--
al test, direct to the hard drives, you could try 'dd',
with various transfer sizes. Some advice from BenR, here:
http://www.cuddletech.com/blog/pivot/entry.php?id=820
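For example (substitute your own device name - the c1t1d0s0 here is just
a placeholder):
# dd if=/dev/rdsk/c1t1d0s0 of=/dev/null bs=128k count=10000
...and repeat with bs=8k, 64k, 1024k etc. to see how the throughput varies.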
Regards
Nigel Smith
--
with Google, but
there are others:
http://serverfault.com/questions/13190/what-are-good-speeds-for-iscsi-and-nfs-over-1gb-ethernet
BTW, what sort of network card are you using,
as this can make a difference.
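(If you're not sure, running '$ dladm show-link' should list the interface
name, e.g. e1000g0, bge0, nge0 or similar, which points at the driver in use.)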
Regards
Nigel Smith
--
m 'prtconf -pv'.
If Native IDE is selected the ICH10 SATA interface should
appear as two controllers, the first for ports 0-3,
and the second for ports 4 & 5.
Regards
Nigel Smith
--
, but three drives are showing
high %b.
And strange that you have c7,c8,c9,c10,c11
which looks like FIVE controllers!
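(If you want to watch it live, something like '# iostat -xn 5' will show
%b against the cXtYdZ device names, which should make it easier to see
which controller the busy drives hang off.)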
Regards
Nigel Smith
--
posts regarding this have not been helpful,
as my only intention was to try to be helpful.
Best Regards
Nigel Smith
--
ags days page
resurrected can petition James to raise the priority on
his todo list.
Thanks
Nigel Smith
--
More ZFS goodness putback before close of play for snv_128.
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010768.html
http://hg.genunix.org/onnv-gate.hg/rev/216d8396182e
Regards
Nigel Smith
--
/src/uts/common/io/sata/adapters/
Regards
Nigel Smith
--
Hi Gary
I will let 'website-discuss' know about this problem.
They normally fix issues like that.
Those pages always seemed to just update automatically.
I guess it's related to the website transition.
Thanks
Nigel Smith
--
Hi Robert
I think you mean snv_128 not 126 :-)
6667683 need a way to rollback to an uberblock from a previous txg
http://bugs.opensolaris.org/view_bug.do?bug_id=6667683
http://hg.genunix.org/onnv-gate.hg/rev/8aac17999e4d
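(As I understand it, this surfaces as a recovery option on import,
something like '# zpool import -F poolname', plus a matching
'zpool clear -F' - but check the man pages on snv_128 for the exact syntax.)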
Regards
Nigel Smith
--
the dev
repository will be updated to snv_128.
Then we'll see if any bugs emerge as we all rush to test it out...
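(i.e. the usual '$ pfexec pkg image-update' against the /dev repository,
then a reboot into the new boot environment - assuming the command has
not changed in the meantime.)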
Regards
Nigel Smith
--
rg/message/j7av5b22dke2anui
http://markmail.org/thread/rdswnnqlk2f6q47k
http://opensolaris.org/jive/thread.jspa?threadID=79749
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-October/022815.html
Regards
Nigel Smith
--
careful:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-September/031434.html
Regards
Nigel Smith
--
nding that anyone
using raidz, raidz2, raidz3, should not upgrade to that release?
For the people who have already upgraded, presumably the
recommendation is that they should revert to a pre 121 BE.
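(Which, for anyone unsure how, should just be a matter of something like:
# beadm list
# beadm activate <name-of-the-older-BE>
...followed by a reboot.)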
Thanks
Nigel Smith
--
g
an open source project.
For instance, Sun's project for the Comstar iscsi target:
http://www.opensolaris.org/os/project/iser/
...where there was an open mailing list, where you
could see the developers making progress:
http://mail.opensolaris.org/pipermail/iser-dev/
Best Regards
Nigel
I would say that maybe Sun should have held back on
announcing the work on deduplication, as it just seems to
have ramped up frustration, now that it seems no
more news is forthcoming. It's easy to be wise after the event
and time will tell.
Thanks
Nigel Smith
--
s is being made,
or to actively help with code reviews or testing.
Best Regards
Nigel Smith
--
ost
hard drives fail to sync/flush correctly,
but AFAIK no one is saying how they know this.
Have they actually tested, and if so,
how have they tested? Or do they just know
because of bad experiences, having lost lots of data?
Best Regards
Nigel Smith
--
David Magda wrote:
> This is also (theoretically) why a drive purchased from Sun is more
> expensive than a drive purchased from your neighbourhood computer
> shop: Sun (and presumably other manufacturers) takes the time and
> effort to test things to make sure that when a drive says "I'
s.org/pipermail/zfs-discuss/2008-May/047270.html
Regards
Nigel Smith
--
g up/down in '/var/adm/messages'?
You are never going to do any good while that is happening.
I think you need to try a different network card in the server.
Regards
Nigel Smith
--
BTW, just run '/usr/X11/bin/scanpci' and identify the 'vendor id' and
'device id' for the network card, just in case it turns out to be a driver bug.
Regards
Nigel Smith
--
g and where.
It would be interesting to do two separate captures - one on the client
and one on the server, at the same time, as this would show if the
switch was causing disruption. Try to have the clocks on the client &
server synchronised as closely as possible.
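On the OpenSolaris side something like this should do (the interface
name and address here are just placeholders for your own):
# snoop -d e1000g0 -o /tmp/server.cap host <client-ip>
...with Wireshark or similar capturing on the client at the same time.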
Thanks
Nigel Smith
--
source"
bnx driver (B) Broadcom NetXtreme II Gigabit Ethernet driver
So the bnx driver is closed source :-(
Regards
Nigel Smith
--
are seeing the same problem with another client PC, then I guess we need
to suspect the 'switch' that connects them.
OK, those are my thoughts & conclusions for now.
Maybe you could get some more snoop captures with other clients, and
with a different switch, and do a similar analysi
ssion method used for this file is 98.
Please can you check it out, and if necessary use a more standard
compression algorithm.
Download File Size was 8,782,584 bytes.
Thanks
Nigel Smith
--
Hi Tano
Please check out my post on the storage-forum for another idea
to try which may give further clues:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006458.html
Best Regards
Nigel Smith
--
ving 'smartctl' (fully) working with PATA and
SATA drives on x86 Solaris.
I've done a quick search on PSARC 2007/660 and it was
"closed approved fast-track 11/28/2007".
But I could not find any code that had been
committed to 'onnv-gate' that re
'status' of your zpool on Server2?
(You have not provided a 'zpool status')
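(i.e. the output of something like:
# zpool status -v
...run on Server2, is what would be worth posting.)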
Thanks
Nigel Smith
--
rc/caselog/2007/660/onepager/
http://bugs.opensolaris.org/view_bug.do?bug_id=5044205
Regards
Nigel Smith
--
working system.
If you're using Solaris, maybe try 'prtvtoc'.
http://docs.sun.com/app/docs/doc/819-2240/prtvtoc-1m?a=view
(Unless someone knows a better way?)
Thanks
Nigel Smith
# prtvtoc /dev/rdsk/c1t1d0
* /dev/rdsk/c1t1d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 1465149168 s
-discuss/2008-October/052136.html
Does the OpenSolaris box give any indication of being busy with other things?
Try running 'prstat' to see if it gives any clues.
Presumably you are using ZFS as the backing store for iScsi, in
which case, maybe try with a UFS formatted disk to see if that is
direction.
(On OpenSolaris, Iperf was not able to increase
the default TCP window size of 48K bytes.)
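(If you want to experiment, iperf does let you ask for a larger window
explicitly, e.g. '$ iperf -c <server-ip> -w 256k' - though whether the OS
actually grants the request is another matter.)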
Regards
Nigel Smith
--
he network.
Control-C to stop the capture.
You can then use Ethereal or Wireshark to analyze the capture file.
On the 'Analyze' menu, select 'Expert Info'.
This will look through all the packets and will report
any warning or errors it sees.
Regards
Nigel
Hi Tano
I will have a look at your snoop file.
(Tomorrow now, as it's late in the UK!)
I will send you my email address.
Thanks
Nigel Smith
--
testing
with the iscsi target and various initiators, including Linux.
I have found the snv_93 and snv_97 iscsi target to work
well with the VMware ESX and Microsoft initiators.
So it is a surprise to see these problems occurring.
Maybe some of the more recent builds snv_98, 99 have
'fi
the bottom of the root cause.
Following Eugene's report, I'm beginning to fear that some sort of regression
has been introduced into the iscsi target code...
Regards
Nigel Smith
--
the path.
Maybe you could check the Solaris iScsi target works ok under stress
from something other than ESX, like say the Windows iscsi initiator.
Regards
Nigel Smith
--
right, having a
mirrored pair of identical hard drives would not help,
as the bios update may cause an identical problem
with each drive.)
Good Luck
Nigel Smith
--
onnv-gate.hg/rev/29862a7558ef
http://hg.genunix.org/onnv-gate.hg/rev/5b422642546a
Tano, based on the above, I would say you need
unique GUIDs for two separate Targets/LUNs.
Best Regards
Nigel Smith
http://nwsmith.blogspot.com/
--
at output.
The log files seem to show the iscsi session has dropped out,
and the initiator is auto retrying to connect to the target,
but failing. It may help to get a packet capture at this stage
to try & see why the logon is failing.
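(For example, on the target box, something like:
# snoop -o /tmp/iscsi.cap port 3260
...then look at the iscsi login request/response PDUs in Wireshark - the
status codes in the login response usually say why it was rejected.)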
Regards
Nigel Smith
--
Presumably the labels are somehow confused,
especially for your USB drives :-(
Regards
Nigel Smith
--
If you decide to try a different SATA controller card, possible options are:
1. The si3124 driver, which supports SiI-3132 (PCI-E)
and SiI-3124 (PCI-X) devices.
2. The AHCI driver, which supports the Intel ICH6 and later devices, often
found on motherboards.
3. The NV_SAT
And are you seeing any error messages in '/var/adm/messages'
indicating any failure on the disk controller card?
If so, please post a sample back here to the forum.
--
an example of this sort of information
for a different hard disk controller card:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-September/003399.html
Regards
Nigel Smith
--
olaris.org/pipermail/onnv-notify/2007-October/012782.html
Regards
Nigel Smith
--
that, then maybe I will not experience the problem until I
upgrade to snv_70 or later.
Regards,
Nigel Smith
--
Please can you provide the source code for your test app.
I would like to see if I can reproduce this 'crash'.
Thanks
Nigel
--
1475-3232017.html
It has a similar 12+2 drive bay arrangement, and I believe
HP do support Solaris and have drivers for their disk interface cards.
Regards
Nigel Smith
--
please report success or failure
back to this forum, and on the 'Storage-discuss' forum where these sorts
of questions are more usually discussed.
Thanks
Nigel Smith
--
Richard, thanks for the pointer to the tests in '/usr/sunvts', as this
is the first I have heard of them. They look quite comprehensive.
I will give them a trial when I have some free time.
Thanks
Nigel Smith
pmemtest - Physical Memory Test
ramtest - Memory DIMMs
Yes, I'm not surprised. I thought it would be a RAM problem.
I always recommend a 'memtest' on any new hardware.
Murphy's law predicts that you only have RAM problems
on PCs that you don't test!
Regards
Nigel Smith
--
chipset and hence driver you
are using to connect the sata drives. I would guess it's the AHCI driver.
See this link to see how I answered this question for my system:
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-May/040562.html
Regards
Nigel Smith
--
You can see the status of bug here:
http://bugs.opensolaris.org/view_bug.do?bug_id=6566207
Unfortunately, it's showing no progress since 20th June.
This fix really could do with being in place for S10u4 and snv_70.
Thanks
Nigel Smith
--
he tab key to expand the path)
My sata drive is using the 'ahci' driver, connecting to the
ICH7 chipset on the motherboard.
And I have a scsi drive on an Adaptec card, plugged into a PCI slot.
Thanks
Nigel Smith
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d
I seem to have got the same core dump, in a different way.
I had a zpool setup on an iscsi 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/001162.html
But after a reboot the iscsi target was no longer available, so the iscsi
initiator could not provide the d