On Mar 11, 2010, at 10:02 PM, Tonmaus wrote:
> Hi,
> thanks for sharing.
> Is your LSI card running in IT or IR mode? I had some issues getting all
> drives connected in IR mode which is the factory default of the LSI branded
> cards.
> I am also curious why your controller shows up as "c11". Does anybody
> know more about the way this is enumerated?
Hi,
My zpool is reporting unrecoverable errors with the metadata:
pool: rpool2
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
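When a pool reports this, "zpool status -v" lists the individual files with
permanent errors (damage to pool metadata shows up as <metadata>:<0x...>
entries rather than file paths). A sketch, using the pool name above:

# zpool status -v rpool2
# zpool clear rpool2
(clears the error counters once the affected files are restored or deleted)
# zpool scrub rpool2
(re-verifies the whole pool afterwards)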
Hi,
I suspect mine are already in IT mode... not sure how to confirm that,
though... I have had no issues.
My controller is showing as c8... odd, isn't it? It's in the x16 PCIe slot
at the moment... I am not sure how it gets the number...
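For what it's worth, the cN numbers are logical controller instances assigned
at device discovery (recorded in /etc/path_to_inst), so they follow probe
order rather than slot position. A sketch of how to see what sits behind a
given number (the c8 names are just this thread's example):

# ls -l /dev/cfg
(each cN attachment point is a symlink to the physical HBA path)
# ls -l /dev/dsk/c8t0d0s0
(the symlink target under /devices is the disk's physical path)
# grep mpt /etc/path_to_inst
(instance numbers assigned to the LSI mpt driver)

As for IT vs IR: LSI's lsiutil, if you have it installed, reports the
firmware type of each mpt controller.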
Hi,
thanks for sharing.
Is your LSI card running in IT or IR mode? I had some issues getting all drives
connected in IR mode which is the factory default of the LSI branded cards.
I am also curious why your controller shows up as "c11". Does anybody know
more about the way this is enumerated?
Hi,
We are using Solaris 10 update 7 with the ZFS file system, and are using the
machine for an Informix DB.
Solaris patch level: Generic_142900-02 (Dec 09 patch cluster release)
Informix DB version: 11.5FC6
We are facing an issue while enabling DIRECT_IO for the DB chunks.
The error message which appears
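One assumption worth checking, since the error message is cut off above: ZFS
does not implement directio(3C), so DIRECT_IO settings that work for chunks
on UFS will fail on ZFS datasets. To confirm which filesystem type the chunks
actually live on (the path is a placeholder):

# df -n /informix/chunks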
On Friday 12 March 2010 12:02 PM, Erik Trimble wrote:
In general, I would heartily agree with Russ, in that the 8-port
LSI-based PCI-E cards are very, very well worth the price. I'm a
satisfied user of the Marvell-based PCI-X cards, too (at least, since
the 2009.06 release).
That all said,
Russ Price wrote:
>> Can you tell us the build version of OpenSolaris?
> I'm currently on b134 (but I had the performance issues with 2009.06, b130,
> b131, b132, and b133 as well).
> I may end up swapping the Phenom II X2 550 with an Athlon II X4 630 that
> I've put into another M4A785-M system.
Thanks Erik, I will try it, but the new question is that root on the
NFS server is mapped to nobody at the NFS client.
For this issue, I set up a new test NFS server and NFS client, and with the
same options, in this test environment the file owner is mapped correctly,
which confuses me.
Thanks
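Two usual suspects when root maps to nobody, offered as guesses since the
share options aren't shown here: NFS root squashing (root stays nobody unless
the share grants it) and, with NFSv4, a mismatched mapid domain between
client and server. A sketch (client1 and tank/export are placeholders):

# zfs set sharenfs='rw,root=client1' tank/export
# grep NFSMAPID_DOMAIN /etc/default/nfs
(both ends must agree on the NFSv4 domain)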
> Can you tell us the build version of OpenSolaris?
I'm currently on b134 (but I had the performance issues with 2009.06, b130,
b131, b132, and b133 as well).
I may end up swapping the Phenom II X2 550 with an Athlon II X4 630 that I've
put into another M4A785-M system. I noticed that the
pantzer5 wrote:
>
> > These days I am a fan of forward-check access lists, because anyone who
> > owns a DNS server can say that IPAddressX returns aserver.google.com.
> > They cannot set the forward lookup outside of their domain, but they can
> > set up a reverse lookup. The other adv
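What's described above is forward-confirmed reverse DNS: resolve the address
to a name, then resolve the name back, and only trust it if the original
address comes back. A sketch (192.0.2.10 is a documentation address, the
hostname is the example from the quote):

# nslookup 192.0.2.10
(reverse lookup: who does the address claim to be?)
# nslookup aserver.google.com
(forward confirmation: trust the name only if it resolves back to
192.0.2.10, since an attacker controls their own reverse zone but not the
google.com forward zone)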
Glad you got it humming!
I got my (2x) 8 port LSI cards from here for $130USD...
http://cgi.ebay.com/BRAND-NEW-SUPERMICRO-AOC-USASLP-L8I-UIO-SAS-RAID_W0QQitemZ280397639429QQcmdZViewItemQQptZLH_DefaultDomain_0?hash=item4149006f05
Works perfectly.
Hi,
Thank you for sharing it. Seems like it's cheaper than the HBA from
LSI, isn't it?
Can you tell us the build version of OpenSolaris?
best regards,
hanzhu
On Fri, Mar 12, 2010 at 8:52 AM, Russ Price wrote:
> I had recently started setting up a homegrown OpenSolaris NAS with a large
> RAIDZ2 pool
On Mar 10, 2010, at 4:18 PM, Chris Banal wrote:
> What is the best way to tell if you're bound by the number of individual
> operations per second / random I/O?
If no other resource is the bottleneck :-)
> "zpool iostat" has an "operations" column but this doesn't really tell me if
> my disks are
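One rough way to tell, as a sketch: watch per-device latency and utilization
with iostat while the workload runs.

# iostat -xn 5
(look at asvc_t, actv, and %b; if %b is high while kr/s and kw/s stay small
relative to r/s and w/s, the disks are doing many small I/Os, i.e. you are
paying per operation rather than per byte)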
On Thu, Mar 11, 2010 at 02:00:41AM -0800, Svein Skogen wrote:
> I can't help but keep wondering if not some sort of FEC wrapper
> (optional of course) might solve both the "backup" and some of the
> long-distance-transfer (where retransmissions really aren't wanted)
> issues.
Retransmissions aren
I had recently started setting up a homegrown OpenSolaris NAS with a large
RAIDZ2 pool, and had found its RAIDZ2 performance severely lacking - more like
downright atrocious. As originally set up:
* Asus M4A785-M motherboard
* Phenom II X2 550 Black CPU
* JMB363-based PCIe X1 SATA card (2 ports)
On Mar 11, 2010, at 2:00 AM, Svein Skogen wrote:
> I can't help but keep wondering if not some sort of FEC wrapper (optional of
> course) might solve both the "backup" and some of the long-distance-transfer
> (where retransmissions really aren't wanted) issues.
I don't think retransmissions of b
On Mar 11, 2010, at 7:49 AM, R.G. Keen wrote:
>> I think ZFS has no specific mechanisms in respect to
>> RAM integrity. It will just count on a healthy and
>> robust foundation for any component in the machine.
> I'd really like to understand what OS does with respect to ECC. Anyone who
> does un
On Mar 11, 2010, at 12:31 PM, Andrew wrote:
> Hi Ross,
> Ok, as a Solaris newbie I'm going to need your help.
> format produces the following:
> c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126)
> /p...@0,0/pci15ad,1...@10/s...@4,0
> What dd command do I need to run to reference this disk?
Hi David,
In general, an I/O error means that slice 0 doesn't exist
or some other problem exists with the disk.
The installgrub command is like this:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
Thanks,
Cindy
On 03/11/10 15:45, David L Kensiski wrote:
At Wed, 10
At Wed, 10 Mar 2010 15:28:40 -0800, Cindy Swearingen wrote:
> Hey list,
>
> Grant says his system is hanging after the zpool replace on a v240,
> running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.
> No errors from zpool replace, so it sounds like the disk was physically
> replaced
Erik Trimble writes:
[...]
> The Warning only applies to this circumstance: if you've upgraded
> from an older build, then upgrading the zpool /may/ mean that you will
> NOT be able to reboot to the OLDER build and still read the
> now-upgraded zpool.
Lots of good details snipped... thanks for
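Before committing to an upgrade, these show where a pool stands (rpool is a
placeholder):

# zpool upgrade
(lists pools running on-disk versions older than the current build supports)
# zpool upgrade -v
(describes what each version adds)
# zpool get version rpool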
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
> We have an IMAP e-mail server running on a Solaris 10 10/09 system.
> It uses six ZFS filesystems built on a single zpool with 14 daily
> snapshots. Every day at 11:56, a cron command destroys the oldest
> snapshots and creates new ones
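A minimal sketch of that kind of rotation for one filesystem (names are
placeholders; the real cron job presumably loops over all six filesystems):

#!/bin/sh
FS=tank/imap
OLDEST=`zfs list -H -t snapshot -o name -s creation | grep "^$FS@" | head -1`
[ -n "$OLDEST" ] && zfs destroy "$OLDEST"
zfs snapshot "$FS@`date +%Y%m%d`"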
Hi Ross,
Ok, as a Solaris newbie I'm going to need your help.
format produces the following:
c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126)
/p...@0,0/pci15ad,1...@10/s...@4,0
What dd command do I need to run to reference this disk? I've tried
/dev/rdsk/c8t4d0 and /dev/dsk
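dd wants a slice or partition device node rather than the bare cXtYdZ name;
on x86, p0 covers the whole disk and s2 is the traditional backup slice. A
sketch using the disk from the format output above:

# dd if=/dev/rdsk/c8t4d0p0 of=/dev/null bs=1024k count=1
# dd if=/dev/rdsk/c8t4d0s2 of=/dev/null bs=1024k count=1
(the second form assumes the disk carries an SMI label)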
Robert,
That's great info.
Do you know how you can check the number of CORRECTED errors by ECC in
OpenSolaris?
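The fault management framework is the place to look, as a sketch:

# fmstat
(per-module counts of error events handled)
# fmdump -e
(the error log; corrected memory errors appear here as ereport events)
# fmdump -eV
(full detail for each event)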
On 11/03/2010 15:49, R.G. Keen wrote:
>> I think ZFS has no specific mechanisms in respect to RAM integrity. It
>> will just count on a healthy and robust foundation for any component in
>> the machine.
> I'd really like to understand what OS does with respect to ECC. Anyone who
> does understand the
Hi Ross,
Thanks for your advice.
I've tried presenting as Virtual and Physical but sadly to no avail. I'm
guessing if it was going to work then a quick zpool import or zpool status
should at the very least show me the "data" pool that's gone missing.
The RDM is from a FC SAN so unfortunately I can't
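At that point it is worth checking whether the pool is visible for import at
all. A sketch ("data" is the pool name mentioned above):

# zpool import
(lists exported or orphaned pools the system can see)
# zpool import -d /dev/dsk
(rescans a device directory, in case the RDM came back under a new path)
# zpool import -f data
(-f is only needed if the pool was last in use on another host)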
> I'd really like to understand what OS does with
> respect to ECC.
In information technology, ECC (Error Correction Code; the Wikipedia article
is worth reading) normally protects point-to-point "channels". Hence, this is
entirely a "hardware" thing here.
Regards,
Tonmaus
> I think ZFS has no specific mechanisms in respect to
> RAM integrity. It will just count on a healthy and
> robust foundation for any component in the machine.
I'd really like to understand what OS does with respect to ECC. Anyone who
does understand the internal operation and can comment would
On Thu, 11 Mar 2010, Lars-Gunnar Persson wrote:
> Is it possible to convert a rz2 array to a rz1 array? I have a pool with
> two rz2 arrays. I would like to convert them to rz1. Would that be
> possible?
No, you'll have to create a second pool with raidz1 and do a "send | recv"
operation to copy the data across.
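A sketch of that migration, assuming a build with "zfs send -R" and enough
spare disks to build the new raidz1 pool alongside the old one (pool names
are placeholders):

# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs recv -F -d newpool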
On Mar 11, 2010, at 8:27 AM, Andrew wrote:
> Ok,
> The fault appears to have occurred regardless of the attempts to
> move to vSphere as we've now moved the host back to ESX 3.5 from
> whence it came and the problem still exists.
> Looks to me like the fault occurred as a result of a reboot.
> Any help and advice would be greatly appreciated.
Is it possible to convert a rz2 array to a rz1 array?
I have a pool with two rz2 arrays. I would like to convert them to rz1. Would
that be possible?
If not, is it OK to remove one disk from a rz2 array and just let the array
keep running with one disk missing?
Regards,
Lars-Gunnar Persson
> Is the nature of the scrub that it walks through
> memory doing read/write/read and looking at the ECC
> reply in hardware?
I think ZFS has no specific mechanisms in respect to RAM integrity. It will
just count on a healthy and robust foundation for any component in the machine.
As far as I un
Ok,
The fault appears to have occurred regardless of the attempts to move to
vSphere as we've now moved the host back to ESX 3.5 from whence it came and the
problem still exists.
Looks to me like the fault occurred as a result of a reboot.
Any help and advice would be greatly appreciated.
http://bugs.opensolaris.org/
Entering 6846560 there will get you 3 BugIDs for this sort of support. But it
will come later in the year, and I don't know whether the 1068E HBA will work
automatically or not, since the solution will be targeted at SAS 2.0 HBA
cards.
/T. Paul
On 03/11/10 10:02 AM, Brian Xu
Hi All,
We recently upgraded our Solaris 10 servers from ESX 3.5 to vSphere and in the
process, the zpools appeared to become FAULTED even though we did not touch the
OS.
We detached the Physical RDM (1TB) from the Virtual Machine and attached it
to another identical Virtual Machine to see if that
I can't help but keep wondering if not some sort of FEC wrapper (optional of
course) might solve both the "backup" and some of the long-distance-transfer
(where retransmissions really aren't wanted) issues.
The reason I'm saying long-distance is that this is where latency-on-the-link
starts rearing its
To whom it may concern...
Those who subscribe to this list in digest format
are receiving upwards of ten (10) digest mailings a day.
This is not standard list practice for digests, which are
commonly sent/expected once a day.
No apparent message overlap, omission, or repetition.
Seems like ran
On Thu, Mar 11, 2010 at 07:23:43PM +1100, Daniel Carosone wrote:
> You have reminded me to go back and look again, and either find that
> whatever issue was at fault last time was transient and now gone, or
> determine what it actually was and get it resolved.
>
> In case you want to: http://allm
On Wed, Mar 10, 2010 at 02:54:18PM +0100, Svein Skogen wrote:
> Are there any good options for encapsulating/decapsulating a zfs send
> stream inside FEC (Forward Error Correction)? This could prove very
> useful both for backup purposes, and for long-haul transmissions.
I used par2 for this for s
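For the record, a sketch of the par2 approach (the 10% redundancy figure is
arbitrary): dump the stream to a file, create recovery blocks, and verify or
repair the file after transfer, before feeding it to zfs recv.

# zfs send tank@backup > /backup/tank.zfs
# par2 create -r10 /backup/tank.zfs
# par2 verify /backup/tank.zfs.par2
# par2 repair /backup/tank.zfs.par2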