Greg Mason wrote:
Thanks for the link, Richard.
I guess the next question is, how safe would it be to run snv_114 in
production? Running something that would be technically "unsupported"
makes a few folks here understandably nervous...
You mentioned you run Linux clients. Are they all under a
I've been trying to get either smartctl or sg3_utils to report properly.
They both have the same low-level problems, which leads me to suspect
that either I'm doing something wrong or there is a problem in the
marvell88sx / sd / SATA framework.
I can access drive name/serial number of all driv
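For anyone trying to reproduce this, a sketch of the queries involved,
assuming smartmontools with SCSI/ATA translation plus sg3_utils (the
device path is hypothetical):
$ smartctl -i -d sat /dev/rdsk/c1t0d0
$ smartctl -a -d sat /dev/rdsk/c1t0d0
$ sg_inq /dev/rdsk/c1t0d0
The -i form should return the model/serial identity page; -a pulls the
full SMART attributes and error logs through the same passthrough.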
You're right - in my company (a very big one) we just stumbled across this as
well and we're strongly considering not using ZFS because of it.
It's easy to type zpool add when you meant zpool replace - and then you can go
rebuild your box because it was the root pool. Nice.
At the very least, "
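For anyone who hasn't been bitten yet, the two commands look alike but
behave very differently (pool and device names hypothetical). The first
swaps a failing disk for a new one and resilvers onto it:
# zpool replace rpool c1t2d0 c1t3d0
The second permanently grafts a new top-level vdev onto the pool, with
no way to undo it:
# zpool add rpool c1t3d0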
I don't know how relevant this is to you on Nexenta, but I can tell you that
the driver support for that card improved tremendously with OpenSolaris
2008.11. All of our hot-swap problems went away with that release, but the
change wasn't documented anywhere that I could see.
It might be worth s
Hello.
I'm trying to do "zfs send -R" from an S10 U6 SPARC system to a Solaris 10 U7 SPARC
system. The filesystem in question is running version 1.
Here's what I did:
$ fs=data/oracle ; snap=transfer.hot-b ; sudo zfs send -R $fs@$snap |
sudo rsh winds07-bge0 "zfs create rpool/trans/winds00r/${fs%%/
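The pipeline above is cut off; a minimal sketch of its usual shape,
assuming the receiving side ends with zfs receive -d (the target
dataset is hypothetical):
$ sudo zfs send -R $fs@$snap |
    sudo rsh winds07-bge0 "zfs receive -d rpool/trans/winds00r"
With -R the stream carries all descendant filesystems, snapshots, and
properties; -d recreates them under the given target using the sent names.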
Writes using the character interface (/dev/zvol/rdsk) are synchronous.
If you want caching, you can go through the block interface
(/dev/zvol/dsk) instead.
- Eric
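A quick way to see the difference (pool and volume names hypothetical).
Writes through the character device complete only once on stable storage:
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol01 bs=128k count=100
The same writes through the block device can be absorbed by the cache:
# dd if=/dev/zero of=/dev/zvol/dsk/tank/vol01 bs=128k count=100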
Thanks for the great tips. I did some more testing and
indeed it was a version issue. The pool was created under:
# zpool upgrade
This system is currently running ZFS version 14.
whereas I tried it on systems with versions 10 and 12.
It could be imported on a newer system using the -f option.
I suppos
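A quick way to spot this kind of mismatch before moving a pool or a
stream (pool and filesystem names hypothetical):
$ zpool get version tank          # on-disk format of the pool
$ zpool upgrade -v                # versions this build understands
$ zfs get version tank/fs         # filesystem version, tracked separately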
Worked great during test jumpstart, thanks!
I know this topic has been discussed many times... but what the hell
makes zpool resilvering so slow? I'm running OpenSolaris 2009.06.
I have had a large number of problematic disks due to a bad production
batch, leading me to resilver quite a few times, progressively
replacing each disk as
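When digging into this, it helps to watch the progress numbers; the
scrub line of zpool status reports percent done and an estimated time
to completion (pool name hypothetical):
$ zpool status tank | grep scrub
$ while :; do zpool status tank | grep scrub; sleep 60; done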
Richard Elling wrote:
There are many error-correcting codes available. RAID2 used Hamming
codes, but that's just one of many options out there. Par2 uses
configurable-strength Reed-Solomon to get multi-bit error
correction. The par2 source is available, although from a ZFS
perspective is hi
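As a concrete example of the configurable strength, par2cmdline takes a
redundancy percentage (file names hypothetical):
$ par2 create -r10 data.par2 data.tar
$ par2 verify data.par2
$ par2 repair data.par2
At -r10, roughly ten percent of the file size is spent on Reed-Solomon
recovery blocks, and repair can rebuild any damage up to that amount.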