BTW, the following text from another discussion may be helpful for your
concerns.
There is no single answer to what to use for RAID, but ZFS can be a good
choice in many cases and for many reasons, such as the price/performance
concern Bob highlighted.
And note Bob said "client OSs". To me, that s
The hyperlinks didn't work; here are the URLs --
http://queue.acm.org/detail.cfm?id=1317400
http://www.sun.com/bigadmin/features/articles/zfs_part1.scalable.jsp#integrity
- Original Message -
From: "JZ"
To: "Orvar Korvar" ;
Sent: Sunday, December 28, 2008 7:50 PM
Subject: Re: [zfs-
Nice discussion. Let me chip in my old-timer view --
Until a few years ago, the understanding that "HW RAID doesn't proactively
check consistency of data vs. parity unless required" was true. But LSI has
since added a background consistency check (it starts automatically 5 minutes
after the drive is created) on i
Thanks for the input. Since I have no interest in multibooting (VirtualBox will
suit my needs), I created a 10GB partition on my 500GB drive for OpenSolaris
and reserved the rest for files (130GB worth).
After installing the OS and fdisking the rest of the space to Solaris2, I
created a zpool c
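(For anyone following the same route, a minimal sketch of that step. The
device name and the pool name "space" are made up -- use whatever device your
setup actually exposes for that Solaris2 partition, as reported by
format(1M)/fdisk:)

$ pfexec zpool create space c3d0p2     # build the pool on the second fdisk partition (placeholder device)
$ zpool list space                     # sanity check: expected size and ONLINE health
$ pfexec zfs create space/files        # optional dataset for the ~130GB of files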
I got this in the log file system-filesystem-zfs-auto-snapshot:daily.log:
...
[ Dez 28 23:13:44 Enabled. ]
[ Dez 28 23:13:53 Executing start method ("/lib/svc/method/zfs-auto-snapshot
start"). ]
Checking for non-recursive missed // snapshots rpool
Checking for recursive missed // snapshots home rpool
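(In case it helps anyone debugging the same thing: those messages come from
the SMF method script, so the usual SMF tools apply. A quick sketch, assuming
the standard OpenSolaris auto-snapshot service names:)

$ svcs -a | grep auto-snapshot                   # list the auto-snapshot instances and their state
$ svcs -xv svc:/system/filesystem/zfs/auto-snapshot:daily    # explain why an instance is offline/in maintenance
$ tail /var/svc/log/system-filesystem-zfs-auto-snapshot:daily.log   # the log file quoted above
$ pfexec svcadm clear svc:/system/filesystem/zfs/auto-snapshot:daily  # clear maintenance once the cause is fixed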
So we have roughly 700 OpenSolaris snv_81 boxes out in the field. We're
looking to upgrade them all soon, probably to OpenSolaris 2008.11 or the latest
snv_10x build. Currently all boxes have a single 80GB HD (these are small
appliance-type devices, so we can't add a second hard drive). What we'
Bob Friesenhahn wrote:
> On Sun, 28 Dec 2008, Robert Bauer wrote:
>
>> It would be nice if gnome could notify me automatically when one of
>> my zpools is degraded or if any kind of ZFS error occurs.
>
> Yes. It is a weird failing of Solaris to have an advanced fault
> detection system withou
On Sun, 28 Dec 2008, Robert Bauer wrote:
> It would be nice if gnome could notify me automatically when one of
> my zpools is degraded or if any kind of ZFS error occurs.
Yes. It is a weird failing of Solaris to have an advanced fault
detection system without a useful reporting mechanism.
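Until there is proper desktop integration, a crude stopgap is a cron job
around 'zpool status -x', which prints "all pools are healthy" when nothing is
wrong. A rough sketch (the script name is hypothetical, and the mail step is
just a placeholder -- substitute logger, a GNOME popup, or whatever you
prefer):

#!/bin/sh
# check-zpool-health.sh -- hypothetical helper, run from cron every few minutes
STATUS=`zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]; then
        # something is DEGRADED/FAULTED; push the details somewhere visible
        echo "$STATUS" | mailx -s "zpool problem on `hostname`" root
fi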
>
I just saw by chance that one of my zpools is degraded!:
$ zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH    ALTROOT
home   97,5G   773M   96,7G   0%  ONLINE    -
rpool  10,6G  7,78G   2,85G  73%  DEGRADED  -
It would be nice if gnome could notify me automatically when one of my zpools
is degr
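('zpool list' only gives the one-line summary; 'zpool status' on the degraded
pool shows which device failed and why. Something along these lines, with the
device names below as placeholders:)

$ zpool status -v rpool          # per-device state, error counters, and any files with errors
$ pfexec zpool clear rpool       # reset the error counters after a transient fault
$ pfexec zpool replace rpool <bad-device> <new-device>   # if a disk really died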
This is good information, guys. Do we have more facts and links about HW RAID
and its data integrity, or lack thereof?
On Sat, Dec 27, 2008 at 3:24 PM, Miles Nordin wrote:
> > "t" == Tim writes:
>
> t> couldn't you simply do a detach before removing the disk, and
> t> do a re-attach everytime you wanted to re-mirror?
>
> no, for two reasons. First, when you detach a disk, ZFS writes
> something to
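(For anyone skimming the archive, the command-level difference being
discussed, with "tank" and c1t0d0/c1t1d0 as placeholder pool/disk names:)

$ pfexec zpool offline tank c1t1d0   # temporarily take one side of the mirror out; the pool keeps tracking it
$ pfexec zpool online tank c1t1d0    # bring it back; only data written while it was out gets resilvered
$ pfexec zpool detach tank c1t1d0    # permanently drop it from the mirror (what Miles is cautioning about)
$ pfexec zpool attach tank c1t0d0 c1t1d0   # re-attach later, which means a full resilver of that disk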
Hi Bob,
Bob Friesenhahn wrote:
>> AFAIK this is not done during normal operation (unless a disk asked
>> for a sector cannot return that sector).
>
> ZFS checksum validates all returned data. Are you saying that this fact
> is incorrect?
>
No sorry, too long in front of a computer today I gu
On Sun, 28 Dec 2008, Carsten Aulbert wrote:
>> ZFS does check the data correctness (at the CPU) for each read while
>> HW raid depends on the hardware detecting a problem, and even if the
>> data is ok when read from disk, it may be corrupted by the time it
>> makes it to the CPU.
>
> AFAIK this is
Hi all,
Bob Friesenhahn wrote:
> My understanding is that ordinary HW raid does not check data
> correctness. If the hardware reports failure to successfully read a
> block, then a simple algorithm is used to (hopefully) re-create the
> lost data based on data from other disks. The difference
On Sun, 28 Dec 2008, Orvar Korvar wrote:
> On a Linux forum, I've spoken about ZFS end-to-end data integrity. I
> wrote things like "upon writing data to disc, ZFS reads it back and
> compares it to the data in RAM and corrects it otherwise". I also wrote
> that ordinary HW RAID doesn't do this check.
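A concrete way to see the difference is a scrub: ZFS checksum-verifies every
block it reads anyway, and a scrub forces it to read and verify everything in
the pool, repairing from redundancy where it can. A sketch, with "home"
standing in for whatever pool you care about:

$ pfexec zpool scrub home    # read and checksum-verify every allocated block
$ zpool status home          # shows scrub progress/result and per-device CKSUM error counts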
Hi,
System: Netra 1405, 4x450Mhz, 4GB RAM and 2x146GB (root pool) and
2x146GB (space pool). snv_98.
After a panic the system hangs on boot and manual attempts to mount
(at least) one dataset in single user mode, hangs.
The Panic:
Dec 27 04:42:11 base ^Mpanic[cpu0]/thread=300021c1a20:
Dec 27
On a Linux forum, I've spoken about ZFS end-to-end data integrity. I wrote
things like "upon writing data to disc, ZFS reads it back and compares it to
the data in RAM and corrects it otherwise". I also wrote that ordinary HW RAID
doesn't do this check. After a heated discussion, I now start to wonder
On Sun, 28 Dec 2008 15:27:00 +0100, dick hoogendijk
wrote:
>On Sat, 27 Dec 2008 14:29:58 PST
>Ross wrote:
>
>> All of which sound like good reasons to use send/receive and a 2nd
>> zfs pool instead of mirroring.
>>
>> Send/receive has the advantage that the receiving filesystem is
>> guaranteed
> > Send/receive has the advantage that the receiving filesystem is
> > guaranteed to be in a stable state.
>
> Can send/receive be used on a multiuser running server system?
Yes.
> Will this slow down the services on the server much?
"Depends". On a modern box with good disk layout it should
On Sat, 27 Dec 2008 14:29:58 PST
Ross wrote:
> All of which sound like good reasons to use send/receive and a 2nd
> zfs pool instead of mirroring.
>
> Send/receive has the advantage that the receiving filesystem is
> guaranteed to be in a stable state.
Can send/receive be used on a multiuser ru