Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Richard Elling
Anton B. Rang wrote: > Some RAID systems compare checksums on reads, though this is usually only for > RAID-4 configurations (e.g. DataDirect) because of the performance hit > otherwise. > For the record, Solaris had a (mirrored) RAID system which would compare data from both sides of the mir

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Richard Elling
Anton B. Rang wrote: > I find it naïve to imagine that Sun customers "expect" their UFS (or other) > file systems to be unrecoverable. OK, I'll bite. If we believe the disk vendors who rate their disks as having an unrecoverable error rate of 1 bit per 10^14 bits read, and knowing that UFS has
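As a rough illustration of what that vendor rating implies (assuming a hypothetical 500 GB drive read end to end):

    500 GB ~= 4 x 10^12 bits per full pass
    4 x 10^12 bits / 10^14 bits-per-error = 0.04
    => on the order of a 4% chance of hitting at least one unrecoverable
       read error on every complete pass over the disk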

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Anton B. Rang
Some RAID systems compare checksums on reads, though this is usually only for RAID-4 configurations (e.g. DataDirect) because of the performance hit otherwise. End-to-end checksums are not yet common. The SCSI committee recently ratified T10 DIF, which allows either an operating system or appli
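For reference, a sketch of the sector layout T10 DIF defines (tag semantics vary by protection type): 8 bytes of protection information are appended to each 512-byte logical block.

    512 bytes data | guard tag (2 bytes, CRC of the data)
                   | application tag (2 bytes, opaque to the drive)
                   | reference tag (4 bytes, typically the low 32 bits of the LBA)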

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Anton B. Rang
I wasn't joking, though as is well known, the plural of anecdote is not data. Both UFS and ZFS, in common with all file systems, have design flaws and bugs. To lose an entire UFS file system (barring the loss of the entire underlying storage) requires a great deal of corruption; there are multipl

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Bob Friesenhahn
On Sat, 13 Dec 2008, Joseph Zhou wrote: > > In that spirit, and looking at the NetApp virtual server support > architecture, I would say -- > as much as the ONTAP/WAFL thing (even with GX integration) is elegant, it > would make more sense to utilize the file system capabilities with kernel > in

Re: [zfs-discuss] zpool mirror creation after non-mirrored zpool is set up

2008-12-13 Thread Jeff Bonwick
On Sat, Dec 13, 2008 at 04:44:10PM -0800, Mark Dornfeld wrote: > I have installed Solaris 10 on a ZFS filesystem that is not mirrored. Since I > have an identical disk in the machine, I'd like to add that disk to the > existing pool as a mirror. Can this be done, and if so, how do I do it? Yes:
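A sketch of the usual sequence, with hypothetical device names (c1t0d0s0 holding the existing root, c1t1d0s0 the new disk):

    zpool attach rpool c1t0d0s0 c1t1d0s0
    zpool status rpool        # wait for the resilver to complete

On a root pool the new disk also needs boot blocks before it can boot the system, e.g. on x86:

    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0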

[zfs-discuss] zpool mirror creation after non-mirrored zpool is set up

2008-12-13 Thread Mark Dornfeld
I have installed Solaris 10 on a ZFS filesystem that is not mirrored. Since I have an identical disk in the machine, I'd like to add that disk to the existing pool as a mirror. Can this be done, and if so, how do I do it? Thanks -- This message posted from opensolaris.org

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Joseph Zhou
Hi Bob, Tim, Jeff, you are all my friends, and you all know what you are talking about. As a friend, and trusting your personal integrity, I ask you, please, don't get mad, enjoy the open discussion. (ok, ok, O(N) is revolutionary in tech thinking, just not revolutionary in end customer value.

Re: [zfs-discuss] ZFS as a Gateway for a storage network

2008-12-13 Thread Bob Friesenhahn
On Sat, 13 Dec 2008, Dak wrote: > What do you think about this architecture? Could the gateway be a > bottleneck? Do you have any other ideas or recommendations? You will need to have redundancy somewhere to avoid possible data loss. If redundancy is in the backend, then you should be protecte
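One sketch of putting that redundancy at the gateway: have each node export a LUN over iSCSI and mirror across nodes in a single pool (device names hypothetical):

    zpool create backup \
        mirror c2t0d0 c3t0d0 \
        mirror c4t0d0 c5t0d0

With 2.5 TB per node that yields roughly 5 TB usable, and the pool survives the loss of one node in each mirror pair.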

Re: [zfs-discuss] ZFS as a Gateway for a storage network

2008-12-13 Thread Dave
Dak wrote: > Hi together, > Currently I am planning a storage network for making backups of several > servers. At the moment there are several dedicated backup servers for it: 4 > nodes; each node is providing 2.5 TB disk space and exporting it with CIFS > over Ethernet/1 GBIT. Unfortunately this

[zfs-discuss] ZFS as a Gateway for a storage network

2008-12-13 Thread Dak
Hi together, Currently I am planning a storage network for making backups of several servers. At the moment there are several dedicated backup servers for it: 4 nodes; each node is providing 2.5 TB disk space and exporting it with CIFS over Ethernet/1 GBIT. Unfortunately this is not a very flexib

[zfs-discuss] [Fwd: Re: [indiana-discuss] build 100 image-update: cannot boot to previous BEs]

2008-12-13 Thread Sebastien Roy
zfs folks, I sent the following to indiana-disc...@opensolaris.org, but perhaps someone here can get to the bottom of this. Why must zfs trash my system so often with this hostid nonsense? How do I recover from this situation? (I have no OpenSolaris boot CD with me at the moment, so zpool impor
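For what it's worth, the usual way out of a hostid mismatch (assuming boot media can be found) is to force the import, which rewrites the hostid recorded in the pool labels:

    zpool import -f rpool

after which the previous boot environments should be selectable again.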

Re: [zfs-discuss] help please - The pool metadata is corrupted

2008-12-13 Thread Bob Friesenhahn
On Sat, 13 Dec 2008, Brett wrote: > > I will just say though that there is something in zfs which caused > this in the first place as when I first replaced the faulty sata > controller, only 1 of the 4 disks showed the incorrect size in > format but then as I messed around trying to zpool export

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Bob Friesenhahn
On Sat, 13 Dec 2008, Tim wrote: > > Seriously? Do you know anything about the NetApp platform? I'm hoping this > is a genuine question... I believe that esteemed Sun engineers like Jeff are quite familiar with the NetApp platform. Besides NetApp being one of the primary storage competitors, i

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Bryan Cantrill
> Seriously? Do you know anything about the NetApp platform? I'm hoping this > is a genuine question... > > Off the top of my head nearly all of them. Some of them have artificial > limitations because they learned the hard way that if you give customers > enough rope they'll hang themselves.

Re: [zfs-discuss] help please - The pool metadata is corrupted

2008-12-13 Thread Brett
Well after a couple of weeks of beating my head, I finally got my data back, so I thought I would post the process that recovered it. I ran the Samsung estool utility, ran auto-scan, and for each disk that was showing the wrong physical size I:
- chose set max address
- chose recover native size
After th
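Once the drives reported their native capacity again, the pool would normally become importable with something along these lines (pool name hypothetical):

    zpool import              # list pools found on the repaired disks
    zpool import -f tank
    zpool scrub tank          # verify checksums after the recovery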

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Jeff Bonwick
> Off the top of my head nearly all of them. Some of them have artificial > limitations because they learned the hard way that if you give customers > enough rope they'll hang themselves. For instance "unlimited snapshots". Oh, that's precious! It's not an arbitrary limit, it's a safety feature

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Tim
On Fri, Dec 12, 2008 at 8:16 PM, Jeff Bonwick wrote: > > I'm going to pitch in here as devil's advocate and say this is hardly > > revolution. 99% of what zfs is attempting to do is something NetApp and > > WAFL have been doing for 15 years+. Regardless of the merits of their > > patents and pr

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-13 Thread Joseph Zhou
Richard, I have been glancing through the posts and saw more hardware RAID vs ZFS discussion, some of it very useful. However, as you advised me the other day, we should think about the overall solution architecture, not just the feature itself. I believe the spirit of ZFS snapshot is more significant t