Eric Haycraft wrote:
Since no one seems to believe that you can expand a raidz pool, I have attached the following output from Solaris 11/06 showing me doing just that. The first expansion is with like-sized disks, and the second expansion is with larger disks. I realize that the documentation o
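(The attached output itself is not reproduced here. As a rough sketch of the two operations being described, with hypothetical pool and device names: the first kind of expansion adds another raidz vdev of like-sized disks to the pool, and the second replaces each member of a raidz vdev with a larger disk, one at a time.)
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   # original 3-disk raidz
# zpool add tank raidz c1t3d0 c1t4d0 c1t5d0      # expansion 1: add a second raidz vdev
# zpool replace tank c1t0d0 c2t0d0               # expansion 2: swap in larger disks,
# zpool replace tank c1t1d0 c2t1d0               #   one at a time
# zpool replace tank c1t2d0 c2t2d0
# zpool list tank                                # capacity grows once the last member is
                                                 #   replaced (older releases may need an
                                                 #   export/import to see it)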
Richard Elling wrote:
warning: noun/verb overload. In my context, swap is a verb.
It is also a common shorthand for "swap space."
--
--Ed
Consider the following scenario involving various failures.
We have a zpool composed of a simple mirror of two devices D0 and D1
(these may be local disks, slices, LUNs on a SAN, or whatever). For the
sake of this scenario, it's probably most intuitive to think of them as
LUNs on a SAN. Init
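(For concreteness, a pool like the one in this scenario would be created roughly as follows; the pool and device names are hypothetical, standing in for D0 and D1:)
# zpool create tank mirror c4t0d0 c5t0d0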
On Jan 26, 2007, at 13:53, Paul Fisher wrote:
This seems to be from Jim and on point:
http://www.usenix.org/event/fast05/tech/gray.pdf
Yes, thanks. That's the talk I was referring to. There's a reference
in it to a Microsoft tech report with measurement data.
--Ed
On Jan 26, 2007, at 13:29, Selim Daoud wrote:
it would be good to have real data and not only guesses or anecdotes
Yes, I agree. I'm sorry I don't have the data that Jim presented at
FAST, but he did present actual data. Richard Elling (I believe it was
Richard) has also posted some related da
On Jan 26, 2007, at 13:16, Dana H. Myers wrote:
I would tend to expect these spurious events to impact read and write
equally; more specifically, the chance of any one read or write being
mis-addressed is about the same. Since, AFAIK, there are typically many
more reads from a disk than writes,
On Jan 26, 2007, at 12:52, Dana H. Myers wrote:
So this leaves me wondering how often the controller/drive subsystem
reads data from the wrong sector of the drive without notice; is it
symmetrical with respect to writing, and thus about once a drive/year,
or are there factors which change this?
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good reference,
comes from Jim Gray, who has been around the data-management industry
for longer than I have (and I've be
On Jan 26, 2007, at 10:52, Marion Hakanson wrote:
Perhaps I'm stating the obvious, but here goes:
You could use SAN zoning of the affected LUNs to keep multiple hosts
from seeing the zpool. When failover time comes, you change the zoning
to make the LUNs visible to the new host, then import.
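A minimal sketch of that failover sequence, assuming a pool named "tank" (the name is hypothetical):
# zpool export tank        # on the old host, if it is still up
  ... rezone the LUNs so only the new host sees them ...
# zpool import tank        # on the new host
# zpool import -f tank     # only if the old host went down without exporting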
On Jan 26, 2007, at 10:57, Ross, Gary (G.A.) wrote:
...
What if something like the old CacheFS was revived, using ZFS as the
base file system instead of UFS?
...
Could this be a good thing, or am I way off base???
Disconnected operation is a hard problem. One of the better research
efforts
On Jan 26, 2007, at 9:42, Gary Mills wrote:
How does this work in an environment with storage that's centrally-
managed and shared between many servers? I'm putting together a new
IMAP server that will eventually use 3TB of space from our Netapp via
an iSCSI SAN. The Netapp provides all of the
On Jan 26, 2007, at 7:17, Peter Eriksson wrote:
If you _boot_ the original machine then it should see that the pool
now is "owned" by
the other host and ignore it (you'd have to do a "zpool import -f"
again I think). Not tested though so don't take my word for it...
Conceptually, that's about
Shannon Roddy wrote:
For Sun to charge 4-8 times street price for hard drives that
they order just the same as I do from the same manufacturers that I
order from is infuriating.
Are you sure they're really the same drives? Mechanically, they
probably are, but last I knew (I don't work in the
Ivan wrote:
Hi,
Is ZFS comparable to PVFS2? Could it also be used as a distributed filesystem
at the moment or are there any plans for this in the future?
I don't know anything at all about PVFS2, so I can't comment on that point.
As far as ZFS being used as a distributed file system, it c
On Dec 22, 2006, at 09:50, Anton B. Rang wrote:
Phantom writes and/or misdirected reads/writes:
I haven't seen probabilities published on this; obviously the disk
vendors would claim zero, but we believe they're slightly
wrong. ;-) That said, 1 in 10^8 bits would mean we'd have an
error
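(The excerpt is cut off there. For scale, this arithmetic is mine rather than from the original post:)
  10^8 bits = 1.25 x 10^7 bytes = 12.5 MB
  1 error / 10^8 bits  =>  roughly 1 such event per 12.5 MB transferred, or on the order of 80 per GB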
On Dec 9, 2006, at 8:59, Jim Mauro wrote:
Anyway, I'm feeling rather naive here, but I've seen the "NFS
enforced synchronous semantics" phrase
kicked around many times as the explanation for suboptimal
performance for metadata-intensive
operations when ZFS is the underlying file system, bu
On Oct 20, 2006, at 0:48, Torrey McMahon wrote:
Anthony Miller wrote:
I want to create a raidz on one array and have it mirrored to
the other array.
Do you think this will get you more availability compared to a simple
mirror? I'm curious as to why you would want to do this.
This con
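(ZFS does not support nesting one vdev type inside another, so a mirror of a raidz is not directly expressible. The nearest cross-array layout, sketched here with placeholder LUN names, is a stripe of mirrors with each mirror pairing one LUN from each array:)
# zpool create tank mirror <A-lun0> <B-lun0> \
               mirror <A-lun1> <B-lun1> \
               mirror <A-lun2> <B-lun2>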
On Sep 8, 2006, at 11:35, Torrey McMahon wrote:
If I read between the lines here, I think you're saying that the RAID
functionality is in the chipset but the management can only be done by
software running on the outside. (Right?)
No. All that's in the chipset is enough to read a RAID volume f
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All of
the ones I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All
were less than $150.
In other words, the days of having a JBOD-only solution are over except
for
oab wrote:
I'm new to ZFS so I was wondering if it is possible to concurrently
share a ZFS storage pool between two separate machines. I am currently
evaluating Sybase IQ
running on ZFS rather than raw devices (initial performance tests look
very promising) and now need to evaluate whether the IQ
Brian Hechinger wrote:
Could you "mix and match" by keeping the current style assuming there
are no -o options present?
# zfs create pool/fs
If you need to specify options, then they should all be options:
# zfs create -o name=pool/fs -o mountpoint=/bar -o etc
I would be tempted to have two
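(For comparison, a hedged note: the per-property -o form that the zfs CLI does accept keeps the dataset name positional rather than making it an option; the properties below are just examples:)
# zfs create -o mountpoint=/bar -o compression=on pool/fs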
On Jul 18, 2006, at 8:58, Richard Elling wrote:
Jeff Bonwick wrote:
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around.
With 6 disks, RAID-Z2 can tolerate any two disk failures, whereas
for 3x2-way
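(The excerpt ends mid-comparison. A quick count of the failure cases for 6 disks, mine rather than from the message:)
  two-disk failures: C(6,2) = 15 in all; the 3x2-way mirror survives 12 of them
    (only the 3 cases that take both halves of one mirror are fatal), while
    RAID-Z2 survives all 15
  three-disk failures: C(6,3) = 20 in all; the mirror layout survives the 8
    cases that hit one disk per pair, while RAID-Z2 survives none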
On May 3, 2006, at 15:21, eric kustarz wrote:
There are basically two writes that need to happen: one for time and one
for the subcommand string. The kernel just needs to make sure that if a
write completes, the data is parseable (has a delimiter). It's then up
to the userland parser (zpool history)
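(For context, the user-visible side of that log is just the history subcommand; the pool name here is hypothetical:)
# zpool history tank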
not tested,
partly because all of the required pieces are not yet in place) is
exporting the drives via iSCSI. Of course, this can't be done until
both initiator and target support for iSCSI are in Solaris, and I don't
know what the schedule for that might be.
valuable.
--Ed
--
Ed Gould                   Sun Microsystems
File System Architect      Sun Cluster
[EMAIL PROTECTED]          17 Network Circle
+1.650.786.4937            MS UMPK17-201
x84937                     Menlo Park, CA 94025