Hi Louwtjie,
Are you running FC or SATA-II disks in the 6140? How many spindles too?
Best Regards,
Jason
On 11/3/06, Louwtjie Burger <[EMAIL PROTECTED]> wrote:
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above avail
Jeff Victor wrote:
If I add a ZFS dataset to a zone, and then want to "zfs send" from
another computer into a file system that the zone has created in that
data set, can I "zfs send" to the zone, or can I send to that zone's
global zone, or will either of those work?
I believe that the 'zfs s
Al Hopper wrote:
[1] Using MTTDL = MTBF^2 / (N * (N-1) * MTTR)
But ... I'm not sure I buy into your numbers given the probability that
more than one disk will fail inside the service window - given that the
disks are identical? Or ... a disk failure occurs at 5:01 PM (quitting
time) on a Frida
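To put the formula in concrete terms, here is a worked example with made-up figures
(not Al's actual numbers): with MTBF = 1,000,000 hours, N = 6 drives and MTTR = 24 hours,

  MTTDL = (10^6)^2 / (6 * 5 * 24) ~= 1.4 x 10^9 hours, roughly 160,000 years.

The term under debate is MTTR: if that 5:01 PM Friday failure waits until Monday for
service, MTTR grows and MTTDL drops in the same proportion.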
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above available for tests, I'm open to
suggestions on potential configs that I could run for you.
Pop me a mail if you want something specific _or_ you have suggestions
conc
Don't forget to restart mapid after modifying default domain in
/etc/default/nfs.
As root, run "svcadm restart svc:/network/nfs/mapid".
I've run into this in the past.
Karen
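A minimal sketch of the sequence being described, with a placeholder domain
(substitute your own):

  # in /etc/default/nfs
  NFSMAPID_DOMAIN=example.com

  # then restart the mapping daemon
  svcadm restart svc:/network/nfs/mapid

Without the restart, nfsmapid keeps using the old domain and NFSv4 clients
typically see files owned by nobody until it is bounced.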
eric kustarz wrote:
Erik Trimble wrote:
I actually think this is an NFSv4 issue, but I'm going to ask here
anyway...
On Fri, 3 Nov 2006, Richard Elling - PAE wrote:
> ozan s. yigit wrote:
> > for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
> > basis for this recommendation? i assume it is performance and not failure
> > resilience, but i am just guessing... [i know, recommendation was in
Erik Trimble wrote:
I actually think this is an NFSv4 issue, but I'm going to ask here
anyway...
Server: Solaris 10 Update 2 (SPARC), with several ZFS file systems
shared via the legacy method (/etc/dfs/dfstab and share(1M), not via the
ZFS property). Default settings in /etc/default/nfs
b
I actually think this is an NFSv4 issue, but I'm going to ask here
anyway...
Server: Solaris 10 Update 2 (SPARC), with several ZFS file systems
shared via the legacy method (/etc/dfs/dfstab and share(1M), not via the
ZFS property). Default settings in /etc/default/nfs
bigbox# share
- /
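For context, the legacy sharing method mentioned above looks roughly like this
(the path and options are placeholders, not the actual config from the post):

  # entry in /etc/dfs/dfstab, picked up at boot or by running shareall
  share -F nfs -o rw /tank/export

as opposed to letting ZFS handle it with something like
"zfs set sharenfs=on tank/export".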
Matthew Flanagan wrote:
Matt,
Matthew Flanagan wrote:
mkfile 100m /data
zpool create tank /data
...
rm /data
...
panic[cpu0]/thread=2a1011d3cc0: ZFS: I/O failure (write on off 0: zio 60007432bc0
[L0 unallocated] 4000L/400P DVA[0]=<0:b000:400> DVA[1]=<0:120a000:400> fletcher4 lzjb
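For anyone trying to reproduce this, a sketch of the same sequence with the
teardown step that avoids trouble (names copied from the example above):

  mkfile 100m /data
  zpool create tank /data
  zpool destroy tank    # release the file vdev first
  rm /data

Removing the backing file while the pool is still active is what leads to the
I/O failure above; destroying (or at least exporting) the pool first releases
the file cleanly.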
If I add a ZFS dataset to a zone, and then want to "zfs send" from another
computer into a file system that the zone has created in that data set, can I "zfs
send" to the zone, or can I send to that zone's global zone, or will either of
those work?
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
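Not an answer, just a sketch of the mechanics being asked about, with made-up
pool and host names: from the sending machine,

  zfs snapshot sourcepool/data@today
  zfs send sourcepool/data@today | ssh zonehost zfs receive tank/delegated/data/backup

Whether that receive has to run in the global zone, or can run inside the
non-global zone against the dataset delegated to it, is exactly the question above.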
=
zfs-discuss 10/16 - 10/31
=
Size of all threads during per
ozan s. yigit wrote:
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
basis for this recommendation? i assume it is performance and not failure
resilience, but i am just guessing... [i know, recommendation was intended
for people who know their raid cold, so it needed no f
Chris Gerhard wrote:
An alternate way would be to use NFSv4. When an NFSv4 client crosses a mountpoint
on the server, it can detect this and mount the filesystem. It can feel like a
"lite" version of the automounter in practice, as you just have to mount the root
and discover the filesystems as n
>
> An alternate way would be to use NFSv4. When an NFSv4 client crosses a mountpoint
> on the server, it can detect this and mount the filesystem. It can feel like a
> "lite" version of the automounter in practice, as you just have to mount the root
> and discover the filesystems as neede
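As a rough illustration of the "mount the root" idea (server name and paths are
placeholders): on a client that supports this behaviour,

  mount -F nfs -o vers=4 server:/export /mnt

is enough; shared filesystems under /export then appear beneath /mnt as the
client crosses into them, instead of needing a separate mount (or automounter
map entry) for each one.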
Jay Grogan wrote:
The V120 has 4GB of RAM. On the HDS side we are running RAID 5 on the LUN and not
sharing any ports on the McDATA, but with so much cache we aren't close to
taxing the disks.
Are you sure? At some point data has to get flushed from the cache to
the drives themselves. In most
Richard Elling - PAE wrote:
Robert Milkowski wrote:
I almost completely agree with your points 1-5, except that I think
that having at least one hot spare by default would be better than
having none at all - especially with SATA drives.
Yes, I pushed for it, but didn't win.
In a perfect wor
Hello ozan,
Friday, November 3, 2006, 3:57:00 PM, you wrote:
osy> for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
osy> basis for this recommendation? i assume it is performance and not failure
osy> resilience, but i am just guessing... [i know, recommendation was intended
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
basis for this recommendation? i assume it is performance and not failure
resilience, but i am just guessing... [i know, recommendation was intended
for people who know their raid cold, so it needed no further explanation]
t
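For concreteness, a layout that stays inside the recommended range (device names
are examples only): six disks in one raidz group,

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

and with many more disks the usual advice is several raidz groups of that width
in one pool rather than a single very wide one.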