Thanks for the response, Richard. Forgive my ignorance, but the following
questions come to mind as I read your response.
I would then have to create 80 RAIDz(6+1) Volumes, and the process of
creating these Volumes can be scripted. But -
1) I would then have to create 80 mount points to mount each of these
Volumes.
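For illustration, the creation loop could look something like the sketch
below (pool and device names are placeholders, not the customer's actual
LUNs, and each RAIDz group is shown as its own pool):
    i=1
    while [ $i -le 80 ]; do
        # one raidz vdev of 7 LUNs (6 data + 1 parity) per pool
        zpool create pool$i raidz c${i}t0d0 c${i}t1d0 c${i}t2d0 \
            c${i}t3d0 c${i}t4d0 c${i}t5d0 c${i}t6d0
        i=$((i + 1))
    done
For what it's worth, zpool create mounts each new pool at /<poolname> by
default, so the mount points would not have to be created by hand.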
Oatway, Ted wrote:
IHAC that has 560+ LUNs that will be assigned to ZFS Pools and some
level of protection. The LUNs are provided by seven Sun StorageTek
FLX380s. Each FLX380 is configured with 20 Virtual Disks. Each Virtual
Disk presents four Volumes/LUNs. (4 Volumes x 20 Virtual Disks x 7 Disk
Arrays = 560.)
IHAC that has 560+ LUNs that will be assigned to ZFS Pools
and some level of protection. The LUNs are provided by seven Sun StorageTek
FLX380s. Each FLX380 is configured with 20 Virtual Disks. Each Virtual Disk
presents four Volumes/LUNs. (4 Volumes x 20 Virtual Disks x 7 Disk Arrays
= 560.)
Yes, server has 8GB of RAM.
Most of the time there's about 1GB of free RAM.
bash-3.00# mdb 0
Loading modules: [ unix krtld genunix dtrace specfs ufs sd md ip sctp usba fcp
fctl qlc ssd lofs zfs random logindmux ptm cpc nfs ipc ]
> arc::print
{
anon = ARC_anon
mru = ARC_mru
mru_ghost =
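For reference, the same structures can be inspected on the running system
as well as in the dump, and ::memstat gives the overall picture of where
memory is going (a sketch; assumes running mdb -k on the box is acceptable):
    # overall page usage: kernel vs anon vs page cache vs free
    echo "::memstat" | mdb -k
    # the same ARC structure that was printed from the dump above
    echo "arc::print" | mdb -k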
Robert,
I would be interested in seeing your crash dump. ZFS will consume much
of your memory *in the absence of memory pressure*, but it should be
responsive to memory pressure and give up memory when this happens. It
looks like you have 8GB of memory on your system? ZFS should never
consume
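If the ARC does turn out to be holding on to memory, one commonly cited
option is to cap it via /etc/system; a sketch follows, with the caveat that
the zfs_arc_max tunable may need a newer kernel or patch level than plain
U2, so it would have to be verified on this system first:
    * cap the ZFS ARC at 4 GB (value is in bytes)
    set zfs:zfs_arc_max = 0x100000000
A reboot is needed for /etc/system changes to take effect.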
IIRC there was a tunable variable to set how much data to read in,
and the default was 64KB...
Robert Milkowski wrote:
Hi.
S10U2+patches, SPARC. NFS v3/tcp server with ZFS as local storage.
ZFS does only striping; the actual RAID-10 is done on the 3510. I can see
MUCH more throughput generated to the disks than over the net to the NFS
server. Nothing else runs on the server.
It looks like you are seeing
Hi.
S10U2+patches, SPARC.
NFS v3/tcp server with ZFS as local storage. ZFS does only striping; the actual
RAID-10 is done on the 3510. I can see MUCH more throughput generated to the
disks than over the net to the NFS server. Nothing else runs on the server.
bash-3.00# ./nicstat.pl 1
Time Int rKb/s wK
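For a side-by-side comparison of the two paths, the disk-side numbers can be
collected in a second window while nicstat runs (a sketch; the exact columns
vary between releases):
    # disk side: extended stats, one-second samples, only active devices
    iostat -xnz 1
    # network side, as above
    ./nicstat.pl 1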
Hi.
v440, S10U2 + patches
OS and Kernel Version: SunOS X 5.10 Generic_118833-20 sun4u sparc
SUNW,Sun-Fire-V440
NFS server with ZFS as local storage.
We were rsyncing a UFS filesystem to a ZFS filesystem exported over NFS. After
some time the server which exports ZFS over NFS was unresponsive
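When the server hangs like that, a crash dump is usually the most useful
thing to capture for analysis; a rough sketch, assuming console access to
OBP and that dumpadm is already configured:
    # from the console, break to the ok prompt (Stop-A, or "break" from
    # ALOM/RSC), then force a panic so a dump gets written:
    ok sync
    # after the box comes back up, pull the dump off the dump device
    # (boot-time savecore normally does this automatically):
    savecore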
Jonathan Edwards wrote:
Here are 10 options I can think of to summarize combinations of zfs with
hw redundancy:
#   ZFS   ARRAY HW   CAPACITY   COMMENTS
--  ----  --------   --------   --------------------------------
1   R0    R1         N/2        hw mirror - no zfs healing (XXX)
2   R0    R5
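As a concrete illustration of how rows in that table translate to commands,
the zpool side looks roughly like this (device names are made up; the
array-side RAID level is configured on the array, not here, and the two
commands are alternatives, not a sequence):
    # row 1: plain ZFS stripe across LUNs that are already hw mirrors
    # (capacity N/2, no ZFS self-healing)
    zpool create tank c2t0d0 c2t1d0
    # the inverse: ZFS mirroring on top of unprotected array LUNs,
    # which keeps the self-healing in ZFS
    zpool create tank mirror c2t0d0 c2t1d0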
Wee Yeh Tan wrote:
Perhaps the question should be how one could mix them to get the best
of both worlds instead of going to either extreme.
In the specific case of a 3320 I think Jonathan's chart has a lot of
good info that can be put to use.
In the general case, well, I hate to say this
UNIX admin wrote:
[Solaris 10 6/06 i86pc]
Shortly thereafter, I ran out of space on my "space" pool, but `zfs
list` kept reporting that I still had about a gigabyte worth of free
space, while `zpool status` seemed to correctly report that I had run out
of space.
Please send us the output of 'zpool status'
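For this kind of pool-level vs dataset-level discrepancy, the usual set of
outputs to collect would be something like (a sketch, using the pool name
from the report):
    zpool status -v space
    zpool list space
    zfs list -r space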
On Sep 5, 2006, at 06:45, Robert Milkowski wrote:
Hello Wee,
Tuesday, September 5, 2006, 10:58:32 AM, you wrote:
WYT> On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
This is simply not true. ZFS would protect against the same type of
errors seen on an individual drive as it would on a pool made of
> AFAIK, no. The "attach" semantics only works for adding mirrors.
> Would be nice if that could be overloaded for RAIDZ.
Sure would be.
> Not sure exactly which blog entry, but you might be confused that
> stripes can be of different sizes (not different sized disks). The
> man page for zpool
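To make the distinction concrete: attach takes an existing device and a new
one and turns them into (or widens) a mirror, while raidz groups can only be
grown by adding whole new raidz vdevs; a sketch with placeholder pool and
device names:
    # mirror an existing single-disk vdev (or widen an existing mirror)
    zpool attach space c2t10d0 c2t11d0
    # no equivalent exists for widening a raidz group; the pool can only
    # grow by adding another complete raidz vdev
    zpool add space raidz c2t12d0 c2t13d0 c2t14d0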
Hello Wee,
Tuesday, September 5, 2006, 10:58:32 AM, you wrote:
WYT> On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
>> This is simply not true. ZFS would protect against the same type of
>> errors seen on an individual drive as it would on a pool made of HW raid
>> LUN(s). It might be overkill
On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
This is simply not true. ZFS would protect against the same type of
errors seen on an individual drive as it would on a pool made of HW raid
LUN(s). It might be overkill to layer ZFS on top of a LUN that is
already protected in some way by the
On Tue, Sep 05, 2006 at 10:49:11AM +0200, Pawel Jakub Dawidek wrote:
> On Tue, Aug 22, 2006 at 12:45:16PM +0200, Pawel Jakub Dawidek wrote:
> > Hi.
> >
> > I started porting the ZFS file system to the FreeBSD operating system.
> [...]
>
> Just a quick note about progress in my work. I needed to slow down a bit
On Tue, Aug 22, 2006 at 12:45:16PM +0200, Pawel Jakub Dawidek wrote:
> Hi.
>
> I started porting the ZFS file system to the FreeBSD operating system.
[...]
Just a quick note about progress in my work. I needed to slow down a bit,
but:
All file system operations seem to work. The only exceptions are
Hi,
On 9/4/06, UNIX admin <[EMAIL PROTECTED]> wrote:
[Solaris 10 6/06 i86pc]
...
Then I added two more disks to the pool with `zpool add -fn space c2t10d0
c2t11d0`, whereby I determined that those would be added as a RAID-0 stripe,
which is not what I wanted. `zpool add -f raidz c2t10d0 c2t11d0`
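For reference, the form that adds the new disks as their own raidz group
(rather than as plain stripe columns) needs the pool name and the raidz
keyword, and -n previews the layout without changing anything; a sketch with
a hypothetical third disk added, since wider raidz groups are usually
preferred:
    # dry run first (-n) to check the resulting vdev layout; -f mirrors
    # the flag used above -- drop it to let zpool warn about mismatched
    # replication levels
    zpool add -fn space raidz c2t10d0 c2t11d0 c2t12d0
    # repeat without -n once the layout looks right
    zpool add -f space raidz c2t10d0 c2t11d0 c2t12d0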