Can anyone comment on Solaris with zfs on HP systems? Do things work
reliably? When there is trouble, how many hoops does HP make you jump
through (how painful is it to get a part replaced that isn't flat-out
smokin')? Have you gotten bounced between vendors?
Thanks,
Chris
Erik Trimble wrote:
port folks, finger pointing between vendors, or have lots
of grief from an untested combination of parts. If this isn't possible,
we'll certainly find another solution. I already know it won't be the
7000 series.
Thank you,
Chris Banal
Marion Hakanson wrote:
jp...@cam.ac.uk said
Will these types of vendors be at NAB this year? I'd like to talk to a
few if they are...
--
Thank you,
Chris Banal
no trouble with it as far as I could tell. It would only resilver the
data that was changed while that drive was offline. We had no data loss.
Thank you,
Chris Banal
What is the best way to tell if you're bound by the number of individual
operations per second / random I/O? "zpool iostat" has an "operations" column,
but this doesn't really tell me whether my disks are saturated. Traditional
"iostat" doesn't seem to be the greatest place to look when utilizing zfs.
Thanks,
Assuming no snapshots, do full backups (i.e., tar or cpio) eliminate the need
for a scrub?
Thanks,
Chris
We have a SunFire X4500 running Solaris 10U5 which does about 5-8k NFS ops,
of which about 90% are metadata. In hindsight it would have been
significantly better to use a mirrored configuration, but we opted for 4 x
(9+2) raidz2 at the time. We cannot take the downtime necessary to change
the zpool.
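As a back-of-the-envelope illustration of why mirrors help this kind of
workload, using assumed numbers (roughly 100 random IOPS per SATA drive, and
the usual rule of thumb that a raidz/raidz2 vdev delivers about one disk's
worth of random IOPS):

  # 4 x (9+2) raidz2 versus 22 two-way mirrors over the same 44 disks
  iops_per_disk = 100      # assumed random IOPS for a 7200 rpm SATA drive
  raidz2_vdevs = 4
  mirror_pairs = 22

  print("raidz2 layout:", raidz2_vdevs * iops_per_disk, "random read IOPS (rough)")
  print("mirror layout:", mirror_pairs * 2 * iops_per_disk, "random read IOPS (rough)")

For streaming I/O the two layouts are much closer; it's the small random and
metadata-heavy operations that suffer on wide raidz2 vdevs.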
On Sat, Oct 3, 2009 at 11:33 AM, Richard Elling wrote:
> On Oct 3, 2009, at 10:26 AM, Chris Banal wrote:
>
> On Fri, Oct 2, 2009 at 10:57 PM, Richard Elling wrote:
>>
>> c is the current size of the ARC. c will change dynamically, as memory
>> pressure and demand change.
On Fri, Oct 2, 2009 at 10:57 PM, Richard Elling wrote:
>
> c is the current size of the ARC. c will change dynamically, as memory
> pressure
> and demand change.
How is the relative greediness of c determined? Is there a way to make it
more greedy on systems with lots of free memory?
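For the record, one way to watch what the ARC is actually doing is to pull the
arcstats kstats. A minimal sketch, assuming kstat(1M) is available and using
the commonly seen zfs:0:arcstats statistic names:

  import subprocess

  # "kstat -p" prints one "module:instance:name:statistic<TAB>value" pair per line.
  out = subprocess.run(["kstat", "-p", "zfs:0:arcstats"],
                       capture_output=True, text=True, check=True).stdout

  stats = {}
  for line in out.splitlines():
      name, _, value = line.partition("\t")
      stats[name.split(":")[-1]] = value.strip()

  gib = 1024 ** 3
  for key in ("size", "c", "c_min", "c_max"):
      if key in stats:
          print("%6s: %.2f GiB" % (key, int(stats[key]) / gib))

On Solaris 10 the ceiling can be lowered with the zfs_arc_max tunable in
/etc/system (set zfs:zfs_arc_max=<bytes>), but I'm not aware of a supported
knob that makes c grow faster than the default policy.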
We have a production server which does nothing but nfs from zfs. This
particular machine has plenty of free memory. Blogs and documentation state
that zfs will use as much memory as is "necessary", but how is "necessary"
calculated? If the memory is free and unused, would it not be beneficial to
increase the ARC?
This was previously posted to the sun-managers mailing list, but the only
reply I received recommended I post here as well.
We have a production Solaris 10u5 / ZFS X4500 file server which is
reporting NLM_DENIED_NOLOCKS immediately for any NFS locking request. The
lockd does not appear to be busy.
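For reproducing it from a Linux client, a small test along these lines (the
path is made up; point it at a file on the affected mount) should fail
immediately, typically with "No locks available" (ENOLCK), when the server
answers NLM_DENIED_NOLOCKS:

  import fcntl

  path = "/mnt/x4500/locktest"   # hypothetical file on the affected NFS mount

  with open(path, "w") as f:
      try:
          fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
          print("lock acquired")
          fcntl.lockf(f, fcntl.LOCK_UN)
      except OSError as err:
          print("lock failed:", err)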
It appears as though zfs reports the size of a directory to be one byte per
file. Traditional file systems such as ufs or ext3 report the actual size of
the data needed to store the directory.
This causes some trouble with the default behavior of some nfs clients
(linux) to decide to use a read
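A quick way to see the difference (the path is hypothetical; compare a
directory on the ZFS export against one on UFS or ext3):

  import os

  d = "/tank/export/somedir"   # made-up path; use a real directory
  print("st_size:", os.stat(d).st_size)    # on ZFS this tracks the entry count, not bytes
  print("entries:", len(os.listdir(d)))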
Since most zfs features / fixes are reported in snv_XXX terms, is there some
sort of way to figure out which versions of Solaris 10 have the equivalent
features / fixes?
Thanks,
Chris