Hello Rainer,
Monday, March 19, 2007, 3:07:54 AM, you wrote:
RH> Thanks for the feedback. Please see below.
>> ZFS should give back memory used for cache to the system
>> if applications are demanding it. Right, it should, but sometimes it
>> won't.
>>
>> However, with databases there's a simple workaro
Info on tuning the ARC was just recently updated:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Dynamic_Reconfiguration_Recommendations
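For builds that have the tunable, the guide boils down to capping the ARC in /etc/system; a minimal sketch (the 2 GB cap below is only an example value, and the tunable exists only in recent Nevada builds):

  * Cap the ZFS ARC at 2 GB; leave the rest of RAM for the SGA
  set zfs:zfs_arc_max = 0x80000000

A reboot is needed for the setting to take effect.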
-r
Rainer Heilke writes:
> Thanks for the feedback. Please see below.
>
> > ZFS should give back memory used for cache
Hi Richard,
> The consensus best practice is to have enough RAM that you don't need to
> swap. If you need to swap, your life will be sad no matter what your disk
> config is.
From my understanding Solaris does not overcommit memory allocation, so every
allocation must be backed by some f
Using raidz or raidz2 in ZFS, do all the disks have to be the same size?
The updated information states that the kernel setting is only for the current
Nevada build. We are not going to use the kernel debugger method to change the
setting on a live production system (and do this every time we need to reboot).
We're back to trying to set their expectations more realist
Rainer Heilke writes:
> The updated information states that the kernel setting is only for the
> current Nevada build. We are not going to use the kernel debugger
> method to change the setting on a live production system (and do this
> every time we need to reboot).
>
> We're back to tryin
I currently run six Oracle 9i and 10g databases with an 8 GB SGA apiece in
containers on a V890 and have no difficulty starting Oracle (though we don't
start all the databases truly simultaneously). The ARC doesn't ramp up until a lot of IO has
passed through after a reboot (typically a steady rise over
Hi Kory,
No, they don't have to be the same size. But the pool size will be
constrained by the smallest disk, which might not be the best
use of your disk space.
See the output below. I'd be better off mirroring the two 136-GB
disks and using the 4-GB disk for something else. :-)
Cindy
c0t0d0 = 4
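To make the smallest-disk constraint concrete, a rough sketch (the pool and device names below are made up for illustration, not taken from the listing above):

  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
  # zpool list tank

If c1t2d0 is the 4-GB disk, each raidz member only contributes about 4 GB, so the two 136-GB drives are mostly wasted in that layout.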
Hello Kory,
Monday, March 19, 2007, 4:47:27 PM, you wrote:
KW> Using raidz or raidz2 in ZFS, do all the disks have to be the same size?
No, they don't have to be the same size.
However, all disks will be reduced to a common size, and once you
replace (online) all disks with bigger ones the pool size wi
Hello Rainer,
Monday, March 19, 2007, 4:50:59 PM, you wrote:
RH> The updated information states that the kernel setting is only
RH> for the current Nevada build. We are not going to use the kernel
RH> debugger method to change the setting on a live production system
RH> (and do this everytime we
> After bootup, ZFS should have near zero memory in the
> ARC.
This makes sense, and I have no idea how long the server had been running
before the test. We can use the above information to help manage their
expectations; on boot-up, the ARC will be low, so the de-allocation of resources
won't be a
Thanks. As above, knowing that the ARC takes time to ramp up strongly suggests
it won't be an issue on a normally booted system. It sounds like your
needs are much greater than ours, and your databases are running fine.
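For our part, a quick way to watch the ARC grow after a reboot (assuming the arcstats kstat is available on our release) would be something like:

  # kstat -p zfs:0:arcstats:size
  # kstat -p zfs:0:arcstats:c_max

Sampling "size" every few minutes after boot should show the steady rise you describe.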
I can take this information to the DBAs and use it to "manage their
expecta
Hi Rainer,
While I would recommend upgrading to Build 54 or newer to use the
system tunable, it's not that big a deal to set the ARC at boot time.
We did it on a T2000 for a while, until we could take it down for
an extended period of time to upgrade it.
Definitely WOULD NOT run a database on
JS wrote:
General Oracle zpool/zfs tuning, from my tests with Oracle 9i and the APS
Memory Based Planner and filebench. All tests were completed using Solaris 10
Update 2 and Update 3:
-use zpools with 8k blocksize for data
definitely! (see the recordsize sketch below)
-don't use zfs for redo logs - use ufs with directio and
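On the 8k point, the knob is the per-filesystem recordsize property rather than anything set on the pool itself; a minimal sketch (the dataset name is made up), done before loading data since recordsize only affects newly written files:

  # zfs set recordsize=8k tank/oradata
  # zfs get recordsize tank/oradata

8k matches the usual Oracle db_block_size, which is what the recommendation is getting at.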
Dagobert Michelsen wrote:
Hi Richard,
The consensus best practice is to have enough RAM that you don't need to
swap. If you need to swap, your life will be sad no matter what your disk
config is.
From my understanding Solaris does not overcommit memory allocation, so every
allocation must
Richard Elling wrote:
warning: noun/verb overload. In my context, swap is a verb.
It is also a common shorthand for "swap space."
--
--Ed
On 3/19/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Rainer,
Monday, March 19, 2007, 4:50:59 PM, you wrote:
RH> The updated information states that the kernel setting is only
RH> for the current Nevada build. We are not going to use the kernel
RH> debugger method to change the setting
On Wed, Feb 28, 2007 at 11:45:35AM +0100, Roch - PAE wrote:
> > > http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460622
Any estimate of when we'll see a [feature] fix for U3?
Should I open a call to perhaps raise the priority of the fix?
> The bug applies to checksum as well. Al
Using Solaris 10, Update 2
I've just rebooted my desktop and discovered that a ZFS
filesystem appears to have gone missing.
The filesystem in question was called "biscuit/home" and should
have been modified to have its mountpoint set to /export/home.
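The first things I'll check are whether the dataset simply failed to mount and whether the mountpoint property survived the reboot (standard commands; "biscuit/home" is the dataset named above):

  # zfs list -o name,mountpoint,mounted -r biscuit
  # zfs get -s local mountpoint biscuit/home
  # zfs mount -a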
Before the reboot, I did a lot of