Think of swap space as "the current total virtual memory available for
program use". It is composed of unreserved RAM (that is, RAM not
currently reserved for uses other than program memory, such as file
caching, certain ZFS functionality, and kernel usage) PLUS whatever disk
space has been configured as swap devices.
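To put numbers on it, using the swap output quoted later in this thread
(and assuming swap -l reports 512-byte blocks, so halve to get KB):

$ echo $((12594952 / 2))       # disk-backed swap from swap -l:  6297476 KB
$ echo $((862080 + 6062060))   # swap -s used + available:       6924140 KB
$ echo $((6924140 - 6297476))  # difference = unreserved RAM:    626664 KB (~612 MB)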
My inclination, based on what I've read and heard from others, is to say
"no".
But again, the best way to find out is to write the code. :\
On Wed, Jun 9, 2010 at 11:45, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Be
I'm not sure where the poster got this information, or why it seems to
be at odds with the design goals of AVS. Perhaps they only looked at
one piece of the puzzle and then got lost?
I wrote it :-) It's right there in the manual, in fact:
http://docs.sun.com/source/819-6148-10/chap4.html#pgfId-
On 6/9/2010 7:20 PM, Greg Eanes wrote:
On Wed, Jun 9, 2010 at 8:17 PM, devsk wrote:
$ swap -s
total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k
available
$ swap -l
swapfile             dev   swaplo   blocks     free
/dev/dsk/c6t0d0s1   215,1        8 12594952 12594952
On Wed, Jun 9, 2010 at 8:17 PM, devsk wrote:
> $ swap -s
> total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k
> available
>
> $ swap -l
> swapfile dev swaplo blocks free
> /dev/dsk/c6t0d0s1 215,1 8 12594952 12594952
>
> Can someone please do
$ swap -s
total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k
available
$ swap -l
swapfile             dev   swaplo   blocks     free
/dev/dsk/c6t0d0s1   215,1        8 12594952 12594952
Can someone please do the math for me here? I am not able to figure out the total.
What
On 6/9/2010 5:04 PM, Edward Ned Harvey wrote:
>
> Everything is faster with more ram. There is no limit, unless the total
> used disk in your system is smaller than the available ram in your system
> ... which seems very improbable.
>
Off topic, bu
Are you sure of that? This directly contradicts what David Magda
said yesterday.
Yes. Just how is what he said contradictory?
> Unfortunately, there are no simple, easy-to-implement heartbeat mechanisms
> for Solaris.
Not so. Sun/Solaris Cluster is (fairly) simple and (relatively) easy
to implement.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Joe Auty
>
> I'm also noticing that I'm a little short on RAM. I have 6 320 gig
> drives and 4 gig of RAM. If the formula is POOL_SIZE/250, this would
> mean that I need at least 6.4 gig of RAM
On Wed, Jun 9, 2010 at 9:20 AM, Geoff Nordli wrote:
> Have you played with the flush interval?
>
> I am using iscsi based zvols, and I am thinking about not using the caching
> in vbox and instead rely on the comstar/zfs side.
>
> What do you think?
If you care about your data, IgnoreFlush should be set to 0, so that guest flushes actually reach stable storage.
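Those knobs are set with VBoxManage setextradata; the paths below are from
memory of the VirtualBox manual (chapter 12), so verify them against your
VirtualBox version ("MyVM" and the LUN number are placeholders):

# honor guest flush requests on the first IDE disk (0 = do NOT ignore flushes)
VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0
# optionally flush host-side more often; see the manual for the value's exact meaning
VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/FlushInterval" 1000000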
On Jun 8, 2010, at 5:17 PM, Moazam Raja wrote:
> Hi all, I'm trying to accomplish server to server storage replication
> in synchronous mode where each server is a Solaris/OpenSolaris machine
> with its own local storage.
>
> For Linux, I've been able to achieve what I want with DRBD but I'm
> hoping I can find a similar solution on Solaris so that I can leverage
> ZFS. It seems that solution is Sun Availability Suite (AVS)?
You can hardly have too much. At least 8 GB, maybe 16 would be good.
The benefit will depend on your workload, but zfs and buffer cache will use it
all if you have a big enough read working set.
-- Garrett
Joe Auty wrote:
>I'm also noticing that I'm a little short on RAM. I have 6 320 gig
>
>
>Brandon High wrote:
>On Tue, Jun 8, 2010 at 10:33 AM, besson3c wrote:
>
>
>What VM software are you using? There are a few knobs you can turn in VBox
>which will help with slow storage. See
>http://www.virtualbox.org/manual/ch12.html#id2662300 for instructions on
>reducing the flush interval.
> On Behalf Of Joe Auty
>Sent: Tuesday, June 08, 2010 11:27 AM
>
>
>I'd love to use Virtualbox, but right now it (3.2.2 commercial which I'm
>evaluating, I haven't been able to compile OSE on the CentOS 5.5 host yet) is
>giving me kernel panics on the host while starting up VMs which are obviously
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Alvin Lobo
>
> Is there a way that I can add one HDD from server A and one HDD from
> server B to a ZFS pool so that there is an online snapshot taken at
> regular intervals, hence maintaining
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Toyama Shunji
>
> Certainly I feel it is difficult, but is it logically impossible to
> write a filter program to do that, with reasonable memory use?
Good question. I don't know the answer.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of besson3c
>
> I'm wondering if somebody can kindly direct me to a sort of newbie way
> of assessing whether my ZFS pool performance is a bottleneck that can
> be improved upon, and/or whether I
NFS writes on ZFS blow chunks performance-wise. The only way to increase the
write speed is by using an slog; the problem is that a "proper" slog device
(one that doesn't lose transactions) does not exist for a reasonable price. The
least expensive SSD that will work is the Intel X25-E, and eve
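If you do spring for one, adding it to the pool is a one-liner (pool and
device names are made up):

# add the SSD as a dedicated log device (slog) to an existing pool
zpool add tank log c2t5d0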
Hi Alvin,
Which Solaris release is this?
If you are using an OpenSolaris release (build 131), you might consider
the zpool split feature that allows you to clone a mirrored pool by
attaching the HDD to the pool, letting it resilver, and using zpool
split to clone the pool. Then, move the HDD and
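A rough sketch of those steps, with made-up pool and device names:

# on the source server: attach the new HDD as a mirror of the existing disk
zpool attach tank c0t1d0 c0t2d0
# wait for resilvering to finish
zpool status tank
# split the new disk off into its own (exported) pool
zpool split tank tank2
# move the HDD to the other server and import the new pool there
zpool import tank2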
Hi Joe,
I have no clue why this drive was removed, particularly for a one-time
failure. I would reconnect/reseat this disk and see if the system
recognizes it. If it resilvers, then you're back in business, but I
would use zpool status and fmdump to monitor this pool and its devices
more often.
Cindy Swearingen wrote:
According to this report, I/O to this device caused a probe failure
because the device wasn't available on May 31.
I was curious if this device had any previous issues over a longer
period of time.
Failing or faulted drives can also kill your pool's performance.
I'm also noticing that I'm a little short on RAM. I have 6 320 gig
drives and 4 gig of RAM. If the formula is POOL_SIZE/250, this would
mean that I need at least 6.4 gig of RAM.
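(The 6.4 assumes the formula is applied to usable raidz capacity rather
than raw disk; showing my work:)

$ echo "scale=2; (5 * 320) / 250" | bc   # 5 data disks' worth: 1600/250 = 6.40 GB
$ echo "scale=2; (6 * 320) / 250" | bc   # raw capacity:        1920/250 = 7.68 GB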
What role does RAM play with queuing and caching and other things which
might impact overall disk performance? How much
On Jun 8, 2010, at 1:33 PM, besson3c wrote:
Sure! The pool consists of 6 SATA drives configured as RAID-Z. There
are no special read or write cache drives. This pool is shared to
several VMs via NFS, these VMs manage email, web, and a Quickbooks
server running on FreeBSD, Linux, and Wind
On Wed, Jun 9, 2010 at 7:40 AM, Maurice Volaski wrote:
>> For Linux, I've been able to achieve what I want with DRBD but I'm
>> hoping I can find a similar solution on Solaris so that I can leverage
>> ZFS. It seems that solution is Sun Availability Suite (AVS)?
>
> AVS is like DRBD, but only to a
For Linux, I've been able to achieve what I want with DRBD but I'm
hoping I can find a similar solution on Solaris so that I can leverage
ZFS. It seems that solution is Sun Availability Suite (AVS)?
AVS is like DRBD, but only to a point. If the drives on your primary
fail, the primary will star
- "Alvin Lobo" skrev:
> Is there a way that I can add one HDD from server A and one HDD from
> server B to a ZFS pool so that there is an online snapshot taken at
> regular intervals, hence maintaining a copy on both HDDs?
I think zfs send/receive might be what you are looking for.
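Roughly something like this (pool, dataset, and host names made up):

# take a snapshot on server A and send it to server B
zfs snapshot tank/data@2010-06-09
zfs send tank/data@2010-06-09 | ssh serverB zfs receive -F backup/data

# later runs only send what changed since the previous snapshot
zfs snapshot tank/data@2010-06-10
zfs send -i tank/data@2010-06-09 tank/data@2010-06-10 | \
    ssh serverB zfs receive backup/data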
Kind regards,
Is there a way that I can add one HDD from server A and one HDD from server B
to a ZFS pool so that there is an online snapshot taken at regular intervals,
hence maintaining a copy on both HDDs?