t I'm wondering if anyone has had to touch this or
other settings with ZFS appliances they've built...?
-Gary
om
LSI resellers and they work wonderfully with ZFS.
-Gary
h
of huge files (are you writing a lot of 16GB files?) then you'll want
to test for that. Caching anywhere in the pipeline is important for
benchmarks because you aren't going to turn off a cache or remove RAM
in production, are you?
-Gary
, does filebench have an option for testing either
of those?
-Gary
so insulting that I couldn't finish the last 70
pages of the paperback.
-Gary
Is it destroying old snapshots or creating new ones that causes this
dead time? What does each of these procedures do that could affect
the system? What can I do to make this less visible to users?
--
-Gary Mills--Unix Group--Computer and Network Services
On Thu, Mar 04, 2010 at 07:51:13PM -0300, Giovanni Tirloni wrote:
>
>On Thu, Mar 4, 2010 at 7:28 PM, Ian Collins <...@ianshome.com>
>wrote:
>
>Gary Mills wrote:
>
> We have an IMAP e-mail server running on a Solaris 10 10/09 system.
>
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
> We have an IMAP e-mail server running on a Solaris 10 10/09 system.
> It uses six ZFS filesystems built on a single zpool with 14 daily
> snapshots. Every day at 11:56, a cron command destroys the oldest
> snapshots and
On Mon, Mar 08, 2010 at 03:18:34PM -0500, Miles Nordin wrote:
> >>>>> "gm" == Gary Mills writes:
>
> gm> destroys the oldest snapshots and creates new ones, both
> gm> recursively.
>
> I'd be curious if you try taking the same snaps
conds.
>
> Out of curiosity, how much physical memory does this system have?
Mine has 64 GB of memory with the ARC limited to 32 GB. The Cyrus
IMAP processes, thousands of them, use memory mapping extensively.
I don't know if this design affects the snapshot recycle behavior.
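For reference, the daily rotation amounts to something like this (an
untested sketch; the pool name and snapshot naming are only examples):
POOL=space                                   # example pool name
TODAY=`date +%Y%m%d`
# find the oldest snapshot of the pool itself, then recycle it
OLDSNAP=`zfs list -H -t snapshot -o name -s creation | \
        grep "^$POOL@" | head -1 | cut -d@ -f2`
zfs destroy -r $POOL@$OLDSNAP                # recursive, as described above
zfs snapshot -r $POOL@$TODAY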
--
-G
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
> We have an IMAP e-mail server running on a Solaris 10 10/09 system.
> It uses six ZFS filesystems built on a single zpool with 14 daily
> snapshots. Every day at 11:56, a cron command destroys the oldest
> snapshots and
I'm not sure I like this at all. Some of my pools take hours to scrub. I have
a cron job run scrubs in sequence... Start one pool's scrub and then poll
until it's finished, start the next and wait, and so on so I don't create too
much load and bring all I/O to a crawl.
The job is launched on
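Roughly, the wrapper looks like this (an untested sketch; the pool names
are placeholders):
#!/bin/sh
for pool in tank backup archive; do
        zpool scrub $pool
        # wait for this scrub to finish before starting the next one
        while zpool status $pool | grep "scrub in progress" > /dev/null; do
                sleep 300
        done
done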
also with only two disks. It should be easy to
find a pair of 1U servers, but what's the smallest SAS array that's
available? Does it need an array controller? What's needed on the
servers to connect to it?
--
-Gary Mills--Unix Group--Computer and Network Servic
d
redundant SAS paths.
I plan to use ZFS everywhere, for the root filesystem and the shared
storage. The only exception will be UFS for /globaldevices .
--
-Gary Mills--Unix Group--Computer and Network Services-
On Thu, May 06, 2010 at 07:46:49PM -0700, Rob wrote:
> Hi Gary,
> I would not remove this line in /etc/system.
> We have been combatting this bug for a while now on our ZFS file
> system running JES Commsuite 7.
>
> I would be interested in finding out how you were able to pin p
s also some
> additional software setup for that configuration.
That would be the SATA interposer that does that.
--
-Gary Mills--Unix Group--Computer and Network Services-
r disk device to the zpool will double the bandwidth.
/var/log/syslog is quite large, reaching about 600 megabytes before
it's rotated. This takes place each night, with compression bringing
it down to about 70 megabytes. The server handles about 500,000
messages a day.
--
-Gary
I have seen this too.
I'm guessing you have SATA disks which are on an iSCSI target.
I'm also guessing you have used something like
iscsitadm create target --type raw -b /dev/dsk/c4t0d00 c4t0d0
i.e. you are not using a zfs shareiscsi property on a zfs volume but creating
the target from the device.
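For comparison, the zvol-backed approach would look roughly like this
(pool and volume names are only examples; shareiscsi is set on a zvol
instead of exporting a raw disk):
zfs create -V 100g tank/iscsivol
zfs set shareiscsi=on tank/iscsivol
iscsitadm list target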
like they were before, I assume it assembles everything the way it was before,
including the filesystem and such.
Or am I incorrect about this?
Gary
Thanks for the quick response. I appreciate it very much.
I understand how to do this on a normal pool, but are there any restrictions
for doing this on the root pool? Are there any grub issues?
Thanks,
Gary
ap': Device busy
cannot unmount '/space/log': Device busy
cannot unmount '/space/mysql': Device busy
2 filesystems upgraded
Do I have to shut down all the applications before upgrading the
filesystems? This is on a Solaris 10 5/09 system.
--
-Gary Mill
I had to disable twelve services before doing the upgrade and enable them
afterwards.
`fuser -c' is useful to identify the processes. Mapping them to
services can be difficult. The server is essentially down during the
upgrade.
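Something along these lines, with a made-up mount point, is what I mean:
fuser -c /space/mysql        # lists the PIDs with files open on that filesystem
ps -o pid,args -p <pid>      # inspect each PID from the fuser output
svcs -p                      # shows which processes belong to which SMF service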
For a root filesystem, you might have to boot off the failsafe
.
--
-Gary Mills--Unix Group--Computer and Network Services-
Code form of the Covered Software You distribute or
otherwise make available. You must inform recipients of any such
Covered Software in Executable form as to how they can obtain such
Covered Software in Source Code form in a reasonable manner on or
through a medium customarily u
that'd be a good idea for ZFS in
this situation, or if there's some NFS tuning that should be done when
dealing specifically with ZFS. Any advice would be greatly appreciated.
Thanks,
--
------
G
OpenSolaris testing tonight and see what
happens.
Thanks for the replies, appreciate the help!
On Tue, Oct 20, 2009 at 1:43 PM, Trevor Pretty wrote:
> Gary
>
> Were you measuring the Linux NFS write performance? It's well known that
> Linux can use NFS in a very "unsafe
Apple is known to strong-arm in licensing negotiations. I'd really like to
hear the straight talk about what transpired.
That's OK, it just means that I won't be using a Mac as a server.
is destroyed (as I describe
above), then the associated devices are also removed.
pushed.
It would be nice to see this information at:
http://hub.opensolaris.org/bin/view/Community+Group+on/126-130
but it hasn't changed since 23 October.
--
-Gary Mills--Unix Group--Computer and Network Services-
Will a scrub fix it? This is a
production system, so I want to be careful.
It's running Solaris 10 5/09 s10x_u7wos_08 X86.
--
-Gary Mills--Unix Group--Computer and Network Services-
July. This is an X4450 with ECC
memory. There were no disk errors reported. I suppose we can blame
the memory.
--
-Gary Mills--Unix Group--Computer and Network Services-
OpenSolaris advocacy in this arena
while the topic is hot.
Gary
It worked. After the scrub, there are no errors reported.
> >You might be able to identify these object numbers with zdb, but
> >I'm not sure how to do that.
>
> You can try to use zdb this way to check if these objects still exist
>
> zdb -d space/dcc 0x11e887
that we blocked it as a command. It saved us a lot of support
calls.
Gary
[ Dec 19 08:09:11 Executing start method ("/lib/svc/method/fs-local") ]
[ Dec 19 08:09:12 Method "start" exited with status 0 ]
Is a dependency missing?
--
-Gary Mills--Unix Group--Computer and Network Services-
red
The middle one seems to be the issue; I'd like to track down its source. Any
docs on how to do this?
Thanks,
Gary
Mattias Pantzare wrote:
On Sun, Jan 10, 2010 at 16:40, Gary Gendel wrote:
I've been using a 5-disk raidZ for years on an SXCE machine which I converted to
OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which
was fixed.
So, now I'm at OSOL snv_111b and I
er of disks raidz bug I
reported.
Looks like I've got to bite the bullet and upgrade to the dev tree and hope for
the best.
Gary
I assume it's been fixed by now. It may have only
affected the Oracle database.
I'd like to remove this line from /etc/system now, but I don't know
if it will have any adverse effect on ZFS or the Cyrus IMAP server
that runs on this machine. Does anyone know if ZFS uses large memory
pages?
On Mon, Jan 11, 2010 at 01:43:27PM -0600, Gary Mills wrote:
>
> This line was a workaround for bug 6642475 that had to do with
> searching for large contiguous pages. The result was high system
> time and slow response. I can't find any public information on this
> bu
Thanks for all the suggestions. Now for a strange tale...
I tried upgrading to dev 130 and, as expected, things did not go well. All
sorts of permission errors flew by during the upgrade stage and it would not
start X-windows. I've heard that things installed from the contrib and extras
rep
0:50:31 79661 7547 6 3525830G 32G
10:50:361K 117 9 105812 5344 1030G 32G
--
-Gary Mills--Unix Group--Computer and Network Services-
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
> On Tue, 12 Jan 2010, Gary Mills wrote:
> >
> >Is moving the databases (IMAP metadata) to a separate ZFS filesystem
> >likely to improve performance? I've heard that this is important, but
> >I
On Tue, Jan 12, 2010 at 01:56:57PM -0800, Richard Elling wrote:
> On Jan 12, 2010, at 12:37 PM, Gary Mills wrote:
>
> > On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
> >> On Tue, 12 Jan 2010, Gary Mills wrote:
> >>>
> >>> Is movin
On Thu, Jan 14, 2010 at 10:58:48AM +1100, Daniel Carosone wrote:
> On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote:
> > Yes, I understand that, but do filesystems have separate queues of any
> > sort within the ZIL?
>
> I'm not sure. If you can experi
On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote:
>
> Gary Mills writes:
> >
> > Yes, I understand that, but do filesystems have separate queues of any
> > sort within the ZIL? If not, would it help to put the database
> > filesystems into a separate zpool?
first filesystem but don't destroy the
snapshot. I want to do the opposite. Is this possible?
--
-Gary Mills--Unix Group--Computer and Network Services-
My guess is that the grub bootloader wasn't upgraded on the actual boot disk.
Search for directions on how to mirror ZFS boot drives and you'll see how to
copy the correct grub loader onto the boot disk.
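On x86 the command is usually something like this (the disk device below
is just a placeholder for the other half of your mirror):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0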
If you want to do this simpler, swap the disks. I did this when I was moving
from SXCE to
Is zdb still the only way to dive into the file system? I've seen the
extensive work by Max Bruning on this but wonder if there are any tools that
make this easier...?
-Gary
On Nov 23, 2011, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:
> did you see this link
Thank you for this. Some of the other refs it lists will come in handy as well.
kind regards,
Gary
What kind of drives are we talking about? Even SATA drives are
available according to application type (desktop, enterprise server,
home PVR, surveillance PVR, etc). Then there are drives with SAS &
fiber channel interfaces. Then you've got Winchester platters vs SSD
vs hybrids. But even before con
't utilizing ZFS in the
Linux version of the X2-2. Has that changed with the Solaris x86
versions of the appliance? Also, does OCZ or someone make an
equivalent to the F20 now?
-Gary
ally when there are no contiguous blocks available. Deleting
a snapshot provides some of these, but only temporarily.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
I can't comment on their 4U servers but HP's 1U and 2U included SAS
controllers rarely allow JBOD discovery of drives. So I'd recommend an
LSI card and an external storage chassis like those available from
Promise and others.
-Gary
hey've changed in the last couple of years...
The best you can do is try, but if you don't see each drive individually
you'll know it's by design and not a lack of skill on your part.
-Gary
's nothing in between.
Of course, if something outside of ZFS writes to the disk, then data
belonging to ZFS will be modified. I've heard of RAID controllers or
SAN devices doing this when they modify the disk geometry or reserved
areas on the disk.
--
-Gary Mi
folder whenever new messages arrived, making
that portion slow as well. Performance degraded when the storage
became 50% full. It would increase markedly when the oldest snapshot
was deleted.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
_
that
imported the zpool later during the reboot.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
ult
The zpool import (without the mount) is done earlier. Check to see
if any of the FC services run too late during the boot.
> As Gary and Bob mentioned, I saw this Issue with ISCSI Devices.
> Instead of export / import, would a zpool clear also work?
>
> mpathadm lis
available when the zpool import is
done during the boot. Check with Oracle support to see if they have
found a solution.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
It looks like the first iteration has finally launched...
http://tenscomplement.com/our-products/zevo-silver-edition
http://www.macrumors.com/2012/01/31/zfs-comes-to-os-x-courtesy-of-apples-former-chief-zfs-architect
I've seen a couple of sources that suggest prices should be dropping by
the end of April -- apparently not as low as pre-flood prices, due in
part to a rise in manufacturing costs, but about 10% lower than they're
priced today.
-Gary
? The largest file size I've used in my still-lengthy
benchmarks was 16GB. If you use the sizes you've proposed, it could
take several days or weeks to complete. Try a web search for "iozone
examples" if you want more details on the command switches.
-Gary
non-empty directory to result in a recursive rm... But if they
> really want hardlinks to directories, then yeah, that's horrible.
This all sounds like a good use for LD_PRELOAD and a tiny library
that intercepts and modernizes system calls.
--
-Gary Mills--refurb-
o this by
specifying the `cachefile' property on the command line. The `zpool'
man page describes how to do this.
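For example (the alternate cache file path and the pool name are made up):
# import without touching the default /etc/zfs/zpool.cache
zpool import -o cachefile=none tank
# or record the pool in an alternate cache file instead
zpool import -o cachefile=/etc/cluster/zpool.cache tank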
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
t LUNs that
behave as perfectly reliable virtual disks, guaranteed to be error
free. Almost all of the time, ZFS will find no errors. If ZFS does
find an error, there's no nice way to recover. Most commonly, this
happens when the SAN is powered down or rebooted while the ZFS host
is still runn
ck size of 512 bytes, even though the Netapp itself used a 4K
block size. This means that the filer was doing the block size
conversion, resulting in much more I/O than the ZFS layer intended.
The fact that Netapp does COW made this situation even worse.
My impression was that very few of their
On Dec 4, 2012, Eugen Leitl wrote:
> Either way I'll know the hardware support situation soon
> enough.
Have you tried contacting Sonnet?
-Gary
swap space for paging. Paging out
unused portions of an executing process from real memory to the swap
device is certainly beneficial. Swapping out complete processes is a
desperation move, but paging out most of an idle process is a good
thing.
--
-Gary Mi
On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
this list is going to get shut down by Oracle next month.
Whose description still reads, "everything ZFS running on illumos-based
di
On Mar 14, 2013, at 5:55 PM, Jim Klimov wrote:
> However, recently the VM "virtual hardware" clocks became way slow.
Does NTP help correct the guest's clock?
with the `dependency' and
`/dependency' pairs. It should also specify a `single_instance/' and
`transient' service. The method script can do whatever the mount
requires, such as creating the ramdisk.
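A method script for such a service might look roughly like this (the
ramdisk name, size, and mount point are assumptions):
#!/bin/sh
. /lib/svc/share/smf_include.sh
case "$1" in
start)
        ramdiskadm -a scratch 512m || exit $SMF_EXIT_ERR_FATAL
        yes | newfs /dev/rramdisk/scratch || exit $SMF_EXIT_ERR_FATAL
        mkdir -p /ramdisk
        mount /dev/ramdisk/scratch /ramdisk || exit $SMF_EXIT_ERR_FATAL
        ;;
esac
exit $SMF_EXIT_OK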
--
-Gary Mills--Unix Group--Computer and Network Services-
That's because ZFS does not have a way to handle a large class of
storage designs, specifically the ones with raw storage and disk
management being provided by reliable SAN devices.
--
-Gary Mills--Unix Group--Computer and Network Services-
users to /export/home
So, what are the appropriate commands for these steps?
Thanks,
Gary
Norm,
Thank you. I just wanted to double-check to make sure I didn't mess things up.
There were steps that had me head-scratching after reading the man page. I'll
spend a bit more time re-reading it using the steps outlined so I understand
them fully.
Gary
Looking at migrating zones built on an M8000 and M5000 to a new M9000. On the
M9000 we started building new deployments using ZFS. The environments on the
M8/M5 are UFS. These are whole root zones. They will use global zone resources.
Can this be done? Or would a ZFS migration be needed?
than
how to remove the files from the original rpool/export/home (non-mount-point)
rpool? I'm a bit nervous to do a:
zfs destroy rpool/export/home
Is this the correct and safe methodology?
Thanks,
Gary
ment with ZFS in this situation anyway because those aren't real
disks. Disk management all has to be done on the SAN storage device.
--
-Gary Mills--Unix Group--Computer and Network Services-
debatable issue, one that quickly
becomes exceedingly complex. The decision rests on probabilities
rather than certainties.
--
-Gary Mills--Unix Group--Computer and Network Services-
r at I/O performance.
--
-Gary Mills--Unix Group--Computer and Network Services-
ouldn't ZFS I/O scheduling interfere with I/O scheduling
already done by the storage device?
Is there any reason not to use one LUN per RAID group?
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote:
> On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills wrote:
> >
> > Is there any reason not to use one LUN per RAID group?
[...]
> In other words, if you build a zpool with one vdev of 10GB and
> another with two vde
y making it a separate dataset.
People forget (c), the ability to set different filesystem options on
/var. You might want to have `setuid=off' for improved security, for
example.
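For example (the dataset name is hypothetical and depends on your layout):
zfs set setuid=off rpool/ROOT/mybe/var    # a separate /var dataset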
--
-Gary Mills--Unix Group--Computer and Network Services-
les up to 48 SAS/SATA disk drives
* Up to 72 Gb/sec of total bandwidth
* Four x4-wide 3 Gb/sec SAS host/uplink ports (48 Gb/sec bandwidth)
* Two x4-wide 3 Gb/sec SAS expansion ports (24 Gb/sec bandwidth)
* Scales up to 48 drives
-
ding IOs because it
could distribute those IOs across the disks. It would, of course,
require a non-volatile cache to provide fast turnaround for writes.
--
-Gary Mills--Unix Group--Computer and Network Services-
e
zvol just another block device?
--
-Gary Mills--Unix Group--Computer and Network Services-
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote:
> On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills wrote:
> > The `lofiadm' man page describes how to export a file as a block
> > device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
> >
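Concretely, something like this is what I had in mind (the file path and
size are made up, and the lofi device number may differ):
mkfile 10m /export/fat.img
lofiadm -a /export/fat.img                         # prints e.g. /dev/lofi/1
mkfs -F pcfs -o nofdisk,size=20480 /dev/rlofi/1    # size is in 512-byte sectors
mount -F pcfs /dev/lofi/1 /mnt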
pool was last accessed by another system.' error, or
will the import succeed? Does the cache change the import behavior?
Does it recognize that the server is the same system? I don't want
to include the `-f' flag in the commands above when it's not needed.
--
-Gary Mills-
server lost power.
> Sent from my iPad
Sent from my Sun type 6 keyboard.
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Aug 29, 2011 at 05:24:18PM -0700, Richard Elling wrote:
> We use this method to implement NexentaStor HA-Cluster and, IIRC,
> Solaris Cluster uses shared cachefiles, too. More below...
Mine's a cluster too, with quite a simple design.
> On Aug 29, 2011, at 11:13 AM, Ga
both disks. I can boot
either one to get the same GRUB menu and the same default Nevada
build. I'm very impressed with how well ZFS and Live Upgrade work
together.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
the second path, we exported the pool. Then, we enabled mpxio and
imported the pool again. `zpool import' will show the new device names,
but you only need to specify the pool name. It worked perfectly.
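In outline, the procedure was something like this (the pool name is just
an example):
zpool export tank
stmsboot -e            # enable MPxIO; it asks for a reboot
# ...after the reboot:
zpool import           # shows the pool under its new device names
zpool import tank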
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
self-healing properties then make
>sure you have some kind of redundancy on zfs level regardless of your
>redundancy on the array.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
. You can also put several boot environments on the
same ZFS pool. So far, I've done upgrades to builds 95 and 96 that
way. My AMD box has two 80-gig SATA disks in a ZFS mirror. I'm
very impressed.
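The sequence I follow is roughly this (the BE name and the media path are
only examples):
lucreate -n snv_96
luupgrade -u -n snv_96 -s /mnt/snv_96_image
luactivate snv_96
init 6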
--
-Gary Mills--Unix Support--U of M Acad
' takes the blame all the time, in my experience, but
what does it mean? It likely has nothing to do with the filesystem.
Probably an application wrote incorrect information into a file.
--
-Gary Mills--Unix Support--U of M Academic Com
looked at
zpool I/O statistics when the backup is running, but there's nothing
clearly wrong.
I'm wondering if perhaps all the read activity by the backup system
is causing trouble with ZFS' caching. Is there some way to examine
this area?
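I suppose the ARC kstats are one place to start, something like:
kstat -m zfs -n arcstats | egrep 'hits|misses|size'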
--
-Gary Mills--Unix Support--U of
s another interesting place to look
> http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
Thanks. I'll review those documents.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Tue, Sep 30, 2008 at 10:32:50AM -0700, William D. Hathaway wrote:
> Gary -
>Besides the network questions...
Yes, I suppose I should see if traffic on the iSCSI network is
hitting a limit of some sort.
>What does your zpool status look like?
Pretty simple:
$ zpool status
of the best one.
Those references are for network tuning. I don't want to change
things blindly. How do I tell if they are necessary, that is, whether
the network is the bottleneck in the I/O system?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-