Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-22 Thread Gary Mills
swap space for paging. Paging out unused portions of an executing process from real memory to the swap device is certainly beneficial. Swapping out complete processes is a desperation move, but paging out most of an idle process is a good thing. -- -Gary Mi

Re: [zfs-discuss] LUN sizes

2012-10-29 Thread Gary Mills
ck size of 512 bytes, even though the Netapp itself used a 4K block size. This means that the filer was doing the block size conversion, resulting in much more I/O than the ZFS layer intended. The fact that Netapp does COW made this situation even worse. My impression was that very few of their

Re: [zfs-discuss] Zpool LUN Sizes

2012-10-28 Thread Gary Mills
t LUNs that behave as perfectly reliable virtual disks, guaranteed to be error free. Almost all of the time, ZFS will find no errors. If ZFS does find an error, there's no nice way to recover. Most commonly, this happens when the SAN is powered down or rebooted while the ZFS host is still runn

Re: [zfs-discuss] What happens when you rm zpool.cache?

2012-10-21 Thread Gary Mills
o this by specifying the `cachefile' property on the command line. The `zpool' man page describes how to do this. -- -Gary Mills--refurb--Winnipeg, Manitoba, Canada-
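A hedged sketch of how the `cachefile' property mentioned above is typically used (the pool name, device, and path here are invented for illustration; these commands need a live system):

```shell
# Create a pool with an alternate cache file, so the host does not
# auto-import it at boot (a common arrangement in cluster setups).
zpool create -o cachefile=/etc/cluster/tank.cache tank c0t1d0

# Or stop caching an existing pool's configuration entirely:
zpool set cachefile=none tank

# Import later by pointing at the same cache file:
zpool import -c /etc/cluster/tank.cache tank
```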

Re: [zfs-discuss] [developer] Re: History of EPERM for unlink() of directories on ZFS?

2012-06-26 Thread Gary Mills
non-empty directory to result in a recursive rm... But if they > really want hardlinks to directories, then yeah, that's horrible. This all sounds like a good use for LD_PRELOAD and a tiny library that intercepts and modernizes system calls. -- -Gary Mills--refurb-
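As a sketch of the LD_PRELOAD idea floated above — the interposed call and the EPERM-to-rmdir fallback are illustrative assumptions, not from the original post:

```c
/* shim.c: hypothetical interposer library.  Build as a shared object,
 *   cc -shared -fPIC -o shim.so shim.c -ldl
 * then run a legacy program as:  LD_PRELOAD=./shim.so prog */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <unistd.h>

int unlink(const char *path)
{
        static int (*real_unlink)(const char *);

        if (real_unlink == NULL)
                real_unlink = (int (*)(const char *))
                    dlsym(RTLD_NEXT, "unlink");

        int r = real_unlink(path);

        /* Old semantics let root unlink() a directory; ZFS returns
         * EPERM instead.  Map that case to rmdir() so a legacy
         * caller's directory removal keeps working. */
        if (r == -1 && errno == EPERM)
                return rmdir(path);
        return r;
}
```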

Re: [zfs-discuss] zfs and iscsi performance help

2012-01-27 Thread Gary Mills
available when the zpool import is done during the boot. Check with Oracle support to see if they have found a solution. -- -Gary Mills--refurb--Winnipeg, Manitoba, Canada- ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-26 Thread Gary Mills
ult The zpool import (without the mount) is done earlier. Check to see if any of the FC services run too late during the boot. > As Gary and Bob mentioned, I saw this Issue with ISCSI Devices. > Instead of export / import is a zpool clear also working? > > mpathadm lis

Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-24 Thread Gary Mills
that imported the zpool later during the reboot. -- -Gary Mills--refurb--Winnipeg, Manitoba, Canada-

Re: [zfs-discuss] zfs defragmentation via resilvering?

2012-01-16 Thread Gary Mills
folder whenever new messages arrived, making that portion slow as well. Performance degraded when the storage became 50% full. It would increase markedly when the oldest snapshot was deleted. -- -Gary Mills--refurb--Winnipeg, Manitoba, Canada-

Re: [zfs-discuss] Does raidzN actually protect against bitrot? If yes - how?

2012-01-15 Thread Gary Mills
's nothing in between. Of course, if something outside of ZFS writes to the disk, then data belonging to ZFS will be modified. I've heard of RAID controllers or SAN devices doing this when they modify the disk geometry or reserved areas on the disk. -- -Gary Mi

Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Gary Mills
ally when there are no contiguous blocks available. Deleting a snapshot provides some of these, but only temporarily. -- -Gary Mills--refurb--Winnipeg, Manitoba, Canada-

Re: [zfs-discuss] Does the zpool cache file affect import?

2011-08-29 Thread Gary Mills
On Mon, Aug 29, 2011 at 05:24:18PM -0700, Richard Elling wrote: > We use this method to implement NexentaStor HA-Cluster and, IIRC, > Solaris Cluster uses shared cachefiles, too. More below... Mine's a cluster too, with quite a simple design. > On Aug 29, 2011, at 11:13 AM, Ga

Re: [zfs-discuss] Does the zpool cache file affect import?

2011-08-29 Thread Gary Mills
server lost power. > Sent from my iPad Sent from my Sun type 6 keyboard. -- -Gary Mills--Unix Group--Computer and Network Services-

[zfs-discuss] Does the zpool cache file affect import?

2011-08-29 Thread Gary Mills
pool was last accessed by another system.' error, or will the import succeed? Does the cache change the import behavior? Does it recognize that the server is the same system? I don't want to include the `-f' flag in the commands above when it's not needed. -- -Gary Mills-

Re: [zfs-discuss] How create a FAT filesystem on a zvol?

2011-07-12 Thread Gary Mills
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote: > On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills wrote: > > The `lofiadm' man page describes how to export a file as a block > > device and then use `mkfs -F pcfs' to create a FAT filesystem on it. > >
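The lofiadm route discussed in this thread can be sketched roughly as follows (pool name, sizes, and device numbering are invented; the pcfs mkfs options are from memory of Solaris and should be checked against mkfs_pcfs(1M)):

```shell
# Create a 100 MB zvol, then put a FAT filesystem on it via lofi.
zfs create -V 100m tank/fatvol

# Export the zvol's block device through lofiadm; it prints the
# lofi device it assigned, e.g. /dev/lofi/1.
LOFI=$(lofiadm -a /dev/zvol/dsk/tank/fatvol)
RLOFI=/dev/rlofi/${LOFI##*/}

# 204800 sectors * 512 bytes = 100 MB; nofdisk skips the fdisk table.
mkfs -F pcfs -o nofdisk,size=204800 $RLOFI
mount -F pcfs $LOFI /mnt
```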

[zfs-discuss] How create a FAT filesystem on a zvol?

2011-07-10 Thread Gary Mills
e zvol just another block device? -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Gary Mills
ding IOs because it could distribute those IOs across the disks. It would, of course, require a non-volatile cache to provide fast turnaround for writes. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Gary Mills
les up to 48 SAS/SATA disk drives:
* Up to 72 Gb/sec of total bandwidth
* Four x4-wide 3 Gb/sec SAS host/uplink ports (48 Gb/sec bandwidth)
* Two x4-wide 3 Gb/sec SAS expansion ports (24 Gb/sec bandwidth)

Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Gary Mills
y making it a separate dataset. People forget (c), the ability to set different filesystem options on /var. You might want to have `setuid=off' for improved security, for example. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] One LUN per RAID group

2011-02-14 Thread Gary Mills
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote: > On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills wrote: > > > > Is there any reason not to use one LUN per RAID group? [...] > In other words, if you build a zpool with one vdev of 10GB and > another with two vde

[zfs-discuss] One LUN per RAID group

2011-02-14 Thread Gary Mills
ouldn't ZFS I/O scheduling interfere with I/O scheduling already done by the storage device? Is there any reason not to use one LUN per RAID group? -- -Gary Mills--Unix Group--Computer and Network Services-

[zfs-discuss] zpool-poolname has 99 threads

2011-01-31 Thread Gary Mills
r at I/O performance. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] Sliced iSCSI device for doing RAIDZ?

2010-09-24 Thread Gary Mills
debatable issue, one that quickly becomes exceedingly complex. The decision rests on probabilities rather than certainties. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] Sliced iSCSI device for doing RAIDZ?

2010-09-23 Thread Gary Mills
ment with ZFS in this situation anyway because those aren't real disks. Disk management all has to be done on the SAN storage device. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-22 Thread Gary Mills
's because ZFS does not have a way to handle a large class of storage designs, specifically the ones with raw storage and disk management being provided by reliable SAN devices. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Gary Mills
h the `dependency' and `/dependency' pairs. It should also specify a `single_instance/' and `transient' service. The method script can do whatever the mount requires, such as creating the ramdisk. -- -Gary Mills--Unix Group--Computer and Network Services-
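A minimal sketch of the manifest shape the post describes — the service name, method script, and dependency choice are invented for illustration; only the `dependency' pair, `single_instance/', and `transient' duration come from the text:

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="site-ramdisk">
  <service name="site/ramdisk-mount" type="service" version="1">
    <create_default_instance enabled="true"/>
    <single_instance/>
    <!-- Run after local filesystems are mounted. -->
    <dependency name="fs-local" grouping="require_all"
                restart_on="none" type="service">
      <service_fmri value="svc:/system/filesystem/local"/>
    </dependency>
    <exec_method type="method" name="start"
                 exec="/lib/svc/method/site-ramdisk start"
                 timeout_seconds="60"/>
    <exec_method type="method" name="stop" exec=":true"
                 timeout_seconds="60"/>
    <property_group name="startd" type="framework">
      <!-- transient: the method runs once; there is no daemon to watch. -->
      <propval name="duration" type="astring" value="transient"/>
    </property_group>
  </service>
</service_bundle>
```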

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Gary Mills
Code form of the Covered Software You distribute or otherwise make available. You must inform recipients of any such Covered Software in Executable form as to how they can obtain such Covered Software in Source Code form in a reasonable manner on or through a medium customarily u

[zfs-discuss] ZFS development moving behind closed doors

2010-08-13 Thread Gary Mills
. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] zfs upgrade unmounts filesystems

2010-07-29 Thread Gary Mills
sable twelve services before doing the upgrade and enable them afterwards. `fuser -c' is useful to identify the processes. Mapping them to services can be difficult. The server is essentially down during the upgrade. For a root filesystem, you might have to boot off the failsafe
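The identify-and-disable dance sketched above, with invented mount point and service names (any real upgrade would substitute its own):

```shell
# List the processes holding a mounted filesystem open:
fuser -c /space/mysql

# Inspect one of the reported PIDs to guess its owning service:
ps -o args= -p 1234

# Disable the owning services, upgrade, then re-enable:
svcadm disable svc:/application/database/mysql:default
zfs upgrade -a
svcadm enable svc:/application/database/mysql:default
```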

[zfs-discuss] zfs upgrade unmounts filesystems

2010-07-29 Thread Gary Mills
ap': Device busy cannot unmount '/space/log': Device busy cannot unmount '/space/mysql': Device busy 2 filesystems upgraded Do I have to shut down all the applications before upgrading the filesystems? This is on a Solaris 10 5/09 system. -- -Gary Mill

[zfs-discuss] ZFS disks hitting 100% busy

2010-06-07 Thread Gary Mills
r disk device to the zpool will double the bandwidth. /var/log/syslog is quite large, reaching about 600 megabytes before it's rotated. This takes place each night, with compression bringing it down to about 70 megabytes. The server handles about 500,000 messages a day. -- -Gary

Re: [zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?

2010-05-17 Thread Gary Mills
s also some > additional software setup for that configuration. That would be the SATA interposer that does that. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] Does ZFS use large memory pages?

2010-05-07 Thread Gary Mills
0% we seems to have quite a number of > issues, much the same as what you've had in the past, ps and prstats > hanging. > > are you able to tell me the IDR number that you applied? The IDR was only needed last year. Upgrading to Solaris 10 10/09 and applying the latest pa

[zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?

2010-05-03 Thread Gary Mills
d redundant SAS paths. I plan to use ZFS everywhere, for the root filesystem and the shared storage. The only exception will be UFS for /globaldevices . -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-26 Thread Gary Mills
also with only two disks. It should be easy to find a pair of 1U servers, but what's the smallest SAS array that's available? Does it need an array controller? What's needed on the servers to connect to it? -- -Gary Mills--Unix Group--Computer and Network Servic

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-11 Thread Gary Mills
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote: > We have an IMAP e-mail server running on a Solaris 10 10/09 system. > It uses six ZFS filesystems built on a single zpool with 14 daily > snapshots. Every day at 11:56, a cron command destroys the oldest > snapshots and

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-09 Thread Gary Mills
conds. > > Out of curiosity, how much physical memory does this system have? Mine has 64 GB of memory with the ARC limited to 32 GB. The Cyrus IMAP processes, thousands of them, use memory mapping extensively. I don't know if this design affects the snapshot recycle behavior. -- -G

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-09 Thread Gary Mills
On Mon, Mar 08, 2010 at 03:18:34PM -0500, Miles Nordin wrote: > >>>>> "gm" == Gary Mills writes: > > gm> destroys the oldest snapshots and creates new ones, both > gm> recursively. > > I'd be curious if you try taking the same snaps

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-05 Thread Gary Mills
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote: > We have an IMAP e-mail server running on a Solaris 10 10/09 system. > It uses six ZFS filesystems built on a single zpool with 14 daily > snapshots. Every day at 11:56, a cron command destroys the oldest > snapshots and

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-04 Thread Gary Mills
On Thu, Mar 04, 2010 at 07:51:13PM -0300, Giovanni Tirloni wrote: > >On Thu, Mar 4, 2010 at 7:28 PM, Ian Collins <...@ianshome.com> >wrote: > >Gary Mills wrote: > > We have an IMAP e-mail server running on a Solaris 10 10/09 system. >

[zfs-discuss] Snapshot recycle freezes system activity

2010-03-04 Thread Gary Mills
destroying old snapshots or creating new ones that causes this dead time? What does each of these procedures do that could affect the system? What can I do to make this less visible to users? -- -Gary Mills--Unix Group--Computer and Network Services
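The daily rotation described above can be sketched like this — the pool name and snapshot naming are invented; the recursive destroy/create pair is what the post describes:

```shell
POOL=space                                   # pool name invented
TODAY=$(date +%Y-%m-%d)

# Oldest pool-level snapshot (recursive snapshots share one name):
OLDEST=$(zfs list -H -t snapshot -o name -s creation -r $POOL |
         grep "^$POOL@" | head -1)

# Destroy it recursively, then take a new recursive snapshot.
[ -n "$OLDEST" ] && zfs destroy -r "$OLDEST"
zfs snapshot -r $POOL@$TODAY
```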

[zfs-discuss] Can I use a clone to split a filesystem?

2010-01-15 Thread Gary Mills
first filesystem but don't destroy the snapshot. I want to do the opposite. Is this possible? -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-14 Thread Gary Mills
On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote: > > Gary Mills writes: > > > > Yes, I understand that, but do filesystems have separate queues of any > > sort within the ZIL? If not, would it help to put the database > > filesystems into a separate zpool?

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-14 Thread Gary Mills
On Thu, Jan 14, 2010 at 10:58:48AM +1100, Daniel Carosone wrote: > On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote: > > Yes, I understand that, but do filesystems have separate queues of any > > sort within the ZIL? > > I'm not sure. If you can experi

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-13 Thread Gary Mills
On Tue, Jan 12, 2010 at 01:56:57PM -0800, Richard Elling wrote: > On Jan 12, 2010, at 12:37 PM, Gary Mills wrote: > > > On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote: > >> On Tue, 12 Jan 2010, Gary Mills wrote: > >>> > >>> Is movin

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Gary Mills
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote: > On Tue, 12 Jan 2010, Gary Mills wrote: > > > >Is moving the databases (IMAP metadata) to a separate ZFS filesystem > >likely to improve performance? I've heard that this is important, but > >I

[zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Gary Mills
[arcstat-style sample output elided: timestamped read and miss counts, with the ARC holding near 30G of a 32G cap] -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] Does ZFS use large memory pages?

2010-01-12 Thread Gary Mills
On Mon, Jan 11, 2010 at 01:43:27PM -0600, Gary Mills wrote: > > This line was a workaround for bug 6642475 that had to do with > searching for large contiguous pages. The result was high system > time and slow response. I can't find any public information on this > bu

[zfs-discuss] Does ZFS use large memory pages?

2010-01-11 Thread Gary Mills
ssume it's been fixed by now. It may have only affected Oracle database. I'd like to remove this line from /etc/system now, but I don't know if it will have any adverse effect on ZFS or the Cyrus IMAP server that runs on this machine. Does anyone know if ZFS uses large memory pages?

[zfs-discuss] ZFS filesystems not mounted on reboot with Solaris 10 10/09

2009-12-19 Thread Gary Mills
[ Dec 19 08:09:11 Executing start method ("/lib/svc/method/fs-local") ] [ Dec 19 08:09:12 Method "start" exited with status 0 ] Is a dependency missing? -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] Permanent errors on two files

2009-12-06 Thread Gary Mills
t worked. After the scrub, there are no errors reported. > >You might be able to identify these object numbers with zdb, but > >I'm not sure how do that. > > You can try to use zdb this way to check if these objects still exist > > zdb -d space/dcc 0x11e887
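The zdb check suggested in the quoted text, using the dataset and object number from this thread (whether zdb resolves the object depends on the pool state; treat this as a sketch):

```shell
# Look up the object number reported by 'zpool status -v':
zdb -d space/dcc 0x11e887

# If zdb reports no such object, the file is already gone; after a
# scrub completes, the permanent error should stop being listed.
zpool scrub space
zpool status -v space
```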

Re: [zfs-discuss] Permanent errors on two files

2009-12-06 Thread Gary Mills
July. This is an X4450 with ECC memory. There were no disk errors reported. I suppose we can blame the memory. -- -Gary Mills--Unix Group--Computer and Network Services-

[zfs-discuss] Permanent errors on two files

2009-12-04 Thread Gary Mills
Will a scrub fix it? This is a production system, so I want to be careful. It's running Solaris 10 5/09 s10x_u7wos_08 X86. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] ZFS + fsck

2009-11-05 Thread Gary Mills
pushed. It would be nice to see this information at: http://hub.opensolaris.org/bin/view/Community+Group+on/126-130 but it hasn't changed since 23 October. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-21 Thread Gary Mills
000 enabled e-mail accounts. > whether it is private or I can share in a summary > anything else that might be of interest You are welcome to share this information. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-09-15 Thread Gary Mills
without any lockups. Roll on > update 8! Was that IDR140221-17? That one fixed a deadlock bug for us back in May. -- -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-07 Thread Gary Mills
On Mon, Jul 06, 2009 at 04:54:16PM +0100, Andrew Gabriel wrote: > Andre van Eyssen wrote: > >On Mon, 6 Jul 2009, Gary Mills wrote: > > > >>As for a business case, we just had an extended and catastrophic > >>performance degradation that was the result of two ZFS

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Gary Mills
On Sat, Jul 04, 2009 at 07:18:45PM +0100, Phil Harman wrote: > Gary Mills wrote: > >On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote: > > > >>ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC > >>instead of the Solaris page ca

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Gary Mills
ing that we can do to optimize the two caches in this environment? Will mmap(2) one day play nicely with ZFS? -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-18 Thread Gary Mills
mance dropped considerably and the CPU consumption increased. Our problem was indirectly a result of fragmentation, but it was solved by a ZFS patch. I understand that this patch, which fixes a whole bunch of ZFS bugs, should be released soon. I wonder if this was your problem. -- -Gary Mills--Unix

Re: [zfs-discuss] What causes slow performance under load?

2009-05-13 Thread Gary Mills
On Mon, Apr 27, 2009 at 04:47:27PM -0500, Gary Mills wrote: > On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote: > > We have an IMAP server with ZFS for mailbox storage that has recently > > become extremely slow on most weekday mornings and afternoons. When > > o

Re: [zfs-discuss] What causes slow performance under load?

2009-04-27 Thread Gary Mills
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote: > We have an IMAP server with ZFS for mailbox storage that has recently > become extremely slow on most weekday mornings and afternoons. When > one of these incidents happens, the number of processes increases, the >

Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Gary Mills
On Sun, Apr 26, 2009 at 05:02:38PM -0500, Tim wrote: > >On Sun, Apr 26, 2009 at 3:52 PM, Gary Mills <mi...@cc.umanitoba.ca> >wrote: > > We run our IMAP spool on ZFS that's derived from LUNs on a Netapp > filer. There's a great dea

Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Gary Mills
read/write patterns, but that's unlikely) Since the LUN is just a large file on the Netapp, I assume that all it can do is to put the blocks back into sequential order. That might have some benefit overall. -- -Gary Mills--Unix Support--U o

[zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Gary Mills
using the same blocksize, so that they cooperate to some extent? -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

Re: [zfs-discuss] What is the 32 GB 2.5-Inch SATA Solid State Drive?

2009-04-25 Thread Gary Mills
On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote: > Gary Mills wrote: > >Does anyone know about this device? > > > >SESX3Y11Z 32 GB 2.5-Inch SATA Solid State Drive with Marlin Bracket > >for Sun SPARC Enterprise T5120, T5220, T5140 and T5240 Serve

[zfs-discuss] What is the 32 GB 2.5-Inch SATA Solid State Drive?

2009-04-24 Thread Gary Mills
e for ZFS? Is there any way I could use this in a T2000 server? The brackets appear to be different. -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

Re: [zfs-discuss] What causes slow performance under load?

2009-04-22 Thread Gary Mills
wa?A1=ind0904&L=HIED-EMAILADMIN > Thread: mail systems using ZFS filesystems? Thanks. Those problems do sound similar. I also see positive experiences with T2000 servers, ZFS, and Cyrus IMAP from UC Davis. None of the people involved seem to be active on either the ZFS mailing list or the C

Re: [zfs-discuss] What causes slow performance under load?

2009-04-21 Thread Gary Mills
th Niagara CPUs are affected. It has to do with kernel code for handling two different sizes of memory pages. You can find more information here: http://forums.sun.com/thread.jspa?threadID=5257060 Also, open a support case with Sun if you haven't already. -- -Gary Mills--Unix Suppo

Re: [zfs-discuss] What causes slow performance under load?

2009-04-20 Thread Gary Mills
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote: > We have an IMAP server with ZFS for mailbox storage that has recently > become extremely slow on most weekday mornings and afternoons. When > one of these incidents happens, the number of processes increases, the >

Re: [zfs-discuss] What causes slow performance under load?

2009-04-19 Thread Gary Mills
On Sat, Apr 18, 2009 at 11:45:54PM -0500, Mike Gerdts wrote: > [perf-discuss cc'd] > > On Sat, Apr 18, 2009 at 4:27 PM, Gary Mills wrote: > > Many other layers are involved in this server.  We use scsi_vhci for > > redundant I/O paths and Sun's Iscsi initiator t

Re: [zfs-discuss] What causes slow performance under load?

2009-04-19 Thread Gary Mills
On Sat, Apr 18, 2009 at 09:41:39PM -0500, Tim wrote: > >On Sat, Apr 18, 2009 at 9:01 PM, Gary Mills <mi...@cc.umanitoba.ca> >wrote: > > On Sat, Apr 18, 2009 at 06:53:30PM -0400, Ellis, Mike wrote: > > In case the writes are a problem: Wh

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
On Sat, Apr 18, 2009 at 06:06:49PM -0700, Richard Elling wrote: > [CC'ed to perf-discuss] > > Gary Mills wrote: > >We have an IMAP server with ZFS for mailbox storage that has recently > >become extremely slow on most weekday mornings and afternoons. When > >one

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
erent. Wouldn't it still read all of the same files, except for ones that were added after the snapshot was taken? > (You're not by chance using any type of ssh-transfers etc as part of > the backups are you) No, Networker uses RPC to connect to the backup server, but there's no

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
> Is something like jumbo packets interesting here? The Iscsi network is lightly utilized. It can't be limited by network bandwidth. There could be some other problem, of course. > Get some data out of fsstat, that could be helpful... What do I look for with fsstat? There are so many dif

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
On Sat, Apr 18, 2009 at 05:25:17PM -0500, Bob Friesenhahn wrote: > On Sat, 18 Apr 2009, Gary Mills wrote: > > >How do we determine which layer is responsible for the slow > >performance? > > If the ARC size is diminishing under heavy load then there must be > excessi

[zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
do we determine which layer is responsible for the slow performance? -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

Re: [zfs-discuss] Any news on ZFS bug 6535172?

2009-04-13 Thread Gary Mills
644A74d0 ONLINE 0 0 0
c4t60A98000433469764E4A2D456A696579d0 ONLINE 0 0 0
c4t60A98000433469764E4A476D2F6B385Ad0 ONLINE 0 0 0
c4t60A98000433469764E4A476D2F664E4Fd0 ONLINE 0 0 0
errors: No known data e

Re: [zfs-discuss] Any news on ZFS bug 6535172?

2009-04-12 Thread Gary Mills
far I haven't heard from him. Is there a way to determine this from the Iscsi initiator side? I do have a test mail server that I can play with. > That could make a big difference... > (Perhaps disabling the write-flush in zfs will make a big difference > here, especially on a wri

Re: [zfs-discuss] Any news on ZFS bug 6535172?

2009-04-12 Thread Gary Mills
[fsstat sample output elided: per-operation counts and byte totals for the zfs filesystems] -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

Re: [zfs-discuss] Any news on ZFS bug 6535172?

2009-04-12 Thread Gary Mills
On Sun, Apr 12, 2009 at 10:49:49AM -0700, Richard Elling wrote: > Gary Mills wrote: > >We're running a Cyrus IMAP server on a T2000 under Solaris 10 with > >about 1 TB of mailboxes on ZFS filesystems. Recently, when under > >load, we've had incidents where IMAP op

[zfs-discuss] Any news on ZFS bug 6535172?

2009-04-12 Thread Gary Mills
everal moderate-sized databases that are memory-mapped by all processes. I can move these from ZFS to UFS if this is likely to help. -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

Re: [zfs-discuss] Efficient backup of ZFS filesystems?

2009-04-10 Thread Gary Mills
On Thu, Apr 09, 2009 at 04:25:58PM +0200, Henk Langeveld wrote: > Gary Mills wrote: > >I've been watching the ZFS ARC cache on our IMAP server while the > >backups are running, and also when user activity is high. The two > >seem to conflict. Fast response for user

[zfs-discuss] Efficient backup of ZFS filesystems?

2009-04-06 Thread Gary Mills
g for longer-term backups. -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

[zfs-discuss] How to set a minimum ARC size?

2009-04-02 Thread Gary Mills
this case ZFS is starved for memory and the whole thing slows to a crawl. Is there a way to set a minimum ARC size so that this doesn't happen? We are going to upgrade the memory, but a lower limit on ARC size might still be a good idea. -- -Gary Mills--Unix Support--U of M Academic
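The usual knobs for this live in /etc/system; whether a hard floor is honored under memory pressure on that release is exactly what the post is asking, so treat the values below as illustrative only:

```
* /etc/system fragment (sizes in bytes; a reboot is required)
set zfs:zfs_arc_min = 0x100000000    * 4 GB floor for the ARC
set zfs:zfs_arc_max = 0x600000000    * 24 GB cap
```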

Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-04 Thread Gary Mills
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote: > Gary Mills wrote: > >On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote: > >>>>>>>"gm" == Gary Mills writes: > >>gm> I suppose my RFE for two-level ZFS should be included,

Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-04 Thread Gary Mills
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote: > >>>>> "gm" == Gary Mills writes: > > gm> I suppose my RFE for two-level ZFS should be included, > > Not that my opinion counts for much, but I wasn't deaf to it---I did > respo

Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-04 Thread Gary Mills
with ZFS on application servers. -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

Re: [zfs-discuss] RFE for two-level ZFS

2009-02-21 Thread Gary Mills
On Thu, Feb 19, 2009 at 12:36:22PM -0800, Brandon High wrote: > On Thu, Feb 19, 2009 at 6:18 AM, Gary Mills wrote: > > Should I file an RFE for this addition to ZFS? The concept would be > > to run ZFS on a file server, exporting storage to an application > > server where ZF

Re: [zfs-discuss] RFE for two-level ZFS

2009-02-20 Thread Gary Mills
On Thu, Feb 19, 2009 at 09:59:01AM -0800, Richard Elling wrote: > Gary Mills wrote: > >Should I file an RFE for this addition to ZFS? The concept would be > >to run ZFS on a file server, exporting storage to an application > >server where ZFS also runs on top of that

[zfs-discuss] RFE for two-level ZFS

2009-02-19 Thread Gary Mills
works around these problems. -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-12 Thread Gary Mills
type of safety measure that > needs to be implemented in ZFS if it is to support the average user > instead of just the IT professionals. That implies that ZFS will have to detect removable devices and treat them differently than fixed devices. It might have to be an option that can be enable

Re: [zfs-discuss] Two-level ZFS

2009-02-02 Thread Gary Mills
On Mon, Feb 02, 2009 at 09:53:15PM +0700, Fajar A. Nugraha wrote: > On Mon, Feb 2, 2009 at 9:22 PM, Gary Mills wrote: > > On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote: > >> If there are two (or more) instances of ZFS in the end-to-end data > >> path, each

Re: [zfs-discuss] Two-level ZFS

2009-02-02 Thread Gary Mills
ion. The configuration, with ZFS on both systems, redundancy only on the file server, and end-to-end error detection and correction, does not exist. What additions to ZFS are required to make this work? -- -Gary Mills--Unix Support--U of

[zfs-discuss] Two-level ZFS

2009-02-01 Thread Gary Mills
erver can identify the source of the data in the event of an error? Does this additional exchange of information fit into the Iscsi protocol, or does it have to flow out of band somehow? -- -Gary Mills--Unix Support--U of M Academic Computing and Netwo

[zfs-discuss] What are the usual suspects in data errors?

2009-01-14 Thread Gary Mills
other errors can ZFS checksums reasonably detect? Certainly if some of the other error checking failed to detect an error, ZFS would still detect one. How likely are these other error checks to fail? Is there anything else I've missed in this analysis? -- -Gary Mills--Unix Support--U of

Re: [zfs-discuss] snapshot before patching..

2008-12-30 Thread Gary Mills
init 6'. > And how/what do I do to reverse to the non-patched system in case > something goes terribly wrong? ;-) Just revert to the old BE. -- -Gary Mills--Unix Support--U of M Academic Computing and Networking-
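The Live Upgrade flow implied here, sketched with invented boot-environment names (check luupgrade(1M) for the exact patching flags on your release):

```shell
# Clone the current BE (cheap on a ZFS root):
lucreate -n patched

# Apply patches to the clone rather than the live system:
luupgrade -t -n patched -s /var/spool/patches <patch-id>

# Activate and boot the patched BE:
luactivate patched && init 6

# If something goes terribly wrong, fall back to the old BE:
luactivate original && init 6
```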

Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Gary Mills
, /dev/dsk/c2d0s0 seems to be over-defined now. If you give `zpool' a complete disk, by omitting the slice part, it will write its own label to the drive. If you specify it with a slice, it expects that you have already defined that slice. For a root pool, it has to be a slice. -- -Gary Mi
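The whole-disk versus slice distinction above, as commands (pool and device names invented):

```shell
# Whole disk: zpool writes its own (EFI) label to the device --
# fine for data pools, but not allowed for a root pool.
zpool create tank c2d0

# Root pool: must be a slice you have already defined with
# format(1M) under an SMI label; here s0 spans the disk.
zpool create rpool c2d0s0
```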

Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Gary Mills
On Sat, Dec 20, 2008 at 03:52:46AM -0800, Uwe Dippel wrote: > This might sound sooo simple, but it isn't. I read the ZFS Administration > Guide and it did not give an answer; at least no simple answer, simple enough > for me to understand. > The intention is to follow the thread "Easiest way to r

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-12 Thread Gary Mills
On Fri, Dec 12, 2008 at 04:30:51PM +1300, Ian Collins wrote: > Gary Mills wrote: > > The split responsibility model is quite appealing. I'd like to see > > ZFS address this model. Is there not a way that ZFS could delegate > > responsibility for both error detection and
