swap space for paging. Paging out
unused portions of an executing process from real memory to the swap
device is certainly beneficial. Swapping out complete processes is a
desperation move, but paging out most of an idle process is a good
thing.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
ck size of 512 bytes, even though the Netapp itself used a 4K
block size. This meant that the filer was doing the block-size
conversion, resulting in much more I/O than the ZFS layer intended.
The fact that Netapp does COW made this situation even worse.
My impression was that very few of their
t LUNs that
behave as perfectly reliable virtual disks, guaranteed to be error
free. Almost all of the time, ZFS will find no errors. If ZFS does
find an error, there's no nice way to recover. Most commonly, this
happens when the SAN is powered down or rebooted while the ZFS host
is still runn
o this by
specifying the `cachefile' property on the command line. The `zpool'
man page describes how to do this.
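Roughly like this (untested; the pool name and cache file path are only
examples):

    zpool create -o cachefile=/etc/cluster/zpool.cache tank c0t0d0
    zpool set cachefile=/etc/cluster/zpool.cache tank     # or set it on an existing pool
    zpool import -o cachefile=none tank                   # or keep the pool out of any cache file

A pool with a non-default cachefile setting is not imported automatically
at boot, so a cluster agent or an SMF service has to import it explicitly.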
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
non-empty directory to result in a recursive rm... But if they
> really want hardlinks to directories, then yeah, that's horrible.
This all sounds like a good use for LD_PRELOAD and a tiny library
that intercepts and modernizes system calls.
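Something along these lines, with made-up names:

    cc -Kpic -G -o fixup.so fixup.c           # fixup.c defines its own link(), unlink(), ...
    LD_PRELOAD=./fixup.so /usr/bin/legacy-app

Each interposed function would adjust the old semantics and then call the
real routine through dlsym(RTLD_NEXT, ...).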
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
available when the zpool import is
done during the boot. Check with Oracle support to see if they have
found a solution.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
ult
The zpool import (without the mount) is done earlier. Check to see
if any of the FC services run too late during the boot.
> As Gary and Bob mentioned, I saw this Issue with ISCSI Devices.
> Instead of export / import is a zpool clear also working?
>
> mpathadm lis
that
imported the zpool later during the reboot.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
folder whenever new messages arrived, making
that portion slow as well. Performance degraded when the storage
became 50% full. Performance would improve markedly when the oldest
snapshot was deleted.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
's nothing in between.
Of course, if something outside of ZFS writes to the disk, then data
belonging to ZFS will be modified. I've heard of RAID controllers or
SAN devices doing this when they modify the disk geometry or reserved
areas on the disk.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
ally when there are no contiguous blocks available. Deleting
a snapshot provides some of these, but only temporarily.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
On Mon, Aug 29, 2011 at 05:24:18PM -0700, Richard Elling wrote:
> We use this method to implement NexentaStor HA-Cluster and, IIRC,
> Solaris Cluster uses shared cachefiles, too. More below...
Mine's a cluster too, with quite a simple design.
> On Aug 29, 2011, at 11:13 AM, Ga
server lost power.
> Sent from my iPad
Sent from my Sun type 6 keyboard.
--
-Gary Mills--Unix Group--Computer and Network Services-
pool was last accessed by another system.' error, or
will the import succeed? Does the cache change the import behavior?
Does it recognize that the server is the same system? I don't want
to include the `-f' flag in the commands above when it's not needed.
--
-Gary Mills--Unix Group--Computer and Network Services-
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote:
> On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills wrote:
> > The `lofiadm' man page describes how to export a file as a block
> > device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
> >
e
zvol just another block device?
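What I have in mind is something like this (untested; names and sizes are
only examples):

    # via lofi, as the man page describes
    mkfile 100m /export/fat.img
    lofiadm -a /export/fat.img                  # prints a device such as /dev/lofi/1
    mkfs -F pcfs -o nofdisk,size=204800 /dev/rlofi/1

    # or directly on a zvol, if it really is just another block device
    zfs create -V 100m rpool/fatvol
    mkfs -F pcfs -o nofdisk,size=204800 /dev/zvol/rdsk/rpool/fatvol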
--
-Gary Mills--Unix Group--Computer and Network Services-
ding IOs because it
could distribute those IOs across the disks. It would, of course,
require a non-volatile cache to provide fast turnaround for writes.
--
-Gary Mills--Unix Group--Computer and Network Services-
les up to 48 SAS/SATA disk drives
# Provides up to 72 Gb/sec of total bandwidth
* Up to 72 Gb/sec of total bandwidth
* Four x4-wide 3 Gb/sec SAS host/uplink ports (48 Gb/sec bandwidth)
* Two x4-wide 3 Gb/sec SAS expansion ports (24 Gb/sec bandwidth)
* Scales up to 48 drives
-
y making it a separate dataset.
People forget (c), the ability to set different filesystem options on
/var. You might want to have `setuid=off' for improved security, for
example.
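For instance (the dataset names are only illustrative):

    zfs create -o setuid=off rpool/ROOT/s10be/var    # at creation time
    zfs set setuid=off tank/var                      # or later, on an existing dataset

None of that is possible when /var is just a directory in the root
filesystem.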
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote:
> On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills wrote:
> >
> > Is there any reason not to use one LUN per RAID group?
[...]
> In other words, if you build a zpool with one vdev of 10GB and
> another with two vde
ouldn't ZFS I/O scheduling interfere with I/O scheduling
already done by the storage device?
Is there any reason not to use one LUN per RAID group?
--
-Gary Mills--Unix Group--Computer and Network Services-
r at I/O performance.
--
-Gary Mills--Unix Group--Computer and Network Services-
debatable issue, one that quickly
becomes exceedingly complex. The decision rests on probabilities
rather than certainties.
--
-Gary Mills--Unix Group--Computer and Network Services-
ment with ZFS in this situation anyway because those aren't real
disks. Disk management all has to be done on the SAN storage device.
--
-Gary Mills--Unix Group--Computer and Network Services-
's because ZFS does not have a way to handle a large class of
storage designs, specifically the ones with raw storage and disk
management being provided by reliable SAN devices.
--
-Gary Mills--Unix Group--Computer and Network Services-
h the `dependency' and
`/dependency' pairs. It should also specify a `single_instance/' and
`transient' service. The method script can do whatever the mount
requires, such as creating the ramdisk.
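The method script itself can be small. A sketch, untested, with made-up
names and sizes:

    #!/sbin/sh
    # Create a ramdisk, build a filesystem on it, and mount it.
    . /lib/svc/share/smf_include.sh

    case "$1" in
    start)
            ramdiskadm -a scratch 512m || exit $SMF_EXIT_ERR_FATAL
            newfs /dev/rramdisk/scratch </dev/null || exit $SMF_EXIT_ERR_FATAL
            mount /dev/ramdisk/scratch /scratch || exit $SMF_EXIT_ERR_FATAL
            ;;
    stop)
            umount /scratch
            ramdiskadm -d scratch
            ;;
    esac
    exit $SMF_EXIT_OK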
--
-Gary Mills--Unix Group--Computer and Network Services-
Code form of the Covered Software You distribute or
otherwise make available. You must inform recipients of any such
Covered Software in Executable form as to how they can obtain such
Covered Software in Source Code form in a reasonable manner on or
through a medium customarily u
.
--
-Gary Mills--Unix Group--Computer and Network Services-
sable
twelve services before doing the upgrade and enable them afterwards.
`fuser -c' is useful to identify the processes. Mapping them to
services can be difficult. The server is essentially down during the
upgrade.
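A rough sketch of that mapping, with the filesystem and service names only
as examples:

    fuser -c /space/mysql 2>/dev/null                  # PIDs on stdout, access codes on stderr
    ps -fp "`fuser -c /space/mysql 2>/dev/null`"       # what those processes are
    svcs -p svc:/application/database/mysql:default    # confirm which service owns them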
For a root filesystem, you might have to boot off the failsafe
ap': Device busy
cannot unmount '/space/log': Device busy
cannot unmount '/space/mysql': Device busy
2 filesystems upgraded
Do I have to shut down all the applications before upgrading the
filesystems? This is on a Solaris 10 5/09 system.
--
-Gary Mills--Unix Group--Computer and Network Services-
r disk device to the zpool will double the bandwidth.
/var/log/syslog is quite large, reaching about 600 megabytes before
it's rotated. This takes place each night, with compression bringing
it down to about 70 megabytes. The server handles about 500,000
messages a day.
--
-Gary Mills--Unix Group--Computer and Network Services-
s also some
> additional software setup for that configuration.
That would be the SATA interposer that does that.
--
-Gary Mills--Unix Group--Computer and Network Services-
0% we seems to have quite a number of
> issues, much the same as what you've had in the past, ps and prstats
> hanging.
>
> are you able to tell me the IDR number that you applied?
The IDR was only needed last year. Upgrading to Solaris 10 10/09
and applying the latest pa
d
redundant SAS paths.
I plan to use ZFS everywhere, for the root filesystem and the shared
storage. The only exception will be UFS for /globaldevices .
--
-Gary Mills--Unix Group--Computer and Network Services-
also with only two disks. It should be easy to
find a pair of 1U servers, but what's the smallest SAS array that's
available? Does it need an array controller? What's needed on the
servers to connect to it?
--
-Gary Mills--Unix Group--Computer and Network Services-
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
> We have an IMAP e-mail server running on a Solaris 10 10/09 system.
> It uses six ZFS filesystems built on a single zpool with 14 daily
> snapshots. Every day at 11:56, a cron command destroys the oldest
> snapshots and
conds.
>
> Out of curiosity, how much physical memory does this system have?
Mine has 64 GB of memory with the ARC limited to 32 GB. The Cyrus
IMAP processes, thousands of them, use memory mapping extensively.
I don't know if this design affects the snapshot recycle behavior.
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Mar 08, 2010 at 03:18:34PM -0500, Miles Nordin wrote:
> >>>>> "gm" == Gary Mills writes:
>
> gm> destroys the oldest snapshots and creates new ones, both
> gm> recursively.
>
> I'd be curious if you try taking the same snaps
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
> We have an IMAP e-mail server running on a Solaris 10 10/09 system.
> It uses six ZFS filesystems built on a single zpool with 14 daily
> snapshots. Every day at 11:56, a cron command destroys the oldest
> snapshots and
On Thu, Mar 04, 2010 at 07:51:13PM -0300, Giovanni Tirloni wrote:
>
>On Thu, Mar 4, 2010 at 7:28 PM, Ian Collins <[1]...@ianshome.com>
>wrote:
>
>Gary Mills wrote:
>
> We have an IMAP e-mail server running on a Solaris 10 10/09 system.
>
destroying old snapshots or creating new ones that causes this
dead time? What does each of these procedures do that could affect
the system? What can I do to make this less visible to users?
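For reference, the cron job amounts to roughly this, with the names only as
examples:

    zfs snapshot -r space@2010-03-08
    zfs destroy -r space@2010-02-22        # the oldest of the 14 dailies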
--
-Gary Mills--Unix Group--Computer and Network Services-
first filesystem but don't destroy the
snapshot. I want to do the opposite. Is this possible?
--
-Gary Mills--Unix Group--Computer and Network Services-
On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote:
>
> Gary Mills writes:
> >
> > Yes, I understand that, but do filesystems have separate queues of any
> > sort within the ZIL? If not, would it help to put the database
> > filesystems into a separate zpool?
On Thu, Jan 14, 2010 at 10:58:48AM +1100, Daniel Carosone wrote:
> On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote:
> > Yes, I understand that, but do filesystems have separate queues of any
> > sort within the ZIL?
>
> I'm not sure. If you can experi
On Tue, Jan 12, 2010 at 01:56:57PM -0800, Richard Elling wrote:
> On Jan 12, 2010, at 12:37 PM, Gary Mills wrote:
>
> > On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
> >> On Tue, 12 Jan 2010, Gary Mills wrote:
> >>>
> >>> Is movin
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
> On Tue, 12 Jan 2010, Gary Mills wrote:
> >
> >Is moving the databases (IMAP metadata) to a separate ZFS filesystem
> >likely to improve performance? I've heard that this is important, but
> >I
0:50:31 79661 7547 6 3525830G 32G
10:50:361K 117 9 105812 5344 1030G 32G
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Jan 11, 2010 at 01:43:27PM -0600, Gary Mills wrote:
>
> This line was a workaround for bug 6642475 that had to do with
> searching for large contiguous pages. The result was high system
> time and slow response. I can't find any public information on this
> bu
ssume it's been fixed by now. It may have only
affected Oracle database.
I'd like to remove this line from /etc/system now, but I don't know
if it will have any adverse effect on ZFS or the Cyrus IMAP server
that runs on this machine. Does anyone know if ZFS uses large memory
pages?
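One thing I can check is which page sizes are actually in use; the process
chosen is only an example:

    pagesize -a                   # page sizes the platform supports
    pmap -xs `pgrep -o imapd`     # per-segment page sizes for one imapd process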
[ Dec 19 08:09:11 Executing start method ("/lib/svc/method/fs-local") ]
[ Dec 19 08:09:12 Method "start" exited with status 0 ]
Is a dependency missing?
--
-Gary Mills--Unix Group--Computer and Network Services-
t worked. After the scrub, there are no errors reported.
> >You might be able to identify these object numbers with zdb, but
> >I'm not sure how do that.
>
> You can try to use zdb this way to check if these objects still exist
>
> zdb -d space/dcc 0x11e887
July. This is an X4450 with ECC
memory. There were no disk errors reported. I suppose we can blame
the memory.
--
-Gary Mills--Unix Group--Computer and Network Services-
Will a scrub fix it? This is a
production system, so I want to be careful.
It's running Solaris 10 5/09 s10x_u7wos_08 X86.
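My understanding is that a scrub can be started and stopped safely; the
pool name here is only an example:

    zpool scrub space         # runs in the background
    zpool status -v space     # watch progress; errors are listed when it completes
    zpool scrub -s space      # stop it if the load becomes a problem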
--
-Gary Mills--Unix Group--Computer and Network Services-
pushed.
It would be nice to see this information at:
http://hub.opensolaris.org/bin/view/Community+Group+on/126-130
but it hasn't changed since 23 October.
--
-Gary Mills--Unix Group--Computer and Network Services-
000 enabled e-mail accounts.
> whether it is private or I can share in a summary
> anything else that might be of interest
You are welcome to share this information.
--
-Gary Mills--Unix Group--Computer and Network Services-
without any lockups. Roll on
> update 8!
Was that IDR140221-17? That one fixed a deadlock bug for us back
in May.
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Jul 06, 2009 at 04:54:16PM +0100, Andrew Gabriel wrote:
> Andre van Eyssen wrote:
> >On Mon, 6 Jul 2009, Gary Mills wrote:
> >
> >>As for a business case, we just had an extended and catastrophic
> >>performance degradation that was the result of two ZFS
On Sat, Jul 04, 2009 at 07:18:45PM +0100, Phil Harman wrote:
> Gary Mills wrote:
> >On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote:
> >
> >>ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC
> >>instead of the Solaris page ca
ing that we can do to
optimize the two caches in this environment? Will mmap(2) one day
play nicely with ZFS?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
mance dropped considerably and
the CPU consumption increased. Our problem was indirectly a result of
fragmentation, but it was solved by a ZFS patch. I understand that
this patch, which fixes a whole bunch of ZFS bugs, should be released
soon. I wonder if this was your problem.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Mon, Apr 27, 2009 at 04:47:27PM -0500, Gary Mills wrote:
> On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
> > We have an IMAP server with ZFS for mailbox storage that has recently
> > become extremely slow on most weekday mornings and afternoons. When
> > o
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
> We have an IMAP server with ZFS for mailbox storage that has recently
> become extremely slow on most weekday mornings and afternoons. When
> one of these incidents happens, the number of processes increases, the
>
On Sun, Apr 26, 2009 at 05:02:38PM -0500, Tim wrote:
>
>On Sun, Apr 26, 2009 at 3:52 PM, Gary Mills <[1]mi...@cc.umanitoba.ca>
>wrote:
>
> We run our IMAP spool on ZFS that's derived from LUNs on a Netapp
> filer. There's a great dea
read/write patterns, but that's unlikely)
Since the LUN is just a large file on the Netapp, I assume that all
it can do is to put the blocks back into sequential order. That might
have some benefit overall.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
using the same blocksize, so that they
cooperate to some extent?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote:
> Gary Mills wrote:
> >Does anyone know about this device?
> >
> >SESX3Y11Z 32 GB 2.5-Inch SATA Solid State Drive with Marlin Bracket
> >for Sun SPARC Enterprise T5120, T5220, T5140 and T5240 Serve
e for ZFS? Is there any way I could use this in
a T2000 server? The brackets appear to be different.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
wa?A1=ind0904&L=HIED-EMAILADMIN
> Thread: mail systems using ZFS filesystems?
Thanks. Those problems do sound similar. I also see positive
experiences with T2000 servers, ZFS, and Cyrus IMAP from UC Davis.
None of the people involved seem to be active on either the ZFS
mailing list or the C
th Niagara CPUs are affected. It has to do with kernel
code for handling two different sizes of memory pages. You can find
more information here:
http://forums.sun.com/thread.jspa?threadID=5257060
Also, open a support case with Sun if you haven't already.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
> We have an IMAP server with ZFS for mailbox storage that has recently
> become extremely slow on most weekday mornings and afternoons. When
> one of these incidents happens, the number of processes increases, the
>
On Sat, Apr 18, 2009 at 11:45:54PM -0500, Mike Gerdts wrote:
> [perf-discuss cc'd]
>
> On Sat, Apr 18, 2009 at 4:27 PM, Gary Mills wrote:
> > Many other layers are involved in this server. We use scsi_vhci for
> > redundant I/O paths and Sun's Iscsi initiator t
On Sat, Apr 18, 2009 at 09:41:39PM -0500, Tim wrote:
>
>On Sat, Apr 18, 2009 at 9:01 PM, Gary Mills <[1]mi...@cc.umanitoba.ca>
>wrote:
>
> On Sat, Apr 18, 2009 at 06:53:30PM -0400, Ellis, Mike wrote:
> > In case the writes are a problem: Wh
On Sat, Apr 18, 2009 at 06:06:49PM -0700, Richard Elling wrote:
> [CC'ed to perf-discuss]
>
> Gary Mills wrote:
> >We have an IMAP server with ZFS for mailbox storage that has recently
> >become extremely slow on most weekday mornings and afternoons. When
> >one
erent. Wouldn't it still read all of the same files,
except for ones that were added after the snapshot was taken?
> (You're not by chance using any type of ssh-transfers etc as part of
> the backups are you)
No, Networker uses RPC to connect to the backup server, but there's no
> Is something like jumbo packets interesting here?
The Iscsi network is lightly utilized. It can't be limited by
network bandwidth. There could be some other problem, of course.
> Get some data out of fsstat, that could be helpful...
What do I look for with fsstat? There are so many dif
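Presumably something like this, with the mount point only as an example:

    fsstat zfs 5                  # all ZFS activity, 5-second intervals
    fsstat /var/spool/imap 5      # or a single mount point

The read and write operation and byte counts would seem to be the columns
to watch.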
On Sat, Apr 18, 2009 at 05:25:17PM -0500, Bob Friesenhahn wrote:
> On Sat, 18 Apr 2009, Gary Mills wrote:
>
> >How do we determine which layer is responsible for the slow
> >performance?
>
> If the ARC size is diminishing under heavy load then there must be
> excessi
do we determine which layer is responsible for the slow performance?
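Some low-impact probes that could be run while it's happening (the
intervals are arbitrary):

    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c 5    # is the ARC shrinking under load?
    iostat -xn 5                                       # are the LUNs slow to respond (asvc_t)?
    prstat -mL 5                                       # or is it CPU or lock contention?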
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
644A74d0 ONLINE 0 0 0
c4t60A98000433469764E4A2D456A696579d0 ONLINE 0 0 0
c4t60A98000433469764E4A476D2F6B385Ad0 ONLINE 0 0 0
c4t60A98000433469764E4A476D2F664E4Fd0 ONLINE 0 0 0
errors: No known data e
far I haven't heard from
him. Is there a way to determine this from the Iscsi initiator
side? I do have a test mail server that I can play with.
> That could make a big difference...
> (Perhaps disabling the write-flush in zfs will make a big difference
> here, especially on a wri
2.8K 108 287K zfs
7 2 7 1.98K 3 5.87K 0 7 67.5K 108 2.34M zfs
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Sun, Apr 12, 2009 at 10:49:49AM -0700, Richard Elling wrote:
> Gary Mills wrote:
> >We're running a Cyrus IMAP server on a T2000 under Solaris 10 with
> >about 1 TB of mailboxes on ZFS filesystems. Recently, when under
> >load, we've had incidents where IMAP op
everal moderate-sized databases that are
memory-mapped by all processes. I can move these from ZFS to UFS if
this is likely to help.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Thu, Apr 09, 2009 at 04:25:58PM +0200, Henk Langeveld wrote:
> Gary Mills wrote:
> >I've been watching the ZFS ARC cache on our IMAP server while the
> >backups are running, and also when user activity is high. The two
> >seem to conflict. Fast response for user
g for longer-term backups.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
this case ZFS is starved for memory
and the whole thing slows to a crawl. Is there a way to set a
minimum ARC size so that this doesn't happen?
We are going to upgrade the memory, but a lower limit on ARC size
might still be a good idea.
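The only knob I've seen mentioned is the `zfs_arc_min' tunable in
/etc/system, something like this (the value is only an example, 4 GB, and
takes effect at the next boot):

    set zfs:zfs_arc_min=0x100000000

I don't know whether that's the recommended approach.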
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote:
> Gary Mills wrote:
> >On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
> >>>>>>>"gm" == Gary Mills writes:
> >>gm> I suppose my RFE for two-level ZFS should be included,
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
> >>>>> "gm" == Gary Mills writes:
>
> gm> I suppose my RFE for two-level ZFS should be included,
>
> Not that my opinion counts for much, but I wasn't deaf to it---I did
> respo
with ZFS on application
servers.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Thu, Feb 19, 2009 at 12:36:22PM -0800, Brandon High wrote:
> On Thu, Feb 19, 2009 at 6:18 AM, Gary Mills wrote:
> > Should I file an RFE for this addition to ZFS? The concept would be
> > to run ZFS on a file server, exporting storage to an application
> > server where ZF
On Thu, Feb 19, 2009 at 09:59:01AM -0800, Richard Elling wrote:
> Gary Mills wrote:
> >Should I file an RFE for this addition to ZFS? The concept would be
> >to run ZFS on a file server, exporting storage to an application
> >server where ZFS also runs on top of that
works around these problems.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
type of safety measure that
> needs to be implemented in ZFS if it is to support the average user
> instead of just the IT professionals.
That implies that ZFS will have to detect removable devices and treat
them differently than fixed devices. It might have to be an option
that can be enable
On Mon, Feb 02, 2009 at 09:53:15PM +0700, Fajar A. Nugraha wrote:
> On Mon, Feb 2, 2009 at 9:22 PM, Gary Mills wrote:
> > On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
> >> If there are two (or more) instances of ZFS in the end-to-end data
> >> path, each
ion. The
configuration, with ZFS on both systems, redundancy only on the
file server, and end-to-end error detection and correction, does
not exist. What additions to ZFS are required to make this work?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
erver can identify the source
of the data in the event of an error?
Does this additional exchange of information fit into the Iscsi
protocol, or does it have to flow out of band somehow?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
other errors can ZFS checksums
reasonably detect? Certainly if some of the other error checking
failed to detect an error, ZFS would still detect one. How likely
are these other error checks to fail?
Is there anything else I've missed in this analysis?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
init 6'.
> And how/what do I do to reverse to the non-patched system in case
> something goes terribly wrong? ;-)
Just revert to the old BE.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
, /dev/dsk/c2d0s0 seems to be over-defined now.
If you give `zpool' a complete disk, by omitting the slice part, it
will write its own label to the drive. If you specify it with a
slice, it expects that you have already defined that slice. For a
root pool, it has to be a slice.
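For example, with made-up pool names and the device from above:

    zpool create data c2d0        # whole disk: zpool writes its own EFI label
    zpool create rpool c2d0s0     # slice: the label and slice must already exist

A root pool needs the second form, on an SMI-labeled slice.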
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Sat, Dec 20, 2008 at 03:52:46AM -0800, Uwe Dippel wrote:
> This might sound sooo simple, but it isn't. I read the ZFS Administration
> Guide and it did not give an answer; at least no simple answer, simple enough
> for me to understand.
> The intention is to follow the thread "Easiest way to r
On Fri, Dec 12, 2008 at 04:30:51PM +1300, Ian Collins wrote:
> Gary Mills wrote:
> > The split responsibility model is quite appealing. I'd like to see
> > ZFS address this model. Is there not a way that ZFS could delegate
> > responsibility for both error detection and