> Hello,
>
> I came across this blog post:
>
> http://kevinclosson.wordpress.com/2007/03/15/copying-files-on-solaris-slow-or-fast-its-your-choice/
>
> and would like to hear from you performance gurus how this 2007
> article relates to the 2010 ZFS implementation? What should I use and
> why?
> Hi--
>
> ZFS command operations involving disk space accept input, and display values,
> either as exact numeric byte counts or in a human-readable form with a
> suffix of B, K, M, G, T, P, E, Z for bytes, kilobytes, megabytes,
> gigabytes, terabytes, petabytes, exabytes, or zettabytes.
>
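For example, both spellings below set the same 10 GiB quota (a minimal
sketch; the dataset name is a placeholder), and zfs get reports it back in
the human-readable form:

# zfs set quota=10G tank/home/dclarke
# zfs set quota=10737418240 tank/home/dclarke
# zfs get -H -o value quota tank/home/dclarke
10G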
Let'
6120 fibre
arrays ( in HA config ) and I could not get it to become a warm brick like
you describe.
How many processors does your machine have ?
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related to open source for
n is to either buy more RAM, or find
> something you can use as an L2ARC cache device for your pool. Ideally,
> it would be an SSD. However, in this case, a plain hard drive would do
> OK (NOT one already in a pool). To add such a device, you would do:
> 'zpool add tank my
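For reference, adding an L2ARC device generally looks like this (a sketch;
the device name is a placeholder and 'tank' is the pool from the quoted advice):

# zpool add tank cache c5t0d0
# zpool status tank     <- the device shows up under a separate "cache" heading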
> Re-read the section on "Swap Space and Virtual Memory" for particulars on
> how Solaris does virtual memory mapping, and the concept of Virtual Swap
> Space, which is what 'swap -s' is really reporting on.
The Solaris Internals book is awesome for this sort of thing. A bit over
the top in detail
0
c4t2004CF9B63D0d0s0 ONLINE 0 0 0
So the manner in which any given IO transaction gets to the zfs filesystem
just gets ever more complicated and convoluted, and it makes me wonder if I
am tossing away performance to get higher levels of safety.
--
Dennis Clarke
dcla...@
happens.
r...@aequitas:/# unshare /mnt
r...@aequitas:/# share -F nfs -o nosub,nosuid,sec=sys,ro,anon=0 /mnt
r...@aequitas:/# unshare /mnt
r...@aequitas:/#
Guess I must now try this with a ZFS fs under that iso file.
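For the ZFS case, the same share options can ride on the dataset itself via
the sharenfs property; a sketch, with the dataset name as a placeholder:

# zfs set sharenfs='nosub,nosuid,sec=sys,ro,anon=0' tank/isos
# zfs get sharenfs tank/isos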
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open sourc
>On 05-17-10, Thomas Burgess wrote:
>psrinfo -pv shows:
>
>The physical processor has 8 virtual processors (0-7)
> x86 (AuthenticAMD 100F91 family 16 model 9 step 1 clock 200 MHz)
> AMD Opteron(tm) Processor 6128 [ Socket: G34 ]
>
That's odd.
Please try this :
# kstat -m
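If the intent is to confirm the clock rate the kernel thinks those cores run
at (an assumption about what is being asked for, given the odd 200 MHz above),
the cpu_info module is the usual place to look:

# kstat -m cpu_info -s clock_MHz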
- Original Message -
From: Thomas Burgess
Date: Saturday, May 15, 2010 8:09 pm
Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?
To: Orvar Korvar
Cc: zfs-discuss@opensolaris.org
> Well I just wanted to let everyone know that preliminary results are good.
> The liv
ossible.[1]
If you want you can ssh in to the blastwave server farm and jump on that
also ... I'm always game to play with such things.
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related to open source for Solar
> On 06/05/2010 21:07, Erik Trimble wrote:
>> VM images contain large quantities of executable files, most of which
>> compress poorly, if at all.
>
> What data are you basing that generalisation on ?
note : I can't believe someone said that.
warning : I just detected a fast rise time on my peda
> Do the following ZFS stats look ok?
>
>> ::memstat
> Page Summary            Pages     MB  %Tot
> ----------------     --------  -----  ----
> Kernel                  106619    832   28%
> ZFS File Data            79817    623   21%
> Anon                     28553    223    7%
> Exec and libs             3055     23    1%
> Page cache               18024    140    5%
> Free (cachelist)          2880     22    1%
> Fre
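For reference, that summary is the output of the ::memstat dcmd in the
kernel debugger, typically gathered as root with:

# echo "::memstat" | mdb -k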
>>>>>> "ea" == erik ableson writes:
>>>>>> "dc" == Dennis Clarke writes:
>
> >> "rw,ro...@100.198.100.0/24", it works fine, and the NFS client
> >> can do the write without error.
>
>
> Hi All,
> I had created a ZFS filesystem test and shared it with "zfs set
> sharenfs=root=host1 test", and I checked the sharenfs option and it
> had already updated to "root=host1":
Try to use a backslash to escape those special chars like so :
zfs set sharenfs=nosub\,nosuid\,rw\=hostname1\:hostna
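Another way to sidestep the escaping (a sketch; host2 is a placeholder and
"test" is the dataset from the original post) is to quote the whole value so
the shell passes it through untouched:

# zfs set sharenfs='nosub,nosuid,root=host1:host2,rw=host1:host2' test
# zfs get sharenfs test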
> build from source:
> http://smartmontools.sourceforge.net/
>
You can find it at http://blastwave.network.com/csw/unstable/
Just install it with pkgadd or use pkgtrans to extract it and then run the
binary.
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open sour
> On Wed, 17 Feb 2010, Dennis Clarke wrote:
>>
>>NAME STATE READ WRITE CKSUM
>>mercury_rpool ONLINE 0 0 0
>> mirror ONLINE 0 0 0
>>c3t0d0s0 ONLINE 0 0 0
>>
boot with init S or 3 or 6.
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related to open source for Solaris
otable. You probably have to
go through the installboot procedure for that.
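If this is a SPARC box with a ZFS root (an assumption; the slice below is a
placeholder from the boot mirror), the installboot step usually looks like:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c3t0d0s0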
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related to open source for Solaris
> No, sorry Dennis, this functionality doesn't exist yet, but
> is being worked,
> but will take a while, lots of corner cases to handle.
>
> James Dickens
> uadmin.blogspot.com
1 ) dammit
2 ) looks like I need to do a full offline backup and then restore
to shrink a zpool.
As usual, Thanks
Suppose the requirements for storage shrink ( it can happen ), is it
possible to remove a mirror set from a zpool?
Given this :
# zpool status array03
pool: array03
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features
an also use star which may speed things up, safely.
star -copy -p -acl -sparse -dump -xdir -xdot -fs=96m -fifostats -time \
-C source_dir . destination_dir
that will buffer the transport of the data from source to dest via memory
and work to keep that buffer full as data is written on the out
I hate it when I do that .. 30 secs later I see -m mountpoint which is a
property but is not specified in -o foo=bar format.
erk
# ptime zpool create -f -o autoreplace=on -o version=10 \
> -m legacy \
> fibre01 mirror c2t0d0 c3t16d0 \
> mirror c2t1d0 c3t17d0 \
> mirror c2t2d0 c3t18d0 \
> mirror c2t
none'
real       14.884950400
user        0.998020300
sys         3.334027400
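For what it is worth, there is also a property-style spelling on builds new
enough to have the capital -O flag for file-system properties (an assumption
about the build in use; devices abbreviated to the first mirror pair):

# zpool create -f -o autoreplace=on -o version=10 -O mountpoint=legacy \
>      fibre01 mirror c2t0d0 c3t16d0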
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related to open source for Solaris
t reports a long list of names under /export/home or similar.
Then you can easily see the used space per filesystem. Allocating user
quotas and then asking the simple questions seems mysterious to me also.
I am looking into this for my own reasons and will stay in touch.
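A sketch of the per-user view, assuming a build that has ZFS user quotas
(the dataset name is a placeholder):

# zfs userspace tank/export/home
# zfs get userused@dclarke tank/export/home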
--
Dennis Clarke
dcl
> Dennis Clarke wrote:
>>> FYI,
>>> OpenSolaris b128a is available for download or image-update from the
>>> dev repository. Enjoy.
>>
>> I thought that dedupe has been out for weeks now ?
>
> The source has, yes. But what Richard was referring
> FYI,
> OpenSolaris b128a is available for download or image-update from the
> dev repository. Enjoy.
I thought that dedupe has been out for weeks now ?
Dennis
7.85G 7.85G
dedup = 1.96, compress = 1.51, copies = 1.00, dedup * compress / copies = 2.95
#
I have no idea what any of that means, yet :-)
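For what it is worth, the ratios multiply out: 1.96 * 1.51 / 1.00 is about
2.96, printed as 2.95 because the underlying ratios carry more digits than
are shown. If that summary came from the dedup simulation (an assumption),
it would have been produced by something like:

# zdb -S <poolname>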
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org &l
> On Sat, 7 Nov 2009, Dennis Clarke wrote:
>>
>> Now the first test I did was to write 26^2 files [a-z][a-z].dat in 26^2
>> directories named [a-z][a-z] where each file is 64K of random
>> non-compressible data and then some english text.
>
> What method did you
> On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote:
>> Does the dedupe functionality happen at the file level or a lower block
>> level?
>
> it occurs at the block allocation level.
>
>> I am writing a large number of files that have the fol structure :
&g
Does the dedupe functionality happen at the file level or a lower block
level?
I am writing a large number of files that have the following structure :
-- file begins
1024 lines of random ASCII chars 64 chars long
some tilde chars .. about 1000 of them
some text ( english ) for 2K
more text ( engl
e12.5G-
neptune_rpool  allocated  21.3G  -
I'm currently running tests with this :
http://www.blastwave.org/dclarke/crucible_source.txt
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related t
> Dennis Clarke wrote:
>> I just went through a BFU update to snv_127 on a V880 :
>>
>> neptune console login: root
>> Password:
>> Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console
>> Last login: Mon Nov 2 16:40:36 on console
>> Sun Microsystems
re at all or shall I just wait
for the putback to hit the mercurial repo ?
Yes .. this is sort of begging .. but I call it "enthusiasm" :-)
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related to op
a SHA512 based de-dupe
implementation would be possible and even realistic. That would solve the
hash collision concern I would think.
Merely thinking out loud here ...
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org &l
This seems like a bit of a restriction ... is this intended ?
# cat /etc/release
Solaris Express Community Edition snv_125 SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
ore comparable, ranging from 33514530
correctable errors per year."
B. Schroeder, E. Pinheiro, W.-D. Weber. "DRAM errors in the wild: A
Large-Scale Field Study." Sigmetrics/Performance 2009
see http://www.cs.toronto.edu/~bianca/
--
Dennis Clarke
dcla...@opensolaris.ca
h those strange ACL's there.
$ cd /home/dclarke/test
$ rm -rf destination
I'll do some more testing with star 1.5a89 and let you know what I see.
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related to open sourc
html
that was fast .
Cyril, long time no hear. :-(
How's life, the universe and RISC processors for you these days ?
--
Dennis Clarke
dcla...@opensolaris.ca <- Email related to the open source Solaris
dcla...@blastwave.org <- Email related to open source for Solaris
ps: I have been busy po
It wasn't :
# zfs get refquota,refreservation,quota,reservation fibre0
NAME    PROPERTY        VALUE  SOURCE
fibre0  refquota        none   default
fibre0  refreservation  none   default
fibre0  quota           none   default
fibre0  reservation     none   default
what the
like the write traffic to the new device is being ignored
in the non-verbose output data.
--
Dennis Clarke
self replies are so degrading ( pun intended )
I see this patch :
Document Audience:  PUBLIC
Document ID:        139555-08
Title:              SunOS 5.10: Kernel Patch
Copyright Notice:   Copyright © 2009 Sun Microsystems, Inc. All Rights Reserved
Update Date:        Fri Jul 10 04:29:40 MDT 2009
I have a
Pardon me but I had to change subject lines just to get out of that other
thread.
In that other thread .. you were saying :
>> dick hoogendijk uttered:
>> true. Furthermore, much so-called consumer hardware is very good these
>> days. My guess is ZFS should work quite reliably on that hardware.
>
> To enable mpxio, you need to have
>
> mpxio-disable="no";
>
> in your fp.conf file. You should run /usr/sbin/stmsboot -e to make
> this happen. If you *must* edit that file by hand, always run
> /usr/sbin/stmsboot -u afterwards to ensure that your system's MPxIO
> config is correctly updated.
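A condensed sketch of that sequence (fp.conf lives at /kernel/drv/fp.conf on
a stock install):

  mpxio-disable="no";         <- the line fp.conf needs
# /usr/sbin/stmsboot -e       <- enables MPxIO and offers to update the boot config
# /usr/sbin/stmsboot -u       <- run this instead if you edited fp.conf by hand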
e needs to buy the ZFS guys some keg(s) of
whatever beer they want. Or maybe new Porsche Cayman S toys.
That would be gratitude as something more than just words.
Thank you.
--
Dennis Clarke
ps: the one funny thing is that I had to get a few things swapped
out and I guess that resets th
> Dennis Clarke writes:
>
>> This will probably get me bombed with napalm but I often just
>> use star from Jörg Schilling because its dead easy :
>>
>> star -copy -p -acl -sparse -dump -C old_dir . new_dir
>>
>> and you're done.[1]
>>
&g
> Richard Elling writes:
>
>> You can only send/receive snapshots. However, on the receiving end,
>> there will also be a dataset of the name you choose. Since you didn't
>> share what commands you used, it is pretty impossible for us to
>> speculate what you might have tried.
>
> I thought I m
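A generic sketch of what that looks like (pool and dataset names are
placeholders):

# zfs snapshot tank/data@monday
# zfs send tank/data@monday | zfs receive backup/data

The receiving side ends up with both backup/data and the snapshot
backup/data@monday.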
> On Tue, 16 Jun 2009, roland wrote:
>
>> so, we have a 128bit fs, but only support for 1tb on 32bit?
>>
>> i`d call that a bug, isn`t it ? is there a bugid for this? ;)
>
> I'd say the bug in this instance is using a 32-bit platform in 2009! :-)
Rich, a lot of embedded industrial solutions are
o be enabled.
I agree that "Compression is a choice" and would add :
Compression is a choice and it is the default.
Just my feelings on the issue.
Dennis Clarke
rror, and so on. In either case, new_device begins to
resilver immediately.
so yeah, you have it.
Want to go for bonus points? Try to read into that man page to figure out
how to add a hot spare *after* you are all mirrored up.
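For anyone who gives up on the man page, the answer being hinted at is
roughly this (a sketch; pool and device names are placeholders):

# zpool add tank spare c4t3d0
# zpool status tank     <- the disk is listed under "spares" and only used on failure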
--
Dennis Clarke
.@blastwave.org
------
Dennis Clarke wrote:
> # w
> 3:14pm up 11:24, 3 users, load average: 0.46, 0.29, 0.23
> User      tty       login@   idle   JCPU   PCPU  what
> dclarke   console   1:22pm   1:52   2:02   1:31  /usr/lib/nwam-manager
> dclarke   pts/4     1:44pm   1:10
>> CTRL+C does nothing and kill -9 pid does nothing to this command.
>>
>> feels like a bug to me
>
> Yes, it is:
>
> http://bugs.opensolaris.org/view_bug.do?bug_id=6758902
>
Now I recall why I had to reboot. Seems as if a lot of commands hang now.
Things like :
df -ak
zfs list
zpool list
t
> Dennis Clarke wrote:
>>> Dennis Clarke wrote:
>>>>>>> It may be because it is blocked in kernel.
>>>>>>> Can you do something like this:
>>>>>>> echo "0t::pid2proc|::walk thread|::findstack
>>>>>
> Dennis Clarke wrote:
>>>>> It may be because it is blocked in kernel.
>>>>> Can you do something like this:
>>>>> echo "0t::pid2proc|::walk thread|::findstack -v"
>>> So we see that it cannot complete import here and is waitin
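Spelled out in full, that incantation looks roughly like this (1234 is a
placeholder for the PID of the hung zpool import):

# echo "0t1234::pid2proc | ::walk thread | ::findstack -v" | mdb -k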
ONLINE
c0d0p0    ONLINE
#
please see ALL the details at :
http://www.blastwave.org/dclarke/blog/files/kernel_thread_stuck.README
also see output from fmdump -eV
http://www.blastwave.org/dclarke/blog/files/fmdump_e.log
Please let me know what else you may need.
--
Dennis Clarke
> Dennis Clarke wrote:
>> I tried to import a zpool and the process just hung there, doing nothing.
>> It has been ten minutes now so I tried to hit CTRL-C. That did nothing.
>
> It may be because it is blocked in kernel.
>
> Can you do something like this:
>
>
> Dennis Clarke wrote:
>> I tried to import a zpool and the process just hung there, doing
>> nothing.
>> It has been ten minutes now so I tried to hit CTRL-C. That did nothing.
>>
>
> This symptom is consistent with a process blocked waiting on disk I/O.
I tried to import a zpool and the process just hung there, doing nothing.
It has been ten minutes now so I tried to hit CTRL-C. That did nothing.
So then I tried :
Sun Microsystems Inc. SunOS 5.11 snv_110 November 2008
r...@opensolaris:~# ps -efl
F S UID PID PPID C PRI NI
>> And after some 4 days without any CKSUM error, how can yanking the
>> power cord mess boot-stuff?
>
> Maybe because on the fifth day some hardware failure occurred? ;-)
ha ha ! sorry .. that was pretty funny.
--
Dennis
> Hey, Dennis -
>
> I can't help but wonder if the failure is a result of zfs itself finding
> some problems post restart...
Yes, yes, this is what I am feeling also, but I need to find the data also
and then I can sleep at night. I am certain that ZFS does not just toss
out faults on a whim bec
> On Tue, 24 Mar 2009, Dennis Clarke wrote:
>>
>> You would think so eh?
>> But a transient problem that only occurs after a power failure?
>
> Transient problems are most common after a power failure or during
> initialization.
Well the issue here is that power w
> On Tue, 24 Mar 2009, Dennis Clarke wrote:
>>
>> However, I have repeatedly run into problems when I need to boot after a
>> power failure. I see vdevs being marked as FAULTED regardless if there
>> are
>> actually any hard errors reported by the on disk SMART Fi
c1t1d0s7 ONLINE 0 0 0
errors: No known data errors
# fmadm faulty -afg
#
I do TOTALLY trust that last line that says "No known data errors" which
makes me wonder if the Severe FAULTs are for unknown data errors :-)
--
Dennis Clarke
sig du jour : "An app
mirror c8t2004CFAC0E97d0 c8t202037F859F1d0 \
> mirror c8t2004CFB53F97d0 c8t202037F84044d0 \
> mirror c8t2004CFA3C3F2d0 c8t2004CF2FCE99d0 \
> mirror c8t2004CF9645A8d0 c8t2004CFA3F328d0 \
> mirror c8t202037F812EAd0 c8t2004CF96FF00d0 \
> mir
> You've tripped over a variant of:
>
> 6335095 Double-slash on /. pool mount points
>
> - Eric
>
oh well .. no points for originality then I guess :-)
Thanks
> On Mon, Jun 25, 2007 at 02:34:21AM -0400, Dennis Clarke wrote:
note that it was well after 2 AM for me .. half blind asleep
that's my excuse .. I'm sticking to it. :-)
>>
>> > in /usr/src/cmd/zpool/zpool_main.c :
>> >
>>
>> at line 680
> in /usr/src/cmd/zpool/zpool_main.c :
>
at line 680 forwards we can probably check for this scenario :
if ( ( altroot != NULL ) && ( altroot[0] != '/') ) {
        (void) fprintf(stderr, gettext("invalid alternate root '%s': "
            "must be an absolute path\n"), altroot);
        nvlist_free(nvroot);
        return (1);
}
Not sure if this has been reported or not.
This is fairly minor but slightly annoying.
After fresh install of snv_64a I run zpool import to find this :
# zpool import
pool: zfs0
id: 13628474126490956011
state: ONLINE
status: The pool is formatted using an older on-disk version.
action: T
On 4/23/07, Richard Elling <[EMAIL PROTECTED]> wrote:
FYI,
Sun is having a big, 25th Anniversary sale. X4500s are half price --
24 TBytes for $24k. ZFS runs really well on a X4500.
http://www.sun.com/emrkt/25sale/index.jsp?intcmp=tfa5101
I apologize for those not in the US or UK and ca
On 4/27/07, Ben Miller <[EMAIL PROTECTED]> wrote:
I just threw in a truss in the SMF script and rebooted the test system and it
failed again.
The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007
324:read(7, 0x000CA00C, 5120) = 0
324:llsee
On 4/26/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your nfs clients.
For the love of God do NOT do stuff like that.
Just create ZFS on a pile of disks the way t
Dear ZFS and OpenSolaris people :
I recently upgraded a large NFS server upwards from Solaris 8. This is a
production manufacturing facility with football field sized factory floors
and 25 tonne steel products. Many on-site engineers on AIX and CATIA as well
as Solaris users and Windows and ev
On 4/18/07, J.P. King <[EMAIL PROTECTED]> wrote:
> Can we discuss this with a few objectives ? Like define "backup" and
> then describe mechanisms that may achieve one? Or a really big
> question that I guess I have to ask, do we even care anymore?
Personally I think you would benefit from s
On 4/18/07, Nicolas Williams <[EMAIL PROTECTED]> wrote:
On Wed, Apr 18, 2007 at 03:47:55PM -0400, Dennis Clarke wrote:
> Maybe with a definition of what a "backup" is and then some way to
> achieve it. As far as I know the only real backup is one that can be
> tossed int
On 4/18/07, Bill Sprouse <[EMAIL PROTECTED]> wrote:
It seems that neither Legato nor NetBackup seem to lend themselves well to the
notion of lots of file systems within storage pools from an administration
perspective. Is there a preferred methodology for doing traditional backups to
tape fr
I really need to take a longer look here.
/*
 * zpool iostat [-v] [pool] ... [interval [count]]
 *
 *      -v      Display statistics for individual vdevs
 *
 * This command can be tricky because we want to be able to deal with pool
.
.
.
I think I may need to deal with a raw option here ?
bort();
/* NOTREACHED */
}
The iostat_cbdata struct would need a new int element also :
typedef struct iostat_cbdata {
        zpool_list_t *cb_list;
        /*
         * The cb_raw int is added here by Dennis Clarke
         */
        int cb_raw;
        int cb_verbose;
        int cb_iterat
> Robert Milkowski wrote:
>> Hello Ivan,
>> Sunday, March 11, 2007, 12:01:28 PM, you wrote:
>>
>> IW> Got it, thanks, and a more general question, in a single disk
>> IW> root pool scenario, what advantage zfs will provide over ufs w/
>> IW> logging? And when zfs boot integrated in neveda, will l
>
> You don't honestly, really, reasonably, expect someone, anyone, to look
> at the stack
well of course he does :-)
and I looked at it .. all of it and I can tell exactly what the problem is
but I'm not gonna say because its a trick question.
so there.
Dennis
> On Sun, 18 Feb 2007, Calvin Liu wrote:
>
>> I want to run command "rm Dis*" in a folder but mis-typed a space in it
>> so it became "rm Dis *". Unfortunately I had pressed the return button
>> before I noticed the mistake. So you all know what happened... :( :( :(
>
> Ouch!
>
>> How can I get th
d and you can install them and run them in a very
stable fashion long term. Once you add a single patch to that system you
have wandered out of "this is shipped on media" to somewhere else.
--
Dennis Clarke
boldly plowing forwards I request a few disks/vdevs to be mirrored
all at the same time :
bash-3.2# zpool status zfs0
pool: zfs0
state: ONLINE
scrub: resilver completed with 0 errors on Thu Feb 1 04:17:58 2007
config:
NAME STATE READ WRITE CKSUM
zfs0 ONLI
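For context, mirroring several existing single-disk vdevs at once is just a
series of attach operations, each of which starts its own resilver (a sketch;
the device names are placeholders):

# zpool attach zfs0 c2t0d0 c3t0d0
# zpool attach zfs0 c2t1d0 c3t1d0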
> Am 24.1.2007 14:59 Uhr, Dennis Clarke schrieb:
>
>>> Jan 23 17:25:26 newponit genunix: [ID 408822 kern.info] NOTICE: glm0:
>>> fault detected in device; service still available
>>> Jan 23 17:25:26 newponit genunix: [ID 611667 kern.info] NOTICE: glm0:
>>&
> Ihsan Dogan wrote:
>
>>>I think you hit a major bug in ZFS personally.
>>
>> For me it also looks like a bug.
>
> I think we don't have enough information to judge. If you have a supported
> version of Solaris, open a case and supply all the data (crash dump!) you
> have.
I agree we need da
> Hello Michael,
>
> Am 24.1.2007 14:36 Uhr, Michael Schuster schrieb:
>
>>> --
>>> [EMAIL PROTECTED] # zpool status
>>> pool: pool0
>>> state: ONLINE
>>> scrub: none requested
>>> config:
>>
>> [...]
>>
>>> Jan 23 18:51:38 newponit ^
> Hello,
>
> We're setting up a new mailserver infrastructure and decided, to run it
> on zfs. On a E220R with a D1000, I've setup a storage pool with four
> mirrors:
Good morning Ihsan ...
I see that you have everything mirrored here, that's excellent.
When you pulled a disk, was it a
>> What do you mean by UFS wasn't an option due to
>> number of files?
>
> Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle
> Financials environment well exceeds this limitation.
>
what ?
$ uname -a
SunOS core 5.10 Generic_118833-17 sun4u sparc SUNW,UltraSPARC-IIi-cEngine
> Roch - PAE wrote:
>>
>> Just posted:
>>
>>http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
>
> Nice article. Now what about when we do this with more than one disk
> and compare UFS/SVM or VxFS/VxVM with ZFS as the back end - all with
> JBOD storage ?
>
> How then does ZFS compare as
> On Mon, Jan 08, 2007 at 03:47:31PM +0100, Peter Schuller wrote:
>> > http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
>>
>> So just to confirm; disabling the zil *ONLY* breaks the semantics of
>> fsync()
>> and synchronous writes from the application perspective; it will do
>> *NOTHING*
>
2. A severe test, as of patience or belief;
a trial.
[ Dennis Clarke [EMAIL PROTECTED] ]
*
TEST 1 ) file write.
Building file structure at /export/nfs/local_test/
This test will create 62^3 = 238328 files o
>> Note that "attach" has no option for -n which would just show me the
>> damage I am about to do :-(
>
> In general, ZFS does a lot of checking before committing a change to the
> configuration. We make sure that you don't do things like use disks
> that are already in use, partitions aren't ove
zpool other than tar to a DLT. The last thing I want to do is destroy my
data when I am trying to add redundancy.
Any thoughts ?
--
Dennis Clarke
> Another thing to keep an eye out for is disk caching. With ZFS,
> whenever the NFS server tells us to make sure something is on disk, we
> actually make sure it's on disk by asking the drive to flush dirty data
> in its write cache out to the media. Needless to say, this takes a
> while.
>
> W
ware bugs.
but it does imply that the software is way better than the hardware eh ?
--
Dennis Clarke
> Anton B. Rang wrote:
>>> "INFORMATION: If a member of this striped zpool becomes unavailable or
>>> develops corruption, Solaris will kernel panic and reboot to protect your
>>> data."
>>>
>>
>> OK, I'm puzzled.
>>
>> Am I the only one on this list who believes that a kernel panic, instead
>> of
easily with any built-in tools in the SXCR these days. There is already an
RFE filed on that but I think it's low priority. You can recover a zpool
easily enough with zpool import but if you ever lose a few disks or some
disaster hits then you had better have Veritas NetBackup or similar in
place.
> Dennis,
> i'm not sure if this will help you, but i had something similar happen and
> was able to get my zpool back.
>
> i decided to install (not upgrade) Nevada snv-51 which was the current build
> at the time. I had (and thankfully still have) a zpool which i'd created
> under snv-37 on a se
> On 11/23/06, James Dickens <[EMAIL PROTECTED]> wrote:
>> On 11/23/06, Dennis Clarke <[EMAIL PROTECTED]> wrote:
>> >
>> > assume worst case
>> >
>> > someone walks up to you and drops an array on you.
>> They say "its
s "noot
boot" shell? Is there any way to backup those ZFS filesystems while booted
from CDROM/DVD or boot net ?
Essentially, if I had nothing but bare metal here and a tape drive can I
access the zpool that resides on six 36GB disks on controller 2 or am
Have a gander below :
> Agreed - it sucks - especially for small file use. Here's a 5,000 ft view
> of the performance while unzipping and extracting a tar archive. First
> the test is run on a SPARC 280R running Build 51a with dual 900MHz USIII
> CPUs and 4Gb of RAM:
>
> $ cp emacs-21.4a.tar.g