> From: Richard Elling [mailto:rich...@nexenta.com]
>
> 4% seems to be a pretty good SWAG.
Is the above "4%" wrong, or am I wrong?
Suppose 200bytes to 400bytes, per 128Kbyte block ...
200/131072 = 0.0015 = 0.15%
400/131072 = 0.003 = 0.3%
which would mean for 100G unique data = 153M to 312M ram.
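Spelled out as a shell calculation (the 200-400 bytes per DDT entry is the assumption in question; the rest is just arithmetic, and it lands in the same ballpark):
# assuming 128K average block size and 200-400 bytes per dedup-table entry
blocks=$(( 100 * 1024 * 1024 * 1024 / 131072 ))   # 100G of unique data -> 819200 blocks
echo "low:  $(( blocks * 200 / 1048576 )) MiB"    # ~156 MiB
echo "high: $(( blocks * 400 / 1048576 )) MiB"    # ~312 MiB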
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Brandon High
>
> Dedup is to
> save space, not accelerate i/o.
I'm going to have to disagree with you there. Dedup is a type of
compression. Compression can be used for storage savings, and
> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
> > increases the probability of arc/ram cache hit. So dedup allows you
> to
> > stretch your disk, and also stretch your ram cache. Which also
> > benefits performance.
>
> Theoretically, yes, but there will be an overhead in cpu/memory tha
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
>
> ZFS-FUSE is horribly unstable,
That may be true. I couldn't say.
> although that's more an indication of
> the stability of the storage stack on Linux.
But this, I take
> From: David Magda [mailto:dma...@ee.ryerson.ca]
>
> On Jul 10, 2010, at 14:20, Edward Ned Harvey wrote:
>
> >> A few companies have already backed out of zfs
> >> as they cannot afford to go through a lawsuit.
> >
> > Or, in the case of Apple,
> From: Tim Cook [mailto:t...@cook.ms]
>
> Because VSS isn't doing anything remotely close to what WAFL is doing
> when it takes snapshots.
It may not do what you want it to do, but it's still copy on write, as
evidenced by the fact that it takes instantaneous snapshots, and snapshots
don't get o
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Linder, Doug
>
> Out of sheer curiosity - and I'm not disagreeing with you, just
> wondering - how does ZFS make money for Oracle when they don't charge
> for it? Do you think it's such an imp
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>
> > A private license, with support and indemnification from Sun, would
> > shield Apple from any lawsuit from Netapp.
>
> The patent holder is not compelled
> in any way to offer a license for use of the patent. Without a patent
> l
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave Pooser
>
> I'm looking at a new web server for the company, and am considering
> Solaris
> specifically because of ZFS. (Oracle's lousy sales model-- specifically
> the
> unwillingness to
> From: BM [mailto:bogdan.maryn...@gmail.com]
>
> But don't forget that Oracle looks like it is killing OpenSolaris and the
> entire community
> after all: there are no latest builds at genunix.org (latest is 134 and
> seems
> like that's it), Oracle stopped building OSOL after build 135 (I have no
> idea wher
> From: Ian Collins [mailto:i...@ianshome.com]
> > From: Edward Ned Harvey
> > The sun hardware is the
> > recommended way to go, but it's also more expensive.
>
> Not in my neck of the woods, Sun have always been most competitive.
Interesting. I wonder what
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> Is it possible to set zfs for bi-directional synchronization of data
> across two locations? I am thinking this is almost impossible. Consider
You are probably looking for lustr
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> Not to beat a dead horse here, but that's an Apples-to-Oranges
> comparison (it's raining idioms!). You can't compare an OEM server
> (Dell, Sun, whatever) to a custom-built b
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> When you pay for the higher prices for OEM hardware, you're paying for
> the
> knowledge of parts availability and compatibility. And a si
> From: BM [mailto:bogdan.maryn...@gmail.com]
>
> latest (just a week ago): Apple Support reported to me that their
> engineers in the US have no idea why the Darwin kernel panics on their
Stop it... You did *not* just use "apple" and "support" in the same sentence,
did you?? ;-) You almost made me
Ok guys, can we please kill this thread about commodity versus enterprise
hardware?
Let's agree on one thing: Some people believe commodity hardware is just as
good as enterprise systems. Other people do not believe that. In both
situations, the conclusion has been reached based on personal
Obviously, this thread got sidetracked on a tangent about different types of
hardware, be it mac mini, supermicro, dell, or sun/oracle. But I do find
the subject of legality of ZFS and the netapp lawsuit(s) to be an
interesting subject. (And thanks to whoever pointed out the information of
how MS
I believe I know enough to figure this out on my own, but there's usually
some little "gotcha" that you don't think of until you hit it. I'm just
betting that Cindy has already a procedure written for just this purpose.
;-)
In general, if you've been good about backing up your rpool via "zfs s
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Cindy Swearingen
>
> Hi Ned,
>
> One of the benefits of using a mirrored ZFS configuration is just
> replacing each disk with a larger disk, in place, online, and so on...
Yes, the autoexpan
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard L. Hamilton
>
> I would imagine that if it's read-mostly, it's a win, but
> otherwise it costs more than it saves. Even more conventional
> compression tends to be more resource intens
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Pasi Kärkkäinen
>
> Redhat Fedora 13 includes BTRFS, but it's not used as a default (yet).
>
> RHEL6 beta also includes BTRFS support (tech preview), but again,
>
> Upcoming Ubuntu 10.10 will
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Jahnel
>
> I've also tried mbuffer, but I get broken pipe errors part way through
> the transfer.
The standard answer is mbuffer. I think you should ask yourself what's
going wrong wi
> From: Giovanni Tirloni [mailto:gtirl...@sysdroid.com]
>
> We have hundreds of servers using LACP and so far have not noticed any
> increase in the error rate.
If the error rate is not zero, you have an increased error rate.
In linux, you just do this:
sudo /sbin/ifconfig -a | grep errors | gre
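(That command got cut off above; a complete sketch of the same idea, with the exact filter being my guess rather than what was originally posted:)
# show only Linux interfaces whose error counters are non-zero
sudo /sbin/ifconfig -a | grep errors | grep -v 'errors:0'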
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of v
>
> for zfs raidz1, I know that for random io the iops of a raidz1 vdev equal
> the iops of one physical disk, since raidz1 is like raid5. So does raid5
> have the same performance as raidz1? i.e. random iops
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of v
>
> For synchronous writes, if the io is small, will the whole io be placed
> in the zil, or just a pointer saved into the zil? What about large io?
This one doesn't have a really clear answer
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Hernan F
>
> Hi,
> Out of pure curiosity, I was wondering, what would happen if one tries
> to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
I tested it once, for the same r
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Saso Kiselkov
>
> If you plan on using it as a storage server for multimedia data
> (movies), don't even bother considering compression, as most media
> files
> already come heavily compressed.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of John Andrunas
>
> I know this is potentially a loaded question, but what is generally
> considered the optimal disk configuration for ZFS. I have 48 disks on
> 2 RAID controllers (2x24). The
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Robert Milkowski
> >
> I had a quick look at your results a moment ago.
> The problem is that you used a server with 4GB of RAM + a raid card
> with
> a 256MB of cache.
> Then your filesize for
> From: Robert Milkowski [mailto:mi...@task.gda.pl]
>
> [In raidz] The issue is that each zfs filesystem block is basically
> spread across
> n-1 devices.
> So every time you want to read back a single fs block you need to wait
> for all n-1 devices to provide you with a part of it - and keep in m
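A quick back-of-the-envelope illustration of why that matters for small random reads (the numbers are assumptions, not measurements):
# assume 7 disks, each good for ~200 random-read IOPS
echo "stripe/mirror of 7: $(( 7 * 200 )) IOPS"   # reads serviced independently, ~1400
# raidz1: every block read needs all n-1 data disks, so for small random
# reads the whole vdev behaves roughly like a single disk
echo "raidz1 of 7:        200 IOPS"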
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Phil Harman
>
> Milkowski and Neil Perrin's zil synchronicity [PSARC/2010/108] changes
> with sync=disabled, when the changes work their way into an available
>
> The fact that people run unsaf
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Sigbjorn Lie
>
> What size of ZIL device would be recommened for my pool consisting for
Get the smallest one. Even an unrealistic high performance scenario cannot
come close to using 32G. I
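Rough arithmetic behind that claim (the assumptions are mine: the slog only ever holds the sync writes of the last couple of transaction groups, i.e. on the order of seconds of write traffic):
# assume a txg commits every ~5-10 seconds and the box can push
# 500 MB/s of purely synchronous writes (already generous)
echo "$(( 500 * 10 )) MB"   # ~5 GB of slog actually in use, far below 32G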
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Sigbjorn Lie
>
> What about mirroring? Do I need mirrored ZIL devices in case of a power
> outage?
You don't need mirroring for the sake of *power outage* but you *do* need
mirroring for the s
> From: Arne Jansen [mailto:sensi...@gmx.net]
> >
> > Can anyone else confirm or deny the correctness of this statement?
>
> As I understand it that's the whole point of raidz. Each block is its
> own
> stripe.
Nope, that doesn't count for confirmation. It is at least theoretically
possible to
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Linder, Doug
>
> On a related note - all other things being equal, is there any reason
> to choose NFS over ISCI, or vice-versa? I'm currently looking at this
iscsi and NFS are completely dif
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fred Liu
>
> Is it true? Any way to find it in every hierarchy?
Yup. Nope.
If you use ZFS, you make a filesystem at whatever level you need it, in
order for the .zfs directory to be availabl
I remember asking about this a long time ago, and everybody seemed to think
it was a non-issue: the vague and unclearly reported rumor that ZFS behaves
poorly when it's 100% full. Well, now I have one really solid data point to
confirm it, and possibly how to reproduce it, avoid it, and prevent i
> From: Garrett D'Amore [mailto:garr...@nexenta.com]
>
> Fundamentally, my recommendation is to choose NFS if your clients can
> use it. You'll get a lot of potential advantages in the NFS/zfs
> integration, so better performance. Plus you can serve multiple
> clients, etc.
>
> The only reason
> From: Fred Liu [mailto:fred_...@issi.com]
>
> But too many file systems may be an issue for management and also
> normal user cannot create file system.
> I think it should go like what NetApp's snapshot does.
> It is a pity.
For Windows/CIFS clients, the solution I use is:
ln -s .zfs/s
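(The full command got truncated above; roughly, with a made-up share path, it looks like this:)
# in the root of each filesystem shared over CIFS, expose the snapshot
# directory under a name Windows clients can browse to
cd /tank/myshare
ln -s .zfs/snapshot snapshots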
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Russ Price
>
> Good advice - ZFS can use quite a lot of CPU cycles. A low-end AMD
> quad-core is
I know "a lot of CPU cycles" is a relative term. But I never notice CPU
utilization, even unde
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Shawn Ferry
>
> I am given to understand that you can delete snapshots in current
> builds (I don't have anything recent where I can test).
So ... You believe the "can't-delete-snap-because-di
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ross Walker
>
> If that's the case why not create a second pool called 'backup' and
> 'zfs send' periodically to the backup pool?
+1
This is what I do.
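A minimal sketch of that setup, assuming a source pool named "tank" and a second pool named "backup" (snapshot names are made up):
# initial full copy
zfs snapshot -r tank@base
zfs send -R tank@base | zfs receive -Fd backup
# later, periodic incrementals
zfs snapshot -r tank@today
zfs send -R -i tank@base tank@today | zfs receive -Fd backup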
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of devsk
>
> I have many core files stuck in snapshots eating up gigs of my disk
> space. Most of these are BE's which I don't really want to delete right
> now.
Ok, you don't want to delete them
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dav Banks
This message:
> How's that working for you? Seems like it would be as straightforward
> as I was thinking - only possible.
And this message:
> Yeah, that's starting to sound like a f
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of devsk
>
> Thanks, Michael. That's exactly right.
>
> I think my requirement is: writable snapshots.
>
> And I was wondering if someone knowledgeable here could tell me if I
> could do this ma
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > http://arc.opensolaris.org/caselog/PSARC/2010/193/mail
>
> Agree. This is a better solution because some configurable parameters
> are hidden from "zfs get all"
Forgive me for not seeing it ... That link is extremely dense, and 34 p
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gary Gendel
>
> I do a backup of the pool nightly, so I feel confident that I don't
> need to mirror the drive and can break the mirror and expand the pool
> with the detached drive.
>
> I und
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mark J Musante
>
> > I do a backup of the pool nightly, so I feel confident that I don't
> need to mirror the drive and can break the mirror and expand the pool
> with the detached drive.
> >
>
> From: Darren J Moffat [mailto:darr...@opensolaris.org]
>
> It basically says that 'zfs send' gets a new '-b' option so "send back
> properties", and 'zfs recv' gets a '-o' and '-x' option to allow
> explicit set/ignore of properties in the stream. It also adds a '-r'
> option for 'zfs set'.
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
> This can happen if there is a failure in a common system component
> during the write (eg. main memory, HBA, PCI bus, CPU, bridges, etc.)
I bet that's the cause. Because as
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gregory Gee
>
> Prior to pool version 19, mirroring the log device is highly
> recommended.
>
> I have the following.
>
> This system is currently running ZFS pool version 14.
>
> This r
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gregory Gee
>
> Edward, disabling ZIL might be ok, but let me characterize what my home
> server does and tell me if disabling ZIL is ok.
You should understand what it all means, and make your
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jonathan Loran
>
> But here's what's keeping me up at night: We're running zpool v15,
> which as I understand it means if an X25e log fails, the pool is toast.
> Obviously, the log devices ar
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Darren Taylor
>
> I'm not sure
> where the problem is, but essentially i have a zpool i cannot import.
> This particular pool used to have a two drives (not shown below), one
> for cache and an
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Chris Josephes
>
> I have a host running svn_133 with a root mirror pool that I'd like to
> rebuild with a fresh install on new hardware; but I still have data on
> the pool that I would like t
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of P-O Yliniemi
>
> Drives for storage: 16*1.5TB Seagate ST31500341AS, connected to two
> AOC-SAT2-MV8 controllers
> Drives for operating system: 2*80GB Intel X25-M (mirror)
>
> Is there any advan
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Geoff Nordli
>
> Anyone have any experience with a R510 with the PERC H200/H700
> controller
> with ZFS?
>
> My perception is that Dell doesn't play well with OpenSolaris.
I have an R710...
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of valrh...@gmail.com
>
> 2. I'd also recommend avoiding the PERC cards, in particular since it
> makes drives attached to it impossible to transport to another system.
This is not true. You can
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of P-O Yliniemi
>
> * Using a separate disk for logging might give problems if the log
> device goes wrong. To avoid this - keep the log on the disk pool, or
> mirror the log device.
Just a coupl
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Geoff Nordli
> >
> >I have an R710... Not quite the same, but similar.
> >
> Thanks Edward.
>
> What did you end up using for the L2ARC? The SSDs shown in the online
> configurator are SLC b
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Terry Hull
>
> I am wanting to build a server with 16 - 1TB drives with 2 8 drive
> RAID Z2 arrays striped together. However, I would like the capability
> of adding additional stripes of 2
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
>I am looking for references of folks using ZFS with either NFS
> or iSCSI as the backing store for VMware (4.x) backing store for
I'll try to clearly separate what I know
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Simone Caldana
>
> I would like to backup my main zpool (originally called "data") inside
> an equally originally named "backup"zpool, which will also holds other
>
> Basically I'd like to end
I am guessing you're experiencing cpu or memory failure. Or motherboard, or
disk controller.
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Michael Anderson
> Sent: Thursday, August 12, 2010 3:46 AM
> To: zfs
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Chris Twa
>
> I have three zpools on a server and want to add a mirrored pair of
> ssd's for the ZIL. Can the same pair of SSDs be used for the ZIL of
> all three zpools or is it one ZIL SLOG
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Chris Twa
>
> My plan now is to buy the ssd's and do extensive testing. I want to
> focus my performance efforts on two zpools (7x146GB 15K U320 + 7x73GB
> 10k U320). I'd really like two ssd'
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
>I am looking for references of folks using ZFS with either NFS
> or iSCSI as the backing store for VMware (4.x) backing store for
> virtual machines.
Since I had ulterio
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Frank Cusack
>
> I haven't met anyone who uses Solaris because of OpenSolaris.
What rock do you live under?
Very few people would bother paying for solaris/zfs if they couldn't try it
for fre
I'm confused. I have compression enabled on a ZFS filesystem, which
contains, for all intents and purposes, just a single 20G file, and I see ...
# ls -lh somefile
-rw--- 1 root root 20G Aug 13 17:41 somefile
# du -h somefile
5.6G somefile
(Sounds like approx 25-3
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> #3 I previously believed that vmfs3 was able to handle sparse files
> amazingly well, like, when you create a new vmdk, it appears almost
> instantly
> From: cyril.pli...@gmail.com [mailto:cyril.pli...@gmail.com] On Behalf
> Of Cyril Plisko
>
> The compressratio shows you how much *real* data was compressed.
> The file in question, however, can be sparse file and have its size
> vastly
> different from what du says, even without compression.
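Good point. A quick way to see the sparse-file effect for yourself (paths are made up; on Solaris, mkfile -n creates a file without allocating blocks):
# a 1 GB sparse file: ls reports the logical size, du the blocks on disk
mkfile -n 1g /tank/test/sparsefile
ls -lh /tank/test/sparsefile   # shows ~1G
du -h  /tank/test/sparsefile   # shows (almost) nothing allocated
# compressratio, by contrast, only reflects data that was actually written
zfs get compressratio tank/test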
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Russ Price
>
> For me, Solaris had zero mindshare since its beginning, on account of
> being
> prohibitively expensive.
I hear that a lot, and I don't get it. $400/yr does move it out of peo
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andrej Podzimek
>
> Or Btrfs. It may not be ready for production now, but it could become a
> serious alternative to ZFS in one year's time or so. (I have been using
I will much sooner pay for
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jerome Warnier
>
> Do not forget Btrfs is mainly developed by ... Oracle. Will it survive
> better than Free Solaris/ZFS?
It's gpl. Just as zfs is cddl. They cannot undo, or revoke the free
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> The $400 number is bogus since the amount that Oracle quotes now
> depends on the value of the hardware that the OS will run on. For my
Using the same logic, if I said MS
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tim Cook
>
> The cost discussion is ridiculous, period. $400 is a steal for
> support. You'll pay 3x or more for the same thing from Redhat or
> Novell.
Actually, as a comparison with the mes
> From: Garrett D'Amore [mailto:garr...@nexenta.com]
> Sent: Sunday, August 15, 2010 8:17 PM
>
> (The only way I could see this changing would be if there was a sudden
> license change which would permit either ZFS to overtake btrfs in the
> Linux kernel, or permit btrfs to overtake zfs in the Sol
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
>
> However, if Oracle makes a binary release of BTRFS-derived code, they
> must
> release the source as well; BTRFS is under the GPL.
When a copyright holder releases someth
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>
> Can someone provide a link to the requisite source files so that we
> can see the copyright statements? It may well be that Oracle assigned
> the copyright to some other party.
BTRFS is inside the linux kernel.
Copyright (C)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Miles Nordin
>
>1 /*
>2 * Copyright (C) 2008 Red Hat. All rights reserved.
Holy crap. That's three different results. One said Oracle, one said Red
Hat, and one said FSF. So I wen
Suppose I have a storagepool: /storagepool
And I have snapshots on it. Then I can access the snaps under
/storagepool/.zfs/snapshot
But is there any way to enable this within all the subdirs? For example,
cd /storagepool/users/eharvey/some/foo/dir
cd .zf
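(For completeness, what I already know: the .zfs directory only exists at the root of each filesystem, not in plain subdirectories, and the snapdir property just controls whether it shows up in listings. A sketch of that, assuming the standard zfs properties:)
# make .zfs visible in directory listings at the filesystem root
zfs set snapdir=visible storagepool
# a subdirectory only gets its own .zfs if it is itself a filesystem
zfs create storagepool/users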
> System:
> Dell 2950
> 16G RAM
> 16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no
> extra drive slots, a single zpool.
> svn_124, but with my zpool still running at the 2009.06 version (14).
>
> My plan is to put the SSD into an open disk slot on the 2950, but will
> have to co
> Thanks Ed. It sounds like you have run in this mode? No issues with
> the perc?
> >
> > You can JBOD with the perc. It might be technically a raid0 or
> > raid1 with a
> > single disk in it, but that would be functionally equivalent to JBOD.
The only time I did this was ...
I have a Windows se
> Replacing failed disks is easy when PERC is doing the RAID. Just remove
> the failed drive and replace with a good one, and the PERC will rebuild
> automatically.
Sorry, not correct. When you replace a failed drive, the perc card doesn't
know for certain that the new drive you're adding is mea
> The Intel specified random write IOPS are with the cache enabled and
> without cache flushing. They also carefully only use a limited span
> of the device, which fits most perfectly with how the device is built.
How do you know this? This sounds much more detailed than any average
person could
I built a fileserver on solaris 10u6 (10/08) intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'
However, the new server is too new for 10u6 (10/08) and requires a later
version of Solaris; presently available is 10u8 (10/09).
Is it crazy for me to try the s
> *snip*
> I hope that's clear.
Yes, perfectly clear, and very helpful. Thank you very much.
> It says at the end of the zfs send section of the man page "The format
> of the stream is committed. You will be able to receive your streams on
> future versions of ZFS."
>
> 'Twas not always so. It used to say "The format of the stream is
> evolving. No backwards compatibility is guaranteed. Y
> I previously had a linux NFS server that I had mounted 'ASYNC' and, as
> one would expect, NFS performance was pretty good getting close to
> 900gb/s. Now that I have moved to opensolaris, NFS performance is not
> very good, I'm guessing mainly due to the 'SYNC' nature of NFS. I've
> seen vario
If there were a "zfs send" datastream saved someplace, is there a way to
verify the integrity of that datastream without doing a "zfs receive" and
occupying all that disk space?
I am aware that "zfs send" is not a backup solution, due to vulnerability of
even a single bit error, and lack of granul
> Depending on your version of OS, I think the following post from Richard
> Elling will be of great interest to you:
> -
> http://richardelling.blogspot.com/2009/10/check-integrity-of-zfs-send-streams.html
Thanks! :-)
No, wait!
According to that page, if you "zfs receive -n" then you
> If feasible, you may want to generate MD5 sums on the streamed output
> and then use these for verification.
That's actually not a bad idea. It should be kinda obvious, but I hadn't
thought of it because it's sort-of duplicating existing functionality.
I do have a "multipipe" script that behav
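Something along these lines is presumably what's meant (file names are mine):
# checksum the stream while writing it out, without a second pass
zfs send tank@snap | tee >(md5sum > /backup/tank-snap.md5) > /backup/tank-snap.zfs
# later, recompute and compare against the recorded sum
md5sum /backup/tank-snap.zfs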
Where exactly do you get zstreamdump?
I found a link to zstreamdump.c ... but is that it? Shouldn't it be part of
a source tarball or something?
Does it matter what OS? Every reference I see for zstreamdump is about
opensolaris. But I'm running solaris.
> Gzip can be a bit slow. Luckily there is 'lzop' which is quite a lot
> more CPU efficient on i386 and AMD64, and even on SPARC. If the
> compressor is able to keep up with the network and disk, then it is
> fast enough. See "http://www.lzop.org/".
In my development/testing this week, I did "
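For reference, the way lzop would slot into a send pipeline (host and dataset names are placeholders):
# compress the stream on the wire with lzop, decompress on the far end
zfs send tank@snap | lzop -3c | ssh otherhost 'lzop -dc | zfs receive backup/tank'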
> I use the excellent pbzip2
>
> zfs send ... | tee >(md5sum) | pbzip2 | ssh remote ...
>
> Utilizes those 8 cores quite well :)
This (pbzip2) sounds promising, and it must be better than what I wrote.
;-) But I don't understand the syntax you've got above, using tee,
redirecting to somethi
> OS means Operating System, or OpenSolaris. This is in the second
> meaning I wrote OS in my answer. It was not obvious you were using
> Solaris 10 though. Sorry about that.
>
> (FYI, zstreamdump seems to be an addition to build 125.)
Oh - I never connected OS to OpenSolaris. ;-)
So I gathe
> I see 3.6X less CPU
> consumption from 'lzop -3' than from 'gzip -3'.
Where do you get lzop from? I don't see any binaries on their site, nor
blastwave, nor opencsw. And I am having difficulty building it from source.
Oh well. I built LZO, and can't seem to link it in the lzop build, despite
correctly setting the FLAGS variables they say in the INSTALL file. I'd
love to provide an lzop comparison, but can't get it. I give up ... Also,
can't build python-lzo. Also would be sweet, but hey.
For whoever cares,
> cat my_log_file | tee >(gzip > my_log_file.gz) >(wc -l) >(md5sum) |
> sort | uniq -c
That is great. ;-) Thank you very much.
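(For anyone else puzzled by the >(...) bits: that's just bash process substitution, nothing ZFS-specific. A trivial standalone example:)
# each >(cmd) becomes a temporary pipe; tee writes the same bytes to the
# regular stdout pipeline and to every such command at once
echo "hello" | tee >(wc -c > bytes.txt) >(md5sum > sum.txt) | cat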
> We've been using ZFS for about two years now and make a lot of use of
> zfs
> send/receive to send our data from one X4500 to another. This has been
> working well for the past 18 months that we've been doing the sends.
>
> I recently upgraded the receiving thumper to Solaris 10 u8 and since
> t