I've used the COMPRESS feature for quite a while and you can flip back and
forth without any problem. When you turn compression ON, nothing happens to the
existing data. However, when you start updating your files, all new blocks will
be compressed; so it is possible to have your file be composed
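For anyone who wants to try it, a minimal sketch (the pool/filesystem name 'tank/data' is made up for illustration):
# zfs set compression=on tank/data
# zfs get compression tank/data
Only blocks written after the property change get compressed; anything already on disk stays as-is until it is rewritten.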
No such facility exists to automagically convert an existing UFS filesystem to
ZFS. You have to create a new ZFS pool/filesystem and then move your data.
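The usual route is something along these lines; the device names, pool name, and paths here are hypothetical, and ufsdump/ufsrestore could just as well be cpio or tar:
# zpool create datapool c3t0d0 c3t1d0
# zfs create datapool/export_home
# ufsdump 0f - /export/home | (cd /datapool/export_home && ufsrestore rf -)
Note the example pool above has no redundancy; add 'mirror' or 'raidz' to the zpool create line if ZFS is to provide the protection.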
I've used ZFS since July/August 2006 when Sol 10 Update 2 came out (the first
release to integrate ZFS). I've used it extensively on three servers (an E25K
domain and 2 E2900s); two of them are production. I have over 3TB of storage from
an EMC SAN under ZFS management for no less than 6 months. Like you
I'm not sure what benefit you foresee by running a COW filesystem (ZFS) on a COW
array (NetApp).
Back to our regularly scheduled programming: I still say you should let ZFS manage
JBoD-type storage. I can personally recount the horror of relying upon an
intelligent storage array (EMC DMX3500 in our
You're right that storage-level snapshots are filesystem-agnostic. I'm not sure
why you believe you won't be able to restore individual files using a NetApp
snapshot. In the case of ZFS you'd take a periodic snapshot and use it to
restore files; in the case of NetApp you can do the same (of c
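On the ZFS side the per-file restore is trivial; a sketch with made-up names:
# zfs snapshot tank/home@monday
# cp /tank/home/.zfs/snapshot/monday/somefile /tank/home/somefile
The .zfs directory sits at the root of each filesystem's mountpoint and exposes every snapshot read-only.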
Agreed, I guess I didn't articulate my point/thought very well. The best config
is to present JBoDs and let ZFS provide the data protection. This has been a
very stimulating conversation thread; it is shedding new light on how to best
use ZFS.
Here's another website working on his rescue; my prayers are for a safe return
of this CS icon.
http://www.helpfindjim.com/
I contacted the author of 'lsof' regarding the missing ZFS support. The command
works but fails to display any files opened by a process on a ZFS
filesystem. He indicates that the required ZFS kernel structure definitions
(header files) are not shipped with the OS. He further indicate
I did find zfs.h and libzfs.h (thanks Eric). However, when I try to compile the
latest version (4.87C) of lsof it reports the following header files as missing: dmu.h
zfs_acl.h zfs_debug.h zfs_rlock.h zil.h spa.h zfs_context.h zfs_dir.h
zfs_vfsops.h zio.h txg.h zfs_ctldir.h zfs_ioctl.h zfs_znode.h zio_impl.h
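For what it's worth, those look like the private kernel-side headers; as far as I can tell they live in the OpenSolaris source tree rather than in /usr/include. A hypothetical local checkout would have them under something like this (path is illustrative):
# find /export/onnv-gate/usr/src/uts/common/fs/zfs/sys -name '*.h'
I'd expect dmu.h, spa.h, zio.h and friends to show up there; pointing the lsof build at that directory with -I might get you further, though I haven't tried it myself.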
I think so. After all, there are features shipped which are not fully
baked/guaranteed, like send/receive. Isn't shipping the header files better
than letting developers guess their structure and possibly make mistakes? Of
course the developer can compile against the OpenSolaris source, but far easi
I'm sorry dude, I can't make head or tail of your post. What is your point?
We have Solaris 10 Update 3 (aka 11/06) running on an E2900 (24 x 96). On this
server we've been running a large SAS environment totalling well over 2TB. We
also take daily snapshots of the filesystems and clone them for use by a local
zone. This setup has been in use for well over 6 months.
Star
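The daily cycle is roughly: snapshot, clone, hand the clone to the zone. A sketch with placeholder dataset and zone names (setting the clone's mountpoint under the zone root is just one way to expose it):
# zfs snapshot mtdc/sasdata@nightly
# zfs clone mtdc/sasdata@nightly mtdc/sasdata-nightly
# zfs set mountpoint=/zones/saszone/root/sasdata mtdc/sasdata-nightly
The clone is writable, so the local zone can work against it without touching the production filesystem.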
For whatever reason, EMC notes (on PowerLink) suggest that ZFS is not supported
on their arrays. If one is going to use a ZFS filesystem on top of an EMC array,
be warned about support issues.
I've since stopped making the second clone when I realized the
.zfs/snapshot/ directory still exists after the clone operation is completed.
So my need for the local clone is met by direct access to the snapshot.
However, the poor performance of the destroy is still an issue. It is quite
possible that w
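If the snapshot directory doesn't show up in listings on your filesystem, it can be made visible (names illustrative):
# zfs set snapdir=visible mtdc/sasdata
# ls /sasdata/.zfs/snapshot/nightly
Even with snapdir=hidden the .zfs directory is still reachable by explicit path; the property only controls whether it appears in a directory listing.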
To clarify further: the EMC note "EMC Host Connectivity Guide for Solaris"
indicates that ZFS is supported on 11/06 (aka Update 3) and onwards. However,
they sneak in a cautionary disclaimer that the snapshot and clone features are
supported by Sun. If one reads it carefully, it appears that they do supp
Test setup:
- E2900 with 12 US-IV+ 1.5GHz processors, 96GB memory, 2x2Gbps FC HBAs, MPxIO
in round-robin config.
- 50 x 64GB EMC disks presented across both FCs.
- ZFS pool defined using all 50 disks
- Multiple ZFS filesystems built on the above pool.
I'm observing the following:
- When the
Completely forgot to mention the OS in my previous post; Solaris 10 06/06.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Therein lies my dilemma:
- We know the I/O sub-system is capable of much higher I/O rates.
- Under the test setup I have SAS datasets which lend themselves to
compression. This should manifest itself as lots of read I/O resulting in much
smaller (4x) write I/O due to compression. This m
I have a few questions:
- Does 'zpool iostat' report numbers from the top of the ZFS stack or at the
bottom? I've correlated the zpool iostat numbers with the system iostat numbers
and they match up. This tells me the numbers are from the 'bottom' of the ZFS
stack, right? Having said that it'd be
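For reference, the side-by-side check is something along the lines of (pool name and interval are examples):
# zpool iostat mtdc 5
# iostat -xnz 5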
We're running ZFS with compress=ON on an E2900. I'm hosting SAS/SPDS datasets
(files) on these filesystems and am achieving 1:3.87 compression (as reported by
zfs). Your mileage will vary depending on the data you are writing. If
your data is already compressed (zip files) then don't expect any p
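If you want to see what your own data gets, the ratio is a per-dataset property (dataset name is illustrative):
# zfs get compressratio,used,referenced mtdc/sasdata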
Good start, I'm now motivated to run the same test on my server. My h/w config
for the test will be:
- E2900 (24-way x 96GB)
- 2 x 2Gbps QLogic cards
- 40 x 64GB EMC LUNs
I'll run the AOL de-identified clickstream database. It'll primarily be a write
test. I intend to use the following scenarios:
I finally got around to running a 'benchmark' using the AOL clickstream data
(2GB of text files and approximately 36 million rows). Here are the Oracle
settings during the test:
- Same Oracle settings for all tests
- All disks in question are 32GB EMC hypers
- I had the standard Oracle tablespac
One correction in the interest of full disclosure: tests were conducted on a
machine that is different from the server configuration indicated in my
original post. Here's the server config used in the tests:
- E25K domain (1 board: 4P/8-way x 32GB)
- 2 x 2Gbps FC
- MPxIO
- Solaris 10 Update 2 (06/06); no ot
I'm experiencing a bizarre write performance problem while using a ZFS
filesystem. Here are the relevant facts:
[b]# zpool list[/b]
NAME      SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
mtdc      3.27T   502G   2.78T   14%   ONLINE   -
zfspool   68.5
Here's the information you requested.
Script started on Tue Sep 12 16:46:46 2006
# uname -a
SunOS umt1a-bio-srv2 5.10 Generic_118833-18 sun4u sparc SUNW,Netra-T12
# prtdiag
System Configuration: Sun Microsystems sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 96GB
==
I ran the DTrace script and the resulting output is rather large (1 million
lines and 65MB), so I won't burden this forum with that much data. Here are the
top 100 lines from the DTrace output. Let me know if you need the full output
and I'll figure out a way for the group to get it.
dtrace: de
One more piece of information. I was able to ascertain that the slowdown happens
only when ZFS is used heavily, meaning lots of in-flight I/O. This morning when
the system was quiet my writes to the /u099 filesystem were excellent, and it has
since gone south like I reported earlier.
I am currently awaiting
I did a non-scientific benchmark comparing ASM and ZFS. Just look for my posts
and you'll see it. To summarize, it was a statistical tie for simple loads of
around 2GB of data, and we've chosen to stick with ASM for a variety of reasons,
not the least of which is its ability to rebalance when disks a
I don't see a patch for this on the SunSolve website. I've opened a service
request to get this patch for Sol10 06/06. Stay tuned.
Wow! I solved a tricky problem this morning thanks to Zones & ZFS integration.
We have a SAS SPDS database environment running on Sol10 06/06. The SPDS
database is unique in that when a table is being updated by one user, it is
unavailable to the rest of the user community. Our nightly update job
I've found a small bug in the ZFS & Zones integration in the Sol10 06/06 release.
This evening I started tweaking my configuration to make it consistent (I like
orthogonal naming standards) and hit upon this situation:
- Set up a ZFS clone as /zfspool/bluenile/cloneapps; this is a clone of my
global
Some people have privately asked me for the configuration details at the time
the problem was encountered. Here they are:
zonecfg:bluenile> info
zonepath: /zones/bluenile
autoboot: false
pool:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inhe
You're almost certainly hitting the SSH limitation. Note that SSH/SCP
sessions are single-threaded and won't utilize all of the system resources even
if they are available.
Around 4 months back I was doing some testing between 2 fully configured T2000s
connected using crossover cables and fig
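If you're stuck with scp, the cheap wins are a lighter cipher and several parallel sessions, one per subtree (the host, paths, and availability of the arcfour cipher in your SSH build are assumptions on my part):
# scp -c arcfour /data/tree1.tar remotehost:/restore/ &
# scp -c arcfour /data/tree2.tar remotehost:/restore/ &
# wait
Each scp is still single-threaded, but running a handful in parallel at least spreads the crypto work across cores.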
I'm glad you asked this question. We are currently expecting 3511 storage
sub-systems for our servers. We were wondering about their configuration as
well. This ZFS thing throws a wrench into the old line of thinking ;-) Seriously,
we now have to put on a new hat to figure out the best way to leverage b
Thanks for the stimulating exchange of ideas/thoughts. I've always been a
believer in letting s/w do my RAID functions; for example, in the old days of
VxVM I always preferred to do mirroring at the s/w level. It is my belief that
there is more 'meta' information available at the OS level than at
Today ZFS proved its mettle at our site. We have a set of Sun servers (25Ks and
2900s) that are all connected to a DMX3500 via a SAN. Different servers use the
storage differently; some of the storage on the server side was configured with
ZFS while other parts were configured as UFS filesystems while s
Oh my, one day after I posted my horror story another one strikes. This is
validation of the design objectives of ZFS; it looks like this type of thing
happens more often than not. In the past we'd have just attributed this type of
problem to some application-induced corruption; now ZFS is pinning
Glad it worked for you. I suspect in your case the corruption happened way down
in the tree and you could get around it by pruning the tree (rm'ing the file) below
the point of corruption. I suspect this could be due to a very localized
corruption, like an alpha-particle problem where a bit was flipped
[b]Setting:[/b]
We've been operating in the following setup for well over 60 days.
- E2900 (24 x 96)
- 2 x 2Gbps FC to EMC SAN
- Solaris 10 Update 2 (06/06)
- ZFS with compression turned on
- Global zone + 1 local zone (sparse)
- Local zone is fed ZFS clones from the global zone
[b]Daily Routine
I'm observing the following behavior on our E2900 (24 x 96 config), 2 FCs, and
... I have a large filesystem (~758GB) with compression on. When this
filesystem is under heavy load (>150MB/s) I have problems saving files in 'vi'. I
posted here about it and recall that the issue is addressed in Sol1
Thanks, I just downloaded Update 3 and hopefully the problem will go away.
Our setup:
- E2900 (24 x 96); Solaris 10 Update 2 (aka 06/06)
- 2 x 2Gbps FC HBAs
- EMC DMX storage
- 50 x 64GB LUNs configured in 1 ZFS pool
- Many filesystems created with COMPRESS enabled; specifically I have one that is
768GB
I'm observing the following puzzling behavior:
- We are currently crea
Quick update: since my original post I've confirmed via DTrace (the rwtop script
in the DTraceToolkit) that the application is not generating 150MB/s * compressratio
of I/O. What, then, is causing this much I/O in our system?
I'll see if I can confirm what you are suggesting. Thanks.
We've been using ZFS for at least 3 months in a production environment. Not
only are we using the basic functionality, but we also use the snapshot/cloning
feature heavily along with Zones. We're running the Solaris 10 Update 2 (aka 06/06)
release and are going to Update 3 shortly. Our disk space is large
I have some important information that should shed some light on this behavior:
This evening I created a new filesystem across the very same 50 disks, with the
COMPRESS attribute enabled. My goal was to isolate some workload to the new
filesystem, and I started moving a 100GB directory tree over to the n
I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96,
2x2Gbps, ...):
- I have a compressed ZFS filesystem where I'm creating a large tar file. I
notice that the tar process is running fine (accumulating CPU, truss shows
writes, ...) but for whatever reason the timestamp o
You're probably hitting the same wall/bug that I came across; ZFS in all
versions up to and including Sol10U3 generates excessive I/O when it encounters
'fsync' or if any of the files were opened with the O_DSYNC option.
I do believe Oracle (or any DB for that matter) opens its files with O_DSYNC
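An easy way to confirm it is to watch the opens with truss and look for O_DSYNC in the flags (the PID here is a placeholder for your DB writer process):
# truss -f -p 12345 2>&1 | grep O_DSYNC
truss spells out the open()/open64() flags symbolically, so a hit there means the file really was opened synchronously.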
U3 is under consideration; we're going through some rudimentary testing of the
update.
I ran the following command; I was just creating a tar file:
gtar cf ...
The remote copying was done as follows:
scp -c arcfour . [EMAIL PROTECTED]:/
BTW, the reverse operation of repopulating my FS (by unta
Bug 6413510 is the root cause. ZFS maestros please correct me if I'm quoting an
incorrect bug.
Bag-o-tricks-r-us, I suggest the following in such a case (see the sketch after
this list):
- Two ZFS pools
  - One for production
  - One for education
- Isolate the LUNs feeding the pools if possible; don't share spindles.
Remember, on EMC/Hitachi you have logical LUNs created by striping/concatenating
carved-up physical disks, so
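A sketch of the two-pool layout, with hypothetical LUN names and nothing shared between the two sets of spindles:
# zpool create prodpool c4t0d0 c4t1d0 c4t2d0 c4t3d0
# zpool create edupool c5t0d0 c5t1d0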
I did some straight-up Oracle/ZFS testing but not on zvols. I'll give it a shot
and report back; next week is the earliest.
I can vouch for this situation. I had to go through a long maintenance window to
accomplish the following:
- 50 x 64GB drives in a zpool; I needed to separate 15 of them out due to
performance issues. There was no need to increase storage capacity.
Because I couldn't yank 15 drives from the existing
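For anyone in the same boat: since devices can't be evacuated from a pool (at least not in these releases), the only route I know of is to build a second pool on the LUNs you want to split off and migrate the data, roughly like this (pool, dataset, and device names are placeholders):
# zpool create fastpool c6t0d0 c6t1d0 c6t2d0
# zfs snapshot mtdc/hotdata@move
# zfs send mtdc/hotdata@move | zfs receive fastpool/hotdata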