If you are using three 3511s, isn't it possible that your 3GB workload will be
largely or entirely served out of RAID controller cache?
Also, a question about your production backups (millions of small files): do you
have atime=off set on those filesystems? That might help.
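If not, it can be turned on the fly; for example (the dataset name here is just a
placeholder):
zfs set atime=off backuppool/backups
zfs get atime backuppool/backups    # confirm the setting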
--
Hi Yariv -
It is hard to say without more data, but you might be a victim of
"Stop looking and start ganging":
http://bugs.opensolaris.org/view_bug.do?bug_id=6596237
It looks like this was fixed in S10u8, which was released last month.
If you open a support ticket (or search for this bug ID), you should be able to
confirm whether a patch is available for your release.
There is a calculator at Corporate Strategies:
http://ctistrategy.com/resources/sun-7000-calculator/
Note that if the ctistrategy site is unavailable for some reason, you can also
just download the free 7000 series virtual appliance which will run happily in
VMWare or VirtualBox.
--
I know dedup is on the roadmap for the 7000 series, but I don't think it is
officially supported yet; if it were, we would have seen a note about the
software release on the FishWorks Wiki:
http://wikis.sun.com/display/FishWorks/Software+Updates
--
The OpenSolaris "Just enough OS" (JeOS) project has been working on making
stripped down images available for virtual machines as well as automated
installer profiles.
See: http://hub.opensolaris.org/bin/view/Project+jeos/WebHome
for the project home page.
Also, a frequently updated blog on the
/dev/rdsk/* devices are character-based devices, not block-based. In general,
character devices have to be accessed serially (and don't do buffering), whereas
block devices buffer and allow random access to the data. If you use:
ls -lL /dev/*dsk/c3d1p0
you should see that the /dev/dsk entry is a block device ('b' as the first
character of the listing) while the /dev/rdsk entry is a character device ('c').
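For example, you should get output along these lines (permissions and major/minor
numbers will differ on your system); note the leading 'b' versus 'c':
brw-r----- 1 root sys 102, 16 Apr  7 09:00 /dev/dsk/c3d1p0
crw-r----- 1 root sys 102, 16 Apr  7 09:00 /dev/rdsk/c3d1p0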
The ZFS kernel modules handle the caching/flushing of data across all the
devices in the zpools, using a different method than the "standard" virtual
memory system used by traditional file systems like UFS. Try defining your
NVRAM card as a ZFS log device using its /dev/dsk/xyz (block) path.
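For example (c2t0d0 here is just a stand-in for your NVRAM card's device name):
zpool add mypool log c2t0d0
zpool status mypool    # the card should show up under a separate "logs" section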
--
Gary -
Besides the network questions...
What does your zpool status look like?
Are you using compression on the file systems?
(ZFS compression was single-threaded until it was fixed in s10u4 or the
equivalent patches.)
--
You might also want to try toggling the TCP Nagle setting (tcp_naglim_def) to see
if that helps with your workload:
ndd -get /dev/tcp tcp_naglim_def
(save that value, default is 4095)
ndd -set /dev/tcp tcp_naglim_def 1
If there is no difference (or a negative one), set it back to the original value:
ndd -set /dev/tcp tcp_naglim_def 4095    (or whatever value you saved above)
How are the two sides different? If you run something like 'openssl md5' (or
md5sum) on the same file on both sides, is it much faster on one side?
Does one machine have a lot more memory/ARC, allowing it to skip the physical
reads? Is the dataset compressed on one side?
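One quick (if rough) check is to time a read of the same large file on each host,
e.g. (the path is hypothetical):
time openssl md5 /tank/data/bigfile
Running it twice on each side also shows how much the ARC is helping, since the
second run should come mostly from cache.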
--
Keep in mind that if you use ZFS you get a lot of additional functionality, such
as snapshots, compression, and clones.
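For example (dataset names are made up):
zfs snapshot tank/home@before_upgrade                 # point-in-time snapshot
zfs set compression=on tank/home                      # transparent compression
zfs clone tank/home@before_upgrade tank/home_test     # writable clone of the snapshot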
--
I don't understand your statement/questions. This wasn't a response to "ZFS
versus every possible storage platform in the world". The original poster was
asking about comparing ZFS with hardware RAID on the specific machines mentioned
in the title. AFAIK you don't get compression, snapshots, or clones from the
hardware RAID alone.
As others have mentioned, it would be easier to take a stab at this if there were
some more data to look at.
Have you done any ZFS tuning? If so, please provide the /etc/system entries, adb
settings, zfs tunables, etc.
Can you provide zpool status output?
As far as checking ls performance, just to remove name service lookups from the
picture, you could compare ls -l against ls -ln (numeric uid/gid).
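Roughly the sort of thing that is useful to post (the pool name is a placeholder):
zpool status -v mypool
grep -i zfs /etc/system                  # any zfs/arc tunables that were set
zfs get all mypool | grep -v default     # non-default dataset properties
echo ::memstat | mdb -k                  # rough view of kernel memory usage (as root)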
--
If the fix is put into Solaris 10 update 4 (as Matt expects), it should also
trickle into the Recommended & Security (R&S) patch cluster.
--
I'm running Nevada build 60 inside VMWare; it is a test rig with no data of
value.
SunOS b60 5.11 snv_60 i86pc i386 i86pc
I wanted to check out the FMA handling of a serious zpool error, so I did the
following:
2007-04-07.08:46:31 zpool create tank mirror c0d1 c1d1
2007-04-07.15:21:37 zpool scrub tank
One option is to replace all the existing devices in a raidz vdev with larger
devices; after an export/import of the pool, the vdev will grow to the new size.
I agree that you simply can't add a single device to grow a raidz vdev.
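A sketch of the sequence for a hypothetical 3-disk raidz vdev (device names are
invented; wait for each resilver to complete before starting the next replace):
zpool replace tank c1t1d0 c2t1d0
zpool replace tank c1t2d0 c2t2d0
zpool replace tank c1t3d0 c2t3d0
zpool export tank
zpool import tank
zpool list tank     # the additional capacity should now show up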
--
There was some discussion of the "always panic for fatal pool failures" issue
in April 2006, but I haven't seen whether an actual RFE was filed:
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/017276.html
--
Why are you using software-based RAID 5/RAIDZ for the tests? I didn't think
this was a common setup in cases where file system performance was the primary
consideration.
--
Hi Lori,
Thanks to you and your team for posting the zfs boot image kit. I was able
to jumpstart a VMWare virtual machine using a Nevada b62 image patched with
your conversion kit and it went very smoothly.
Here is the profile that I used:
# Jumpstart profile for VMWare image w/ two emulated disks
I've only used Lori Alt's patch for b62 boot images via jumpstart
(http://www.opensolaris.org/jive/thread.jspa?threadID=28725&tstart=15)
which made it an easy process to set up mirrored ZFS boot drives with no UFS
partitions required. If you have a jumpstart server, I think that is the best
way to go.
I think it would be handy if a utility could read a full zfs snapshot and
restore subsets of files or directories, similar to using tar -xf or
ufsrestore -i.
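The closest workaround I know of today is to receive the snapshot into a scratch
area and copy files back out, which needs space for the whole stream (names below
are made up):
zfs send tank/data@mysnap | zfs receive scratch/restore
cp -p /scratch/restore/some/file /tank/data/some/file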
--
An example would be if you had a raw snapshot on tape. A single file or subset
of files could be restored from it without needing the space to load the full
snapshot into a zpool. This would be handy if you have a zpool with 500GB of
space and 300GB used: if the snapshot were 250GB, there wouldn't be enough free
space to receive the whole thing just to pull a few files out.
It would be really handy if whoever was responsible for the message at:
http://www.sun.com/msg/ZFS-8000-A5
could add data about which zpool versions are supported at specific OS/patch
releases.
The current message doesn't help the user figure out how to accomplish their
implied task, which is to work out what OS release or patch level they need in
order to import and use the pool.
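In the meantime, about the best a user can do locally is check what the running
OS understands, e.g.:
zpool upgrade -v    # lists the on-disk pool versions this release supports
zpool upgrade       # shows any pools running older versions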
In addition to Brendan's advice about benchmarking, it would be a good idea to
use the newer Solaris release (Solaris 10 08/07), which has a lot of ZFS
improvements (both performance and functionality).
--
If you are using 6 Thumpers via iSCSI to provide storage for your zpool and
don't use either mirroring or RAIDZ/RAIDZ2 across the Thumpers, then if one
Thumper goes down your whole storage pool becomes unavailable. I think you want
some form of RAID at both levels.
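For example, with one iSCSI LUN exported from each Thumper (the device names
below are invented), mirroring across the boxes keeps the pool available if a
whole Thumper dies:
zpool create tank mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0 mirror c4t5d0 c4t6d0
   (each mirror pair built from LUNs on two different Thumpers)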
--
If this is reproducible, can you force a panic so it can be analyzed?
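If the machine is still responsive, something along these lines will force a
crash dump that can be analyzed:
savecore -L    # live crash dump without taking the system down
reboot -d      # or: force a dump and reboot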
--
Hi Matt,
Interesting proposal. Has there been any consideration of whether the free space
reported for a ZFS filesystem would take the copies setting into account?
Example:
zfs create mypool/nonredundant_data
zfs create mypool/redundant_data
df -h /mypool/nonredundant_data /mypool/redundant_data
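Presumably there is a step like the following in between (that is what would make
the redundant_data filesystem redundant):
zfs set copies=2 mypool/redundant_data
As it stands, df presumably reports the same available space for both filesystems
even though every block written to the copies=2 one consumes twice the space.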