Here's an example of a ZFS-based product you can buy with a large
number of disks in the volume:
http://www.aberdeeninc.com/abcatg/petarack.htm
360 3T drives
A full petabyte of storage (1080TB) in a single rack, under a single
namespace or volume
On Sat, Oct 6, 2012 at 11:48 AM, Richard Elling
Reducing the record size would negatively impact performance. For the rationale why, see the section titled "Match Average I/O Block Sizes" in my blog post on filesystem caching: http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.html
Brad
Brad Diggs | Principal Sales
S11 FCS
Brad
Brad Diggs | Principal Sales Consultant | 972.814.3698 | eMail: brad.di...@oracle.com | Tech Blog: http://TheZoneManager.com | LinkedIn: http://www.linkedin.com/in/braddiggs
On Dec 29, 2011, at 8:11 AM, Robert Milkowski wrote: And these results are from S11 FCS I assume. On older builds or
effectively leverage this caching potential, that won't happen. OUD far outperforms ODSEE. That said OUD may get some focus in this area. However, time will tell on that one. For now, I hope everyone benefits from the little that I did validate. Have a great day!
Brad
Brad Diggs | Principal
/2010/02/directory-data-priming-strategies.html Thanks again! Brad
Brad Diggs | Principal Sales Consultant | Tech Blog: http://TheZoneManager.com | LinkedIn: http://www.linkedin.com/in/braddiggs
On Dec 8, 2011, at 4:22 PM, Mark Musante wrote: You can see the original ARC case here: http://arc.opensolaris.org
deduplication that the L1ARC will also only require 1TB of RAM for the data. Note that I know the deduplication table will use the L1ARC as well. However, the focus of my question is on how the L1ARC would benefit from a data caching standpoint. Thanks in advance! Brad
Brad Diggs | Principal Sales
3G per TB would be a better ballpark estimate.
On Wed, Jun 15, 2011 at 8:17 PM, Daniel Carosone wrote:
> On Wed, Jun 15, 2011 at 07:19:05PM +0200, Roy Sigurd Karlsbakk wrote:
>>
>> Dedup is known to require a LOT of memory and/or L2ARC, and 24GB isn't
>> really much with 34TBs of data.
>
> The f
Thank you for your insight. This is a system that was handed down to me when
another sysadmin went to greener pastures. There were no quotas set on the
system. I used zfs destroy to free up some space and did put a quota on it. I
still have 0 freespace available. I think this is due to the
> As for certified systems, It's my understanding that Nexenta themselves don't
> "certify" anything. They have systems which are recommended and supported by
> their network of VAR's.
The certified solutions listed on Nexenta's website were certified by Nexenta.
I am new to OpenSolaris and I have been reading about and seeing screenshots of
the ZFS Administration Console. I have been looking at the dates on it and
every post is from about two years ago. I am just wondering: is this option no
longer available on OpenSolaris, and if it is, how do I set it
For de-duplication to perform well you need to be able to fit the de-dup table
in memory. Is a good rule of thumb for the RAM needed: size = (pool capacity / avg
block size) * 270 bytes? Or perhaps it's that figure divided by the expected dedup ratio?
And if you limit de-dup to certain datasets in the pool, how would this
calc
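For a rough feel of what that rule of thumb implies, here is a purely illustrative calculation (the 34TB pool size and the block sizes are assumptions, not from any particular system):

  34TB / 128KB average block size  ≈ 260 million blocks
  260 million blocks * 270 bytes   ≈ 70GB of dedup table

At an 8KB average block size the same pool works out to roughly 1.1TB. Since the DDT only holds unique blocks, dividing by the expected dedup ratio gives the tighter estimate.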
Ed,
See my answers inline:
"I don't think your question is clear. What do you mean "on oracle backed by
storage luns?""
We'll be using luns from a storage array vs ZFS-controlled disks. The luns are
mapped to the db server and from there initialized under ZFS.
" Do you mean "on oracle hardware?"
Has anyone done much testing of just using the solid state devices (F20 or F5100) as devices for ZFS pools? Are there any concerns with running in this mode versus using solid state devices for L2ARC cache? Second, has anyone done this sort of testing with MLC based solid state drives? What has your
Hi! I'd been scouring the forums and web for admins/users who deployed zfs
with compression enabled on Oracle backed by storage array luns.
Any problems with cpu/memory overhead?
Correct, but presumably "for a limited time only". I would think that over time
as the technology improves that the default would change.
Just wanted to make a quick announcement that there will be an OpenStorage
Summit in Palo Alto, CA in late October. The conference should have a lot of
good OpenSolaris talks, with ZFS experts such as Bill Moore, Adam Leventhal,
and Ben Rockwood already planning to give presentations. The confere
Peter -
Here is an example, where the company myco wants to add a property "myprop"
to a file system "myfs" contained
within the pool "mypool".
zfs set myco:myprop=11 mypool/myfs
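To read the property back later, or to remove it, the standard property commands work (this is just the obvious follow-up, not part of Peter's original question):

zfs get myco:myprop mypool/myfs
zfs inherit myco:myprop mypool/myfs    # clears the user property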
On Mon, Aug 2, 2010 at 1:45 PM, Peter Taps wrote:
> Folks,
>
> I need to store some application-specific settings fo
to have someone do some benchmarking of MySQL in a cache optimized server with F20 PCIe flash cards but never got around to it. So, if you want to get all of the caching benefits of DmCache, just run your app on Solaris 10 today. ;-) Have a great day!
Brad
Brad Diggs | Principal Security Sales
Thanks!
I yanked a disk to simulate a failure in the test pool and test hot spare failover
- everything seemed fine until the copy back completed. The hot spare is still
showing as INUSE... do we need to remove the spare from the pool to get it to
detach?
# zpool status
pool: ZPOOL.TEST
state: ONLINE
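(If it never clears on its own, manually detaching the spare should return it to the AVAIL list; the spare's device name below is just a placeholder:)

# zpool detach ZPOOL.TEST c1t9d0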
The reason I asked was just to understand how those attributes play with
ufs/vxfs...
What's the default size of the file system cache for Solaris 10 x86, and can it
be tuned?
I read various posts on the subject and it's confusing..
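As far as I recall, on Solaris 10 the ARC is allowed to grow to roughly all of physical memory less 1GB (or 3/4 of memory on smaller machines) unless you cap it. A quick way to check, and an example of capping it (the 8GB value is only an example):

kstat -p zfs:0:arcstats:size     # current ARC size in bytes
kstat -p zfs:0:arcstats:c_max    # current upper limit

and in /etc/system, followed by a reboot:

set zfs:zfs_arc_max=0x200000000  # cap the ARC at 8GB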
thanks - :)
Hmm, so that means read requests are being fulfilled by the ARC?
Am I correct in assuming that because the ARC is fulfilling read
requests, the zpool and l2arc are barely touched?
I'm not showing any data being populated in the L2ARC or ZIL SSDs with a J4500
(48 - 500GB SATA drives).
# zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
What build are you on?
zpool import hangs for me on b134.
On Wed, Apr 21, 2010 at 9:21 AM, John Balestrini wrote:
> Howdy All,
>
> I have a raidz pool that hangs the system when importing. I attempted a
> pfexec zpool import -F pool1 (which has been importing for two days with no
> result), but d
I'm wondering if the author is talking about "cache mirroring" where the cache
is mirrored between both controllers. If that is the case, is he saying that
for every write to the active controller, a second write is issued on the passive
controller to keep the cache mirrored?
I had always thought that with mpxio, it load-balances IO requests across your
storage ports but this article
http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/ has
got me thinking it's not true.
"The available bandwidth is 2 or 4Gb/s (200 or 400MB/s – FC frames are 10 byt
Marion - Do you happen to know which SAS HBA it applies to?
Since the J4500 doesn't have an internal SAS controller, would it be safe to say
that ZFS cache flushes would be handled by the host's SAS HBA?
Is there any way to assign a unique name or id to a disk that is part of a zpool?
Don't use raidz for the raid type - go with a striped set
We're running 10/09 on the dev box but 11/06 is prodqa.
y ignores cache flushes from zfs?
Brad
With the default compression scheme (LZJB), how does one calculate the ratio
or amount compressed ahead of time when allocating storage?
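I don't believe there is a way to predict the LZJB ratio up front, since it depends entirely on the data; the usual approach is to copy a representative sample into a dataset with compression enabled and read the ratio back (dataset name below is a placeholder):

zfs get compression,compressratio pool/fs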
Hi! So after reading through this thread and checking the bug report...do we
still need to tell zfs to disable cache flush?
set zfs:zfs_nocacheflush=1
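(For what it's worth, you can check what the running kernel currently has with something like:)

# echo zfs_nocacheflush/D | mdb -k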
Did you buy the SSDs directly from Sun? I've heard there could possibly be
firmware that's vendor specific for the X25-E.
I was reading your old posts about load-shares
http://opensolaris.org/jive/thread.jspa?messageID=294580 .
So between raidz and load-share "striping", raidz stripes a file system block
evenly across each vdev but with load sharing the file system block is written
on a vdev that's not filled up
"Zfs does not do striping across vdevs, but its load share approach
will write based on (roughly) a round-robin basis, but will also
prefer a less loaded vdev when under a heavy write load, or will
prefer to write to an empty vdev rather than write to an almost full
one."
I'm trying to visualize t
@hortnon - ASM is not within the scope of this project.
Can anyone recommend an optimum and redundant striped configuration for an X4500?
We'll be using it for an OLTP (Oracle) database and will need the best performance.
Is it also true that the reads will be load-balanced across the mirrors?
Is this considered a raid 1+0 configuration?
zpool create -f
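(For illustration only, a stripe of mirrors with hypothetical device names would look something like the following; ZFS dynamically stripes across the mirror vdevs, which is effectively RAID 1+0, and reads are balanced across both sides of each mirror:)

zpool create -f tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0 mirror c0t2d0 c1t2d0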
Richard,
"Yes, write cache is enabled by default, depending on the pool configuration."
Is it enabled for a striped (mirrored configuration) zpool? I'm asking because
of a concern I've read on this forum about a problem with SSDs (and disks)
where if a power outage occurs any data in cache woul
"(Caching isn't the problem; ordering is.)"
Weird I was reading about a problem where using SSDs (intel x25-e) if the power
goes out and the data in cache is not flushed, you would have loss of data.
Could you elaborate on "ordering"?
Has anyone worked with an X4500/X4540 and know if the internal raid controllers
have a BBU? I'm concerned that we won't be able to turn off the write-cache on
the internal HDs and SSDs to prevent data corruption in case of a power failure.
Hi Adam,
From your picture, it looks like the data is distributed evenly (with the
exception of parity) across each spindle then wrapping around again (final 4K)
- is this one single write operation or two?
| P | D00 | D01 | D02 | D03 | D04 | D05 | D06 | D07 |  <- one write op
If an 8K file system block is written on a 9 disk raidz vdev, how is the data
distributed (written) between all devices in the vdev, since a zfs write is
one continuous IO operation?
Is it distributed evenly (1.125KB) per device?
@ross
"If the write doesn't span the whole stripe width then there is a read
of the parity chunk, write of the block and a write of the parity
chunk which is the write hole penalty/vulnerability, and is 3
operations (if the data spans more then 1 chunk then it is written in
parallel so you can thi
Hi! I'm attempting to understand the pros/cons between raid5 and raidz after
running into a performance issue with Oracle on zfs
(http://opensolaris.org/jive/thread.jspa?threadID=120703&tstart=0).
I would appreciate some feedback on what I've understood so far:
WRITES
raid5 - A FS block is
@relling
"For small, random read IOPS the performance of a single, top-level
vdev is
performance = performance of a disk * (N / (N - P))
133 * 12/(12-1)=
133 * 12/11
where,
N = number of disks in the vdev
P = number of parity devices in the vdev"
performance of a dis
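(Completing the arithmetic above: 133 * 12/11 ≈ 145 small random read IOPS for the whole 12-disk raidz vdev, i.e. only marginally better than a single disk.)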
@eric
"As a general rule of thumb, each vdev has the random performance
roughly the same as a single member of that vdev. Having six RAIDZ
vdevs in a pool should give roughly the performance as a stripe of six
bare drives, for random IO."
It sounds like we'll need 16 vdevs striped in a pool to at
@ross
"Because each write of a raidz is striped across the disks the
effective IOPS of the vdev is equal to that of a single disk. This can
be improved by utilizing multiple (smaller) raidz vdevs which are
striped, but not by mirroring them."
So with random reads, would it perform better on a rai
Thanks for the suggestion!
I have heard mirrored vdev configurations are preferred for Oracle, but what's
the difference between a zpool of mirrored vdevs and a raid10 setup?
We have tested a zfs stripe configuration before with 15 disks and our tester
was extremely happy with the performance. After
"This doesn't make sense to me. You've got 32 GB, why not use it?
Artificially limiting the memory use to 20 GB seems like a waste of
good money."
I'm having a hard time convincing the dbas to increase the size of the SGA to
20GB because their philosophy is, no matter what eventually you'll have
"Try an SGA more like 20-25 GB. Remember, the database can cache more
effectively than any file system underneath. The best I/O is the I/O
you don't have to make."
We'll be turning up the SGA size from 4GB to 16GB.
The arc size will be set from 8GB to 4GB.
"This can be a red herring. Judging by t
Richard - the l2arc is c1t13d0. What tools can be used to show the l2arc stats?
raidz1      2.68T   580G    543    453  4.22M  3.70M
  c1t1d0       -      -     258    102   689K   358K
  c1t2d0       -      -     256    103   684K   354K
  c1t3d0       -      -     258    102   690K   359K
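One option is the l2_* counters carried in the ARC kstats, e.g.:

kstat -p zfs:0:arcstats | grep l2_    # l2_hits, l2_misses, l2_size, l2_hdr_size, ...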
repost - Sorry for ccing the other forums.
I'm running into an issue where there seems to be a high number of read iops
hitting disks and physical free memory is fluctuating between 200MB -> 450MB
out of 16GB total. We have the l2arc configured on a 32GB Intel X25-E ssd and
slog on another 32GB
I'm running into an issue where there seems to be a high number of read iops
hitting disks and physical free memory is fluctuating between 200MB -> 450MB
out of 16GB total. We have the l2arc configured on a 32GB Intel X25-E ssd and
slog on another 32GB X25-E ssd.
According to our tester, Oracle
Have you considered running your script with ZFS pre-fetching disabled
altogether to see if
the results are consistent between runs?
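(If you want to try that, something along these lines should do it; re-enable it afterwards. The /etc/system entry needs a reboot, while the mdb form takes effect immediately:)

set zfs:zfs_prefetch_disable=1

# echo zfs_prefetch_disable/W0t1 | mdb -kw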
Brad
Brad Diggs
Senior Directory Architect
Virtualization Architect
xVM Technology Lead
Sun Microsystems, Inc.
Phone x52957/+1 972-992-0002
Mail
You might want to have a look at my blog on filesystem cache
tuning... It will probably help
you to avoid memory contention between the ARC and your apps.
http://www.thezonemanager.com/2009/03/filesystem-cache-optimization.html
Brad
Brad Diggs
Senior Directory Architect
Hi Victor,
Yes, you may access the system via ssh. Please contact me at bar001 at uark dot
edu and I will reply with details of how to connect.
Thanks,
Brad
t 2435913 tank' has output and is very long...is there anything
I should be looking for? Without -t 243... this command failed on dmu_read, now
it just keeps going forever.
Your help is much appreciated.
Thanks,
Brad
Uberblock
magic = 00bab10c
version = 4
txg = 2435911
guid_sum = 16655261404755214374
timestamp = 1240287900 UTC = Mon Apr 20 23:25:00 2009
Thanks,
Brad
Hi Victor,
Here's the output of 'zdb -e -bcsvL tank' (similar to above but with -c).
Thanks,
Brad
Traversing all blocks to verify checksums ...
zdb_blkptr_cb: Got error 50 reading <0, 11, 0, 0> [L0 packed nvlist]
4000L/4000P DVA[0]=<0:2500014000:4000> DVA[1]=<
Here's the output of 'zdb -e -bsvL tank' (without -c) in case it helps. I'll
post with -c if it finishes.
Thanks,
Brad
Traversing all blocks ...
block traversal size 431585053184 != alloc 431585209344 (unreachable 156160)
bp count: 4078410
bp logi
:56 2009
Thanks for your help,
Brad
post above but the solution doesn't work.
Please let me know if you need any other information.
Thanks,
Brad
bash-3.2# zpool import
pool: tank
id: 4410438565134310480
state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported u
I've run into this too... I believe the issue is that the block
size/allocation unit size in ZFS is much larger than the default size
on older filesystems (ufs, ext2, ext3).
The result is that if you have lots of small files smaller than the
block size, they take up more total space on the filesy
If you have an older Solaris release using ZFS and Samba, and you upgrade to a
version with CIFS support, how do you ensure the file systems/pools have
casesensitivity mixed?
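As far as I know, casesensitivity can only be set when a dataset is created, so filesystems created before the upgrade keep whatever they were created with. You can verify, and if necessary create a new dataset and copy the data over (dataset names below are placeholders):

zfs get casesensitivity tank/share
zfs create -o casesensitivity=mixed tank/share2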
ld and see if that makes any kind of
difference.
Thanks for the suggestions.
Brad
> Just a thought, but have you physically disconnected
> the bad disk? It's not unheard of for a bad disk to
> cause problems with others.
>
> Failing that, it's the "corrupted data"
I do, thank you. The disk that went out sounds like it had a head crash or some
such - loud clicking shortly after spin-up then it spins down and gives me
nothing. BIOS doesn't even detect it properly to do a firmware update.
> Do you know 7200.11 has firmware bugs?
>
> Go to seagate website
r...@opensolaris:~# zpool import -f tank
internal error: Bad exchange descriptor
Abort (core dumped)
Hoping someone has seen that before... the Google is seriously letting me down
on that one.
> I guess you could try 'zpool import -f'. This is a
> pretty odd status,
> I think. I'm pretty sure
Any ideas on this? It looks like a potential bug to me, or there is something
that I'm not seeing.
Thanks again!
> I've seen reports of a recent Seagate firmware update
> bricking drives again.
>
> What's the output of 'zpool import' from the LiveCD?
> It sounds like
> more than 1 drive is dropping off.
r...@opensolaris:~# zpool import
pool: tank
id: 16342816386332636568
state: FAULTED
status: The p
> I would get a new 1.5 TB and make sure it has the new
> firmware and replace
> c6t3d0 right away - even if someone here comes up
> with a magic solution, you
> don't want to wait for another drive to fail.
The replacement disk showed up today but I'm unable to replace the one marked
UNAVAIL:
Sure, and thanks for the quick reply.
Controller: Supermicro AOC-SAT2-MV8 plugged into a 64-bit PCI-X 133 bus
Drives: 5 x Seagate 7200.11 1.5TB disks for the raidz1.
Single 36GB western digital 10krpm raptor as system disk. Mate for this is in
but not yet mirrored.
Motherboard: Tyan Thunder K8W S
Greetings!
I lost one out of five disks on a machine with a raidz1 and I'm not sure
exactly how to recover from it. The pool is marked as FAULTED which I certainly
wasn't expecting with only one bum disk.
r...@blitz:/# zpool status -v tank
pool: tank
state: FAULTED
status: One or more devic
Well if I do fsstat mountpoint on all the filesystems in the ZFS pool, then I
guess my aggregate number for read and write bandwidth should equal the
aggregate numbers for the pool? Yes?
The downside is that fsstat has the same granularity issue as zpool iostat.
What I'd really like is nread an
I'd like to track a server's ZFS pool I/O throughput over time. What's a good
data source to use for this? I like zpool iostat for this, but if I poll at two
points in time I would get a number since boot (e.g. 1.2M) and a current number
(e.g. 1.3K). If I use the current number then I've lost da
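One workaround is to let zpool iostat compute the deltas itself by passing an interval; the first report is still the since-boot total, but every subsequent line is per-interval (pool name and the 60-second interval are just examples):

# zpool iostat tank 60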
> Are you sure this isn't a case of CR 6433264 which
> was fixed
> long ago, but arrived in patch 118833-36 to Solaris
> 10?
It certainly looks similar, but this system already had 118833-36 when the
error occurred, so if this bug is truly fixed, it must be something else. Then
again, I wasn't
Problem solved... after the resilvers completed, the status reported that the
filesystem needed an upgrade.
I did a zpool upgrade -a, and after that completed and there was no resilvering
going on, the zpool add ran successfully.
I would like to suggest, however, that the behavior be fixed --
I'm trying to add some additional devices to my existing pool, but it's not
working. I'm adding a raidz group of 5 300 GB drives, but the command always
fails:
r...@kronos:/ # zpool add raid raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0
Assertion failed: nvlist_lookup_string(cnv, "path", &path) ==
Thanks for the response Peter. However, I'm not looking to create a different
boot environment (bootenv). I'm actually looking for a way within JumpStart to
separate out the ZFS filesystems from a new installation to have better control
over quotas and reservations for applications that usuall
Does anyone know of a way to specify the creation of ZFS file systems for a ZFS
root pool during a JumpStart installation? For example, creating the following
during the install:
Filesystem      Mountpoint
rpool/var       /var
rpool/var
> - on a sun cluster, luns are seen on both nodes. Can
> we prevent mistakes like creating a pool on already
> assigned luns ? for example, veritas wants a "force"
> flag. With ZFS i can do :
> node1: zpool create X add lun1 lun2
> node2 : zpool create Y add lun1 lun2
> and then, results are unexpe
Great point. Hadn't thought of it in that way.
I haven't tried truncating a file prior to trying
to remove it. Either way though, I think it is a
bug if once the filesystem fills up, you can't remove
a file.
Brad
On Thu, 2008-06-05 at 21:13 -0600, Keith Bierman wrote:
> On Ju
2:15 f2
Is there an existing bug on this that is going to address
enabling the removal of a file without the pre-requisite
removal of a snapshot?
Thanks in advance,
Brad
--
-
_/_/_/ _/_/ _/
Solaris 10 update 5 was released 05/2008, but no zpool shrink :-( Any update?
How do you ascertain the current zfs vdev cache size (e.g.
zfs_vdev_cache_size) via mdb or kstat or any other cmd?
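(I was expecting something along the lines of the following, but I'm not certain it's the right way to read it:)

# echo zfs_vdev_cache_size/D | mdb -k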
Thanks in advance,
Brad
--
The Zone Manager
http://TheZoneManager.COM
http://opensolaris.org/os/project/zonemgr
great feature.
Just some food for thought.
Thanks in advance,
Brad
Hello,
Is the gzip compression algorithm planned to be in Solaris 10 Update 5?
Thanks in advance,
Brad
--
The Zone Manager
http://TheZoneManager.COM
http://opensolaris.org/os/project/zonemgr
Hello Darren,
Please find responses in line below...
On Fri, 2008-02-08 at 10:52 +, Darren J Moffat wrote:
> Brad Diggs wrote:
> > I would like to use ZFS but with ZFS I cannot prime the cache
> > and I don't have the ability to control what is in the cache
> > (e
Thanks in advance,
Brad
> OK, you asked for "creative" workarounds... here's one (though it requires
> that the filesystem be briefly unmounted, which may be deal-killing):
That is, indeed, creative. :) And yes, the unmount makes it
impractical in my environment.
I ended up going back to rsync, because we had mor
Just wanted to voice another request for this feature.
I was forced on a previous Solaris10/ZFS system to rsync whole filesystems, and
snapshot the backup copy to prevent the snapshots from negatively impacting
users. This obviously has the effect of reducing available space on the system
by o
> > At the moment, I'm hearing that using h/w raid under my zfs may be
> >better for some workloads and the h/w hot spare would be nice to
> >have across multiple raid groups, but the checksum capabilities in
> >zfs are basically nullified with single/multiple h/w lun's
> >resulting in "reduced pro
Did you find a resolution to this issue?
write cache was enabled on all the ZFS drives, but disabling it gave a
negligible speed improvement: (FWIW, the pool has 50 drives)
(write cache on)
/bin/time tar xf /tmp/vbulletin_3-6-4.tar
real 51.6
user        0.0
sys 1.0
(write cache off)
/bin/time tar xf /tmp/vbulletin_
Ah, thanks -- reading that thread did a good job of explaining what I was
seeing. I was going
nuts trying to isolate the problem.
Is work being done to improve this performance? 100% of my users are coming in
over NFS,
and that's a huge hit. Even on single large files, writes are slower by a
I had a user report extreme slowness on a ZFS filesystem mounted over NFS over
the weekend.
After some extensive testing, the extreme slowness appears to only occur when a
ZFS filesystem is mounted over NFS.
One example is doing a 'gtar xzvf php-5.2.0.tar.gz'... over NFS onto a ZFS
filesyste