I might be mistaken, but it looks like 3ware does have a driver, several in
fact:
http://www.3ware.com/support/downloadpageprod.asp?pcode=9&path=Escalade9500SSeries&prodname=3ware%209500S%20Series
Any comment on this? I'm thinking about picking up a server with this card,
and it would be cool
I've seen similar issues. However, it appears most of my problems stem from
ZFS. I'd do something ZFS doesn't like and then I'd have to power cycle the
server to get it back. I actually wrote a large post about my experiences with
b134 and ZFS:
http://opensolaris.org/jive/thread.jspa?message
I have been working on the same problem now for almost 48 straight hours. I
have managed to recover some of my data using
zpool import -f pool
The command never completes, but you can do a
zpool list
and
zpool status
and you will see the pool.
Then you do
zfs list
and the file systems show up, all of them.
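In case it helps the next person, the whole sequence boils down to something like
this (the pool name is just a placeholder; the import really does sit there, so run
the other commands from a second shell):
zpool import -f pool    # never returns for me, so leave it running
zpool list              # from another terminal - the pool is now visible
zpool status pool       # shows the vdevs and their state
zfs list                # the file systems show up here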
James
* For NTFS 4kB clusters on VMWare / NFS, I believe a 4kB zfs recordsize will
provide the best performance (avoiding partial writes) - see the sketch just below.
Thoughts welcome on that too.
** Assumes 10k SAS can do a max of 900 sequential writes each, striped across 12
mirrors and rounded down (900 based on TomsHardware
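The recordsize change itself is trivial; a sketch with made-up pool/dataset names
(recordsize only applies to blocks written after it is set, so set it before the
VMs are copied on):
zfs create -o recordsize=4k tank/vmware_nfs   # hypothetical names
zfs set sharenfs=on tank/vmware_nfs           # export it to the ESXi hosts
zfs get recordsize tank/vmware_nfs            # confirm it took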
Chris & Eff,
Thanks for your expertise on this and other posts. Greatly appreciated. I've
just been re-reading some of the great SSD-as-ZIL discussions.
Chris,
Cost: Our case is a bit non-representative as we have spare P410/512s that
came with ESXi hosts (USB boot), so I've budgeted them at
G'day All.
I’m trying to select the appropriate disk spindle speed for a proposal and
would welcome any experience and opinions (e.g. has anyone actively chosen
10k/15k drives for a new ZFS build and, if so, why?).
This is for ZFS over NFS for VMWare storage, i.e. primarily random 4kB read/
Edward,
Thanks for the reply.
Good point on platter density. I'd considered the benefit of lower
fragmentation but not the possible increase in sequential IOPS due to density.
I assume that while a 2TB 7200rpm drive may have better sequential IOPS than a
500GB, it will not be double and therefore
Thanks Richard & Edward for the additional contributions.
I had assumed that the "maximum sequential transfer rates" on datasheets (btw,
those are the same for differing-capacity Seagates) were based on large block
sizes, and that a ZFS 4kB recordsize* would mean much lower IOPS, e.g. Seagate
Constell
sed on
Nevada build 99) and still no luck.
Please advise,
James
Hello,
I am having a problem importing a pool in 2009.06 that was created on zfs-fuse
(ubuntu 8.10).
Basically, I was having issues with a controller, and took a disk offline.
After restarting with a new controller, I was unable to import the pool (in
ubuntu). Someone had suggested that I try
Hi there,
I think I have managed to confuse myself, so I am asking outright hoping for a
straight answer.
First, my situation. I have several disks of varying sizes I would like to run
as redundant storage in a file server at home. Performance is not my number one
priority, largest capacity pos
ed) exceeds memory, your
performance degrades exponentially, probably before that.
James Dickens
http://uadmin.blogspot.com
> I.e., I am not using any snapshots and have also turned off automatic
> snapshots because I was bitten by system hangs while destroying datasets
> with living s
please post the output of zpool status -v.
Thanks
James Dickens
On Fri, Mar 5, 2010 at 3:46 AM, Abdullah Al-Dahlawi wrote:
> Greetings All
>
> I have created a pool that consists of a hard disk and an SSD as a cache
>
> zpool create hdd c11t0d0p3
> zpool add hdd cache c8
ly what to keep and
what to throw away.
James Dickens
http://uadmin.blogspot.com
On Sat, Mar 6, 2010 at 2:15 AM, Abdullah Al-Dahlawi wrote:
> hi James
>
>
> here is the output you've requested
>
> abdul...@hp_hdx_16:~/Downloads# zpool status -v
> pool: hdd
>
I'm evaluating a system with an Adaptec 52445 RAID HBA, and
>> the driver supplied by OpenSolaris doesn't support JBOD drives.
FYI, there is a bug report open for this issue:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=686
On 6/04/10 11:47 PM, Willard Korfhage wrote:
Yes, I was hoping to find the serial numbers. Unfortunately, it doesn't
show any serial numbers for the disks attached to the Areca RAID card.
You'll need to reboot and go into the card bios to
get that information.
James C. McPherson
ted since September 15th,
2008 (build 99).
What do you mean by overpromised and underdelivered?
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a
failed drive. zpool status shows that the pool is in a DEGRADED state.
I want syslog to log these types of ZFS errors. I have syslog running and
logging all sorts of errors to a log server. But this failed disk in the ZFS pool
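In case it helps anyone answering: this is roughly where I have been looking so far.
The selectors and module names are the stock ones as far as I know, and the log
server name is made up, so treat this as a sketch rather than a recipe:
fmadm faulty                      # does FMA see the pulled disk at all?
fmdump -v                         # fault log entries with timestamps
fmadm config | grep syslog-msgs   # the fmd agent that forwards faults to syslog
# /etc/syslog.conf (selector and action are tab-separated):
daemon.notice       @logserver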
Thanks for the clue.
Still not successful, but some hope is there.
some of the error messages are generated only once.
Joji James
ctual* problem with OpenSolaris binary distro
as a base for a NAS system?
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
ol/dsk/rpool/puddle_slog ONLINE 0 0 0
zfs list -rt volume puddle
NAME               USED  AVAIL  REFER  MOUNTPOINT
puddle/l2arc      8.25G   538G  7.20G  -
puddle/log_test   1.25G   537G  1.25G  -
puddle/temp_cache 4.13G   537G  4.00G  -
James Dickens
http://uadmin.blogspot.com
On Fri, Jul 2, 2010 at 1:18 AM, Ray Van Dolson wrote:
> We have a server with a couple X-25E's and a bunch of larger SATA
> disks.
>
> To save space, we want to install Solaris 10 (our install is only about
> 1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL
> attached to a z
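If I follow the plan, the end state looks something like this (device names and the
slice number are invented, and it assumes the Solaris 10 update in question supports
separate log devices):
zpool create data raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0   # the big SATA disks
zpool add data log mirror c1t0d0s7 c1t1d0s7                   # leftover X-25E slices as a mirrored slog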
20 hours it just took you to fix that machine could
have been 2 hours if it had a service contract. Doesn't take too long
for that kind of math to blow out any savings whiteboxes may have had.
Worst case, someone goes and buys Dell. :-)
--
James Litchfield | Senior Consultant
Ph
+1
On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski wrote:
>
> fyi
>
> --
> Robert Milkowski
> http://milek.blogspot.com
>
>
> Original Message
> Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
> Date: Mon, 26 Jul 2010 08:38:22 -0600
> From: Tim Haley
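If the case integrates as described, the end result should look something along
these lines (pool name made up, and I am assuming the flag ends up being -m):
zpool import -m tank    # accept that the separate log device is gone
zpool status tank       # the missing slog should show up as unavailable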
want.
You need to clone a filesystem per guest because ZFS can only roll back whole
filesystems, not individual files. Your VM solution may have finer-grained
controls for its own snapshots, but those don't use ZFS's abilities.
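A rough sketch of the per-guest layout (all names invented):
zfs snapshot tank/vm/goldimage@clean                # master image
zfs clone tank/vm/goldimage@clean tank/vm/guest01   # one filesystem per guest
zfs snapshot tank/vm/guest01@before-patch
zfs rollback tank/vm/guest01@before-patch           # rolls back only that guest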
James Dickens
uadmin.blogspot.com
forced to stick with Samba for our CIFS needs.
cheers,
James
existing
data to a new device using functions from the device removal modifications. I
could be wrong, but it may not be as far away as people fear. Device removal was
mentioned in the Next Word for ZFS video.
James Dickens
http://uadmin.blogspot.com
jamesd...@gmail.com
>
>
> --
> Erik Trimble
evelopment branches - should be available to the public in about 3-3.5
weeks' time. Plenty of instructions on how to do this on the net and
in this list.
For Solaris, you need to wait for the next update release.
cheers,
James
for gam_server / gamin.
$ nm /usr/lib/gam_server | grep port_create
[458] | 134589544| 0|FUNC |GLOB |0|UNDEF |port_create
The patch for port_create has never gone upstream, however, while gvfs uses
glib's gio, which has backends for inotify, solari
Last I checked, iSCSI volumes go directly to the primary storage and not via the
slog device.
Can anybody confirm that this is the case, whether there is a mechanism/tuneable to
force them via the slog, and whether there is any benefit/point in this for most cases?
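For what it's worth, on builds recent enough to have the per-dataset sync property
(an assumption on my part), forcing the issue would look like this, with a made-up
zvol name:
zfs set sync=always tank/iscsivol01   # push every write through the ZIL, and so the slog
zfs get sync tank/iscsivol01          # verify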
I have a 9 drive system (four mirrors of two disks and one hot spare) with a
10th SSD drive for ZIL.
The ZIL is corrupt.
I've been unable to recover using FreeBSD 8, Opensolaris x86, and using logfix
(http://github.com/pjjw/logfix)
In FreeBSD 8.0RC3 and below (uses v13 ZFS):
1) Boot Single Use
It was created on AMD64 FreeBSD with 8.0RC2 (which was version 13 of ZFS, IIRC).
At some point I knocked it out (exported it) somehow; I don't remember doing so
intentionally. So I can't run commands like zpool replace since there are no
pools.
It says it was last used by the FreeBSD box, but the Fre
the claims are meaningless.
http://mail.opensolaris.org/pipermail/opensolaris-help/2009-November/015824.html
--
James Andrewartha
ing on your rpool? (at install time, or if you
need to migrate your rpool to new media)
cheers,
James
A majority of the time when the server is rebooted I get this on a zpool:
  pool: ipapool
 state: FAULTED
status: An intent log record could not be read.
        Waiting for administrator intervention to fix the faulted pool.
action: Either restore the affected device(s) and run 'zpool online',
devzero: when you have an exported pool with no log disk and you want to mount
the pool.
Here is the changes to make it compile on dev-129:
--- logfix.c.2009-04-26 2009-12-18 11:39:40.917435361 -0800
+++ logfix.c2009-12-18 12:19:27.507337246 -0800
@@ -20,6 +20,7 @@
#include
#include
+#i
Written by jktorn:
>Have you tried build 128 which includes pool recovery support?
>
>This is because FreeBSD hostname (and hostid?) is recorded in the
>labels along with active pool state.
>
>It does not work that way at the moment, though readonly import is
>quite useful option that can be tried.
If you are asking whether anyone has experienced two drive failures
simultaneously, the answer is yes.
It has happened to me (at home) and to at least one client that I can
remember. In both cases, I was able to dd off one of the failed disks (the one with
just bad sectors, or fewer bad sectors) and recons
ttabbal:
If I understand correctly, raidz{1} is 1-drive protection and the available space
is (drives - 1). Raidz2 is 2-drive protection and the space is (drives - 2),
etc. Same for raidz3 being 3-drive protection.
From everything I've seen you should stay around 6-9 drives per raidz vdev, so don't
do
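To put numbers on it (hypothetical 1TB drives and device names):
# 8 x 1TB drives in a single raidz2 vdev:
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
# usable space ~ (8 - 2) x 1TB = 6TB, and any two drives can fail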
Can you post a "zpool import -f" for us to see?
One thing I ran into recently is that if the drive arrangement was changed
(like drives swapped) it can't recover. I moved an 8-drive array recently, and
didn't worry about the order of the drives. It could not be mounted without
reordering the
galenz: "I am on different hardware, thus I cannot restore the drive
configuration exactly."
Actually, you can learn most of it, if not all of it you need.
Do "zpool import -f" with no pool name and it should dump the issue with the
pool (what is making it fail.) If that doesn't contain privi
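The usual discovery sequence, for reference (pool name made up):
zpool import              # scans /dev/dsk and lists every importable pool and its state
zpool import -d /dev/dsk  # or point it at a specific device directory
zpool import -f tank      # only force the import by name once the listing looks sane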
re not involved in the boot process
though)
HTH,
James
0 4 0 7 2 0 0 2 8.00M zfs
James Dickens
On Thu, Dec 24, 2009 at 11:22 PM, Michael Herf wrote:
> FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be
> running visibly faster (somewhere around 3-5x faster).
>
> echo zfs_prefetch_
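For anyone searching the archives later, the usual knobs are, as far as I know, the
following; the mdb lines change the running kernel, the set line is the /etc/system
entry that makes it stick across reboots:
echo "zfs_prefetch_disable/W0t1" | mdb -kw    # disable prefetch on the live kernel
echo "zfs_prefetch_disable/D" | mdb -k        # read the current value back
set zfs:zfs_prefetch_disable = 1              # /etc/system, for the next boot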
Not sure of your experience level, but did you try running devfsadm and
then checking in format for your new disks?
James Dickens
uadmin.blogspot.com
On Sun, Dec 27, 2009 at 3:59 AM, Muhammed Syyid wrote:
> Hi
> I just picked up one of these cards and had a few questions
> After i
t. You probably need a newer
version of Solaris, but I cannot tell you if any newer versions support later
zfs versions.
This forum is for OpenSolaris support. You should contact your Solaris support
provider for further help on this matter.
cheers,
drives and prstat shows that my processor and
memory aren't a bottleneck. What could cause such a marked decrease in
throughput? Is anyone else experiencing similar effects?
Thanks,
James
ust be specified by a full path.
Could it be that "discouraged" and "experimental" mean "not tested as
thoroughly as you might like, and certainly not a good idea in any sort
of production environment?"
It sounds like a bug, sure, but the fix might be to remove the o
Mike Gerdts wrote:
> This unsupported feature is supported with the use of Sun Ops Center
> 2.5 when a zone is put on a "NAS Storage Library".
Ah, ok. I didn't know that.
--
James Carlson 42.703N 71.076W
On Fri, Jan 8, 2010 at 1:44 PM, Ian Collins wrote:
> James Lee wrote:
>
>> I haven't seen much discussion on how deduplication affects performance.
>> I've enabled dedup on my 4-disk raidz array and have seen a significant
>> drop in write throughput, from
On 01/08/2010 02:42 PM, Lutz Schumann wrote:
> See the reads on the pool with the low I/O ? I suspect reading the
> DDT causes the writes to slow down.
>
> See this bug
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566.
> It seems to give some backgrounds.
>
> Can you test sett
No, sorry Dennis, this functionality doesn't exist yet. It is being worked on,
but it will take a while; there are lots of corner cases to handle.
James Dickens
uadmin.blogspot.com
On Sun, Jan 10, 2010 at 3:23 AM, Dennis Clarke wrote:
>
> Suppose the requirements for storage shrink ( it can hap
Yes, send and receive will do the job. See the zfs manpage for details.
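A minimal sketch of that migration, with invented pool, dataset and host names:
zfs snapshot mediapool/video@move
zfs send mediapool/video@move | ssh san1 zfs receive -d tank
# later, catch up with just the changes since the first pass:
zfs send -i @move mediapool/video@move2 | ssh san1 zfs receive -d tank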
James Dickens
http://uadmin.blogspot.com
On Mon, Feb 15, 2010 at 11:56 AM, Tiernan OToole wrote:
> Good morning all.
>
> I am in the process of building my V1 SAN for media storage in house, and i
> am already thin
hot spares to the system should
one fail. If you are truly paranoid, a 3-way mirror can be used. Then you can
lose 2 disks without a loss of data.
Spread disks across multiple controllers, and get disks from different
companies and different lots to lessen the likelihood of getting hit by a bad
batch takin
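For example (hypothetical controllers and targets), a 3-way mirror plus a hot spare
looks like this:
zpool create tank mirror c0t0d0 c1t0d0 c2t0d0 spare c3t0d0   # any 2 of the 3 mirror disks can fail
zpool add tank spare c4t0d0                                  # more spares can be added later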
/dev/dsk/c0d1s0  9.8G  10M  9.7G  1%  /test*
>
the act of deleting files in UFS simply makes a few accounting changes to the
filesystem and thus has no effect on the blocks in the ZFS volume; in some cases
it could actually make the zvol space grow. The only possible way to have ZFS
g, our SAN guys
don't really understand ZFS or else I would have made the pool redundant
in the first place.
Thanks,
James
[1] starlight ~ # zdb -l /dev/dsk/c4t0d0s0
LABEL 0
version=22
n
The value of zfs_arc_min specified in /etc/system must be over 64MB
(0x4000000 bytes).
Otherwise the setting is ignored. The value is in bytes, not pages.
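A minimal /etc/system sketch (takes effect at the next boot; runtime changes need
mdb and are a separate exercise):
* value is in bytes; 0x4000000 = 64MB, the smallest value that is honoured
set zfs:zfs_arc_min = 0x4000000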
Jim
---
On 10/6/11 05:19 AM, Frank Van Damme wrote:
Hello,
quick and stupid question: I'm breaking my head over how to tune
zfs_arc_min on a runn
On 10/07/2011 11:02 AM, James Lee wrote:
> Hello,
>
> I had a pool made from a single LUN, which I'll call c4t0d0 for the
> purposes of this email. We replaced it with another LUN, c4t1d0, to
> grow the pool size. Now c4t1d0 is hosed and I'd like to see about
> rec
Subscribe
--
ORACLE
James Cypcar | Solaris and Network Domain, Global Systems Support
Oracle Global Customer Services
Log, update, and monitor your Service Request online
using https://support.oracle.com
inline
On 07/02/12 15:00, Nico Williams wrote:
On Mon, Jul 2, 2012 at 3:32 PM, Bob Friesenhahn
wrote:
On Mon, 2 Jul 2012, Iwan Aucamp wrote:
I'm interested in some more detail on how the ZFS intent log behaves for
updates done via a memory-mapped file - i.e. will the ZIL log updates done
to an m
Agreed - msync/munmap is the only guarantee.
On 07/ 3/12 08:47 AM, Nico Williams wrote:
On Tue, Jul 3, 2012 at 9:48 AM, James Litchfield
wrote:
On 07/02/12 15:00, Nico Williams wrote:
You can't count on any writes to mmap(2)ed files hitting disk until
you msync(2) with MS_SYNC. The s
s where it goes 2-5k reads and I'm
seeing 20-80% l2arc hits. These have been running for about a week and, given
my understanding of how L2ARC fills, I'd suggest maybe leaving it to warm up
longer (e.g. 1-2 weeks?)
caveat: I'm a complete newbie to zfs so I could be
I’m testing the new online zpool expansion feature of Solaris 10 9/10. My
zpool was created using the entire disk (i.e. no slice number was used). When I
resize my LUN on our SAN (an HP-EVA4400) the EFI label does not change.
On the zpool, I have autoexpand=on, and I’ve tried using zpool online
it to the zpool to create a mirror, then detach the
old smaller device. Then run zpool online -e to actually expand the zpool.
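Spelling that out with made-up device names (autoexpand can stay off; the online -e
at the end does the growing):
zpool attach tank c4t0d0 c4t1d0    # c4t1d0 = the new, larger LUN
zpool status tank                  # wait for the resilver to complete
zpool detach tank c4t0d0           # drop the old, smaller LUN
zpool online -e tank c4t1d0        # expand the pool into the new space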
James.
Anyone know what this means? After a scrub I apparently have an error in a
file name that I don't understand:
zpool status -v pumbaa1
pool: pumbaa1
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be
A reboot and then another scrub fixed this. The reboot alone made no difference;
after the reboot I started another scrub and now the pool shows clean.
So the sequence was like this:
1. zpool reported ioerrors after a scrub with an error on a file in a snapshot
2. destroyed the snapshot with the err
Is there an optimal method of making a complete copy of a ZFS filesystem, aside
from the conventional methods (tar, cpio)?
We have an existing ZFS that was not created with the optimal recordsize.
We wish to create a new ZFS with the optimal recordsize (8k), and copy
all the data from the existing ZFS to th
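One hedged sketch of the approach (names invented). As far as I know zfs
send/receive preserves the original block sizes, so the copy has to be file-level
for the new recordsize to take effect:
zfs create -o recordsize=8k pool/newfs
rsync -a /pool/oldfs/ /pool/newfs/   # any file-level copy rewrites the data at 8k records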
After some errors were logged about a problem with a ZFS file system,
I ran zpool status followed by zpool status -v...
# zpool status
pool: ehome
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore
re:
http://blogs.sun.com/edp/entry/moving_from_nevada_and_live
--
James Carlson, Solaris Networking <[EMAIL PROTECTED]>
Sun Microsystems / 35 Network Drive        71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677
ess
> by calculating the best theoretical correct speed (which should be
> really slow, one write per disc spin)
>
> this has been on my TODO list for ages.. :(
Does the perl script at http://brad.livejournal.com/2116715.html do what you
want?
--
James Andrewartha
08-June/048457.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048550.html
--
James Andrewartha
I have an X4500 fileserver (NFS, Samba) running OpenSolaris 2008.05 pkg
upgraded to snv_91 with ~3200 filesystems (and ~27429 datasets, including
snapshots).
I've been encountering some pretty big slow-downs on this system when running
certain zfs commands. The one causing me the most pain at
A nit on the nit...
cat does not use mmap for files <= 32K in size. For those files
it's a simple read() into a buffer and write() it out.
Jim
---
Chris Gerhard wrote:
> A slight nit.
>
> Using cat(1) to read the file to /dev/null will not actually cause the data
> to be read thanks to the mag
I believe the answer is in the last email in that thread. hald doesn't offer
the notifications and it's not clear that ZFS can handle them. As is noted,
there are complications with ZFS due to the possibility of multiple disks
comprising a volume, etc. It would be a lot of work to make it work
corr
I've tried using S10 U6 to reinstall the boot file (instead of U5) over
jumpstart as it's an LDOM, and noticed another error.
Boot device: /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED] File and
args: -s
Requesting Internet Address for 0:14:4f:f9:84:f3
boot: cannot open kernel/sparcv9/unix
ix]
Help needed. Any ideas?
Thanks,
James
Another update - instead of net booting to recovery, I tried adding the ISO to
the primary LDOM and attaching it to the LDOM to run installboot again from an S10
U6 DVD ISO. I have returned to my first error message:
{0} ok boot /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED] -s
SPARC Enterpr
When installing the 137137-09 patch it ran out of / space, just like
http://www.opensolaris.org/jive/thread.jspa?threadID=82413&tstart=0
However, I tried the 6 recovery steps and they didn't work.
I just rebuilt the LDOM and attached the LDOM image files from the old system
and did a zpool import to re
o idea where to even begin researching VSS unfortunately...
James
(Sent from my mobile)
-Original Message-
From: Tim
Sent: Wednesday, 07 Jan 2009 23:18
To: Jason J. W. Williams
Cc: zfs-discuss@opensolaris.org; storage-disc...@opensolaris.org
Subject: Re: [storage-discuss] [zfs-dis
Hi all,
I moved from Sol 10 Update 4 to Update 6.
Before doing this I exported both of my zpools, and replaced the discs
containing the UFS root with two new discs (these discs did not have any
zpool/zfs info and are RAID-mirrored in hardware).
Once I had installed Update 6 I did a zpool impor
Looking at format, it is missing 12 discs!
Which is, probably not surprisingly, the number of discs in the external storage
controller.
The other present discs have moved to c2 from c0.
The driver is the same for both sets of discs (it is the HP CPQary3 driver) and the
external storage is on the same con
known issue? I've seen this 5 times over the past few days. I think
these were, for the most part, BFUs on top of B107, on x86.
# pstack fmd.733
core 'fmd.733' of 733:/usr/lib/fm/fmd/fmd
- lwp# 1 / thread# 1
fe8c3347 libzfs_fini (0, fed9e000, 8047d08, fed749
ion-and-the-zero-length-file-problem/
http://lwn.net/Articles/323169/
http://mjg59.livejournal.com/108257.html http://lwn.net/Articles/323464/
http://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/
http://lwn.net/Articles/323752/ *
http://lwn.net/Articles/322823/ *
* are currently subscriber-only,
POSIX has a Synchronized I/O Data (and File) Integrity Completion
definition (line 115434 of the Issue 7 (POSIX.1-2008) specification).
What it says is that writes for a byte range in a file must complete before
any pending reads for that byte range are satisfied.
It does not say that if you
rvell SAS
driver for Solaris at all, so I'd say it's not supported.
http://www.hardforum.com/showthread.php?t=1397855 has a fair few people
testing it out, but mostly under Windows.
--
James Andrewartha
> What you could do is to write a program which calls
> efi_use_whole_disk(3EXT) to re-write the label for you. Once you have a
> new label you will be able to export/import the pool
Awesome..
Worked for me, anyways. .C file attached
Although I did a "zpool export" before opening the device
There are 32-bit and 64-bit versions of the file system module
available on x86. Given the quality of the development team, I'd be *very*
surprised if such issues as suggested in your message exist.
Jurgen's comment highlights the major issue - the lack of space to
cache data when in 32-bit mode.
ur backups?
cheers,
James
when
using those drives as ZILs.
Are you planning on using these drives as primary data storage and ZIL
for the same volumes or as primary storage for (say) your rpool and
ZIL for a data pool on spinning metal?
cheers,
James
On 25/06/2009, at 5:16 AM, Miles Nordin wrote:
and mpt is the 1068 driver, proprietary, works on x86 and SPARC.
then there is also itmpt, the third-party-downloadable closed-source
driver from LSI Logic, dunno much about it but someone here used it.
I'm confused. Why do you say the mpt dr
to the write operations.
I'm not sure where to go from here, these results are appalling (about
3x the time of the old system with 8x 10kRPM spindles) even with two
Enterprise SSDs as separate log devices.
cheers,
James
t a good, valid test to
measure the IOPS of these SSDs?
cheers,
James
NFSv3 so far for these tests as it is widely
regarded as faster, even though less functional.
cheers,
James
tialy. So I'd suggest running the test from a lot of
clients simultaneously
I'm sure that it will be a more performant system in general, however,
it is this explicit set of tests that I need to maintain or improve
performance on.
cheers,
James
developers here had explicitly performed tests to check these same
assumptions and found no evidence that the Linux/XFS sync
implementation was lacking, even though there were previous issues
with it in one kernel revision.
cheers,
James
ther tests and compare linux/XFS and perhaps remove LVM
(though, I don't see why you would remove LVM from the equation).
cheers,
James
insightful observations?
cheers,
James
is much faster for deletes.
cheers,
James
bash-3.2# cd /nfs/xfs_on_LVM
bash-3.2# ( date ; time tar xf zeroes-10k.tar ; date ; time rm -rf
zeroes/ ; date ) 2>&1
Sat Jul 4 15:31:13 EST 2009
real    0m18.145s
user    0m0.055s
sys     0m0.500s
Sat Jul 4 15:31:31 EST 2009
real    0m4.585
u have any methods to "correctly" measure the performance of an
SSD for the purpose of a slog and any information on others (other
than anecdotal evidence)?
cheers,
James
id controller w/ BBWC
enabled and the cache disabled on the HDDs? (i.e. correctly configured
for data safety)
Should a correctly performing raid card be ignoring barrier write
requests because it is already on stable storage?
cheers,
James