Why didn't this command just fail?
># zpool add tank c4t0d0
>invalid vdev specification
>use '-f' to override the following errors:
>mismatched replication level: pool uses raidz and new vdev is disk
I did not use '-f' and yet my configuration was changed. That was unexpected
behaviour.
Thanks
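(Aside for anyone hitting the same thing: zpool add has a dry-run flag that previews the resulting layout without touching the pool. A minimal sketch, reusing the pool/disk names above:)

  # preview only; nothing is added to the pool
  zpool add -n tank c4t0d0
  # if the printed layout is what you want, rerun without -n
  # (or with -f to knowingly override the mismatched-replication warning)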
On Dec 5, 2007, at 17:50, can you guess? wrote:
>> my personal-professional data are important (this is
>> my valuation, and it's an assumption you can't
>> dispute).
>
> Nor was I attempting to: I was trying to get you to evaluate ZFS's
> incremental risk reduction *quantitatively* (and if yo
apologies in advance for prolonging this thread .. i had considered
taking this completely offline, but thought of a few people at least
who might find this discussion somewhat interesting .. at the least i
haven't seen any mention of Merkle trees yet as the nerd in me yearns
for
On Dec 6, 2007, at 00:03, Anton B. Rang wrote:
>> what are you terming as "ZFS' incremental risk reduction"?
>
> I'm not Bill, but I'll try to explain.
>
> Compare a system using ZFS to one using another file system -- say,
> UFS, XFS, or ext3.
>
> Consider which situations may lead to data los
The man page gives this form:
zpool create [-fn] [-R root] [-m mountpoint] pool vdev ...
however, lower down, there is this command:
# zpool create mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
Isn't the "pool" element missing in the command?
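(Yes, it appears to be. For comparison, a sketch of that command with the pool name supplied; "tank" is just a placeholder:)

  # the pool name comes right after "create"; everything after it is a vdev spec
  zpool create tank mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0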
SunOS 5.10 Last change: 25 Apr 2006
Yes, I see that my other server is more up to date.
SunOS 5.10 Last change: 13 Feb 2007
This one was recently installed.
Is there a patch that was not included with 10_Recommended?
mis _HOLD_ # cat /etc/release
Solaris 10 6/06 s10s_u2wos_09a SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 09 June 2006
mis _HOLD_ #
of the iSCSI ethernet interfaces. It certainly appears
> to be doing round-robin. The I/O are going to the same disk devices,
> of course, but by two different paths. Is this a correct configuration
> for ZFS? I assume it's safe, but I thought I should check.
Gary Mills wrote:
On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote:
This is the same configuration we use on 4 separate servers (T2000, two
X4100, and a V215). We do use a different iSCSI solution, but we have
the same multi path config setup with scsi_vhci. Dual GigE
Jonathan Loran wrote:
Gary Mills wrote:
On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote:
This is the same configuration we use on 4 separate servers (T2000, two
X4100, and a V215). We do use a different iSCSI solution, but we have
the same multi path config setup with
duced. Moving
large file stores between zfs file systems would be so handy! From my
own sloppiness, I've suffered dearly from the lack of it.
Jon
On Dec 29, 2007, at 2:33 AM, Jonathan Loran wrote:
> Hey, here's an idea: We snapshot the file as it exists at the time of
> the mv in the old file system until all referring file handles are
> closed, then destroy the single file snap. I know, not easy to
> implement, but th
Joerg Schilling wrote:
Jonathan Edwards <[EMAIL PROTECTED]> wrote:
since in the current implementation a mv between filesystems would
have to assign new st_ino values (fsids in NFS should also be
different), all you should need to do is assign new block pointers in
the new side
Joerg Schilling wrote:
Carsten Bormann <[EMAIL PROTECTED]> wrote:
On Dec 29 2007, at 08:33, Jonathan Loran wrote:
We snapshot the file as it exists at the time of
the mv in the old file system until all referring file handles are
closed, then destroy the single file snap.
worse yet, run windoz in
a VM. Hardly practical. Why is it we always have to be second class
citizens! Power to the (*x) people!
Jon
our data. I
bought a 4-port LSI SAS card (yes, a bit pricey) and have had zero problems
since, and hot swap actually works. I never tried it with the 3114 I had;
I'd just never seen it actually working before, so I was quite pleasantly
surprised.
Jonathan
SATA drives on mine without a problem. As
far as which ones are supported by Solaris someone else will have to
answer as I actually use ZFS on FreeBSD. SATA controllers are usually
less expensive than SAS controllers of course.
Jonathan
ZIL off to see how my NFS on ZFS performance is affected before spending
the $'s. Anyone know when we will see this in Solaris 10?
Thanks,
Jon
Neil Perrin wrote:
>
>
> Roch - PAE wrote:
>> Jonathan Loran writes:
>> > > Is it true that Solaris 10 u4 does not have any of the nice ZIL
>> controls > that exist in the various recent Open Solaris flavors? I
>> would like to > move my ZIL t
using fast SSD for the ZIL when it comes to Solaris 10 U? as a preferred
method.
Jon
The irony is that the
requirement for this very stability is why we haven't seen the features
in the ZFS code we need in Solaris 10.
Thanks,
Jon
Mike Gerdts wrote:
On Jan 30, 2008 2:27 PM, Jonathan Loran <[EMAIL PROTECTED]> wrote:
Before ranting any more, I'll do the test of disablin
Richard Elling wrote:
Nick wrote:
Using the RAID card's capability for RAID6 sounds attractive?
Assuming the card works well with Solaris, this sounds like a
reasonable solution.
Careful here. If your workload is unpredictable, RAID 6 (and RAID 5,
for that matter) will break down under highly randomized write loads.
Anton B. Rang wrote:
Careful here. If your workload is unpredictable, RAID 6 (and RAID 5,
for that matter) will break down under highly randomized write loads.
Oh? What precisely do you mean by "break down"? RAID 5's write performance is
well-understood and it's used successfully in
Hi List,
I'm wondering if one of you expert DTrace gurus can help me. I want to
write a DTrace script to print out a histogram of how long IO requests
sit in the service queue. I can output the results with the quantize
method. I'm not sure which provider I should be using for this. Doe
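(Not a definitive answer, but one common starting point is the io provider, keying on the buffer pointer and aggregating start-to-done latency with quantize(); note this measures the whole time from issue to completion, which includes device queueing. A rough sketch:)

  dtrace -n '
  io:::start { ts[arg0] = timestamp; }
  io:::done /ts[arg0]/ {
          @lat["I/O latency (ns)"] = quantize(timestamp - ts[arg0]);
          ts[arg0] = 0;
  }'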
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
...
I know, I know, I should have gone with a JBOD setup, but it's too late for
that in this iteration of this server. When we set this up, I had the gear
already, and it's not in my budget to get new stuff right now.
What kind of arra
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
more on random I/O. The s
up for the VFS layer.
>
> I'd also check syscall latencies - it might be too obvious, but it can be
> worth checking (eg, if you discover those long latencies are only on the
> open syscall)...
>
> Brendan
>
>
>
[EMAIL PROTECTED] wrote:
On Tue, Feb 12, 2008 at 10:21:44PM -0800, Jonathan Loran wrote:
Thanks for any help anyone can offer.
I have faced a similar problem (although not exactly the same) and was going to
monitor the disk queue with dtrace but couldn't find any docs/urls abo
Uwe Dippel wrote:
> [i]google found that solaris does have file change notification:
> http://blogs.sun.com/praks/entry/file_events_notification
> [/i]
>
> Didn't see that one, thanks.
>
> [i]Would that do the job?[/i]
>
> It is not supposed to do a job, thanks :), it is for a presentation at a
David Magda wrote:
> On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
>
>> In some circles, CDP is big business. It would be a great ZFS offering.
>
> ZFS doesn't have it built-in, but AVS may be an option in some cases:
>
> http://opensolaris.org/os/project/avs
On Feb 27, 2008, at 8:36 AM, Uwe Dippel wrote:
> As much as ZFS is revolutionary, it is far away from being the
> 'ultimate file system', if it doesn't know how to handle event-
> driven snapshots (I don't like the word), backups, versioning. As
> long as a high-level system utility needs to
Quick question:
If I create a ZFS mirrored pool, will the read performance get a boost?
In other words, will the data/parity be read round-robin between the
disks, or do both mirrored sets of data and parity get read off of both
disks? The latter case would have a CPU expense, so I would thi
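(One way to see how the reads are actually being spread, rather than reasoning about it, is to watch per-device statistics while running a read test; a minimal sketch, assuming a pool named tank:)

  # per-vdev read/write ops and bandwidth, refreshed every 5 seconds
  zpool iostat -v tank 5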
Roch Bourbonnais wrote:
>
> Le 28 févr. 08 à 20:14, Jonathan Loran a écrit :
>
>>
>> Quick question:
>>
>> If I create a ZFS mirrored pool, will the read performance get a boost?
>> In other words, will the data/parity be read round robin between the
>
Roch Bourbonnais wrote:
>
> Le 28 févr. 08 à 21:00, Jonathan Loran a écrit :
>
>>
>>
>> Roch Bourbonnais wrote:
>>>
>>> Le 28 févr. 08 à 20:14, Jonathan Loran a écrit :
>>>
>>>>
>>>> Quick question:
>>>>
On Mar 1, 2008, at 3:41 AM, Bill Shannon wrote:
> Running just plain "iosnoop" shows accesses to lots of files, but none
> on my zfs disk. Using "iosnoop -d c1t1d0" or "iosnoop -m /export/
> home/shannon"
> shows nothing at all. I tried /usr/demo/dtrace/iosnoop.d too, still
> nothing.
hi Bill
On Mar 1, 2008, at 4:14 PM, Bill Shannon wrote:
> Ok, that's much better! At least I'm getting output when I touch
> files
> on zfs. However, even though zpool iostat is reporting activity, the
> above program isn't showing any file accesses when the system is idle.
>
> Any ideas?
assuming th
the ZIO pipeline gets filled from the dmu_tx routines (for the whole
pool), i guess it would make the most sense to look at the
dmu_tx_create() entry from vnops (as Jeff already pointed out.)
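(A minimal sketch of watching that entry point with the fbt provider, aggregating by kernel stack to see which vnops are driving it -- treat this as illustrative, not the only way in:)

  dtrace -n 'fbt::dmu_tx_create:entry { @[stack()] = count(); }'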
---
jonathan
with Solaris instead on the SAN box? It's just commodity x86 server
hardware.
My life is ruined by too many choices, and not enough time to evaluate
everything.
Jon
Shawn Ferry wrote:
On Mar 3, 2008, at 2:14 PM, Jonathan Loran wrote:
Now I know this is counterculture, but it's biting me in the back side
right now, and ruining my life.
I have a storage array (iSCSI SAN) that is performing badly, and
requires some upgrades/reconfiguration. I h
Patrick Bachmann wrote:
> Jonathan,
>
> On Mon, Mar 03, 2008 at 11:14:14AM -0800, Jonathan Loran wrote:
>
>> What I'm left with now is to do more expensive modifications to the new
>> mirror to increase its size, or using zfs send | receive or rsync to
>>
Patrick Bachmann wrote:
Jonathan,
On Tue, Mar 04, 2008 at 12:37:33AM -0800, Jonathan Loran wrote:
I'm not sure I follow how this would work.
The keyword here is thin provisioning. The sparse zvol only uses
as much space as the actual data needs. So, if you use a sparse
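(For reference, a minimal sketch of creating such a sparse zvol -- the pool name and size here are placeholders:)

  # -s makes the volume sparse: no reservation, space is consumed only as data is written
  zfs create -s -V 2T tank/sparsevol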
's choice of NFS v4 ACLs. This is the only way to ensure
CIFS compatibility, and it is the way the industry will be moving.
Jon
Robert Milkowski wrote:
Hello Jonathan,
Friday, March 14, 2008, 9:48:47 PM, you wrote:
>
Carson Gaspar wrote:
Bob Friesenhahn wrote:
On Fri, 14 Mar 2008, Bill Shannon wrote:
What's the best way to backup a zfs filesystem to tape, where the size
of the files
On Mar 14, 2008, at 3:28 PM, Bill Shannon wrote:
> What's the best way to backup a zfs filesystem to tape, where the size
> of the filesystem is larger than what can fit on a single tape?
> ufsdump handles this quite nicely. Is there a similar backup program
> for zfs? Or a general tape manageme
On Mar 20, 2008, at 11:07 AM, Bob Friesenhahn wrote:
> On Thu, 20 Mar 2008, Mario Goebbels wrote:
>
>>> Similarly, read block size does not make a
>>> significant difference to the sequential read speed.
>>
>> Last time I did a simple bench using dd, supplying the record size as
>> blocksize to it
On Mar 20, 2008, at 2:00 PM, Bob Friesenhahn wrote:
> On Thu, 20 Mar 2008, Jonathan Edwards wrote:
>>
>> in that case .. try fixing the ARC size .. the dynamic resizing on
>> the ARC
>> can be less than optimal IMHO
>
> Is a 16GB ARC size not considered
Bob Friesenhahn wrote:
> On Tue, 25 Mar 2008, Robert Milkowski wrote:
>> As I wrote before - it's not only about RAID config - what if you have
>> hundreds of file systems, with some share{nfs|iscsi|cifs} enabled with
>> specific parameters, then specific file system options, etc.
>
> Some zfs-re
> This guy seems to have had lots of fun with iSCSI :)
> http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html
>
>
This is scaring the heck out of me. I have a project to create a zpool
mirror out of two iSCSI targets, and if the failure of one of them will
panic my system, that wil
kristof wrote:
> If you have a mirrored iSCSI zpool, it will NOT panic when 1 of the
> submirrors is unavailable.
>
> zpool status will hang for some time, but after, I think, 300 seconds it will
> mark the device as unavailable.
>
> The panic was the default in the past, and it only occurs if all
Vincent Fox wrote:
> Followup, my initiator did eventually panic.
>
> I will have to do some setup to get a ZVOL from another system to mirror
> with, and see what happens when one of them goes away. Will post in a day or
> two on that.
>
>
On Sol 10 U4, I could have told you that. A few
On Apr 9, 2008, at 11:46 AM, Bob Friesenhahn wrote:
> On Wed, 9 Apr 2008, Ross wrote:
>>
>> Well the first problem is that USB cables are directional, and you
>> don't have the port you need on any standard motherboard. That
>
> Thanks for that info. I did not know that.
>
>> Adding iSCSI suppor
Just to report back to the list... Sorry for the lengthy post
So I've tested the iSCSI based zfs mirror on Sol 10u4, and it does more
or less work as expected. If I unplug one side of the mirror - unplug
or power down one of the iSCSI targets - I/O to the zpool stops for a
while, perhaps a
Chris Siebenmann wrote:
> | What your saying is independent of the iqn id?
>
> Yes. SCSI objects (including iSCSI ones) respond to specific SCSI
> INQUIRY commands with various 'VPD' pages that contain information about
> the drive/object, including serial number info.
>
> Some Googling turns up
Luke Scharf wrote:
> Maurice Volaski wrote:
>
>>> Perhaps providing the computations rather than the conclusions would
>>> be more persuasive on a technical list ;>
>>>
>>>
>> 2 16-disk SATA arrays in RAID 5
>> 2 16-disk SATA arrays in RAID 6
>> 1 9-disk SATA array in RAID 5.
>>
>
Bob Friesenhahn wrote:
>> The "problem" here is that by putting the data away from your machine,
>> you lose the chance to "scrub"
>> it on a regular basis, i.e. there is always the risk of silent
>> corruption.
>>
>
> Running a scrub is pointless since the media is not writeable. :-)
>
>
Bob Friesenhahn wrote:
> On Tue, 22 Apr 2008, Jonathan Loran wrote:
>>>
>> But that's the point. You can't correct silent errors on write once
>> media because you can't write the repair.
>
> Yes, you can correct the error (at time of read) due to
Dominic Kay wrote:
> Hi
>
> Firstly apologies for the spam if you got this email via multiple aliases.
>
> I'm trying to document a number of common scenarios where ZFS is used
> as part of the solution such as email server, $homeserver, RDBMS and
> so forth but taken from real implementations
s, which use an indirect map,
we just use the Solaris map, thus:
auto_home:
*    zfs-server:/home/&
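(For context, a sketch of the server-side layout that wildcard map assumes -- one ZFS filesystem per user under /home on zfs-server, shared over NFS; the pool and user names are placeholders:)

  zfs create tank/home
  zfs set mountpoint=/home tank/home
  zfs set sharenfs=rw tank/home        # children inherit the share
  zfs create tank/home/alice           # automounts as zfs-server:/home/alice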
Sorry to be so off (ZFS) topic.
Jon
Hi List,
First of all: S10u4 120011-14
So I have a weird situation. Earlier this week, I finally mirrored up
two iSCSI based pools. I had been wanting to do this for some time,
because the availability of the data in these pools is important. One
pool mirrored just fine, but the other po
Jonathan Loran wrote:
> Since no one has responded to my thread, I have a question: Is zdb
> suitable to run on a live pool? Or should it only be run on an exported
> or destroyed pool? In fact, I see that it has been asked before on this
> forum, but is there a users
have
all of the dmu_zfetch() logic in that instead of in-line with the
original dbuf_read().
Jonathan
PS: Hi Darren!
e-based
access, full history (although it could be collapsed by deleting older
snapshots as necessary), and no worries about stream format changes.
Jonathan
backup disk to the primary system and import it as the new
primary pool.
It's a bit-perfect incremental backup strategy that requires no
additional tools.
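(A minimal sketch of that kind of incremental cycle using only the built-in tools -- pool, dataset, and snapshot names are placeholders:)

  # send only the blocks that changed since the previous snapshot to the backup pool
  zfs snapshot tank/data@tuesday
  zfs send -i tank/data@monday tank/data@tuesday | zfs receive -F backup/data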
Jonathan
base files or large log files. The actual modified/appended
blocks would be sent rather than the whole changed file. This may be
an important point depending on your file modification patterns.
Jonathan
ld presumably expect it to be instantaneous if it was creating
a sparse file. It's not a compressed filesystem though is it? /dev/
zero tends to be fairly compressible ;-)
I think, as someone else pointed out, running zpool iostat at the same
time might
ions.
>
>
Ben,
Haven't read this whole thread, and this has been brought up before, but
make sure your power supply is running clean. I can't tell you how many
times I've seen very strange and intermittent system errors occur from a
ardware and software, but they are all steep on the ROI
curve. I would be very excited to see block level ZFS deduplication
roll out. Especially since we already have the infrastructure in place
using Solaris/ZFS.
Cheers,
Jon
e willing to run it and provide feedback. :)
>
> -Tim
>
>
Me too. Our data profile is just like Tim's: terabytes of satellite
data. I'm going to guess that the d11p ratio won't be fantastic for
us. I sure would like
> Check out the following blog:
>
> http://blogs.sun.com/erickustarz/entry/how_dedupalicious_is_your_pool
>
>
Unfortunately we are on Solaris 10 :( Can I get a zdb for zfs V4 that
will dump those checksums?
Jon
sed upon
block reference count. If a block has few references, it should expire
first, and vice versa: blocks with many references should be the last
out. With all the savings on disks, think how much RAM you could buy ;)
Jon
d your tree
is and what your churn rate is .. we know on QFS we can go up to 100M,
but i trust the tree layout a little better there, can separate the
metadata out if i need to and have planned on it, and know that we've
got some tools to relayout the metadata or dump/restore for
tml
This has the advantage of requiring no other libraries and no compile
phase at all.
Jonathan
1M), fsck(1M),
etc. Given that you use zfs(1M) for all that kind of manipulation,
it seems like this is not a huge deal.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
n reading this file sequentially
> will not be that sequential.
On the other hand, if you are reading the file sequentially, ZFS has
very good read-ahead algorithms.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
You could just make sure your argument always has a '@' in
it:
zfs destroy -s [EMAIL PROTECTED]
Cheers,
- jonathan
> However, 'zfs destroy ' will fail if the filesystem has snapshots
> (presumably most will, if your intent is to destroy a snapshot), which
>
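(A tiny sketch of that guard as a shell wrapper -- the function name is made up; it simply refuses to run zfs destroy unless the argument contains an '@':)

  destroy_snap() {
          case "$1" in
          *@*)    zfs destroy "$1" ;;
          *)      echo "refusing to destroy non-snapshot: $1" >&2; return 1 ;;
          esac
  }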
On Jun 15, 2006, at 06:23, Roch Bourbonnais - Performance Engineering
wrote:
Naively I'd think a write_cache should not help throughput
test since the cache should fill up after which you should still be
throttled by the physical drain rate. You clearly show that
it helps; does anyone know why
It's written each time a transaction group
commits.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
d
0 -> cv_broadcast
0 <- cv_broadcast
0 <- releasef
0 <- ioctl
So the sync happens.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
On Thu, Jun 22, 2006 at 07:46:57PM +0200, Roch wrote:
>
> As I recall, the zfs sync is, unlike UFS, synchronous.
Uh, are you talking about sync(2), or lockfs -f? IIRC, lockfs -f is always
synchronous.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Devel
nied.
>
> I was thinking of caching the {vfs, inode #, gen#, pid} and using that
> to allow such processes to re-open files they _recently_ (the cache
> should have LRU/LFU eviction) opened.
That doesn't seem like a very predictable interface. The security guarantees
are not very s
you could simply drop WRITE entirely, because you never
need to do an open(..., O_WRITE) afterwards.
As with most basic privileges, you need to be careful if you drop it. This
is not a surprise.
Cheers,
- jonathan
> -----Original Message-----
> From: [EMAIL PROTECTED] on behalf of
t the next
> time.
>
> Is this a known issue?
The easiest way to work around it is to turn the zfs mount into a "legacy"
mount, and mount it using vfstab.
zfs set mountpoint=legacy pool/dataset
(add a pool/dataset mount line to /etc/vfstab)
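(A sketch of the corresponding /etc/vfstab entry -- the mount point is a placeholder; fields are device, fsck device, mount point, type, fsck pass, mount-at-boot, options:)

  # /etc/vfstab
  pool/dataset   -   /export/data   zfs   -   yes   -

  # then mount it the legacy way:
  mount /export/data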
- Does ZFS in the current version support LUN extension? With UFS, we have
to zero the VTOC and then adjust the new disk geometry. How does it look
with ZFS?
The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label
may need to be zeroed and reapplied if you set up the initial
On Jun 28, 2006, at 12:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror / RAID5: capacity = (N / 2) - 1
speed << N / 2 - 1
minimum # disks to lose before loss of data:
On Jun 28, 2006, at 17:25, Erik Trimble wrote:
On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote:
On Jun 28, 2006, at 12:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror/ RAID5: capacity = (N /
On Jun 28, 2006, at 18:25, Erik Trimble wrote:
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote:
Which is better - zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3)
To get access to profile-enabled
commands:
$ zfs create pool/aux2
cannot create 'pool/aux2': permission denied
$ pfksh
$ zfs create pool/aux2
$ exit
$
Either set your shell to pf{k,c,}sh, or run it explicitly.
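(If it still fails inside a profile shell, it may be worth confirming the profile is actually assigned; a quick check, with a placeholder username:)

  # list the rights profiles assigned to the user, and the commands each grants
  profiles -l username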
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
can read at 60-70MB/sec. Why am I
not getting 65*8 (500MB/sec+) performance? Maybe it's the marvell driver at
fault here?
My thinking is that I need to get raid0 performing as expected before looking
at raidz, but I'm afraid I really don't know where to begin.
All thought
Richard Elling wrote:
> Dana H. Myers wrote:
>
>> Jonathan Wheeler wrote:
>>
>>> On the one hand, that's greater then 1 disk's worth, so I'm getting
>>> striping performance out of a mirror GO ZFS. On the other, if I can get
>>> s
1) I'm hoping to learn more about zfs &
solaris performance tuning by digging on in and investigating. 2) I have
some notion of hopefully being helpful by providing developers with some
real world data that might help in improving the code. I'm more than
happy to an
> On 7/17/06, Jonathan Wheeler <[EMAIL PROTECTED]>
> wrote:
> > Reads: 4 disks gives me 190MB/sec. WOAH! I'm very
> happy with that. 8 disks should scale to 380 then,
> Well 320 isn't all that far off - no biggie.
> > Looking at the 6 disk raidz is in
On Jun 21, 2006, at 11:05, Anton B. Rang wrote:
My guess from reading between the lines of the Samsung/Microsoft
press release is that there is a mechanism for the operating system
to "pin" particular blocks into the cache (e.g. to speed boot) and
the rest of the cache is used for write
On Jul 30, 2006, at 23:44, Malahat Qureshi wrote:
Does anyone have a comparison between zfs and vxfs? I'm working on a
presentation for my management on this ---
That can be a tough question to answer depending on what you're
looking for .. you could take the feature comparison approach like
On Aug 1, 2006, at 03:43, [EMAIL PROTECTED] wrote:
So what does this exercise leave me thinking? Is Linux 2.4.x really
screwed up in NFS-land? This Solaris NFS replaces a Linux-based NFS
server that the clients (linux and IRIX) liked just fine.
Yes; the Linux NFS server and client work tog
On Aug 1, 2006, at 14:18, Torrey McMahon wrote:
(I hate when I hit the Send button when trying to change windows)
Eric Schrock wrote:
On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
The correct comparison is done when all the factors are taken
into account. Making blank
On Aug 1, 2006, at 22:23, Luke Lonergan wrote:
Torrey,
On 8/1/06 10:30 AM, "Torrey McMahon" <[EMAIL PROTECTED]> wrote:
http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
Look at the specs page.
I did.
This is 8 trays, each with 14 disks and two active Fibre channel
attac
On Aug 2, 2006, at 17:03, prasad wrote:
Torrey McMahon <[EMAIL PROTECTED]> wrote:
Are any other hosts using the array? Do you plan on carving LUNs
out of
the RAID5 LD and assigning them to other hosts?
There are no other hosts using the array. We need all the available
space (2.45TB) on