OK... is all this 3x to 6x potential performance boost still going to hold
true in a single-controller scenario?
Hardware is X4100s (Solaris 10) with a 6-disk raidz on external 3320s.
I seem to remember (wait... checking notes...) correct... the ZFS
filesystem is < 50% capacity.
This info coul
Bart Smaalders wrote:
Ian Collins wrote:
Bart Smaalders wrote:
A 6-disk raidz set is not optimal for random reads, since each disk in
the raidz set needs to be accessed to retrieve each item. Note that if
the reads are single-threaded, this doesn't apply. However, if multiple
reads are extant
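As a rough sketch (the device names are placeholders, and the two commands
are alternatives, not meant to be run together): a pool built from three
2-way mirrors usually serves many concurrent small random reads better than
a single 6-disk raidz, because each mirror vdev can satisfy reads
independently.

  # option A: one 6-disk raidz; each small read touches most of the disks
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  # option B: three 2-way mirrors; random reads spread across three vdevs
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0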
With no visible effects, `dmesg` reports lots of:
kern.warning] WARNING: marvell88sx1: port 3: error in command 0x2f: status 0x51
Seen in snv_62 and OpenSolaris b66; perhaps
http://bugs.opensolaris.org/view_bug.do?bug_id=6539787
Can someone post part of the headers, even if the code is closed?
Ian Collins wrote:
Bart Smaalders wrote:
michael T sedwick wrote:
Given a 1.6TB ZFS Z-Raid consisting of 6 disks:
And a system that does an extreme amount of small (<20K) random
reads (more than twice as many reads as writes)
1) What performance gains, if any, does Z-Raid offer over other RAI
Bart Smaalders wrote:
> michael T sedwick wrote:
>> Given a 1.6TB ZFS Z-Raid consisting of 6 disks:
>> And a system that does an extreme amount of small (<20K) random
>> reads (more than twice as many reads as writes)
>>
>> 1) What performance gains, if any, does Z-Raid offer over other RAID
>> or
Oliver Schinagl wrote:
Hello,
I'm quite interested in ZFS, like everybody else I suppose, and am about
to install FBSD with ZFS.
cool.
On that note, I have a different first question to start with. I
personally am a Linux fanboy, and would love to see/use ZFS on Linux. I
assume that I can us
On Wed, Jun 20, 2007 at 11:16:39AM +1000, James C. McPherson wrote:
> Roshan Perera wrote:
> >
> >I don't think panic should be the answer in this type of scenario, as
> >there is a redundant path to the LUN and hardware RAID is in place inside the
> >SAN. From what I gather there is work being carried o
michael T sedwick wrote:
Given a 1.6TB ZFS Z-Raid consisting of 6 disks:
And a system that does an extreme amount of small (<20K) random reads
(more than twice as many reads as writes)
1) What performance gains, if any, does Z-Raid offer over other RAID or
Large filesystem configurations?
On Tue, Jun 19, 2007 at 07:16:06PM -0700, John Brewer wrote:
> bash-3.00# zpool import
> pool: zones
> id: 4567711835620380868
> state: ONLINE
> status: The pool is formatted using an older on-disk version.
> action: The pool can be imported using its name or numeric identifier, though
>
How do you upgrade from version 5 to version 6? I created this pool under snv_62,
and the zpool called zones worked with snv_b63 and 10u4beta; now under snv_b66 I
get an error and the upgrade option does not work. Any ideas?
bash-3.00# df
/ (/dev/dsk/c0d0s0 ): 6819012 blocks 765336
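A minimal sketch of the usual path, assuming the pool imports cleanly on
the new build (zpool upgrade operates on imported pools, so import it
first; the pool name zones is taken from the listing above):

  zpool import zones       # import the pool under the newer build
  zpool upgrade            # list pools still at an older on-disk version
  zpool upgrade zones      # upgrade this pool to the current version
  # or: zpool upgrade -a   # upgrade every imported pool at once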
Given a 1.6TB ZFS Z-Raid consisting of 6 disks:
And a system that does an extreme amount of small (<20K) random reads
(more than twice as many reads as writes)
1) What performance gains, if any, does Z-Raid offer over other RAID or
Large filesystem configurations?
2) What is any hindrance i
Roshan Perera wrote:
Thanks for all your replies. Lots of info to take back. In this case it
seems like emcp carried out a repair of a path to a LUN, followed by a
panic.
Jun 4 16:30:12 su621dwdb emcp: [ID 801593 kern.notice] Info: Assigned
volume Symm 000290100491 vol 0ffe to
I don't think pan
Hello,
I'm quite interested in ZFS, like everybody else I suppose, and am about
to install FBSD with ZFS.
On that note, I have a different first question to start with. I
personally am a Linux fanboy, and would love to see/use ZFS on Linux. I
assume that I can use those ZFS disks later with any o
What is the best (meaning fastest) way to move a large file system
from one pool to another pool on the same machine? I have a machine
with two pools. One pool currently has all my data (4 filesystems), but it's
misconfigured. Another pool is configured correctly, and I want to move the
file sy
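One common approach, sketched with placeholder names (oldpool, newpool and
the filesystem data): snapshot the source and replicate it with zfs send
piped into zfs recv on the same host, which is usually much faster than a
file-level copy.

  zfs snapshot oldpool/data@move                  # point-in-time source
  zfs send oldpool/data@move | zfs recv newpool/data
  # repeat per filesystem, verify the copies, then destroy the originals:
  # zfs destroy -r oldpool/data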
Thanks for all your replies. Lots of info to take back. In this case it seems
like emcp carried out a repair of a path to a LUN, followed by a panic.
Jun 4 16:30:12 su621dwdb emcp: [ID 801593 kern.notice] Info: Assigned volume
Symm 000290100491 vol 0ffe to
I don't think panic should be the ans
Joe S wrote:
I have a couple of performance questions.
Right now, I am transferring about 200GB of data via NFS to my new
Solaris server. I started this YESTERDAY. When writing to my ZFS pool
via NFS, I notice what I believe to be slow write speeds. My client
hosts range from a MacBook Pro
Correction:
SATA controller is a Silicon Image 3114, not a 3112.
On 6/19/07, Joe S <[EMAIL PROTECTED]> wrote:
I have a couple of performance questions.
Right now, I am transferring about 200GB of data via NFS to my new Solaris
server. I started this YESTERDAY. When writing to my ZFS pool via
I have a very similar setup on OpenSolaris b62: 5 disks in raidz, one on an
onboard SATA port and four on 3112-based ports. I have noticed that although this
card seems like a nice cheap one, it is only two channels, so therein lies a
huge performance decrease. I have thought about getting another car
Victor Engle wrote:
> The best practices guide on opensolaris does recommend replicated
> pools even if your backend storage is redundant. There are at least 2
> good reasons for that. ZFS needs a replica for the self healing
> feature to work. Also there is no fsck like tool for ZFS so it is a
> I also have two trivial questions (just to be sure).
> Do the disks have to be equal in size for RAID-Z?
Not really. But just like most RAID-5 implementations, only an amount
of space equal to the smallest disk (or other storage object) can be used on
each of the components. The extra space on the other
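To make the rule concrete with hypothetical sizes: a 3-disk raidz built
from two 250 GB disks and one 400 GB disk is treated as three 250 GB
devices, so usable space is roughly 2 x 250 GB = 500 GB (one disk's worth
goes to parity) and the extra 150 GB on the larger disk goes unused.

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   # placeholder device names
  zfs list tank                                  # AVAIL reflects ~2 x 250 GB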
I have a couple of performance questions.
Right now, I am transferring about 200GB of data via NFS to my new Solaris
server. I started this YESTERDAY. When writing to my ZFS pool via NFS, I
notice what I believe to be slow write speeds. My client hosts range from
a MacBook Pro running Tiger to
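A rough way to narrow this down, assuming the pool is named tank and
mounted at /tank (both placeholders): compare a local streaming write
against the NFS write rate and watch the pool while the transfer runs. If
local writes are fast but NFS writes crawl, the synchronous write behaviour
of NFS (each write committed through the ZIL) is the usual suspect.

  dd if=/dev/zero of=/tank/ddtest bs=128k count=8192   # ~1 GB local baseline
  zpool iostat -v tank 5        # per-vdev throughput during the NFS copy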
I had the same question last week and decided to take a similar approach.
Instead of a giant raidz of 6 disks, I created 2 raidz's of 3 disks each, so
when I want to add more storage, I just add 3 more disks.
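A minimal sketch of that layout, with placeholder device names:

  # one pool made of two 3-disk raidz vdevs
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c1t3d0 c1t4d0 c1t5d0
  zpool status tank   # shows both raidz vdevs inside the same pool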
On 6/19/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Huitzi,
Yes, you are correct.
I also have two trivial questions (just to be sure).
Do the disks have to be equal in size for RAID-Z?
In a three-disk RAID-Z, can I specify which disk to use for parity?
Paul,
While testing iSCSI targets exported from Thumpers via 10GbE and
imported over 10GbE on T2000s, I am not seeing the throughput I expect,
and more importantly there is a tremendous amount of read IO
happening on a purely sequential write workload. (Note all systems
have Sun 10GbE cards an
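One quick way to see where the reads land (a sketch; the pool name tank is
a placeholder) is to watch per-vdev read and write activity on the target
side while the sequential write runs:

  zpool iostat -v tank 5   # per-vdev read/write ops and bandwidth, every 5s
  iostat -xnz 5            # raw device view, non-zero devices only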
Huitzi,
Yes, you are correct. You can add more raidz devices in the future as
your excellent graphic suggests.
A similar zpool add example is described here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6fu?a=view
This new section describes what operations are supported for both raidz
an
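In the spirit of that example, a minimal sketch with placeholder device
names; zpool add attaches another raidz vdev to an existing pool and new
writes are striped across all vdevs:

  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0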
On Jun 19, 2007, at 11:23 AM, Huitzi wrote:
Hi once again and thank you very much for your reply. Here is
another thread.
I'm planning to deploy a small file server based on ZFS. I want to
know if I can start with 2 RAIDs, and add more RAIDs in the future
(like the gray RAID in the attac
We have the same problem, and I have just moved back to UFS because of
this issue. According to the engineer at Sun that I spoke with, there is
an internal RFE to address this problem.
The issue is this:
When configuring a zpool with 1 vdev in it and ZFS times out a w
[EMAIL PROTECTED] said:
> attached below the errors. But the question still remains: is ZFS only happy
> with JBOD disks and not SAN storage with hardware RAID? Thanks
ZFS works fine on our SAN here. You do get a kernel panic (Solaris-10U3)
if a LUN disappears for some reason (without ZFS-level r
Hi
The minimum number of disks for raidz is 3 (you can fool it, but it won't
protect your data), and the minimum number of disks for raidz2 is 4.
James Dickens
uadmin.blogspot.com
On 6/19/07, Huitzi <[EMAIL PROTECTED]> wrote:
Hi,
I'm planning to deploy a small file server based on ZFS, but I want to know how
Huitzi wrote:
Hi,
I'm planning to deploy a small file server based on ZFS, but I want to
know how many disks I need for raidz and for raidz2; I mean, what is
the minimum number of disks required.
If you have 2 disks, use mirroring (raidz would be no better)
If you have 3 disks, use 3-way mirror or
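Taking the minimums quoted in this thread and placeholder device names, a
quick sketch of the smallest layouts (alternatives, not run together):

  zpool create tank mirror c1t0d0 c1t1d0                  # 2 disks: mirror
  zpool create tank raidz  c1t0d0 c1t1d0 c1t2d0           # 3 disks: raidz
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0    # 4 disks: raidz2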
> The best practices guide on opensolaris does recommend replicated
> pools even if your backend storage is redundant. There are at least 2
> good reasons for that. ZFS needs a replica for the self healing
> feature to work. Also there is no fsck like tool for ZFS so it is a
> good idea to make s
Hi,
I'm planning to deploy a small file server based on ZFS, but I want to know how
many disks I need for raidz and for raidz2; I mean, what is the minimum
number of disks required.
Thank you in advance.
On 19 June, 2007 - Ed Ravin sent me these 1,7K bytes:
> Also, any pointers to troubleshooting performance issues with
> Solaris and ZFS would be appreciated. The last time I was heavily
> using Solaris was 2.6, and I see a lot of good toys have been added
> to the system since then.
Does it only
I want to set the values of arc c and arc p (C_max and P_addr) to different
memory values. What would be the hexadecimal values for 256 MB and for 128 MB?
I'm trying to use "mdb -k" to limit the amount of memory ZFS uses.
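The arithmetic itself: 256 MB = 256 * 1024 * 1024 = 268435456 = 0x10000000,
and 128 MB = 134217728 = 0x8000000. A sketch of the write, assuming the
usual trick of patching the live arc structure with mdb in write mode
(member names and addresses differ between builds, so print them first and
double-check before writing anything):

  # mdb -kw
  > arc::print -a c c_max p          # note the address printed for each member
  > <address-of-c_max>/Z 0x10000000  # 256 MB
  > <address-of-p>/Z 0x8000000       # 128 MB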
Victor Engle wrote:
Roshan,
As far as I know, there is no problem at all with using SAN storage
with ZFS and it does look like you were having an underlying problem
with either powerpath or the array.
Correct. A write failed.
The best practices guide on opensolaris does recommend replicated
Roshan,
As far as I know, there is no problem at all with using SAN storage
with ZFS and it does look like you were having an underlying problem
with either powerpath or the array.
The best practices guide on opensolaris does recommend replicated
pools even if your backend storage is redundant.
Victor,
Thanks for your comments, but I believe it contradicts the ZFS information given
below and now Bruce's mail.
After some digging around I found that the messages file has thrown out some
powerpath errors to one of the devices that may have caused the problem.
Attached below are the errors. Bu
Hi,
if you understand German or want to brush it up a little, I have a new ZFS
white paper in German for you:
http://blogs.sun.com/constantin/entry/new_zfs_white_paper_in
Since there's already so much collateral on ZFS in English, I thought it's
time for some localized stuff for my country.
The
Roshan,
Could you provide more detail, please? The host and ZFS should be
unaware of any EMC array-side replication, so this sounds more like an
EMC misconfiguration than a ZFS problem. Did you look in the messages
file to see if anything happened to the devices that were in your
zpools? If so then
Hi All,
We have come across a problem at a client where ZFS brought the system down
with a write error on an EMC device, due to mirroring being done at the EMC
level and not in ZFS. The client is totally EMC-committed and not too happy to
use ZFS for mirroring/RAID-Z. I have seen the notes below about the ZFS a
My shop recently switched our mail fileserver from an old Network
Appliance to a Solaris box running ZFS. Customers are mostly
indifferent to the change, except for one or two uses which are
dramatically slower. The most noticeable problem is that deleting
email messages is much slower.
Each cus