Hi Anton,
Thank you for the information. That is exactly our scenario. We're 70%
write heavy, and given the nature of the workload, our typical writes
are 10-20K. Again the information is much appreciated.
Best Regards,
Jason
On 1/3/07, Anton B. Rang <[EMAIL PROTECTED]> wrote:
>> In our recent
> Is there some reason why a small read on a raidz2 is not statistically very
> likely to require I/O on only one device? Assuming a non-degraded pool of
> course.
ZFS stores its checksums for RAIDZ/RAIDZ2 in such a way that all disks must be
read to compute and verify the checksum.
This me
>> In our recent experience, RAID-5, due to the 2 reads, an XOR calc and a
>> write op per write instruction, is usually much slower than RAID-10
>> (two write ops). Any advice is greatly appreciated.
>
> RAIDZ and RAIDZ2 do not suffer from this malady (the RAID5 write hole).
1. This isn't the "wr
On Jan 3, 2007, at 19:55, Jason J. W. Williams wrote:
performance should be good? I assumed it was an analog to RAID-6. In
our recent experience, RAID-5, due to the 2 reads, an XOR calc and a
write op per write instruction, is usually much slower than RAID-10
(two write ops). Any advice is greatly
If you dig into the email archives you'll see lots of threads about
where to use ZFS or hw level raid, the tradeoffs, possible performance
hits, etc. It really is context sensitive.
Karen Chau wrote:
Hi Torrey, thanks for your response.
I'm not sure if I can create a LUN using a single disk on
> AFAIK, the manpage is accurate. The space "used" by a snapshot is exactly
> the amount of space that will be freed up when you run 'zfs destroy
> '. Once that operation completes, 'zfs list' will show that the
> space "used" by adjacent snapshots has changed as a result.
>
> Unfortunately,
Hi Robert,
I've read that paper. Thank you for the condescension.
-J
On 1/3/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Jason,
Thursday, January 4, 2007, 1:55:02 AM, you wrote:
JJWW> Hi Robert,
JJWW> Our X4500 configuration is multiple 6-way (across controllers) RAID-Z2
JJWW> grou
Hello Jason,
Thursday, January 4, 2007, 1:55:02 AM, you wrote:
JJWW> Hi Robert,
JJWW> Our X4500 configuration is multiple 6-way (across controllers) RAID-Z2
JJWW> groups striped together. Currently, 3 RZ2 groups. I'm about to test
JJWW> write performance against ZFS RAID-10. I'm curious why RAID
On Jan 2, 2007, at 6:48 AM, Darren Reed wrote:
Darren J Moffat wrote:
...
Of course. I didn't mention it because I thought it was obvious
but this would NOT break the COW or the transactional integrity of
ZFS.
One of the possible ways that the "to be bleached" blocks are
dealt with i
Hello Peter,
Thursday, January 4, 2007, 1:12:47 AM, you wrote:
>> I've been using a simple model for small, random reads. In that model,
>> the performance of a raidz[12] set will be approximately equal to a single
>> disk. For example, if you have 6 disks, then the performance for the
>> 6-dis
Hi Robert,
That makes sense. Thank you. :-) Also, it was zpool I was looking at.
zfs always showed the correct size.
-J
On 1/3/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Jason,
Wednesday, January 3, 2007, 11:40:38 PM, you wrote:
JJWW> Just got an interesting benchmark. I made two
Hello Jason,
Wednesday, January 3, 2007, 11:40:38 PM, you wrote:
JJWW> Just got an interesting benchmark. I made two zpools:
JJWW> RAID-10 (9x 2-way RAID-1 mirrors: 18 disks total)
JJWW> RAID-Z2 (3x 6-way RAIDZ2 group: 18 disks total)
JJWW> Copying 38.4GB of data from the RAID-Z2 to the RAID-10
Hi Robert,
Our X4500 configuration is multiple 6-way (across controllers) RAID-Z2
groups striped together. Currently, 3 RZ2 groups. I'm about to test
write performance against ZFS RAID-10. I'm curious why RAID-Z2
performance should be good? I assumed it was an analog to RAID-6. In
our recent expe
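(For anyone reproducing that layout: a striped set of RAID-Z2 groups is built
with a single zpool create that lists several raidz2 vdevs, and ZFS stripes
across them automatically. The device names below are placeholders for
illustration, not the actual X4500 slot layout:

  # three 6-disk RAID-Z2 groups, members spread across six controllers
  # (device names are placeholders)
  zpool create tank \
    raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0

Each raidz2 keyword starts a new top-level group in the pool.)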
Hello Jason,
Wednesday, January 3, 2007, 11:11:31 PM, you wrote:
JJWW> Hi Richard,
JJWW> Hmm...that's interesting. I wonder if it's worth benchmarking RAIDZ2
JJWW> if those are the results you're getting. The testing is to see the
JJWW> performance gain we might get for MySQL moving off the FLX2
Hello Nicholas,
Wednesday, January 3, 2007, 5:08:25 PM, you wrote:
>
I agree this needs to be corrected and am glad to see that a bug was opened for it. Do you know what the bugid is for it?
I don't know the bug id, however
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg
Hello zfs-discuss,
zfs recv -v at the end reported:
received 928Mb stream in 6346 seconds (150Kb/sec)
I'm not sure, but shouldn't it be 928MB and 150KB?
Or perhaps we're counting bits?
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
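(A quick sanity check of the figures above: 928 * 1024 / 6346 is roughly 150,
so the two numbers are at least consistent with each other whichever unit is
meant; and since a send stream is presumably counted in bytes rather than
bits, the labels should most likely read 928MB and 150KB/sec.)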
> I've been using a simple model for small, random reads. In that model,
> the performance of a raidz[12] set will be approximately equal to a single
> disk. For example, if you have 6 disks, then the performance for the
> 6-disk raidz2 set will be normalized to 1, and the performance of a 3-way
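(Applying that model very roughly to the 18-disk layouts discussed elsewhere
in this thread, and assuming something like 150 small random read IOPS per
SATA drive purely for illustration:

  3 x 6-disk raidz2 groups ->  ~3 x 150 =  ~450 IOPS (each group acts like one disk)
  9 x 2-way mirrors        -> ~18 x 150 = ~2700 IOPS (reads can hit either side)

The absolute numbers are made up; only the ratio between the two layouts is
the point of the model.)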
EE> My main question is: does anyone have experience doing this in
EE> production? It looks good in HTML and man pages, but I would like to
EE> know if there are any caveats I should be aware of. Various threads
EE> I've read in the alias archives do not really seem to talk about
EE> people's ex
Just got an interesting benchmark. I made two zpools:
RAID-10 (9x 2-way RAID-1 mirrors: 18 disks total)
RAID-Z2 (3x 6-way RAIDZ2 group: 18 disks total)
Copying 38.4GB of data from the RAID-Z2 to the RAID-10 took 307
seconds. Deleted the data from the RAID-Z2. Then copying the 38.4GB of
data from
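(For reference, 38.4GB in 307 seconds is about 38.4 * 1024 / 307, or roughly
128MB/sec of sustained copy throughput; and since the data is read from one
pool while being written to the other, the figure reflects both pools together
rather than either one alone.)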
Frank Batschulat wrote:
it seems taking a clone always requires taking a snapshot first and providing
it as a parameter
to the zfs clone command.
Now, wouldn't it be a more natural way of usage when I intend to create a clone
that, by default,
the zfs clone command will create the needed snapshot f
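(For context, the two-step sequence being discussed looks like this; the
dataset names are made up for illustration:

  # example dataset names only
  zfs snapshot tank/data@for-clone
  zfs clone tank/data@for-clone tank/data-clone

The suggestion is that zfs clone could create that intermediate snapshot
itself when one isn't supplied.)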
[EMAIL PROTECTED] wrote:
which is not the behavior I am seeing..
Show me the output, and I can try to explain what you are seeing.
AFAIK, the manpage is accurate. The space "used" by a snapshot is exactly
the amount of space that will be freed up when you run 'zfs destroy
'. Once that ope
Hi Richard,
Hmm...that's interesting. I wonder if it's worth benchmarking RAIDZ2
if those are the results you're getting. The testing is to see the
performance gain we might get for MySQL moving off the FLX210 to an
active/passive pair of X4500s. Was hoping with that many SATA disks
RAIDZ2 would
Jason J. W. Williams wrote:
Hello All,
I was curious if anyone had run a benchmark on the IOPS performance of
RAIDZ2 vs RAID-10? I'm getting ready to run one on a Thumper and was
curious what others had seen. Thank you in advance.
I've been using a simple model for small, random reads. In tha
On 03 January, 2007 - Richard Elling sent me these 0,5K bytes:
> Tomas Ögren wrote:
> >df (GNU df) says there are ~850k inodes used, I'd like to keep those in
> >memory.. There is currently 1.8TB used on the filesystem.. The
> >probability of a cache hit in the user data cache is about 0% and the
On 03 January, 2007 - Jason J. W. Williams sent me these 0,4K bytes:
> Hello All,
>
> I was curious if anyone had run a benchmark on the IOPS performance of
> RAIDZ2 vs RAID-10? I'm getting ready to run one on a Thumper and was
> curious what others had seen. Thank you in advance.
http://blogs.s
Hello All,
I was curious if anyone had run a benchmark on the IOPS performance of
RAIDZ2 vs RAID-10? I'm getting ready to run one on a Thumper and was
curious what others had seen. Thank you in advance.
Best Regards,
Jason
Tomas Ögren wrote:
df (GNU df) says there are ~850k inodes used, I'd like to keep those in
memory.. There is currently 1.8TB used on the filesystem.. The
probability of a cache hit in the user data cache is about 0% and the
probability that an rsync happens again shortly is about 100%..
Also, b
Hi Torrey, thanks for your response.
I'm not sure if I can create a LUN using a single disk on the 6130.
If I use 6 disks to create 3 LUNs (2 disks per LUN) and create a raidz
pool, I will have stripe w/parity at *BOTH* the LUN level and the ZFS level;
would this cause a performance issue? How abou
You want to give ZFS multiple LUNs so it can have redundancy within the
pool (mirror or RAIDZ). Otherwise, you will not be able to recover from
certain types of errors; a zpool with a single LUN would only let you
detect the errors.
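(As a minimal sketch of the difference, with placeholder LUN names:

  zpool create tank c2t0d0                  # single LUN: errors detected, not repaired
  zpool create tank mirror c2t0d0 c2t1d0    # two LUNs: ZFS can repair from the good copy

With only one LUN there is no redundant copy inside the pool for ZFS to heal
from, even if the array underneath is itself redundant.)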
Karen Chau wrote:
Roch - PAE wrote:
I've just generated some data for an upcoming blog entry on
the subject. This is about a small file tar extract :
All times are elapsed (single 72GB SAS disk)
Local and memory based filesystems
tmpfs : 0.077 sec
ufs : 0.25 sec
zfs : 0.12
I've just generated some data for an upcoming blog entry on
the subject. This is about a small file tar extract :
All times are elapsed (single 72GB SAS disk)
Local and memory based filesystems
tmpfs : 0.077 sec
ufs : 0.25 sec
zfs : 0.12 sec
NFS service th
Ah yes! Thank you Casper. I knew this looked familiar! :-)
Yes, this is almost certainly what is happening here. The
bug was introduced in build 51 and fixed in build 54.
[EMAIL PROTECTED] wrote:
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could yo
Tan -
ZFS is designed to grow devices, but due to a bug in ZFS as well as a
larger design flaw when dealing with labelled disks, this will not
happen. The changes required to make this work were recently approved
as part of:
PSARC/2006/373 Dynamic LUN Expansion
This involves the driver catches
>Hmmm, so there is lots of evictable cache here (mostly in the MFU
>part of the cache)... could you make your core file available?
>I would like to take a look at it.
Isn't this just like:
6493923 nfsfind on ZFS filesystem quickly depletes memory in a 1GB system
Which was introduced in b51(or 52
On Wed, 3 Jan 2007, Darren Dunham wrote:
We have some HDS storage that isn't supported by mpxio, so we have to
use veritas dmp to get multipathing.
What's the recommended way to use DMP storage with ZFS? I want to use
DMP but get at the multipathed virtual luns at as low a level as
possible to
Anders,
Have you considered something like the following:
http://www.newegg.com/Product/Product.asp?Item=N82E16816133001
I realize you're having issues sticking more HDDs internally; this
should solve that issue. Running iSCSI volumes is going to get real
ugly in a big hurry, and I strongly sugg
I agree this needs to be corrected and am glad to see that a bug was opened
for it. Do you know what the bugid is for it?
On 1/2/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Nicholas,
Tuesday, January 2, 2007, 10:10:29 PM, you wrote:
>
You may want to check some of the past posti
write cache was enabled on all the ZFS drives, but disabling it gave a
negligible speed improvement: (FWIW, the pool has 50 drives)
(write cache on)
/bin/time tar xf /tmp/vbulletin_3-6-4.tar
real 51.6
user 0.0
sys 1.0
(write cache off)
/bin/time tar xf /tmp/vbulletin_
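(In case anyone wants to repeat this: on Solaris the per-drive write cache can
usually be inspected and toggled from format's expert mode, roughly the
sequence below, though the exact menus vary with the drive and driver:

  format -e               (then select the drive from the menu)
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> disable    (or enable)
)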
> We have some HDS storage that isn't supported by mpxio, so we have to
> use veritas dmp to get multipathing.
> What's the recommended way to use DMP storage with ZFS? I want to use
> DMP but get at the multipathed virtual luns at as low a level as
> possible to avoid using vxvm as much as possibl
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could you make your core file available?
I would like to take a look at it.
-Mark
Tomas Ögren wrote:
On 03 January, 2007 - Mark Maybee sent me these 5,0K bytes:
Tomas,
There are a couple of things going o
On 03 January, 2007 - Mark Maybee sent me these 5,0K bytes:
> Tomas,
>
> There are a couple of things going on here:
>
> 1. There is a lot of fragmentation in your meta-data caches (znode,
> dnode, dbuf, etc). This is burning up about 300MB of space in your
> hung kernel. This is a known probl
Tomas,
There are a couple of things going on here:
1. There is a lot of fragmentation in your meta-data caches (znode,
dnode, dbuf, etc). This is burning up about 300MB of space in your
hung kernel. This is a known problem that we are currently working
on.
2. While the ARC has set its desired
On 03 January, 2007 - Robert Milkowski sent me these 0,2K bytes:
> Hello Tomas,
>
>
> Give us output of ::kmastat on crashdump.
Ok, attached.
/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu
Hello Tomas,
Give us output of ::kmastat on crashdump.
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
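(For reference, collecting that looks something like the following, assuming
the dump was already saved by savecore as the usual unix.N/vmcore.N pair:

  # mdb unix.0 vmcore.0
  > ::kmastat

or, for the same view of a live kernel: echo ::kmastat | mdb -k )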
On 03 January, 2007 - Robert Milkowski sent me these 3,0K bytes:
> Hello Tomas,
>
> Wednesday, January 3, 2007, 10:32:39 AM, you wrote:
>
> TÖ> The tweaks I have are:
> TÖ> set ncsize = 50
> TÖ> set nfs:nrnode = 50
> TÖ> set zfs:zil_disable=1
> TÖ> set zfs:zfs_vdev_cache_bshift=14
> TÖ> set
Ideally you should add 3-5 disks at a time so you can add
raidz (like raid5) groups so the failure of a disk won't cause loss of
data.
Actually, I usually add them 8 at a time; it just averages out to one every 1-2
months.
With ZFS it's easier if you keep all disks on one server, just buy
the
Hi,
There was an old thread on whether ZFS can handle resized LUNs
(specifically from a NetApp filer), but somehow I can't seem to
access the newly made-available space.
In my setup, I have created a zpool from a single Netapp-exported LUN, and
a single zfs in this zpool.
After
Hello Tomas,
Wednesday, January 3, 2007, 10:32:39 AM, you wrote:
TÖ> Hello.
TÖ> Having some hangs on a snv53 machine which is quite probably ZFS+NFS
TÖ> related, since that's all the machine does ;)
TÖ> The machine is a 2x750MHz Blade1000 with 2GB ram, using a SysKonnect
TÖ> 9821 GigE card (with
Anton B. Rang wrote:
Good point. Verifying that the new überblock is readable isn’t actually
sufficient, since it might become unreadable in the future. You’d need to wait
for several transaction groups, until the block was unreachable by the oldest
remaining überblock, to be safe in this se
Hello Nicholas,
Tuesday, January 2, 2007, 10:10:29 PM, you wrote:
>
You may want to check some of the past postings to this list as I believe what you are seeing has already been discussed. If I remember correctly this is a "feature" of zfs and is designed to protect the integrit
Hello.
Having some hangs on a snv53 machine which is quite probably ZFS+NFS
related, since that's all the machine does ;)
The machine is a 2x750MHz Blade1000 with 2GB ram, using a SysKonnect
9821 GigE card (with their 8.19.1.3 skge driver) and two HP branded MPT
SCSI cards. Normal load is pretty mu