On 9/22/06, Gino Ruopolo <[EMAIL PROTECTED]> wrote:
> Update ...
>
> iostat output during "zpool scrub"
>
> extended device statistics
>
> device    r/s    w/s  Mr/s  Mw/s  wait  actv  svc_t  %w   %b
> sd34      2.0  395.2   0.1   0.6   0.0  34.8   87.7   0  100
> sd35     21.0  312.2   1.2   2.9   0.0  26.0   78.0   0   79
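Extended statistics like those quoted above come from iostat's extended mode on Solaris; a minimal sketch (the 5-second interval is illustrative):

```shell
# Extended device statistics with throughput in MB/s, sampled every
# 5 seconds (Solaris iostat flags). Run alongside "zpool scrub" to
# watch per-device r/s, w/s, Mr/s, Mw/s, svc_t and %b as above.
iostat -xM 5
```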
Harley:
> Old 36GB drives:
>
> | # time mkfile -v 1g zeros-1g
> | zeros-1g 1073741824 bytes
> |
> | real    2m31.991s
> | user    0m0.007s
> | sys     0m0.923s
>
> Newer 300GB drives:
>
> | # time mkfile -v 1g zeros-1g
> | zeros-1g 1073741824 bytes
> |
> | real    0m8.425s
> | user    0m0.010
On Fri, 22 Sep 2006, [EMAIL PROTECTED] wrote:
Are you just trying to measure ZFS's read performance here?
That is what I started looking at. We scrounged around
and found a set of 300GB drives to replace the old ones we
started with. Comparing these new drives to the old ones:
Old 36GB dr
Harley:
> I had tried other sizes with much the same results, but
> hadn't gone as large as 128K. With bs=128K, it gets worse:
>
> | # time dd if=zeros-10g of=/dev/null bs=128k count=102400
> | 81920+0 records in
> | 81920+0 records out
> |
> | real    2m19.023s
> | user    0m0.105s
> | sys
> Haik,
>
> Thank you very much. 'zpool list' yields
>
> NAME  SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
> z     74.5G  22.9G  51.6G  30%  ONLINE  -
>
> How do I confirm that /fitz is not currently a zfs
> mountpoint? 'zfs mount' yields
>
> fitz/home    /fitz/home
On Fri, 22 Sep 2006, johansen wrote:
ZFS uses a 128k block size. If you change dd to use a
bs=128k, do you observe any performance improvement?
I had tried other sizes with much the same results, but
hadn't gone as large as 128K. With bs=128K, it gets worse:
| # time dd if=zeros-10g of=/de
The history is quite simple:
1) Installed nv_b32 or around there on a zeroed drive. Created this
ZFS pool for the first time.
2) Non-live upgraded to nv_b42 when it came out, zpool upgrade on the
zpool in question from v2 to v3.
3) Tried to non-live upgrade to nv_b44, upgrade failed every time, so
Wow! I solved a tricky problem this morning thanks to Zones & ZFS integration.
We have a SAS SPDS database environment running on Sol10 06/06. The SPDS
database is unique in that when a table is being updated by one user it is
unavailable to the rest of the user community. Our nightly update job
ZFS uses a 128k block size. If you change dd to use a bs=128k, do you observe
any performance improvement?
> | # time dd if=zeros-10g of=/dev/null bs=8k count=102400
> | 102400+0 records in
> | 102400+0 records out
>
> | real    1m8.763s
> | user    0m0.104s
> | sys     0m1.759s
It's also wor
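A comparison like the 8k-vs-128k runs quoted above can be reproduced with a scratch file; the path and size here are illustrative only, not the 10g file from the thread:

```shell
# Create a small scratch file of zeros (path and size are illustrative).
dd if=/dev/zero of=/tmp/zeros-64m bs=1048576 count=64 2>/dev/null

# Time sequential reads at 8k and at 128k (the default ZFS recordsize):
time dd if=/tmp/zeros-64m of=/dev/null bs=8k
time dd if=/tmp/zeros-64m of=/dev/null bs=128k

rm /tmp/zeros-64m
```

On ZFS, reads smaller than the recordsize still cause whole 128k records to be read and checksummed, which is why the block size chosen for dd can matter.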
I have set up a small box to work with zfs. (2x 2.4GHz
xeons, 4GB memory, 6x scsi disks) I made one drive the boot
drive and put the other five into a pool with the "zpool
create tank" command right out of the admin manual.
The administration experience has been very nice and most
everythi
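The pool-creation step described above can be sketched as follows; the c1tXd0 device names are hypothetical (check your own `format` output):

```shell
# Create a simple striped pool named "tank" from five disks
# (device names are illustrative placeholders).
zpool create tank c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Or, for redundancy, a single raidz vdev across the same disks:
# zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Verify the layout:
zpool status tank
```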
On September 22, 2006 10:26:01 AM -0700 Alexei Rodriguez
<[EMAIL PROTECTED]> wrote:
Alexei Rodriguez wrote:
Unless they break the spec, yes, it should work. PCI
Excellent to know! I will verify that the motherboard and the PCI-X cards
play well together.
You might run into a problem with 3.3
> Check out this blog:
>
> http://blogs.sun.com/PlasticPixel/entry/build_your_own_multi_terabyte
This is pretty much exactly what I want to do. These SYBA PCI cards look like
they do the job, so now I need to see if I can find them locally (Bay Area) for
instant gratification. If not, NewEgg
> It sounds like you can resolve this issue by simply
> booting into the new BE and deleting the /fitz
> directory and then rebooting and going back into the
> new BE. I say this because from your message it
> sounds like the data from your zfs filesystem in
> /fitz was copied to /fitz in the new B
> Alexei Rodriguez wrote:
> Unless they break the spec, yes, it should work. PCI
Excellent to know! I will verify that the motherboard and the PCI-X cards play
well together.
Thanks!
Alexei
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Apologies for any confusion, but I am now able to give more output
regarding the zpool fitz.
unknown# zfs list --> returns list of zfs file system fitz and related
snapshots
unknown# zpool status
pool: fitz
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool
can s
> I believe I am experiencing a similar, but more
> severe issue and I do
> not know how to resolve it. I used liveupgrade from
> s10u2 to NV b46
> (via solaris express release). My second disk is zfs
> with the file
> system fitz. I did a 'zpool export fitz'
>
> Reboot with init 6 into new env
On Fri, Sep 22, 2006 at 03:36:36AM -0400, Rich wrote:
> ...huh.
>
> So /etc/zfs doesn't exist. At all.
>
> Creating /etc/zfs using mkdir, then importing the pool with zpool
> import -f, then rebooting, the behavior vanishes, so...yay.
>
> Problem solved, I guess, but shouldn't ZFS be smarter abo
I believe I am experiencing a similar, but more severe issue and I do
not know how to resolve it. I used liveupgrade from s10u2 to NV b46
(via solaris express release). My second disk is zfs with the file
system fitz. I did a 'zpool export fitz'
Reboot with init 6 into new environment, NV b46,
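The sequence described here amounts to exporting the pool before switching boot environments and importing it afterwards; a sketch, assuming the pool name fitz from the message:

```shell
# Before rebooting into the new boot environment: cleanly export the pool
zpool export fitz

# ...reboot into the new environment (e.g. "init 6")...

# After the upgrade: make the pool available again and confirm its state
zpool import fitz
zpool status fitz
```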
> > You mean pull it out? Does your hardware support hotswap?
>
> As far as I know the D1000 supports it - doesn't it?
I'm sure the D1000 is fine with the concept. It's probably something in
the software stack that is upset.
I was told that a similar issue that I once had when testing was likely
due
On 9/22/06, Dick Davies <[EMAIL PROTECTED]> wrote:
On 22/09/06, Alf <[EMAIL PROTECTED]> wrote:
> 2) I mirrored 2 disks within the same D1000 and while I was putting a
> big tar ball in the FS I tried to physically remove one mirror and
You mean pull it out? Does your hardware support hotswap
Alf writes:
> Hi James,
> I agree with you, but I think it could take a while
>
> cheers
>
> Alf
>
>
> James C. McPherson wrote:
> > Alf wrote:
> >> Hi Michael,
> >> I completely agree with you. I was just wondering about the
> >> differences between ZFS and other VMs and als
Hi James,
I agree with you, but I think it could take a while
cheers
Alf
James C. McPherson wrote:
Alf wrote:
Hi Michael,
I completely agree with you. I was just wondering about the
differences between ZFS and other VMs, and also whether I got the essence
of it.
Also customers could ask thes
http://blogs.sun.com/roch/entry/zfs_and_oltp
Performance, Availability & Architecture Engineering
Roch Bourbonnais    Sun Microsystems, Inc - Grenoble
Senior Performance An
Alf wrote:
Hi Michael,
I completely agree with you. I was just wondering about the differences
between ZFS and other VMs, and also whether I got the essence of it.
Also, customers could ask these things, and whether they can use ZFS
filesystems in the old-fashioned way, setting a specific size.
That is part o
1.
But size is not checked on all devices if open() or fstat() returns an
error - the device is just skipped. So if the first device reports a size
larger than the rest, and the sizes of the others couldn't be obtained,
I guess zpool will assume the size of each vdev to be that of the first
device, right?
2. At th
Sergey wrote:
Please read also http://docs.info.apple.com/article.html?artnum=303503.
So this could be a justification for 6460889 - zil shouldn't send
write-cache-flush commands to busted devices
- Victor
Darren J Moffat <[EMAIL PROTECTED]> wrote:
> There is no such thing as POSIX ACLs. The draft never made it to
> standard. Veritas NetBackup and Legato Networker both use the Solaris
> acl(2) system call to get POSIX draft ACLs from UFS. ZFS and NFSv4 use
> a more modern and much more express
Alf wrote:
What do you think about pulling a mirror out of the D1000 and the
complete hang of the system?
I left that on purpose for others to answer - I don't know the HW well
enough by far :-)
--
Michael Schuster +49 89 46008-2974 / x62974
visit the online support center: http
Hi Michael,
I completely agree with you. I was just wondering about the differences
between ZFS and other VMs, and also whether I got the essence of it.
Also, customers could ask these things, and whether they can use ZFS
filesystems in the old-fashioned way, setting a specific size.
What do you think about
Alf wrote:
Hi,
Dick Davies wrote:
On 22/09/06, Alf <[EMAIL PROTECTED]> wrote:
1) It's no longer possible, within a pool, to create a file system with a
specific size. If I have 2 file systems I can't decide to give, for
example, 10g to one and 20g to the other unless I set a reservation
for t
Alf wrote:
Hi all,
as I am a newbie to ZFS, yesterday I played with it a little bit and there
are so many good things, but I've noted a few things I couldn't explain, so:
1) It's no longer possible, within a pool, to create a file system with a
specific size. If I have 2 file systems I can't de
Nicolas Dorfsman wrote:
I am using Netbackup 6.0 MP3 on several ZFS systems
just fine. I
think that NBU won't back up some exotic ACLs of ZFS,
but if you
are using ZFS like other filesystems (UFS, etc) then there aren't any issues.
Hum. ACLs are not so "exotic".
This IS a really BIG issu
Hi,
Dick Davies wrote:
On 22/09/06, Alf <[EMAIL PROTECTED]> wrote:
1) It's no longer possible, within a pool, to create a file system with a
specific size. If I have 2 file systems I can't decide to give, for
example, 10g to one and 20g to the other unless I set a reservation
for them. Also I
Thanks for the quick response Eric!
Nice to know I wasn't completely misunderstanding what was going on :D
Cheers,
Liam
On 22/09/06, Alf <[EMAIL PROTECTED]> wrote:
1) It's no longer possible, within a pool, to create a file system with a
specific size. If I have 2 file systems I can't decide to give, for
example, 10g to one and 20g to the other unless I set a reservation
for them. Also I tried to manually create
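The reservation approach mentioned above can approximate fixed-size filesystems when paired with a quota; a sketch, assuming a hypothetical pool named z and datasets a and b:

```shell
# Give one filesystem a guaranteed 10g and cap it there:
zfs create z/a
zfs set reservation=10g z/a   # space guaranteed to this dataset
zfs set quota=10g z/a         # usage capped at the same size

# And 20g for the other:
zfs create z/b
zfs set reservation=20g z/b
zfs set quota=20g z/b
```

Setting reservation and quota to the same value is the closest ZFS equivalent of carving out a fixed-size volume the "old fashioned" way.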
Hi all,
as I am a newbie to ZFS, yesterday I played with it a little bit and there
are so many good things, but I've noted a few things I couldn't explain, so:
1) It's no longer possible, within a pool, to create a file system with a
specific size. If I have 2 file systems I can't decide to give
> I am using Netbackup 6.0 MP3 on several ZFS systems
> just fine. I
> think that NBU won't back up some exotic ACLs of ZFS,
> but if you
> are using ZFS like other filesystems (UFS, etc) then there aren't any issues.
Hum. ACLs are not so "exotic".
This IS a really BIG issue. If you are us
Please read also http://docs.info.apple.com/article.html?artnum=303503.
...huh.
So /etc/zfs doesn't exist. At all.
Creating /etc/zfs using mkdir, then importing the pool with zpool
import -f, then rebooting, the behavior vanishes, so...yay.
Problem solved, I guess, but shouldn't ZFS be smarter about creating
its own config directory?
- Rich
On 9/21/06, Eric Schro
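Rich's fix amounts to recreating the directory ZFS expects for its pool cache and then force-importing; a sketch using the pool name from the thread:

```shell
# Recreate the directory that holds the zpool.cache file,
# then force-import the pool so the cache entry is rewritten.
mkdir -p /etc/zfs
zpool import -f fitz
```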