I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can deliver
from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS)
of=/dev/null gives me only 35MB/s!? I am getting basically the same result
whether it is a single ZFS drive
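[For illustration, a minimal sketch of the comparison being described -- the device
and file paths below are placeholders rather than the poster's actual ones:
    /usr/bin/time dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=128k count=10000   # raw device
    /usr/bin/time dd if=/tank/testfile of=/dev/null bs=128k count=10000       # file on a ZFS pool
The interesting number is the ratio between the two runs, not the absolute figures.]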
Shweta Krishnan wrote:
I ran zpool with truss, and here is the system call trace. (again, zfs_lyr is
the layered driver I am trying to use to talk to the ramdisk driver).
When I compared it to a successful zpool creation, the culprit is the last
failing ioctl
i.e. ioctl(3, ZFS_IOC_CREATE_POOL,
On Mon, 14 May 2007, Marko Milisavljevic wrote:
[ ... reformatted ]
> I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can
> deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null
> gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me
> only 3
> If I was to replace vxfs with zfs I could utilize
> raidz(2) instead of
> the built-in hardware raid-controller.
If you are at 99.95% or more, then think about it twice.
ZFS still has a few bugs that need to be worked out ...
> Are there any jbod-only storage systems that I can
> add in batches of
> 40-50 TB?
W
Manoj Joseph wrote:
Hi,
This is probably better discussed on zfs-discuss. I am CCing the list.
Followup emails could leave out opensolaris-discuss...
Shweta Krishnan wrote:
Does zfs/zpool support the layered driver interface?
I wrote a layered driver with a ramdisk device as the underlying
Marko Milisavljevic wrote:
I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can deliver
from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS)
of=/dev/null gives me only 35MB/s!? I am getting basically the same result
w
Hi Lori et al,
I am not 100% sure that it breaks in zfs. I have managed to attach a serial
console to the machine and I see the following in the boot process after
booting into the kernel debugger:
Welcome to kmdb
kmdb: unable to determine terminal type: assuming `vt100'
Loaded modules: [ uni
A few of us used the same b62 boot images, with some having the boot loop
problems and some not. It seems it may have been related to the specific
profile used. Could you post the contents of your pfinstall profile?
Thanks,
Malachi
On 5/14/07, Steffen Weinreich <[EMAIL PROTECTED]> wrote:
Hi L
This is likely because ldi_get_size() is failing for your device. We've
seen this before on 3rd party devices, and have been meaning to create a
special errno (instead of EINVAL) to give a more helpful message in this
case.
- Eric
On Sun, May 13, 2007 at 11:54:45PM -0700, Shweta Krishnan wrote:
On Mon, May 14, 2007 at 11:55:28AM -0500, Swetha Krishnan wrote:
> Thanks Eric and Manoj.
>
> Here's what ldi_get_size() returns:
> bash-3.00# dtrace -n 'fbt::ldi_get_size:return{trace(arg1);}' -c 'zpool create adsl-pool /dev/layerzfsminor1'
> dtrace: description 'fbt::ldi_get_size:return' mat
Thanks Eric and Manoj.
Here's what ldi_get_size() returns:
bash-3.00# dtrace -n 'fbt::ldi_get_size:return{trace(arg1);}' -c 'zpool create adsl-pool /dev/layerzfsminor1'
dtrace: description 'fbt::ldi_get_size:return' matched 1 probe
cannot create 'adsl-pool': invalid argument for this pool operat
Hi All,
My mate Chris posed me the following; rather than flail about with
engineering friends trying to get a "definitive-du-jour" answer,
I thought instead to introduce him to the relevant opensolaris forum
in the hope of broadening the latter's appeal.
So: RAID-Z hot swap to larger disks an
I did this on Solaris 10u3. 4 120GB -> 4 500GB drives. Replace, resilver;
repeat until all drives are replaced.
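[A minimal sketch of that replace-and-resilver loop, with hypothetical pool and
device names -- repeat once per disk, letting each resilver finish before
starting the next:
    zpool replace tank c0t2d0 c1t2d0   # old small disk, new large disk
    zpool status tank                  # watch until the resilver completes
Once every disk in the vdev has been replaced, the pool can grow to the new size.]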
On 5/14/07, Alec Muffett <[EMAIL PROTECTED]> wrote:
Hi All,
My mate Chris posed me the following; rather than flail about with
engineering friends trying to get a "definitive-de-
[EMAIL PROTECTED] wrote on 05/14/2007 02:10:28 PM:
> I did this on Solaris 10u3. 4 120GB -> 4 500GB drives. Replace,
> resilver; repeat until all drives are replaced.
Just beware of the long resilver times -- on a 500GB x 6 raidz2 group at
70% used space a resilver takes 7+ days where snaps
I was wondering if this was a good setup for a 3320 single-bus,
single-host attached JBOD. There are 12 146G disks in this array:
I used:
zpool create pool1 \
raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t8d0 c2t9d0 c2t10d0 \
spare c2t11d0 c2t12d0
(or something very similar)
This
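[A quick sanity check of that layout after creation, using the pool name above:
    zpool status pool1   # should show one raidz2 vdev of ten disks plus the two spares
    zpool list pool1     # usable size reflects two disks' worth of raidz2 parity
]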
On Mon, 14 May 2007, Alec Muffett wrote:
> I suspect the proper thing to do would be to build the six new large
> disks into a new RAID-Z vdev, add it as a mirror of the older,
> smaller-disk RAID-Z vdev, rezilver to zynchronize them, and then break
> the mirror.
The 'zpool replace' command is a
To reply to my own message: this article offers lots of insight into why dd
access directly through the raw disk is fast, while accessing a file through the
file system may be slow.
http://www.informit.com/articles/printerfriendly.asp?p=606585&rl=1
So, I guess what I'm wondering now is, does it
This certainly isn't the case on my machine.
$ /usr/bin/time dd if=/test/filebench/largefile2 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
real        1.3
user        0.0
sys         1.2
# /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=10000
10000+0 re
On 14 May, 2007 - Dale Sears sent me these 0,9K bytes:
> I was wondering if this was a good setup for a 3320 single-bus,
> single-host attached JBOD. There are 12 146G disks in this array:
>
> I used:
>
> zpool create pool1 \
> raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t8d0 c2t9
Thank you for those numbers.
I should have mentioned that I was mostly interested in single disk or small
array performance, as it is not possible for dd to meaningfully access
multiple-disk configurations without going through the file system. I find
it curious that there is such a large slowdow
I've seen this ldi_get_size() failure before and it usually occurs on
drivers that don't implement their prop_op(9E) entry point correctly
or that don't implement the dynamic [Nn]blocks/[Ss]ize property correctly.
What does your layered driver do in its prop_op(9E) entry point?
Also, what driver
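[One way to see what the layered driver is actually being asked to do during the
failing create -- a hedged sketch, assuming the module is loaded under the name
zfs_lyr as mentioned earlier in the thread:
    dtrace -n 'fbt:zfs_lyr::entry { @[probefunc] = count(); }' \
        -c 'zpool create adsl-pool /dev/layerzfsminor1'
If no property-related routine shows up in the output, that points at the missing
prop_op(9E)/Nblocks support described above.]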
I missed an important conclusion from j's data, and that is that single disk
raw access gives him 56MB/s, and RAID 0 array gives him 961/46=21MB/s per
disk, which comes in at 38% of potential performance. That is in the
ballpark of getting 45% of potential performance, as I am seeing with my
puny
On Mon, 14 May 2007, Marko Milisavljevic wrote:
> To reply to my own message this article offers lots of insight into why
> dd access directly through raw disk is fast, while accessing a file through
> the file system may be slow.
>
> http://www.informit.com/articles/printerfriendly.asp?p=60
Marko Milisavljevic wrote:
I missed an important conclusion from j's data, and that is that single
disk raw access gives him 56MB/s, and RAID 0 array gives him
961/46=21MB/s per disk, which comes in at 38% of potential performance.
That is in the ballpark of getting 45% of potential performance
Thank you, Al.
Would you mind also doing:
ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=10000
to see the raw performance of the underlying hardware.
On 5/14/07, Al Hopper <[EMAIL PROTECTED]> wrote:
# ptime dd if=./allhomeal20061209_01.tar of=/dev/null bs=128k count=10000
10000+0 records
On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote:
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
Hi,
I have a test server that I use for testing my different jumpstart
installations. This system is continuously installed and reinstalled with
different system builds.
For some b
Marko Milisavljevic wrote:
I missed an important conclusion from j's data, and that is that single
disk raw access gives him 56MB/s, and RAID 0 array gives him
961/46=21MB/s per disk, which comes in at 38% of potential performance.
That is in the ballpark of getting 45% of potential performance
Thanks Edward.
Currently my layered driver does not implement the prop_op(9E) entry point - I
didn't realize this was necessary since my layered driver worked fine without
it when used over UFS.
My layered driver sits above a ramdisk driver.
I realized the same problem that you've mentioned whe
Marko Milisavljevic wrote:
> To reply to my own message this article offers lots of insight into why
> dd access directly through raw disk is fast, while accessing a file through
> the file system may be slow.
>
> http://www.informit.com/articles/printerfriendly.asp?p=606585&rl=1
>
> So, I g
I have two drives with the same slices and I get an odd error if I try
to create a pool on one drive, but not the other:
Part      Tag    Flag     Cylinders        Size            Blocks
  0       swap    wu       3 - 264        2.01GB    (262/0/0) 4209030
  1       root    wm     265 - 23
Right now, the AthlonXP machine is booted into Linux, and I'm getting the same
raw speed as when it is in Solaris, from PCI Sil3114 with Seagate 320G (7200.10):
dd if=/dev/sdb of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 16.7756 seconds, 7
Thank you, Ian,
You are getting ZFS over 2-disk RAID-0 to be twice as fast as dd raw disk
read on one disk, which sounds more encouraging. But, there is something odd
with dd from raw drive - it is only 28MB/s or so, if I divided that right? I
would expect it to be around 100MB/s on 10K drives, o
Try 'trace((int)arg1);' -- 4294967295 is the unsigned representation of -1.
Adam
On Mon, May 14, 2007 at 09:57:23AM -0700, Shweta Krishnan wrote:
> Thanks Eric and Manoj.
>
> Here's what ldi_get_size() returns:
> bash-3.00# dtrace -n 'fbt::ldi_get_size:return{trace(arg1);}' -c 'zpool
> create a
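[Adam's suggestion applied to the earlier one-liner, so the return value prints as
a signed integer:
    dtrace -n 'fbt::ldi_get_size:return{trace((int)arg1);}' \
        -c 'zpool create adsl-pool /dev/layerzfsminor1'
A value of -1 here confirms that ldi_get_size() is failing for the layered device.]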
Don't know how much this will help, but my results:
Ultra 20 we just got at work:
# uname -a
SunOS unknown 5.10 Generic_118855-15 i86pc i386 i86pc
raw disk
dd if=/dev/dsk/c1d0s6 of=/dev/null bs=128k count=10000  0.00s user 2.16s system
14% cpu 15.131 total
1,280,000k in 15.131 seconds
84768k/
Marko,
I tried this experiment again using 1 disk and got nearly identical
times:
# /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
real       21.4
user        0.0
sys         2.4
$ /usr/bin/time dd if=/test/filebench/testfile of=/dev/
On Mon, 14 May 2007, Marko Milisavljevic wrote:
> Thank you, Al.
>
> Would you mind also doing:
>
> ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=10000
# ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=10000
real       20.046
user        0.013
sys         3.568
> to see the raw
I am very grateful to everyone who took the time to run a few tests to help
me figure out what is going on. As per j's suggestions, I tried some
simultaneous reads, and a few other things, and I am getting interesting and
confusing results.
All tests are done using two Seagate 320G drives on sil3114.
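[A sketch of the simultaneous-read test being described, with hypothetical device
names for the two Seagate drives:
    dd if=/dev/rdsk/c2t0d0s0 of=/dev/null bs=128k count=10000 &
    dd if=/dev/rdsk/c2t1d0s0 of=/dev/null bs=128k count=10000 &
    wait   # compare per-disk throughput here against a single-drive run
]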
On 5/15/07, eric kustarz <[EMAIL PROTECTED]> wrote:
On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote:
>>
>> On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
>>
>>> Hi,
>>>
>>> I have a test server that I use for testing my different jumpstart
>>> installations. This system is continu