There is another benchmark tool named "iozone" (http://www.iozone.org/).
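A minimal sketch of an invocation, assuming the pool under test is mounted
at /tank (the path and sizes are just examples, adjust to taste):
# iozone -a -s 1g -r 128k -f /tank/iozone.tmp
where -a runs the automatic test matrix, -s sets the file size, and -r the
record size.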
Hope this helps.
Cesare
On Sun, Jun 15, 2008 at 6:43 AM, Will Murnane <[EMAIL PROTECTED]> wrote:
> On Sun, Jun 15, 2008 at 04:30, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
>> So - using plain dd to the zfs filesystem on sai
> Yeah. The command line works fine. Thought it to be a
> bit curious that there was an issue with the HTTP
> interface. It's low priority I guess because it
> doesn't impact the functionality really.
>
> Thanks for the responses.
I was receiving the same stacktrace:
No enum const class
com.s
On Mon, Jun 16, 2008 at 6:09 PM, Aaron Moore <[EMAIL PROTECTED]> wrote:
> For the drives I am looking at using a LSI SAS3081E-R
The only snag that you might run into is the cable that comes with the
card. The cables included with the 3081 go from mini-SAS (sff-8087) to
SAS with power. If you're us
Aaron Moore wrote:
> I am new to OpenSolaris and am trying to set up a ZFS-based storage
> solution.
>
> I am looking at setting up a system with the following specs:
>
> Intel BOXDG33FBC, Intel Core 2 Duo 2.66GHz, 2 or 4 GB RAM
>
> For the drives I am looking at using a LSI SAS3081E-R
>
> I've b
Hello,
I am new to OpenSolaris and am trying to set up a ZFS-based storage solution.
I am looking at setting up a system with the following specs:
Intel BOXDG33FBC
Intel Core 2 Duo 2.66GHz
2 or 4 GB RAM
For the drives I am looking at using a
LSI SAS3081E-R
I've been reading around and it so
Richard Elling wrote:
> Matthew C Aycock wrote:
>
>> Well, I have a zpool created that contains four vdevs. Each vdev is a mirror
>> of a T3B lun and a corresponding lun of a SE3511 brick. I did this since I
>> was new with ZFS and wanted to ensure that my data would survive an array
>> failu
Tried zpool replace. Unfortunately that takes me back into the cycle where as
soon as the resilver starts the system hangs, not even CAPS Lock works. When I
reset the system I have about a 10 second window to detach the device again to
get the system back before it freezes. Finally detached it s
Matthew C Aycock wrote:
> Well, I have a zpool created that contains four vdevs. Each vdev is a mirror
> of a T3B lun and a corresponding lun of a SE3511 brick. I did this since I
> was new with ZFS and wanted to ensure that my data would survive an array
> failure. It turns out that I was smart
Remind me again what a Veritas license costs. If you can't find RAM for
less than that you need to find a new VAR/disti
On 6/16/08, Chris Siebenmann <[EMAIL PROTECTED]> wrote:
> | I guess I find it ridiculous you're complaining about RAM when I can
> | purchase 4GB for under 50 dollars on a desk
| I guess I find it ridiculous you're complaining about RAM when I can
| purchase 4GB for under 50 dollars on a desktop.
|
| It's not like we're talking about a 500-dollar purchase.
'On a desktop' is an important qualification here. Server RAM is
more expensive, and then you get to multiply it by t
Brian H. Nelson wrote:
Andrius wrote:
That is true, but
# kill -HUP `pgrep vold`
usage: kill [ [ -sig ] id ... | -l ]
I think you already did this as per a previous message:
# svcadm disable volfs
As such, vold isn't running. Re-enable the service and you should be fine.
-Brian
Co
Andrius wrote:
>
> That is true, but
> # kill -HUP `pgrep vold`
> usage: kill [ [ -sig ] id ... | -l ]
>
>
I think you already did this as per a previous message:
# svcadm disable volfs
As such, vold isn't running. Re-enable the service and you should be fine.
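Something along these lines should bring it back (svcs just confirms the
service state afterwards):
# svcadm enable volfs
# svcs volfs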
-Brian
Bob Friesenhahn wrote:
On Mon, 16 Jun 2008, Andrius wrote:
After commenting
# kill -HUP 'pgrep vold'
kill: invalid id
It looks like you used forward quotes rather than backward quotes.
I did just try this procedure myself with my own USB drive and it works
fine.
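To spell out the difference (the POSIX $() form sidesteps the quote
confusion entirely):
# kill -HUP 'pgrep vold'     <- forward quotes pass the literal string
# kill -HUP `pgrep vold`     <- backquotes substitute the PID of vold
# kill -HUP $(pgrep vold)    <- equivalent to backquotes, easier to read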
Bob
Andrius wrote:
> Bob Friesenhahn wrote:
>> On Mon, 16 Jun 2008, Andrius wrote:
>>> Thanks! It works. Volume management is that thing that perhaps does not
>>> exist in ZFS and made disk management easier. Thanks to
>>> everybody for the advice.
>>>
>>> Volume Manager should be off before creati
On Mon, 16 Jun 2008, Andrius wrote:
> After commenting
> # kill -HUP 'pgrep vold'
> kill: invalid id
We're in the 21st century, so
# pkill -HUP vold
should work just fine.
--
Rich Teer, SCSA, SCNA, SCSECA
CEO,
My Online Home Inventory
URLs: http://www.rite-group.com/rich
http://ww
I guess I find it ridiculous you're complaining about RAM when I can
purchase 4GB for under 50 dollars on a desktop.
It's not like we're talking about a 500-dollar purchase.
On 6/16/08, Peter Tribble <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 16, 2008 at 5:20 PM, dick hoogendijk <[EMAIL PROTECTE
Bob Friesenhahn wrote:
On Mon, 16 Jun 2008, Andrius wrote:
Thanks! It works. Volume management is that thing that perhaps does not
exist in ZFS and made disk management easier. Thanks to
everybody for the advice.
Volume Manager should be off before creating pools on removable disks.
Proba
Why would you have to buy smaller disks? You can replace the 320's
with 1tb drives and after the last 320 is out of the raidgroup, it
will grow automatically.
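For instance, one disk at a time (device names here are only placeholders),
waiting for each resilver to complete before starting the next:
# zpool replace tank c1t0d0 c2t0d0
# zpool status tank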
On 6/16/08, Miles Nordin <[EMAIL PROTECTED]> wrote:
> Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
> happen with
On Mon, 16 Jun 2008 20:04:47 +0100
"Peter Tribble" <[EMAIL PROTECTED]> wrote:
> Hogwash. What is the reasonable minimum? I'm suspecting it's well
> over 2G.
2GB is perfectly alright.
> And as for being unable to get machines with less than 2G, just look
> at Sun's price list
I'm not saying you c
Hi guys,
we are proposing to a customer a couple of X4500s (24 TB) used as NAS
(i.e. NFS servers).
Both servers will contain the same files and should be accessed by
different clients at the same time (i.e. they should both be active).
So we need to guarantee that both X4500s contain the same file
On Mon, 16 Jun 2008, Andrius wrote:
> Thanks! It works. Volume management is that thing that perhaps does not exist in
> ZFS and made disk management easier. Thanks to everybody for
> the advice.
>
> Volume Manager should be off before creating pools on removable disks.
Probably it will work t
Paul Gress wrote:
Since Volume Management has control and eject didn't work, just turning
off Volume Management will do the trick.
# svcadm disable volfs
Now you can remove it safely.
Paul
Thanks! It works. Volume management is that thing that does not exist
in ZFS perhaps and made disk
Since Volume Management has control and eject didn't work, just turning
off Volume Management will do the trick.
# svcadm disable volfs
Now you can remove it safely.
Paul
On Mon, Jun 16, 2008 at 5:20 PM, dick hoogendijk <[EMAIL PROTECTED]> wrote:
> On Mon, 16 Jun 2008 16:21:26 +0100
> "Peter Tribble" <[EMAIL PROTECTED]> wrote:
>
>> The *real* common thread is that you need ridiculous amounts
>> of memory to get decent performance out of ZFS
>
> That's FUD. Older sys
This is actually quite a tricky fix, as obviously data and metadata have
to be relocated. Although there's been no visible activity in this bug,
there has been substantial design activity to allow the RFE to be easily
fixed.
Anyway, to answer your question, I would fully expect this RFE would
be f
Martin Winkelman wrote:
On Mon, 16 Jun 2008, Andrius wrote:
# eject /rmdisk/unnamed_rmdisk
No such file or directory
# eject /dev/rdsk/c5t0d0s0
/dev/rdsk/c5t0d0s0 is busy (try 'eject floppy' or 'eject cdrom'?)
# eject rmdisk
/vol/dev/rdsk/c5t0d0/unnamed_rmdisk: Inappropriate ioctl for device
#
On Mon, 16 Jun 2008, Andrius wrote:
> # eject /rmdisk/unnamed_rmdisk
> No such file or directory
> # eject /dev/rdsk/c5t0d0s0
> /dev/rdsk/c5t0d0s0 is busy (try 'eject floppy' or 'eject cdrom'?)
> # eject rmdisk
> /vol/dev/rdsk/c5t0d0/unnamed_rmdisk: Inappropriate ioctl for device
> # eject /vol/de
Martin Winkelman wrote:
On Mon, 16 Jun 2008, Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 19:10:18 +0100
Andrius <[EMAIL PROTECTED]> wrote:
/rmdisk/unnamed_rmdisk
umount /rmdisk/unnamed_rmdisk should do the trick
It's probably also mounted on /media depending on your solaris v
On Mon, Jun 16, 2008 at 6:42 PM, Steffen Weiberle
<[EMAIL PROTECTED]> wrote:
> Has anybody stored 1/2 billion small (< 50KB) files in a ZFS data store?
> If so, any feedback in how many file systems [and sub-file systems, if
> any] you used?
I'm not quite there yet, although I have a thumper with
On Mon, 16 Jun 2008, Andrius wrote:
> dick hoogendijk wrote:
>> On Mon, 16 Jun 2008 19:10:18 +0100
>> Andrius <[EMAIL PROTECTED]> wrote:
>>
>>> /rmdisk/unnamed_rmdisk
>> umount /rmdisk/unnamed_rmdisk should do the trick
>>
>> It's probably also mounted on /media depending on your solaris version
Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
happen within the next year?
My use-case is home user. I have 16 disks spinning, two towers of
eight disks each, exporting some of them as iSCSI targets. Four disks
are 1TB disks already in ZFS mirrors, and 12 disks are 180 - 320
dick hoogendijk wrote:
On Mon, 16 Jun 2008 19:10:18 +0100
Andrius <[EMAIL PROTECTED]> wrote:
/rmdisk/unnamed_rmdisk
umount /rmdisk/unnamed_rmdisk should do the trick
It's probably also mounted on /media depending on your solaris version.
If so, umount /media/unnamed_rmdisk unmounts the disk t
dick hoogendijk wrote:
On Mon, 16 Jun 2008 20:00:59 +0200
dick hoogendijk <[EMAIL PROTECTED]> wrote:
Unmount it if necessary (umount /dev/dsk/c5t0d0)
Should be /dev/dsk/c5t1d0 <--
Still the same
# umount /dev/rdsk/c5t1d0
umount: warning: /dev/rdsk/c5t1d0 not in mnttab
umount: /dev/rdsk/c5t
On Mon, 16 Jun 2008 19:10:18 +0100
Andrius <[EMAIL PROTECTED]> wrote:
> /rmdisk/unnamed_rmdisk
umount /rmdisk/unnamed_rmdisk should do the trick
It's probably also mounted on /media depending on your solaris version.
If so, umount /media/unnamed_rmdisk unmounts the disk too.
--
Dick Hoogendijk -
Try 'zpool replace'.
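Roughly like this, with pool and device names as placeholders for your own:
# zpool replace tank c1t2d0 c1t4d0
or, to resilver onto a replacement disk in the same slot:
# zpool replace tank c1t2d0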
- Eric
On Mon, Jun 16, 2008 at 10:57:40AM -0700, Peter Hawkins wrote:
> Thanks to the help in a previous post I have imported my pool. However I
> would appreciate some help with my next problem.
>
> This all arose because my motherboard failed while my zpool was resilverin
On Mon, 16 Jun 2008, Vincent Fox wrote:
> Also the array has SAN connectivity and caching and
> dual-controllers that just don't exist in the JBOD world.
As a clarification, you can convince your StorageTek 2540 to appear as
JBOD on the SAN. Then you obtain the SAN connectivity and caching and
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:54:04 +0100
Andrius <[EMAIL PROTECTED]> wrote:
That is true, disc is detected automatically. But
# umount /dev/rdsk/c5t0d0p0
umount: warning: /dev/rdsk/c5t0d0p0 not in mnttab
umount /dev/dsk/c5t0d0 should do it.
The same
# umount /dev/dsk/c5t0
Miles Nordin wrote:
"a" == Andrius <[EMAIL PROTECTED]> writes:
a> # umount /dev/rdsk/c5t0d0p0
maybe there is another problem, too, but this is wrong. type 'df -k'
as he suggested and use the device or pathname listed there.
This is end of df -k
/vol/dev/dsk/c5t0d0/unnamed_rmdisk:c
On Mon, 16 Jun 2008 20:04:08 +0200
dick hoogendijk <[EMAIL PROTECTED]> wrote:
> Should be /dev/dsk/c5t1d0 <--
Sh***t! No it should not. rmformat showed c5t0d0, didn't it?
So be careful. A typo is quickly made (see my msgs) ;-)
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ +
On Mon, 16 Jun 2008 20:00:59 +0200
dick hoogendijk <[EMAIL PROTECTED]> wrote:
> Unmount it if necessary (umount /dev/dsk/c5t0d0)
Should be /dev/dsk/c5t1d0 <--
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
Andrius wrote:
> Neal Pollack wrote:
>> Andrius wrote:
>>> dick hoogendijk wrote:
>>>
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius <[EMAIL PROTECTED]> wrote:
> zpool fails to create a pool on a USB disk (formatted as FAT32).
>
It's already been formatted.
On Mon, 16 Jun 2008 18:54:04 +0100
Andrius <[EMAIL PROTECTED]> wrote:
> That is true, disc is detected automatically. But
> # umount /dev/rdsk/c5t0d0p0
> umount: warning: /dev/rdsk/c5t0d0p0 not in mnttab
umount /dev/dsk/c5t0d0 should do it.
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http:
On Mon, 16 Jun 2008 18:38:11 +0100
Andrius <[EMAIL PROTECTED]> wrote:
> The device is on, but it is empty. It is not a stick, it is a mobile
> hard disk Iomega 160 GB.
Like Neal writes: check if the drive is mounted. "Do a df -h"
Unmount it if necessary (umount /dev/dsk/c5t0d0) and then do a zp
Thanks to the help in a previous post I have imported my pool. However I would
appreciate some help with my next problem.
This all arose because my motherboard failed while my zpool was resilvering
from a failed disk. I moved the disks to a new motherboard and imported the
pool with the help of
Neal Pollack wrote:
Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius <[EMAIL PROTECTED]> wrote:
zpool fails to create a pool on a USB disk (formatted as FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
The same stor
I'm not sure why people obsess over this issue so much. Disk is cheap.
We have a fair number of 3510 and 2540 on our SAN. They make RAID-5 LUNs
available to various servers.
On the servers we take RAID-5 LUNs from different arrays and ZFS mirror them.
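Sketch of the layout, with made-up device names, pairing one LUN from each
array in every mirror:
# zpool create tank mirror c6t0d0 c7t0d0 mirror c6t1d0 c7t1d0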
So if any array goes away we are still u
Has anybody stored 1/2 billion small (< 50KB) files in a ZFS data store?
If so, any feedback on how many file systems [and sub-file systems, if
any] you used?
How were ls times? Any insights into snapshots, clones, send/receive, or
restores in general?
How about NFS access?
Thanks
Steffen
Andrius wrote:
> dick hoogendijk wrote:
>
>> On Mon, 16 Jun 2008 18:10:14 +0100
>> Andrius <[EMAIL PROTECTED]> wrote:
>>
>>
>>> zpool fails to create a pool on a USB disk (formatted as FAT32).
>>>
>> It's already been formatted.
>> Try zpool create -f alpha c5t0d0p0
>>
>>
>
> T
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:23:35 +0100
Andrius <[EMAIL PROTECTED]> wrote:
The same story
# /usr/sbin/zpool create -f alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy
Are you sure you're not "on" that device?
Are you also sure your usb stick is called c5t0d0p0?
W
On Mon, 16 Jun 2008 18:23:35 +0100
Andrius <[EMAIL PROTECTED]> wrote:
> The same story
>
> # /usr/sbin/zpool create -f alpha c5t0d0p0
> cannot open '/dev/dsk/c5t0d0p0': Device busy
Are you sure you're not "on" that device?
Are you also sure your usb stick is called c5t0d0p0?
What does rmformat (
dick hoogendijk wrote:
> On Mon, 16 Jun 2008 18:10:14 +0100
> Andrius <[EMAIL PROTECTED]> wrote:
>
>> zpool fails to create a pool on a USB disk (formatted as FAT32).
>
> It's already been formatted.
> Try zpool create -f alpha c5t0d0p0
>
The same story
# /usr/sbin/zpool create -f alpha c5t0d
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius <[EMAIL PROTECTED]> wrote:
> zpool fails to create a pool on a USB disk (formatted as FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
Hi - I'm interested in your solution as my current ZFS/VMware experiment is
stalled.
I have a 6-disk SCSI rack (6 @ 9GB each) attached as raw disks to the VM
(Workstation 6), and have been getting ZFS pool corruption on reboot. VMware
is allowing the Solaris guest to write a disklabel that is (
Hi,
zpool fails to create a pool on a USB disk (formatted as FAT32).
# /usr/sbin/zpool create alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy
or
# /usr/sbin/zpool create alpha /dev/rdsk/c5t0d0p0
cannot use '/dev/rdsk/c5t0d0p0': must be a block device or regular file
What is gonna
Added a vdev using RDM and that seems to be stable over reboots;
the pools based on a virtual disk now also seem to be stable after
doing an export and import -f.
I'm doing some simple testing of ZFS block reuse and was wondering when
deferred frees kick in. Is it on some sort of timer to ensure data
consistency? Does another routine call it? Would something as simple as
sync(1M) get the free block list written out so future allocations could
use the sp
On Mon, 16 Jun 2008 16:21:26 +0100
"Peter Tribble" <[EMAIL PROTECTED]> wrote:
> The *real* common thread is that you need ridiculous amounts
> of memory to get decent performance out of ZFS
That's FUD. Older systems might not have enough memory, but newer ones
can hardly be bought with less the
Well, I have a zpool created that contains four vdevs. Each vdev is a mirror of
a T3B lun and a corresponding lun of a SE3511 brick. I did this since I was new
with ZFS and wanted to ensure that my data would survive an array failure. It
turns out that I was smart for doing this :)
I had a hard
The answer is:
# zpool import
(which will pick up the zpool on the HDD and list its name and id)
# zpool import rpool
(rpool is the default OpenSolaris zpool)
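If you're working from the LiveCD and want the pool mounted somewhere other
than its usual mount points, an alternate root helps (/mnt is just an example):
# zpool import -R /mnt rpool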
On Mon, Jun 16, 2008 at 12:05 PM, Matthew Gardiner
<[EMAIL PROTECTED]> wrote:
>
> I think that if you notice the common thread: those who run SPARCs
> are having performance issues vs. those who are running x86.
Not that simple. I'm seeing performance issues on x86 just as
much as on SPARC. My SPARC
On Mon, 16 Jun 2008, Kaiwai Gardiner wrote:
>
> I think that if you notice the common thread: those who run SPARCs
> are having performance issues vs. those who are running x86. I know
Especially those who run SPARCs with hardly any memory installed. :-)
Hobbyists are likely to test OpenSolaris
I am seeing the same problem using a separate virtual disk for the pool.
This is happening with Solaris 10 U3, U4 and U5.
SCSI reservations are known to be an issue with clustered Solaris:
http://blogs.sun.com/SC/entry/clustering_solaris_guests_that_run
I wonder if this is the same problem. Maybe w
Peter Hawkins wrote:
> Can zpool on U3 be patched to V4? I've applied the latest cluster and it
> still seems to be V3.
>
>
Yes, you can patch your way up to the Sol 10 U4 kernel (or even U5
kernel) which will give you zpool v4 support. The particular patch you
need is 120011-14 or 120012-14
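After the kernel patch is on, something like this should confirm and apply
the pool upgrade (the pool name is a placeholder):
# zpool upgrade           <- reports the ZFS version of each pool
# zpool upgrade mypool    <- upgrades the named pool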
Hi,
How can I access a ZFS partition on the HDD if I boot up via the 2008.05
LiveCD?
I installed opensolaris on my computer. But the system has gone screwy and will
no longer boot. I have to retrieve some files from the filesystem...
Cheers!
> I think that if you notice the common thread: those who run SPARCs
> are having performance issues vs. those who are running x86.
I would not say that. For example, my T1000 with 2GB RAM had fair
performance. Now that it has 16GB RAM it has improved a lot. :-)
Also, I would not call it "perf
> > I've got a couple of identical old sparc boxes
> running nv90 - one
> > on ufs, the other zfs. Everything else is the same.
> (SunBlade
> > 150 with 1G of RAM, if you want specifics.)
> >
> > The zfs root box is significantly slower all
> around. Not only is
> > initial I/O slower, but it seems
Hi,
I've got an external hard disk and I've done the stuff with zpool - so
it's all working.
The problem I have, however, is whether it is possible to actually set
it up so that ZFS devices mount just like CDs and drives formatted as
FAT.
One thing I should mention on this is that I've had _very_ bad
experience with using single-LUN ZFS filesystems over FC.
that is, using an external SAN box to create a single LUN, export that
LUN to a FC-connected host, then creating a pool as follows:
zpool create tank
It works fine, up unt
Hello Richard,
Thursday, June 12, 2008, 6:54:29 AM, you wrote:
RE> Oracle bails out after 10 minutes (ORA-27062) ask me how I know... :-P
So how do you know?
--
Best regards,
Robert Milkowski    mailto:[EMAIL PROTECTED]
http://