Some of us are still using Solaris 10, since it is the version of
Solaris officially released and supported by Sun. The 'filebench'
software from SourceForge does not seem to install or work on Solaris
10: the 'pkgadd' command refuses to recognize the package, even when
it is set to Solaris 2.4 mode.
jason wrote:
> -bash-3.2$ zfs share tank
> cannot share 'tank': share(1M) failed
> -bash-3.2$
>
> how do i figure out what's wrong?
>
>
Create a file system and share that.
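For example (a sketch; the dataset name 'tank/data' is illustrative,
not from the original thread):
-bash-3.2$ zfs create tank/data
-bash-3.2$ zfs set sharenfs=on tank/data
Setting the sharenfs property shares the new file system immediately,
so a separate 'zfs share' is usually unnecessary.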
Ian
On Sat, 16 Feb 2008, Joel Miller wrote:
> Here is how you can tell the array to ignore cache sync commands and
> the force unit access bits...(Sorry if it wraps..)
Thanks to the kind advice of yourself and Mertol Ozyoney, there is a
huge boost in write performance:
Was: 154 MB/second
Now: 279 MB/second
Yes, it does replicate data between controllers. Usually that slows things
down a lot, especially in write-heavy environments. If you properly tune
ZFS, you may not need this feature for consistency...
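One host-side tuning that often comes up in this context (an
illustration, not something prescribed here, and only appropriate when
the array cache is battery-backed, since it stops ZFS from issuing
SCSI cache-flush commands) is the zfs_nocacheflush tunable. In
/etc/system:
set zfs:zfs_nocacheflush = 1
and reboot for it to take effect.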
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
On Sat, 16 Feb 2008, Mertol Ozyoney wrote:
>
> Please try to distribute LUNs between controllers and try to benchmark by
> disabling cache mirroring. (it's different than disabling cache)
By the term "disabling cache mirroring" are you talking about "Write
Cache With Replication Enabled" in the
On Sat, 16 Feb 2008, Richard Elling wrote:
> "ls -l" shows the length. "ls -s" shows the size, which may be
> different than the length. You probably want size rather than du.
That is true. Unfortunately 'ls -s' displays in units of disk blocks
and does not honor the 'h' option to produce human-readable output.
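A workaround sketch, assuming 'ls -s' on Solaris reports 512-byte
blocks (the filename is illustrative):
ls -s somefile | awk '{ printf "%.1f MB\n", $1 * 512 / 1048576 }'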
On Sat, 16 Feb 2008, Peter Tribble wrote:
> Agreed. My 2530 gives me about 450MB/s on writes and 800 on reads.
> That's zfs striped across 4 LUNs, each of which is hardware raid-5
> (24 drives in total, so each raid-5 LUN is 5 data + 1 parity).
Is this single-file bandwidth, or multiple-file/multiple-thread bandwidth?
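For comparison, a quick way to measure both cases (a sketch; the mount
point and sizes are illustrative, and /dev/zero gives misleading
numbers if compression is enabled on the dataset):
# single stream
time dd if=/dev/zero of=/tank/fs/t0 bs=1024k count=4096
# four concurrent streams
for i in 1 2 3 4; do dd if=/dev/zero of=/tank/fs/t$i bs=1024k count=1024 & done; wait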
I was indeed using the new CIFS server. I tried to get Samba going last
night but it wasn't working out for whatever reason, and CIFS wouldn't work
with my Windows Vista x64. I just BFUed to build 81, which fixed my Vista
connection problems; going to see if it fixes read/write too.
Sam
On Feb 16, 2008, at 06:43, Ross wrote:
> It may not be relevant, but I've seen ZFS add weird delays to
> things too. I deleted a file to free up space, but when I checked
> no more space was reported. A second or two later the space appeared.
This also happens on FreeBSD's UFS if you have Soft Updates enabled.
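The delay is easy to observe from a shell (a sketch; the path is
illustrative):
rm /tank/fs/bigfile
df -k /tank/fs        # freed space may not be reported yet
sleep 2; df -k /tank/fs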
Bob,
Here is how you can tell the array to ignore cache sync commands and the force
unit access bits...(Sorry if it wraps..)
On a Solaris CAM install, the 'service' command is in "/opt/SUNWsefms/bin"
To read the current settings:
service -d arrayname -c read -q nvsram region=0xf2 host=0x00
sav
Bob Friesenhahn wrote:
> I have a script which generates a file and then immediately uses 'du
> -h' to obtain its size. With Solaris 10 I notice that this often
> returns an incorrect value of '0' as if ZFS is lazy about reporting
> actual disk use. Meanwhile, 'ls -l' does report the correct size.
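If the script needs the byte count immediately, one workaround sketch
(assuming the length reported by 'ls -l' is acceptable; the variable
is illustrative, since Solaris 10 has no stat(1)):
size=$(ls -l "$file" | awk '{print $5}')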
Hi Tim,
The 2540 controller can achieve a maximum of 250 MB/sec on writes with the
first 12 drives, so you are pretty close to maximum throughput already.
RAID-5 can be a little slower.
Please try to distribute LUNs between controllers and try to benchmark by
disabling cache mirroring. (it's different than disabling cache)
On Feb 15, 2008 10:20 PM, Luke Lonergan <[EMAIL PROTECTED]> wrote:
> Hi Bob,
>
> On 2/15/08 12:13 PM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote:
>
> > I only managed to get 200 MB/s write when I did RAID 0 across all
> > drives using the 2540's RAID controller and with ZFS on top.
>
> Ridiculousl
Are you using the new CIFS server, or Samba? If you're using CIFS, it might
be worth giving Samba a try. Enabling Samba, plus the SWAT web management
interface, is now just a case of:
svcadm enable samba
svcadm enable swat
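To confirm both services came online, the standard SMF status command works:
svcs samba swat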
http://blogs.sun.com/timthomas/entry/samba_and_swat_in_solaris
It may not be relevant, but I've seen ZFS add weird delays to things too. I
deleted a file to free up space, but when I checked no more space was reported.
A second or two later the space appeared.
And I'm also seeing zpool status report that drives are ok when one is
disconnected. I have to