The intended use is NFS storage to back some VMware servers running a
range of different VMs, including Exchange, Lotus Domino, SQL Server
and Oracle. :-) It's a very random workload, and all the research I've
done points to mirroring as the better option for total IOPS. The se
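(As an aside, a minimal sketch of the layout that reasoning points to, with placeholder pool and device names: a stripe of mirrors lets each mirror vdev service random reads independently, which is where the IOPS advantage over raidz comes from.)

# stripe of two 2-way mirrors; add more mirror pairs to scale random IOPS
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
zpool status tank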
On Wed, Aug 26, 2009 at 12:09 AM, Duncan Groenewald <
dagroenew...@optusnet.com.au> wrote:
> That was a typo, missing an s - I copied the incorrect line from the
> terminal...
>
> sbdadm create-lu /dev/zvol/rdsk/storagepool/backups/isci/macbook_dg
>
> Blog is here...
>
> http://www.cuddletech.com/
On Wed, Aug 26, 2009 at 12:27 AM, Tristan Ball <
tristan.b...@leica-microsystems.com> wrote:
> The remaining drive would only have been flagged as dodgy if the bad
> sectors had been found, hence my comments (and general best practice) about
> data scrubs being necessary. While I agree it's poss
On Wed, Aug 26, 2009 at 12:22 AM, thomas wrote:
> > I'll admit, I was cheap at first and my
> > fileserver right now is consumer drives. You
> > can bet all my future purchases will be of the enterprise grade.
> And
> > guess what... none of the drives in my array are less than 5 years old,
> s
The remaining drive would only have been flagged as dodgy if the bad
sectors had been found, hence my comments (and general best practice)
about data scrubs being necessary. While I agree it's quite possible
that the enterprise drive would flag errors earlier, I wouldn't
necessarily bet on it. Ju
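(A minimal sketch of scheduling those scrubs, assuming a placeholder pool name; run one regularly so latent bad sectors are found before a resilver has to rely on the surviving side of a mirror.)

# one-off scrub, then check for repaired or unrecoverable errors
zpool scrub tank
zpool status -v tank

# example root crontab entry for a weekly scrub, Sundays at 03:00
0 3 * * 0 /usr/sbin/zpool scrub tank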
> I'll admit, I was cheap at first and my
> fileserver right now is consumer drives. You
> can bet all my future purchases will be of the enterprise grade. And
> guess what... none of the drives in my array are less than 5 years old, so
> even
> if they did die, and I had bought the enterprise v
That was a typo, missing an s - I copied the incorrect line from the terminal...
sbdadm create-lu /dev/zvol/rdsk/storagepool/backups/isci/macbook_dg
Blog is here...
http://www.cuddletech.com/blog/pivot/entry.php?id=968
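(For readers following along, a sketch of the usual COMSTAR sequence; it is not a quote from the blog post, and the zvol name, size and GUID are placeholders.)

# create a zvol to export
zfs create -V 50G storagepool/backups/iscsi/macbook_dg

# register it as a SCSI logical unit and note its GUID
sbdadm create-lu /dev/zvol/rdsk/storagepool/backups/iscsi/macbook_dg
stmfadm list-lu -v

# expose the LU to all initiators (or restrict it with host/target groups)
stmfadm add-view <GUID-from-stmfadm-list-lu>

# create the iSCSI target and verify
itadm create-target
itadm list-target -v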
On Tue, Aug 25, 2009 at 11:38 PM, Tristan Ball <
tristan.b...@leica-microsystems.com> wrote:
> Not upset as such :-)
>
> What I'm worried about is that time period where the pool is resilvering to
> the hot spare. For example: one half of a mirror has failed completely, and
> the mirror is being r
Not upset as such :-)
What I'm worried about is that time period where the pool is resilvering to
the hot spare. For example: one half of a mirror has failed completely,
and the mirror is being rebuilt onto the spare - if I get a read error
from the remaining half of the mirror, then I've lost dat
On Tue, Aug 25, 2009 at 11:14 PM, Duncan Groenewald <
dagroenew...@optusnet.com.au> wrote:
> Oops I left that bit out...
>
> dun...@osshsrvr:~# itadm create-target
> Target iqn.1986-03.com.sun:02:7af8d188-b1e8-4d98-fee1-f4da18bbe46f
> successfully created
> dun...@osshsrvr:~# itadm list-target -v
Oops I left that bit out...
dun...@osshsrvr:~# itadm create-target
Target iqn.1986-03.com.sun:02:7af8d188-b1e8-4d98-fee1-f4da18bbe46f successfully
created
dun...@osshsrvr:~# itadm list-target -v
TARGET NAME                                       STATE    SESSIONS
iqn.1986-03.com.sun:02:
On Tue, Aug 25, 2009 at 10:56 PM, Tristan Ball <
tristan.b...@leica-microsystems.com> wrote:
> I guess it depends on whether or not you class the various "Raid
> Edition" drives as "consumer"? :-)
>
> My one concern with these RE drives is that because they will return
> errors early rather than r
On Tue, Aug 25, 2009 at 10:54 PM, Duncan Groenewald <
dagroenew...@optusnet.com.au> wrote:
> OK, I found a blog on COMSTAR and tried creating the iSCSI target using the
> "new" method...
> Seemed to be ok until sbdadm failed - see below...any ideas?
>
> dun...@osshsrvr:~# itadm create-target
> Tar
I guess it depends on whether or not you class the various "Raid
Edition" drives as "consumer"? :-)
My one concern with these RE drives is that, because they return errors
early rather than retrying, they may fault when a "normal" consumer
drive would have returned the data eventually. If
OK, I found a blog on COMSTAR and tried creating the iSCSI target using the
"new" method...
Seemed to be ok until sbdadm failed - see below...any ideas?
dun...@osshsrvr:~# itadm create-target
Target iqn.1986-03.com.sun:02:7af8d188-b1e8-4d98-fee1-f4da18bbe46f successfully
created
dun...@osshsrvr:
On Tue, Aug 25, 2009 at 10:38 PM, Duncan Groenewald <
dagroenew...@optusnet.com.au> wrote:
> Ok, I just completed the upgrade to snv 118 and everything still works
> except the iSCSI is still sloowww...
>
> It is still unclear to me what the COMSTAR iscsi command set is vs the
> older method !!
>
Ok, I just completed the upgrade to snv_118 and everything still works except
the iSCSI is still sloowww...
It is still unclear to me what the COMSTAR iSCSI command set is vs the older
method!!
I presume one cannot use ZFS commands, so could someone point me to a
description of the new way of
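(For contrast, a hedged sketch of the older method, with a placeholder dataset name: the shareiscsi property only drives the legacy iscsitgt daemon and has no effect on a COMSTAR target, which is configured with the sbdadm/stmfadm/itadm chain instead.)

# legacy (iscsitgt) method only
zfs set shareiscsi=on storagepool/somevol
iscsitadm list target -v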
Are there *any* consumer drives that don't go unresponsive for a long time while
trying to recover from an error? In my experience they all behave this way, which
has been a nightmare on hardware RAID controllers.
I am going to upgrade to snv_118 and see what happens.
In the meantime, would you mind explaining what the new way of configuring
targets is vs the old way?
Thanks
Duncan
On Tue, Aug 25, 2009 at 5:07 AM, Darren J Moffat wrote:
> A reasonably safe and simple way to do it is like this, however this
> assumes you have sufficient space.
>
> First set the new value for compression.
Can this be used to enable normalization on a filesystem too? I have a
filesystem that sh
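(A sketch of the recompression approach Darren describes above, assuming sufficient free space; the dataset names and the gzip choice are placeholders, and this may differ from his exact steps.)

# new writes pick up the new setting immediately; existing blocks do not
zfs set compression=gzip tank/data

# existing files are only recompressed when rewritten, e.g. by copying them
# into a fresh filesystem that already has the property, then swapping names
zfs create -o compression=gzip tank/data_new
rsync -a /tank/data/ /tank/data_new/      # or cp -rp, tar, etc.
zfs rename tank/data tank/data_old
zfs rename tank/data_new tank/data
# after verifying the copy: zfs destroy tank/data_old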
On an OpenSolaris 2009.06 system I have a zpool of 12 x WD10EACS disks plus 2 spares.
One disk is reported as Faulted due to corrupted data.
The drive tests ok, but ZFS won't let me reuse it.
The drive passes the manufacturer's diagnostic tests, and doesn't show issues
with hdat2 diags or SMART.
Zeroing and
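(Not advice specific to this pool, just a sketch of the commands usually involved when putting a drive that tests healthy back into service; pool and device names are placeholders.)

# clear the fault and see whether the pool will resilver the disk
zpool clear tank c7t5d0

# or relabel it by replacing the device with itself
zpool replace -f tank c7t5d0
zpool status -v tank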
Hi Dick,
I'm testing root pool recovery from remotely stored snapshots rather
than from files.
I can send the snapshots to a remote pool easily enough.
The problem I'm having is getting the snapshots back while the
local system is booted from the miniroot to simulate a root pool
recovery. I don
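(A hedged sketch of one way this can work once the miniroot has network access; hostnames, pool and snapshot names are placeholders, and the usual root-pool follow-up steps such as installboot and setting the bootfs property are omitted.)

# from the miniroot: recreate the root pool, then pull the stream over ssh
zpool create -f rpool c0t0d0s0
ssh admin@backuphost "zfs send -R backuppool/rpool@backup" | zfs receive -Fd rpool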
Matthew Stevenson wrote:
It does lead to another question though: is there a way to see how much data is
shared between any two given snapshots?
Only if these are the only two snapshots (and no clones), as then the
difference between the total space used by all snapshots (the two of them) and the
sum of t
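(As an illustration of why this has to be inferred: the per-snapshot accounting only reports space unique to each snapshot, plus an aggregate for all of them. Dataset names below are placeholders.)

# space unique to each snapshot
zfs list -r -t snapshot -o name,used,referenced tank/data
# total space consumed by all snapshots of the dataset
zfs get usedbysnapshots tank/data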
On Tue, Aug 25, 2009 at 06:05:16AM -0500, Albert Chin wrote:
> [[ snip snip ]]
>
> After the resilver completed:
> # zpool status tww
> pool: tww
> state: DEGRADED
> status: One or more devices has experienced an error resulting in data
> corruption. Applications may be affected.
> a
On Tue, Aug 25, 2009 at 12:02:51PM +1000, LEES, Cooper wrote:
> Hi Duncan,
>
> I also do the same with my Mac for Time Machine and get the same WOEFUL
> performance to my x4500 filer.
>
> I have mounted iSCSI zvols on a Linux machine and it performs as expected
> (50 Mbytes a second) as opposed to
2009/8/25 Haudy Kazemi :
> Christian Wattengård wrote:
>>
>> Hi.
>> I recently transferred a zfs drive containing one pool from a FreeNAS
>> 0.7RC1 box to a FreeNAS 0.7RC2 box.
>>
>> I had earlier reinstalled the first box several times, and managed to get
>> the pool back just using the FreeNAS gu
Did you ever get a response from support?
On 08/25/09 05:29 AM, Gary Gendel wrote:
I have a 5 x 500 GB disk RAID-Z pool that has been producing checksum errors right
after upgrading SXCE to build 121. They seem to be randomly occurring on all 5
disks, so it doesn't look like a disk failure situation.
Repeatedly running a scrub on the p
Hello,
On 25 aug 2009, at 14.29, Gary Gendel wrote:
I have a 5 x 500 GB disk RAID-Z pool that has been producing checksum
errors right after upgrading SXCE to build 121. They seem to be
randomly occurring on all 5 disks, so it doesn't look like a disk
failure situation.
Repeatedly runnin
On Tue, Aug 25, 2009 at 09:05:47AM -0400, Peter Cudhea wrote:
> The ZFS set shareiscsi=on mechanism is only used with the iscsitgt and not
> with COMSTAR iscsi/target. Since you shifted to using iscsi/target, it
> should not be working for you now.
> Could it be that somehow you ended up with
The 'zfs set shareiscsi=on' mechanism is only used with iscsitgt and
not with the COMSTAR iscsi/target service. Since you shifted to using
iscsi/target, it should not be working for you now.
Could it be that somehow you ended up with both kinds of target
(iscsitgt and COMSTAR iscsi/target) running at
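(A sketch of how to check, since the two frameworks run as separate SMF services; the FMRIs below are the usual ones but worth confirming with the first command.)

svcs -a | grep -i iscsi
svcs svc:/system/iscsitgt:default          # legacy target daemon
svcs svc:/network/iscsi/target:default     # COMSTAR target service

# disable whichever one is not intended to be in use, for example:
svcadm disable svc:/system/iscsitgt:default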
I have a 5 x 500 GB disk RAID-Z pool that has been producing checksum errors right
after upgrading SXCE to build 121. They seem to be randomly occurring on all 5
disks, so it doesn't look like a disk failure situation.
Repeatedly running a scrub on the pool randomly repairs between 20 and a few
Jérôme Warnier wrote:
Hi guys,
You can enable compression on a (ZFS) filesystem that is already filled
with files. However, it is my understanding that the previously existing
files are not compressed, only the files added afterwards.
Is there any way to force a "recompression" of the existing fil
Hi guys,
You can enable compression on a (ZFS) filesystem that is already filled
with files. However, it is my understanding that the previously existing
files are not compressed, only the files added afterwards.
Is there any way to force a "recompression" of the existing files not
already compres
Mmm - OK, I think I managed to start things by using the "itadm create-target"
command. Anyone's guess as to how this knows to share the ZFS iSCSI shares, but
it seems to do so...
Anyway, performance is still really bad!!! Perhaps it is the GlobalSAN OS X
initiator that is not good!
Duncan Groenewald wrote:
Thanks - what is the chance of something breaking if I do this?
It should work just fine but do read the release notes here:
http://mail.opensolaris.org/pipermail/opensolaris-announce/2009-July/002240.html
Plus you will be able to go back and boot your snv_111 build
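(A sketch of what going back looks like, since image-update creates a new boot environment and leaves the old one intact; the BE name below is a placeholder for whatever beadm list reports.)

beadm list
beadm activate opensolaris-snv111    # placeholder BE name
init 6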
Thanks - what is the chance of something breaking if I do this?
BTW, I fixed the /iscsi/target problem by starting the stmf service...
Still, I don't see the iSCSI targets listed, and when I run zfs set shareiscsi=on
I get an error complaining that the /iscsitgt service is not running?
I presume the ZFS iscsi commands don't work with the /iscsi/target servi
$ cat /etc/release
Solaris Express Community Edition snv_105 X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 15 December 2008
$ zpool status tww
pool: tww
s
Duncan Groenewald wrote:
Is there an easy way to update to snv_118 ? I am using 2009.06 (snv_111).
# pkg set-authority -O http://pkg.opensolaris.org/dev opensolaris.org
# pkg image-update
# init 6
--
Darren J Moffat
> On Tue, Aug 25, 2009 at 12:02:51PM +1000, LEES, Cooper wrote:
> > Hi Duncan,
> >
> > I also do the same with my Mac for Time Machine and get the same WOEFUL
> > performance to my x4500 filer.
> >
> > I have mounted iSCSI zvols on a Linux machine and it performs as expected
> > (50 mbyt
On Tue, Aug 25, 2009 at 12:02:51PM +1000, LEES, Cooper wrote:
> Hi Duncan,
>
> I also do the same with my Mac for Time Machine and get the same WOEFUL
> performance to my x4500 filer.
>
> I have mounted iSCSI zvols on a Linux machine and it performs as expected
> (50 Mbytes a second) as oppose
Hi Volker,
On Fri, Aug 21, 2009 at 5:42 PM, Volker A. Brandt wrote:
>> > Can you actually see the literal commands? A bit like MySQL's 'show
>> > create table'? Or are you just interpreting the output?
>>
>> Just interpreting the output.
>
> Actually you could see the commands on the "old" serv