Howdy.
My plan:
I'm building an ESX iSCSI-target/NFS serving box.
I'm planning on using an Areca RAID card, as I've heard mixed things about
hot-swapping with Solaris/ZFS, and I'd like the stability of hardware RAID.
My question is this: I'll be using eight 750GB SATA drives, and I'm trying to
ID 107833 kern.notice] Sense Key: Illegal_Request
Aug 2 14:46:06 exodus scsi: [ID 107833 kern.notice] ASC: 0x24 (invalid field in cdb), ASCQ: 0x0, FRU: 0x0
Any insights would be greatly appreciated.
Thanks
Matt
Ross wrote:
> What does zpool status say?
zpool status says everything's fine. I've run another scrub and it hasn't
found any errors, so can I just consider this harmless? It's filling up
my log quickly though.
thanks
Matt
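(For the archives: besides a scrub, a couple of other places worth looking when the log fills up like that; the pool name 'tank' is just a placeholder.)

zpool status -v tank     # -v lists any files with persistent data errors
iostat -En               # per-device soft/hard/transport error counters
fmdump -eV | tail -60    # the FMA error telemetry behind those kern.notice entries

(If all three stay clean, the read_defect_data complaints are most likely just noise from a command the drive doesn't support, which fits the "Informational" severity noted later in the thread.)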
Matt Harrison wrote:
> Ross wrote:
>> What does zpool status say?
>
> zpool status says everything's fine. I've run another scrub and it hasn't
> found any errors, so can I just consider this harmless? It's filling up
> my log quickly though.
>
I've just
g corrupted thanks to ZFS. The thing I'm worried about is if the
entire batch is failing slowly and will all die at the same time.
Hopefully some ZFS/hardware guru can comment on this before the world
ends for me :P
Thanks
Matt
Miles Nordin wrote:
>>>>>> "mh" == Matt Harrison <[EMAIL PROTECTED]> writes:
>
> mh> I'm worried about is if the entire batch is failing slowly
> mh> and will all die at the same time.
>
> If you can download smartctl, you c
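(Filling in that suggestion with a sketch: smartmontools isn't part of the base OS, and the device path and -d argument vary by controller, so adjust as needed.)

smartctl -a -d sat /dev/rdsk/c1t0d0s0        # full SMART health/attribute/error report
smartctl -t long -d sat /dev/rdsk/c1t0d0s0   # start a long surface self-test; read results later with -a

(Reallocated or pending sector counts creeping up across several drives of the same batch would be the thing to watch for.)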
Johan Hartzenberg wrote:
> On Sun, Aug 3, 2008 at 8:48 PM, Matt Harrison
> <[EMAIL PROTECTED]>wrote:
>
>> Miles Nordin wrote:
>>>>>>>> "mh" == Matt Harrison <[EMAIL PROTECTED]> writes:
>>> mh> I'm worried about is
Richard Elling wrote:
> Matt Harrison wrote:
>> Aug 2 14:46:06 exodus Error for Command: read_defect_data
>> Error Level: Informational
>>
>
> key here: "Informational"
>
>> Aug 2 14:46:06 exodus scsi: [ID 107833 kern.notice]Reques
Unplugging a drive (actually pulling the cable out) does not simulate a
drive failure, it simulates a drive getting unplugged, which is
something the hardware is not capable of dealing with.
If your drive were to suffer something more realistic, along the lines
of how you would normally expect a drive to die, then the system should
cope with it a whole lot better.
Unfortunately, hard drives don't come with a big button saying "simulate
head crash now" or "make me some bad sectors" so it's going to be
difficult to simulate those failures.
All I can say is that unplugging a drive yourself will not simulate a
failure, it merely causes the disk to disappear. Dying or dead disks
will still normally be able to communicate with the driver to some
extent, so they are still "there".
If you were using dedicated hotswappable hardware, then I wouldn't
expect to see the problem, but AFAIK off the shelf SATA hardware doesn't
support this fully, so unexpected results will occur.
I hope this has been of some small help, even just to explain why the
system didn't cope as you expected.
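If you do want to watch ZFS handle the more realistic failure modes (bad
sectors, silent corruption), one harmless way is a throwaway pool on
file-backed vdevs that you damage on purpose. A rough sketch; the paths and
pool name are made up:

mkfile 256m /var/tmp/vdev1 /var/tmp/vdev2
zpool create testpool mirror /var/tmp/vdev1 /var/tmp/vdev2
dd if=/dev/urandom of=/testpool/junk bs=1024k count=100   # put some data in the pool
sync
# scribble over part of one side of the mirror (skip the front so the vdev labels survive)
dd if=/dev/urandom of=/var/tmp/vdev2 bs=1024k seek=16 count=32 conv=notrunc
zpool scrub testpool
zpool status -v testpool   # expect CKSUM errors on vdev2, repaired from the other side
zpool destroy testpool && rm /var/tmp/vdev1 /var/tmp/vdev2

Not the same as a head crash, but it exercises the same detection and
self-healing paths.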
Matt
Anyone know of a SATA and/or SAS HBA with battery backed write cache?
Seems like using a full-blown RAID controller and exporting each individual
drive back to ZFS as a single LUN is a waste of power and $$$. Looking for any
thoughts or ideas.
Thanks.
-Matt
t; drive (which isn't really broken, but the zpool would be inconsistent
at some level with the other "online" drives) and get back to "full speed"
quickly? or will I always have to wait until one of the servers resilvers
itself (from scratch?), and re-replicates its
exposes itself as a system drive (a la iRAM,
but PCIe not SATA 150) for slog and read cache... say $150 price point?
heehee... there is an SSD based option out there, but it has 80GB available,
and starts at $2500 (overkill for my requirement)
-Matt
nce that some of the data would be (obviously) stale, my concern is whether
or not ZFS stayed consistent, or does AVS know how to "bundle" ZFS's atomic
writes properly?
-Matt
network or
other problem?
Grateful for any ideas
Thanks
Matt
working and transfers to/from
the server seem ok except when there's video being moved.
I will do some testing and see if I can come up with a more definite
reason for the performance problems.
Thanks
Matt
ment cable).
I will make up some new cables, and also place an order for an Intel
Pro/100, as they are supposed to be really reliable.
Thanks
Matt
On Sat, Oct 25, 2008 at 06:50:46PM -0700, Nigel Smith wrote:
> Hi Matt
> What chipset is your PCI network card?
> (obviously, it's not Intel, but what is it?)
> Do you know which driver the card is using?
I believe it's some sort of Realtek (8139 probably). It's coming up as r
e.
It's a little too large to send in a mail so I've posted it at
http://distfiles.genestate.com/_1_20081027010354.zip
> Another thing you could try is measuring network performance
> with a utility called 'iperf'.
Thanks for pointing this program out, I've just
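(For the archives, the basic iperf run is just a listener on one end and a client on the other; the host name below is made up.)

iperf -s                        # on the Solaris server
iperf -c fileserver -t 30 -i 1  # on the Windows client: 30-second TCP test, per-second report

(If iperf shows full wire speed in both directions, the problem is more likely CIFS or driver behaviour than cabling.)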
Nigel Smith wrote:
> Ok on the answers to all my questions.
> There's nothing that really stands out as being obviously wrong.
> Just out of interest, what build of OpenSolaris are you using?
Damn forgot to add that, I'm running SXCE sn
On Mon, Oct 27, 2008 at 06:18:59PM -0700, Nigel Smith wrote:
> Hi Matt
> Unfortunately, I'm having problems un-compressing that zip file.
> I tried with 7-zip and WinZip reports this:
>
> skipping _1_20081027010354.cap: this file was compressed using an unknown
On Tue, Oct 28, 2008 at 05:30:55PM -0700, Nigel Smith wrote:
> Hi Matt.
> Ok, got the capture and successfully 'unzipped' it.
> (Sorry, I guess I'm using old software to do this!)
>
> I see 12840 packets. The capture is a TCP conversation
> between two host
On Tue, Oct 28, 2008 at 05:45:48PM -0700, Richard Elling wrote:
> I replied to Matt directly, but didn't hear back. It may be a driver issue
> with checksum offloading. Certainly the symptoms are consistent.
> To test with a workaround see
> http://bugs.opensolaris.org/v
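(If memory serves, the workaround being referred to is disabling hardware checksum offload globally and rebooting; double-check the exact tunable against that bug report before relying on it.)

* /etc/system: disable hardware checksum offload, then reboot
set ip:dohwcksum = 0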
On Wed, Oct 29, 2008 at 10:01:09AM -0700, Nigel Smith wrote:
> Hi Matt
> Can you just confirm if that Ethernet capture file, that you made available,
> was done on the client, or on the server. I'm beginning to suspect you
> did it on the client.
That capture was done from the
On Wed, Oct 29, 2008 at 05:32:39PM -0700, Nigel Smith wrote:
> Hi Matt
> In your previous capture, (which you have now confirmed was done
> on the Windows client), all those 'Bad TCP checksum' packets sent by the
> client,
> are explained, because you must be do
Nigel Smith wrote:
> Hi Matt
> Well this time you have filtered out any SSH traffic on port 22 successfully.
>
> But I'm still only seeing half of the conversation!
Grr, this is my day. I think I know what the problem was... user error, as
I'm not used to snoop.
> I see
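(A note for anyone else capturing with snoop: capture on one interface, write to a file, and keep the filter symmetric; filtering on 'from' or 'to' a single host is the usual way to end up with half a conversation. The interface name and client address below are invented.)

snoop -d rge0 -o /tmp/cifs.cap host 192.168.0.10 and port 445 and not port 22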
thread to cifs-discuss and provide them some
captures, maybe they have a clue why this might happen.
I'm also going to switch back to the snv_95 BE I still have on the server,
it's possible it might have some effect.
Thanks
Matt
On Fri, Oct 31, 2008 at 11:52:09AM +, Matt Harrison wrote:
> Ok, I have received a new set of NICs and a new switch and the problem still
> remains.
>
> Just for something to do I ran some tests:
>
> Copying a 200Mb file over scp from the main problem workstation to a t
me organise my thoughts.
Matt
    txg=327816
    pool_guid=6981480028020800083
    hostid=95693
    hostname='opensolaris'
    top_guid=5199095267524632419
    guid=5199095267524632419
    vdev_tree
        type='disk'
        id=0
        guid=5199095267524632419
        path='/dev/dsk/c4t0d0s0
ed it by mounting it and peeking inside.
I also tried both with and without /etc/hostid. I still get the same
behavior.
Any thoughts?
Thanks in advance,
- Matt
[EMAIL PROTECTED] wrote:
> Hi,
>
> After a recent pkg image-update to OpenSolaris build 100, my system
> booted
will not happen in time
> to affect our current SAN migration.
on the roadmap?
Thanks,
Matthew
--
Matt Walburn
http://mattwalburn.com
, you just can't.
To access those child filesystems, you will have to share them
individually. It's a pain, but that's how I have to do it atm.
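For example (dataset names made up, and assuming the in-kernel CIFS server; with NFS it's the sharenfs property instead):

zfs set sharesmb=name=audio tank/public/audio
zfs set sharesmb=name=video tank/public/video

Each child ends up as its own share, which is exactly the annoyance described above.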
HTH
Matt
to enlighten me as
to the problem, and I'm ashamed to say I have not taken any steps.
Except to replace the entire machine, I have no idea what to try.
I just wanted to note that although the fault detection is very good, it
isn't always possible to work out what the f
Is this guy seriously for real? It's getting hard to stay on the list
with all this going on. No list etiq
n to say, but it's done that ever since
I've used ZFS. zpool list shows the total space regardless of
redundancy, whereas zfs list shows the actual available space. It was
confusing at first but now I just ignore it.
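A quick way to see the two views side by side (pool name invented):

zpool list tank                                       # raw pool capacity, redundancy included
zfs list -o name,used,available,referenced -r tank    # space actually usable after mirror/raidz overhead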
Matt
> It may not be a ZFS problem, but it is a OpenSolaris problem. The
> drivers for hardware Realtek and other NICs are ... not so great.
>
> -B
>
+1, I was having terrible problems with the onboard RTL NICs... but on
changing to a decent e1000 all is peachy in my world.
Matt
vious long-time Linux user who came over for ZFS, I totally
agree. I much preferred to learn the Solaris way and do things right
rather than pretend it was still Linux.
Now I'm comfortable working on both despite their differences, and I'm
sure I can perform tasks a lot better f
g the disk cover on the chassis and see if you
have a blue eject LED lit, or yellow fault LED.
..Matt Snow
From: Matthew Arguin
Date: Mon, 2 Feb 2009 07:15:05 -0800
To:
Subject: Re: [zfs-discuss] Zfs and permissions
Actually, the issue seems to be more than what I
=ZldIBe.pxW0AAAEfRnkk4z3e&OrderID=9ENIBe.pyJEAAAEf.Xgk4z3e&ProductID=ldpIBe.o9ykAAAEctThSCJEY&FileName=/X4500_Tools_And_Drivers_solaris_42606a.tar.bz2
Extract and install solaris/tools/hdtool/SUNWhd-1.07.pkg.
..Matt
From: Matthew Arguin
Date: Tue, 3 Feb 2
y way to recover the dataset and any/all of the data?
Very grateful if someone can give me some good news :)
Thanks
~Matt
dick hoogendijk wrote:
On Mon, 22 Jun 2009 21:42:23 +0100
Matt Harrison wrote:
She's now desperate to get it back as she's realised there's some
important work stuff hidden away in there.
Without snapshots you're lost.
Ok, thanks. It was worth a shot. Guess she'll
Simon Breden wrote:
Yep, normally you can't get the data back, especially if new files have been
written to the drives AND the files were written over the old ones.
You have a slight chance, or big chance, depending on how many files have been
written since deletion of files, and if ZFS tries
Simon Breden wrote:
Hi Matt!
As kim0 says, that s/w PhotoRec looks like it might work, if it can work with
ZFS... would be interested to hear if it works.
Good luck,
Simon
I'll give it a go as soon as I get a chance. I've had a very quick look
and ZFS isn't in the list o
solaris 10 box. If I boot a BeleniX live CD, will it be
able to mount this ZFS root?
Thanks,
Matt
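(What usually works from a live CD, provided its ZFS version is at least as new as the pool's; the pool name and the /a altroot are just examples.)

zpool import                  # list pools the live environment can see
zpool import -f -R /a rpool   # import under an alternate root so nothing collides
zfs list -r rpool             # the root dataset may need a manual 'zfs mount' if canmount=noauto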
e failed device without all your data going bye bye at the
same time.
Or maybe that wasn't the part you wanted clarified. My bad :)
Matt
/etc/system, however I've just
found by corresponding with the EC2 support folks that it is not supported to
modify /etc/system (and it doesn't work... it keeps the system from booting).
Thanks in advance,
- Matt
to be smaller than the default.
Like I said, I have a use case where I would like to pre-allocate as many large
pages as possible. How can I constrain or shrink it before I start my other
applications?
Thanks in advance,
- Matt
p.s.: I just found there may not be any large pages on domUs, so mayb
boot, manually import the pools
> after the
> application starts, so you get your pages first.
Sounds good... except this is the OpenSolaris distro we're talking about, so I have
ZFS root with no other options. It'll always have at least the rpool.
Good thought though.
g covers
topics relating to what goes on in his sausage making duties.
- Matt
p.s.: The web says a German word for colloquialism is umgangssprachlich.
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Global Systems Practice
http://blogs.sun.com/mingenthron
We have a system with two drives in it, part UFS, part ZFS. It's a software
mirrored system with slices 0,1,3 setup as small UFS slices, and slice 4 on
each drive being the ZFS slice.
One of the drives is failing and we need to replace it.
I just want to make sure I have the correct order of t
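(For what it's worth, the order I'd expect, sketched from the layout described above: c0t1d0 failing, s4 the ZFS slice. The metadevice names, pool name, and metadb slice are invented, so check them against your own metastat and zpool status output first.)

metadb -i                                 # note any replicas on the failing disk
metadb -d c0t1d0s3                        # ...and delete them
metadetach d0 d10                         # repeat for each mirror with a submirror on c0t1d0
zpool offline tank c0t1d0s4               # tell ZFS the disk is going away
# physically swap the disk, then copy the partition table from the good drive
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
metadb -a c0t1d0s3
metattach d0 d10                          # SVM resyncs the UFS sides
zpool replace tank c0t1d0s4               # ZFS resilvers its slice
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0   # SPARC; installgrub on x86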
Now I have a very stupid question.
I put this thread on my watch list. I've gotten four emails saying that the
thread was updated by various people, yet there are no replies in it. How do I
see the replies?
I am trying to determine the best way to move forward with about 35 x86 X4200s.
Each box has 4x 73GB internal drives.
All the boxes will be built using Solaris 10 11/06. Additionally, these boxes
are part of a highly available production environment with an uptime
expectation of six nines (just a f
Thanks for the responses. There is a lot there I am looking forward to digesting.
Right off the bat, though, I wanted to bring up something I found just before
reading this reply, as the answer to this question would automatically answer
some other questions.
There is a ZFS best practices wiki at
http:
So it sounds like the consensus is that I should not worry about using slices
with ZFS, and the swap best practice doesn't really apply to my situation of a
4-disk X4200.
So in summary (please confirm), this is what we are saying is a safe bet for
use in a highly available production environment
Autoreplace is currently the biggest advantage that H/W RAID controllers have
over ZFS and other less advanced forms of S/W RAID.
I would even go so far as to promote this issue to the forefront as a leading
deficiency that is hindering ZFS adoption.
Regarding H/W RAID controllers, things are k
Did you try using ZFS compression on the Oracle filesystems?
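(For reference, it's just a per-dataset property, and it only affects blocks written after it is turned on; the dataset name below is made up.)

zfs set compression=on tank/oradata      # lzjb by default
zfs get compressratio tank/oradata       # see what you're actually saving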
I have about a dozen two-disk systems that were all set up the same using a
combination of SVM and ZFS.
s0 = / SMV Mirror
s1 = swap
s3 = /tmp
s4 = metadb
s5 = zfs mirror
The system does boot, but once it gets to ZFS, ZFS fails and all subsequent
services fail as well (including ssh).
/home,/tmp,
Is this something that should work? The assumption is that there is a dedicated
raw swap slice, and after installation /tmp (which will be on /) will be unmounted
and mounted on zpool/tmp (just like zpool/home).
Thoughts on this?
Well, I am aware that /tmp can be mounted on swap as tmpfs and that this is
really fast as almost all writes go straight to memory, but this is of little to
no value to the server in question.
The server in question is running two enterprise third-party applications. No
compilers are installed...in
Ok, so you are suggesting that I simply mount /tmp as tmpfs on my existing 8GB
swap slice and then put in the VM limit on /tmp? Will that limit only affect
users writing data to /tmp, or will it also affect the system's use of swap?
For reference...here is my disk layout currently (one disk of two, but both are
identical)
s4 is for the MetaDB
s5 is dedicated for ZFS
partition> print
Current partition table (original):
Total disk cylinders available: 8921 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Si
Ok, since I already have an 8GB swap slice I'd like to use, what would be the
best way of setting up /tmp on this existing swap slice as tmpfs and then applying
the 1GB quota limit?
I know how to get rid of the zpool/tmp filesystem in ZFS, but I'm not sure how
to actually get to the above in a pos
And just doing this will automatically target my /tmp at my 8GB swap slice on
s1, as well as putting the quota in place?
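(Assuming the suggestion was the usual vfstab route, the entry would look something like the line below; the size option is what enforces the 1GB cap.)

# /etc/vfstab: mount /tmp as tmpfs, capped at 1GB
swap    -       /tmp    tmpfs   -       yes     size=1024m

The size= cap only limits what can be stored under /tmp; it doesn't reserve or restrict the 8GB swap slice itself, so normal anonymous-memory swapping is unaffected.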
Oh, one other thing...s1 (8GB swap) is part of an SVM mirror (on d1)
Worked great. Thanks
the comments in the code (arc.c I think) for info on this.
With the Eclipse project, I'd expect somewhat better results, but it
really depends on the workload. Perhaps some empirical data comparing
the two would be useful.
Hope that helps,
- Matt
Is it a supported configuration to have a single LUN presented to 4 different
Sun servers over a Fibre Channel network and then mount that LUN on each
host as the same ZFS filesystem?
We need any of the 4 servers to be able to write data to this shared FC disk.
We are not using NFS as we do
That is what I was afraid of.
Regarding QFS and NFS, isn't QFS something that must be purchased? I looked
on the Sun website and it appears to be a little pricey.
NFS is free, but is there a way to use NFS without traversing the network? We
already have our SAN presenting this disk to each o
Can't use the network, because these 4 hosts are database servers that will be
dumping close to a terabyte every night. If we put that over the network, all
the other servers would be starved.
This message posted from opensolaris.org
the 4 database servers are part of an Oracle RAC configuration. 3 databases are
hosted on these servers: BIGDB1 on all 4, littledb1 on the first 2, and
littledb2 on the last 2. The Oracle backup system spawns DB backup jobs that
could occur on any node based on traffic and load. All nodes are
I'm not sure what you mean.
Here is what seems to be the best course of action, assuming IP over FC is
supported by the HBAs (which I am pretty sure they do, since this is all brand
new equipment).
Mount the shared disk backup LUN on Node 1 via the FC link to the SAN as a
non-redundant ZFS volume.
On node 1, RMAN (Oracle bac
, cr_txg 4, last_txg 1759562, 406M, 362 objects
Is anyone able to shed any light on where this error might be and what I might
be able to do about it? I do not have a backup of this data so restoring is not
an option.
Any advice appreciated.
Thanks,
Matt
S) that is sliced up
Any help or pointing to good documentation would be much appreciated.
Thanks
Matt B
Below I included a metastat dump
d3: Mirror
    Submirror 0: d13
      State: Okay
    Submirror 1: d23
      State: Needs maintenance
    Submirror 2: d33
      State: Okay
    Submirro
Anyone? Really need some help here
filesystem is now reported as being around 12MB total
with the snapshot around 6MB.
So the snapshot is being reported as being the size of the changed *files* and
not the changed *blocks*. Is this correct (that the snapshot is consuming this
space) or is that just how it's being reported with t
OK, to answer my own question (with a little help from Eric!) ...
I was using vi to edit the file, which must be rewriting the entire file back
out to disk - hence the larger-than-expected growth of the snapshot.
Matt
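(An easy way to convince yourself of the blocks-vs-files accounting, with made-up names:)

zfs create tank/demo
dd if=/dev/urandom of=/tank/demo/file bs=1024k count=10
zfs snapshot tank/demo@before
dd if=/dev/urandom of=/tank/demo/file bs=1024k count=10   # rewrite the whole file, as vi effectively does
zfs list -r -t all -o name,used,referenced tank/demo      # snapshot USED grows by roughly the rewritten blocks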
oller card. I've never
measured this or seen it measured -- any pointers would be useful. I
believe the I/Os are 8KB; the application is MySQL.
Thanks in advance,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Global Systems Practice
http://blo
Hi. We have a hard drive failing in one of our production servers.
The server has two drives, mirrored. It is split between UFS with SVM, and ZFS.
Both drives are setup as follows. The drives are c0t0d0 and c0t1d0. c0t1d0 is
the failing drive.
slice 0 - 3.00GB UFS (root partition)
slice 1
n data errors
# zpool status oldspace
^C
(process not responding...)
# zfs list -r space
NAME                   USED  AVAIL  REFER  MOUNTPOINT
space                 1.39G  51.8G    19K  /space
space/homes           1.39G  51.8G    18K  /space/homes
space/homes/mi109165   792M  51.8G   792M
nd.
Another attempt at the zfs send/receive again failed with this in messages:
Mar 22 19:38:47 hancock zfs: [ID 664491 kern.warning] WARNING: Pool 'oldspace'
has encountered an uncorrectable I/O error. Manual intervention is required.
Any pointers on what "manual inte
I destroy this snapshot as well, they'll show
on the actual underlying filesystem.
- Matt
>
> On Sat, Mar 22, 2008 at 11:33 PM, Matt Ingenthron <matt.ingenthron@sun.com> wrote:
We have a 32 GB RAM server running about 14 zones. There are multiple
databases, application servers, web servers, and ftp servers running in the
various zones.
I understand that using ZFS will increase kernel memory usage, however I am a
bit concerned at this point.
[EMAIL PROTECTED]:~/zonecf
[EMAIL PROTECTED]:~ #mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip
hook neti sctp arp usba uhci fcp fctl qlc nca lofs zfs random fcip crypto
logindmux ptm nfs ]
::memstat
Page Summary                Pages                MB  %Tot
------------------------------------------------------------
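(If the ARC turns out to be most of that kernel figure, the usual knob on S10/SXCE is capping it in /etc/system and rebooting; the 4GB value below is only an example, not a recommendation.)

* /etc/system: cap the ZFS ARC at 4GB
set zfs:zfs_arc_max = 0x100000000

Current ARC usage is visible with: kstat -m zfs -n arcstats | grep -w size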
I can't believe it's almost a year later, with a patch provided, and this bug is
still not fixed.
For those of us that can't recompile the sources, it makes Solaris useless if we
want to use a FireWire drive.
--m
Anyone willing to provide the modified kernel binaries for OpenSolaris 2008.05?
available for installation?
Thanks,
-matt
iction and I really can't access individual ZFS filesystems
under one share, I guess I will have to have 6 network drives instead of
one, but this will of course confuse the users no end.
Thanks
- --
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org
public/audio via the public share, but it doesn't allow detailed
management of audio as it would with individual ZFS filesystems.
I hope this is a better explanation,
Thanks
- --
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org
question, so we can just add the 3 upgraded disks, but what is the
recommended procedure to re-create or migrate the pool to the new disks?
thanks
Matt
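(A sketch of the replace-in-place route discussed later in the thread; device names are invented, and it has to be done one disk at a time.)

zpool replace tank c1t1d0 c1t4d0   # swap one old 750GB disk for a new, larger one
zpool status tank                  # wait for the resilver to finish before touching the next disk

Repeat for the remaining disks. Once every disk in the vdev is the larger size, the extra capacity shows up: on recent builds after 'zpool set autoexpand=on tank', on older ones after an export/import.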
Tomas Ögren wrote:
| On 28 June, 2008 - Matt Harrison sent me these 0,6K bytes:
|
|> Hi gurus,
|>
|> Just wanted some input on this for the day when an upgrade is necessary.
|>
|> Lets say I have simple pool made up of 3 750gb SATA
Matt Harrison wrote:
| Tomas Ögren wrote:
| | On 28 June, 2008 - Matt Harrison sent me these 0,6K bytes:
| |
| |> Hi gurus,
| |>
| |> Just wanted some input on this for the day when an upgrade is
necessary.
| |>
| |> Lets say I have s
James C. McPherson wrote:
| Matt Harrison wrote:
|
|> I seem to have overlooked the first part of your reply, I can just
|> replace the disks one at a time, and of course the pool would rebuild
|> itself onto the new disk. W
o reel off my wish list here at this point ;)
Thanks
Matt
exactly what I was looking for. I can work my way
around the other SNMP problems, like not reporting total space on a ZFS filesystem :)
Thanks
Matt
James C. McPherson wrote:
> Matt Harrison wrote:
>
>> I seem to have overlooked the first part of your reply, I can just
>> replace the disks one at a time, and of course the pool would rebuild
>> itself onto the new disk. Would this automatically extend the size o
Thanks for that, I should've at least tried
a reboot :)
Thanks
Matt
0 0 0
          c7t6d0  ONLINE       0     0     0
          c7t7d0  ONLINE       0     0     0

errors: No known data errors
Thanks in advance,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Systems Practice, Client Solutions
http