So I'm in the process of building a ZFS-based SAN. After toying with it at
home I've ordered up all the parts to begin my build. That's a completely
different story though.
I'm wondering what the possibilities of two-way replication are for a ZFS
storage pool.
The scenario - the ZFS SAN will
So I'm working up my SAN build, and I want to make sure it's going to behave
the way I expect when I go to expand it.
Currently I'm running ten 500GB Seagate Barracuda ES.2 drives as two-drive
mirrors added to my tank pool.
I'm going to be using this for virtual machine storage, and have creat
That is exactly what I meant. Sorry for my newbie terminology. I'm so used to
traditional RAID that it's hard to shake.
That's great to know. Time to soldier on with the build!
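For the archives: growing a pool built from two-way mirrors is just a matter of
adding another mirror vdev. Device names below are made up - substitute your own:

  zpool add tank mirror c0t10d0 c0t11d0
  zpool status tank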
Working on my ZFS build, using a SuperMicro 846E1 chassis and an LSI 1068e SAS
controller, I'm wondering how well FMA (fault management) works in OpenSolaris 2009.06.
I'm hoping that if ZFS detects an error with a drive, it'll light up the
fault light on the corresponding hot-swap drive in my enclosure and any
st way of doing this without
setting up a complex HA service and at the same time minimising load on the
master?
Thanks in advance and sorry for the barrage of questions,
Matt.
n the slave even if not required?
A zfs send/receive every 15 minutes might well have to do.
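Roughly what that 15-minute cycle might look like - dataset names, snapshot
names and the 'slave' host are placeholders:

  zfs snapshot tank/vols@1215
  zfs send -i tank/vols@1200 tank/vols@1215 | ssh slave zfs receive -F tank/vols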
Matt.
could use one and configure it to report
writes as being flushed to disk before they actually were. That might give a
slight edge in performance in some cases but I would prefer to have the data
security instead, tbh.
Matt.
things that SAN array products do.
I've not seen an example of that before. Do you mean having two 'head units'
connected to an external JBOD enclosure or a proper HA cluster type
configuration where the entire thing, disks and all, are duplicated?
Matt.
Just wanted to add that I'm in the exact same boat - I'm connecting from a
Windows system and getting just horrid iSCSI transfer speeds.
I've tried updating to COMSTAR (although I'm not certain that I'm actually
using it) to no avail, and I tried updating to the latest DEV version of
OpenSolaris
No SSD Log device yet. I also tried disabling the ZIL, with no effect on
performance.
Also - what's the best way to test local performance? I'm _somewhat_ dumb as
far as opensolaris goes, so if you could provide me with an exact command line
for testing my current setup (exactly as it appears
Just out of curiosity - what Supermicro chassis did you get? I've got the
following items shipping to me right now, with SSD drives and 2TB main drives
coming as soon as the system boots and performs normally (using 8 extra 500GB
Barracuda ES.2 drives as test drives).
http://www.acmemicro.com
Responses inline :
> Hi Matt
> Are the seeing low speeds on writes only or on both
> read AND write?
>
Low speeds both reading and writing.
> Are you seeing low speed just with iSCSI or also with
> NFS or CIFS?
Haven't gotten NFS or CIFS to work properly. Maybe I
> One question though:
> Just this one SAS adaptor? Are you connecting to the
> drive
> backplane with one cable for the 4 internal SAS
> connectors?
> Are you using SAS or SATA drives? Will you be filling
> up 24
> slots with 2 TByte drives, and are you sure you won't
> be
> oversubscribed wit
Also - still looking for the best way to test local performance - I'd love to
make sure that the volume can actually perform well enough locally to
saturate gigabit. If it can't do it internally, why should I expect it to work
over GbE?
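One crude approach that gets suggested for this kind of check is a big
sequential dd against the pool (the path is just an example, and /dev/zero is
misleading if compression is enabled):

  ptime dd if=/dev/zero of=/tank/testfile bs=1024k count=8192   # write ~8GB
  ptime dd if=/tank/testfile of=/dev/null bs=1024k              # read it back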
I do not know if IBM has officially said they support ZFS but with the latest
client (5.4.1.2) my file systems show up as ZFS now and a quick test restore
seems to restore ZFS ACLs as well now. They also all appear to be included as
local filesystems so no workarounds are needed to back them a
it be the controller card running in a regular PCI slot on the AMD setup? Will
it be the 32-bit Intel system? Or will using Samba overshadow either of the
hardware options? Any suggestions would be greatly appreciated. Thanks.
Matt
any performance implications?
cheers
Matt
Hi Everyone,
It looks like I've got something weird going on with zfs performance on a
ramdisk... ZFS is performing at not even a third of what UFS is doing.
Short version:
Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren't swapping
Create zpool on it (zpool create ram)
Change zfs op
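Roughly, that setup would have looked like this - the actual zfs property
changes are cut off above, so the last line is only an illustrative guess:

  ramdiskadm -a rd 80g
  zpool create ram /dev/ramdisk/rd
  zfs set primarycache=none ram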
It can, but doesn't in the command line shown below.
M
On Mar 8, 2010, at 6:04 PM, "ольга крыжановская" wrote:
> Does iozone use mmap() for IO?
>
> Olga
>
> On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger
> wrote:
>> Hi Everyone,
>>
>>
>
On Mar 8, 2010, at 6:31 PM, Richard Elling wrote:
>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and mounting the
>> UFS forcedirectio (no point in using a buffer cache memory for something
>> that’s already in memory)
>
> Did you also set primarycache=none?
> -- richard
Good
On Mar 8, 2010, at 6:31 PM, Bill Sommerfeld wrote:
>
> if you have an actual need for an in-memory filesystem, will tmpfs fit
> the bill?
>
> - Bill
Very good point Bill - just ran this test and started to get the numbers I was
expecting (1.3 GB
Ross is correct - advanced OS features are not required here - just the ability
to store a file - don’t even need unix style permissions
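For anyone following along, a plain tmpfs mount on Solaris is about as simple
as it gets (size and mount point are just examples):

  mkdir /rambench
  mount -F tmpfs -o size=81920m swap /rambench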
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ross Walker
Sent: Tuesday, M
a significant drain of CPU resource.
>
> -r
>
>
> On Mar 8, 2010, at 17:57, Matt Cowger wrote:
>
>> Hi Everyone,
>>
>> It looks like I've got something weird going on with zfs performance on
>> a ramdisk... ZFS is performing at not even a third of what UFS is doing.
Ross Walker [mailto:rswwal...@gmail.com]
Sent: Tuesday, March 09, 2010 3:53 PM
To: Roch Bourbonnais
Cc: Matt Cowger; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk
(70% drop)
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais
wrote:
>
On Mar 10, 2010, at 6:30 PM, Ian Collins wrote:
> Yes, noting the warning.
Is it safe to execute on a live, active pool?
--m
This is totally doable, and a reasonable use of zfs snapshots - we do some
similar things.
You can easily determine if the snapshot has changed by checking the output of
zfs list for the snapshot.
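Concretely, something like this (dataset and snapshot names are placeholders);
a snapshot's USED value starts near zero and only grows as the live filesystem
diverges from it:

  zfs list -t snapshot -o name,used,refer tank/home@versioncheck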
--M
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Harry Putnam
Sent: Monday, March 22, 2010 2:23 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] snapshots as versioning tool
Matt Cowger writes:
> This is totally doable, and a reasonable use of zfs snapshots - we
> do some simil
[zfs-discuss] snapshots as versioning tool
|
| Matt Cowger writes:
|
| > zfs list | grep '@'
| >
| > zpool/f...@1154758 324G - 461G -
| > zpool/f...@1208482 6.94G - 338G -
| > zpool/f...@daily.net
RAIDZ = RAID5, so lose 1 drive (1.5TB)
RAIDZ2 = RAID6, so lose 2 drives (3TB)
RAIDZ3 = RAID7(?), so lose 3 drives (4.5TB).
What you lose in usable space, you gain in redundancy.
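Worked out for an example 8-disk vdev of those 1.5TB drives (your disk count
may differ, and this ignores metadata and other overhead):

  raidz  : (8 - 1) x 1.5TB = 10.5TB usable
  raidz2 : (8 - 2) x 1.5TB =  9.0TB usable
  raidz3 : (8 - 3) x 1.5TB =  7.5TB usable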
-m
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org]
this type of setup?
cheers
Matt
It probably put an EFI label on the disk. Try wiping the first AND
last 2MB.
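Something along these lines - the device name is an example, and the seek value
has to be worked out from the actual disk size, since EFI keeps a backup label
at the end of the disk:

  dd if=/dev/zero of=/dev/rdsk/c0t1d0p0 bs=1024k count=2
  dd if=/dev/zero of=/dev/rdsk/c0t1d0p0 bs=1024k count=2 oseek=<disk_size_in_MB minus 2>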
--M
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of nich romero
Sent: Wednesday, May 05, 2010 1:00 PM
To: zfs-discuss@opensolaris.org
Without the USB drive attached, the laptop failed to boot; I had to connect the
USB drive and it booted up fine.
The key would be to degrade the pool before shutdown, e.g. disconnect the USB
drive; I might try using zpool offline and see how that works.
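i.e. something like this before shutdown, and the reverse once the drive is
back (device name is an example):

  zpool offline rpool c5t0d0s0
  # ...shut down, travel without the disk, reconnect later...
  zpool online rpool c5t0d0s0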
If I encounter issues, I'll post again.
cheers
Matt
On 05
nline pool
disk.
Exact steps on what I did :
http://blogs.sun.com/mattman/entry/bootable_usb_mirror
As I find other caveats I'll add them... But it looks like having the
drive connected at all times is preferable.
cheers
Matt
On 05/ 6/10 12:11 PM, Matt Keenan wrote:
Based on comm
On 05/ 7/10 10:07 PM, Bill McGonigle wrote:
On 05/07/2010 11:08 AM, Edward Ned Harvey wrote:
I'm going to continue encouraging you to stay "mainstream,"
because what people do the most is usually what's supported the best.
If I may be the contrarian, I hope Matt keep
I note in your iostat data below that one drive (sd5) consistently performs
MUCH worse than the others, even when doing less work.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of John J Balestrini
Sent: Tuesday, M
As queried by Ian, the new disk being attached must be at least as big
as the original root pool disk. It can be bigger, but the difference
will not be used in the mirroring.
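For reference, the attach itself is just the following (pool and device names
are examples; for a root pool the new disk also needs an SMI label with the
slice covering the disk):

  zpool attach rpool c0t0d0s0 c0t1d0s0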
cheers
Matt
On 05/20/10 10:11 AM, Ian Collins wrote:
On 05/20/10 08:39 PM, roi shidlovsky wrote:
hi.
i am trying to
I've set up an iSCSI volume on OpenSolaris (snv_134) with these commands:
sh-4.0# zfs create rpool/iscsi
sh-4.0# zfs set shareiscsi=on rpool/iscsi
sh-4.0# zfs create -s -V 10g rpool/iscsi/test
The underlying zpool is a mirror of two SATA drives. I'm connecting from a Mac
client with global SAN i
ake it very hard to recover if the drive was physically dead
Thanks,
Matt
I have an odd setup at present, because I'm testing while still building my
machine.
It's an Intel Atom D510 mobo running snv_134, with 2GB RAM and 2 SATA drives (AHCI):
1: Samsung 250GB old laptop drive
2: WD Green 1.5TB drive (idle3 turned off)
Ultimately, it will be a Time Machine backup for my Ma
I have an OpenSolaris snv_134 machine with 2 x 1.5TB drives. One is a Samsung
Silencer, the other a dreaded Western Digital Green.
I'm testing the mirror for failure by simply yanking out the SATA cable while
the machine is running. The system never skips a beat, which is great. But the
reconn
with Solaris 10.
Matt
On Fri, Jul 9, 2010 at 3:40 AM, Vladimir Kotal wrote:
> On 07/ 9/10 09:58 AM, Brandon High wrote:
>
>> On Fri, Jul 9, 2010 at 12:42 AM, James Van Artsdalen
>> wrote:
>>
>>> If these 6 Gb/s controllers are based on the Marvell part I would te
--
Matt Urbanowski
Graduate Student
5-51 Medical Sciences Building
Dept. Of Cell Biology
Univ
_id=6967746
On 04/08/2010, at 2:13, Roch Bourbonnais wrote:
>
> On May 27, 2010, at 07:03, Brent Jones wrote:
>
>> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
>> wrote:
>>> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:
>>>
I've been using 10 Samsung EcoGreens in a raidz2 on FreeBSD for about 6
months. (Yeah, I know it's above 9; the performance is fine for my usage
though.)
Haven't had any problems.
are some
additional tweaks that bring the failover time down significantly.
Depending on pool configuration and load, failover can be done in under 10
seconds based on some of my internal testing.
-Matt Breitbach
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs
re, am I seeing the uncompressed file size, or the
compressed file size?
My gut tells me that, since they inflated _so_ badly when I Storage vMotioned
them, they are the compressed values, but I would love to know for
sure.
-Matt Breitbach
Currently using NFS to access the datastore.
-Matt
-Original Message-
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Tuesday, November 22, 2011 11:10 PM
To: Matt Breitbach
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Compression
Hi Matt,
On Nov 22, 2011, at
adventure, and I got
the info that I needed. Thanks to all that took the time to reply.
-Matt Breitbach
-Original Message-
From: Donal Farrell [mailto:vmlinuz...@gmail.com]
Sent: Wednesday, November 23, 2011 10:42 AM
To: Matt Breitbach
Subject: Re: [zfs-discuss] Compression
is this o
I would say that it's a "highly recommended". If you have a pool that needs
to be imported and it has a faulted, unmirrored log device, you risk data
corruption.
-Matt Breitbach
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@open
on top of it.
_
From: Garrett D'Amore [mailto:garrett.dam...@nexenta.com]
Sent: Sunday, December 11, 2011 10:35 PM
To: Frank Cusack
Cc: Matt Breitbach; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] does log device (ZIL) require a mirror setup?
Loss only.
Sent from
I'd look at iostat -En. It will give you a good breakdown of disks that
have seen errors. I've also spotted failing disks just by watching an
iostat -nxz and looking for the one who's spending more %busy than the rest
of them, or exhibiting longer than normal service times.
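For example (the interval is arbitrary):

  iostat -En        # per-device soft/hard/transport error counters
  iostat -xnz 5     # watch asvc_t and %b for the odd disk out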
-Matt
On Tue, Jan 10, 2012 at 2:21 PM, Garrett D'Amore
wrote:
> put the configuration in /etc/hostname.if0 (where if0 is replaced by the
> name of your interface, such as /etc/hostname.e1000g0)
>
> Without an IP address in such a static file, the system will default to DHCP
> and hence override other
It's rare that the L2ARC (14GB) hits double
digits in %hit whereas the ARC (3GB) is frequently >80% hit.
TIA
matt
How long have you let the box sit? I had to offline the slog device, and it
took quite a while for it to come back to life after removing the device
(4-5 minutes). It's a painful process, which is why ever since I've used
mirrored slog devices.
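Adding the log as a mirror from the start looks like this (device names are
examples):

  zpool add tank log mirror c2t0d0 c2t1d0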
-Original Message-
From: zfs-discuss-boun..
this
will simply re-silver everything from the already attached device back
onto this device.
If I attached this device to a different pool it will simply get
overwritten.
Any ideas?
cheers
Matt
Cindy/Casper,
Thanks for the pointer - luckily I'd not done the detach before sending
the email; split seems the way to go.
thanks again
Matt
On 03/29/12 05:13 PM, Cindy Swearingen wrote:
Hi Matt,
There is no easy way to access data from a detached device.
You could try to force impo
disk. And format/partition shows slice 0 on both disks also
consuming the entire disk, respectively.
So how does one force the pool with the larger disk to increase in size?
cheers
Matt
On 03/30/12 12:55 AM, Daniel Carosone wrote:
On Thu, Mar 29, 2012 at 05:54:47PM +0200, casper@oracle.com
Casper,
Yep, that's the lad - I set it to on and the split pool expands.
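For the archives, "it" here is presumably the autoexpand pool property; the
by-hand equivalent for a single device is zpool online -e (names are examples):

  zpool set autoexpand=on mypool
  zpool online -e mypool c0t1d0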
thanks
Matt
On 03/30/12 02:15 PM, casper@oracle.com wrote:
Hi,
As an addendum to this, I'm curious about how to grow the split pool in
size.
Scenario: a mirrored pool comprising two disks, one 200GB and the o
exporting the pool
and re-importing with a different name and I still get the same
error. There are no other zpools on the system, both zpool list and
zpool export return nothing other than the rpool I've just imported.
I'm somewhat stumped... any i
ere online. Is
this a known issue with ZFS? A bug?
cheers
Matt
On 04/16/12 10:05 PM, Cindy Swearingen wrote:
Hi Matt,
I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S1
On 04/17/12 01:00 PM, Jim Klimov wrote:
2012-04-17 14:47, Matt Keenan wrote:
- or is it possible that one of the devices being a USB device is
causing the failure ? I don't know.
Might be, I've got little experience with those beside LiveUSB
imagery ;)
My reason for splitting th
hat "old" data every few weeks to make sure a bit or two
hasn't flipped?
FYI - I personally scrub once per month. Probably overkill for my data, but
I'm paranoid like that.
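If anyone wants to automate that, a root crontab entry along these lines does
it (pool name and schedule are just examples - 02:00 on the 1st of each month):

  0 2 1 * * /usr/sbin/zpool scrub tank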
-Matt
-Original Message-
How often do you normally run a scrub, before this happened?
djust arc_c
and resulted in significantly fewer xcalls.
-Matt Breitbach
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
Sent: Tuesday, June 12, 2012 10:14 AM
To: Richard Elling
Cc: zfs-discuss
Subject: Re:
iminishing returns. Any opinions on this?
Cheers,
Matt Hardy
Pool is 6x striped STEC ZeusRAM as ZIL, 6x OCZ Talos C 230GB drives as L2ARC,
and 24x 15k SAS drives striped (no parity, no mirroring) - I know, terrible
for reliability, but I just want to see what kind of IO I can hit.
Checksum is ON - can't recall what default is right now.
Compression is off
Dedup
NFS - iSCSI and FC/FCoE to come once I get it into the proper lab.
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Tuesday, July 24, 2012 11:36 PM
To: matth...@flash.shanje.com
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] IO load questions
Important question, wha
etups utilizing RDMA.
-Original Message-
From: Palmer, Trey [mailto:trey.pal...@gtri.gatech.edu]
Sent: Wednesday, July 25, 2012 8:22 PM
To: Richard Elling; Matt Breitbach
Cc: zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] IO load questions
BTW these SSD
STEC ZeusRAM for slog - it's expensive and small, but it's the best out
there. OCZ Talos C for L2ARC.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
Sent: Friday, August 03, 2012 8:40 PM
To: Karl Rossin
2011, put into service in September, used for approx. 1 year.
We have 6x disks available - part number Z4RZF3D-8UC. If anyone is
interested, please email me off-list.
-Matt Breitbach
We actually did some pretty serious testing with SATA SLCs from Sun directly
hosting zpools (not as L2ARC). We saw some really bad performance - as though
there were something wrong, but couldn't find it.
If you search my name on this list you'll find the description of the problem.
--m
You can truncate a file:
echo "" > bigfile
That will free up space without the 'rm'.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
Sent: Wednesday, September 29, 2010 12:59 PM
To: zfs-discuss@opens
Hi,
Can someone shed some light on what this ZPOOL_CONFIG is, exactly?
At a guess, is it a bad sector of the disk, non-writable, and thus ZFS
marks it as a hole?
cheers
Matt
The nodes (x4240's) are identical and would have identical storage installed,
so the paths would be the same.
Has anyone done anything similar to this? I'd love something more than "it
should work" before dropping $25k on SSDs...
TIA,
matt
On Nov 15, 2010, at 4:15 PM, Erik Trimble wrote:
> On 11/15/2010 2:55 PM, Matt Banks wrote:
>> I asked this on the x86 mailing list (and got a "it should work" answer),
>> but this is probably more of the appropriate place for it.
>>
>> In a 2 node Sun Cl
Hi, I have a low-power server with three drives in it, like so:
matt@vault:~$ zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 588M in 0h3m with 0 errors on Fri Jan 7 07:38:06 2011
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0
Thanks, Marion.
(I actually got the drive labels mixed up in the original post... I edited it
on the forum page:
http://opensolaris.org/jive/thread.jspa?messageID=511057#511057 )
My suspicion was the same: the drive doing the slow i/o is the problem.
I managed to confirm that by taking the oth
Except for metadata, which seems to be written in small pieces, wouldn't having
a zfs record size that is a multiple of 4k on a 4k-aligned vdev work OK?
Or can a 16KB zfs record, for example, start at any sector in
the vdev?
Thanks Richard - interesting...
The c8 controller is the motherboard SATA controller on an Intel D510
motherboard.
I've read over the man page for iostat again, and I don't see anything in there
that makes a distinction between the controller and the device.
If it is the controller, would it m
be doing anything.
Any tips greatly appreciated,
thanks
Matt Harrison
On 11/04/2011 05:25, Brandon High wrote:
On Sun, Apr 10, 2011 at 9:01 PM, Matt Harrison
wrote:
I had a dedup dataset and tried to destroy it. The command hung and so did
anything else zfs related. I waited half an hour or so, the dataset was
only 15G, and rebooted.
How much RAM does the
On 11/04/2011 10:04, Brandon High wrote:
On Sun, Apr 10, 2011 at 10:01 PM, Matt Harrison
wrote:
The machine only has 4G RAM I believe.
There's your problem. 4G is not enough memory for dedup, especially
without a fast L2ARC device.
It's time I should be heading to bed so I
February/037300.html
Thanks for the info guys.
I decided that the overhead involved in managing (esp. deleting) deduped
datasets far outweighed the benefits it was bringing me. I'm currently
remaking datasets non-dedup, and now that I know about the "hang", I am a
e "open storage"
marketing and whatnot... I guess I'm asking if it looks like the
situation has changed. Apologies for the "fuzzy" question.
Matt
tion about mypool. I would expect this
file to contain some reference to mypool. So I tried:
zpool set -o cachefile=/a/etc/zfs/zpool.cache
Which fails.
Any advice would be great.
cheers
Matt
t simply add a "-f"
to force the import, any ideas on what else I can do here?
cheers
Matt
On 05/27/11 13:43, Jim Klimov wrote:
Did you try it as a single command, somewhat like:
zpool create -R /a -o cachefile=/a/etc/zfs/zpool.cache mypool c3d0
Using altroots and cachefile(=no
solution.
I even tried simply copying /etc/zfs/zpool.cache to
/a/etc/zfs/zpool.cache and not exporting/importing the data pool at all,
however this gave the same hostid problem.
thanks for your help.
cheers
Matt
Jim Klimov wrote:
Actually if you need beadm to "know" about t
Dan,
Tried exporting data after beadm umount, but on reboot the data zpool is simply
not imported at all...
So exporting data before reboot doesn't appear to help.
thanks
Matt
On 06/01/11 01:35, Daniel Carosone wrote:
On Tue, May 31, 2011 at 05:32:47PM +0100, Matt Keenan wrote:
Jim,
Thanks
Hi list,
I've got a pool that's got a single raidz1 vdev. I've just got some more
disks in and I want to replace that raidz1 with a three-way mirror. I
was thinking I'd just make a new pool and copy everything across, but
then of course I've got to deal with the name change.
Basically, what is th
On 01/06/2011 20:45, Eric Sproul wrote:
On Wed, Jun 1, 2011 at 2:54 PM, Matt Harrison
wrote:
Hi list,
I've got a pool that's got a single raidz1 vdev. I've just got some more disks in
and I want to replace that raidz1 with a three-way mirror. I was thinking
I'd just make a ne
On 01/06/2011 20:52, Eric Sproul wrote:
On Wed, Jun 1, 2011 at 3:47 PM, Matt Harrison
wrote:
Thanks Eric, however seeing as I can't have two pools named 'tank', I'll
have to name the new one something else. I believe I will be able to rename
it afterwards, but I just w
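The rename afterwards should just be an export and a re-import under the new
name, once the old pool name is free ('newtank' is a placeholder):

  zpool export newtank
  zpool import newtank tank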
On 01/06/2011 20:53, Cindy Swearingen wrote:
Hi Matt,
You have several options in terms of migrating the data but I think the
best approach is to do something like I have described below.
Thanks,
Cindy
1. Create snapshots of the file systems to be migrated. If you
want to capture the file
Hi list,
I want to monitor the read and write ops/bandwidth for a couple of pools
and I'm not quite sure how to proceed. I'm using rrdtool so I either
want an accumulated counter or a gauge.
According to the ZFS admin guide, running zpool iostat without any
parameters should show the activit
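One wrinkle worth noting: the first report zpool iostat prints is an average
since boot, so for a gauge you want a second sample over an interval (pool name
is an example; the second report covers just the last 10 seconds):

  zpool iostat tank 10 2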
On 28/06/2011 16:44, Tomas Ögren wrote:
Matt Harrison wrote:
Hi list,
I want to monitor the read and write ops/bandwidth for a couple of
pools
and I'm not quite sure how to proceed. I'm using rrdtool so I either
want an accumulated counter or a gauge.
According to the ZFS a
Hi list,
I've got a system with 3 WD and 3 Seagate drives. Today I got an email
that zpool status indicated one of the Seagate drives as REMOVED.
I've tried clearing the error but the pool becomes faulted again. I've taken
out the offending drive and plugged it into a Windows box with SeaTools
insta
On 11/09/2011 18:32, Krunal Desai wrote:
On Sep 11, 2011, at 13:01 , Richard Elling wrote:
The removed state can be the result of a transport issue. If this is a
Solaris-based
OS, then look at "fmadm faulty" for a diagnosis leading to a removal. If none,
then look at "fmdump -eV" for errors rel
The method mentioned above
worked swimmingly for me. I was nervous doing this during production hours, but
the release command returned in about 5-7 seconds with no apparent adverse
effects. I was then able to destroy the snap.
I was initially afraid that it was somehow the "memory bug" mentioned in the
current thread (when things are fresh in your mind, they seem more likely), so
I'm glad this thread was out there.
matt
D-Z2 pool really wouldn't/couldn't
recover (resilver) from a drive failure? That seems to fly in the face of the
x4500 boxes from a few years ago.
matt
lly went with exactly
the same as the author. I can confirm that after 3 months of running
there hasn't even been a hint of a problem with the hardware choice.
You can see the hardware post here
http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/
Hope this helps you decide a bit mor