/recv the latest snapshot. As the data is received the
gzip compression will be applied. Since the new filesystem already
exists you will have to do a "zfs receive -Fv" to force it.
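For example, the transfer might look roughly like this (pool, dataset, and host names here are placeholders, not taken from the thread):
# zfs set compression=gzip tank/backup
# zfs send sourcepool/data@latest | ssh backuphost zfs receive -Fv tank/backup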
--chris
From: "Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)"
To: "Chris Dunbar - Earthside, LLC" ,
zfs-discuss@opensolaris.org
Sent: Wednesday, November 28, 2012 10:14:59 PM
Subject: RE: [zfs-discuss] Question about degraded drive
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> b
this situation I would like to know if I can hold off on physically
replacing the drive. Is there a safe method to test it or put it back into
service and see if it fails again?
Thank you,
Chris
- Original Message -
From: "Chris Dunbar - Earthside, LLC"
To: zfs-discuss@opensolaris.org
2012, at 9:08 PM, Freddie Cash wrote:
> You don't use replace on mirror vdevs.
>
> 'zpool detach' the failed drive. Then 'zpool attach' the new drive.
>
> On Nov 27, 2012 6:00 PM, "Chris Dunbar - Earthside, LLC"
> wrote:
>> Hello,
decision to yank and replace?
Thank you!
Chris
[truncated cache statistics output]
My only guess is that the large zfs send / recv streams were affecting
the cache when they started and finished.
Thanks for the responses and help. I am
not sure how snapshots and send/receive affect the arc. Does anyone
else have any ideas?
Thanks,
Chris
st 60 minutes of snapshots (which are sent every minute or
less) as well as one snapshot per day for 30 days. All snapshots are
created on the primary host and pulled to the remote host to avoid
issues.
If you are interested we can push the code to github once we tes
, which is why our main benchmarks have
been using sample sets of production data.
My main reason for this post was to find out if I am getting expected
(or usual) results compared to others. I say this because:
* Chris from DDRDrive was able to get better results with the X1 (well
over 9500 iops)
d LSI 9211-8i connected to the SM backplane.
Networking is 10gbe (x510-DA2) directly connected via SFP+ twin-axial.
Thanks,
Chris
n certainly edit ZFS ACLs when they're exposed to
it over CIFS.
;-)
Chris
ement (?) are planning such a thing, although I have no idea
> on their pricing. The software is still in development.
They have announced pricing for 2 of their 4 ZFS products: see
<http://tenscomplement.com/our-products>.
Chris
On 17 Dec 2011, at 19:35, Edmund White wrote:
> On 12/17/11 8:27 PM, "Chris Ridd" wrote:
>
>
>>
>> Can you explain how you got the SSDs into the HP sleds? Did you buy blank
>> sleds from somewhere, or cannibalise some "cheap" HP drives?
>
lem?
We've got an HP D2700 JBOD attached to an LSI SAS 9208 controller in a DL360G7,
and I'm keen on getting a ZIL into the mix somewhere - either into the JBOD or
the spare bays in the DL360.
Chris
/www.c0t0d0s0.org/uploads/vscanclamav.pdf>
Chris
Did you 4k align your partition table and is ashift=12?
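One quick way to check the pool side of that (the pool name is a placeholder) is to look for ashift in the zdb output:
# zdb -C tank | grep ashift
An ashift of 12 means 4 KB alignment; 9 means 512-byte sectors.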
esystem help?
I'm not sure, but you may still need to do a chmod -R on each filesystem to set
the ACLs on each existing directory.
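Something along these lines, for example (the path and ACL entry are purely illustrative):
# chmod -R A+group:staff:read_data/write_data/execute:allow /tank/home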
Chris
On 22 Jul 2011, at 21:29, Chris Dunbar - Earthside, LLC wrote:
> It's resilvering now - thanks for the help!
I think the command you were trying to recall was prtvtoc.
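For the archives, the usual idiom for copying the label from the old disk to the new one is (device names are placeholders):
# prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2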
Chris
It's resilvering now - thanks for the help!
- Original Message -
From: "Cindy Swearingen"
To: "Chris Dunbar - Earthside, LLC"
Cc: zfs-discuss@opensolaris.org
Sent: Friday, July 22, 2011 4:26:00 PM
Subject: Re: [zfs-discuss] Replacing failed drive
C
past. I just have to find the command again. Once
that is done, do I need to detach the spare before I run the replace command or
does running the replace command automatically bump the spare out of service
and put it back to being just a spare?
Thanks!
Chris
- Original Message
n use
How do I bring the replaced drive back online and get it into the array? Do I
make the new drive the spare or do I bring the new drive online in the mirror
and return the original spare to spare status? Any advice and/or actual
commands would be greatly appreciated!
Thank you,
Chris D
, I'm just wondering if all
h3ll will break loose when I inevitably reboot.
Any help appreciated,
Chris Twa
Solaris variants "zfs list" doesn't show snapshots by default; you
need to add "-t snapshot" (or "-t all") to see them.
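For example:
# zfs list -t snapshot
# zfs list -t all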
Chris
I am running 4 of the 128GB version in our DR environment as L2ARC. I don't
have anything bad to say about them. They run quite well.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tomas Ögren
Sent: Wednesday, June 0
Original Message-
From: Frank Van Damme
Sent: Friday, May 20, 2011 6:25 AM
>Op 20-05-11 01:17, Chris Forgeron schreef:
>> I ended up switching back to FreeBSD after using Solaris for some time
>> because I was getting tired of weird pool corruptions and the like.
>
>-Original Message-
>From: zfs-discuss-boun...@opensolaris.org
>[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus
>
>Over the past few months I have seen mention of FreeBSD a couple time in
> regards to ZFS. My question is how stable (reliable) is ZFS on this platf
On 19 May 2011, at 14:44, Evaldas Auryla wrote:
> Hi Chris, there is no sestopo on this box (Solaris Express 11 151a), fmtopo
> -dV works nice, although it's a bit "overkill" with manually parsing the
> output :)
You need to install pkg:
s in
> enclosure ?
Does /usr/lib/scsi/sestopo or /usr/lib/fm/fmd/fmtopo help? I can't recall how
you work out the arg to pass to sestopo :-(
Chris
have a ways
to go in the packing dept. I still love their prices!
-Chris
>On May 9, 2010 at 5:29PM, Richard Elling wrote:
>>On May 9, 2011, at 12:29 PM, Chris Forgeron wrote:
>>[..]
>> Q1 - Doesn't this behavior mean that the L2ARC can never get data objects if
>> the ARC doesn't hold them?
>
>Yes.
>
>> Is setting pr
not ZFS per se. So I
think the OP might consider how best to add GPU support to the crypto framework.
Chris
Hello,
I'm on FreeBSD 9 with ZFS v28, and it's possible this combination is causing
my issue, but I thought I'd start here first and will cross-post to the FreeBSD
ZFS threads if the Solaris crowd thinks this is a FreeBSD problem.
The issue: From carefully watching my ARC/L2ARC size and acti
I've got a system with 24 Gig of RAM, and I'm running into some interesting
issues playing with the ARC, L2ARC, and the DDT. I'll post a separate thread
here shortly. I think even if you add more RAM, you'll run into what I'm
noticing (and posting about).
-Original Message-
From: zfs-d
I see your point, but you also have to understand that sometimes too many
helpers/opinions are a bad thing. There is a set "core" of ZFS developers who
make a lot of this move forward, and they are the key right now. The rest of us
will just muddy the waters with conflicting/divergent opinions
Can anyone comment on Solaris with zfs on HP systems? Do things work
reliably? When there is trouble how many hoops does HP make you jump
through (how painful is it to get a part replaced that isn't flat out
smokin')? Have you gotten bounced between vendors?
Thanks,
Chris
Erik Trimble
port folks, finger pointing between vendors, or have lots
of grief from an untested combination of parts. If this isn't possible
we'll certainly find another solution. I already know it won't be the
7000 series.
Thank you,
Chris Banal
Marion Hakanson wrote:
jp...@cam.ac.uk said
hese type of vendors will be at NAB this year? I'd like to talk to a
few if they are...
--
Thank you,
Chris Banal
Ah, that's all I really need to know. I expected it to be public, but I
completely understand the need to keep it private so it can move forward
properly. This should hopefully provide enough record for other ZFS
well-wishers who are searching for signs of post-Oracle development.
-Origin
I'm curious where ZFS development is going.
I've been reading through the lists, and watching Oracle, Nexenta, Illumos, and
OpenIndiana for signs of life.
The feeling I get is that while there is plenty of userland work being done,
there is next to nothing on ZFS development outside of the Orac
I have old pool skeletons with vdevs that no longer exist. Can't import them,
can't destroy them, can't even rename them to something obvious like junk1.
What do I do to clean up?
Are you running CIFS with any AD integration, or is it functioning in
work-group mode?
Do you have lockups only when you transfer a lot of data, or will it lock up
without any machine working on the CIFS share?
How long is "time to time"?
-Original Message-
From: zfs-discuss-boun...@op
ls
goes down.
Many thanks to George and his continued efforts.
From: haak...@gmail.com [mailto:haak...@gmail.com] On Behalf Of Mark Alkema
Sent: Tuesday, February 08, 2011 4:38 PM
To: Chris Forgeron
Subject: Re: [zfs-discuss] Repairing Faulted ZFS pool when zbd doesn't
recognize the pool
Quick update;
George has been very helpful, and there is progress with my zpool. I've got
partial read ability at this point, and some data is being copied off.
It was _way_ beyond my skillset to do anything.
Once we have things resolved to a better level, I'll post more details (with a
lot of
devfsadm
> really create the appropriate /dev/dsk and etc. files based on what's present?
Is reviewing the source code to devfsadm helpful? I bet it hasn't changed much
from:
<http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/devfsadm/>
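If the immediate goal is just to refresh the /dev links after a hardware change, this is usually enough (a general sketch, not taken from the thread):
# devfsadm -Cv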
Chris
no trouble with it as far as I could tell. Would only resilver the
data that was changed while that drive was offline. We had no data loss.
Thank you,
Chris Banal
On 15 Jan 2011, at 13:57, Achim Wolpers wrote:
> Am 15.01.11 14:52, schrieb Chris Ridd:
>> What are the normalization properties of these filesystems? The zfs man page
>> says they're used when comparing filenames:
> The normalization properties are set to none. Is this t
e other than none,
and the utf8only property was left unspecified, the
utf8only property is automatically set to on. The
default value of the normalization property is none.
This property cannot be changed after the file system is created.
On 6 January 2011 20:02, Chris Murray wrote:
> On 5 January 2011 13:26, Edward Ned Harvey
> wrote:
>> One comment about etiquette though:
>>
>
>
> I'll certainly bear your comments in mind in future, however I'm not
> sure what happened to the su
KB worth in some, to 640KB in others.
Unfortunately it appears that the bad parts are in critical parts of
the filesystem, but it's not a ZFS matter so I'll see what can be done
by way of repair with Windows/NTFS inside each affected VM. So
whatever went wrong, it was only a small amount of
ther the
content can be repaired. It's not the end of the world if they're gone, but I'd
like to satisfy my own curiosity with this little exercise in recovery.
Thanks again for the input,
Chris
tand there's the potential to get into a mess.
On this occasion there wasn't any power loss, and the event itself
reported success .. ?
Thank you in advance,
Chris
>
> Well, googling for '.$EXTEND' and '$QUOTA' does give some results,
> especially when combined with 'NTFS'. :-)
Aha! Foolishly I'd used zfs in my search string :-)
> Check out the table on "Metafiles" here:
>
s doesn't really work too well :-(
I don't think they're doing any harm, but I'm curious. Someone's bound to
notice and ask me as well :-)
Cheers,
Chris
-vmware-iscsi-connections
-Chris
On Sun, Dec 12, 2010 at 12:47 AM, Martin Mundschenk <
m.mundsch...@mundschenk.de> wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi!
>
> I have configured two LUs following this guide:
>
> http://thegreyblog.blogspot.
Alas you are hosed. There is at the moment no way to shrink a pool which is
what you now need to be able to do.
back up and restore I am afraid.
here it is properly formatted!
NAME          STATE     READ WRITE CKSUM
tank          DEGRADED     0     0     0
  ad4s1d      ONLINE       0     0     0
  raidz1      DEGRADED     0     0     0
    ad6s1d    ONLINE       0     0     0
    ad8s1d    UNAVAIL      0     0     0  cannot open
I'm trying to individually upgrade drives in my raid z configuration, but I
accidentally added my replacement drive to the root rank instead of the raidz1
under it..
Right now things look like this..
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0
iver
works great for you in a L2ARC role after extensive testing, then by all
means use it, I just wanted to pass along my experience.
-Chris
> The RevoDrive should not require a custom device driver as it is based on
> the Silicon Image 3124 PCI-X RAID controller connected to a Pericom PCI-X
Thanks for your response(s). I was able to find someone here in another group
that will setup ZFS for me. Whew !!!
I just got a new system and want to use ZFS. I want it look like it did on the
old system. I'm not a systems person and I did not setup the current system.
The guy who did no longer works here. Can I do zfs list and zpool list and get
ALL the information I need to do this?
Thanks,
ble faster transfer speeds is to enable
blowfish-cbc in your /etc/ssh/sshd_config and then modify the script to use
that cipher.
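Roughly like this, for example (host and dataset names are placeholders, and the exact Ciphers list depends on what else you want enabled):
Ciphers blowfish-cbc,aes128-ctr        (in /etc/ssh/sshd_config, then restart sshd)
# zfs send tank/fs@snap | ssh -c blowfish-cbc remotehost zfs receive backup/fs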
Cheers,
-Chris
, but it's a lot more expensive.
Regards,
Chris Dunbar
On Nov 1, 2010, at 4:44 PM, Roy Sigurd Karlsbakk wrote:
> - Original Message -
> > Hello,
> >
> > I realize this is a perpetual topic, but after re-reading all the
> > messages I had saved on the sub
planning to buy two Intel
X25-E 32 GB drives, but then read something about OCZ Vertex 2 drives and the
benefits of supercapacitors. The server sits in a colo center so the power is
reasonably reliable, but certainly not foolproof.
Thank you,
Chris Dunbar
been
rebooted about five or six times since the pool version upgrade. One should
not have to reboot six times! More mystery to this pool upgrade behavior!!
-Chris
t; display the true and
updated versions, I'm not convinced that the problem is zdb, as the label
config is almost certainly set by the zpool and/or zfs commands. Somewhere,
something is not happening that is supposed to when initiating a zpool
upgrade, but since I
We have two Intel X25-E 32GB SSD drives in one of our servers. I'm using
one for ZIL and one for L2ARC, and we are having great results so far.
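For reference, adding them looks roughly like this (pool and device names are placeholders):
# zpool add tank log c2t0d0
# zpool add tank cache c2t1d0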
Cheers,
-Chris
On Wed, Sep 15, 2010 at 9:43 AM, Richard Elling wrote:
> On Sep 14, 2010, at 6:59 AM, Wolfraider wrote:
>
> > We are
So are there now any methods to achieve the scenario I described to shrink a
pools size with existing ZFS tools? I don't see a definitive way listed on
the old shrinking thread <http://www.opensolaris.org/jive/thread.jspa?threadID=8125>.
Thank you,
-Chris
On Mon, Sep 13, 2010
server, remove the 6 SATA disks.
Put in the 6 SAS disks.
Power on the server.
echo | format to get the disk ID's of the new SAS disks.
zpool create speed raidz disk1 disk2 disk3 disk4 disk5 disk6
Thanks in advance,
-Chris
On Sat, Sep 11, 2010 at 4:37 PM, besson3c wrote:
> Ah
o show as version 27 in zdb? Why does zdb
-D rpool give me can't open on the host bob?
Thank you in advance,
-Chris
ch...@weston:~# zdb
rpool:
version: 22
name: 'rpool'
state: 0
txg: 7254
pool_guid: 17616386148370290153
hostid: 8413798
hostname
Absolutely spot on George. The import with -N took seconds.
Working on the assumption that esx_prod is the one with the problem, I bumped
that to the bottom of the list. Each mount was done in a second:
# zfs mount zp
# zfs mount zp/nfs
# zfs mount zp/nfs/esx_dev
# zfs mount zp/nfs/esx_hedgehog
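For completeness, the mounts above assume the pool was first imported without mounting anything, i.e.:
# zpool import -N zp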
in one of the windows to
see if I could even see a pool on another disk, and that hasn't returned me
back to the prompt yet. Also tried to SSH in with another session, and that
hasn't produced the login prompt.
Thanks in advance,
Chris
Thank you everyone for your answers.
Cost is a factor, but the main obstacle is that the chassis will only support
four SSDs (and that's with using the spare 5.25 bay for a 4x2.5 hotswap bay).
My plan now is to buy the ssd's and do extensive testing. I want to focus my
performance efforts on
I have three zpools on a server and want to add a mirrored pair of ssd's for
the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools or
is it one ZIL SLOG device per zpool?
> You can also use the "zpool split" command and save
> yourself having to do the zfs send|zfs recv step -
> all the data will be preserved.
>
> "zpool split rpool preserve" does essentially
> everything up to and including the "zpool export
> preserve" commands you listed in your original email.
>
>
> So, after rebuilding, you don't want to restore the
> same OS that you're
> currently running. But there are some files you'd
> like to save for after
> you reinstall. Why not just copy them off somewhere,
> in a tarball or
> something like that?
It's about 200+ gigs of files. If I had a
called on the new SATA controller)
3. Run zpool import against "preserve", copy over data that should be migrated.
4. Rebuild the mirror by destroying the "preserve" pool and attaching c7d0s0 to
the rpool mirror.
Am I missing anything?
--
Chris
here my existing zpool is called "tank" and the new disk is c4t0d0, would the
command be something like:
zpool create newtank raidz tank c4t0d0?
Many thanks,
Chris
Just to close this. It turns out you can't get the crtime over NFS so without
access to the NFS server there is only limited checking that can be done.
I filed
CR 6956379 Unable to open extended attributes or get the crtime of files in
snapshots over NFS.
--chris
usion/question. Is it possible to
share the same ZFS file system with multiple ESX hosts via iSCSI? My belief is
that an iSCSI connection is sort of like having a dedicated physical drive and
therefore does not lend itself to sharing between multiple systems. Please set
me straight.
Thank you,
.
If they are able to be reused then when an inode number matches I would also
have to compare the real creation time which requires looking at the extended
attributes.
--chris
If I create a file in a file system, snapshot the file system, and then delete
the file, is it guaranteed that while the snapshot exists no new file will be
created with the same inode number as the deleted file?
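To make the scenario concrete (an illustration with made-up names):
# zfs create tank/fs
# touch /tank/fs/file; ls -i /tank/fs/file
# zfs snapshot tank/fs@snap
# rm /tank/fs/file
# ls -i /tank/fs/.zfs/snapshot/snap/file
The question is whether a later file in tank/fs can ever be handed the inode number shown for the deleted file while tank/fs@snap still exists.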
--chris
SAS: full duplex
SATA: half duplex
SAS: dual port
SATA: single port (some enterprise SATA has dual port)
SAS: 2 active channel - 2 concurrent write, or 2 read, or 1 write and 1 read
SATA: 1 active channel - 1 read or 1 write
SAS: Full error detection and recovery on both read and write
SATA: err
but I don't think Mac OS comes with that!
>
> Use Wireshark (formerly Ethereal); works great for me. It does require X11
> on your machine.
Macs come with the command-line tcpdump tool. Wireshark (recommended anyway!)
can read files saved by tcpdump and snoop.
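For example, a capture taken with snoop on the Solaris side (file path and host name are placeholders) can be copied to the Mac and opened in Wireshark, and tcpdump will usually read it too:
# snoop -o /tmp/nfs.cap host fileserver
$ tcpdump -r /tmp/nfs.cap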
Cheers,
Chris
> One of my pools (backup pool) has a disk which I
> suspect may be going south. I have a replacement disk
> of the same size. The original pool was using one of
> the partitions towards the end of the disk. I want to
> move the partition to the beginning of the disk on
> the new disk.
>
> Does ZF
fixes in build 132 related to destroying
snapshots while sending replication streams. I'm unable to reproduce
the 'zfs holds -r' issue on build 133. I'll try build 134, but I'm
not aware of any changes in that area.
-Chris
this case, I realize
that Jason also needs to maximize the space he has in order to store all of
those legitimately copied Blu-Ray movies. ;-)
Regards,
Chris
On Apr 7, 2010, at 3:09 PM, Jason S wrote:
> Thank you for the replies guys!
>
> I was actually already planning to get another
can recreate the pool,
> but it's going to take me several days to get all the data back. Is there any
> known workaround?
Charles,
Can you 'zpool export' and 'zpool import' the pool, and then
try destroying the snapshot again?
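i.e., roughly (pool and snapshot names are placeholders for whatever is in use):
# zpool export tank
# zpool import tank
# zfs destroy tank/fs@snap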
-Chris
On 31 Mar 2010, at 17:50, Bob Friesenhahn wrote:
> On Wed, 31 Mar 2010, Chris Ridd wrote:
>
>>> Yesterday I noticed that the Sun Studio 12 compiler (used to build
>>> OpenSolaris) now costs a minimum of $1,015/year. The "Premium" service
>>> plan
se copy" for SDN members; the
$1015 you quote is for the standard Sun Software service plan. Is a service
plan now *required*, a la Solaris 10?
Cheers,
Chris
Brandon,
Thank you for the explanation. It looks like I will have to share out each file
system. I was trying to keep the number of shares manageable, but it sounds
like that won't work.
Regards,
Chris
On Mar 24, 2010, at 9:36 PM, Brandon High wrote:
> 2010/3/24 Chris Dunbar
> I
e a snapshot of tank/nfs, does it include the data in foo1 and foo2
or are they excluded since they are separate ZFS file systems?
Thanks for your help.
Regards,
Chris Dunbar
ll be doing the same thing. I think
the 6 x 2-way mirror configuration gives me the best mix of performance and
fault tolerance.
Regards,
Chris Dunbar
On Mar 19, 2010, at 5:44 PM, Erik Trimble wrote:
> Chris Dunbar - Earthside, LLC wrote:
> > Hello,
> >
> > After being imme
the process in the following link:
http://www.tuxyturvy.com/blog/index.php?/archives/59-Aligning-Windows-Partitions-Without-Losing-Data.html
With any luck I'll then see a smaller dedup table, and better performance!
Thanks to those for feedback,
Chris
>
> I'll say it again: neither 'zfs send' or (s)tar is an
> enterprise (or
> even home) backup system on their own one or both can
> be components of
> the full solution.
>
Up to a point. zfs send | zfs receive does make a very good back up scheme for
the home user with a moderate amount of s
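A minimal incremental send/receive cycle of that sort, with placeholder names, looks like:
# zfs snapshot -r tank@2010-03-01
# zfs send -R -i tank@2010-02-01 tank@2010-03-01 | ssh backuphost zfs receive -d backup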
do have a 13th disk available as a hot spare. Would
it be available for either pool if I went with two? Finally, would I be better
off with raidz2 or something else instead of the striped mirrored sets?
Performance and fault tolerance are my highest priorities.
Tha
Please excuse my pitiful example. :-)
I meant to say "*less* overlap between virtual machines", as clearly
block "AABB" occurs in both.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Murray
4---
ZFS blocks for this VM would be " CC", "CCAA", "AABB" etc. So, no overlap
between virtual machines, and no benefit from dedup.
I may have it wrong, and there are indeed 30,785,627 unique blocks in my setup,
but if there's a mechanism for checking align
OK I have a very large zfs snapshot I want to destroy. When I do this, the
system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with
128GB of memory. Now this may be more of a function of the IO device, but let's
say I don't care that this zfs destroy finishes quickly. I actual
Basically, it boils down to this: upgrade your pools ONLY when you are sure
> the new BE is stable and working for you, and you have no desire to revert to
> the old pool. I run a 'zpool upgrade' right after I do a 'beadm destroy
> '
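In command form that ordering would be roughly (the BE name is a placeholder):
# beadm destroy old-be
# zpool upgrade rpool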
I'd also add that for disas
reatest place to look when utilizing zfs.
Thanks,
Chris