property, so the SMF service doesn't constantly
> scan all the filesystems and volumes for their zfs properties. It just checks
> the conf file and knows instantly which ones need to be chown'd.
ing that
are - from what Karl said about balancing the data out as one example.
Cheers,
Brian
>
> --
> -Peter Tribble
> http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
your SAN HBA.
Summary - my experience on FC SANs (previous and ongoing) is that ZFS is great
in that it doesn't dictate what LUN sizes are best to use. The right choice is a
combination of my storage array's limitations and strengths, as well as
my OS configuration and application workload t
On 07/ 9/12 04:36 PM, Ian Collins wrote:
On 07/10/12 05:26 AM, Brian Wilson wrote:
Yep, thanks, and to answer Ian with more detail on what TruCopy does.
TruCopy mirrors between the two storage arrays, with software running on
the arrays, and keeps a list of dirty/changed 'tracks'
On 07/06/12, Richard Elling wrote:
First things first, the panic is a bug. Please file one with your OS
supplier. More below...
Thanks! It helps that it recurred a second night in a row.
On Jul 6, 2012, at 4:55 PM, Ian Collins wrote:
> On 07/ 7/12 11:29 AM, Brian Wilson wr
On 07/ 6/12 04:17 PM, Ian Collins wrote:
On 07/ 7/12 08:34 AM, Brian Wilson wrote:
Hello,
I'd like a sanity check from people more knowledgeable than myself.
I'm managing backups on a production system. Previously I was using
another volume manager and filesystem on Solaris, and
uns go read-only, but I could be wrong.
Anyway, am I off my rocker? This should work with ZFS, right?
Thanks!
Brian
--
---
Brian Wilson, Solaris SE, UW-Madison DoIT
Room 3114 CS&S, 608-263-8047
brian
n straight
sequential IO, whereas on something more random I would bet they won't
perform as well as they do in this test. The tool I've seen used for
that sort of testing is iozone - I'm sure there are others as well, and
I can't attest to what's better or worse.
cheers,
B
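For illustration only (not from the original message; the pool path and sizes below are invented), an iozone run that exercises random rather than purely sequential I/O might look something like:
iozone -i 0 -i 2 -r 8k -s 2g -f /tank/iozone.tmp
Here -i 0 runs the initial write test (which creates the file), -i 2 runs the random read/write test, -r sets the record size and -s the file size; the values worth using depend entirely on the workload being modelled.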
's redundancy), and in every case I've had it repair data
automatically via a scrub. The one case where it didn't was when the
disk controller that both drives happened to share (bad design, yes) started
erroring and corrupting writes to both disks in parallel, so there was
no good data
On 10/18/11 11:46 AM, Mark Sandrock wrote:
On Oct 18, 2011, at 11:09 AM, Nico Williams wrote:
On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote:
I just wanted to add something on fsck on ZFS - because for me that used to
make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime en
issing aren't required for my 24x7 5+ 9s application
to run (e.g. log files), I can get it rolling again without them
quickly, and then get those files recovered from backup afterwards as
needed, without having to recover the entire pool from backup.
cheers,
Brian
e all my drives
available. I cannot move these drives to any other box because they are
consumer drives and my servers all have ultras.
Most modern boards will boot from a live USB stick.
Thanks for the input.
On Sat, May 28, 2011 at 1:35 PM, Richard Elling wrote:
> On May 28, 2011, at 10:15 AM, Edward Ned Harvey wrote:
>
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Brian
>>>
>&
On Sat, May 28, 2011 at 1:15 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Brian
>>
>> I have a raidz2 pool with one disk that seems to be going bad, several
>> errors
>> ar
I have a raidz2 pool with one disk that seems to be going bad, several errors
are noted in iostat. I have an RMA for the drive, however - now I am wondering
how I proceed. I need to send the drive in and then they will send me one
back. If I had the drive on hand, I could do a zpool replace.
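As a sketch of the usual approach (not from this thread; pool and device names are placeholders), the failing disk can be offlined, the pool run degraded while the drive is out for RMA, and the replacement resilvered into the same slot:
zpool offline tank c1t3d0      # run degraded while the drive is away
# ...physically swap in the replacement drive in the same bay...
zpool replace tank c1t3d0      # resilver onto the new disk in that slot
zpool status tank              # watch resilver progress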
I have a situation coming up soon in which I will have to migrate some iSCSI
backing stores setup with comstar. Are there steps published anywhere on how
to move these between pools? Does one still use send/receive or do I somehow
just move the backing store? I have moved filesystems before us
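For what it's worth, a hedged sketch (dataset names invented, not from the thread): a zvol used as a COMSTAR backing store can be copied with an ordinary snapshot plus send/receive, after which the logical unit needs to reference the zvol at its new path:
zfs snapshot oldpool/vol1@migrate
zfs send oldpool/vol1@migrate | zfs receive newpool/vol1
# the COMSTAR logical unit must then point at /dev/zvol/rdsk/newpool/vol1;
# how that is done depends on how the LU was originally created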
I am trying to understand the various error conditions reported by iostat. I
noticed during a recent scrub that my transport errors were increasing.
However, after a fair amount of searching I am unsure if that indicates a drive
failure or not. I also have a lot of illegal request errors. No
I had not really considered that. I was going under the assumption that 2TB
drives are still "too big" for a single vdev in terms of resilver times if
there is a failure. I also have a 20 bay case, so I have plenty of room to
expand. So I would keep my 1TB drives around anyhow.
Thanks for the
Thanks. I hadn't come across the Hitachis. They certainly seem to have a
price premium associated with them - but I suppose that is to be expected. I
was sort of looking towards 'greener' drives since performance wasn't a large
factor for either of these vdevs.
Seems too bad all the others
The time has come to expand my OpenSolaris NAS.
Right now I have 6 1TB Samsung Spinpoints in a Raidz2 configuration. I also
have a mirrored root pool.
The Raidz2 configuration should be for my most critical data - but right now it
is holding everything so I need to add some more pools and mo
I've been having the same problems, and it appears to be from a remote
monitoring app that calls zpool status and/or zfs list. I've also found
problems with PERC and I'm finally replacing the PERC cards with SAS5/E
controllers (which are much cheaper anyway). Every time I reboot, the PERC
tel
Thanks, that did it. I thought "detach" was only for mirrors and I have a
raidz2, so I didn't think to use that there. I tried replace/remove.
I guess the "spare" is effectively a mirror of the original disk and the spare
disk, and is treated as such.
Thanks again,
Brian
On
c10t22d0 INUSE currently in use
errors: No known data errors
How can I get the spare out of the pool?
Thanks,
Brian
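For reference, a minimal sketch (the pool name is a placeholder; only the device name comes from the post): an in-use hot spare is released with zpool detach, naming the spare device:
zpool detach tank c10t22d0     # release the in-use spare
zpool status tank              # the spare should show as AVAIL again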
That seems to have done the trick. I was worried because in the past I've had
problems importing faulted file systems.
"missing". What is the proper procedure to deal
with this?
-brian
I've posted a post-mortem followup thread:
http://opensolaris.org/jive/thread.jspa?threadID=133472
I am afraid I can't describe the exact procedure that eventually fixed the file
system as I merely observed it while Victor was logged into my system. I am
quoting from the explanation he provided but if he reads this perhaps he could
add whatever details seem pertinent.
was.
3) Could this error be recovered from automatically? This was the root of a
zfs file system and, regardless of the mode bits, it was probably clear that it
should be treated as a directory.
Thanks for everyone's help with diagnosing this.
-brian
error they somehow introduced, or perhaps
I've found a unique codepath that is relevant pre-134 as well.
Earlier today I was able to send some zdb dump information to Cindy which
hopefully will shed some light on the situation (I would be happy to send it to
you as well).
-brian
On Tue, Aug 17,
On Aug 10, 2010, at 4:07 PM, Cindy Swearingen wrote:
> Hi Brian,
>
> Is the pool exported before the update/upgrade of PowerPath software?
Yes, that's the standard procedure.
> This recommended practice might help the resulting devices to be more
> coherent.
>
> If t
On some machines running PowerPath, there are sometimes issues after an
update/upgrade of the PowerPath software. Sometimes the pseudo devices get
remapped and change names. ZFS appears to handle it OK, however sometimes it
then references half native device names and half the emcpower pseudo d
t on the filesystem and
it was working well when I gracefully shut down (to physically move the
computer).
I am a bit at a loss. With copy-on-write and a clean pool how can I have
corruption?
-brian
On Mon, Aug 2, 2010 at 12:52 PM, Cindy Swearingen <cindy.swearin...@oracle.com> wrote:
> B
Thanks Preston. I am actually using ZFS locally, connected directly to 3 sata
drives in a raid-z pool. The filesystem is ZFS and it mounts without complaint
and the pool is clean. I am at a loss as to what is happening.
-brian
h the filesystem (cd, chown, etc)
-brian
ill any of these processes.
Time for hard-reboot.
/Brian
recognizable until I restart the enclosure.
This same demo works fine when using USB sticks, and maybe that's because each
USB stick has its own controller.
Thanks for your help,
Brian
Hi,
I'm currently trying to work with a quad-bay USB drive enclosure. I've created
a raidz pool as follows:
bleon...@opensolaris:~# zpool status r5pool
  pool: r5pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        r5pool      ONLINE
On 7/6/2010 10:37 AM, Victor Latushkin wrote:
On Jul 6, 2010, at 6:30 PM, Brian Kolaci wrote:
Well, I see no takers or even a hint...
I've been playing with zdb to try to examine the pool, but I get:
# zdb -b pool4_green
zdb: can't open pool4_green: Bad exchange descriptor
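One possible avenue, offered as a hedged suggestion rather than anything from the thread (I may be misremembering the exact flags): zdb can sometimes still examine a pool it cannot open normally by reading the configuration from the device labels instead of the cache file:
zdb -e -b pool4_green    # -e treats the pool as exported and reads its config from the disk labels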
s in the logs and it just
"disappeared" without a trace.
The only logs are from subsequent reboots where it says a ZFS pool failed to
open.
It does not give me a warm & fuzzy about using ZFS as I've relied on it heavily
in the past 5 years.
Any advice would be well appreciate
type='disk'
id=6
guid=14740659507803921957
path='/dev/dsk/c10t6d0s0'
devid='id1,s...@n60026b9040e26100139d854a09957d56/a'
phys_path='/p...@0
below - but the backup did complete as the pool remained online.
Thanks for your help Cindy,
Brian
Cindy Swearingen wrote:
I reviewed the zpool clear syntax (looking at my own docs) and didn't
remember that a one-device pool probably doesn't need the device
specified. For pools with ma
Interesting, this time it worked! Does specifying the device to clear
cause the command to behave differently? I had assumed w/out the device
specification, the clear would just apply to all devices in the pool
(of which there is just one).
Thanks,
Brian
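For readers skimming the archive, the two forms being compared are (pool and device names are placeholders):
zpool clear tank            # clear error counts on every device in the pool
zpool clear tank c0t0d0     # clear error counts on a single device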
Cindy Swearingen wrote:
Hi Brian
Hi Cindy,
The scrub didn't help and yes, this is an external USB device.
Thanks,
Brian
Cindy Swearingen wrote:
Hi Brian,
You might try running a scrub on this pool.
Is this an external USB device?
Thanks,
Cindy
On 06/29/10 09:16, Brian Leonard wrote:
Hi,
I have a zpool whi
r to destroy and recreate the pool?
Thanks,
Brian
never had to restore a
>> whole file system. I get requests for a few files,
>> or somebody's mailbox or somebody's http document
>> root.
>> You can directly install it from CSW (or blastwave).
>
> Thanks for your comments, Brian. I should look at Bacula i
I use Bacula, which works very well (much better than Amanda did).
You may be able to customize it to do direct zfs send/receive, however I find
that although they are great for copying file systems to other machines, they
are inadequate for backups unless you always intend to restore the whole f
device names
from the sending hardware?
On 06/23/10 18:15, Lori Alt wrote:
Cindy Swearingen wrote:
On 06/23/10 10:40, Evan Layton wrote:
On 6/23/10 4:29 AM, Brian Nitz wrote:
I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub
Ok -
So I unmounted all the directories, and then deleted them from /media, then I
rebooted and everything remounted correctly and the system is functioning
again..
OK, time for a zpool scrub, then I will try my export and import..
whew :-)
Did some more reading.. Should have exported first... gulp...
So, I powered down and moved the drives around until the system came back up
and zpool status is clean..
However, now I can't seem to boot. During boot it finds all 17 ZFS filesystems
and starts mounting them.
I have several file
Did a search, but could not find the info I am looking for.
I built out my OSOL system about a month ago and have been gradually making
changes before I move it into production. I have set up a mirrored rpool and a
6 drive raidz2 pool for data. In my system I have 2 8-port SAS cards and 6
port
On Jun 1, 2010, at 2:43 PM, Steve D. Jost wrote:
Definitely not a silly question. And no, we create the pool on
node1 then set up the cluster resources. Once set up, Sun Cluster
manages importing/exporting the pool into only the active cluster
node. Sorry for the lack of clarity.. not
Silly question - you're not trying to have the ZFS pool imported on
both hosts at the same time, are you? Maybe I misread; I had a hard
time following the full description of what exact configuration caused
the SCSI resets.
On Jun 1, 2010, at 2:22 PM, Steve Jost wrote:
Hello All,
We are
Ok. What worked for me was booting with the live CD and doing:
pfexec zpool import -f rpool
reboot
After that I was able to boot with AHCI enabled. The performance issues I was
seeing are now also gone. I am getting around 100 to 110 MB/s during a scrub.
Scrubs are completing in 20 minutes
Not completely. I noticed my performance problem in my "tank" rather than my
rpool. But my rpool was sharing a controller (the motherboard controller) with
some devices in both the rpool and tank.
Sometimes when it hangs on boot, hitting the space bar or any key won't bring it
back to the command line. That is why I was wondering if there was a way to
not show the splashscreen at all, and rather show what it was trying to load
when it hangs.
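One option, offered from memory rather than from the original thread (paths and entries may differ per install): the splash screen comes from the GRUB entry in the root pool's menu.lst, so a verbose boot can be had by editing that entry:
# in /rpool/boot/grub/menu.lst, within the active boot entry:
#  - remove the splashimage, foreground and background lines
#  - drop ",console=graphics" from the kernel$ line and add -v, e.g.
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS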
Thanks -
I can give reinstalling a shot. Is there anything else I should do first?
Should I export my tank pool?
I am not sure I fully understand the question... It is set up as raidz2 - is
that what you wanted to know?
Is there a way within OpenSolaris to detect if AHCI is being used by various
controllers?
I suspect you may be right and AHCI is not turned on. The BIOS for this
particular motherboard is fairly confusing on the AHCI settings. The only
setting I have is actually in the RAID section, and it
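One quick check, as a hedged aside (not from the thread): see whether the ahci driver has bound to anything in the device tree:
prtconf -D | grep -i ahci    # lists nodes bound to the ahci driver, if any
If the controller is running in IDE/compatibility mode instead, the disks typically show up under pci-ide with the cmdk driver rather than ahci.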
Following up with some more information here:
This is the output of "iostat -xen 30"
                    extended device statistics              ---- errors ----
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  296.8    2.9 36640.2    7.5  7.8  2.0   26.1
I am new to OSOL/ZFS but have just finished building my first system.
I detailed the system setup here:
http://opensolaris.org/jive/thread.jspa?threadID=128986&tstart=15
I ended up having to add an additional controller card as two ports on the
motherboard did not work as standard SATA ports.
Very helpful. I just started to set up my system and have run into a problem
where my SATA ports 7/8 aren't really SATA ports; they are behind an unsupported
RAID controller, so I am in the market for a compatible controller.
Very helpful post.
(3) Was more about the size than the Green vs. Black issue. This is all
assuming most people are looking at green drives for the cost benefits
associated with their large sizes. You are correct: Green and Black would most
likely have the same number of platters per size.
I am new to OSOL/ZFS myself -- just placed an order for my first system last
week.
However, I have been reading these forums for a while - a lot of the data seems
to be anecdotal, but here is what I have gathered as to why the WD green drives
are not a good fit for a RAIDZ(n) system.
(1) They s
could be a cause of the problem you
are describing.
This doc from VMware is aimed at block-based storage but it has some
concepts that might be helpful as well as info on aligning guest OS
partitions:
http://www.vmware.com/pdf/esx3_partition_align.pdf
-Brian
Chris Murray wrote:
Good evenin
On Mar 2, 2010, at 11:09 AM, Bob Friesenhahn wrote:
> On Tue, 2 Mar 2010, Brian Kolaci wrote:
>>
>> What is the probability of corruption with ZFS in Solaris 10 U6 and up in a SAN
>> environment? Have people successfully recovered?
>
> The probability of corruption in
s, they require redundancy at the hardware level, and they won't budge on
that and won't do additional redundancy at the ZFS level.
So given the environment, would it be better to have lots of small pools, or one
large shared pool?
Thanks,
Brian
re RAIDs.
I'm not too sure what to do with zdb to see anything.
Any ideas as to what I can do to recover the rest of the data?
There are still some database files on there I need.
Thanks,
Brian
Thanks to everyone who has tried to help. This has gotten a bit crazier: I
removed the 'faulty' drive and let the pool run in degraded mode. It would
appear that now another drive has decided to play up;
de-bash-4.0# zpool status
  pool: data
 state: DEGRADED
status: One or more devices has b
Some more back story. I initially started with Solaris 10 u8, and was getting
40ish MB/s reads, and 65-70MB/s writes, which was still a far cry from the
performance I was getting with OpenFiler. I decided to try OpenSolaris
2009.06, thinking that since it was more "state of the art & up to dat
I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN
box, and am experiencing absolutely poor / unusable performance.
Where to begin...
The Hardware setup:
Supermicro 4U 24 Drive Bay Chassis
Supermicro X8DT3 Server Motherboard
2x Xeon E5520 Nehalem 2.26 Quad Core CPUs
4
Ok, I changed the cable and also tried swapping the port on the motherboard.
The drive continued to have huge asvc_t and also started to have huge wsvc_t. I
unplugged it and the 'pool' is now operating as expected, performance-wise.
See the 'storage' forum for any further updates as I am now
>
> I'd say your easiest two options are swap ports and see if the problem
> follows the drive. If it does, swap the drive out.
>
> --Tim
Yep, that sounds like a plan.
Thanks for your suggestion.
While not strictly a ZFS issue as such, I thought I'd post here as this and the
storage forums are my best bet in terms of getting some help.
I have a machine that I recently set up with b130, b131 and b132. With each
build I have been playing around with ZFS raidz2 and mirroring to do a little
Interesting comments..
But I am confused.
Performance for my backups (compression/deduplication) would most likely not be
#1 priority.
I want my VMs to run fast - so is it deduplication that really slows things
down?
Are you saying raidz2 would overwhelm current I/O controllers to where I cou
It sounds like the consensus is more cores over clock speed. Surprising to me
since the difference in clock speed was over 1 GHz. So, I will go with a quad
core.
I was leaning towards 4GB of RAM - which hopefully should be enough for dedup
as I am only planning on dedupping my smaller file sy
Thanks for the reply.
Are cores better because the compression/deduplication is multi-threaded
or because of multiple streams? It is a pretty big difference in clock speed -
so I'm curious as to why cores would be better. Glad to see your 4 core system is
working well for you - so seems like
I am starting to put together a home NAS server that will have the following
roles:
(1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5 HD
streams at a time. These will be streamed live to the NAS box during recording.
(2) Playback TV (could be stream being recorded, co
Got an answer emailed to me that said, "you need to use a second pfexec after
the | like this: pfexec prtvtoc /dev/rdsk/c7d0s2 | pfexec fmthard -s -
/dev/rdsk/c7d1s2"
Thanks for the quick response, emailer.
/rdsk/c7d1s2
I get: fmthard: Cannot open device /dev/rdsk/c7d1s2 - Permission denied
Any ideas as to what I might be doing wrong here?
Thanks, Brian
I was frustrated with this problem for months. I've tried different
disks, cables, even disk cabinets. The driver hasn't been updated in
a long time.
When the timeouts occurred, they would freeze for about a minute or
two (showing the 100% busy). I even had the problem with less than 8
L
, but I cannot tell you if any newer versions support later
zfs versions.
John,
You are already running the Update 8 kernel (141444-09). That is the
latest version of ZFS that is available for Solaris 10.
-Brian
I can't answer your question - but I would like to see more details about the
system you are building (sorry if off topic here). What motherboard and what
compact flash adapters are you using?
Hi all,
I have a home server based on SNV_127 with 8 disks;
2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool
This server performs a few functions;
NFS : for several 'lab' ESX virtual machines
NFS : mythtv storage (videos, music, recordings etc)
Samba : for home directories for all networke
d UFS on the disks.
I had planned on making this system a master database server, however
I'm still getting timeouts with it running as a slave, so I don't have any
comfort promoting this system to the master with the timeouts.
Any suggestions?
Thanks,
Brian
Thanks for the help.
I was curious whether zfs send|receive was considered suitable given a few
things I've read which said something along the lines of "don't count on being
able to restore this stuff". Ideally that is what I would use with the
'incremental' option so as to only back up ch
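As a hedged illustration (dataset, snapshot and host names are invented): an incremental send transfers only what was written between two snapshots, so a typical cycle looks like:
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backuphost zfs receive backup/data
zfs snapshot tank/data@tuesday
zfs send -i @monday tank/data@tuesday | ssh backuphost zfs receive backup/data
The receiving side must still hold the @monday snapshot for the incremental stream to apply.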
Hello all,
Are there any best practices / recommendations for ways of doing this ?
In this case the ZVOLs would be iSCSI LUNs containing ESX VMs. I am aware
of the need for the VMs to be quiesced for the backups to be useful.
Cheers.
machine is basically a desktop machine in a rack mount case
(similar to a Blade 100) and is also vintage 2001. I wouldn't expect
much performance out of it regardless.
-Brian
Thanks all,
It was a government customer that I was talking to and it sounded like a good
idea, however with the certification paper trails required today, I don't think
it would be of such a benefit after all. It may be useful on the disk
evacuation, but they're still going to need their pa
eradication patterns back to the
removed blocks.
By any chance, has this been discussed or considered before?
Thanks,
Brian
Why does resilvering an entire disk yield different amounts of resilvered data
each time?
I have read that ZFS only resilvers what it needs to, but in the case of
replacing an entire disk with another formatted clean disk, you would think the
amount of data would be the same each time
Please don't feed the troll.
:)
-brian
On Wed, Oct 21, 2009 at 06:32:42AM -0700, Robert Dupuy wrote:
> There is a debate tactic known as complex argument, where so many false and
> misleading statements are made at once, that it overwhelms the respondent.
>
> I'm just
On Thu, Oct 15, 2009 at 11:09:32AM -0600, Cindy Swearingen wrote:
> Hi Greg,
>
> With two disks, I would start with a mirror. Then, you could add
Additionally, with a two-disk RAIDZ1 you are doing parity calculations for
no good reason. I would recommend a mirror.
-brian
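To illustrate the point (device names invented): a two-disk mirror gives the same usable capacity as a two-disk raidz1 - one disk's worth - without the parity computation:
zpool create tank mirror c0t0d0 c0t1d0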
I am having a strange problem with liveupgrade of a ZFS boot environment. I found
a similar discussion on zones-discuss, but this happens for me on installs
with and without zones, so I do not think it is related to zones. I have been
able to reproduce this on both sparc (ldom) and x86 (phsy
On 10/12/2009 04:38 PM, Paul B. Henson wrote:
I only have ZFS filesystems exported right now, but I assume it would
behave the same for ufs. The underlying issue seems to be the Sun NFS
server expects the NFS client to apply the sgid bit itself and create the
new directory with the parent directo
I had a 50MB zfs volume that was an iSCSI target. This was mounted into a
Windows system (NTFS) and shared on the network. I used notepad.exe on a remote
system to add/remove a few bytes at the end of a 25MB file.
Just realised I missed out a rather important word there, which could confuse.
So the conclusion I draw from this is that the --incremental-- snapshot simply
contains every written block since the last snapshot regardless of whether the
data in the block has changed or not.
I took binary dumps of the snapshots taken in between the edits and this showed
that there was actually very little change in the block structure, however the
incremental snapshots were very large. So the conclusion I draw from this is
that the snapshot simply contains every written block since
I am looking to use OpenSolaris/ZFS to create an iSCSI SAN to provide storage
for a collection of virtual systems and replicate to an offsite device.
While testing the environment I was surprised to see the size of the
incremental snapshots, which I need to send/receive over a WAN connection,
c
e the other day, my first
thought was "Oh cool, they reinvented Prestoserve!"
-Brian