On 03/03/2010 15:19, Tomas Ögren wrote:
Memtest doesn't want potential errors to be hidden by ECC, so it
disables ECC to see them if they occur.
Still, it is a valid question - is there a way under the OS to check if ECC is
disabled or enabled?
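One way to check on Solaris x86, assuming smbios(1M) is available (the physical memory array record reports the ECC type):
smbios -t SMB_TYPE_MEMARRAY | grep -i ecc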
--
Robert Milkowski
http://milek.blogspo
On 04/03/2010 09:46, Dan Dascalescu wrote:
Please recommend your up-to-date high-end hardware components for building a
highly fault-tolerant ZFS NAS file server.
2x M5000 + 4x EMC DMX
Sorry, I couldn't resist :)
--
Robert Milkowski
http://milek.blogspo
k. It's dead.
But I'm willing to go through more hackery if needed.
(If I need to destroy and re-create these LUNs on the storage array, I can do
that too, but I'm hoping for something more host-based)
--Jason
you need to destroy the zfs labels.
overwrite them with zeros using dd, as all label copies are read and validated.
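A rough sketch of what that can look like (device name and size are placeholders - ZFS keeps two label copies at the start and two at the end of each vdev, roughly 512KB on each side):
dd if=/dev/zero of=/dev/rdsk/c1t1d0s0 bs=1024k count=1                   # wipe the front labels
dd if=/dev/zero of=/dev/rdsk/c1t1d0s0 bs=1024k seek=$((SIZE_MB - 1))     # wipe the back labels; SIZE_MB is the slice size in MB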
--
Robert Milkowski
http://milek.blogspot.com
On 09/03/2010 13:18, Tony MacDoodle wrote:
Can I create a devalias to boot the other mirror similar to UFS?
yes
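For example, at the OBP prompt (the device path below is only a placeholder - take the real path of the second half of the mirror from ls -l /dev/dsk/... or from probe-scsi-all):
ok nvalias rootmirror /pci@1f,4000/scsi@3/disk@1,0:a
ok boot rootmirror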
panic.
For more information look at:
http://blogs.sun.com/mws/entry/fma_on_x64_and_at
http://milek.blogspot.com/2006/05/psh-smf-less-downtime.html
--
Robert Milkowski
http://milek.blogspot.com
take into account more than just the server where a scrub will be running,
as while it might not impact that server it might cause an issue for
others, etc.
--
Robert Milkowski
http://milek.blogspot.com
subdirectory.
So unless you use NFSv4 with mirror mounts or an automounter, other NFS
versions will show you the contents of a directory and not a filesystem. It
doesn't matter if it is zfs or not.
--
Robert Milkowski
http://milek.blogspot.com
On 22/03/2010 08:49, Andrew Gabriel wrote:
Robert Milkowski wrote:
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc. and not to
zfs itself.
However, what would be nice to have is the ability to freeze/resume a
scrub and also limit its rate of scrubbing.
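For example, a root crontab entry for a weekly scrub (pool name assumed):
0 3 * * 0 /usr/sbin/zpool scrub tank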
One
server does.
look for the mirror mounts feature in NFSv4.
--
Robert Milkowski
http://milek.blogspot.com
r off getting NetApp
Well, spend some extra money on a really fast NVRAM solution for the ZIL and
you will get a much faster ZFS environment than NetApp while still spending
much less money. Not to mention all the extra flexibility compared
to NetApp.
--
Robert Milkowski
http:
than the last 30s if the nfs server suddenly lost power.
To clarify - if the ZIL is disabled it makes no difference at all to
pool/filesystem-level consistency.
--
Robert Milkowski
http://milek.blogspot.com
r log is put on
a separate device?
Well, it is actually different. With ZFS you can still guarantee it to
be consistent on-disk, while others generally can't, and often you will
have to run fsck just to mount the fs in r/w...
--
Robert Milkowski
http://milek.bl
spare to cover the other failed drive? And
can I hotspare it manually? I could do a straight replace, but that
isn't quite the same thing.
It seems like it is event-driven. Hmmm... perhaps it shouldn't be.
Anyway, you can do zpool replace and it is the same thing, why wou
o use zpool replace.
Once you fix the failed drive and it re-synchronizes, the hot spare will
detach automatically (regardless of whether you forced it to kick in via zpool
replace or it did so due to FMA).
For more details see http://blogs.sun.com/eschrock/entry/zfs_hot_spares
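A sketch of kicking the spare in by hand (pool and device names are placeholders):
zpool replace tank c1t5d0 c4t0d0    # c4t0d0 being one of the pool's hot spares
zpool status tank                   # watch the resilver; the spare detaches once the original is fixed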
--
Robert Milkowski
http:
ld cause a significant performance problem.
or there might be an extra zpool-level (or system-wide) property to
enable checking checksums on every access from the ARC - there will be a
significant performance impact but then it might be acceptable for
really paranoid folks especially with modern ha
need to re-import a database or recover
lots of files over NFS - your service is down and disabling the ZIL makes
recovery MUCH faster. Then there are cases when leaving the ZIL disabled
is acceptable as well.
--
Robert Milkowski
http://milek.blogspot.com
Unless you are talking about doing regular snapshots and making sure
that the application is consistent while doing so - for example, putting all
Oracle tablespaces in hot backup mode and taking a snapshot...
otherwise it doesn't really make sense.
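A minimal sketch of that approach (pool/dataset names are placeholders, and it assumes the database runs in archivelog mode):
echo 'alter database begin backup;' | sqlplus -s "/ as sysdba"
zfs snapshot -r dbpool/oradata@backup-`date +%Y%m%d`
echo 'alter database end backup;' | sqlplus -s "/ as sysdba"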
--
Robert Milkowski
http://mil
On 31/03/2010 16:44, Bob Friesenhahn wrote:
On Wed, 31 Mar 2010, Robert Milkowski wrote:
or there might be an extra zpool-level (or system-wide) property to
enable checking checksums on every access from the ARC - there will be a
significant performance impact but then it might be acceptable for
e thing is
well-documented.
I double-checked the documentation and you're right - the default has
changed to sync.
I haven't found in which RH version it happened, but it doesn't really
matter.
So yes, I was wrong - the current default seems to be sync on L
sfy a race condition for
the sake of internal consistency. Applications which need to know their
next commands will not begin until after the previous sync write was
committed to disk.
ROTFL!!!
I think you should explain it even further for Casper :) :) :) :) :) :) :)
--
Robert Milk
you can export a share as sync (the default) or
async, while on Solaris you can't currently force an NFS
server to work in async mode.
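On Linux that is simply an export option in /etc/exports, for example (path and options are just an illustration):
/export/data  *(rw,async,no_root_squash)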
--
Robert Milkowski
http://milek.blogspot.com
s are part of a
cluster, both of them have full access to the shared storage and you can
force a zpool import on both nodes at the same time.
When you think about it, you actually need such behavior for RAC to work
on raw devices or real cluster volumes or filesystems, etc.
--
Robert Milkowski
http://mil
the pool, resume the resource group and enable the storage resource
The other approach is to keep the pool under cluster management but,
if needed, suspend the resource group so there won't be any unexpected
failovers (but it really depends on the circumstances and what you are
t
On 02/04/2010 16:04, casper@sun.com wrote:
sync() is actually *async* and returning from sync() says nothing about
To clarify - in the case of ZFS, sync() is actually synchronous.
--
Robert Milkowski
http://milek.blogspot.com
fine.
So for example - on x4540 servers try to avoid creating a pool with a
single RAID-Z3 group made of 44 disks; rather, create 4 RAID-Z2 groups,
each made of 11 disks, all of them in a single pool.
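As a sketch, with placeholder disk names (on a real x4540 some disks would also be reserved for the root pool and spares):
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
  raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 \
  raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0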
--
Robert Milkowski
http://milek.blogspot.com
ris is doing more or less for some time now.
look in the archives of this mailing list for more information.
--
Robert Milkowski
http://milek.blogspot.com
letely die as well.
Other than that you are fine even with an unmirrored slog device.
--
Robert Milkowski
http://milek.blogspot.com
normal reboots zfs won't read data from slog.
--
Robert Milkowski
http://milek.blogspot.com
,
while accessing \\filer\arch\myfolder\myfile.txt works.
Any ideas?
We are running snv_130.
you are not using Samba daemon, are you?
--
Robert Milkowski
http://milek.blogspot.com
without going through the
process of actually copying the blocks, but just duplicating its meta data like
NetApp does?
I don't know about file cloning, but why not put each VM on top of a zvol
- then you can clone the zvol?
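Roughly like this (names are placeholders):
zfs snapshot tank/vm-master@gold
zfs clone tank/vm-master@gold tank/vm-guest01   # the clone shares blocks with the snapshot until they are overwritten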
--
Robert Milkowski
http://milek.blogspo
but it suggests that it had nothing to do with a double slash - rather
some process (your shell?) had an open file within the mountpoint. But
by supplying -f you forced zfs to unmount it anyway.
--
Robert Milkowski
http://milek.blogspot.com
On 21/04/2010 06:16, Ryan John wrote:
Thanks. That
size for database vs.
default, atime off vs. on, lzjb, gzip, ssd). Also a comparison of
benchmark results with all default zfs settings against whatever
settings gave you the best result.
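For instance, the kind of per-dataset settings being compared (dataset name and the 8k block size are just examples):
zfs set recordsize=8k tank/db      # match the database block size
zfs set atime=off tank/db
zfs set compression=lzjb tank/db   # or compression=gzip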
--
Robert Milkowski
http://milek.blogspot.com
attach EBS.
That way Solaris won't automatically try to import the pool and your
scripts will do it once disks are available.
--
Robert Milkowski
http://milek.blogspot.com
u can also find some benchmarks with sysbench + mysql or oracle.
I don't remember if I posted some of my results or not, but I'm pretty
sure you can find others.
--
Robert Milkowski
http://milek.blogspot.com
.
You
will need to power cycle. The system won't boot up again; you'll have to
The system should boot up properly even if some pools are not accessible
(except rpool, of course).
If that is not the case then there is a bug - last time I checked it
worked perfectly fine.
--
Robert
. Then you can "zpool import" I think requiring the -f or -F,
and reboot again normal.
I just did a test on Solaris 10/09 - and the system came up properly,
entirely on its own, with a failed pool.
zpool status showed the pool as unavailable (as I had removed an underlying
device), which is fi
(and do so with -R). That way you can easily script it so import happens
after your disks are available.
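A sketch of the scripted import (pool name assumed); importing with -R also keeps the pool out of /etc/zfs/zpool.cache, so it won't be imported automatically on the next boot:
zpool import -d /dev/dsk -R / tank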
--
Robert Milkowski
http://milek.blogspot.com
ch means it couldn't discover it. does 'zpool import' (no other
options) list the pool?
--
Robert Milkowski
http://milek.blogspot.com
s no room for improvement here. All I'm saying is
that it is not as easy a problem as it seems.
--
Robert Milkowski
http://milek.blogspot.com
ution*.
--
Robert Milkowski
http://milek.blogspot.com
0 zil synchronicity
No promise on date, but it will bubble to the top eventually.
So everyone knows - it has been integrated into snv_140 :)
--
Robert Milkowski
http://milek.blogspot.com
when it is off it
will give you an estimate of the absolute maximum performance
increase (if any) from having a dedicated ZIL device.
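For example, on a build with the sync property (integrated in snv_140, as mentioned elsewhere in this list) you could measure it per dataset - just remember to restore the default (dataset name assumed):
zfs set sync=disabled tank/nfs     # run the benchmark with synchronous semantics effectively off
zfs set sync=standard tank/nfs     # put the default back afterwards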
--
Robert Milkowski
http://milek.blogspot.com
fails prior to completing a series of
writes and I reboot using a failsafe (i.e. install disc), will the log be
replayed after a zpool import -f ?
yes
--
Robert Milkowski
http://milek.blogspot.com
nformation on it you might look at
http://milek.blogspot.com/2010/05/zfs-synchronous-vs-asynchronous-io.html
--
Robert Milkowski
http://milek.blogspot.com
opose that it shouldn't
but it was changed again during a PSARC review that it should.
And I did a copy'n'paste here.
Again, sorry for the confusion.
--
Robert Milkowski
http://milek.blogspot.com
On 06/05/2010 13:12, Robert Milkowski wrote:
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited. Sorry for the confusion but there was a discussion if
it shou
ce failover in a
cluster, the L2ARC will be kept warm. Then the only thing which might affect
L2 performance considerably would be an L2ARC device failure...
--
Robert Milkowski
http://milek.blogspot.com
would probably decrease
performance and would invalidate all blocks if even a single l2arc
device died. Additionally, having each block on only one l2arc
device allows reading from all l2arc devices at the same time.
--
Robert Milkowski
http://milek.blogspo
On 06/05/2010 21:45, Nicolas Williams wrote:
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction
are very useful at
times.
--
Robert Milkowski
http://milek.blogspot.com
s/zvol.c#1785)
- but zfs send|recv should replicate it I think.
--
Robert Milkowski
http://milek.blogspot.com
: why do you need to do
this at all? Isn't the ZFS ARC supposed to release memory when the
system is under pressure? Is that mechanism not working well in some
cases ... ?
My understanding is that if kmem gets heavily fragmented ZFS won't be
able to give back much memory.
0 IOPS to a single SAS port.
It also scales well - I ran the above dd's over 4 SAS ports at the same
time and it scaled linearly, achieving well over 400k IOPS.
hw used: x4270, 2x Intel X5570 2.93GHz, 4x SAS SG-PCIE8SAS-E-Z (fw
1.27.3.0), connected to an F5100.
--
Robert Milkowski
On 10/06/2010 15:39, Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski <mi...@task.gda.pl> wrote:
On 21/10/2009 03:54, Bob Friesenhahn wrote:
I would be interested to know how many IOPS an OS like Solaris
is able to push through a sing
port is nothing unusual and
has been the case for at least several years.
--
Robert Milkowski
http://milek.blogspot.com
cely coalesce these
IOs and do sequential writes with large blocks.
--
Robert Milkowski
http://milek.blogspot.com
On 11/06/2010 10:58, Andrey Kuzmin wrote:
On Fri, Jun 11, 2010 at 1:26 PM, Robert Milkowski <mi...@task.gda.pl> wrote:
On 11/06/2010 09:22, sensille wrote:
Andrey Kuzmin wrote:
On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling
mailto:ri
full priority.
Is this problem known to the developers? Will it be addressed?
http://sparcv9.blogspot.com/2010/06/slower-zfs-scrubsresilver-on-way.html
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494473
--
Robert Milkowski
http://milek.blogspot.com
whole point of having L2ARC is to serve high random read iops from
RAM and L2ARC device instead of disk drives in a main pool.
--
Robert Milkowski
http://milek.blogspot.com
? It maps the snapshots so windows
can access them via "previous versions" from the explorers context menu.
btw: the CIFS service supports Windows Shadow Copies out-of-the-box.
--
Robert Milkowski
http://milek.blogspot.com
.
Previous Versions should work even if you have one large filesystem
with all users' homes as directories within it.
What Solaris/OpenSolaris version did you try for the 5k test?
--
Robert Milkowski
http://milek.blogspot.com
lly intend to get it integrated into ON? Because if
you do, then I think getting the Nexenta guys to expand on it would be
better for everyone than having them reinvent the wheel...
--
Robert Milkowski
http://milek.blogspot.com
rather expect all of them to get about the same
number of iops.
Any idea why?
--
Robert Milkowski
http://milek.blogspot.com
dedup enabled in a pool you
can't really get a dedup ratio per share.
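What you can get is the pool-wide ratio (pool name assumed):
zpool get dedupratio tank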
--
Robert Milkowski
http://milek.blogspot.com
big of a file are you making? RAID-Z does not explicitly do the parity
distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths
to distribute IOPS.
Adam
On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote:
Hi,
zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0
smaller writes to metadata that will distribute parity.
What is the total width of your raidz1 stripe?
4x disks, 16KB recordsize, 128GB file, random read with 16KB block.
--
Robert Milkowski
http://milek.blogspot.com
On 23/06/2010 19:29, Ross Walker wrote:
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote:
128GB.
Does it mean that for a dataset used for databases and similar environments, where
basically all blocks have a fixed size and there is no other data, all parity
information will end up on one
On 24/06/2010 14:32, Ross Walker wrote:
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
On 23/06/2010 18:50, Adam Leventhal wrote:
Does it mean that for a dataset used for databases and similar environments, where
basically all blocks have a fixed size and there is no other data
performance as a much greater number of disk drives in RAID-10
configuration and if you don't need much space it could make sense.
--
Robert Milkowski
http://milek.blogspot.com
ndom reads.
http://blogs.sun.com/roch/entry/when_to_and_not_to
--
Robert Milkowski
http://milek.blogspot.com
(async or sync) to be written synchronously.
ps. still, I'm not saying it would make ZFS ACID.
--
Robert Milkowski
http://milek.blogspot.com
outdone, they've stopped other OS releases as well. Surely,
this is a temporary situation.
AFAIK the dev OSOL releases are still being produced - they haven't been
made public since b134 though.
--
Robert Milkowski
http://milek.blogspot.com
han a
regression.
Are you sure it is not a debug vs. non-debug issue?
--
Robert Milkowski
http://milek.blogspot.com
"compress" the file much better than a compression. Also
please note that you can use both: compression and dedup at the same time.
--
Robert Milkowski
http://milek.blogspot.com
hough but it might be that the stripe size
was not matched to the ZFS recordsize and iozone block size in this case.
The issue with raid-z and random reads is that as the cache hit ratio goes
down to 0, the IOPS approach the IOPS of a single drive. For a little bit
more information see http://blogs.sun.
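A rough back-of-the-envelope illustration (the ~150 IOPS per disk figure is an assumption for 7200 rpm drives):
  44 disks as 1x RAID-Z3 vdev:   ~1 x 150 = ~150 random read IOPS (cache hit ratio near 0)
  44 disks as 4x RAID-Z2 vdevs:  ~4 x 150 = ~600 random read IOPS
which is the reasoning behind the pool layout advice earlier in the thread.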
On 22/07/2010 03:25, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
I had a quick look at your results a moment ago.
The problem is that you used a server with 4GB of RAM + a raid card
fyi
--
Robert Milkowski
http://milek.blogspot.com
Original Message
Subject:zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim Haley
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring
fyi
Original Message
Subject:Read-only ZFS pools [PSARC/2010/306 FastTrack timeout
08/06/2010]
Date: Fri, 30 Jul 2010 14:08:38 -0600
From: Tim Haley
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring the following fast-track for George Wilson.
recent
build you have zfs set sync={standard|disabled|always}, which also works
with zvols.
So you do have control over how it behaves and, to make it
nicer, it is even on a per-zvol basis.
It is just that the default is synchronous.
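For example (zvol name is a placeholder):
zfs set sync=always tank/vols/vol01     # every write, async or sync, is committed synchronously
zfs get sync tank/vols/vol01
zfs inherit sync tank/vols/vol01        # go back to the inherited/standard behaviour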
--
Robert Milko
't remember if it offered an ability to manipulate a zvol's
WCE flag or not, but if it didn't then you can do it anyway, as it is a zvol
property. For an example see
http://milek.blogspot.com/2010/02/zvols-write-cache.html
--
Robert Milkowski
http://mil
's the main reason behind the scrub - to be able to detect and
repair checksum errors (if any) while a redundant copy is still fine.
--
Robert Milkowski
http://milek.blogspot.com
Robert Milkowski wrote:
Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - . > /dev/null
If you think about it, validating checksums requires reading the data.
So you simply need to read the data.
This should work but
- reserved area).
--
Robert Milkowski
http://milek.blogspot.com
ith de-dup) would behave the same
here.
--
Robert Milkowski
http://milek.blogspot.com
Maurilio Longo wrote:
Carson,
the strange thing is that this is happening on several disks (can it be that
they are all failing?)
What is the controller bug you're talking about? I'm running snv_114 on this
pc, so it is fairly recent.
Best regards.
Maurilio.
See 'iostat -En' output.
d on where an actual bottleneck is.
--
Robert Milkowski
http://milek.blogspot.com
btw: IIRC the Sun Cluster HAS+ agent will automatically make use of cache files
--
Robert Milkowski
http://milek.blogspot.com
the snapshot version).
Out of curiosity, is there an easy way to find such a file?
Find files with modification or creation time after last snapshot was
created.
Files which were modified afterwards may still have most of their blocks
referenced by a snapshot though.
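One rough way to find them (snapshot and dataset names are assumptions): take the snapshot creation time and compare file modification times against it via a reference file, e.g.:
zfs get -H -o value creation tank/home@today     # note the snapshot creation time
touch -t 201005070200 /tmp/snapref               # a reference file with (roughly) that timestamp
find /tank/home -type f -newer /tmp/snapref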
--
Robert Milkowski
http:
Stuart Anderson wrote:
On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote:
Stuart Anderson wrote:
I am wondering if the following idea makes any sense as a way to get
ZFS to cache compressed data in DRAM?
In particular, given a 2-way zvol mirror of highly compressible data
on persistent
case as you do have
clones. In your case you are concerned with files you would like to
delete to regain disk space while they are still in a snapshot... in most
cases it is relatively easy to plan for this with dedicated
filesystem(s) for temporary files, et
Before you do a dd test, first try:
echo zfs_vdev_max_pending/W0t1 | mdb -kw
(this sets the zfs_vdev_max_pending tunable to 1, i.e. a single outstanding
I/O per vdev) and let us know if it helped or not.
iostat -xnz 1
output while you are doing dd would also help.
--
Robert Milkowski
http://milek.blogspot.com
" of the UFS directio?
No. UFS directio does 3 things:
1. unbuffered I/O
2. allow concurrent writers (no single-writer lock)
3. provide an improved async I/O code path
for the record - iirc UFS will also disable read-aheads with direct
btw: ::memstat and ::kmastat are *very* fast in this build; it used to take a
minute and now it is instantaneous :)
--
Robert Milkowski
http://milek.blogspot.com
loads this is desired behavior; for many others it is
not (like parsing large log files with a grep-like tool - files which are not
getting cached...).
--
Robert Milkowski
http://milek.blogspot.com
so everything was replicated as expected. However, zfs recv -F should not
complain that it can't open snap1.
--
Robert Milkowski
http://milek.blogspot.com
I created http://defect.opensolaris.org/bz/show_bug.cgi?id=12249
--
Robert Milkowski
http://milek.blogspot.com
create a dedicated zfs zvol or filesystem for each file representing
your virtual machine.
Then if you need to clone a VM you clone its zvol or the filesystem.
Jeffry Molanus wrote:
I'm not doing anything yet; I just wondered if ZFS provides any methods to
do file level cloning instead of comp
Cyril Plisko wrote:
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do with the fact that dedup space accounting is charged to all
f