n the output of smbios(1M)
--
Darren J Moffat
On 04/03/2010 21:28, valrh...@gmail.com wrote:
Does this work with dedup? If you have a deduped pool and send it to a file, will it
reflect the smaller size, or will this "rehydrate" things first?
See zfs(1M) for the description of the "-D" flag to 'zfs send'.
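For example (pool, dataset and host names below are purely illustrative,
not from this thread), a deduplicated replication stream can be written
to a file or piped straight into a receiving pool:
# zfs snapshot -r tank/data@backup1
# zfs send -D -R tank/data@backup1 > /backup/tank-data-backup1.stream
# zfs send -D -R tank/data@backup1 | ssh backuphost zfs recv -d backup
The -D flag only dedups the stream itself; whether the received copy is
deduplicated on disk still depends on the dedup property on the
receiving side.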
Is it the file format or the way the utility works ?
If it is the format what is wrong with it ?
If it is the utility what is needed to fix that ?
--
Darren J Moffat
On 18/03/2010 13:12, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffat wrote:
So exactly what makes it unsuitable for backup ?
Is it the file format or the way the utility works ?
If it is the format what is wrong with it ?
If it is the utility what is needed to
connected to the same box as the zpools, but feeding local data via a
network service seems to me to be just complicating things...
Indeed, if the drive is local then it may be adding a layer you don't
need.
--
Darren J Moffat
with the tape splitting. Though that may need
additional software that isn't free (or cheap) to drive the parts of
NDMP that are in Solaris. I don't know enough about NDMP to be sure but
I think that should be possible.
--
Darren J Moffat
to handle tape media smaller than its stream size.
--
Darren J Moffat
for a
locally attached tape drive. The backup control software runs on
another machine and talks with the local NDMP to move the data from
local disk to local tape.
--
Darren J Moffat
On 19/03/2010 14:57, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffat wrote:
That assumes you are writing the 'zfs send' stream to a file or file
like media. In many cases people using 'zfs send' for their backup
strategy are writing it back out u
On 19/03/2010 16:11, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffat wrote:
I'm curious: why isn't a 'zfs send' stream that is stored on a tape
considered a backup, yet the implication is that a tar archive stored
on a tape is considered a backup?
You cannot get a single file
On 19/03/2010 17:19, David Dyer-Bennet wrote:
On Fri, March 19, 2010 11:33, Darren J Moffat wrote:
On 19/03/2010 16:11, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffat wrote:
I'm curious, why isn't a 'zfs send' stream that is stored on a tape yet
the impli
ices_Guide#Storage_Pool_Performance_Considerations
--
Darren J Moffat
.109975000 +
Modify: 2010-03-25 09:27:34.110729000 +
Change: 2010-03-25 09:27:34.110729000 +
So maybe I'm missing what the issue is for you; if so, can you try to
explain it to me using an example.
Thanks.
--
Darren J Moffat
large zfs
sends. Using the DTrace Analytics in an SS7000 makes this very easy.
It really comes down to the size of your working set in the ARC, the
size of your L2ARC and your pattern of data access, all combined
with the volume of data you are 'zfs send'ing.
--
Darren J Moffat
file either. An L2ARC device must be a physical device.
--
Darren J Moffat
must be a physical device.
I could have sworn I did this with a zvol awhile ago. Maybe that was for
something else...
The check for the L2ARC device being a block device has always been there.
--
Darren J Moffat
using a ZVOL as a cache device for
another pool because of this bug, and also because it may actually hurt
performance instead of helping it. It is pretty hard to work
out exactly how a ZVOL will act as an L2ARC cache device for another pool.
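If you do want an L2ARC, the usual approach is to give the pool a real
disk or SSD as a cache vdev; it can be added and removed at any time
(device name below is just an example):
# zpool add tank cache c4t2d0
# zpool remove tank c4t2d0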
n/fs/zfs/arc.c
arc_buf_freeze()
arc_buf_thaw()
arc_cksum_verify()
arc_cksum_compute()
It isn't done on every access but it can detect in-memory corruption -
I've seen it happen on several occasions, though all due to errors in my
code, not bad physical memory.
Doing it more frequently coul
laris 10.
beadm doesn't seem to care since I don't believe it stores the pool
names anywhere. Live upgrade on the other hand does, as do all the
other issues you highlighted.
--
Darren J Moffat
flushing data or metadata modifying operations to permanent storage,
thus improving performance, but breaking all guarantees about server
reboot recovery.
END QUOTE
For more info see the whole of sections B4 through B6.
--
Darren J Moffat
*
200933 -r-xr-xr-x 1 root bin 159960 Mar 15 10:20
/usr/sbin/i86/zdb*
This means both 32 and 64 bit versions are already available and if the
kernel is 64 bit then the 64 bit version of zdb will be run if you run
/usr/sbin/zdb.
--
Darren J Moffat
don't have lofi do encryption. That would tell you the
overhead of the encryption that lofi does.
--
Darren J Moffat
I don't think there is a bug; it is just a side effect of what happens
because of the "pool on lofi on zvol in a pool" setup you have.
--
Darren J Moffat
e
the kernel destroys and recreates this file when pools
are added and removed, care should be taken when
attempting to access this file. When the last pool using
a cachefile is exported or destroyed, the file is
de-
How do I get the second node's cache file current without first
importing the disks?
The point of the cachefile zpool option is that there aren't two copies
of the zpool.cache file; there is only one.
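As a sketch (paths and names illustrative only), a pool that is meant to
fail over between nodes can be pointed at an alternate cache file, or at
none at all, so the default /etc/zfs/zpool.cache on each node is left
alone:
# zpool create -o cachefile=/etc/cluster/zpool.cache tank mirror c1t0d0 c2t0d0
# zpool set cachefile=none tank
# zpool import -o cachefile=none tank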
--
Darren J Moffat
Are you trying to use an
existing cluster framework that already supports ZFS?
--
Darren J Moffat
also has a nice GUI compare tool
that uses colour and percentages to show the differences between runs.
--
Darren J Moffat
e,atime of the top level
directory of the ZFS dataset at the time the snapshot was created.
This RFE, if it were implemented, could give a possible way to get
this information from a remote filesystem client:
6527390 want to read zfs properties over nfs (eg via .zfs/props)
--
Darre
find
the file called stuff.
How do you find what was /foo/bar/stuff in the model where the .snapshot
directory exists in every subdirectory, rather than just at the
filesystem root, when the subdirectories have been removed?
What does it look like when the directory hierar
the little clock
icon that is between refresh and home. This only works locally, though,
not over NFS.
--
Darren J Moffat
p but ultimately the question you are asking is one that
the DTrace Analytics in the SS7000 appliance are perfect for.
--
Darren J Moffat
storage is faster than networking.
--
Darren J Moffat
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited; this changed as a result of the PSARC review.
--
Darren J M
People (myself included) used to do this back in the MS-DOS days with
Stacker and DoubleSpace.
Also, OS images these days have lots of configuration files, which tend
to be text-based formats, and those compress very well.
--
Darren J Moffat
ystem. I have turned dedup on for a
few file systems to try it out:
You can't, because dedup is per pool, not per file system. Each file
system gets to choose whether it opts in to the pool-wide dedup.
--
Darren J Moffat
On 10/05/2010 13:35, P-O Yliniemi wrote:
Darren J Moffat wrote 2010-05-10 10:58:
On 08/05/2010 21:45, P-O Yliniemi wrote:
I have noticed that dedup is discussed a lot in this list right now..
Starting to experiment with dedup=on, I feel it would be interesting in
knowing exactly how efficient
Seems
very redundant to me :)
Signing implies the use of a key, which ZFS does not use for its
block-based checksums.
There is no "quick" way to do this just now because ZFS checksums are
block based, not whole-file based.
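If you just want to look at the per-block checksums ZFS already keeps,
zdb can dump them (the dataset name and object number here are made up):
# zdb -dddddd tank/home 21
At that verbosity zdb prints the block pointers, including the checksum
algorithm and value for each block.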
--
Darren J Moffat
dataset is created for the user's homedir?
So if you specify "-m", a regular directory is created, but if you specify
(say) "-z", a new dataset is created. Usermod(1M) would also probably have
this option.
A CR already exists fo
On 17/05/2010 12:41, eXeC001er wrote:
I know that I can view statistics for the pool (zpool iostat).
I want to view statistics for each file system in the pool. Is it possible?
See fsstat(1M)
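For example, to watch activity on particular file systems every 5
seconds, or on all ZFS file systems at once (mount points are
illustrative):
# fsstat /tank/home /tank/mail 5
# fsstat zfs 5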
--
Darren J Moffat
ingle pool with one side of the mirror in location A and
one side in location B ?
Log devices can be mirrored too, so why not just put a log device in
each "frame" and mirror them just like you do the "normal" pool disks.
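For example (device names illustrative), a single mirrored log vdev with
one side in each frame:
# zpool add tank log mirror c3t0d0 c5t0d0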
What am I missing about your setup that means that wo
So just because there is an s0 on the end doesn't necessarily mean that
there is a non-zero s1, s2, etc.
--
Darren J Moffat
L2ARC vdevs each time the feeder runs:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#4390
It chooses the next L2ARC vdev to use with this function:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#3808
--
Da
portability.
--
Darren J Moffat
ed to be atomic, but neither is the
creation. It is one call from userland to the kernel, but that doesn't
make it atomic from a ZFS transaction viewpoint.
--
Darren J Moffat
On 11/06/2010 10:59, Henu wrote:
Quoting Darren J Moffat :
On 11/06/2010 09:47, Henu wrote:
In another thread recursive snapshot creation was found to be atomic, so
that it is done quickly and, more importantly, all at once or not at all.
Do you know if recursive destroying and renaming of
On 11/06/2010 11:42, Arne Jansen wrote:
Darren J Moffat wrote:
But the following document says "Recursive ZFS snapshots are created
quickly as one atomic operation. The snapshots are created together (all
at once) or not created at all."
http://docs.sun.com/app/docs/doc/819-5461/gd
On 11/06/2010 11:47, Henu wrote:
I'm sorry I keep bothering you, but did you check what the code says
about recursive rename? Is it atomic too?
Recursive snapshot rename uses the same style of code as create and
destroy, so yes.
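For reference, the recursive forms are just the -r flag on the usual
commands (names illustrative):
# zfs snapshot -r tank@20100611
# zfs rename -r tank@20100611 tank@friday
# zfs destroy -r tank@friday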
--
Darren J M
d any more DRAM, so if OP can afford
to put in 128GB of RAM then they should.
--
Darren J Moffat
.
Unfortunately there isn't a way I know of to create clones using the
.zfs directory.
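Clones still have to be created from a snapshot with the CLI, e.g.
(names illustrative):
# zfs clone tank/home@monday tank/home-clone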
--
Darren J Moffat
ting, and audit, that you mentioned. Any
pointers?
Start here:
http://hub.opensolaris.org/bin/view/Project+audit/
--
Darren J Moffat
ke use of an L2ARC device for their pool? I'm
assuming so, since it's both block and metadata that get stored there.
I'm considering adding a couple of very large SSDs so I might be able to
cache most of my DB in the L2ARC, if that works.
Yes, the level that the L2ARC works at doesn
nes_for_unified
--
Darren J Moffat
grub menu as it will usually contain the name of the root pool.
--
Darren J Moffat
/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zio.c#zio_write_bp_init
Specifically line 1005:
1005 if (psize == 0) {
1006 zio->io_pipeline = ZIO_INTERLOCK_PIPELINE;
1007 } else {
--
Darren J Moffat
//blogs.sun.com/ahl/entry/triple_parity_raid_z
The current code base supports raidz1, raidz2, raidz3 (triple parity)
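For example, a triple-parity vdev is created just like the other raidz
levels (device names illustrative):
# zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0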
--
Darren J Moffat
handy, though.
Media files may not dedup that well either, and more importantly that
hardware isn't likely to be sufficient to get good dedup performance,
since it is fairly low on DRAM and has no SSD for an L2ARC to keep the
DDT cached in at least the L2ARC.
--
Darre
ed to mirror the slog if you want to protect against
losing synchronous writes (but not pool consistency on disk) on a power
outage *and* failure of your slog device at the same time (i.e. a double
fault).
--
Darren J Moffat
ight headers etc):
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_raidz.c?r=789%3Ab348f31ed315
--
Darren J Moffat
, and store it along with your backup media. So you have it available,
if ever there were any confusion about it at all.
PSARC/2010/193 defines a solution to that problem without having
to save away a copy of 'zfs get all'.
http://arc.opensolaris.org/caselog/PSAR
ill capture properties better than "get all"?
What is the suggested solution?
If/when the approved changes integrate it will look like:
zfs send -Rb foo | zfs recv ...
I don't see anything in "man zfs" ... but maybe it's only available in a
later versi
pools, however I
personally don't like giving parts of the same device to multiple pools
if I can help it.
The only vdev types that can be shared between pools are spares; all
others need to be per pool, or the physical devices partitioned up.
--
Darre
Joerg Schilling wrote:
Just to prove my information: I invented "fbk" (which Sun now calls "lofi")
Sun does NOT call your fbk by the name lofi. Lofi is a completely
different implementation of the same concept.
--
Darren J Moffat
before all of the files are written ???
Set it on the afx01 dataset before you do the receive and it will be
inherited.
--
Darren J Moffat
pool, and the property is on
the home zfs file system.
It doesn't matter if zfs01 is the top-level dataset or not.
Before you do the receive, do this:
zfs set checksum=sha256 zfs01
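i.e. after setting the property, something like (the stream file name is
illustrative):
# zfs recv -d zfs01 < /backup/home.stream
# zfs get -r checksum zfs01
The file systems created by the receive inherit sha256 from zfs01 unless
the stream itself carries a checksum property.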
--
Darren J Moffat
20:a2c635bc0556:73b5ba539e9699:3b4d66984ac9d6b4
0 2048 1 ZFS plain file SHA256 uncompressed
57f1e8168c58e8cf:3b20be148f57852e:f72ee8e3358f:1bfae4ae0599577c
--
Darren J Moffat
could just use
basic UNIX tools like find/diff etc.
--
Darren J Moffat
builds recently.
We also depend on the ZFS Fast System Attributes project and can't
integrate until that has done so.
When I can commit to more detailed dates I will do.
--
Darren J Moffat
Mike DeMarco wrote:
Any reason why ZFS would not work on an FDE (Full Disk Encryption) hard drive?
None, providing the drive is available to the OS by normal means.
--
Darren J Moffat
m.
The problem with that though is that today ZFS doesn't know that the
ZVOLs are used for swap and doesn't actually care.
--
Darren J Moffat
uilds.
--
Darren J Moffat
nce. Since you can't physically replace either side of the
mirror you will get the same level of protection (maybe even better) and
better performance by setting the property copies to 2 (eg 'zfs set
copies=2 rpool').
--
Darren J Moffat
work and shouldn't be attempted (though step 1 will fail).
--
Darren J Moffat
want to use. Given what you
have described you probably want to configure one or more host groups
with stmfadm(1M). I'm not a COMSTAR expert so I suggest asking on
storage-discuss if you need more help than that.
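Something along these lines, though treat it only as a sketch since, as
I said, I'm not a COMSTAR expert (the group name, initiator WWN and LU
name are all made up):
# stmfadm create-hg backup-hosts
# stmfadm add-hg-member -g backup-hosts wwn.210100e08b123456
# stmfadm add-view -h backup-hosts 600144F0C73ABF0000004B8E00010001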
--
Darren J Moffat
long as that rule is adhered to there is no problem with
legal issues.
That is my personal understanding as well; however, this is not legal
advice and I am not qualified (nor do I wish) to give it in any case.
Good luck with the port.
--
Darren J Moffat
memory resources to enable compression.
On the other hand, if it is full of source code or ASCII text, enabling
compression could potentially improve performance - depending on the
read and write access patterns.
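For example, to try it on a dataset full of text or source and then see
what it actually bought you (dataset name illustrative):
# zfs set compression=on tank/src
# zfs get compressratio tank/src
Only data written after the property is set gets compressed, so the
ratio is only meaningful once the data has been (re)written.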
--
Darren J Moffat
edded' processors, and
(Open)Solaris does not readily run on many of them (e.g., PowerPC- and
ARM-based SoCs). Though AFAIK, ReadyNAS actually runs (ran?) on SPARC
(Leon), but used Linux nonetheless.
OpenSolaris is on its way to running on ARM.
http://hub.opensolar
| passthrough
I'm not sure they will help you much but I was curious if you had looked
at this area for help.
--
Darren J Moffat
Mike Gerdts wrote:
On Mon, Nov 2, 2009 at 7:20 AM, Jeff Bonwick wrote:
Terrific! Can't wait to read the man pages / blogs about how to use it...
Just posted one:
http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup
Enjoy, and let me know if you have any questions or suggestions for
follow-on p
dedup
it is a trade-off between IO bandwidth and CPU/memory. Sometimes dedup
will improve performance, since, like compression, it can reduce IO
requirements, but depending on the workload the CPU/memory overhead may
or may not be worth it (same with compression).
pools can be used in a Sun Cluster configuration but will only be
imported into a single node of a Sun Cluster configuration at a time.
--
Darren J Moffat
Orvar Korvar wrote:
I was under the impression that you can create a new zfs dataset and turn on
the dedup functionality, and copy your data to it. Or am I wrong?
You don't even have to create a new dataset; just do:
# zfs set dedup=on
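Once dedup is on you can see what it is actually achieving from the
pool-wide dedup ratio (pool name illustrative):
# zpool get dedupratio tank
# zpool list tank
zpool list shows the same figure in its DEDUP column.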
--
Darren J M
how much benefit you will get from it, since it is block-based
not file-based, depends on what type of filesystem and/or application is
on the iSCSI target.
--
Darren J Moffat
Kyle McDonald wrote:
Hi Darren,
More below...
Darren J Moffat wrote:
Tristan Ball wrote:
Obviously sending it deduped is more efficient in terms of bandwidth
and CPU time on the recv side, but it may also be more complicated to
achieve?
A stream can be deduped even if the on disk format
Trevor Pretty wrote:
Darren J Moffat wrote:
Orvar Korvar wrote:
I was under the impression that you can create a new zfs dataset and turn on
the dedup functionality, and copy your data to it. Or am I wrong?
you don't even have to create a new dataset just do:
# zfs set ded
m" which you can find off the OpenSolaris
Security web page:
http://hub.opensolaris.org/bin/view/Community+Group+security/library
or directly at:
http://www.sun.com/blueprints/0206/819-5507.pdf
--
Darren J Moffat
jurisdictions, if the data was always encrypted on disk, then
you don't need to write any patterns to erase the blocks. So ZFS Crypto
can help there.
--
Darren J Moffat
n delete to ZFS then it is a
completely separate and complementary feature to encryption.
--
Darren J Moffat
Bill Sommerfeld wrote:
On Wed, 2009-11-11 at 10:29 -0800, Darren J Moffat wrote:
Joerg Moellenkamp wrote:
Hi,
Well ... I think Darren should implement this as a part of
zfs-crypto. Secure delete on SSD looks like quite a challenge when wear
leveling and bad block relocation kick in ;)
No I
Bob Friesenhahn wrote:
On Wed, 11 Nov 2009, Darren J Moffat wrote:
note that "eradication" via overwrite makes no sense if the underlying
storage uses copy-on-write, because there's no guarantee that the newly
written block actually will overlay the freed block.
Which is why t
Miles Nordin wrote:
"djm" == Darren J Moffat writes:
>> encrypted blocks is much better, even though
>> encrypted blocks may be subject to freeze-spray attack if the
>> whole computer is compromised
the idea of crypto deletion is to use many ke
Steven Sim wrote:
Hello;
Dedup on ZFS is an absolutely wonderful feature!
Is there a way to conduct dedup replication across boxes from one dedup
ZFS data set to another?
Pass the '-D' argument to 'zfs send'.
--
Darren J Moffat
ve by restricting
the number of files that can be created ?
--
Darren J Moffat
user account.
Agree, creating a ZFS file system per account would solve my problems,
but I can't use NFSv4, nor the automounter, so I can't export thousands
of filesystems right now.
Or use per-user quotas, e.g.:
# zfs set userquota@bob=1g rpool/mail
# zfs set userquota@jane=2g rpool/mail
...
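And to see how much each user is actually consuming against those quotas:
# zfs userspace rpool/mail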
--
Jozef Hamar wrote:
Darren J Moffat wrote:
Jozef Hamar wrote:
Hi Darren,
thanks for reply.
E.g., I have mail quota implemented as per-directory quota. I know
this can be solved in another way, but still, I would have to change
many things in my system in order to make it work. And this is
stems/unified_storage/
--
Darren J Moffat
Scott Meilicke wrote:
I second the use of zilstat - very useful, especially if you don't want to mess
around with adding a log device and then having to destroy the pool if you
don't want the log device any longer.
Log devices can be removed as of zpool version 19.
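For example, on a new enough pool the slog is removed with (device name
illustrative):
# zpool remove tank c3t0d0
so there is no longer any need to destroy the pool to get rid of it.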
--
Darre
something?
I think that would be a great feature.
--
Darren J Moffat
f performance. I guess what it boils down to is what
is the access time/throughput of a single local 15k SCSI drive vs a GigE
iSCSI volume?
For the L2ARC it is worth a try - what have you got to lose, since you
can remove the cache device really easily.
--
Darren J Moffat
II is fine) man page changes. Once there is consensus from the
core ZFS developer team I'll submit it to ARC for you.
--
Darren J Moffat