hat a similar argument could be made for storing the zfs send
data streams on a zfs file system. However, it is not clear why you
would do this instead of just zfs send | zfs receive.
--
Mike Gerdts
http://mgerdts.blogspot.com/
ific butype_name
strings accessible
via the NDMP_CONFIG_GET_BUTYPE_INFO request.
http://www.ndmp.org/download/sdk_v4/draft-skardal-ndmp4-04.txt
It seems pretty clear from this that an NDMP data stream can contain
most anything and is dependent on the devi
llions of files with relatively few changes.
--
Mike Gerdts
http://mgerdts.blogspot.com/
rs,
ARC, etc. If the processes never page in the pages that have been
paged out (or the processes that have been swapped out are never
scheduled) then those pages will not consume RAM.
The best thing to do with processes that can be swapped out forever is
to not run them.
--
Mike Gerdts
http://
than a few tenths of a percent, you are probably short
on CPU.
It could also be that interrupts are stealing cycles from rsync.
Placing it in a processor set with interrupts disabled in that
processor set may help.
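As a sketch (the CPU IDs are made up; check psrinfo for what the box actually has):
psrinfo
# create a processor set from two of the CPUs; psrset prints the new set id
psrset -c 2 3
# keep those CPUs from handling device interrupts
psradm -i 2 3
# bind the running rsync to the set (assumes set id 1 was created above)
psrset -b 1 `pgrep rsync`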
--
Mike Gerdts
http://mgerdts.blogspot.com/
Sorry, turned on html mode to avoid gmail's line wrapping.
On Mon, May 31, 2010 at 4:58 PM, Sandon Van Ness wrote:
> On 05/31/2010 02:52 PM, Mike Gerdts wrote:
> > On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness
> wrote:
> >
> >> On 05/31/2010 01:51 PM, Bob Fri
s=513 count=204401
# repeatedly feed that file to dd
while true ; do cat /tmp/randomdataa ; done | dd of=/my/test/file
bs=... count=...
The above should make it so that it will take a while before there are
two blocks that are identical, thus confounding deduplication as well.
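Spelled out end to end, with made-up paths (the random source and the sizes are
assumptions; adjust to taste):
# build ~100 MB of random data once; reading /dev/urandom for the whole
# test would be far too slow
dd if=/dev/urandom of=/tmp/randomdata bs=513 count=204401
# replay it endlessly; the file length is deliberately not a multiple of
# the zfs recordsize, so repeated passes rarely produce identical blocks
while true ; do cat /tmp/randomdata ; done | dd of=/my/test/file bs=1024k count=10240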
--
Mike Gerdts
http://mgerdts.blogspot.com/
ent mail
system should already dedup. Or at least that is how I would have
written it for the last decade or so...
--
Mike Gerdts
http://mgerdts.blogspot.com/
engineering where group projects were common
and CAD, EDA, and simulation tools could generate big files very
quickly.
--
Mike Gerdts
http://mgerdts.blogspot.com/
y good point. You can use a combination of "zpool iostat" and
fsstat to see the effect of reads that didn't turn into physical I/Os.
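For example (the pool name is made up), watching both at the same interval makes
the difference easy to see:
# logical read/write activity as seen at the filesystem layer
fsstat zfs 5
# physical I/O actually issued to the pool's devices
zpool iostat tank 5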
--
Mike Gerdts
http://mgerdts.blogspot.com/
ut 32 KB I/O's. I think you can perform a
test that involves mainly the network if you use netperf with options
like:
netperf -H $host -t TCP_RR -r 32768 -l 30
That is speculation based on reading
http://www.netperf.org/netperf/training/Netperf.html. Someone else
(perhaps
etting data 32 KB at a time. How
does 32 KB compare to the database block size? How does 32 KB compare
to the block size on the relevant zfs filesystem or zvol? Are blocks
aligned at the various layers?
http://blogs.sun.com/dlutz/entry/partition_alignment_guidelines_for_unified
--
Mike Gerdts
h
it looks as though znode_t's z_seq may be useful.
While it isn't a checksum, it seems to be incremented on every file
change.
--
Mike Gerdts
http://mgerdts.blogspot.com/
ration choices, and
a bit of luck.
Note that with Sun Trunking there was an option to load balance using
a round robin hashing algorithm. When pushing high network loads this
may cause performance problems with reassembly.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Mon, Jul 26, 2010 at 1:27 AM, Garrett D'Amore wrote:
> On Sun, 2010-07-25 at 21:39 -0500, Mike Gerdts wrote:
>> On Sun, Jul 25, 2010 at 8:50 PM, Garrett D'Amore wrote:
>> > On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote:
>> >>
>> >> I
On Mon, Jul 26, 2010 at 2:56 PM, Miles Nordin wrote:
>>>>>> "mg" == Mike Gerdts writes:
> mg> it is rather common to have multiple 1 Gb links to
> mg> servers going to disparate switches so as to provide
> mg> resilience in the face of switc
hen I boot on using LiveCD, how can I mount my first drive that has
> opensolaris installed ?
To list the zpools it can see:
zpool import
To import one called rpool at an alternate root:
zpool import -R /mnt rpool
--
Mike Gerdts
http://mgerdts.blogspot.com/
> Current Size: 4206 MB (arcsize)
> Target Size (Adaptive): 4207 MB (c)
That looks a lot like ~ 4 * 1024 MB. Is this a 64-bit capable system
that you have booted from a 32-bit kernel?
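A quick way to check is isainfo; on a 64-bit boot it reports something like
"64-bit amd64 kernel modules":
isainfo -kv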
--
Mike Gerdts
http://mgerdts.blogspot.com/
hms implemented in
software and sha256 implemented in hardware?
I've been waiting very patiently to see this code go in. Thank you
for all your hard work (and the work of those that helped too!).
--
Mike Gerdts
http://mgerdts.blogspot.com/
ording to page 35 of
http://www.slideshare.net/ramesh_r_nagappan/wirespeed-cryptographic-acceleration-for-soa-and-java-ee-security,
a T2 CPU can do 41 Gb/s of SHA256. The implication here is that this
keeps the MAU's busy but the rest of the core is still idle for things
like compression, TCP,
's. It becomes quite
significant if you have 5000 (e.g. on a ZFS-based file server).
Assuming that the deduped blocks stay deduped in the ARC, it means
that it is feasible for every block that is accessed with any frequency
to be in memory. Oh yeah, and you save a lot of disk space.
--
Mike Gerdts
ht
> reportedly good for CIFS based on traffic from this list.
>>
>> --eric
>>
>> --
>> Eric D. Mudama
>> edmud...@mail.bounceswoosh.org
>>
>
>
characteristics in
this area?
Is there less to be concerned about from a performance standpoint if
the workload is primarily read?
To maximize the efficacy of dedup, would it be best to pick a fixed
block size and match it between the layers of zfs?
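If matching does turn out to matter, the knobs would presumably be recordsize
for filesystems and volblocksize for zvols, e.g. (the names and the 8 KB size
are made up):
# match an 8 KB application block size on a filesystem
zfs set recordsize=8k tank/db
# and on a zvol (volblocksize can only be set at creation time)
zfs create -V 100g -o volblocksize=8k tank/dbvol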
--
Mike Gerdts
http://mgerdts.blogspot.com
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling
wrote:
> Good question! Additional thoughts below...
>
> On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote:
>
>> Suppose I have a storage server that runs ZFS, presumably providing
>> file (NFS) and/or block (iSCSI, FC) service
On Tue, Nov 24, 2009 at 1:39 PM, Richard Elling
wrote:
> On Nov 24, 2009, at 11:31 AM, Mike Gerdts wrote:
>
>> On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling
>> wrote:
>>>
>>> Good question! Additional thoughts below...
>>>
>>> On Nov 24, 2
t is small enough that it is somewhat likely that many
of those random reads will be served from cache. A dtrace analysis of
just how random the reads are would be interesting. I think that
hotspot.d from the DTrace Toolkit would be a good starting place.
--
Mike Gerdts
http://mgerdts.blogspo
but creates datasets instead of
> directories.
>
> Thoughts ? Is this useful for anyone else ? My above examples are some
> of the shorter dataset names I use, ones in my home directory can be
> even deeper.
>
> --
> Darren J Moffat
On Thu, Nov 26, 2009 at 8:53 PM, Toby Thain wrote:
>
> On 26-Nov-09, at 8:57 PM, Richard Elling wrote:
>
>> On Nov 26, 2009, at 1:20 PM, Toby Thain wrote:
>>>
>>> On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
>>>
>>>> On 2009-Nov-24 14:07:06
used as a starting point.
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_raidz.c
--
Mike Gerdts
http://mgerdts.blogspot.com/
d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: Hitachi HTS5425 Revision: Serial No: 080804BB6300HCG Size:
160.04GB <160039305216 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
...
That /should/ be printed on the di
s,
but that would seem to contribute to a higher compressratio rather
than a lower compressratio.
If I disable compression and enable dedup, does it count deduplicated
blocks of zeros toward the dedupratio?
--
Mike Gerdts
http://mgerdts.blogspot.com/
1 Dec 15 14:35 on/a
# du -h */a
95M off/a
3.4M on/a
# zfs get compressratio test/on test/off
NAME PROPERTY VALUE SOURCE
test/off compressratio 1.00x -
test/on compressratio 28.27x -
--
Mike Gerdts
http://mgerdts.blogspot.com/
0
/mnt/osolzone/root DEGRADED 0 0 117 too many errors
errors: No known data errors
root@soltrain19# zlogin osol uptime
5:31pm up 1 min(s), 0 users, load average: 0.69, 0.38, 0.52
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Tue, Dec 22, 2009 at 8:02 PM, Mike Gerdts wrote:
> I've been playing around with zones on NFS a bit and have run into
> what looks to be a pretty bad snag - ZFS keeps seeing read and/or
> checksum errors. This exists with S10u8 and OpenSolaris dev build
> snv_129. This is
t; could reclaim those
blocks. This is just a variant of the same problem faced with
expensive SAN devices that have thin provisioning allocation units
measured in the tens of megabytes instead of hundreds to thousands of
kilobytes.
--
Mike Gerdts
http://mgerdts.blogspot.com/
ndancy choices then there is no
need for any rocket scientists. :)
--
Mike Gerdts
http://mgerdts.blogspot.com/
e appreciated.
>
> Thanks,
> Mikko
>
> --
> Mikko Lammi | l...@lmmz.net | http://www.lmmz.net
>
--
Mike Gerdts
http://mgerdts.blogspot.com/
NAME STATE READ WRITE CKSUM
nfszone ONLINE 0 0 0
/nfszone/root ONLINE 0 0 109
errors: No known data errors
I'm confused as to why this pool seems to be quite usable even with so
many checksum errors.
--
Mike Gerdts
http://m
errors from
"zoneadm install", which under the covers does a pkg image create
followed by *multiple* pkg install invocations. No checksum errors
pop up there.
--
Mike Gerdts
http://mgerdts.blogspot.com/
ot a good idea in any sort
> of production environment?"
>
> It sounds like a bug, sure, but the fix might be to remove the option.
This unsupported feature is supported with the use of Sun Ops Center
2.5 when a zone is put on a "NAS Storage Library".
--
Mike Gerd
addcafe00 0x5dcc54647f00 0x1f82a459c2aa00
> 0x7f84b11b3fc7f80
> *G 48 cksum_actual = 0x5d6ee57f00 0x178a70d27f80 0x3fc19c3a19500
> 0x82804bc6ebcfc0
>
> and observe that the values in 'chksum_actual' causing our CHKSUM pool errors
> eventually
> because of missmatchi
On Fri, Jan 8, 2010 at 9:11 AM, Mike Gerdts wrote:
> I've seen similar errors on Solaris 10 in the primary domain and on a
> M4000. Unfortunately Solaris 10 doesn't show the checksums in the
> ereport. There I noticed a mixture between read errors and checksum
> errors -
On Fri, Jan 8, 2010 at 12:28 PM, Torrey McMahon wrote:
> On 1/8/2010 10:04 AM, James Carlson wrote:
>>
>> Mike Gerdts wrote:
>>
>>>
>>> This unsupported feature is supported with the use of Sun Ops Center
>>> 2.5 when a zone is put on a "NAS St
data stream
> compared to other archive formats. In general it is strongly discouraged for
> these purposes.
>
Yet it is used in ZFS flash archives on Solaris 10 and is slated for
use in the successor to flash archives. This initial proposal seems
to imply using the same
56
-rw-r--r-- 1 428411 Jan 22 04:14 sha256.Z
-rw-r--r-- 1 321846 Jan 22 04:14 sha256.bz2
-rw-r--r-- 1 320068 Jan 22 04:14 sha256.gz
--
Mike Gerdts
http://mgerdts.blogspot.com/
se gnu tar to extract data. This seems to be
most useful when you need to recover master and/or media servers and
to be able to extract your data after you no longer use netbackup.
--
Mike Gerdts
http://mgerdts.blogspot.com/
at you should be able to just use mkfile or "dd
if=/dev/zero ..." to create a file that consumes most of the free
space, then delete that file. Certainly it is not an ideal solution,
but seems quite likely to be effective.
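Roughly (the path and size are made up; leave yourself some headroom, and note
that if compression is enabled on the dataset the zeros never make it to disk,
which defeats the purpose here):
# with mkfile
mkfile 50g /tank/fs/zerofill ; rm /tank/fs/zerofill
# or with dd
dd if=/dev/zero of=/tank/fs/zerofill bs=1024k count=51200 ; rm /tank/fs/zerofill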
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, Jan 23, 2010 at 11:55 AM, John Hoogerdijk
wrote:
> Mike Gerdts wrote:
>>
>> On Fri, Jan 22, 2010 at 1:00 PM, John Hoogerdijk
>> wrote:
>>
>>>
>>> Is there a way to zero out unused blocks in a pool? I'm looking for ways
>>>
On Mon, Jan 25, 2010 at 2:32 AM, Kjetil Torgrim Homme
wrote:
> Mike Gerdts writes:
>
>> John Hoogerdijk wrote:
>>> Is there a way to zero out unused blocks in a pool? I'm looking for
>>> ways to shrink the size of an opensolaris virtualbox VM and using the
On Mon, Feb 8, 2010 at 9:04 PM, grarpamp wrote:
> PS: Is there any way to get a copy of the list since inception
> for local client perusal, not via some online web interface?
You can get monthly .gz archives in mbox format from
http://mail.opensolaris.org/pipermail/zfs-discuss/.
--
impact if an errant command were issued. I'd never do that in
production without some form of I/O fencing in place.
--
Mike Gerdts
http://mgerdts.blogspot.com/
as not updated from Solaris
11 Express), it will have a separate /var dataset.
zfs mount -o mountpoint=/mnt/rpool/var rpool/ROOT/solaris/var
--
Mike Gerdts
http://mgerdts.blogspot.com/
> its thing.
>
> chicken / egg situation? I miss the old fail safe boot menu...
You can mount it pretty much anywhere:
mkdir /tmp/foo
zfs mount -o mountpoint=/tmp/foo ...
I'm not sure when the temporary mountpoint option (-o mountpoint=...)
came in. If it's not valid synt
-
/dev/chassis//SYS/SASBP/HDD0/disk disk c0t5000CCA012B66E90d0
/dev/chassis//SYS/SASBP/HDD1/disk disk c0t5000CCA012B68AC8d0
The text in the left column represents text that should be printed on
the corresponding disk slots.
--
Mike Gerdts
http
2012/3/26 ольга крыжановская :
> How can I test if a file on ZFS has holes, i.e. is a sparse file,
> using the C api?
See SEEK_HOLE in lseek(2).
--
Mike Gerdts
http://mgerdts.blogspot.com/
r 26 18:25:25 CDT 2012 [ 1332804325.889143166 ]
ct = Mar 26 18:25:25 CDT 2012 [ 1332804325.889143166 ]
bsz=131072 blks=32 fs=zfs
Notice that it says it has 32 512-byte blocks.
The mechanism you suggest does work for every other file system that
I've tried it on.
--
Mike Gerdts
http://mge
ing
https://forums.oracle.com/forums/thread.jspa?threadID=2380689&tstart=15
before updating to SRU 6 (SRU 5 is fine, however). The fix for the
problem mentioned in that forums thread should show up in an upcoming
SRU via CR 7157313.
--
Mike Gerdts
http://mgerdts.blogspot.com/
er I can see the
> following input stream bandwidth (the stream is constant bitrate, so
> this shouldn't happen):
If processing in interrupt context (use intrstat) is dominating cpu
usage, you may be able to use pcitool to cause the device generating
a
ion ./
COMPRESS
on
$ dd if=/dev/zero of=1gig count=1024 bs=1024k
1024+0 records in
1024+0 records out
$ ls -l 1gig
-rw-r--r-- 1 mgerdts staff 1073741824 Jul 10 07:52 1gig
$ du -k 1gig
0 1gig
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Wed, Feb 20, 2013 at 4:49 PM, Markus Grundmann wrote:
> Whenever I modify zfs pools or filesystems it's possible to destroy [on a
> bad day :-)] my data. A new
> property "protected=on|off" in the pool and/or filesystem can help the
> administrator for datalost
> (e.g. "zpool destroy tank" or "
s)
Presumably this problem is being worked...
http://hg.genunix.org/onnv-gate.hg/rev/d560524b6bb6
Notice that it implements:
866610 Add SATA TRIM support
With this in place, I would imagine a next step is for zfs to issue
TRIM commands as zil entries have been committed to the data disks.
--
M
around the b137 timeframe.
OpenIndiana, to be released on Tuesday, is based on b146 or later.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Mon, Sep 27, 2010 at 6:23 AM, Robert Milkowski wrote:
> Also see http://www.symantec.com/connect/virtualstoreserver
And
http://blog.scottlowe.org/2008/12/03/2031-enhancements-to-netapp-cloning-technology/
--
Mike Gerdts
http://mgerdts.blogspot.
me that you are comfortable that the zone data moved over ok...
zfs destroy -r oldpool/zones
Again, verify the procedure works on a test/lab/whatever box before
trying it for real.
--
Mike Gerdts
http://mgerdts.blogspot.com/
structions. This sounds like it is a production Solaris
10 system in an enterprise environment. In most places that I've
worked, I would be hesitant to provide the required level of detail on
a public mailing list. Perhaps you should open a service call to get
the assistance y
ms.
Perhaps this belongs somewhere other than zfs-discuss - it has nothing
to do with zfs.
--
Mike Gerdts
http://mgerdts.blogspot.com/
enunix: [ID 877030 kern.notice] Copyright (c) 1983,
> 2010, Oracle and/or its affiliates. All rights reserved.
>
> Can anyone help?
>
> Regards
> Karl
>
--
Mike Gerdts
http://mgerdts.blogspot.com/
/zonecfg.export
zoneadm -z <zonename> attach [-u|-U]
Any follow-ups should probably go to Oracle Support or zones-discuss.
Your problems are not related to zfs.
--
Mike Gerdts
http://mgerdts.blogspot.com/
dding a good enterprise SSD would double the
> server cost - not only on those big good systems with
> tens of GB of RAM), and hopefully simplifying the system
> configuration and maintenance - that is indeed the point
> in question.
>
> //Jim
>
--
Mike Gerdts
http://mgerdts.blogspot.com/
reated in
757 * a special directory, $EXTEND, at the root of the shared file
758 * system. To hide this directory prepend a '.' (dot).
759 */
--
Mike Gerdts
http://mgerdts.blogspot.com/
I suspect that
it doesn't give you exactly the output you are looking for.
FWIW, the best way to achieve what you are after without breaking the
zones is going to be along the lines of:
zlogin z1c1 init 0
zoneadm -z z1c1 detach
zfs rename rpool/zones/z1c1 rpool/new/z1c1
zoneadm -
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
wrote:
> # zpool import -f tank
>
> http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
I encourage you to open a support case and ask for an escalation on CR 7056738.
--
Mike Gerdts
http://mgerdts.blo
plication under a wide variety of
> circumstances.
The key thing here is that distributed applications will not play
nicely. In my best use case, Solaris zones and LDoms are the
"application". I don't expect or want Solaris to form some sort of
P2P storage system across my data
nd" to be a stable format and get
integration with enterprise backup software that can perform restores
in a way that maintains space efficiency.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Prior to build , bug 6668666 causes the following
platform-dependent steps to also be needed:
On sparc systems:
# installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
On x86 systems:
# ...
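On x86 the corresponding step is typically done with installgrub; a sketch,
assuming the same target slice as the sparc example:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0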
--
Mike Gerdts
http://mgerdts.blogspot.com/
dynamic data that
needs to survive a reboot, it would seem to make a lot of sense to
enable write cache on such disks. This assumes that ZFS does the
flush no matter whether it thinks the write cache is enabled or not.
Am I wrong about this somehow?
--
Mike Gerdts
http://mgerdts.blogspot.com/
53-02 this week. In a separate thread last week
(?) Enda said that it should be out within a couple weeks.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
ermail/zfs-code/2007-March/000448.html
--
Mike Gerdts
http://mgerdts.blogspot.com/
ive on another system, but can be imported using
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-5E
config:
export FAULTED corrupted data
c6t0d0  UNAVAIL  corrupted data
--
Mike Gerdts
http://mgerdts.blogspot.com/
200807/
See "Flash Storage Memory" by Adam Leventhal, page 47.
--
Mike Gerdts
http://mgerdts.blogspot.com/
9 0 0 0 0 0 0 0 0 0 543 972 518 0 0 100
From a free memory standpoint, the current state of the system is very
different than the typical state since boot.
--
Mike Gerdts
http://mgerdts.blogspot.com/
ast year I've lost more ZFS file systems than I have any other
type of file system in the past 5 years. With other file systems I
can almost always get some data back. With ZFS I can't get any back.
--
Mike Gerdts
http://mgerdts.blogspot.com/
- core developers of dtrace
were quite interested in the kernel crash dump.
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-September/051109.html
Panic during ON build. Pool was lost, no response from list.
--
Mike Gerdts
http://mgerdts.blogspot.com/
I pushed for and got a fix. However, that
pool was still lost.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote:
>> Nevada isn't production code. For real ZFS testing, you must use a
>> production release, currently Solaris 10 (updat
ld be used to deal with cases that prevent
your normal (>4 GB) boot environment from booting.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Thu, Oct 9, 2008 at 10:33 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
>> On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote:
>>> Nevada isn't production co
or clustered
storage as well.
--
Mike Gerdts
http://mgerdts.blogspot.com/
--
Mike Gerdts
http://mgerdts.blogspot.com/
zonecfg -z $zone
remove fs dir=/var
zfs set mountpoint=/zones/$zone/root/var rpool/zones/$zone/var
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Tue, Dec 2, 2008 at 6:13 PM, Lori Alt <[EMAIL PROTECTED]> wrote:
> On 12/02/08 10:24, Mike Gerdts wrote:
> I follow you up to here. But why do the next steps?
>
> > zonecfg -z $zone
> > remove fs dir=/var
> >
> > zfs set mountpoint=/zones/$zone/root/var r
.png
Try running
svcs -v zfs/auto-snapshot
The last few lines of the log files mentioned in the output from the
above command may provide helpful hints.
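svcs -L prints the path to an instance's log file, so something like this gets
you straight to it (the instance name is a guess; it may be :frequent, :hourly,
:daily, :weekly, or :monthly):
tail -20 `svcs -L zfs/auto-snapshot:daily`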
--
Mike Gerdts
http://mgerdts.blogspot.com/
ng as the list of zfs
mount points does not overflow the maximum command line length.
$ fsstat $(zfs list -H -o mountpoint | nawk '$1 !~ /^(\/|-|legacy)$/') 5
--
Mike Gerdts
http://mgerdts.blogspot.com/
atomic operation.
The snapshots are created together (all at once) or not created at
all. The benefit of atomic snapshot operations is that the snapshot
data is always taken at one consistent time, even across descendent
file systems.
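That is what the -r option to zfs snapshot provides, e.g. (dataset and snapshot
names made up):
# snapshot tank/home and every descendent dataset in one atomic operation
zfs snapshot -r tank/home@nightly-20091215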
--
Mike Gerdts
http://mgerdts.blogspot.com/
/os/about/faq/licensing_faq/#patents.
--
Mike Gerdts
http://mgerdts.blogspot.com/
pp "snapmirror to tape"
- Even having a zfs(1M) option that could list the files that change
between snapshots could be very helpful to prevent file system crawls
and to avoid being fooled by bogus mtimes.
--
Mike Gerdts
http://mgerdts.blogspot.com/
each database may be constrained to
a set of spindles so that each database can be replicated or copied
independent of the various others.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
wrote:
> On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
>> On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
>> wrote:
>> > On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
>> >> &
the
global zone and the dataset is delegated to a non-global zone, display
the UID rather than a possibly mistaken username.
--
Mike Gerdts
http://mgerdts.blogspot.com/
nd of it is not overly complicated. Is now
too early to file the RFE? For some reason it feels like the person
on the other end of bugs.opensolaris.org will get confused by the
request to enhance a feature that doesn't yet exist.
--
Mike Gerdts
http://mgerdts.blogspot.com/