On 09/14/12 22:39, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave Pooser
Unfortunately I did not realize that zvols require disk space sufficient
to duplicate the zvol, and
On 07/19/12 18:24, Traffanstead, Mike wrote:
iozone doesn't vary the blocksize during the test, it's a very
artificial test but it's useful for gauging performance under
different scenarios.
So for this test all of the writes would have been 64k blocks, 128k,
etc. for that particular step.
Just
On 07/11/12 02:10, Sašo Kiselkov wrote:
> Oh jeez, I can't remember how many times this flame war has been going
> on on this list. Here's the gist: SHA-256 (or any good hash) produces a
> near uniform random distribution of output. Thus, the chances of getting
> a random hash collision are around
On 05/28/12 17:13, Daniel Carosone wrote:
There are two problems using ZFS on drives with 4k sectors:
1) if the drive lies and presents 512-byte sectors, and you don't
manually force ashift=12, then the emulation can be slow (and
possibly error prone). There is essentially an interna
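A rough way to check which sector size a pool was created with (pool name is
a placeholder):

   # zdb -C tank | grep ashift

ashift: 9 corresponds to 512-byte allocation, ashift: 12 to 4K.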
On 12/05/11 10:47, Lachlan Mulcahy wrote:
> zfs`lzjb_decompress           10   0.0%
> unix`page_nextn               31   0.0%
> genunix`fsflush_do_pages      37   0.0%
> zfs`dbuf_free_range
On 09/26/11 12:31, Nico Williams wrote:
> On Mon, Sep 26, 2011 at 1:55 PM, Jesus Cea wrote:
>> Should I disable "atime" to improve "zfs diff" performance? (most data
>> doesn't change, but "atime" of most files would change).
>
> atime has nothing to do with it.
based on my experiences with time
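For reference, zfs diff compares two snapshots (or a snapshot against the
live filesystem); the names here are placeholders:

   # zfs diff tank/home@monday tank/home@tuesday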
On 06/27/11 15:24, David Magda wrote:
> Given the amount of transistors that are available nowadays I think
> it'd be simpler to just create a series of SIMD instructions right
> in/on general CPUs, and skip the whole co-processor angle.
see: http://en.wikipedia.org/wiki/AES_instruction_set
Prese
On 06/16/11 15:36, Sven C. Merckens wrote:
> But is the L2ARC also important while writing to the device? Because
> the storages are used most of the time only for writing data on it,
> the Read-Cache (as I thought) isn't a performance-factor... Please
> correct me, if my thoughts are wrong.
if yo
On 06/14/11 04:15, Rasmus Fauske wrote:
> I want to replace some slow consumer drives with new edc re4 ones but
> when I do a replace it needs to scan the full pool and not only that
> disk set (or just the old drive)
>
> Is this normal? (the speed is always slow in the start so that's not
> what
On 06/08/11 01:05, Tomas Ögren wrote:
And if pool usage is >90%, then there's another problem (the algorithm
for finding free space changes).
Another (less satisfying) workaround is to increase the amount of free
space in the pool, either by reducing usage or adding more storage.
Observed behavior i
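A quick way to see how close a pool is to that threshold (pool name is a
placeholder):

   # zpool list tank

The CAP column shows the percentage of pool capacity in use.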
On 06/06/11 08:07, Cyril Plisko wrote:
zpool reports space usage on disks, without taking into account RAIDZ overhead.
zfs reports net capacity available, after RAIDZ overhead accounted for.
Yup. Going back to the original numbers:
nebol@filez:/$ zfs list tank2
NAME   USED  AVAIL  REFER  MOUNTPOINT
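A minimal illustration of the difference (pool name is a placeholder):

   # zpool list tank2    # raw space across all disks, parity included
   # zfs list tank2      # net space usable by datasets, after RAIDZ parity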
On 05/31/11 09:01, Anonymous wrote:
> Hi. I have a development system on Intel commodity hardware with a 500G ZFS
> root mirror. I have another 500G drive same as the other two. Is there any
> way to use this disk to good advantage in this box? I don't think I need any
> more redundancy, I would li
On 02/26/11 17:21, Dave Pooser wrote:
While trying to add drives one at a time so I can identify them for later
use, I noticed two interesting things: the controller information is
unlike any I've seen before, and out of nine disks added after the boot
drive all nine are attached to c12 -- and
On 02/16/11 07:38, white...@gmail.com wrote:
Is it possible to use a portable drive to copy the
initial zfs filesystem(s) to the remote location and then make the
subsequent incrementals over the network?
Yes.
> If so, what would I need to do
to make sure it is an exact copy? Thank you,
Ro
On 02/07/11 12:49, Yi Zhang wrote:
If buffering is on, the running time of my app doesn't reflect the
actual I/O cost. My goal is to accurately measure the time of I/O.
With buffering on, ZFS would batch up a bunch of writes and change
both the original I/O activity and the time.
if batching ma
On 02/07/11 11:49, Yi Zhang wrote:
The reason why I
tried that is to get the side effect of no buffering, which is my
ultimate goal.
ultimate = "final". you must have a goal beyond the elimination of
buffering in the filesystem.
if the writes are made durable by zfs when you need them to be
On 01/04/11 18:40, Bob Friesenhahn wrote:
Zfs will disable write caching if it sees that a partition is being used
This is backwards.
ZFS will enable write caching on a disk if a single pool believes it
owns the whole disk.
Otherwise, it will do nothing to caching. You can enable it yourse
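One way to inspect or enable a disk's write cache by hand on Solaris, as a
sketch (expert mode; the menu entries may vary by driver and disk type):

   # format -e
   (select the disk, then: cache -> write_cache -> display / enable)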
On 11/17/10 12:04, Miles Nordin wrote:
black-box crypto is snake oil at any level, IMNSHO.
Absolutely.
Congrats again on finishing your project, but every other disk
encryption framework I've seen taken remotely seriously has a detailed
paper describing the algorithm, not just a list of featu
On 09/09/10 20:08, Edward Ned Harvey wrote:
Scores so far:
2 No
1 Yes
No. resilver does not re-layout your data or change what's in the block
pointers on disk. if it was fragmented before, it will be fragmented after.
C) Does zfs send | zfs receive mean it will defrag?
Scor
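For completeness, a send/receive round trip looks roughly like this (dataset
and pool names are placeholders):

   # zfs snapshot tank/data@now
   # zfs send tank/data@now | zfs receive newpool/data

The received copy is written out as new allocations on the destination.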
On 08/21/10 10:14, Ross Walker wrote:
I am trying to figure out the best way to provide both performance and
resiliency given the Equallogic provides the redundancy.
(I have no specific experience with Equallogic; the following is just
generic advice)
Every bit stored in zfs is checksummed
On 07/23/10 02:31, Giovanni Tirloni wrote:
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
E.g. iostat shows <5MB/s writes on the replaced disks.
What build of opensolaris are you running? There were some recent
improv
On 07/22/10 04:00, Orvar Korvar wrote:
Ok, so the bandwidth will be cut in half, and some people use this
configuration. But, how bad is it to have the bandwidth cut in half?
Will it hardly notice?
For a home server, I doubt you'll notice.
I've set up several systems (desktop & home server) as
On 07/20/10 14:10, Marcelo H Majczak wrote:
It also seems to be issuing a lot more
writing to rpool, though I can't tell what. In my case it causes a
lot of read contention since my rpool is a USB flash device with no
cache. iostat says something like up to 10w/20r per second. Up to 137
the perfo
On 06/15/10 10:52, Erik Trimble wrote:
Frankly, dedup isn't practical for anything but enterprise-class
machines. It's certainly not practical for desktops or anything remotely
low-end.
We're certainly learning a lot about how zfs dedup behaves in practice.
I've enabled dedup on two desktops
On 05/20/10 12:26, Miles Nordin wrote:
I don't know, though, what to do about these reports of devices that
almost respect cache flushes but seem to lose exactly one transaction.
AFAICT this should be a works/doesntwork situation, not a continuum.
But there's so much brokenness out there. I've
On 05/07/10 15:05, Kris Kasner wrote:
Is ZFS swap cached in the ARC? I can't account for data in the ZFS filesystems
to use as much ARC as is in use without the swap files being cached.. seems a
bit redundant?
There's nothing to explicitly disable caching just for swap; from zfs's
point of vie
On 05/01/10 13:06, Diogo Franco wrote:
After seeing that in some cases labels were corrupted, I tried running
zdb -l on mine:
...
(labels 0, 1 not there, labels 2, 3 are there).
I'm looking for pointers on how to fix this situation, since the disk
still has available metadata.
there are two
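For anyone following along, the labels can be dumped with something like
(device path is a placeholder):

   # zdb -l /dev/rdsk/c0t1d0s0

which prints each of the four labels (two at the front of the device, two at
the end).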
On 04/17/10 07:59, Dave Vrona wrote:
1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs
be mirrored ?
L2ARC cannot be mirrored -- and doesn't need to be. The contents are
checksummed; if the checksum doesn't match, it's treated as a cache miss
and the block is re-read from the
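As a sketch of how the two are attached (device names are placeholders):

   # zpool add tank cache c2t0d0               # L2ARC: plain devices only
   # zpool add tank log mirror c2t1d0 c2t2d0   # slog: mirroring is allowed

Only log devices accept a mirror layout; cache devices are always standalone.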
On 04/16/10 20:26, Joe wrote:
I was just wondering if it is possible to spindown/idle/sleep hard disks that are
part of a vdev & pool SAFELY?
it's possible.
my ultra24 desktop has this enabled by default (because it's a known
desktop type). see the power.conf man page; I think you may need
On 04/14/10 19:51, Richard Jahnel wrote:
This sounds like the known issue about the dedupe map not fitting in ram.
Indeed, but this is not correct:
When blocks are freed, dedupe scans the whole map to ensure each block is not
in use before releasing it.
That's not correct.
dedup uses a da
On 04/14/10 12:37, Christian Molson wrote:
First I want to thank everyone for their input, It is greatly appreciated.
To answer a few questions:
Chassis I have:
http://www.supermicro.com/products/chassis/4U/846/SC846E2-R900.cfm
Motherboard:
http://www.tyan.com/product_board_detail.aspx?pid=56
On 04/11/10 12:46, Volker A. Brandt wrote:
The most paranoid will replace all the disks and then physically
destroy the old ones.
I thought the most paranoid will encrypt everything and then forget
the key... :-)
Actually, I hear that the most paranoid encrypt everything *and then*
destroy th
On 04/11/10 10:19, Manoj Joseph wrote:
Earlier writes to the file might have left
older copies of the blocks lying around which could be recovered.
Indeed; to be really sure you need to overwrite all the free space in
the pool.
If you limit yourself to worrying about data accessible via a re
On 04/06/10 17:17, Richard Elling wrote:
You could probably live with an X25-M as something to use for all three,
but of course you're making tradeoffs all over the place.
That would be better than almost any HDD on the planet because
the HDD tradeoffs result in much worse performance.
Indeed
On 04/05/10 15:24, Peter Schuller wrote:
In the urxvt case, I am basing my claim on informal observations.
I.e., "hit terminal launch key, wait for disks to rattle, get my
terminal". Repeat. Only by repeating it very many times in very rapid
succession am I able to coerce it to be cached such tha
On 03/22/10 11:02, Richard Elling wrote:
> Scrub tends to be a random workload dominated by IOPS, not bandwidth.
you may want to look at this again post build 128; the addition of
metadata prefetch to scrub/resilver in that build appears to have
dramatically changed how it performs (largely for th
On 03/19/10 19:07, zfs ml wrote:
What are peoples' experiences with multiple drive failures?
1985-1986. DEC RA81 disks. Bad glue that degraded at the disk's
operating temperature. Head crashes. No more need be said.
- Bill
On 03/17/10 14:03, Ian Collins wrote:
I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100%
done, but not complete:
scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go
Don't panic. If "zpool iostat" still shows active reads from all disks
in the pool, just step
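A simple way to watch for that activity (pool name is a placeholder):

   # zpool iostat -v tank 5

which prints per-vdev read/write operations every 5 seconds.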
On 03/08/10 17:57, Matt Cowger wrote:
Change zfs options to turn off checksumming (don't want it or need it), atime,
compression, 4K block size (this is the application's native blocksize) etc.
even when you disable checksums and compression through the zfs command,
zfs will still compress and
On 03/08/10 12:43, Tomas Ögren wrote:
So we tried adding 2x 4GB USB sticks (Kingston Data
Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the
snapshot times down to about 30 seconds.
Out of curiosity, how much physical memory does this system have?
On 03/03/10 05:19, Matt Keenan wrote:
In a multipool environment, would it make sense to add swap to a pool outside of
the root pool, either as the sole swap dataset to be used or as extra swap?
Yes. I do it routinely, primarily to preserve space on boot disks on
large-memory systems.
swap
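A minimal sketch of adding swap from a non-root pool (names and size are
placeholders):

   # zfs create -V 16g tank/swap
   # swap -a /dev/zvol/dsk/tank/swap

Add a matching line to /etc/vfstab if it should persist across reboots.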
On 03/02/10 12:57, Miles Nordin wrote:
"cc" == chad campbell writes:
cc> I was trying to think of a way to set compression=on
cc> at the beginning of a jumpstart.
are you sure grub/ofwboot/whatever can read compressed files?
Grub and the sparc zfs boot blocks can read lzjb-compr
On 03/02/10 08:13, Fredrich Maney wrote:
Why not do the same sort of thing and use that extra bit to flag a
file, or directory, as being an ACL only file and will negate the rest
of the mask? That accomplishes what Paul is looking for, without
breaking the existing model for those that need/wish
On 03/01/10 13:50, Miles Nordin wrote:
"dd" == David Dyer-Bennet writes:
dd> Okay, but the argument goes the other way just as well -- when
dd> I run "chmod 6400 foobar", I want the permissions set that
dd> specific way, and I don't want some magic background feature
dd>
On 02/28/10 15:58, valrh...@gmail.com wrote:
Also, I don't have the numbers to prove this, but it seems to me
> that the actual size of rpool/ROOT has grown substantially since I
> did a clean install of build 129a (I'm now at build 133). Without
> compression, either, that was around 24 GB, but
On 02/26/10 17:38, Paul B. Henson wrote:
As I wrote in that new sub-thread, I see no option that isn't surprising
in some way. My preference would be for what I labeled as option (b).
And I think you absolutely should be able to configure your fileserver to
implement your preference. Why shoul
On 02/26/10 11:42, Lutz Schumann wrote:
Idea:
- If the guest writes a block with 0's only, the block is freed again
- if someone reads this block again - it will get the same 0's it would get
if the 0's had been written
- The checksum of an "all 0" block can be hard coded for SHA1 / Flec
On 02/26/10 10:45, Paul B. Henson wrote:
I've already posited as to an approach that I think would make a pure-ACL
deployment possible:
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037206.html
Via this concept or something else, there needs to be a way to configure
Z
On 02/12/10 09:36, Felix Buenemann wrote:
given I've got ~300GB L2ARC, I'd
need about 7.2GB RAM, so upgrading to 8GB would be enough to satisfy the
L2ARC.
But that would only leave ~800MB free for everything else the server
needs to do.
- Bill
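That estimate is consistent with the usual back-of-the-envelope math: 300 GB
of L2ARC at an 8 KB average record size is roughly 39 million entries, and at
something like 180-200 bytes of ARC header per entry that works out to on the
order of 7-8 GB of RAM.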
On 02/11/10 10:33, Lori Alt wrote:
This bug is closed as a dup of another bug which is not readable from
the opensolaris site, (I'm not clear what makes some bugs readable and
some not).
the other bug in question was opened yesterday and probably hasn't had
time to propagate.
On 02/06/10 08:38, Frank Middleton wrote:
AFAIK there is no way to get around this. You can set a flag so that pkg
tries to empty /var/pkg/downloads, but even though it looks empty, it
won't actually become empty until you delete the snapshots, and IIRC
you still have to manually delete the conte
On 01/31/10 07:07, Christo Kutrovsky wrote:
I've also experienced similar behavior (short freezes) when running
zfs send|zfs receive with compression on LOCALLY on ZVOLs again.
Has anyone else experienced this? Know of any bug? This is on
snv117.
you might also get better results after the fi
On 01/27/10 21:17, Daniel Carosone wrote:
This is as expected. Not expected is that:
usedbyrefreservation = refreservation
I would expect this to be 0, since all the reserved space has been
allocated.
This would be the case if the volume had no snapshots.
As a result, used is over twice
On 01/24/10 12:20, Lutz Schumann wrote:
One can see that the degraded mirror is excluded from the writes.
I think this is expected behaviour right ?
(data protection over performance)
That's correct. It will use the space if it needs to but it prefers to
avoid "sick" top-level vdevs if there
On Thu, 2010-01-07 at 11:07 -0800, Anil wrote:
> There is talk about using those cheap disks for rpool. Isn't rpool
> also prone to a lot of writes, specifically when the /tmp is in a SSD?
Huh? By default, solaris uses tmpfs for /tmp, /var/run,
and /etc/svc/volatile; writes to those filesystems w
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote:
> After
> running for a while (couple of months) the zpool seems to get
> "fragmented", backups take 72 hours and a scrub takes about 180
> hours.
Are there periodic snapshots being created in this pool?
Can they run with atime turne
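Turning atime off for the datasets involved is a one-liner (dataset name is a
placeholder):

   # zfs set atime=off tank/backups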
On Fri, 2009-12-11 at 13:49 -0500, Miles Nordin wrote:
> > "sh" == Seth Heeren writes:
>
> sh> If you don't want/need log or cache, disable these? You might
> sh> want to run your ZIL (slog) on ramdisk.
>
> seems quite silly. why would you do that instead of just disabling
> the ZIL
Yesterday's integration of
6678033 resilver code should prefetch
as part of changeset 74e8c05021f1 (which should be in build 129 when it
comes out) may improve scrub times, particularly if you have a large
number of small files and a large number of snapshots. I recently
tested an early version
On Wed, 2009-11-11 at 10:29 -0800, Darren J Moffat wrote:
> Joerg Moellenkamp wrote:
> > Hi,
> >
> > Well ... i think Darren should implement this as a part of
> zfs-crypto. Secure Delete on SSD looks like quite challenge, when wear
> leveling and bad block relocation kicks in ;)
>
> No I won't b
On Fri, 2009-09-11 at 13:51 -0400, Will Murnane wrote:
> On Thu, Sep 10, 2009 at 13:06, Will Murnane wrote:
> > On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld wrote:
> >>> Any suggestions?
> >>
> >> Let it run for another day.
> > I'll let it ke
On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote:
> Does the dedupe functionality happen at the file level or a lower block
> level?
it occurs at the block allocation level.
> I am writing a large number of files that have the fol structure :
>
> -- file begins
> 1024 lines of random A
zfs groups writes together into transaction groups; the physical writes
to disk are generally initiated by kernel threads (which appear in
dtrace as threads of the "sched" process). Changing the attribution is
not going to be simple as a single physical write to the pool may
contain data and metad
On Mon, 2009-10-26 at 10:24 -0700, Brian wrote:
> Why does resilvering an entire disk yield different amounts of data that was
> resilvered each time?
> I have read that ZFS only resilvers what it needs to, but in the case of
> replacing an entire disk with another formatted clean disk, you woul
On Fri, 2009-09-25 at 14:39 -0600, Lori Alt wrote:
> The list of datasets in a root pool should look something like this:
...
> rpool/swap
I've had success with putting swap into other pools. I believe others
have, as well.
- Bill
On Wed, 2009-09-16 at 14:19 -0700, Richard Elling wrote:
> Actually, I had a ton of data on resilvering which shows mirrors and
> raidz equivalently bottlenecked on the media write bandwidth. However,
> there are other cases which are IOPS bound (or CR bound :-) which
> cover some of the postings
On Wed, 2009-09-09 at 21:30 +, Will Murnane wrote:
> Some hours later, here I am again:
> scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go
> Any suggestions?
Let it run for another day.
A pool on a build server I manage takes about 75-100 hours to scrub, but
typically starts
On Fri, 2009-08-28 at 23:12 -0700, P. Anil Kumar wrote:
> I would like to know why it's picking up amd64 config params from the
> Makefile, while uname -a clearly shows that its i386 ?
it's behaving as designed.
on solaris, uname -a always shows i386 regardless of whether the system
is in 32-bit
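To see what the kernel is actually running, isainfo is the usual check:

   # isainfo -kv

which reports either a 32-bit i386 or a 64-bit amd64 kernel, independent of
what uname prints.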
On Wed, 2009-07-29 at 06:50 -0700, Glen Gunselman wrote:
> There was a time when manufacturers knew about base-2 but those days
> are long gone.
Oh, they know all about base-2; it's just that disks seem bigger when
you use base-10 units.
Measure a disk's size in 10^(3n)-based KB/MB/GB/TB units,
On Mon, 2009-06-22 at 06:06 -0700, Richard Elling wrote:
> Nevertheless, in my lab testing, I was not able to create a random-enough
> workload to not be write limited on the reconstructing drive. Anecdotal
> evidence shows that some systems are limited by the random reads.
Systems I've run which
On Wed, 2009-06-17 at 12:35 +0200, casper@sun.com wrote:
> I still use "disk swap" because I have some bad experiences
> with ZFS swap. (ZFS appears to cache and that is very wrong)
I'm experimenting with running zfs swap with the primarycache attribute
set to "metadata" instead of the defau
On Wed, 2009-03-04 at 12:49 -0800, Richard Elling wrote:
> But I'm curious as to why you would want to put both the slog and
> L2ARC on the same SSD?
Reducing part count in a small system.
For instance: adding L2ARC+slog to a laptop. I might only have one slot
free to allocate to ssd.
IMHO the
On Thu, 2009-02-12 at 17:35 -0500, Blake wrote:
> That does look like the issue being discussed.
>
> It's a little alarming that the bug was reported against snv54 and is
> still not fixed :(
bugs.opensolaris.org's information about this bug is out of date.
It was fixed in snv_54:
changeset:
On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote:
> I vaguely remember a time when UFS had limits to prevent
> ordinary users from consuming past a certain limit, allowing
> only the super-user to use it. Not that I'm advocating that
> approach for ZFS.
looks to me like zfs already provides a
On Wed, 2008-10-22 at 09:46 -0700, Mika Borner wrote:
> If I turn zfs compression on, does the recordsize influence the
> compressratio in anyway?
zfs conceptually chops the data into recordsize chunks, then compresses
each chunk independently, allocating on disk only the space needed to
store eac
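So, as an illustration (names are placeholders), with

   # zfs set recordsize=128k tank/data
   # zfs set compression=on tank/data

each 128k logical block is compressed on its own, and the achieved ratio can
be read back with zfs get compressratio tank/data.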
On Wed, 2008-10-22 at 10:45 -0600, Neil Perrin wrote:
> Yes: 6280630 zil synchronicity
>
> Though personally I've been unhappy with the exposure that zil_disable has
> got.
> It was originally meant for debug purposes only. So providing an official
> way to make synchronous behaviour asynchronous
On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote:
> I'm assuming this is local filesystem rather than ZFS backed NFS (which
> is what I have).
Correct, on a laptop.
> What has setting the 32KB recordsize done for the rest of your home
> dir, or did you give the evolution directory its ow
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
> I've a report that the mismatch between SQLite3's default block size and
> ZFS' causes some performance problems for Thunderbird users.
I was seeing a severe performance problem with sqlite3 databases as used
by evolution (not thunderbir
On Wed, 2008-10-01 at 11:54 -0600, Robert Thurlow wrote:
> > like they are not good enough though, because unless this broken
> > router that Robert and Darren saw was doing NAT, yeah, it should not
> > have touched the TCP/UDP checksum.
NAT was not involved.
> I believe we proved that the problem
On Fri, 2008-09-05 at 09:41 -0700, Richard Elling wrote:
> > Also does the resilver deliberately pause? Running iostat I see
> that it will pause for five to ten seconds where no IO is done at all,
> then it continues on at a more reasonable pace.
> I have not seen such behaviour during resilver
On Sun, 2008-08-31 at 15:03 -0400, Miles Nordin wrote:
> It's sort of like network QoS, but not quite, because:
>
> (a) you don't know exactly how big the ``pipe'' is, only
> approximately,
In an ip network, end nodes generally know no more than the pipe size of
the first hop -- and in
On Sun, 2008-08-31 at 12:00 -0700, Richard Elling wrote:
> 2. The algorithm *must* be computationally efficient.
>We are looking down the tunnel at I/O systems that can
>deliver on the order of 5 Million iops. We really won't
>have many (any?) spare cycles to play with.
On Thu, 2008-08-28 at 13:05 -0700, Eric Schrock wrote:
> A better option would be to not use this to perform FMA diagnosis, but
> instead work into the mirror child selection code. This has already
> been alluded to before, but it would be cool to keep track of latency
> over time, and use this to
On Thu, 2008-08-21 at 21:15 -0700, mike wrote:
> I've seen 5-6 disk zpools are the most recommended setup.
This is incorrect.
Much larger zpools built out of striped redundant vdevs (mirror, raidz1,
raidz2) are recommended and also work well.
raidz1 or raidz2 vdevs of more than a single-digit nu
On Thu, 2008-08-07 at 11:34 -0700, Richard Elling wrote:
> How would you describe the difference between the data recovery
> utility and ZFS's normal data recovery process?
I'm not Anton but I think I see what he's getting at.
Assume you have disks which once contained a pool but all of the
uberb
See the long thread titled "ZFS deduplication", last active
approximately 2 weeks ago.
On Tue, 2008-08-05 at 12:11 -0700, soren wrote:
> > soren wrote:
> > > ZFS has detected that my root filesystem has a
> > small number of errors. Is there a way to tell which
> > specific files have been corrupted?
> >
> > After a scrub a zpool status -v should give you a
> > list of files with
>
On Sun, 2008-08-03 at 11:42 -0500, Bob Friesenhahn wrote:
> Zfs makes human error really easy. For example
>
>$ zpool destroy mypool
Note that "zpool destroy" can be undone by "zpool import -D" (if you get
to it before the disks are overwritten).
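For reference, recovery looks roughly like (pool name is a placeholder):

   # zpool import -D           # list destroyed but still-importable pools
   # zpool import -D mypool    # bring the destroyed pool back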
On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > errors:
>
> Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> on a system that is running post snv_94 bits: It also found checksum errors
I ran a scrub on a root pool after upgrading to snv_94, and got checksum
errors:
pool: r00t
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are
unaffected.
action: Determine if the device needs to
On Tue, 2008-07-15 at 15:32 -0500, Bob Friesenhahn wrote:
> On Tue, 15 Jul 2008, Ross Smith wrote:
>
> >
> > It sounds like you might be interested to read up on Eric Schrock's work.
> > I read today about some of the stuff he's been doing to bring integrated
> > fault management to Solaris:
>
On Tue, 2008-06-24 at 09:41 -0700, Richard Elling wrote:
> IMHO, you can make dump optional, with no dump being default.
> Before Sommerfeld pounces on me (again :-))
actually, in the case of virtual machines, doing the dump *in* the
virtual machine into preallocated virtual disk blocks is silly.
On Wed, 2008-06-11 at 07:40 -0700, Richard L. Hamilton wrote:
> > I'm not even trying to stripe it across multiple
> > disks, I just want to add another partition (from the
> > same physical disk) to the root pool. Perhaps that
> > is a distinction without a difference, but my goal is
> > to grow
On Thu, 2008-06-05 at 23:04 +0300, Cyril Plisko wrote:
> 1. Are there any reasons to *not* enable compression by default ?
Not exactly an answer:
Most of the systems I'm running today on ZFS root have compression=on
and copies=2 for rpool/ROOT
> 2. How can I do it ? (I think I can run "zfs set
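For what it's worth, that setup is roughly (the properties apply only to
newly written data):

   # zfs set compression=on rpool/ROOT
   # zfs set copies=2 rpool/ROOT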
On Wed, 2008-06-04 at 23:12 +, A Darren Dunham wrote:
> Best story I've heard is that it dates from the time when
> modifiable (or at least *easily* modifiable) slices didn't exist. No
> hopping into 'format' or using 'fmthard'. Instead, your disk came with
> an entry in 'format.dat' w
On Wed, 2008-06-04 at 11:52 -0400, Bill McGonigle wrote:
> but we got one server in
> where 4 of the 8 drives failed in the first two months, at which
> point we called Seagate and they were happy to swap out all 8 drives
> for us. I suspect a bad lot, and even found some other complaints
On Fri, 2008-05-23 at 13:45 -0700, Orvar Korvar wrote:
> Ok, so I make one vdev out of 8 disks. And I combine all vdevs into one large
> zpool? Is that correct?
>
> I have 8 port SATA card. I have 4 drives into one zpool.
zpool create mypool raidz1 disk0 disk1 disk2 disk3
you have a pool consist
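And a second 4-disk raidz1 vdev can later be striped into the same pool (disk
names are placeholders):

   # zpool add mypool raidz1 disk4 disk5 disk6 disk7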
On Fri, 2008-04-18 at 09:26 -0500, Tim wrote:
> Correct me if I'm wrong, but just to clarify a bit for those currently
> thinking "WHAT, NEVER IN MAINLINE!?"
The main line of solaris development *is* SunOS 5.11/solaris
express/nevada/opensolaris/whatever we're calling it this week.
Development
On Fri, 2008-03-14 at 18:11 -0600, Mark Shellenbaum wrote:
> > I think it is a misnomer to call the current
> > implementation of ZFS a "pure ACL" system, as clearly the ACLs are heavily
> > contaminated by legacy mode bits.
>
> Feel free to open an RFE. It may be a tough sell with PSARC, but m
On Wed, 2008-02-27 at 13:43 -0500, Kyle McDonald wrote:
> How was it MVFS could do this without any changes to the shells or any
> other programs?
>
> I ClearCase could 'grep FOO /dir1/dir2/file@@/main/*' to see which
> version of 'file' added FOO.
> (I think @@ was the special hidden key. It