rives to any other box because they are consumer drives and
> > > my
> > > servers all have ultras.
>
> Ian wrote:
>
> > Most modern boards will boot from a live USB
> > stick.
>
> True but I haven't found a way to get an ISO onto a USB that my
art the service (smcwebserver), no use.
Does anyone have experience with this? Is it a bug?
Regards,
Bill
When I run the command, it fails with:
# /usr/lib/zfs/availdevs -d
Segmentation Fault - core dumped.
# /usr/lib/zfs/availdevs -d
Segmentation Fault - core dumped.
# pstack core
core 'core' of 2350: ./availdevs -d
- lwp# 1 / thread# 1
d2d64b3c strlen (0) + c
d2fa2f82 get_device_name (8063400, 0, 804751c, 1c) + 3e
d2fa3015 get_disk (8063400, 0, 804751c
ks can read lzjb-compressed blocks in zfs.
I have compression=on (and copies=2) for both sparc and x86 roots; I'm
told that grub's zfs support also knows how to fall back to ditto blocks
if the first copy fails to be readable or has a bad checksum.
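For reference, the properties in question can be set like this (dataset name is just an example; note that copies= only affects data written after the property is set):
# zfs set compression=on rpool/ROOT
# zfs set copies=2 rpool/ROOT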
f swap on these systems. (When migrating one such system from Nevada to
OpenSolaris recently, I forgot to add swap to /etc/vfstab.)
- Bill
On 03/08/10 12:43, Tomas Ögren wrote:
So we tried adding 2x 4GB USB sticks (Kingston Data
Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the
snapshot times down to about 30 seconds.
Out of curiosity, how much physical memory does this system have?
press and checksum metadata.
the evil tuning guide describes an unstable interface to turn off
metadata compression, but I don't see anything in there for metadata
checksums.
if you have an actual need for an in-memory filesystem, will tmpfs fit
routinely see scrubs
last 75 hours which had claimed to be "100.00% done" for over a day.
- Bill
On 03/19/10 19:07, zfs ml wrote:
What are peoples' experiences with multiple drive failures?
1985-1986. DEC RA81 disks. Bad glue that degraded at the disk's
operating temperature. Head crashes. No more need be said.
for the better).
- Bill
update activity like atime
updates, mtime updates on pseudo-terminals, etc.?
I'd want to start looking more closely at I/O traces (dtrace can be very
helpful here) before blaming any specific system component for the
unexpected I/O.
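For example, a quick aggregation by process and file with the io provider (a generic one-liner, not specific to this problem) will usually point at the culprit:
# dtrace -n 'io:::start { @[execname, args[2]->fi_pathname] = count(); }'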
roughly half the space, 1GB in s3 for slog, and the rest of the space as
L2ARC in s4. That may actually be overly generous for the root pool,
but I run with copies=2 on rpool/ROOT and I tend to keep a bunch of BE's
around.
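Roughly, the slices get attached like this (pool and device names are hypothetical):
# zpool add tank log c0t1d0s3
# zpool add tank cache c0t1d0s4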
s completely -- probably the
biggest single assumption, given that the underlying storage devices
themselves are increasingly using copy-on-write techniques.
The most paranoid will replace all the disks and then physically destroy
the old ones.
- Bill
stem encryption only changes
the size of the problem we need to solve.
- Bill
ata pool(s).
- Bill
If you can, try adding more RAM to the system.
Adding a flash-based ssd as a cache/L2ARC device is also very
effective; random i/o to ssd is much faster than random i/o to spinning
rust.
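Adding one looks roughly like this (pool and device names are hypothetical); zpool iostat -v will then show how busy the cache device is:
# zpool add tank cache c2t0d0
# zpool iostat -v tank 5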
- Bill
k you may need to add
an "autopm enable" if the system isn't recognized as a known desktop.
the disks spin down when the system is idle; there's a delay of a few
seconds when they spin back up.
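For reference, the relevant /etc/power.conf line looks like this (run pmconfig afterwards to have it re-read; this is a sketch only, exact thresholds vary by system):
autopm          enable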
- Bill
her it reduces the risk depends on precisely *what*
caused your system to crash and reboot; if the failure also causes loss
of the write cache contents on both sides of the mirror, mirroring won't
help.
- Bill
he pool.
I think #2 is somewhat more likely.
- Bill
oesn't seem to clear this
up. Or maybe it does, but I'm not understanding the other thing that's
supposed to be cleared up. This worked back on a 20081207 build, so perhaps
something has changed?
I'm adding format's view of the disks and a zdb list below.
Thanks,
get fixed. His use case is very compelling - I
know lots of SOHO folks who could really use a NAS where this 'just worked'.
The ZFS team has done well by thinking liberally about conventional
assumptions.
-Bill
--
Bill McGonigle, Owner
BFC Computing, LLC
http://bfccomputing.com/
at's about double what I usually get out of a cheap 'desktop' SATA
drive with OpenSolaris. Slower than a RAID-Z2 of 10 of them, though.
Still, the power savings could be appreciable.
-Bill
--
Bill McGonigle, Owner
BFC Computing, LLC
http://bfccomputing.com/
Telephone: +1.603
irect blocks for the swap device get cached).
- Bill
oss from ~30 seconds to a
sub-second value.
- Bill
55% 1.25x ONLINE -
# zdb -D z
DDT-sha256-zap-duplicate: 432759 entries, size 304 on disk, 156 in core
DDT-sha256-zap-unique: 1094244 entries, size 298 on disk, 151 in core
dedup = 1.25, compress = 1.44, copies = 1.00, dedup * compress / copies
= 1.80
-
_size=0x1000
* Work around 6965294
set zfs:metaslab_smo_bonus_pct=0xc8
-cut here-
no guarantees, but it's helped a few systems..
- Bill
I've been very happy with the results.
- Bill
ence
between snapshots. Turning off atime updates (if you and your
applications can cope with this) may also help going forward.
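Turning atime off is a one-liner (dataset name is just an example):
# zfs set atime=off tank/home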
- Bill
On Mon, 2009-10-26 at 10:24 -0700, Brian wrote:
> Why does resilvering an entire disk yield different amounts of data that were
> resilvered each time?
> I have read that ZFS only resilvers what it needs to, but in the case of
> replacing an entire disk with another formatted clean disk, you woul
zfs groups writes together into transaction groups; the physical writes
to disk are generally initiated by kernel threads (which appear in
dtrace as threads of the "sched" process). Changing the attribution is
not going to be simple as a single physical write to the pool may
contain data and metad
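A rough way to see the attribution today (a generic io-provider one-liner, nothing zfs-specific) is to aggregate by process name; most pool writes will show up under "sched":
# dtrace -n 'io:::start { @[execname, args[0]->b_flags & B_READ ? "read" : "write"] = count(); }'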
locks might overwhelm the savings from
deduping a small common piece of the file.
- Bill
On Fri, 2009-09-11 at 13:51 -0400, Will Murnane wrote:
> On Thu, Sep 10, 2009 at 13:06, Will Murnane wrote:
> > On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld wrote:
> >>> Any suggestions?
> >>
> >> Let it run for another day.
> > I'll let it ke
ocols to
themselves implement the TRIM command -- freeing the underlying
storage).
- Bill
early version of the fix, and saw one pool go from an elapsed
time of 85 hours to 20 hours; another (with many fewer snapshots) went
from 35 to 17.
- Bill
hows my two USB sticks of the rpool being at c8t0d0 and c11t0d0... !
How is this system even working? What do I need to do to clear this up...?
Thanks for your time,
-Bill
On Fri, 2009-12-11 at 13:49 -0500, Miles Nordin wrote:
> > "sh" == Seth Heeren writes:
>
> sh> If you don't want/need log or cache, disable these? You might
> sh> want to run your ZIL (slog) on ramdisk.
>
> seems quite silly. why would you do that instead of just disabling
> the ZIL
to avoid a directory walk?
Thanks,
bill
This is most likely a naive question on my part. If recordsize is set
to 4k (or a multiple of 4k), will ZFS ever write a record that is less
than 4k or not a multiple of 4k? This includes metadata. Does
compression have any effect on this?
thanks for the help,
bill
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote:
> After
> running for a while (couple of months) the zpool seems to get
> "fragmented", backups take 72 hours and a scrub takes about 180
> hours.
Are there periodic snapshots being created in this pool?
Hi Richard,
How's the ranch? ;-)
This is most likely a naive question on my part. If recordsize is
set to 4k (or a multiple of 4k), will ZFS ever write a record that
is less than 4k or not a multiple of 4k?
Yes. The recordsize is the upper limit for a file record.
This includes metadata
On Dec 15, 2009, at 6:24 PM, Bill Sommerfeld wrote:
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote:
After
running for a while (couple of months) the zpool seems to get
"fragmented", backups take 72 hours and a scrub takes about 180
hours.
Are there periodic snapshots being
Hi Bob,
On Dec 15, 2009, at 6:41 PM, Bob Friesenhahn wrote:
On Tue, 15 Dec 2009, Bill Sprouse wrote:
Hi Everyone,
I hope this is the right forum for this question. A customer is
using a Thumper as an NFS file server to provide the mail store for
multiple email servers (Dovecot). They
Thanks Michael,
Useful stuff to try. I wish we could add more memory, but the x4500
is limited to 16GB. Compression was a question. It's currently off,
but they were thinking of turning it on.
bill
On Dec 15, 2009, at 7:02 PM, Michael Herf wrote:
I have also had slow scrubbing on
couple of alternatives without success.
Do you have a pointer to the "block/parity rewrite" tool mentioned
below?
bill
On Dec 15, 2009, at 9:38 PM, Brent Jones wrote:
On Tue, Dec 15, 2009 at 5:28 PM, Bill Sprouse
wrote:
Hi Everyone,
I hope this is the right forum for this ques
Just checked w/customer and they are using the MailDir functionality
with Dovecot.
On Dec 16, 2009, at 11:28 AM, Toby Thain wrote:
On 16-Dec-09, at 10:47 AM, Bill Sprouse wrote:
Hi Brent,
I'm not sure why Dovecot was chosen. It was most likely a
recommendation by a fellow Unive
Thanks for this thread! I was just coming here to discuss this very same
problem. I'm running 2009.06 on a Q6600 with 8GB of RAM. I have a Windows
system writing multiple OTA HD video streams via CIFS to the 2009.06 system
running Samba.
I then have multiple clients reading back other HD vid
filesystems won't hit the SSD
unless the system is short on physical memory.
- Bill
-level vdevs if there are healthy ones available.
- Bill
s were actually used.
If you want to allow for overcommit, you need to delete the refreservation.
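That is, something along the lines of (volume name hypothetical):
# zfs set refreservation=none tank/vol1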
- Bill
ratio.
- Bill
On 02/06/10 08:38, Frank Middleton wrote:
AFAIK there is no way to get around this. You can set a flag so that pkg
tries to empty /var/pkg/downloads, but even though it looks empty, it
won't actually become empty until you delete the snapshots, and IIRC
you still have to manually delete the conte
propagate.
- Bill
On 02/12/10 09:36, Felix Buenemann wrote:
given I've got ~300GB L2ARC, I'd
need about 7.2GB RAM, so upgrading to 8GB would be enough to satisfy the
L2ARC.
But that would only leave ~800MB free for everything else the server
needs to do.
filesystem tunables for ZFS which allow the
system to escape the confines of POSIX (noatime, for one); I don't see
why a "chmod doesn't truncate acls" option couldn't join it so long as
it was off by default and left off while conformance tests were run.
?
It's in there. Turn on compression to use it.
- Bill
personal experience with both systems, AFS had it more or less right and
POSIX got it more or less wrong -- once you step into the world of acls,
the file mode should be mostly ignored, and an accidental chmod should
*not* destroy carefully crafted acls.
ive upgrade BE
with nevada build 130, and a beadm BE with opensolaris build 130, which
is mostly the same)
- Bill
" rights on the
filesystem the file is in, they'll be able to read every bit of the file.
- Bill
to continue to
use it?
While we're designing on the fly: Another possibility would be to use an
additional umask bit or two to influence the mode-bit/ACL interaction.
- Bill
)] = count(); }'
(let it run for a bit then interrupt it).
should show who's calling list_next() so much.
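The full one-liner is presumably something like this (reconstructed from the fragment above, so double-check it before relying on it):
# dtrace -n 'fbt::list_next:entry { @[stack()] = count(); }'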
- Bill
On 05/28/12 17:13, Daniel Carosone wrote:
There are two problems using ZFS on drives with 4k sectors:
1) if the drive lies and presents 512-byte sectors, and you don't
manually force ashift=12, then the emulation can be slow (and
possibly error prone). There is essentially an interna
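To see what a pool actually got, zdb's cached-config dump shows the ashift per top-level vdev (pool name is just an example); a 4 KB-aligned vdev should report ashift: 12:
# zdb -C tank | grep ashift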
age appliance, it may be useful as you're thinking about
how to proceed:
https://blogs.oracle.com/wdp/entry/comstar_iscsi
- Bill
--
Bill Pijewski, Joyent http://dtrace.org/blogs/wdp/
On 07/11/12 02:10, Sašo Kiselkov wrote:
> Oh jeez, I can't remember how many times this flame war has been going
> on on this list. Here's the gist: SHA-256 (or any good hash) produces a
> near uniform random distribution of output. Thus, the chances of getting
> a random hash collision are around
On 07/19/12 18:24, Traffanstead, Mike wrote:
iozone doesn't vary the blocksize during the test; it's a very
artificial test, but it's useful for gauging performance under
different scenarios.
So for this test all of the writes would have been 64k blocks, 128k,
etc. for that particular step.
Just
On 09/14/12 22:39, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave Pooser
Unfortunately I did not realize that zvols require disk space sufficient
to duplicate the zvol, and
On 08/21/10 10:14, Ross Walker wrote:
I am trying to figure out the best way to provide both performance and
resiliency given the Equallogic provides the redundancy.
(I have no specific experience with Equallogic; the following is just
generic advice)
Every bit stored in zfs is checksummed
hots
only copy the blocks that change, and receiving an incremental send does
the same).
And if the destination pool is short on space you may end up more
fragmented than the source.
- Bill
So when I built my new workstation last year, I partitioned the one and only
disk in half, 50% for Windows, 50% for 2009.06. Now, I'm not using Windows,
so I'd like to use the other half for another ZFS pool, but I can't figure out
how to access it.
I have used fdisk to create a second Solari
Knowing Darren,
it's very likely that he got it right, but in crypto, all the details
matter and if a spec detailed enough to allow for interoperability isn't
available, it's safest to assume that some of the details are wrong.
> > got it attached to a UPS with very conservative
> shut-down timing. Or
> > are there other host failures aside from power a
> ZIL would be
> > vulnerable too (system hard-locks?)?
>
> Correct, a system hard-lock is another example...
How about comparing a non-battery backed ZIL to running a Z
60GB SSD drives using the SF 1222 controller can be had now for around $100.
I know ZFS likes to use the entire disk to do its magic, but under X86, is
the entire disk the entire disk, or is it one physical X86 partition?
In the past I have created 2 partitions with FDISK, but format will only
Understood Edward, and if this was a production data center, I wouldn't be
doing it this way. This is for my home lab, so spending hundreds of dollars on
SSD devices isn't practical.
Can several datasets share a single ZIL and a single L2ARC, or must each
dataset have its own?
yourself
with the format command and ZFS won't disable it.
- Bill
ed them to be durable,
why does it matter that it may buffer data while it is doing so?
- Bill
On 02/07/11 12:49, Yi Zhang wrote:
If buffering is on, the running time of my app doesn't reflect the
actual I/O cost. My goal is to accurately measure the time of I/O.
With buffering on, ZFS would batch up a bunch of writes and change
both the original I/O activity and the time.
if batching ma
storage to something
faster/better/..., then after the mirror completes zpool detach to free
up the removable storage.
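In outline (pool and device names hypothetical):
# zpool attach tank c1t0d0 c2t0d0
  ... wait for the resilver to complete (watch zpool status tank) ...
# zpool detach tank c1t0d0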
- Bill
In the last few days my performance has gone to hell. I'm running:
# uname -a
SunOS nissan 5.11 snv_150 i86pc i386 i86pc
(I'll upgrade as soon as the desktop hang bug is fixed.)
The performance problems seem to be due to excessive I/O on the main
disk/pool.
The only things I've changed recent
One of my old pools was version 10, another was version 13.
I guess that explains the problem.
Seems like time-sliderd should refuse to run on pools that
aren't of a sufficient version.
Cindy Swearingen wrote on 02/18/11 12:07 PM:
Hi Bill,
I think the root cause of this problem is that
erent physical controllers).
see stmsboot(1m) for information on how to turn that off if you don't
need multipathing and don't like the longer device names.
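The short version (it takes effect after a reboot):
# stmsboot -d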
- Bill
realized what was happening and was able to kill the process.
Bill Rushmore
I'm migrating some filesystems from UFS to ZFS and I'm not sure how to create a
couple of them.
I want to migrate /, /var, /opt, /export/home and also want swap and /tmp. I
don't care about any of the others.
The first disk, and the one with the UFS filesystems, is c0t0d0 and the 2nd
disk is
it
is a noticeable improvement from a 2-disk mirror.
I used an 80G intel X25-M, with 1G for zil, with the rest split roughly
50:50 between root pool and l2arc for the data pool.
- Bill
On 06/06/11 08:07, Cyril Plisko wrote:
zpool reports space usage on disks, without taking into account RAIDZ overhead.
zfs reports net capacity available, after RAIDZ overhead is accounted for.
Yup. Going back to the original numbers:
nebol@filez:/$ zfs list tank2
NAME   USED  AVAIL  REFER  MOU
On 06/08/11 01:05, Tomas Ögren wrote:
And if pool usage is >90%, then there's another problem (the algorithm
for finding free space changes).
Another (less satisfying) workaround is to increase the amount of free
space in the pool, either by reducing usage or adding more storage.
Observed behavior i
allocated data (and in
the case of raidz, know precisely how it's spread and encoded across the
members of the vdev). And it's reading all the data blocks needed to
reconstruct the disk to be replaced.
- Bill
e wrong.
if you're using dedup, you need a large read cache even if you're only
doing application-layer writes, because you need fast random read access
to the dedup tables while you write.
- Bill
On 06/27/11 15:24, David Magda wrote:
> Given the amount of transistors that are available nowadays I think
> it'd be simpler to just create a series of SIMD instructions right
> in/on general CPUs, and skip the whole co-processor angle.
see: http://en.wikipedia.org/wiki/AES_instruction_set
Prese
's metadata will diverge between the
writeable filesystem and its last snapshot.
- Bill
ever
increasing error count
and maybe:
6437568 ditto block repair is incorrectly propagated to root vdev
Any way to dig further to determine what's going on?
- Bill
ug as soon as I can (I'm travelling at the moment with
spotty connectivity), citing my and your reports.
- Bill
On Sun, 2008-08-03 at 11:42 -0500, Bob Friesenhahn wrote:
> Zfs makes human error really easy. For example
>
>$ zpool destroy mypool
Note that "zpool destroy" can be undone by "zpool import -D" (if you get
to it before the disks are overwritten).
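With no arguments, "zpool import -D" lists destroyed pools that are still recoverable; naming the pool imports it again:
# zpool import -D
# zpool import -D mypool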
ed ZIL
This bug is fixed in build 95. One workaround is to mount the
filesystems and then unmount them to apply the intent log changes.
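That is, per filesystem (names hypothetical):
# zfs mount tank/fs
# zfs unmount tank/fs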
- Bill
See the long thread titled "ZFS deduplication", last active
approximately 2 weeks ago.
ether subtrees of the filesystem and recover as
much as you can even if many upper nodes in the block tree have had
holes shot in them by a miscreant device.
- Bill
On Thu, 2008-08-21 at 21:15 -0700, mike wrote:
> I've seen 5-6 disk zpools are the most recommended setup.
This is incorrect.
Much larger zpools built out of striped redundant vdevs (mirror, raidz1,
raidz2) are recommended and also work well.
raidz1 or raidz2 vdevs of more than a single-digit nu
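For example, a 12-disk pool striped across two 6-disk raidz2 vdevs (device names hypothetical):
# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0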
lares a response overdue after (SRTT + K * variance).
I think you'd probably do well to start with something similar to what's
described in http://www.ietf.org/rfc/rfc2988.txt and then tweak based on
experience.
- Bill
no substitute for just retrying for a long time.
- Bill
On Sun, 2008-08-31 at 15:03 -0400, Miles Nordin wrote:
> It's sort of like network QoS, but not quite, because:
>
> (a) you don't know exactly how big the ``pipe'' is, only
> approximately,
In an ip network, end nodes generally know no more than the pipe size of
the first hop -- and in